Posted to commits@ambari.apache.org by av...@apache.org on 2018/04/01 19:13:56 UTC

[ambari] branch trunk updated (bf7ce08 -> ba21ebe)

This is an automated email from the ASF dual-hosted git repository.

avijayan pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/ambari.git.


    from bf7ce08  [AMBARI-23332] Recommend SSO configuration values for ATLAS and RANGER in the stack advisor
     new e60b106  AMBARI-21079. Add ability to sink Raw metrics to external system via Http. (swagle)
     new e10b5da  AMBARI-21106 : Ambari Metrics Anomaly detection prototype.(avijayan)
     new 485cdf4  AMBARI-21106 : Ambari Metrics Anomaly detection prototype (Commit 2). (avijayan)
     new b1ec22e  AMBARI-21106 : Ambari Metrics Anomaly detection prototype (Commit 3). (avijayan)
     new 58e8d4b  AMBARI-21079. Add ability to sink Raw metrics to external system via Http. Renamed files to fix build. (swagle)
     new d81c072  Fixing rat check failures and compilation issues. (avijayan)
     new 13dee7c  AMBARI-21079. Add ability to sink Raw metrics to external system via Http. Compilation error fix. (swagle)
     new 46c0af8  AMBARI-21214 : Use a uuid vs long row key for metrics in AMS schema. (avijayan)
     new 1e43864  AMBARI-21244 Add https support to local metrics aggregator application (dsen)
     new 935f332  AMBARI-17382 : Migrate AMS queries to use ROW_TIMESTAMP instead of native timerange hint. (avijayan)
     new 4efcf9d  AMBARI-21279 Handle scenario when host in-memory aggregation is not working (dsen)
     new 54e4a97  AMBARI-21458 Provide ability to shard Cluster second aggregation across appId. (dsen)
     new e466cc4  AMBARI-21686 : Implement a test driver that provides a set of metric series with different kinds of metric behavior. (avijayan)
     new 2fdf774  AMBARI-21106 : ML-Prototype: Detect timeseries anomaly for a metric. (Refine PIT & Trend subsystems, Integrate with AMS, Ambari Alerts.)
     new 73aee5c  Fixed compile errors from Merge trunk into branch-3.0-ams
     new 6830f00  Fixed compile errors from Merge trunk into branch-3.0-ams
     new 2eb160c  Fixed rat errors from Merge trunk into branch-3.0-ams
     new d991f3c  AMBARI-22077 : Create maven module and package structure for the anomaly detection engine. (avijayan)
     new fb75e61  AMBARI-22077 : Create maven module and package structure for the anomaly detection engine. (Commit 2) (avijayan)
     new 4d62937  AMBARI-22163 : Anomaly Storage: Design Metric anomalies schema. (avijayan)
     new 166fff6  AMBARI-22215 Refine cluster second aggregator by aligning sink publish times to 1 minute boundaries. (dsen)
     new 0c0c627  AMBARI-22192. Setup an application server for hosting the AD System Manager.
     new 6ffb35b  AMBARI-22192. Setup an application server for hosting the AD System Manager. (avijayan)
     new 64dcc02  AMBARI-22343. Add ability in AMS to tee metrics to a set of configured Kafka brokers. (swagle)
     new d580f27  AMBARI-22348 : Metric Definition Service V1 Implementation. (avijayan)
     new afa154b  AMBARI-22359 : Fix Serialization issues in Metric Definition Service (avijayan).
     new cc510d6  AMBARI-22365. Add storage support for storing metric definitions using LevelDB. (swagle)
     new 940b23a  AMBARI-22437 : Create an 'AD Manager' component in Ambari Metrics Service stack side. (avijayan)
     new 8357de8  AMBARI-22470 : Refine Metric Definition Service and AD Query service. (avijayan)
     new d46abc7  AMBARI-22567 : Integrate Spark lifecycle management into AMS AD Manager. (avijayan)
     new 6177221  AMBARI-22688. Fix AMS compilation issues and unit test with hbase,hadoop and phoenix upgraded. (swagle)
     new 541e2a5  AMBARI-22717 : Remove Anomaly Detection code from branch-3.0-ams. (avijayan)
     new 84f939d  AMBARI-22744. Fix issues with webapp deployment with new Hadoop common changes. (swagle)
     new a0b8d06  AMBARI-22744. Fix issues with webapp deployment with new Hadoop common changes. Addendum. (swagle)
     new 003c522  Fix AMS phoenix, hbase and hadoop versions in pom.xml
     new ca62ed0  AMBARI-23100 : Merge branch-3.0-ams onto trunk. (Remove logsearch-it pom change)
     new 01243e2  AMBARI-23225: Ambari Server login fails due to TimelineMetricsCacheSizeOfEngine error in AMS perf branch.
     new c5c9d03  AMBARI-23250 : Fix deployment issues in AMS perf branch.
     new ba21ebe  AMBARI-23250 : Fix deployment issues in AMS perf branch. (hbase-site change).

The 39 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.
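
For reference, the same range can be inspected locally with stock git
commands (a sketch, assuming a clone of the repository above):

    # List the 39 revisions introduced by this update
    git log --oneline bf7ce08..ba21ebe

    # Reproduce the combined diffstat shown below for the same range
    git diff --stat bf7ce08 ba21ebe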


Summary of changes:
 .../logfeeder/metrics/LogFeederAMSClient.java      |    5 +
 ambari-metrics/ambari-metrics-assembly/pom.xml     |    2 +-
 ambari-metrics/ambari-metrics-common/pom.xml       |   42 +-
 .../sink/timeline/AbstractTimelineMetricsSink.java |  108 +-
 .../sink/timeline/SingleValuedTimelineMetric.java  |   22 +-
 .../metrics2/sink/timeline/TimelineMetric.java     |   34 +-
 .../sink/timeline/TimelineMetricMetadata.java      |   37 +-
 .../metrics2/sink/timeline/TimelineMetrics.java    |   11 +-
 .../availability/MetricCollectorHAHelper.java      |    8 +-
 .../cache/TimelineMetricsEhCacheSizeOfEngine.java  |  119 +-
 .../timeline/AbstractTimelineMetricSinkTest.java   |  240 +++
 .../AbstractTimelineMetricSinkTest.java            |  108 --
 .../availability/MetricCollectorHATest.java        |    5 +
 .../timeline/cache/HandleConnectExceptionTest.java |   10 +
 .../src/main/conf/flume-metrics2.properties.j2     |    8 +-
 .../sink/flume/FlumeTimelineMetricsSink.java       |   13 +-
 .../sink/timeline/HadoopTimelineMetricsSink.java   |   15 +-
 .../timeline/HadoopTimelineMetricsSinkTest.java    |    8 +-
 .../ambari-metrics-host-aggregator/pom.xml         |   10 +
 .../host/aggregator/AggregatorApplication.java     |   51 +-
 .../sink/timeline/AbstractMetricPublisher.java     |   12 +-
 .../sink/timeline/AggregatedMetricsPublisher.java  |    5 +
 .../sink/timeline/RawMetricsPublisher.java         |    5 +
 .../timeline/AggregatedMetricsPublisherTest.java   |    6 +-
 .../sink/timeline/RawMetricsPublisherTest.java     |    2 +-
 .../src/main/python/core/aggregator.py             |    6 +-
 .../src/main/python/core/application_metric_map.py |   52 +-
 .../src/main/python/core/config_reader.py          |    9 +-
 .../src/main/python/core/emitter.py                |   42 +-
 .../src/main/python/core/host_info.py              |    1 -
 .../src/main/python/core/stop_handler.py           |    4 +-
 .../test/python/core/TestApplicationMetricMap.py   |   38 +-
 .../sink/kafka/KafkaTimelineMetricsReporter.java   |   10 +-
 .../sink/storm/StormTimelineMetricsReporter.java   |   16 +-
 .../sink/storm/StormTimelineMetricsSink.java       |   13 +-
 .../sink/storm/StormTimelineMetricsReporter.java   |   13 +-
 .../sink/storm/StormTimelineMetricsSink.java       |   13 +-
 .../conf/unix/ambari-metrics-collector             |    6 +-
 .../ambari-metrics-timelineservice/pom.xml         |  209 ++-
 .../AMSApplicationServer.java                      |  143 ++
 .../ApplicationHistoryClientService.java           |  215 ---
 .../ApplicationHistoryManager.java                 |  146 --
 .../ApplicationHistoryManagerImpl.java             |  250 ---
 .../ApplicationHistoryReader.java                  |  117 --
 .../ApplicationHistoryServer.java                  |  203 ---
 .../ApplicationHistoryStore.java                   |   37 -
 .../ApplicationHistoryWriter.java                  |  112 --
 .../FileSystemApplicationHistoryStore.java         |  784 ----------
 .../MemoryApplicationHistoryStore.java             |  274 ----
 .../NullApplicationHistoryStore.java               |  127 --
 .../metrics/loadsimulator/LoadRunner.java          |   40 +-
 .../loadsimulator/MetricsLoadSimulator.java        |    6 +-
 .../metrics/loadsimulator/MetricsSenderWorker.java |   21 +-
 .../loadsimulator/data/HostMetricsGenerator.java   |    8 +-
 .../data/MetricsGeneratorConfigurer.java           |   12 +-
 .../loadsimulator/net/RestMetricsSender.java       |    9 +-
 .../metrics/loadsimulator/util/Json.java           |    4 +-
 ...Store.java => HBaseTimelineMetricsService.java} |  182 ++-
 .../metrics/timeline/PhoenixHBaseAccessor.java     |  555 ++++---
 .../timeline/TimelineMetricConfiguration.java      |  289 +++-
 .../timeline/TimelineMetricDistributedCache.java}  |   45 +-
 .../metrics/timeline/TimelineMetricStore.java      |   24 +-
 .../timeline/TimelineMetricStoreWatcher.java       |   15 +-
 .../metrics/timeline/TimelineMetricsFilter.java    |    7 -
 .../timeline/TimelineMetricsIgniteCache.java       |  296 ++++
 .../aggregators/AbstractTimelineAggregator.java    |   89 +-
 .../timeline/aggregators/AggregatorUtils.java      |  192 +++
 .../timeline/aggregators/DownSamplerUtils.java     |   10 +-
 .../aggregators/TimelineClusterMetric.java         |    6 +-
 .../TimelineMetricAggregatorFactory.java           |   98 +-
 .../aggregators/TimelineMetricAppAggregator.java   |   57 +-
 .../TimelineMetricClusterAggregator.java           |   35 +-
 .../TimelineMetricClusterAggregatorSecond.java     |  252 +--
 ...tricClusterAggregatorSecondWithCacheSource.java |  104 ++
 .../TimelineMetricFilteringHostAggregator.java     |   94 ++
 .../aggregators/TimelineMetricHostAggregator.java  |   33 +-
 .../aggregators/TimelineMetricReadHelper.java      |   75 +-
 .../timeline/aggregators/TopNDownSampler.java      |   14 +-
 .../v2/TimelineMetricClusterAggregator.java        |   18 +-
 .../v2/TimelineMetricFilteringHostAggregator.java  |  119 ++
 .../v2/TimelineMetricHostAggregator.java           |   16 +-
 .../availability/AggregationTaskRunner.java        |   24 +-
 .../timeline/availability/CheckpointManager.java   |    4 +-
 .../availability/MetricCollectorHAController.java  |   45 +-
 .../OnlineOfflineStateModelFactory.java            |    4 +-
 .../discovery/TimelineMetricHostMetadata.java      |   60 +
 .../discovery/TimelineMetricMetadataKey.java       |   26 +-
 .../discovery/TimelineMetricMetadataManager.java   |  396 ++++-
 .../discovery/TimelineMetricMetadataSync.java      |   25 +-
 ...ractTimelineMetricsSeriesAggregateFunction.java |    9 +-
 .../metrics/timeline/query/Condition.java          |    7 +-
 .../metrics/timeline/query/ConditionBuilder.java   |   16 +-
 .../metrics/timeline/query/DefaultCondition.java   |  263 ++--
 .../timeline/query/DefaultPhoenixDataSource.java   |   10 +-
 .../metrics/timeline/query/EmptyCondition.java     |   20 +-
 .../timeline/query/PhoenixConnectionProvider.java  |    6 +-
 .../metrics/timeline/query/PhoenixTransactSQL.java |  324 ++--
 .../query/SplitByMetricNamesCondition.java         |   54 +-
 .../metrics/timeline/query/TopNCondition.java      |   66 +-
 .../timeline/sink/DefaultFSSinkProvider.java       |  153 ++
 .../metrics/timeline/sink/ExternalMetricsSink.java |   48 +
 .../timeline/sink/ExternalSinkProvider.java}       |   46 +-
 .../metrics/timeline/sink/HttpSinkProvider.java    |  231 +++
 .../metrics/timeline/sink/KafkaSinkProvider.java   |  118 ++
 .../DefaultInternalMetricsSourceProvider.java      |   42 +
 .../InternalMetricsSource.java}                    |   19 +-
 .../InternalSourceProvider.java}                   |   30 +-
 .../metrics/timeline/source/RawMetricsSource.java  |   85 ++
 .../source/cache/InternalMetricCacheKey.java       |  109 ++
 .../cache/InternalMetricCacheValue.java}           |   36 +-
 .../source/cache/InternalMetricsCache.java         |  229 +++
 .../source/cache/InternalMetricsCacheProvider.java |   48 +
 .../cache/InternalMetricsCacheSizeOfEngine.java    |   52 +
 .../timeline/uuid/HashBasedUuidGenStrategy.java    |  206 +++
 .../timeline/uuid/MetricUuidGenStrategy.java}      |   55 +-
 .../timeline/uuid/RandomUuidGenStrategy.java       |   53 +
 .../timeline/EntityIdentifier.java                 |  100 --
 .../timeline/LeveldbTimelineStore.java             | 1473 ------------------
 .../timeline/MemoryTimelineStore.java              |  360 -----
 .../timeline/TimelineWriter.java                   |    4 +-
 .../webapp/AHSLogsPage.java                        |   55 -
 .../applicationhistoryservice/webapp/AHSView.java  |   90 --
 .../webapp/AHSWebApp.java                          |   66 -
 .../webapp/AHSWebServices.java                     |  162 --
 .../AMSController.java}                            |   23 +-
 .../webapp/{ContainerPage.java => AMSWebApp.java}  |   29 +-
 .../webapp/AppAttemptPage.java                     |   69 -
 .../applicationhistoryservice/webapp/AppPage.java  |   71 -
 .../applicationhistoryservice/webapp/NavBlock.java |   51 -
 .../webapp/TimelineWebServices.java                |  277 +---
 .../main/resources/metrics_def/AMBARI_SERVER.dat   |   40 +
 .../src/main/resources/metrics_def/HOST.dat        |    6 +
 .../resources/metrics_def/JOBHISTORYSERVER.dat     |   58 +
 .../main/resources/metrics_def/MASTER_HBASE.dat    |  230 ++-
 .../src/main/resources/metrics_def/SLAVE_HBASE.dat |  700 +++++++--
 .../ApplicationHistoryStoreTestUtils.java          |   84 -
 .../TestApplicationHistoryClientService.java       |  209 ---
 .../TestApplicationHistoryManagerImpl.java         |   76 -
 .../TestApplicationHistoryServer.java              |  266 ----
 .../TestFileSystemApplicationHistoryStore.java     |  233 ---
 .../TestMemoryApplicationHistoryStore.java         |  206 ---
 .../timeline/AbstractMiniHBaseClusterTest.java     |   64 +-
 ...t.java => HBaseTimelineMetricsServiceTest.java} |    8 +-
 .../metrics/timeline/ITPhoenixHBaseAccessor.java   |  165 +-
 .../metrics/timeline/MetricTestHelper.java         |    4 +-
 .../metrics/timeline/PhoenixHBaseAccessorTest.java |  181 +--
 .../metrics/timeline/TestPhoenixTransactSQL.java   |  105 +-
 .../metrics/timeline/TestTimelineMetricStore.java  |   12 +-
 .../timeline/TimelineMetricStoreWatcherTest.java   |    4 +-
 .../TimelineMetricsAggregatorMemorySink.java       |    4 +-
 .../timeline/TimelineMetricsIgniteCacheTest.java   |  240 +++
 .../AbstractTimelineAggregatorTest.java            |   12 +-
 .../timeline/aggregators/DownSamplerTest.java      |    2 +
 .../timeline/aggregators/ITClusterAggregator.java  |   86 +-
 .../timeline/aggregators/ITMetricAggregator.java   |   22 +-
 .../TimelineMetricClusterAggregatorSecondTest.java |   93 +-
 ...ClusterAggregatorSecondWithCacheSourceTest.java |  115 ++
 .../MetricCollectorHAControllerTest.java           |    1 +
 .../timeline/discovery/TestMetadataManager.java    |  172 ++-
 .../timeline/discovery/TestMetadataSync.java       |   36 +-
 .../timeline/query/DefaultConditionTest.java       |  194 ++-
 .../timeline/source/RawMetricsSourceTest.java      |  141 ++
 .../uuid/TimelineMetricUuidManagerTest.java        |  184 +++
 .../timeline/TestLeveldbTimelineStore.java         |  253 ---
 .../timeline/TestMemoryTimelineStore.java          |   83 -
 .../timeline/TimelineStoreTestUtils.java           |  789 ----------
 .../webapp/TestAHSWebApp.java                      |  199 ---
 .../webapp/TestAHSWebServices.java                 |  302 ----
 .../webapp/TestTimelineWebServices.java            |  303 +---
 .../test/resources/test_data/full_whitelist.dat    | 1615 ++++++++++++++++++++
 ambari-metrics/pom.xml                             |   29 +-
 .../metrics/timeline/MetricsRequestHelper.java     |    6 +-
 .../cache/TimelineMetricsCacheSizeOfEngine.java    |    3 +-
 .../metrics/system/impl/AmbariMetricSinkImpl.java  |    5 +
 .../ACCUMULO/1.6.1.2.2.0/package/scripts/params.py |    6 +
 .../hadoop-metrics2-accumulo.properties.j2         |    3 +
 .../0.1.0/configuration/ams-hbase-site.xml         |   16 +
 .../0.1.0/configuration/ams-site.xml               |    5 +-
 .../AMBARI_METRICS/0.1.0/metainfo.xml              |   13 +-
 .../AMBARI_METRICS/0.1.0/package/scripts/ams.py    |   42 +-
 .../AMBARI_METRICS/0.1.0/package/scripts/params.py |   10 +-
 .../templates/hadoop-metrics2-hbase.properties.j2  |  100 +-
 .../0.1.0/package/templates/metric_monitor.ini.j2  |    3 +
 .../package/templates/smoketest_metrics.json.j2    |    1 -
 .../AMBARI_METRICS/0.1.0/service_advisor.py        |    1 +
 .../FLUME/1.4.0.2.0/package/scripts/params.py      |    6 +
 .../package/templates/flume-metrics2.properties.j2 |    3 +
 .../0.96.0.2.0/package/scripts/params_linux.py     |    6 +
 ...oop-metrics2-hbase.properties-GANGLIA-MASTER.j2 |    3 +
 .../hadoop-metrics2-hbase.properties-GANGLIA-RS.j2 |    3 +
 .../package/alerts/alert_metrics_deviation.py      |    4 +-
 .../0.12.0.2.0/package/scripts/params_linux.py     |    6 +
 .../hadoop-metrics2-hivemetastore.properties.j2    |    4 +-
 .../hadoop-metrics2-hiveserver2.properties.j2      |    3 +
 .../templates/hadoop-metrics2-llapdaemon.j2        |    3 +
 .../templates/hadoop-metrics2-llaptaskscheduler.j2 |    3 +
 .../KAFKA/0.8.1/configuration/kafka-broker.xml     |    5 +
 .../KAFKA/0.8.1/package/scripts/params.py          |    6 +
 .../STORM/0.9.1/package/scripts/params_linux.py    |    6 +
 .../STORM/0.9.1/package/templates/config.yaml.j2   |    7 +-
 .../package/templates/storm-metrics2.properties.j2 |    3 +
 .../stack-hooks/before-START/scripts/params.py     |    6 +
 .../templates/hadoop-metrics2.properties.j2        |    3 +
 .../STORM/package/templates/config.yaml.j2         |   48 +-
 .../configuration/hadoop-metrics2.properties.xml   |    3 +
 .../configuration/hadoop-metrics2.properties.xml   |    5 +
 .../metrics/timeline/MetricsPaddingMethodTest.java |   14 +-
 .../cache/TimelineMetricCacheSizingTest.java       |    1 -
 .../system/impl/TestAmbariMetricsSinkImpl.java     |    5 +
 .../2.0.6/AMBARI_METRICS/test_metrics_monitor.py   |  142 ++
 .../stacks/2.0.6/configs/default_ams_embedded.json |    1 +
 ambari-utility/pom.xml                             |    4 +-
 .../HDF/2.0/hooks/before-START/scripts/params.py   |    6 +
 .../templates/hadoop-metrics2.properties.j2        |   10 +-
 .../stacks/HDF/2.0/services/stack_advisor.py       |    1 +
 .../ODPi/2.0/hooks/before-START/scripts/params.py  |   19 +
 .../services/HIVE/package/scripts/params_linux.py  |    9 +
 .../hadoop-metrics2-hivemetastore.properties.j2    |   10 +-
 .../hadoop-metrics2-hiveserver2.properties.j2      |   10 +-
 .../templates/hadoop-metrics2-llapdaemon.j2        |   11 +-
 .../templates/hadoop-metrics2-llaptaskscheduler.j2 |    9 +-
 pom.xml                                            |   14 +-
 222 files changed, 9625 insertions(+), 10717 deletions(-)
 copy ambari-server/src/main/java/org/apache/ambari/server/controller/metrics/timeline/cache/TimelineMetricsCacheSizeOfEngine.java => ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/cache/TimelineMetricsEhCacheSizeOfEngine.java (55%)
 create mode 100644 ambari-metrics/ambari-metrics-common/src/test/java/org/apache/hadoop/metrics2/sink/timeline/AbstractTimelineMetricSinkTest.java
 delete mode 100644 ambari-metrics/ambari-metrics-common/src/test/java/org/apache/hadoop/metrics2/sink/timeline/availability/AbstractTimelineMetricSinkTest.java
 create mode 100644 ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/AMSApplicationServer.java
 delete mode 100644 ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryClientService.java
 delete mode 100644 ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryManager.java
 delete mode 100644 ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryManagerImpl.java
 delete mode 100644 ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryReader.java
 delete mode 100644 ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryServer.java
 delete mode 100644 ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryStore.java
 delete mode 100644 ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryWriter.java
 delete mode 100644 ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/FileSystemApplicationHistoryStore.java
 delete mode 100644 ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/MemoryApplicationHistoryStore.java
 delete mode 100644 ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/NullApplicationHistoryStore.java
 rename ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/{HBaseTimelineMetricStore.java => HBaseTimelineMetricsService.java} (79%)
 copy ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/{webapp/AHSController.java => metrics/timeline/TimelineMetricDistributedCache.java} (53%)
 create mode 100644 ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricsIgniteCache.java
 create mode 100644 ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricClusterAggregatorSecondWithCacheSource.java
 create mode 100644 ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricFilteringHostAggregator.java
 create mode 100644 ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/v2/TimelineMetricFilteringHostAggregator.java
 create mode 100644 ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TimelineMetricHostMetadata.java
 create mode 100644 ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/sink/DefaultFSSinkProvider.java
 create mode 100644 ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/sink/ExternalMetricsSink.java
 copy ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/{webapp/AHSController.java => metrics/timeline/sink/ExternalSinkProvider.java} (57%)
 create mode 100644 ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/sink/HttpSinkProvider.java
 create mode 100644 ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/sink/KafkaSinkProvider.java
 create mode 100644 ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/source/DefaultInternalMetricsSourceProvider.java
 copy ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/{query/PhoenixConnectionProvider.java => source/InternalMetricsSource.java} (71%)
 copy ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/{query/PhoenixConnectionProvider.java => source/InternalSourceProvider.java} (57%)
 create mode 100644 ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/source/RawMetricsSource.java
 create mode 100644 ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/source/cache/InternalMetricCacheKey.java
 copy ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/{query/PhoenixConnectionProvider.java => source/cache/InternalMetricCacheValue.java} (62%)
 create mode 100644 ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/source/cache/InternalMetricsCache.java
 create mode 100644 ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/source/cache/InternalMetricsCacheProvider.java
 create mode 100644 ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/source/cache/InternalMetricsCacheSizeOfEngine.java
 create mode 100644 ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/uuid/HashBasedUuidGenStrategy.java
 rename ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/{webapp/AHSController.java => metrics/timeline/uuid/MetricUuidGenStrategy.java} (51%)
 create mode 100644 ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/uuid/RandomUuidGenStrategy.java
 delete mode 100644 ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/timeline/EntityIdentifier.java
 delete mode 100644 ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/timeline/LeveldbTimelineStore.java
 delete mode 100644 ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/timeline/MemoryTimelineStore.java
 delete mode 100644 ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/AHSLogsPage.java
 delete mode 100644 ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/AHSView.java
 delete mode 100644 ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/AHSWebApp.java
 delete mode 100644 ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/AHSWebServices.java
 rename ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/{timeline/package-info.java => webapp/AMSController.java} (74%)
 rename ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/{ContainerPage.java => AMSWebApp.java} (55%)
 delete mode 100644 ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/AppAttemptPage.java
 delete mode 100644 ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/AppPage.java
 delete mode 100644 ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/NavBlock.java
 create mode 100644 ambari-metrics/ambari-metrics-timelineservice/src/main/resources/metrics_def/AMBARI_SERVER.dat
 create mode 100644 ambari-metrics/ambari-metrics-timelineservice/src/main/resources/metrics_def/JOBHISTORYSERVER.dat
 delete mode 100644 ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryStoreTestUtils.java
 delete mode 100644 ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestApplicationHistoryClientService.java
 delete mode 100644 ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestApplicationHistoryManagerImpl.java
 delete mode 100644 ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestApplicationHistoryServer.java
 delete mode 100644 ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestFileSystemApplicationHistoryStore.java
 delete mode 100644 ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestMemoryApplicationHistoryStore.java
 rename ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/{HBaseTimelineMetricStoreTest.java => HBaseTimelineMetricsServiceTest.java} (93%)
 create mode 100644 ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricsIgniteCacheTest.java
 create mode 100644 ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricClusterAggregatorSecondWithCacheSourceTest.java
 create mode 100644 ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/source/RawMetricsSourceTest.java
 create mode 100644 ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/uuid/TimelineMetricUuidManagerTest.java
 delete mode 100644 ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/timeline/TestLeveldbTimelineStore.java
 delete mode 100644 ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/timeline/TestMemoryTimelineStore.java
 delete mode 100644 ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/timeline/TimelineStoreTestUtils.java
 delete mode 100644 ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/TestAHSWebApp.java
 delete mode 100644 ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/TestAHSWebServices.java
 create mode 100644 ambari-metrics/ambari-metrics-timelineservice/src/test/resources/test_data/full_whitelist.dat
 create mode 100644 ambari-server/src/test/python/stacks/2.0.6/AMBARI_METRICS/test_metrics_monitor.py

-- 
To stop receiving notification emails like this one, please contact
avijayan@apache.org.

[ambari] 36/39: AMBARI-23100 : Merge branch-3.0-ams onto trunk. (Remove logsearch-it pom change)

Posted by av...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

avijayan pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/ambari.git

commit ca62ed06b27baa1ec06578a2e455500bf5785639
Author: Aravindan Vijayan <av...@hortonworks.com>
AuthorDate: Tue Feb 27 12:25:49 2018 -0800

    AMBARI-23100 : Merge branch-3.0-ams onto trunk. (Remove logsearch-it pom change)
---
 ambari-logsearch/ambari-logsearch-it/pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/ambari-logsearch/ambari-logsearch-it/pom.xml b/ambari-logsearch/ambari-logsearch-it/pom.xml
index b3a1d45..db3e09f 100644
--- a/ambari-logsearch/ambari-logsearch-it/pom.xml
+++ b/ambari-logsearch/ambari-logsearch-it/pom.xml
@@ -122,7 +122,7 @@
   </dependencies>
 
   <build>
-    <testOutputDirectory>test/target/classes</testOutputDirectory>
+    <testOutputDirectory>target/classes</testOutputDirectory>
     <testResources>
       <testResource>
         <directory>src/test/java/</directory>
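
The change above points <testOutputDirectory> back to a path inside the
module's own build directory. A quick way to sanity-check where test
classes land after this change (module path from the diff; commands
illustrative, not part of the commit):

    cd ambari-logsearch/ambari-logsearch-it
    mvn test-compile
    ls target/classes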

-- 
To stop receiving notification emails like this one, please contact
avijayan@apache.org.

[ambari] 28/39: AMBARI-22437 : Create an 'AD Manager' component in Ambari Metrics Service stack side. (avijayan)

Posted by av...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

avijayan pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/ambari.git

commit 940b23a8080acda0ea268f3a1d8826822fdd63a8
Author: Aravindan Vijayan <av...@hortonworks.com>
AuthorDate: Tue Nov 14 09:46:00 2017 -0800

    AMBARI-22437 : Create an 'AD Manager' component in Ambari Metrics Service stack side. (avijayan)
---
 .../conf/unix/ambari-metrics-admanager.sh          | 194 +++++++++++++++++++++
 .../conf/unix/log4j.properties                     |  31 ++++
 .../pom.xml                                        |  14 +-
 .../src/main/resources/config.yml                  |   6 +-
 .../adservice/app/AnomalyDetectionAppConfig.scala  |   7 +-
 .../adservice/app/AnomalyDetectionAppModule.scala  |   9 +-
 .../configuration/HBaseConfiguration.scala         |   2 +
 .../MetricCollectorConfiguration.scala             |  16 +-
 .../MetricDefinitionDBConfiguration.scala          |   6 +-
 .../adservice/db/LevelDbStoreAccessor.scala        |  56 ++++++
 .../adservice/leveldb/LevelDBDatasource.scala      |  17 +-
 .../adservice/metadata/ADMetadataProvider.scala    |  17 +-
 .../metadata/MetricDefinitionServiceImpl.scala     |  32 ++--
 .../adservice/resource/AnomalyResource.scala       |   2 +-
 .../resource/MetricDefinitionResource.scala        |  24 ++-
 .../subsystem/trend/TrendAnomalyInstance.scala     |  17 ++
 .../app/AnomalyDetectionAppConfigTest.scala        |  14 +-
 .../adservice/app/DefaultADResourceSpecTest.scala  |   4 +-
 .../adservice/leveldb/LevelDBDataSourceTest.scala  |   4 +-
 .../0.1.0/configuration/ams-admanager-config.xml   |  51 ++++++
 .../0.1.0/configuration/ams-admanager-env.xml      |  12 +-
 .../0.1.0/configuration/ams-admanager-log4j.xml    |  86 +++++++++
 .../AMBARI_METRICS/0.1.0/metainfo.xml              |   1 +
 .../AMBARI_METRICS/0.1.0/package/scripts/ams.py    |  23 ++-
 .../AMBARI_METRICS/0.1.0/package/scripts/params.py |  15 +-
 .../0.1.0/package/scripts/status_params.py         |   2 +-
 .../package/templates/admanager_config.yaml.j2     |  20 +++
 .../stacks/2.0.6/AMBARI_METRICS/test_admanager.py  | 106 +++++++++++
 .../test/python/stacks/2.0.6/configs/default.json  |  13 ++
 .../stacks/2.0.6/configs/default_ams_embedded.json |  13 ++
 30 files changed, 751 insertions(+), 63 deletions(-)

diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/conf/unix/ambari-metrics-admanager.sh b/ambari-metrics/ambari-metrics-anomaly-detection-service/conf/unix/ambari-metrics-admanager.sh
new file mode 100644
index 0000000..f1a1ae3
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/conf/unix/ambari-metrics-admanager.sh
@@ -0,0 +1,194 @@
+#!/usr/bin/env bash
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and limitations under the License.
+
+PIDFILE=/var/run/ambari-metrics-anomaly-detection/ambari-metrics-admanager.pid
+OUTFILE=/var/log/ambari-metrics-anomaly-detection/ambari-metrics-admanager.out
+
+CONF_DIR=/etc/ambari-metrics-anomaly-detection/conf
+DAEMON_NAME=ams_admanager
+
+STOP_TIMEOUT=5
+
+function write_pidfile
+{
+    local pidfile="$1"
+    echo $! > "${pidfile}" 2>/dev/null
+    if [[ $? -gt 0 ]]; then
+      echo "ERROR:  Cannot write pid ${pidfile}." | tee -a $STARTUPFILE
+      exit 1;
+    fi
+}
+
+function java_setup
+{
+  # Bail if we did not detect it
+  if [[ -z "${JAVA_HOME}" ]]; then
+    echo "ERROR: JAVA_HOME is not set and could not be found."
+    exit 1
+  fi
+
+  if [[ ! -d "${JAVA_HOME}" ]]; then
+    echo "ERROR: JAVA_HOME ${JAVA_HOME} does not exist."
+    exit 1
+  fi
+
+  JAVA="${JAVA_HOME}/bin/java"
+
+  if [[ ! -x "$JAVA" ]]; then
+    echo "ERROR: $JAVA is not executable."
+    exit 1
+  fi
+}
+
+function daemon_status()
+{
+  #
+  # LSB 4.1.0 compatible status command (1)
+  #
+  # 0 = program is running
+  # 1 = dead, but still a pid (2)
+  # 2 = (not used by us)
+  # 3 = not running
+  #
+  # 1 - this is not an endorsement of the LSB
+  #
+  # 2 - technically, the specification says /var/run/pid, so
+  #     we should never return this value, but we're giving
+  #     them the benefit of a doubt and returning 1 even if
+  #     our pid is not in in /var/run .
+  #
+
+  local pidfile="$1"
+  shift
+
+  local pid
+
+  if [[ -f "${pidfile}" ]]; then
+    pid=$(cat "${pidfile}")
+    if ps -p "${pid}" > /dev/null 2>&1; then
+      return 0
+    fi
+    return 1
+  fi
+  return 3
+}
+
+function start()
+{
+  java_setup
+
+  daemon_status "${PIDFILE}"
+  if [[ $? == 0  ]]; then
+    echo "AMS AD Manager is running as process $(cat "${PIDFILE}"). Exiting" | tee -a $STARTUPFILE
+    exit 0
+  else
+    # stale pid file, so just remove it and continue on
+    rm -f "${PIDFILE}" >/dev/null 2>&1
+  fi
+
+  nohup "${JAVA}" "-Xms$AMS_AD_HEAPSIZE" "-Xmx$AMS_AD_HEAPSIZE" ${AMS_AD_OPTS} "-Dlog4j.configuration=file://$CONF_DIR/log4j.properties" "-jar" "/usr/lib/ambari-metrics-anomaly-detection/ambari-metrics-anomaly-detection-service.jar" "server" "${CONF_DIR}/config.yaml" "$@" > $OUTFILE 2>&1 &
+  PID=$!
+  write_pidfile "${PIDFILE}"
+  sleep 2
+
+  echo "Verifying ${DAEMON_NAME} process status..."
+  if [ -z "`ps ax -o pid | grep ${PID}`" ]; then
+    if [ -s ${OUTFILE} ]; then
+      echo "ERROR: ${DAEMON_NAME} start failed. For more details, see ${OUTFILE}:"
+      echo "===================="
+      tail -n 10 ${OUTFILE}
+      echo "===================="
+    else
+      echo "ERROR: ${DAEMON_NAME} start failed"
+      rm -f ${PIDFILE}
+    fi
+    echo "Anomaly Detection Manager out at: ${OUTFILE}"
+    exit 1
+  fi
+
+  rm -f $STARTUPFILE #Deleting startup file
+  echo "Anomaly Detection Manager successfully started."
+}
+
+function stop()
+{
+  pidfile=${PIDFILE}
+
+  if [[ -f "${pidfile}" ]]; then
+    pid=$(cat "$pidfile")
+
+    kill "${pid}" >/dev/null 2>&1
+    sleep "${STOP_TIMEOUT}"
+
+    if kill -0 "${pid}" > /dev/null 2>&1; then
+      echo "WARNING: ${DAEMON_NAME} did not stop gracefully after ${STOP_TIMEOUT} seconds: Trying to kill with kill -9"
+      kill -9 "${pid}" >/dev/null 2>&1
+    fi
+
+    if ps -p "${pid}" > /dev/null 2>&1; then
+      echo "ERROR: Unable to kill ${pid}"
+    else
+      rm -f "${pidfile}" >/dev/null 2>&1
+    fi
+  fi
+}
+
+# execute ams-env.sh
+if [[ -f "${CONF_DIR}/ams-admanager-env.sh" ]]; then
+  . "${CONF_DIR}/ams-admanager-env.sh"
+else
+  echo "ERROR: Cannot execute ${CONF_DIR}/ams-admanager-env.sh." >&2
+  exit 1
+fi
+
+# set these env variables only if they were not set by ams-env.sh
+: ${AMS_AD_LOG_DIR:=/var/log/ambari-metrics-anomaly-detection}
+
+# set pid dir path
+if [[ -n "${AMS_AD_PID_DIR}" ]]; then
+  PIDFILE=${AMS_AD_PID_DIR}/admanager.pid
+fi
+
+# set out file path
+if [[ -n "${AMS_AD_LOG_DIR}" ]]; then
+  OUTFILE=${AMS_AD_LOG_DIR}/ambari-metrics-admanager.out
+fi
+
+#TODO manage 3 hbase daemons for start/stop/status
+case "$1" in
+
+	start)
+    start
+
+    ;;
+	stop)
+    stop
+
+    ;;
+	status)
+	    daemon_status "${PIDFILE}"
+	    if [[ $? == 0  ]]; then
+            echo "AMS AD Manager is running as process $(cat "${PIDFILE}")."
+        else
+            echo "AMS AD Manager is not running."
+        fi
+    ;;
+	restart)
+	  stop
+	  start
+	;;
+
+esac
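
A sketch of the script's lifecycle as driven by the case statement above
(install location illustrative; status exit codes follow the LSB-style
convention documented in daemon_status):

    ambari-metrics-admanager.sh start
    ambari-metrics-admanager.sh status; echo $?  # 0 running, 1 dead but pidfile exists, 3 not running
    ambari-metrics-admanager.sh stop
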
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/conf/unix/log4j.properties b/ambari-metrics/ambari-metrics-anomaly-detection-service/conf/unix/log4j.properties
new file mode 100644
index 0000000..9dba1da
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/conf/unix/log4j.properties
@@ -0,0 +1,31 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+# Define some default values that can be overridden by system properties
+# Root logger option
+log4j.rootLogger=INFO,file
+
+# Direct log messages to a log file
+log4j.appender.file=org.apache.log4j.RollingFileAppender
+log4j.appender.file.File=/var/log/ambari-metrics-anomaly-detection/ambari-metrics-admanager.log
+log4j.appender.file.MaxFileSize=80MB
+log4j.appender.file.MaxBackupIndex=60
+log4j.appender.file.layout=org.apache.log4j.PatternLayout
+log4j.appender.file.layout.ConversionPattern=%d{ABSOLUTE} %5p [%t] %c{1}:%L - %m%n
+
+
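Since the appender above rolls a fixed file, the AD Manager log can be
followed directly once the daemon is up (path taken from the
configuration above):

    tail -f /var/log/ambari-metrics-anomaly-detection/ambari-metrics-admanager.log
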
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/pom.xml b/ambari-metrics/ambari-metrics-anomaly-detection-service/pom.xml
index cfa8124..142f02f 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/pom.xml
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/pom.xml
@@ -135,7 +135,7 @@
         <version>3.1.0</version>
         <configuration>
           <createDependencyReducedPom>false</createDependencyReducedPom>
-          <minimizeJar>true</minimizeJar>
+          <!--<minimizeJar>true</minimizeJar>-->
           <filters>
             <filter>
               <artifact>*:*</artifact>
@@ -231,6 +231,12 @@
       <groupId>org.apache.kafka</groupId>
       <artifactId>connect-json</artifactId>
       <version>0.10.1.0</version>
+      <exclusions>
+        <exclusion>
+          <artifactId>jackson-databind</artifactId>
+          <groupId>com.fasterxml.jackson.core</groupId>
+        </exclusion>
+      </exclusions>
     </dependency>
     <dependency>
       <groupId>org.apache.spark</groupId>
@@ -262,6 +268,10 @@
           <artifactId>jersey-json</artifactId>
           <groupId>com.sun.jersey</groupId>
         </exclusion>
+        <exclusion>
+          <groupId>com.fasterxml.jackson.core</groupId>
+          <artifactId>jackson-databind</artifactId>
+        </exclusion>
       </exclusions>
     </dependency>
     <dependency>
@@ -307,7 +317,6 @@
       <groupId>org.apache.hadoop</groupId>
       <artifactId>hadoop-common</artifactId>
       <version>${hadoop.version}</version>
-      <scope>provided</scope>
       <exclusions>
         <exclusion>
           <groupId>commons-el</groupId>
@@ -446,7 +455,6 @@
       <groupId>com.google.guava</groupId>
       <artifactId>guava</artifactId>
       <version>21.0</version>
-      <scope>test</scope>
     </dependency>
     <dependency>
       <groupId>io.dropwizard.metrics</groupId>
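
The jackson-databind exclusions and scope changes above affect what ends
up in the shaded jar; one hedged way to verify the resolved version (run
from ambari-metrics-anomaly-detection-service; illustrative):

    mvn dependency:tree -Dincludes=com.fasterxml.jackson.core:jackson-databind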
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/resources/config.yml b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/resources/config.yml
index 299a472..9402f6e 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/resources/config.yml
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/resources/config.yml
@@ -21,10 +21,12 @@ logging:
   type: external
 
 metricDefinitionService:
-  inputDefinitionDirectory: /etc/ambari-metrics-anomaly-detection/conf
+  inputDefinitionDirectory: /etc/ambari-metrics-anomaly-detection/conf/definitionDirectory
 
 metricsCollector:
-  hostPortList: host1:6188,host2:6188
+  hosts: host1,host2
+  port: 6188
+  protocol: http
   metadataEndpoint: /v1/timeline/metrics/metadata/keys
 
 adQueryService:
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionAppConfig.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionAppConfig.scala
index aa20223..93f6b28 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionAppConfig.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionAppConfig.scala
@@ -20,14 +20,16 @@ package org.apache.ambari.metrics.adservice.app
 
 import javax.validation.Valid
 
-import org.apache.ambari.metrics.adservice.configuration._
+import org.apache.ambari.metrics.adservice.configuration.{HBaseConfiguration, _}
+
+import com.fasterxml.jackson.annotation.{JsonIgnore, JsonIgnoreProperties, JsonProperty}
 
-import com.fasterxml.jackson.annotation.JsonProperty
 import io.dropwizard.Configuration
 
 /**
   * Top Level AD System Manager config items.
   */
+@JsonIgnoreProperties(ignoreUnknown=true)
 class AnomalyDetectionAppConfig extends Configuration {
 
   /*
@@ -54,6 +56,7 @@ class AnomalyDetectionAppConfig extends Configuration {
   /*
    HBase Conf
     */
+  @JsonIgnore
   def getHBaseConf : org.apache.hadoop.conf.Configuration = {
     HBaseConfiguration.getHBaseConf
   }
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionAppModule.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionAppModule.scala
index 28b2880..a896563 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionAppModule.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionAppModule.scala
@@ -17,14 +17,16 @@
   */
 package org.apache.ambari.metrics.adservice.app
 
-import org.apache.ambari.metrics.adservice.db.MetadataDatasource
+import org.apache.ambari.metrics.adservice.db.{AdMetadataStoreAccessor, LevelDbStoreAccessor, MetadataDatasource}
 import org.apache.ambari.metrics.adservice.leveldb.LevelDBDataSource
-import org.apache.ambari.metrics.adservice.resource.{AnomalyResource, RootResource}
+import org.apache.ambari.metrics.adservice.metadata.{MetricDefinitionService, MetricDefinitionServiceImpl}
+import org.apache.ambari.metrics.adservice.resource.{AnomalyResource, MetricDefinitionResource, RootResource}
 import org.apache.ambari.metrics.adservice.service.{ADQueryService, ADQueryServiceImpl}
 
 import com.codahale.metrics.health.HealthCheck
 import com.google.inject.AbstractModule
 import com.google.inject.multibindings.Multibinder
+
 import io.dropwizard.setup.Environment
 
 class AnomalyDetectionAppModule(config: AnomalyDetectionAppConfig, env: Environment) extends AbstractModule {
@@ -34,8 +36,11 @@ class AnomalyDetectionAppModule(config: AnomalyDetectionAppConfig, env: Environm
     val healthCheckBinder = Multibinder.newSetBinder(binder(), classOf[HealthCheck])
     healthCheckBinder.addBinding().to(classOf[DefaultHealthCheck])
     bind(classOf[AnomalyResource])
+    bind(classOf[MetricDefinitionResource])
     bind(classOf[RootResource])
+    bind(classOf[AdMetadataStoreAccessor]).to(classOf[LevelDbStoreAccessor])
     bind(classOf[ADQueryService]).to(classOf[ADQueryServiceImpl])
+    bind(classOf[MetricDefinitionService]).to(classOf[MetricDefinitionServiceImpl])
     bind(classOf[MetadataDatasource]).to(classOf[LevelDBDataSource])
   }
 }
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/configuration/HBaseConfiguration.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/configuration/HBaseConfiguration.scala
index a7bbc66..a51a959 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/configuration/HBaseConfiguration.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/configuration/HBaseConfiguration.scala
@@ -19,12 +19,14 @@ package org.apache.ambari.metrics.adservice.configuration
 import java.net.{MalformedURLException, URISyntaxException}
 
 import org.apache.hadoop.conf.Configuration
+import org.slf4j.{Logger, LoggerFactory}
 
 object HBaseConfiguration {
 
   val HBASE_SITE_CONFIGURATION_FILE: String = "hbase-site.xml"
   val hbaseConf: org.apache.hadoop.conf.Configuration = new Configuration(true)
   var isInitialized: Boolean = false
+  val LOG : Logger = LoggerFactory.getLogger("HBaseConfiguration")
 
   def initConfigs(): Unit = {
     if (!isInitialized) {
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/configuration/MetricCollectorConfiguration.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/configuration/MetricCollectorConfiguration.scala
index 9418897..2530730 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/configuration/MetricCollectorConfiguration.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/configuration/MetricCollectorConfiguration.scala
@@ -28,13 +28,25 @@ import com.fasterxml.jackson.annotation.JsonProperty
 class MetricCollectorConfiguration {
 
   @NotNull
-  private var hostPortList: String = _
+  private var hosts: String = _
+
+  @NotNull
+  private var port: String = _
+
+  @NotNull
+  private var protocol: String = _
 
   @NotNull
   private var metadataEndpoint: String = _
 
   @JsonProperty
-  def getHostPortList: String = hostPortList
+  def getHosts: String = hosts
+
+  @JsonProperty
+  def getPort: String = port
+
+  @JsonProperty
+  def getProtocol: String = protocol
 
   @JsonProperty
   def getMetadataEndpoint: String = metadataEndpoint
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/configuration/MetricDefinitionDBConfiguration.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/configuration/MetricDefinitionDBConfiguration.scala
index 79a350c..ef4e00c 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/configuration/MetricDefinitionDBConfiguration.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/configuration/MetricDefinitionDBConfiguration.scala
@@ -26,12 +26,14 @@ class MetricDefinitionDBConfiguration {
 
   @NotNull
   private var dbDirPath: String = _
+  private var verifyChecksums: Boolean = true
+  private var performParanoidChecks: Boolean = false
 
   @JsonProperty("verifyChecksums")
-  def verifyChecksums: Boolean = true
+  def getVerifyChecksums: Boolean = verifyChecksums
 
   @JsonProperty("performParanoidChecks")
-  def performParanoidChecks: Boolean = false
+  def getPerformParanoidChecks: Boolean = performParanoidChecks
 
   @JsonProperty("dbDirPath")
   def getDbDirPath: String = dbDirPath
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/LevelDbStoreAccessor.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/LevelDbStoreAccessor.scala
new file mode 100644
index 0000000..baad57d
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/LevelDbStoreAccessor.scala
@@ -0,0 +1,56 @@
+package org.apache.ambari.metrics.adservice.db
+
+import org.apache.ambari.metrics.adservice.metadata.MetricSourceDefinition
+
+import com.google.inject.Inject
+
+class LevelDbStoreAccessor extends AdMetadataStoreAccessor {
+
+  @Inject
+  var levelDbDataSource : MetadataDatasource = _
+
+  @Inject
+  def this(levelDbDataSource: MetadataDatasource) = {
+    this()
+    this.levelDbDataSource = levelDbDataSource
+  }
+
+  /**
+    * Return all saved component definitions from DB.
+    *
+    * @return
+    */
+  override def getSavedInputDefinitions: List[MetricSourceDefinition] = {
+    List.empty[MetricSourceDefinition]
+  }
+
+  /**
+    * Save a set of component definitions
+    *
+    * @param metricSourceDefinitions Set of component definitions
+    * @return Success / Failure
+    */
+  override def saveInputDefinitions(metricSourceDefinitions: List[MetricSourceDefinition]): Boolean = {
+    true
+  }
+
+  /**
+    * Save a component definition
+    *
+    * @param metricSourceDefinition component definition
+    * @return Success / Failure
+    */
+  override def saveInputDefinition(metricSourceDefinition: MetricSourceDefinition): Boolean = {
+    true
+  }
+
+  /**
+    * Delete a component definition
+    *
+    * @param definitionName component definition
+    * @return
+    */
+  override def removeInputDefinition(definitionName: String): Boolean = {
+    true
+  }
+}
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/leveldb/LevelDBDatasource.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/leveldb/LevelDBDatasource.scala
index 6d185bf..a34a60a 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/leveldb/LevelDBDatasource.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/leveldb/LevelDBDatasource.scala
@@ -20,6 +20,8 @@ package org.apache.ambari.metrics.adservice.leveldb
 
 import java.io.File
 
+import javax.inject.Inject
+
 import org.apache.ambari.metrics.adservice.app.AnomalyDetectionAppConfig
 import org.apache.ambari.metrics.adservice.configuration.MetricDefinitionDBConfiguration
 import org.apache.ambari.metrics.adservice.db.MetadataDatasource
@@ -29,11 +31,20 @@ import org.iq80.leveldb.impl.Iq80DBFactory
 import com.google.inject.Singleton
 
 @Singleton
-class LevelDBDataSource(appConfig: AnomalyDetectionAppConfig) extends MetadataDatasource {
+class LevelDBDataSource() extends MetadataDatasource {
 
   private var db: DB = _
   @volatile var isInitialized: Boolean = false
 
+  var appConfig: AnomalyDetectionAppConfig = _
+
+  @Inject
+  def this(appConfig: AnomalyDetectionAppConfig) = {
+    this()
+    this.appConfig = appConfig
+    initialize()
+  }
+
   override def initialize(): Unit = {
     if (isInitialized) return 
 
@@ -41,8 +52,8 @@ class LevelDBDataSource(appConfig: AnomalyDetectionAppConfig) extends MetadataDa
 
     db = createDB(new LevelDbConfig {
       override val createIfMissing: Boolean = true
-      override val verifyChecksums: Boolean = configuration.verifyChecksums
-      override val paranoidChecks: Boolean = configuration.performParanoidChecks
+      override val verifyChecksums: Boolean = configuration.getVerifyChecksums
+      override val paranoidChecks: Boolean = configuration.getPerformParanoidChecks
       override val path: String = configuration.getDbDirPath
     })
     isInitialized = true
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/ADMetadataProvider.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/ADMetadataProvider.scala
index 3bcf4b0..95b1b63 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/ADMetadataProvider.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/ADMetadataProvider.scala
@@ -32,7 +32,9 @@ import com.fasterxml.jackson.module.scala.experimental.ScalaObjectMapper
   */
 class ADMetadataProvider extends MetricMetadataProvider {
 
-  var metricCollectorHostPorts: Array[String] = Array.empty[String]
+  var metricCollectorHosts: Array[String] = Array.empty[String]
+  var metricCollectorPort: String = _
+  var metricCollectorProtocol: String = _
   var metricMetadataPath: String = "/v1/timeline/metrics/metadata/keys"
 
   val connectTimeout: Int = 10000
@@ -42,9 +44,11 @@ class ADMetadataProvider extends MetricMetadataProvider {
 
   def this(configuration: MetricCollectorConfiguration) {
     this()
-    if (StringUtils.isNotEmpty(configuration.getHostPortList)) {
-      metricCollectorHostPorts = configuration.getHostPortList.split(",")
+    if (StringUtils.isNotEmpty(configuration.getHosts)) {
+      metricCollectorHosts = configuration.getHosts.split(",")
     }
+    metricCollectorPort = configuration.getPort
+    metricCollectorProtocol = configuration.getProtocol
     metricMetadataPath = configuration.getMetadataEndpoint
   }
 
@@ -57,8 +61,8 @@ class ADMetadataProvider extends MetricMetadataProvider {
 
     for (metricDef <- metricSourceDefinition.metricDefinitions) {
       if (metricDef.isValid) { //Skip requesting metric keys for invalid definitions.
-        for (hostPort <- metricCollectorHostPorts) {
-          val metricKeys: Set[MetricKey] = getKeysFromMetricsCollector(hostPort + metricMetadataPath, metricDef)
+        for (host <- metricCollectorHosts) {
+          val metricKeys: Set[MetricKey] = getKeysFromMetricsCollector(metricCollectorProtocol, host, metricCollectorPort, metricMetadataPath, metricDef)
           if (metricKeys != null) {
             keysMap += (metricDef -> metricKeys)
             metricKeySet.++(metricKeys)
@@ -76,8 +80,9 @@ class ADMetadataProvider extends MetricMetadataProvider {
     * @param metricDefinition
     * @return
     */
-  def getKeysFromMetricsCollector(url: String, metricDefinition: MetricDefinition): Set[MetricKey] = {
+  def getKeysFromMetricsCollector(protocol: String, host: String, port: String, path: String, metricDefinition: MetricDefinition): Set[MetricKey] = {
 
+    val url: String = protocol + "://" + host + ":" + port + path
     val mapper = new ObjectMapper() with ScalaObjectMapper
     try {
       val connection = new URL(url).openConnection.asInstanceOf[HttpURLConnection]
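Since the host list, port and protocol now arrive separately, the request URL is assembled per collector host. Note that metricMetadataPath already carries its leading slash, so the pieces join as protocol://host:port followed by the path; with hypothetical values:

    // e.g. protocol = "http", host = "host1", port = "6188"
    val url = s"$protocol://$host:$port$path"
    // => "http://host1:6188/v1/timeline/metrics/metadata/keys"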
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricDefinitionServiceImpl.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricDefinitionServiceImpl.scala
index ffa9944..c34d2dd 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricDefinitionServiceImpl.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricDefinitionServiceImpl.scala
@@ -19,15 +19,16 @@ package org.apache.ambari.metrics.adservice.metadata
 
 import org.apache.ambari.metrics.adservice.app.AnomalyDetectionAppConfig
 import org.apache.ambari.metrics.adservice.db.AdMetadataStoreAccessor
+import org.slf4j.{Logger, LoggerFactory}
 
 import com.google.inject.{Inject, Singleton}
 
 @Singleton
 class MetricDefinitionServiceImpl extends MetricDefinitionService {
 
-  @Inject
-  var adMetadataStoreAccessor: AdMetadataStoreAccessor = _
+  val LOG : Logger = LoggerFactory.getLogger(classOf[MetricDefinitionServiceImpl])
 
+  var adMetadataStoreAccessor: AdMetadataStoreAccessor = _
   var configuration: AnomalyDetectionAppConfig = _
   var metricMetadataProvider: MetricMetadataProvider = _
 
@@ -36,18 +37,10 @@ class MetricDefinitionServiceImpl extends MetricDefinitionService {
   var metricDefinitionMetricKeyMap: Map[MetricDefinition, Set[MetricKey]] = Map()
 
   @Inject
-  def this (anomalyDetectionAppConfig: AnomalyDetectionAppConfig) = {
-    this ()
-    //TODO : Create AD Metadata instance here (or inject)
-    configuration = anomalyDetectionAppConfig
-    initializeService()
-  }
-
-  def this (anomalyDetectionAppConfig: AnomalyDetectionAppConfig, adMetadataStoreAccessor: AdMetadataStoreAccessor) = {
+  def this (anomalyDetectionAppConfig: AnomalyDetectionAppConfig, metadataStoreAccessor: AdMetadataStoreAccessor) = {
     this ()
-    //TODO : Create AD Metadata instance here (or inject). Pass in Schema information.
+    adMetadataStoreAccessor = metadataStoreAccessor
     configuration = anomalyDetectionAppConfig
-    this.adMetadataStoreAccessor = adMetadataStoreAccessor
     initializeService()
   }
 
@@ -67,13 +60,13 @@ class MetricDefinitionServiceImpl extends MetricDefinitionService {
     //Load definitions from metadata store
     val definitionsFromStore: List[MetricSourceDefinition] = adMetadataStoreAccessor.getSavedInputDefinitions
     for (definition <- definitionsFromStore) {
-      validateAndSanitizeMetricSourceDefinition(definition)
+      sanitizeMetricSourceDefinition(definition)
     }
 
     //Load definitions from configs
     val definitionsFromConfig: List[MetricSourceDefinition] = getInputDefinitionsFromConfig
     for (definition <- definitionsFromConfig) {
-      validateAndSanitizeMetricSourceDefinition(definition)
+      sanitizeMetricSourceDefinition(definition)
     }
 
     //Union the 2 sources, with DB taking precedence.
@@ -100,6 +93,10 @@
 
   @Override
   def getDefinitionByName(name: String): MetricSourceDefinition = {
+    if (!metricSourceDefinitionMap.contains(name)) {
+      LOG.warn("Metric Source Definition with name " + name + " not found")
+      return null
+    }
     metricSourceDefinitionMap.apply(name)
   }
 
@@ -187,7 +183,13 @@ class MetricDefinitionServiceImpl extends MetricDefinitionService {
     this.adMetadataStoreAccessor = adMetadataStoreAccessor
   }
 
-  def validateAndSanitizeMetricSourceDefinition(metricSourceDefinition: MetricSourceDefinition): Unit = {
+
+  /**
+    * Look into the Metric Definitions inside a Metric Source definition, and push down source level appId &
+    * hosts to Metric definition if they do not have an override.
+    * @param metricSourceDefinition Input Metric Source Definition
+    */
+  def sanitizeMetricSourceDefinition(metricSourceDefinition: MetricSourceDefinition): Unit = {
     val sourceLevelAppId: String = metricSourceDefinition.appId
     val sourceLevelHostList: List[String] = metricSourceDefinition.hosts
 
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/resource/AnomalyResource.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/resource/AnomalyResource.scala
index c941ac3..98ce0c4 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/resource/AnomalyResource.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/resource/AnomalyResource.scala
@@ -23,7 +23,7 @@ import javax.ws.rs.{GET, Path, Produces}
 
 import org.joda.time.DateTime
 
-@Path("/topNAnomalies")
+@Path("/anomaly")
 class AnomalyResource {
 
   @GET
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/resource/MetricDefinitionResource.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/resource/MetricDefinitionResource.scala
index aacea79..16125fa 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/resource/MetricDefinitionResource.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/resource/MetricDefinitionResource.scala
@@ -17,12 +17,24 @@
 
 package org.apache.ambari.metrics.adservice.resource
 
+import javax.ws.rs.{GET, Path, Produces}
+import javax.ws.rs.core.MediaType.APPLICATION_JSON
+
+import org.apache.ambari.metrics.adservice.metadata.{MetricDefinitionService, MetricSourceDefinition}
+import org.apache.commons.lang.StringUtils
+
+import com.google.inject.Inject
+
+@Path("/metric-definition")
 class MetricDefinitionResource {
 
-  /*
-    GET component definition
-    POST component definition
-    DELETE component definition
-    PUT component definition
-  */
+  @Inject
+  var metricDefinitionService: MetricDefinitionService = _
+
+  @GET
+  @Produces(Array(APPLICATION_JSON))
+  def getMetricDefinition(definitionName: String): MetricSourceDefinition = {
+    null
+  }
+
 }
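As committed, the resource is a placeholder: definitionName has no JAX-RS binding and the handler returns null. One plausible shape for the eventual wiring, using standard javax.ws.rs annotations and delegating to the injected MetricDefinitionService (the path-parameter layout is an assumption, not the committed API):

    import javax.ws.rs.{GET, Path, PathParam, Produces}
    import javax.ws.rs.core.MediaType.APPLICATION_JSON
    import javax.ws.rs.core.Response

    @GET
    @Path("/{name}")
    @Produces(Array(APPLICATION_JSON))
    def getMetricDefinition(@PathParam("name") definitionName: String): Response = {
      val definition = metricDefinitionService.getDefinitionByName(definitionName)
      if (definition == null)
        Response.status(Response.Status.NOT_FOUND).build()  // unknown name
      else
        Response.ok(definition).build()
    }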
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/subsystem/trend/TrendAnomalyInstance.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/subsystem/trend/TrendAnomalyInstance.scala
index 125da34..3fc0d6f 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/subsystem/trend/TrendAnomalyInstance.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/subsystem/trend/TrendAnomalyInstance.scala
@@ -1,3 +1,20 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
 package org.apache.ambari.metrics.adservice.subsystem.trend
 
 import org.apache.ambari.metrics.adservice.common.{Season, TimeRange}
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionAppConfigTest.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionAppConfigTest.scala
index 104ccea..989ba21 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionAppConfigTest.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionAppConfigTest.scala
@@ -44,11 +44,21 @@ class AnomalyDetectionAppConfigTest extends FunSuite {
 
     assert(config.isInstanceOf[AnomalyDetectionAppConfig])
 
-    assert(config.getMetricDefinitionServiceConfiguration.getInputDefinitionDirectory == "/etc/ambari-metrics-anomaly-detection/conf")
+    assert(config.getMetricDefinitionServiceConfiguration.getInputDefinitionDirectory ==
+      "/etc/ambari-metrics-anomaly-detection/conf/definitionDirectory")
 
-    assert(config.getMetricCollectorConfiguration.getHostPortList == "host1:6188,host2:6188")
+    assert(config.getMetricCollectorConfiguration.getHosts == "host1,host2")
+
+    assert(config.getMetricCollectorConfiguration.getPort == "6188")
 
     assert(config.getAdServiceConfiguration.getAnomalyDataTtl == 604800)
+
+    assert(config.getMetricDefinitionDBConfiguration.getDbDirPath == "/var/lib/ambari-metrics-anomaly-detection/")
+
+    assert(config.getMetricDefinitionDBConfiguration.getVerifyChecksums)
+
+    assert(!config.getMetricDefinitionDBConfiguration.getPerformParanoidChecks)
+
   }
 
 }
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/app/DefaultADResourceSpecTest.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/app/DefaultADResourceSpecTest.scala
index 65cf609..2a4999c 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/app/DefaultADResourceSpecTest.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/app/DefaultADResourceSpecTest.scala
@@ -32,10 +32,10 @@ import com.google.common.io.Resources
 
 class DefaultADResourceSpecTest extends FunSpec with Matchers {
 
-  describe("/topNAnomalies") {
+  describe("/anomaly") {
     it("Must return default message") {
       withAppRunning(classOf[AnomalyDetectionApp], Resources.getResource("config.yml").getPath) { rule =>
-        val json = client.target(s"http://localhost:${rule.getLocalPort}/topNAnomalies")
+        val json = client.target(s"http://localhost:${rule.getLocalPort}/anomaly")
           .request().accept(APPLICATION_JSON).buildGet().invoke(classOf[String])
         val now = DateTime.now.toString("MM-dd-yyyy hh:mm")
         assert(json == "{\"message\":\"Anomaly Detection Service!\"," + "\"today\":\"" + now + "\"}")
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/leveldb/LevelDBDataSourceTest.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/leveldb/LevelDBDataSourceTest.scala
index 2ddb7b8..9757d76 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/leveldb/LevelDBDataSourceTest.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/leveldb/LevelDBDataSourceTest.scala
@@ -36,8 +36,8 @@ class LevelDBDataSourceTest extends FunSuite with BeforeAndAfter with Matchers w
     val mdConfig : MetricDefinitionDBConfiguration = mock[MetricDefinitionDBConfiguration]
 
     when(appConfig.getMetricDefinitionDBConfiguration).thenReturn(mdConfig)
-    when(mdConfig.verifyChecksums).thenReturn(true)
-    when(mdConfig.performParanoidChecks).thenReturn(false)
+    when(mdConfig.getVerifyChecksums).thenReturn(true)
+    when(mdConfig.getPerformParanoidChecks).thenReturn(false)
     when(mdConfig.getDbDirPath).thenReturn(file.getAbsolutePath)
 
     db = new LevelDBDataSource(appConfig)
diff --git a/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/configuration/ams-admanager-config.xml b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/configuration/ams-admanager-config.xml
index 489850f..2c6bbf7 100644
--- a/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/configuration/ams-admanager-config.xml
+++ b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/configuration/ams-admanager-config.xml
@@ -57,4 +57,55 @@
     </value-attributes>
     <on-ambari-upgrade add="true"/>
   </property>
+  <property>
+    <name>ambari.metrics.admanager.input.definition.directory</name>
+    <value></value>
+    <display-name>AD Manager Input definition directory</display-name>
+    <description>Directory from which the AMS Anomaly Detection Manager loads metric input definitions</description>
+    <value-attributes>
+      <type>directory</type>
+      <overridable>false</overridable>
+      <empty-value-valid>true</empty-value-valid>
+    </value-attributes>
+    <on-ambari-upgrade add="true"/>
+  </property>
+  <property>
+    <name>content</name>
+    <display-name>ams-admanager-config template</display-name>
+    <value>
+      server:
+        applicationConnectors:
+        - type: http
+          port: {{ams_admanager_port}}
+        requestLog:
+          type: external
+
+      logging:
+        type: external
+
+      metricDefinitionService:
+        inputDefinitionDirectory: {{ams_ad_input_definition_directory}}
+
+      metricsCollector:
+        hosts: {{ams_collector_hosts}}
+        port: {{metric_collector_port}}
+        protocol: {{metric_collector_protocol}}
+        metadataEndpoint: /v1/timeline/metrics/metadata/keys
+
+      adQueryService:
+        anomalyDataTtl: 604800
+
+      metricDefinitionDB:
+        # force checksum verification of all data that is read from the file system on behalf of a particular read
+        verifyChecksums: true
+        # raise an error as soon as it detects an internal corruption
+        performParanoidChecks: false
+        # Path to Level DB directory
+        dbDirPath: {{ams_ad_data_dir}}
+    </value>
+    <value-attributes>
+      <type>content</type>
+    </value-attributes>
+    <on-ambari-upgrade add="true"/>
+  </property>
 </configuration>
\ No newline at end of file
diff --git a/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/configuration/ams-admanager-env.xml b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/configuration/ams-admanager-env.xml
index 99e93a6..a79796b 100644
--- a/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/configuration/ams-admanager-env.xml
+++ b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/configuration/ams-admanager-env.xml
@@ -21,7 +21,7 @@
 -->
 <configuration>
   <property>
-    <name>ams_admanager_log_dir</name>
+    <name>ams_ad_log_dir</name>
     <value>/var/log/ambari-metrics-anomaly-detection</value>
     <display-name>Anomaly Detection Manager log dir</display-name>
     <description>AMS Anomaly Detection Manager log directory.</description>
@@ -31,7 +31,7 @@
     <on-ambari-upgrade add="true"/>
   </property>
   <property>
-    <name>ams_admanager_pid_dir</name>
+    <name>ams_ad_pid_dir</name>
     <value>/var/run/ambari-metrics-anomaly-detection</value>
     <display-name>Anomaly Detection Manager pid dir</display-name>
     <description>AMS Anomaly Detection Manager pid directory.</description>
@@ -41,7 +41,7 @@
     <on-ambari-upgrade add="true"/>
   </property>
   <property>
-    <name>ams_admanager_data_dir</name>
+    <name>ams_ad_data_dir</name>
     <value>/var/lib/ambari-metrics-anomaly-detection</value>
     <display-name>Anomaly Detection Manager data dir</display-name>
     <description>AMS Anomaly Detection Manager data directory.</description>
@@ -74,10 +74,10 @@
       export JAVA_HOME={{java64_home}}
 
       #  Anomaly Detection Manager Log directory for log4j
-      export AMS_AD_LOG_DIR={{ams_admanager_log_dir}}
+      export AMS_AD_LOG_DIR={{ams_ad_log_dir}}
 
       # Anomaly Detection Manager pid directory
-      export AMS_AD_PID_DIR={{ams_admanager_pid_dir}}
+      export AMS_AD_PID_DIR={{ams_ad_pid_dir}}
 
       # Anomaly Detection Manager heapsize
       export AMS_AD_HEAPSIZE={{ams_admanager_heapsize}}
@@ -92,7 +92,7 @@
       {% endif %}
 
       # Anomaly Detection Manager GC options
-      export AMS_AD_GC_OPTS="-XX:+UseConcMarkSweepGC -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:{{ams_admanager_log_dir}}/admanager-gc.log-`date +'%Y%m%d%H%M'`"
+      export AMS_AD_GC_OPTS="-XX:+UseConcMarkSweepGC -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:{{ams_ad_log_dir}}/admanager-gc.log-`date +'%Y%m%d%H%M'`"
       export AMS_AD_OPTS="$AMS_AD_OPTS $AMS_AD_GC_OPTS"
 
     </value>
diff --git a/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/configuration/ams-admanager-log4j.xml b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/configuration/ams-admanager-log4j.xml
new file mode 100644
index 0000000..b1f821e
--- /dev/null
+++ b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/configuration/ams-admanager-log4j.xml
@@ -0,0 +1,86 @@
+<?xml version="1.0"?>
+<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~     http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing, software
+  ~ distributed under the License is distributed on an "AS IS" BASIS,
+  ~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  ~ See the License for the specific language governing permissions and
+  ~ limitations under the License.
+  -->
+<configuration supports_final="false" supports_adding_forbidden="true">
+    <property>
+        <name>ams_ad_log_max_backup_size</name>
+        <value>80</value>
+        <description>The maximum size the log file may reach before it is rolled over to a backup</description>
+        <display-name>AMS AD Manager Log: backup file size</display-name>
+        <value-attributes>
+            <unit>MB</unit>
+        </value-attributes>
+        <on-ambari-upgrade add="true"/>
+    </property>
+    <property>
+        <name>ams_ad_log_number_of_backup_files</name>
+        <value>60</value>
+        <description>The number of backup files</description>
+        <display-name>AMS AD Manager Log: # of backup files</display-name>
+        <value-attributes>
+            <type>int</type>
+            <minimum>0</minimum>
+        </value-attributes>
+        <on-ambari-upgrade add="true"/>
+    </property>
+    <property>
+        <name>content</name>
+        <display-name>ams-ad-log4j template</display-name>
+        <description>Custom log4j.properties</description>
+        <value>
+            #
+            # Licensed to the Apache Software Foundation (ASF) under one
+            # or more contributor license agreements.  See the NOTICE file
+            # distributed with this work for additional information
+            # regarding copyright ownership.  The ASF licenses this file
+            # to you under the Apache License, Version 2.0 (the
+            # "License"); you may not use this file except in compliance
+            # with the License.  You may obtain a copy of the License at
+            #
+            #     http://www.apache.org/licenses/LICENSE-2.0
+            #
+            # Unless required by applicable law or agreed to in writing, software
+            # distributed under the License is distributed on an "AS IS" BASIS,
+            # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+            # See the License for the specific language governing permissions and
+            # limitations under the License.
+            #
+
+            # Define some default values that can be overridden by system properties
+            ams.ad.log.dir=.
+            ams.ad.log.file=ambari-metrics-admanager.log
+
+            # Root logger option
+            log4j.rootLogger=INFO,file
+
+            # Direct log messages to a log file
+            log4j.appender.file=org.apache.log4j.RollingFileAppender
+            log4j.appender.file.File=${ams.ad.log.dir}/${ams.ad.log.file}
+            log4j.appender.file.MaxFileSize={{ams_ad_log_max_backup_size}}MB
+            log4j.appender.file.MaxBackupIndex={{ams_ad_log_number_of_backup_files}}
+            log4j.appender.file.layout=org.apache.log4j.PatternLayout
+            log4j.appender.file.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
+        </value>
+        <value-attributes>
+            <type>content</type>
+            <show-property-name>false</show-property-name>
+        </value-attributes>
+        <on-ambari-upgrade add="true"/>
+    </property>
+</configuration>
diff --git a/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/metainfo.xml b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/metainfo.xml
index 0add0cd..41e278d 100644
--- a/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/metainfo.xml
+++ b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/metainfo.xml
@@ -137,6 +137,7 @@
             <config-type>ams-hbase-site</config-type>
             <config-type>ams-admanager-config</config-type>
             <config-type>ams-admanager-env</config-type>
+            <config-type>ams-admanager-log4j</config-type>
           </configuration-dependencies>
           <logs>
             <log>
diff --git a/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/ams.py b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/ams.py
index fe6b4ec..7ab0547 100644
--- a/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/ams.py
+++ b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/ams.py
@@ -532,14 +532,31 @@ def ams(name=None, action=None):
     File(format("{ams_ad_conf_dir}/ams-admanager-env.sh"),
          owner=params.ams_user,
          group=params.user_group,
-         content=InlineTemplate(params.ams_grafana_env_sh_template)
+         content=InlineTemplate(params.ams_admanager_env_sh_template)
          )
 
-    File(format("{conf_dir}/config.yaml"),
-         content=Template("config.yaml.j2"),
+    File(format("{ams_ad_conf_dir}/config.yaml"),
+         content=InlineTemplate(params.ams_admanager_config_template),
          owner=params.ams_user,
          group=params.user_group
          )
+    merged_ams_hbase_site = {}
+    merged_ams_hbase_site.update(params.config['configurations']['ams-hbase-site'])
+    if params.security_enabled:
+      merged_ams_hbase_site.update(params.config['configurations']['ams-hbase-security-site'])
+
+    XmlConfig( "hbase-site.xml",
+             conf_dir = params.ams_ad_conf_dir,
+             configurations = merged_ams_hbase_site,
+             configuration_attributes=params.config['configuration_attributes']['ams-hbase-site'],
+             owner = params.ams_user,
+             )
+
+    if params.ams_ad_log4j_props is not None:
+      File(os.path.join(params.ams_ad_conf_dir, "log4j.properties"),
+         owner=params.ams_user,
+         content=params.ams_ad_log4j_props
+         )
 
     if action != 'stop':
       for dir in ams_ad_directories:
diff --git a/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/params.py b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/params.py
index 5d49939..40d3db6 100644
--- a/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/params.py
+++ b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/params.py
@@ -189,12 +189,21 @@ ams_hbase_init_check_enabled = default("/configurations/ams-site/timeline.metric
 
 # AD Manager settings
 ams_ad_conf_dir = '/etc/ambari-metrics-anomaly-detection/conf'
-ams_ad_log_dir = default("/configurations/ams-ad-env/ams_admanager_log_dir", 'var/log/ambari-metrics-anomaly-detection')
-ams_ad_pid_dir = status_params.ams_admanager_pid_dir
-ams_ad_data_dir = default("/configurations/ams-ad-env/ams_admanager_data_dir", '/var/lib/ambari-metrics-anomaly-detection')
+ams_ad_log_dir = default("/configurations/ams-admanager-env/ams_ad_log_dir", '/var/log/ambari-metrics-anomaly-detection')
+ams_ad_pid_dir = status_params.ams_ad_pid_dir
+ams_ad_data_dir = default("/configurations/ams-admanager-env/ams_ad_data_dir", '/var/lib/ambari-metrics-anomaly-detection')
+ams_ad_input_definition_directory = config['configurations']['ams-admanager-config']['ambari.metrics.admanager.input.definition.directory']
 
+ams_admanager_env_sh_template = config['configurations']['ams-admanager-env']['content']
+ams_admanager_config_template = config['configurations']['ams-admanager-config']['content']
 ams_admanager_script = "/usr/sbin/ambari-metrics-admanager"
 ams_admanager_port = config['configurations']['ams-admanager-config']['ambari.metrics.admanager.application.port']
+ams_admanager_heapsize = config['configurations']['ams-admanager-env']['ams_admanager_heapsize']
+
+if (('ams-admanager-log4j' in config['configurations']) and ('content' in config['configurations']['ams-admanager-log4j'])):
+  ams_ad_log4j_props = config['configurations']['ams-admanager-log4j']['content']
+else:
+  ams_ad_log4j_props = None
 
 #hadoop params
 
diff --git a/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/status_params.py b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/status_params.py
index bc9b7e3..3373592 100644
--- a/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/status_params.py
+++ b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/status_params.py
@@ -33,7 +33,7 @@ hbase_user = ams_user
 ams_collector_pid_dir = config['configurations']['ams-env']['metrics_collector_pid_dir']
 ams_monitor_pid_dir = config['configurations']['ams-env']['metrics_monitor_pid_dir']
 ams_grafana_pid_dir = config['configurations']['ams-grafana-env']['metrics_grafana_pid_dir']
-ams_admanager_pid_dir = config['configurations']['ams-ad-env']['ams_admanager_pid_dir']
+ams_ad_pid_dir = config['configurations']['ams-admanager-env']['ams_ad_pid_dir']
 
 monitor_pid_file = format("{ams_monitor_pid_dir}/ambari-metrics-monitor.pid")
 grafana_pid_file = format("{ams_grafana_pid_dir}/grafana-server.pid")
diff --git a/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/templates/admanager_config.yaml.j2 b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/templates/admanager_config.yaml.j2
index 787aa3e..a403978 100644
--- a/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/templates/admanager_config.yaml.j2
+++ b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/templates/admanager_config.yaml.j2
@@ -22,3 +22,23 @@ server:
 
 logging:
   type: external
+
+metricDefinitionService:
+  inputDefinitionDirectory: {{ams_ad_input_definition_directory}}
+
+metricsCollector:
+  hosts: {{ams_collector_hosts}}
+  port: {{metric_collector_port}}
+  protocol: {{metric_collector_protocol}}
+  metadataEndpoint: /v1/timeline/metrics/metadata/keys
+
+adQueryService:
+  anomalyDataTtl: 604800
+
+metricDefinitionDB:
+  # force checksum verification of all data that is read from the file system on behalf of a particular read
+  verifyChecksums: true
+  # raise an error as soon as it detects an internal corruption
+  performParanoidChecks: false
+  # Path to Level DB directory
+  dbDirPath: {{ams_ad_data_dir}}
\ No newline at end of file
diff --git a/ambari-server/src/test/python/stacks/2.0.6/AMBARI_METRICS/test_admanager.py b/ambari-server/src/test/python/stacks/2.0.6/AMBARI_METRICS/test_admanager.py
new file mode 100644
index 0000000..dc2f4b0
--- /dev/null
+++ b/ambari-server/src/test/python/stacks/2.0.6/AMBARI_METRICS/test_admanager.py
@@ -0,0 +1,106 @@
+#!/usr/bin/env python
+
+'''
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+'''
+
+from mock.mock import MagicMock, patch
+from stacks.utils.RMFTestCase import *
+import os, sys
+
+@patch("tempfile.mkdtemp", new = MagicMock(return_value='/some_tmp_dir'))
+@patch("os.path.exists", new = MagicMock(return_value=True))
+@patch("platform.linux_distribution", new = MagicMock(return_value="Linux"))
+class TestADManager(RMFTestCase):
+  COMMON_SERVICES_PACKAGE_DIR = "AMBARI_METRICS/0.1.0/package/scripts"
+  STACK_VERSION = "2.0.6"
+
+  file_path = os.path.dirname(os.path.abspath(__file__))
+  file_path = os.path.dirname(os.path.dirname(os.path.dirname(os.path.dirname(os.path.dirname(file_path)))))
+  file_path = os.path.join(file_path, "main", "resources", "common-services", COMMON_SERVICES_PACKAGE_DIR)
+
+  sys.path.append(file_path)
+  def test_start(self):
+    self.executeScript(self.COMMON_SERVICES_PACKAGE_DIR + "/ams_admanager.py",
+                       classname = "AmsADManager",
+                       command = "start",
+                       config_file="default.json",
+                       stack_version = self.STACK_VERSION,
+                       target = RMFTestCase.TARGET_COMMON_SERVICES
+                       )
+    self.maxDiff=None
+    self.assert_configure()
+    self.assertResourceCalled('Execute', ('chown', u'-R', u'ams', '/etc/ambari-metrics-anomaly-detection/conf'),
+                              sudo = True
+                              )
+    self.assertResourceCalled('Execute', ('chown', u'-R', u'ams', '/var/log/ambari-metrics-anomaly-detection'),
+                              sudo = True
+                              )
+    self.assertResourceCalled('Execute', ('chown', u'-R', u'ams', '/var/lib/ambari-metrics-anomaly-detection'),
+                              sudo = True
+                              )
+    self.assertResourceCalled('Execute', ('chown', u'-R', u'ams', '/var/run/ambari-metrics-anomaly-detection'),
+                              sudo = True
+                              )
+    self.assertResourceCalled('Execute', '/usr/sbin/ambari-metrics-admanager start',
+                              user = 'ams'
+                              )
+    self.assertNoMoreResources()
+
+  def assert_configure(self):
+
+    ams_admanager_directories = [
+      '/etc/ambari-metrics-anomaly-detection/conf',
+      '/var/log/ambari-metrics-anomaly-detection',
+      '/var/lib/ambari-metrics-anomaly-detection',
+      '/var/run/ambari-metrics-anomaly-detection'
+    ]
+
+    for ams_admanager_directory in ams_admanager_directories:
+      self.assertResourceCalled('Directory', ams_admanager_directory,
+                                owner = 'ams',
+                                group = 'hadoop',
+                                mode=0755,
+                                create_parents = True,
+                                recursive_ownership = True
+                                )
+
+    self.assertResourceCalled('File', '/etc/ambari-metrics-anomaly-detection/conf/ams-admanager-env.sh',
+                              owner = 'ams',
+                              group = 'hadoop',
+                              content = InlineTemplate(self.getConfig()['configurations']['ams-admanager-env']['content'])
+                              )
+
+    self.assertResourceCalled('File', '/etc/ambari-metrics-anomaly-detection/conf/config.yaml',
+                              owner = 'ams',
+                              group = 'hadoop',
+                              content = InlineTemplate(self.getConfig()['configurations']['ams-admanager-config']['content']),
+                              )
+
+    merged_ams_hbase_site = {}
+    merged_ams_hbase_site.update(self.getConfig()['configurations']['ams-hbase-site'])
+
+    self.assertResourceCalled('XmlConfig', 'hbase-site.xml',
+                              owner = 'ams',
+                              conf_dir = '/etc/ambari-metrics-anomaly-detection/conf',
+                              configurations = merged_ams_hbase_site,
+                              configuration_attributes = self.getConfig()['configuration_attributes']['ams-hbase-site']
+                              )
+    self.assertResourceCalled('File', '/etc/ambari-metrics-anomaly-detection/conf/log4j.properties',
+                              owner = 'ams',
+                              content = ''
+                              )
diff --git a/ambari-server/src/test/python/stacks/2.0.6/configs/default.json b/ambari-server/src/test/python/stacks/2.0.6/configs/default.json
index b81216f..438a8ed 100644
--- a/ambari-server/src/test/python/stacks/2.0.6/configs/default.json
+++ b/ambari-server/src/test/python/stacks/2.0.6/configs/default.json
@@ -1099,6 +1099,19 @@
         },
         "hadoop-metrics2.properties": {
             "content": "# Licensed to the Apache Software Foundation (ASF) under one or more\r\n# contributor license agreements. See the NOTICE file distributed with\r\n# this work for additional information regarding copyright ownership.\r\n# The ASF licenses this file to You under the Apache License, Version 2.0\r\n# (the \"License\"); you may not use this file except in compliance with\r\n# the License. You may obtain a copy of the License at\r\n#\r\n# http:\/\/www.apache.org\/licens [...]
+        },
+        "ams-admanager-env": {
+            "ams_ad_pid_dir": "/var/run/ambari-metrics-anomaly-detection",
+            "content": "\n"
+        },
+        "ams-admanager-config": {
+            "content": "",
+            "ambari.metrics.admanager.input.definition.directory": "",
+            "ambari.metrics.admanager.spark.operation.mode": "stand-alone",
+            "ambari.metrics.admanager.application.port": "9090"
+        },
+        "ams-admanager-log4j": {
+            "content": ""
         }
     },
     "configurationAttributes": {
diff --git a/ambari-server/src/test/python/stacks/2.0.6/configs/default_ams_embedded.json b/ambari-server/src/test/python/stacks/2.0.6/configs/default_ams_embedded.json
index cf6a7df..759b821 100644
--- a/ambari-server/src/test/python/stacks/2.0.6/configs/default_ams_embedded.json
+++ b/ambari-server/src/test/python/stacks/2.0.6/configs/default_ams_embedded.json
@@ -995,6 +995,19 @@
             "timeline.metrics.daily.aggregator.minute.interval": "86400",
             "timeline.metrics.cluster.aggregator.minute.interval": "120",
             "timeline.metrics.host.aggregator.hourly.interval": "3600"
+        },
+        "ams-admanager-env": {
+            "ams_ad_pid_dir": "/var/run/ambari-metrics-anomaly-detection",
+            "content": "\n"
+        },
+        "ams-admanager-config": {
+            "content": "",
+            "ambari.metrics.admanager.input.definition.directory": "",
+            "ambari.metrics.admanager.spark.operation.mode": "stand-alone",
+            "ambari.metrics.admanager.application.port": "9090"
+        },
+        "ams-admanager-log4j": {
+            "content": ""
         }
     },
     "configurationAttributes": {


[ambari] 18/39: AMBARI-22077 : Create maven module and package structure for the anomaly detection engine. (avijayan)

Posted by av...@apache.org.

commit d991f3c4d68fe1da7797c5593ee32595edd89fa4
Author: Aravindan Vijayan <av...@hortonworks.com>
AuthorDate: Wed Sep 27 10:43:48 2017 -0700

    AMBARI-22077 : Create maven module and package structure for the anomaly detection engine. (avijayan)
---
 .../pom.xml                                        |  62 ++++++++-
 .../alertservice/prototype/common/DataSeries.java  |   0
 .../alertservice/prototype/common/ResultSet.java   |   0
 .../prototype/common/StatisticUtils.java           |   0
 .../prototype/core}/AmbariServerInterface.java     |   2 +-
 .../prototype/core}/MetricKafkaProducer.java       |   2 +-
 .../prototype/core}/MetricSparkConsumer.java       |   4 +-
 .../prototype/core}/MetricsCollectorInterface.java |   2 +-
 .../prototype/core}/PointInTimeADSystem.java       |   2 +-
 .../prototype/core}/RFunctionInvoker.java          |   2 +-
 .../prototype/core}/TrendADSystem.java             |   2 +-
 .../alertservice/prototype/core}/TrendMetric.java  |   2 +-
 .../methods/AnomalyDetectionTechnique.java         |   0
 .../prototype/methods/MetricAnomaly.java           |   0
 .../prototype/methods/ema/EmaModel.java            |   0
 .../prototype/methods/ema/EmaModelLoader.java      |   0
 .../prototype/methods/ema/EmaTechnique.java        |   0
 .../prototype/methods/hsdev/HsdevTechnique.java    |   0
 .../prototype/methods/kstest/KSTechnique.java      |   2 +-
 .../utilities}/MetricAnomalyDetectorTestInput.java |   2 +-
 .../testing/utilities}/MetricAnomalyTester.java    |   5 +-
 .../utilities/TestMetricSeriesGenerator.java       |  92 +++++++++++++
 .../testing/utilities}/TestSeriesInputRequest.java |   2 +-
 .../seriesgenerator/AbstractMetricSeries.java      |   0
 .../seriesgenerator/DualBandMetricSeries.java      |   0
 .../MetricSeriesGeneratorFactory.java              |   0
 .../seriesgenerator/MonotonicMetricSeries.java     |   0
 .../seriesgenerator/NormalMetricSeries.java        |   0
 .../SteadyWithTurbulenceMetricSeries.java          |   0
 .../seriesgenerator/StepFunctionMetricSeries.java  |   0
 .../seriesgenerator/UniformMetricSeries.java       |   0
 .../src/main/resources/R-scripts/ema.R             |   0
 .../src/main/resources/R-scripts/hsdev.r           |   0
 .../src/main/resources/R-scripts/iforest.R         |   0
 .../src/main/resources/R-scripts/kstest.r          |   0
 .../src/main/resources/R-scripts/test.R            |   0
 .../src/main/resources/R-scripts/tukeys.r          |   0
 .../src/main/resources/input-config.properties     |   0
 .../metrics/spark/MetricAnomalyDetector.scala      | 127 +++++++++++++++++
 .../ambari/metrics/spark/SparkPhoenixReader.scala  |  16 +--
 .../alertservice/prototype/TestEmaTechnique.java   |   2 +-
 .../prototype/TestRFunctionInvoker.java            |   2 +-
 .../metrics/alertservice/prototype/TestTukeys.java |   3 +-
 .../seriesgenerator/MetricSeriesGeneratorTest.java |   7 -
 ambari-metrics/ambari-metrics-spark/pom.xml        | 151 ---------------------
 .../metrics/spark/MetricAnomalyDetector.scala      | 109 ---------------
 .../ambari-metrics-timelineservice/pom.xml         |   6 -
 .../metrics/TestMetricSeriesGenerator.java         |  87 ------------
 .../webapp/MetricAnomalyDetectorTestService.java   |  87 ------------
 .../webapp/TimelineWebServices.java                |   1 -
 ambari-metrics/pom.xml                             |   3 +-
 51 files changed, 301 insertions(+), 483 deletions(-)

diff --git a/ambari-metrics/ambari-metrics-alertservice/pom.xml b/ambari-metrics/ambari-metrics-anomaly-detector/pom.xml
similarity index 71%
rename from ambari-metrics/ambari-metrics-alertservice/pom.xml
rename to ambari-metrics/ambari-metrics-anomaly-detector/pom.xml
index 4db8a6a..e6e12f2 100644
--- a/ambari-metrics/ambari-metrics-alertservice/pom.xml
+++ b/ambari-metrics/ambari-metrics-anomaly-detector/pom.xml
@@ -26,8 +26,29 @@
         <version>2.0.0.0-SNAPSHOT</version>
     </parent>
     <modelVersion>4.0.0</modelVersion>
-    <artifactId>ambari-metrics-alertservice</artifactId>
+    <artifactId>ambari-metrics-anomaly-detector</artifactId>
     <version>2.0.0.0-SNAPSHOT</version>
+    <properties>
+        <scala.version>2.11.8</scala.version> <!-- must match scala.binary.version; the Spark 2.1.1 artifacts below are _2.11 builds -->
+        <scala.binary.version>2.11</scala.binary.version>
+    </properties>
+
+    <repositories>
+        <repository>
+            <id>scala-tools.org</id>
+            <name>Scala-Tools Maven2 Repository</name>
+            <url>http://scala-tools.org/repo-releases</url>
+        </repository>
+    </repositories>
+
+    <pluginRepositories>
+        <pluginRepository>
+            <id>scala-tools.org</id>
+            <name>Scala-Tools Maven2 Repository</name>
+            <url>http://scala-tools.org/repo-releases</url>
+        </pluginRepository>
+    </pluginRepositories>
+
     <build>
         <plugins>
             <plugin>
@@ -37,9 +58,27 @@
                     <target>1.8</target>
                 </configuration>
             </plugin>
+            <plugin>
+                <groupId>org.scala-tools</groupId>
+                <artifactId>maven-scala-plugin</artifactId>
+                <executions>
+                    <execution>
+                        <goals>
+                            <goal>compile</goal>
+                            <goal>testCompile</goal>
+                        </goals>
+                    </execution>
+                </executions>
+                <configuration>
+                    <scalaVersion>${scala.version}</scalaVersion>
+                    <args>
+                        <arg>-target:jvm-1.8</arg>
+                    </args>
+                </configuration>
+            </plugin>
         </plugins>
     </build>
-    <name>Ambari Metrics Alert Service</name>
+    <name>Ambari Metrics Anomaly Detector</name>
     <packaging>jar</packaging>
 
     <dependencies>
@@ -122,7 +161,7 @@
         <dependency>
             <groupId>org.apache.phoenix</groupId>
             <artifactId>phoenix-spark</artifactId>
-            <version>4.7.0-HBase-1.0</version>
+            <version>4.10.0-HBase-1.1</version>
         </dependency>
         <dependency>
             <groupId>org.apache.spark</groupId>
@@ -145,5 +184,22 @@
             <artifactId>httpclient</artifactId>
             <version>4.2.5</version>
         </dependency>
+        <dependency>
+            <groupId>org.scala-lang</groupId>
+            <artifactId>scala-library</artifactId>
+            <version>${scala.version}</version>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.spark</groupId>
+            <artifactId>spark-core_${scala.binary.version}</artifactId>
+            <version>2.1.1</version>
+            <scope>provided</scope>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.spark</groupId>
+            <artifactId>spark-mllib_${scala.binary.version}</artifactId>
+            <version>2.1.1</version>
+            <scope>provided</scope>
+        </dependency>
     </dependencies>
 </project>
\ No newline at end of file
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/common/DataSeries.java b/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/common/DataSeries.java
similarity index 100%
rename from ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/common/DataSeries.java
rename to ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/common/DataSeries.java
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/common/ResultSet.java b/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/common/ResultSet.java
similarity index 100%
rename from ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/common/ResultSet.java
rename to ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/common/ResultSet.java
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/common/StatisticUtils.java b/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/common/StatisticUtils.java
similarity index 100%
rename from ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/common/StatisticUtils.java
rename to ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/common/StatisticUtils.java
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/AmbariServerInterface.java b/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/core/AmbariServerInterface.java
similarity index 98%
rename from ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/AmbariServerInterface.java
rename to ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/core/AmbariServerInterface.java
index b98f04c..b6b1bf5 100644
--- a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/AmbariServerInterface.java
+++ b/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/core/AmbariServerInterface.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 
-package org.apache.ambari.metrics.alertservice.prototype;
+package org.apache.ambari.metrics.alertservice.prototype.core;
 
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/MetricKafkaProducer.java b/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/core/MetricKafkaProducer.java
similarity index 97%
rename from ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/MetricKafkaProducer.java
rename to ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/core/MetricKafkaProducer.java
index 8023d15..2287ee3 100644
--- a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/MetricKafkaProducer.java
+++ b/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/core/MetricKafkaProducer.java
@@ -15,7 +15,7 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.ambari.metrics.alertservice.prototype;
+package org.apache.ambari.metrics.alertservice.prototype.core;
 
 import com.fasterxml.jackson.databind.JsonNode;
 import com.fasterxml.jackson.databind.ObjectMapper;
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/MetricSparkConsumer.java b/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/core/MetricSparkConsumer.java
similarity index 98%
rename from ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/MetricSparkConsumer.java
rename to ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/core/MetricSparkConsumer.java
index 61b3dee..706c69f 100644
--- a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/MetricSparkConsumer.java
+++ b/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/core/MetricSparkConsumer.java
@@ -15,12 +15,11 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.ambari.metrics.alertservice.prototype;
+package org.apache.ambari.metrics.alertservice.prototype.core;
 
 import com.fasterxml.jackson.databind.ObjectMapper;
 import org.apache.ambari.metrics.alertservice.prototype.methods.MetricAnomaly;
 import org.apache.ambari.metrics.alertservice.prototype.methods.ema.EmaTechnique;
-import org.apache.commons.lang.StringUtils;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
@@ -40,7 +39,6 @@ import java.util.*;
 import java.io.FileInputStream;
 import java.io.IOException;
 import java.io.InputStream;
-import java.util.*;
 import java.util.regex.Matcher;
 import java.util.regex.Pattern;
 
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/MetricsCollectorInterface.java b/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/core/MetricsCollectorInterface.java
similarity index 99%
rename from ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/MetricsCollectorInterface.java
rename to ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/core/MetricsCollectorInterface.java
index dab4a0a..246565d 100644
--- a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/MetricsCollectorInterface.java
+++ b/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/core/MetricsCollectorInterface.java
@@ -15,7 +15,7 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.ambari.metrics.alertservice.prototype;
+package org.apache.ambari.metrics.alertservice.prototype.core;
 
 import org.apache.ambari.metrics.alertservice.prototype.methods.MetricAnomaly;
 import org.apache.commons.collections.CollectionUtils;
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/PointInTimeADSystem.java b/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/core/PointInTimeADSystem.java
similarity index 99%
rename from ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/PointInTimeADSystem.java
rename to ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/core/PointInTimeADSystem.java
index b3e7bd3..c579515 100644
--- a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/PointInTimeADSystem.java
+++ b/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/core/PointInTimeADSystem.java
@@ -15,7 +15,7 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.ambari.metrics.alertservice.prototype;
+package org.apache.ambari.metrics.alertservice.prototype.core;
 
 import org.apache.ambari.metrics.alertservice.prototype.common.DataSeries;
 import org.apache.ambari.metrics.alertservice.prototype.common.ResultSet;
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/RFunctionInvoker.java b/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/core/RFunctionInvoker.java
similarity index 99%
rename from ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/RFunctionInvoker.java
rename to ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/core/RFunctionInvoker.java
index 4fdf27d..4538f0b 100644
--- a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/RFunctionInvoker.java
+++ b/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/core/RFunctionInvoker.java
@@ -15,7 +15,7 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.ambari.metrics.alertservice.prototype;
+package org.apache.ambari.metrics.alertservice.prototype.core;
 
 
 import org.apache.ambari.metrics.alertservice.prototype.common.ResultSet;
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/TrendADSystem.java b/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/core/TrendADSystem.java
similarity index 99%
rename from ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/TrendADSystem.java
rename to ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/core/TrendADSystem.java
index df36a4a..2a205d1 100644
--- a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/TrendADSystem.java
+++ b/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/core/TrendADSystem.java
@@ -15,7 +15,7 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.ambari.metrics.alertservice.prototype;
+package org.apache.ambari.metrics.alertservice.prototype.core;
 
 import org.apache.ambari.metrics.alertservice.prototype.common.DataSeries;
 import org.apache.ambari.metrics.alertservice.prototype.methods.MetricAnomaly;
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/TrendMetric.java b/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/core/TrendMetric.java
similarity index 94%
rename from ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/TrendMetric.java
rename to ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/core/TrendMetric.java
index 3bead8b..0640142 100644
--- a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/TrendMetric.java
+++ b/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/core/TrendMetric.java
@@ -15,7 +15,7 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.ambari.metrics.alertservice.prototype;
+package org.apache.ambari.metrics.alertservice.prototype.core;
 
 import java.io.Serializable;
 
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/AnomalyDetectionTechnique.java b/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/AnomalyDetectionTechnique.java
similarity index 100%
rename from ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/AnomalyDetectionTechnique.java
rename to ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/AnomalyDetectionTechnique.java
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/MetricAnomaly.java b/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/MetricAnomaly.java
similarity index 100%
rename from ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/MetricAnomaly.java
rename to ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/MetricAnomaly.java
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/ema/EmaModel.java b/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/ema/EmaModel.java
similarity index 100%
rename from ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/ema/EmaModel.java
rename to ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/ema/EmaModel.java
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/ema/EmaModelLoader.java b/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/ema/EmaModelLoader.java
similarity index 100%
rename from ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/ema/EmaModelLoader.java
rename to ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/ema/EmaModelLoader.java
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/ema/EmaTechnique.java b/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/ema/EmaTechnique.java
similarity index 100%
rename from ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/ema/EmaTechnique.java
rename to ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/ema/EmaTechnique.java
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/hsdev/HsdevTechnique.java b/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/hsdev/HsdevTechnique.java
similarity index 100%
rename from ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/hsdev/HsdevTechnique.java
rename to ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/hsdev/HsdevTechnique.java
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/kstest/KSTechnique.java b/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/kstest/KSTechnique.java
similarity index 97%
rename from ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/kstest/KSTechnique.java
rename to ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/kstest/KSTechnique.java
index ff8dbcf..a9360d3 100644
--- a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/kstest/KSTechnique.java
+++ b/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/kstest/KSTechnique.java
@@ -18,7 +18,7 @@
 
 package org.apache.ambari.metrics.alertservice.prototype.methods.kstest;
 
-import org.apache.ambari.metrics.alertservice.prototype.RFunctionInvoker;
+import org.apache.ambari.metrics.alertservice.prototype.core.RFunctionInvoker;
 import org.apache.ambari.metrics.alertservice.prototype.common.DataSeries;
 import org.apache.ambari.metrics.alertservice.prototype.common.ResultSet;
 import org.apache.ambari.metrics.alertservice.prototype.methods.MetricAnomaly;
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/MetricAnomalyDetectorTestInput.java b/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/testing/utilities/MetricAnomalyDetectorTestInput.java
similarity index 97%
rename from ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/MetricAnomalyDetectorTestInput.java
rename to ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/testing/utilities/MetricAnomalyDetectorTestInput.java
index 490328a..268cd15 100644
--- a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/MetricAnomalyDetectorTestInput.java
+++ b/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/testing/utilities/MetricAnomalyDetectorTestInput.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 
-package org.apache.ambari.metrics.alertservice.prototype;
+package org.apache.ambari.metrics.alertservice.prototype.testing.utilities;
 
 import javax.xml.bind.annotation.XmlRootElement;
 import java.util.List;
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/MetricAnomalyTester.java b/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/testing/utilities/MetricAnomalyTester.java
similarity index 96%
rename from ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/MetricAnomalyTester.java
rename to ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/testing/utilities/MetricAnomalyTester.java
index bff8120..6485ebb 100644
--- a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/MetricAnomalyTester.java
+++ b/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/testing/utilities/MetricAnomalyTester.java
@@ -15,8 +15,11 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.ambari.metrics.alertservice.prototype;
 
+package org.apache.ambari.metrics.alertservice.prototype.testing.utilities;
+
+import org.apache.ambari.metrics.alertservice.prototype.core.MetricsCollectorInterface;
+import org.apache.ambari.metrics.alertservice.prototype.core.RFunctionInvoker;
 import org.apache.ambari.metrics.alertservice.prototype.common.DataSeries;
 import org.apache.ambari.metrics.alertservice.prototype.common.ResultSet;
 import org.apache.ambari.metrics.alertservice.seriesgenerator.MetricSeriesGeneratorFactory;
diff --git a/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/testing/utilities/TestMetricSeriesGenerator.java b/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/testing/utilities/TestMetricSeriesGenerator.java
new file mode 100644
index 0000000..b817f3e
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/testing/utilities/TestMetricSeriesGenerator.java
@@ -0,0 +1,92 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.metrics.alertservice.prototype.testing.utilities;
+
+/**
+ * Class originally used to send test metric series from AMS to Spark through Kafka; its implementation is retained below, commented out.
+ */
+
+public class TestMetricSeriesGenerator {
+  // originally: implements Runnable
+
+//  private Map<TestSeriesInputRequest, AbstractMetricSeries> configuredSeries = new HashMap<>();
+//  private static final Log LOG = LogFactory.getLog(TestMetricSeriesGenerator.class);
+//  private TimelineMetricStore metricStore;
+//  private String hostname;
+//
+//  public TestMetricSeriesGenerator(TimelineMetricStore metricStore) {
+//    this.metricStore = metricStore;
+//    try {
+//      this.hostname = InetAddress.getLocalHost().getHostName();
+//    } catch (UnknownHostException e) {
+//      e.printStackTrace();
+//    }
+//  }
+//
+//  public void addSeries(TestSeriesInputRequest inputRequest) {
+//    if (!configuredSeries.containsKey(inputRequest)) {
+//      AbstractMetricSeries metricSeries = MetricSeriesGeneratorFactory.generateSeries(inputRequest.getSeriesType(), inputRequest.getConfigs());
+//      configuredSeries.put(inputRequest, metricSeries);
+//      LOG.info("Added series " + inputRequest.getSeriesName());
+//    }
+//  }
+//
+//  public void removeSeries(String seriesName) {
+//    boolean isPresent = false;
+//    TestSeriesInputRequest tbd = null;
+//    for (TestSeriesInputRequest inputRequest : configuredSeries.keySet()) {
+//      if (inputRequest.getSeriesName().equals(seriesName)) {
+//        isPresent = true;
+//        tbd = inputRequest;
+//      }
+//    }
+//    if (isPresent) {
+//      LOG.info("Removing series " + seriesName);
+//      configuredSeries.remove(tbd);
+//    } else {
+//      LOG.info("Series not found : " + seriesName);
+//    }
+//  }
+//
+//  @Override
+//  public void run() {
+//    long currentTime = System.currentTimeMillis();
+//    TimelineMetrics timelineMetrics = new TimelineMetrics();
+//
+//    for (TestSeriesInputRequest input : configuredSeries.keySet()) {
+//      AbstractMetricSeries metricSeries = configuredSeries.get(input);
+//      TimelineMetric timelineMetric = new TimelineMetric();
+//      timelineMetric.setMetricName(input.getSeriesName());
+//      timelineMetric.setAppId("anomaly-engine-test-metric");
+//      timelineMetric.setInstanceId(null);
+//      timelineMetric.setStartTime(currentTime);
+//      timelineMetric.setHostName(hostname);
+//      TreeMap<Long, Double> metricValues = new TreeMap();
+//      metricValues.put(currentTime, metricSeries.nextValue());
+//      timelineMetric.setMetricValues(metricValues);
+//      timelineMetrics.addOrMergeTimelineMetric(timelineMetric);
+//      LOG.info("Emitting metric with appId = " + timelineMetric.getAppId());
+//    }
+//    try {
+//      LOG.info("Publishing test metrics for " + timelineMetrics.getMetrics().size() + " series.");
+//      metricStore.putMetrics(timelineMetrics);
+//    } catch (Exception e) {
+//      LOG.error(e);
+//    }
+//  }
+}
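For context, the moved seriesgenerator interfaces are exercised exactly as the commented-out body above shows. A minimal Java sketch of that flow, assuming generateSeries(type, configs) takes a series-type key and a String config map as the commented call suggests (the key "monotonic" and the config contents here are illustrative, not taken from the patch):

    import java.util.HashMap;
    import java.util.Map;
    import java.util.TreeMap;

    import org.apache.ambari.metrics.alertservice.seriesgenerator.AbstractMetricSeries;
    import org.apache.ambari.metrics.alertservice.seriesgenerator.MetricSeriesGeneratorFactory;
    import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;

    public class SeriesGeneratorSketch {
      public static TimelineMetric nextTestMetric() {
        // Illustrative series type and (empty) config map; real keys are factory-specific.
        Map<String, String> configs = new HashMap<>();
        AbstractMetricSeries series =
            MetricSeriesGeneratorFactory.generateSeries("monotonic", configs);

        // Wrap one generated value in a TimelineMetric, as the commented run() loop does.
        long now = System.currentTimeMillis();
        TimelineMetric metric = new TimelineMetric();
        metric.setMetricName("anomaly.engine.test.series");
        metric.setAppId("anomaly-engine-test-metric");
        metric.setStartTime(now);
        TreeMap<Long, Double> values = new TreeMap<>();
        values.put(now, series.nextValue());
        metric.setMetricValues(values);
        return metric;
      }
    }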
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/TestSeriesInputRequest.java b/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/testing/utilities/TestSeriesInputRequest.java
similarity index 97%
rename from ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/TestSeriesInputRequest.java
rename to ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/testing/utilities/TestSeriesInputRequest.java
index 7485f01..a424f8e 100644
--- a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/TestSeriesInputRequest.java
+++ b/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/testing/utilities/TestSeriesInputRequest.java
@@ -15,7 +15,7 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.ambari.metrics.alertservice.prototype;
+package org.apache.ambari.metrics.alertservice.prototype.testing.utilities;
 
 import org.apache.htrace.fasterxml.jackson.core.JsonProcessingException;
 import org.apache.htrace.fasterxml.jackson.databind.ObjectMapper;
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/AbstractMetricSeries.java b/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/AbstractMetricSeries.java
similarity index 100%
rename from ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/AbstractMetricSeries.java
rename to ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/AbstractMetricSeries.java
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/DualBandMetricSeries.java b/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/DualBandMetricSeries.java
similarity index 100%
rename from ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/DualBandMetricSeries.java
rename to ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/DualBandMetricSeries.java
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/MetricSeriesGeneratorFactory.java b/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/MetricSeriesGeneratorFactory.java
similarity index 100%
rename from ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/MetricSeriesGeneratorFactory.java
rename to ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/MetricSeriesGeneratorFactory.java
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/MonotonicMetricSeries.java b/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/MonotonicMetricSeries.java
similarity index 100%
rename from ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/MonotonicMetricSeries.java
rename to ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/MonotonicMetricSeries.java
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/NormalMetricSeries.java b/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/NormalMetricSeries.java
similarity index 100%
rename from ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/NormalMetricSeries.java
rename to ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/NormalMetricSeries.java
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/SteadyWithTurbulenceMetricSeries.java b/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/SteadyWithTurbulenceMetricSeries.java
similarity index 100%
rename from ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/SteadyWithTurbulenceMetricSeries.java
rename to ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/SteadyWithTurbulenceMetricSeries.java
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/StepFunctionMetricSeries.java b/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/StepFunctionMetricSeries.java
similarity index 100%
rename from ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/StepFunctionMetricSeries.java
rename to ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/StepFunctionMetricSeries.java
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/UniformMetricSeries.java b/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/UniformMetricSeries.java
similarity index 100%
rename from ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/UniformMetricSeries.java
rename to ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/UniformMetricSeries.java
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/ema.R b/ambari-metrics/ambari-metrics-anomaly-detector/src/main/resources/R-scripts/ema.R
similarity index 100%
rename from ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/ema.R
rename to ambari-metrics/ambari-metrics-anomaly-detector/src/main/resources/R-scripts/ema.R
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/hsdev.r b/ambari-metrics/ambari-metrics-anomaly-detector/src/main/resources/R-scripts/hsdev.r
similarity index 100%
rename from ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/hsdev.r
rename to ambari-metrics/ambari-metrics-anomaly-detector/src/main/resources/R-scripts/hsdev.r
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/iforest.R b/ambari-metrics/ambari-metrics-anomaly-detector/src/main/resources/R-scripts/iforest.R
similarity index 100%
rename from ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/iforest.R
rename to ambari-metrics/ambari-metrics-anomaly-detector/src/main/resources/R-scripts/iforest.R
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/kstest.r b/ambari-metrics/ambari-metrics-anomaly-detector/src/main/resources/R-scripts/kstest.r
similarity index 100%
rename from ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/kstest.r
rename to ambari-metrics/ambari-metrics-anomaly-detector/src/main/resources/R-scripts/kstest.r
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/test.R b/ambari-metrics/ambari-metrics-anomaly-detector/src/main/resources/R-scripts/test.R
similarity index 100%
rename from ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/test.R
rename to ambari-metrics/ambari-metrics-anomaly-detector/src/main/resources/R-scripts/test.R
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/tukeys.r b/ambari-metrics/ambari-metrics-anomaly-detector/src/main/resources/R-scripts/tukeys.r
similarity index 100%
rename from ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/tukeys.r
rename to ambari-metrics/ambari-metrics-anomaly-detector/src/main/resources/R-scripts/tukeys.r
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/resources/input-config.properties b/ambari-metrics/ambari-metrics-anomaly-detector/src/main/resources/input-config.properties
similarity index 100%
rename from ambari-metrics/ambari-metrics-alertservice/src/main/resources/input-config.properties
rename to ambari-metrics/ambari-metrics-anomaly-detector/src/main/resources/input-config.properties
diff --git a/ambari-metrics/ambari-metrics-anomaly-detector/src/main/scala/org/apache/ambari/metrics/spark/MetricAnomalyDetector.scala b/ambari-metrics/ambari-metrics-anomaly-detector/src/main/scala/org/apache/ambari/metrics/spark/MetricAnomalyDetector.scala
new file mode 100644
index 0000000..324058b
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-anomaly-detector/src/main/scala/org/apache/ambari/metrics/spark/MetricAnomalyDetector.scala
@@ -0,0 +1,127 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.metrics.spark
+
+
+import java.io.{FileInputStream, IOException, InputStream}
+import java.util
+import java.util.Properties
+import java.util.logging.LogManager
+
+import com.fasterxml.jackson.databind.ObjectMapper
+import org.apache.ambari.metrics.alertservice.prototype.core.MetricsCollectorInterface
+import org.apache.spark.SparkConf
+import org.apache.spark.streaming._
+import org.apache.spark.streaming.kafka._
+import org.apache.ambari.metrics.alertservice.prototype.methods.{AnomalyDetectionTechnique, MetricAnomaly}
+import org.apache.ambari.metrics.alertservice.prototype.methods.ema.{EmaModelLoader, EmaTechnique}
+import org.apache.hadoop.metrics2.sink.timeline.TimelineMetrics
+import org.apache.log4j.Logger
+import org.apache.spark.storage.StorageLevel
+
+object MetricAnomalyDetector {
+
+  /*
+    Load current EMA model
+    Filter step - Check if anomaly
+    Collect / Write to AMS / Print.
+   */
+
+//  var brokers = "avijayan-ams-1.openstacklocal:2181,avijayan-ams-2.openstacklocal:2181,avijayan-ams-3.openstacklocal:2181"
+//  var groupId = "ambari-metrics-group"
+//  var topicName = "ambari-metrics-topic"
+//  var numThreads = 1
+//  val anomalyDetectionModels: Array[AnomalyDetectionTechnique] = Array[AnomalyDetectionTechnique]()
+//
+//  def readProperties(propertiesFile: String): Properties = try {
+//    val properties = new Properties
+//    var inputStream = ClassLoader.getSystemResourceAsStream(propertiesFile)
+//    if (inputStream == null) inputStream = new FileInputStream(propertiesFile)
+//    properties.load(inputStream)
+//    properties
+//  } catch {
+//    case ioEx: IOException =>
+//      null
+//  }
+//
+//  def main(args: Array[String]): Unit = {
+//
+//    @transient
+//    lazy val log = org.apache.log4j.LogManager.getLogger("MetricAnomalyDetectorLogger")
+//
+//    if (args.length < 1) {
+//      System.err.println("Usage: MetricSparkConsumer <input-config-file>")
+//      System.exit(1)
+//    }
+//
+//    //Read properties
+//    val properties = readProperties(propertiesFile = args(0))
+//
+//    //Load EMA parameters - w, n
+//    val emaW = properties.getProperty("emaW").toDouble
+//    val emaN = properties.getProperty("emaN").toDouble
+//
+//    //collector info
+//    val collectorHost: String = properties.getProperty("collectorHost")
+//    val collectorPort: String = properties.getProperty("collectorPort")
+//    val collectorProtocol: String = properties.getProperty("collectorProtocol")
+//    val anomalyMetricPublisher = new MetricsCollectorInterface(collectorHost, collectorProtocol, collectorPort)
+//
+//    //Instantiate Kafka stream reader
+//    val sparkConf = new SparkConf().setAppName("AmbariMetricsAnomalyDetector")
+//    val streamingContext = new StreamingContext(sparkConf, Duration(10000))
+//
+//    val topicsSet = Set(topicName) // a bare topicName.toSet would yield Set[Char], not Set[String]
+//    val kafkaParams = Map[String, String]("metadata.broker.list" -> brokers)
+////    val stream = KafkaUtils.createDirectStream()
+//
+//    val kafkaStream = KafkaUtils.createStream(streamingContext, zkQuorum, groupId, Map(topicName -> numThreads), StorageLevel.MEMORY_AND_DISK_SER_2)
+//    kafkaStream.print()
+//
+//    var timelineMetricsStream = kafkaStream.map( message => {
+//      val mapper = new ObjectMapper
+//      val metrics = mapper.readValue(message._2, classOf[TimelineMetrics])
+//      metrics
+//    })
+//    timelineMetricsStream.print()
+//
+//    var appMetricStream = timelineMetricsStream.map( timelineMetrics => {
+//      (timelineMetrics.getMetrics.get(0).getAppId, timelineMetrics)
+//    })
+//    appMetricStream.print()
+//
+//    var filteredAppMetricStream = appMetricStream.filter( appMetricTuple => {
+//      appIds.contains(appMetricTuple._1)
+//    } )
+//    filteredAppMetricStream.print()
+//
+//    filteredAppMetricStream.foreachRDD( rdd => {
+//      rdd.foreach( appMetricTuple => {
+//        val timelineMetrics = appMetricTuple._2
+//        logger.info("Received Metric (1): " + timelineMetrics.getMetrics.get(0).getMetricName)
+//        log.info("Received Metric (2): " + timelineMetrics.getMetrics.get(0).getMetricName)
+//        for (timelineMetric <- timelineMetrics.getMetrics) {
+//          var anomalies = emaModel.test(timelineMetric)
+//          anomalyMetricPublisher.publish(anomalies)
+//        }
+//      })
+//    })
+//
+//    streamingContext.start()
+//    streamingContext.awaitTermination()
+//  }
+  }
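Read top to bottom, the commented-out job describes one per-batch flow: deserialize each Kafka payload into TimelineMetrics, filter by appId, score each metric with EMA, and publish anomalies back to the collector. A compact Java sketch of just the scoring step, assuming EmaTechnique.test(..) returns the List<MetricAnomaly> that MetricsCollectorInterface.publish(..) consumes (the commented Scala implies but does not show these types):

    import java.util.List;

    import com.fasterxml.jackson.databind.ObjectMapper;
    import org.apache.ambari.metrics.alertservice.prototype.core.MetricsCollectorInterface;
    import org.apache.ambari.metrics.alertservice.prototype.methods.MetricAnomaly;
    import org.apache.ambari.metrics.alertservice.prototype.methods.ema.EmaTechnique;
    import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
    import org.apache.hadoop.metrics2.sink.timeline.TimelineMetrics;

    public class EmaScoringSketch {
      public static void scoreAndPublish(String json, String collectorHost,
                                         String collectorProtocol, String collectorPort)
          throws Exception {
        // Deserialize the Kafka message body, mirroring the commented map() step.
        TimelineMetrics metrics = new ObjectMapper().readValue(json, TimelineMetrics.class);

        // w = 0.5, n = 3 mirror the values hard-coded in the deleted Spark job.
        EmaTechnique ema = new EmaTechnique(0.5, 3);
        MetricsCollectorInterface publisher =
            new MetricsCollectorInterface(collectorHost, collectorProtocol, collectorPort);

        for (TimelineMetric metric : metrics.getMetrics()) {
          List<MetricAnomaly> anomalies = ema.test(metric); // return type assumed
          publisher.publish(anomalies);
        }
      }
    }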
diff --git a/ambari-metrics/ambari-metrics-spark/src/main/scala/org/apache/ambari/metrics/spark/SparkPhoenixReader.scala b/ambari-metrics/ambari-metrics-anomaly-detector/src/main/scala/org/apache/ambari/metrics/spark/SparkPhoenixReader.scala
similarity index 84%
rename from ambari-metrics/ambari-metrics-spark/src/main/scala/org/apache/ambari/metrics/spark/SparkPhoenixReader.scala
rename to ambari-metrics/ambari-metrics-anomaly-detector/src/main/scala/org/apache/ambari/metrics/spark/SparkPhoenixReader.scala
index edd6366..ccded6b 100644
--- a/ambari-metrics/ambari-metrics-spark/src/main/scala/org/apache/ambari/metrics/spark/SparkPhoenixReader.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detector/src/main/scala/org/apache/ambari/metrics/spark/SparkPhoenixReader.scala
@@ -19,10 +19,8 @@ package org.apache.ambari.metrics.spark
 
 import org.apache.ambari.metrics.alertservice.prototype.methods.ema.EmaTechnique
 import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric
-import org.apache.spark.mllib.stat.Statistics
 import org.apache.spark.sql.SQLContext
 import org.apache.spark.{SparkConf, SparkContext}
-import org.apache.spark.rdd.RDD
 
 object SparkPhoenixReader {
 
@@ -71,17 +69,9 @@ object SparkPhoenixReader {
     timelineMetric.setHostName(hostname)
     timelineMetric.setMetricValues(metricValues)
 
-//    var emaModel = new EmaTechnique()
-//    emaModel.train(timelineMetric, weight, timessdev)
-//    emaModel.save(sc, modelDir)
-
-//    var metricData:Seq[Double] = Seq.empty
-//    result.collect().foreach(
-//      t => metricData :+ t.getDouble(4) / t.getInt(5)
-//    )
-//    val data: RDD[Double] = sc.parallelize(metricData)
-//    val myCDF = Map(0.1 -> 0.2, 0.15 -> 0.6, 0.2 -> 0.05, 0.3 -> 0.05, 0.25 -> 0.1)
-//    val testResult2 = Statistics.kolmogorovSmirnovTest(data, myCDF)
+    val emaModel = new EmaTechnique(weight, timessdev)
+    emaModel.test(timelineMetric)
+    emaModel.save(sc, modelDir)
 
   }
 
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/test/java/org/apache/ambari/metrics/alertservice/prototype/TestEmaTechnique.java b/ambari-metrics/ambari-metrics-anomaly-detector/src/test/java/org/apache/ambari/metrics/alertservice/prototype/TestEmaTechnique.java
similarity index 97%
rename from ambari-metrics/ambari-metrics-alertservice/src/test/java/org/apache/ambari/metrics/alertservice/prototype/TestEmaTechnique.java
rename to ambari-metrics/ambari-metrics-anomaly-detector/src/test/java/org/apache/ambari/metrics/alertservice/prototype/TestEmaTechnique.java
index d1e2b41..a0b06e6 100644
--- a/ambari-metrics/ambari-metrics-alertservice/src/test/java/org/apache/ambari/metrics/alertservice/prototype/TestEmaTechnique.java
+++ b/ambari-metrics/ambari-metrics-anomaly-detector/src/test/java/org/apache/ambari/metrics/alertservice/prototype/TestEmaTechnique.java
@@ -17,12 +17,12 @@
  */
 package org.apache.ambari.metrics.alertservice.prototype;
 
+import org.apache.ambari.metrics.alertservice.prototype.core.RFunctionInvoker;
 import org.apache.ambari.metrics.alertservice.prototype.methods.MetricAnomaly;
 import org.apache.ambari.metrics.alertservice.prototype.methods.ema.EmaTechnique;
 import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
 import org.junit.Assert;
 import org.junit.Assume;
-import org.junit.Before;
 import org.junit.BeforeClass;
 import org.junit.Test;
 
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/test/java/org/apache/ambari/metrics/alertservice/prototype/TestRFunctionInvoker.java b/ambari-metrics/ambari-metrics-anomaly-detector/src/test/java/org/apache/ambari/metrics/alertservice/prototype/TestRFunctionInvoker.java
similarity index 98%
rename from ambari-metrics/ambari-metrics-alertservice/src/test/java/org/apache/ambari/metrics/alertservice/prototype/TestRFunctionInvoker.java
rename to ambari-metrics/ambari-metrics-anomaly-detector/src/test/java/org/apache/ambari/metrics/alertservice/prototype/TestRFunctionInvoker.java
index 9a102a0..d98ef0c 100644
--- a/ambari-metrics/ambari-metrics-alertservice/src/test/java/org/apache/ambari/metrics/alertservice/prototype/TestRFunctionInvoker.java
+++ b/ambari-metrics/ambari-metrics-anomaly-detector/src/test/java/org/apache/ambari/metrics/alertservice/prototype/TestRFunctionInvoker.java
@@ -19,6 +19,7 @@ package org.apache.ambari.metrics.alertservice.prototype;
 
 import org.apache.ambari.metrics.alertservice.prototype.common.ResultSet;
 import org.apache.ambari.metrics.alertservice.prototype.common.DataSeries;
+import org.apache.ambari.metrics.alertservice.prototype.core.RFunctionInvoker;
 import org.apache.ambari.metrics.alertservice.seriesgenerator.UniformMetricSeries;
 import org.apache.commons.lang.ArrayUtils;
 import org.junit.Assert;
@@ -31,7 +32,6 @@ import java.net.URISyntaxException;
 import java.net.URL;
 import java.util.HashMap;
 import java.util.Map;
-import java.util.Random;
 
 public class TestRFunctionInvoker {
 
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/test/java/org/apache/ambari/metrics/alertservice/prototype/TestTukeys.java b/ambari-metrics/ambari-metrics-anomaly-detector/src/test/java/org/apache/ambari/metrics/alertservice/prototype/TestTukeys.java
similarity index 95%
rename from ambari-metrics/ambari-metrics-alertservice/src/test/java/org/apache/ambari/metrics/alertservice/prototype/TestTukeys.java
rename to ambari-metrics/ambari-metrics-anomaly-detector/src/test/java/org/apache/ambari/metrics/alertservice/prototype/TestTukeys.java
index ef0125f..86590bd 100644
--- a/ambari-metrics/ambari-metrics-alertservice/src/test/java/org/apache/ambari/metrics/alertservice/prototype/TestTukeys.java
+++ b/ambari-metrics/ambari-metrics-anomaly-detector/src/test/java/org/apache/ambari/metrics/alertservice/prototype/TestTukeys.java
@@ -17,6 +17,8 @@
  */
 package org.apache.ambari.metrics.alertservice.prototype;
 
+import org.apache.ambari.metrics.alertservice.prototype.core.MetricsCollectorInterface;
+import org.apache.ambari.metrics.alertservice.prototype.core.RFunctionInvoker;
 import org.apache.ambari.metrics.alertservice.prototype.methods.MetricAnomaly;
 import org.apache.ambari.metrics.alertservice.prototype.methods.ema.EmaTechnique;
 import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
@@ -26,7 +28,6 @@ import org.junit.BeforeClass;
 import org.junit.Test;
 
 import java.io.File;
-import java.net.InetAddress;
 import java.net.URISyntaxException;
 import java.net.URL;
 import java.net.UnknownHostException;
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/test/java/org/apache/ambari/metrics/alertservice/seriesgenerator/MetricSeriesGeneratorTest.java b/ambari-metrics/ambari-metrics-anomaly-detector/src/test/java/org/apache/ambari/metrics/alertservice/seriesgenerator/MetricSeriesGeneratorTest.java
similarity index 93%
rename from ambari-metrics/ambari-metrics-alertservice/src/test/java/org/apache/ambari/metrics/alertservice/seriesgenerator/MetricSeriesGeneratorTest.java
rename to ambari-metrics/ambari-metrics-anomaly-detector/src/test/java/org/apache/ambari/metrics/alertservice/seriesgenerator/MetricSeriesGeneratorTest.java
index 575ea8b..fe7dba9 100644
--- a/ambari-metrics/ambari-metrics-alertservice/src/test/java/org/apache/ambari/metrics/alertservice/seriesgenerator/MetricSeriesGeneratorTest.java
+++ b/ambari-metrics/ambari-metrics-anomaly-detector/src/test/java/org/apache/ambari/metrics/alertservice/seriesgenerator/MetricSeriesGeneratorTest.java
@@ -17,16 +17,9 @@
  */
 package org.apache.ambari.metrics.alertservice.seriesgenerator;
 
-import com.fasterxml.jackson.core.JsonProcessingException;
-import com.fasterxml.jackson.databind.ObjectMapper;
-import org.apache.ambari.metrics.alertservice.prototype.MetricAnomalyDetectorTestInput;
 import org.junit.Assert;
 import org.junit.Test;
 
-import java.util.Arrays;
-import java.util.HashMap;
-import java.util.Map;
-
 public class MetricSeriesGeneratorTest {
 
   @Test
diff --git a/ambari-metrics/ambari-metrics-spark/pom.xml b/ambari-metrics/ambari-metrics-spark/pom.xml
deleted file mode 100644
index 4732cb5..0000000
--- a/ambari-metrics/ambari-metrics-spark/pom.xml
+++ /dev/null
@@ -1,151 +0,0 @@
-<!--
-  ~ Licensed to the Apache Software Foundation (ASF) under one
-  ~ or more contributor license agreements.  See the NOTICE file
-  ~ distributed with this work for additional information
-  ~ regarding copyright ownership.  The ASF licenses this file
-  ~ to you under the Apache License, Version 2.0 (the
-  ~ "License"); you may not use this file except in compliance
-  ~ with the License.  You may obtain a copy of the License at
-  ~
-  ~     http://www.apache.org/licenses/LICENSE-2.0
-  ~
-  ~ Unless required by applicable law or agreed to in writing, software
-  ~ distributed under the License is distributed on an "AS IS" BASIS,
-  ~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  ~ See the License for the specific language governing permissions and
-  ~ limitations under the License.
-  -->
-
-<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
-    <parent>
-        <artifactId>ambari-metrics</artifactId>
-        <groupId>org.apache.ambari</groupId>
-        <version>2.0.0.0-SNAPSHOT</version>
-    </parent>
-    <modelVersion>4.0.0</modelVersion>
-    <artifactId>ambari-metrics-spark</artifactId>
-    <version>2.0.0.0-SNAPSHOT</version>
-    <properties>
-        <scala.version>2.10.4</scala.version>
-    </properties>
-
-    <repositories>
-        <repository>
-            <id>scala-tools.org</id>
-            <name>Scala-Tools Maven2 Repository</name>
-            <url>http://scala-tools.org/repo-releases</url>
-        </repository>
-    </repositories>
-
-    <pluginRepositories>
-        <pluginRepository>
-            <id>scala-tools.org</id>
-            <name>Scala-Tools Maven2 Repository</name>
-            <url>http://scala-tools.org/repo-releases</url>
-        </pluginRepository>
-    </pluginRepositories>
-
-    <dependencies>
-        <dependency>
-            <groupId>org.scala-lang</groupId>
-            <artifactId>scala-library</artifactId>
-            <version>${scala.version}</version>
-        </dependency>
-        <dependency>
-            <groupId>junit</groupId>
-            <artifactId>junit</artifactId>
-            <version>4.4</version>
-            <scope>test</scope>
-        </dependency>
-        <dependency>
-            <groupId>org.specs</groupId>
-            <artifactId>specs</artifactId>
-            <version>1.2.5</version>
-            <scope>test</scope>
-        </dependency>
-        <dependency>
-            <groupId>org.apache.spark</groupId>
-            <artifactId>spark-core_2.10</artifactId>
-            <version>1.6.3</version>
-        </dependency>
-        <dependency>
-            <groupId>org.apache.spark</groupId>
-            <artifactId>spark-sql_2.10</artifactId>
-            <version>1.6.3</version>
-        </dependency>
-        <dependency>
-            <groupId>org.apache.phoenix</groupId>
-            <artifactId>phoenix-spark</artifactId>
-            <version>4.7.0-HBase-1.1</version>
-            <scope>provided</scope>
-        </dependency>
-        <dependency>
-            <groupId>org.apache.ambari</groupId>
-            <artifactId>ambari-metrics-alertservice</artifactId>
-            <version>2.0.0.0-SNAPSHOT</version>
-        </dependency>
-        <dependency>
-            <groupId>org.apache.logging.log4j</groupId>
-            <artifactId>log4j-api-scala_2.10</artifactId>
-            <version>2.8.2</version>
-        </dependency>
-        <dependency>
-            <groupId>org.apache.spark</groupId>
-            <artifactId>spark-mllib_2.10</artifactId>
-            <version>2.1.1</version>
-        </dependency>
-    </dependencies>
-
-    <build>
-        <sourceDirectory>src/main/scala</sourceDirectory>
-        <plugins>
-            <plugin>
-                <groupId>org.scala-tools</groupId>
-                <artifactId>maven-scala-plugin</artifactId>
-                <executions>
-                    <execution>
-                        <goals>
-                            <goal>compile</goal>
-                            <goal>testCompile</goal>
-                        </goals>
-                    </execution>
-                </executions>
-                <configuration>
-                    <scalaVersion>${scala.version}</scalaVersion>
-                    <args>
-                        <arg>-target:jvm-1.5</arg>
-                    </args>
-                </configuration>
-            </plugin>
-            <plugin>
-                <groupId>org.apache.maven.plugins</groupId>
-                <artifactId>maven-eclipse-plugin</artifactId>
-                <configuration>
-                    <downloadSources>true</downloadSources>
-                    <buildcommands>
-                        <buildcommand>ch.epfl.lamp.sdt.core.scalabuilder</buildcommand>
-                    </buildcommands>
-                    <additionalProjectnatures>
-                        <projectnature>ch.epfl.lamp.sdt.core.scalanature</projectnature>
-                    </additionalProjectnatures>
-                    <classpathContainers>
-                        <classpathContainer>org.eclipse.jdt.launching.JRE_CONTAINER</classpathContainer>
-                        <classpathContainer>ch.epfl.lamp.sdt.launching.SCALA_CONTAINER</classpathContainer>
-                    </classpathContainers>
-                </configuration>
-            </plugin>
-        </plugins>
-    </build>
-    <reporting>
-        <plugins>
-            <plugin>
-                <groupId>org.scala-tools</groupId>
-                <artifactId>maven-scala-plugin</artifactId>
-                <configuration>
-                    <scalaVersion>${scala.version}</scalaVersion>
-                </configuration>
-            </plugin>
-        </plugins>
-    </reporting>
-</project>
diff --git a/ambari-metrics/ambari-metrics-spark/src/main/scala/org/apache/ambari/metrics/spark/MetricAnomalyDetector.scala b/ambari-metrics/ambari-metrics-spark/src/main/scala/org/apache/ambari/metrics/spark/MetricAnomalyDetector.scala
deleted file mode 100644
index e51a47f..0000000
--- a/ambari-metrics/ambari-metrics-spark/src/main/scala/org/apache/ambari/metrics/spark/MetricAnomalyDetector.scala
+++ /dev/null
@@ -1,109 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.ambari.metrics.spark
-
-
-import java.util
-import java.util.logging.LogManager
-
-import com.fasterxml.jackson.databind.ObjectMapper
-import org.apache.ambari.metrics.alertservice.prototype.MetricsCollectorInterface
-import org.apache.spark.SparkConf
-import org.apache.spark.streaming._
-import org.apache.spark.streaming.kafka._
-import org.apache.ambari.metrics.alertservice.prototype.methods.{AnomalyDetectionTechnique, MetricAnomaly}
-import org.apache.ambari.metrics.alertservice.prototype.methods.ema.{EmaModelLoader, EmaTechnique}
-import org.apache.hadoop.metrics2.sink.timeline.TimelineMetrics
-import org.apache.log4j.Logger
-import org.apache.spark.storage.StorageLevel
-
-import scala.collection.JavaConversions._
-import org.apache.logging.log4j.scala.Logging
-
-object MetricAnomalyDetector extends Logging {
-
-
-  var zkQuorum = "avijayan-ams-1.openstacklocal:2181,avijayan-ams-2.openstacklocal:2181,avijayan-ams-3.openstacklocal:2181"
-  var groupId = "ambari-metrics-group"
-  var topicName = "ambari-metrics-topic"
-  var numThreads = 1
-  val anomalyDetectionModels: Array[AnomalyDetectionTechnique] = Array[AnomalyDetectionTechnique]()
-
-  def main(args: Array[String]): Unit = {
-
-    @transient
-    lazy val log: Logger = org.apache.log4j.LogManager.getLogger("MetricAnomalyDetectorLogger")
-
-    if (args.length < 5) {
-      System.err.println("Usage: MetricAnomalyDetector <method1,method2> <appid1,appid2> <collector_host> <port> <protocol>")
-      System.exit(1)
-    }
-
-    for (method <- args(0).split(",")) {
-      if (method == "ema") anomalyDetectionModels :+ new EmaTechnique(0.5, 3)
-    }
-
-    val appIds = util.Arrays.asList(args(1).split(","))
-
-    val collectorHost = args(2)
-    val collectorPort = args(3)
-    val collectorProtocol = args(4)
-
-    val anomalyMetricPublisher: MetricsCollectorInterface = new MetricsCollectorInterface(collectorHost, collectorProtocol, collectorPort)
-
-    val sparkConf = new SparkConf().setAppName("AmbariMetricsAnomalyDetector")
-
-    val streamingContext = new StreamingContext(sparkConf, Duration(10000))
-
-    val emaModel = new EmaModelLoader().load(streamingContext.sparkContext, "/tmp/model/ema")
-
-    val kafkaStream = KafkaUtils.createStream(streamingContext, zkQuorum, groupId, Map(topicName -> numThreads), StorageLevel.MEMORY_AND_DISK_SER_2)
-    kafkaStream.print()
-
-    var timelineMetricsStream = kafkaStream.map( message => {
-      val mapper = new ObjectMapper
-      val metrics = mapper.readValue(message._2, classOf[TimelineMetrics])
-      metrics
-    })
-    timelineMetricsStream.print()
-
-    var appMetricStream = timelineMetricsStream.map( timelineMetrics => {
-      (timelineMetrics.getMetrics.get(0).getAppId, timelineMetrics)
-    })
-    appMetricStream.print()
-
-    var filteredAppMetricStream = appMetricStream.filter( appMetricTuple => {
-      appIds.contains(appMetricTuple._1)
-    } )
-    filteredAppMetricStream.print()
-
-    filteredAppMetricStream.foreachRDD( rdd => {
-      rdd.foreach( appMetricTuple => {
-        val timelineMetrics = appMetricTuple._2
-        logger.info("Received Metric (1): " + timelineMetrics.getMetrics.get(0).getMetricName)
-        log.info("Received Metric (2): " + timelineMetrics.getMetrics.get(0).getMetricName)
-        for (timelineMetric <- timelineMetrics.getMetrics) {
-          var anomalies = emaModel.test(timelineMetric)
-          anomalyMetricPublisher.publish(anomalies)
-        }
-      })
-    })
-
-    streamingContext.start()
-    streamingContext.awaitTermination()
-  }
-  }
diff --git a/ambari-metrics/ambari-metrics-timelineservice/pom.xml b/ambari-metrics/ambari-metrics-timelineservice/pom.xml
index a8ac1da..3d119f9 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/pom.xml
+++ b/ambari-metrics/ambari-metrics-timelineservice/pom.xml
@@ -346,12 +346,6 @@
     </dependency>
 
     <dependency>
-      <groupId>org.apache.ambari</groupId>
-      <artifactId>ambari-metrics-alertservice</artifactId>
-      <version>${project.version}</version>
-    </dependency>
-
-    <dependency>
       <groupId>javax.servlet</groupId>
       <artifactId>servlet-api</artifactId>
       <version>2.5</version>
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/TestMetricSeriesGenerator.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/TestMetricSeriesGenerator.java
deleted file mode 100644
index 2420ef3..0000000
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/TestMetricSeriesGenerator.java
+++ /dev/null
@@ -1,87 +0,0 @@
-package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics;
-
-import org.apache.ambari.metrics.alertservice.prototype.TestSeriesInputRequest;
-import org.apache.ambari.metrics.alertservice.seriesgenerator.AbstractMetricSeries;
-import org.apache.ambari.metrics.alertservice.seriesgenerator.MetricSeriesGeneratorFactory;
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
-import org.apache.hadoop.metrics2.sink.timeline.TimelineMetrics;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricStore;
-
-import java.io.IOException;
-import java.net.InetAddress;
-import java.net.UnknownHostException;
-import java.sql.SQLException;
-import java.util.HashMap;
-import java.util.Map;
-import java.util.TreeMap;
-
-public class TestMetricSeriesGenerator implements Runnable {
-
-  private Map<TestSeriesInputRequest, AbstractMetricSeries> configuredSeries = new HashMap<>();
-  private static final Log LOG = LogFactory.getLog(TestMetricSeriesGenerator.class);
-  private TimelineMetricStore metricStore;
-  private String hostname;
-
-  public TestMetricSeriesGenerator(TimelineMetricStore metricStore) {
-    this.metricStore = metricStore;
-    try {
-      this.hostname = InetAddress.getLocalHost().getHostName();
-    } catch (UnknownHostException e) {
-      e.printStackTrace();
-    }
-  }
-
-  public void addSeries(TestSeriesInputRequest inputRequest) {
-    if (!configuredSeries.containsKey(inputRequest)) {
-      AbstractMetricSeries metricSeries = MetricSeriesGeneratorFactory.generateSeries(inputRequest.getSeriesType(), inputRequest.getConfigs());
-      configuredSeries.put(inputRequest, metricSeries);
-      LOG.info("Added series " + inputRequest.getSeriesName());
-    }
-  }
-
-  public void removeSeries(String seriesName) {
-    boolean isPresent = false;
-    TestSeriesInputRequest tbd = null;
-    for (TestSeriesInputRequest inputRequest : configuredSeries.keySet()) {
-      if (inputRequest.getSeriesName().equals(seriesName)) {
-        isPresent = true;
-        tbd = inputRequest;
-      }
-    }
-    if (isPresent) {
-      LOG.info("Removing series " + seriesName);
-      configuredSeries.remove(tbd);
-    } else {
-      LOG.info("Series not found : " + seriesName);
-    }
-  }
-
-  @Override
-  public void run() {
-    long currentTime = System.currentTimeMillis();
-    TimelineMetrics timelineMetrics = new TimelineMetrics();
-
-    for (TestSeriesInputRequest input : configuredSeries.keySet()) {
-      AbstractMetricSeries metricSeries = configuredSeries.get(input);
-      TimelineMetric timelineMetric = new TimelineMetric();
-      timelineMetric.setMetricName(input.getSeriesName());
-      timelineMetric.setAppId("anomaly-engine-test-metric");
-      timelineMetric.setInstanceId(null);
-      timelineMetric.setStartTime(currentTime);
-      timelineMetric.setHostName(hostname);
-      TreeMap<Long, Double> metricValues = new TreeMap();
-      metricValues.put(currentTime, metricSeries.nextValue());
-      timelineMetric.setMetricValues(metricValues);
-      timelineMetrics.addOrMergeTimelineMetric(timelineMetric);
-      LOG.info("Emitting metric with appId = " + timelineMetric.getAppId());
-    }
-    try {
-      LOG.info("Publishing test metrics for " + timelineMetrics.getMetrics().size() + " series.");
-      metricStore.putMetrics(timelineMetrics);
-    } catch (Exception e) {
-      LOG.error(e);
-    }
-  }
-}
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/MetricAnomalyDetectorTestService.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/MetricAnomalyDetectorTestService.java
deleted file mode 100644
index 6f7b14a..0000000
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/MetricAnomalyDetectorTestService.java
+++ /dev/null
@@ -1,87 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.yarn.server.applicationhistoryservice.webapp;
-
-import com.google.inject.Inject;
-import com.google.inject.Singleton;
-import org.apache.ambari.metrics.alertservice.prototype.MetricAnomalyDetectorTestInput;
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-import org.apache.hadoop.metrics2.sink.timeline.TimelineMetrics;
-import org.apache.hadoop.yarn.api.records.timeline.TimelinePutResponse;
-
-import javax.servlet.http.HttpServletRequest;
-import javax.servlet.http.HttpServletResponse;
-import javax.ws.rs.Consumes;
-import javax.ws.rs.GET;
-import javax.ws.rs.POST;
-import javax.ws.rs.Path;
-import javax.ws.rs.Produces;
-import javax.ws.rs.QueryParam;
-import javax.ws.rs.WebApplicationException;
-import javax.ws.rs.core.Context;
-import javax.ws.rs.core.MediaType;
-import javax.ws.rs.core.Response;
-
-@Singleton
-@Path("/ws/v1/metrictestservice")
-public class MetricAnomalyDetectorTestService {
-
-  private static final Log LOG = LogFactory.getLog(MetricAnomalyDetectorTestService.class);
-
-  @Inject
-  public MetricAnomalyDetectorTestService() {
-  }
-
-  private void init(HttpServletResponse response) {
-    response.setContentType(null);
-  }
-
-  @Path("/anomaly")
-  @POST
-  @Consumes({ MediaType.APPLICATION_JSON /* , MediaType.APPLICATION_XML */})
-  public TimelinePutResponse postAnomalyDetectionRequest(
-    @Context HttpServletRequest req,
-    @Context HttpServletResponse res,
-    MetricAnomalyDetectorTestInput input) {
-
-    init(res);
-    if (input == null) {
-      return new TimelinePutResponse();
-    }
-
-    try {
-      return null;
-    } catch (Exception e) {
-      throw new WebApplicationException(e, Response.Status.INTERNAL_SERVER_ERROR);
-    }
-  }
-
-  @GET
-  @Path("/dataseries")
-  @Produces({MediaType.APPLICATION_JSON})
-  public TimelineMetrics getTestDataSeries(
-    @Context HttpServletRequest req,
-    @Context HttpServletResponse res,
-    @QueryParam("type") String seriesType,
-    @QueryParam("configs") String config
-  ) {
-    return null;
-  }
-}
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/TimelineWebServices.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/TimelineWebServices.java
index 20aba23..5d9bb35 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/TimelineWebServices.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/TimelineWebServices.java
@@ -36,7 +36,6 @@ import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
 import org.apache.hadoop.metrics2.sink.timeline.TimelineMetrics;
 import org.apache.hadoop.yarn.api.records.timeline.TimelinePutResponse;
 import org.apache.hadoop.metrics2.sink.timeline.Precision;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.TestMetricSeriesGenerator;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricStore;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.discovery.TimelineMetricMetadataKey;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.timeline.EntityIdentifier;
diff --git a/ambari-metrics/pom.xml b/ambari-metrics/pom.xml
index 0d4767d..386be91 100644
--- a/ambari-metrics/pom.xml
+++ b/ambari-metrics/pom.xml
@@ -29,13 +29,12 @@
     <module>ambari-metrics-kafka-sink</module>
     <module>ambari-metrics-storm-sink</module>
     <module>ambari-metrics-storm-sink-legacy</module>
-    <module>ambari-metrics-alertservice</module>
     <module>ambari-metrics-timelineservice</module>
     <module>ambari-metrics-host-monitoring</module>
     <module>ambari-metrics-grafana</module>
     <module>ambari-metrics-assembly</module>
     <module>ambari-metrics-host-aggregator</module>
-    <module>ambari-metrics-spark</module>
+    <module>ambari-metrics-anomaly-detector</module>
   </modules>
   <properties>
     <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>


[ambari] 12/39: AMBARI-21458 Provide ability to shard Cluster second aggregation across appId. (dsen)

Posted by av...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

avijayan pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/ambari.git

commit 54e4a978240100cd760b993eb9569a1e9b314ab8
Author: Dmytro Sen <ds...@apache.org>
AuthorDate: Thu Aug 31 13:00:14 2017 +0300

    AMBARI-21458 Provide ability to shard Cluster second aggregation across appId. (dsen)
---
 .../availability/MetricCollectorHAHelper.java      |   8 +-
 .../ambari-metrics-timelineservice/pom.xml         |  10 +
 .../timeline/HBaseTimelineMetricsService.java      |  32 ++-
 .../metrics/timeline/PhoenixHBaseAccessor.java     |   2 +-
 .../timeline/TimelineMetricConfiguration.java      |  76 +++++
 ...ta.java => TimelineMetricDistributedCache.java} |  47 +---
 .../timeline/TimelineMetricsIgniteCache.java       | 305 +++++++++++++++++++++
 .../aggregators/AbstractTimelineAggregator.java    |  11 +-
 .../timeline/aggregators/AggregatorUtils.java      | 192 +++++++++++++
 .../TimelineMetricAggregatorFactory.java           |  30 +-
 .../aggregators/TimelineMetricAppAggregator.java   |  15 +-
 .../TimelineMetricClusterAggregatorSecond.java     | 231 ++--------------
 ...tricClusterAggregatorSecondWithCacheSource.java | 132 +++++++++
 .../availability/MetricCollectorHAController.java  |  19 +-
 .../discovery/TimelineMetricHostMetadata.java      |  19 +-
 .../discovery/TimelineMetricMetadataManager.java   |  20 +-
 .../discovery/TimelineMetricMetadataSync.java      |   2 +-
 .../timeline/uuid/HashBasedUuidGenStrategy.java    |   4 +
 .../metrics/timeline/ITPhoenixHBaseAccessor.java   |   4 +-
 .../timeline/TimelineMetricsIgniteCacheTest.java   | 296 ++++++++++++++++++++
 .../AbstractTimelineAggregatorTest.java            |  12 +-
 .../timeline/aggregators/ITClusterAggregator.java  |  10 +-
 .../TimelineMetricClusterAggregatorSecondTest.java |  32 +--
 ...ClusterAggregatorSecondWithCacheSourceTest.java | 178 ++++++++++++
 .../MetricCollectorHAControllerTest.java           |   1 +
 .../timeline/discovery/TestMetadataManager.java    |   8 +-
 ambari-metrics/pom.xml                             |   4 +-
 27 files changed, 1364 insertions(+), 336 deletions(-)

diff --git a/ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/availability/MetricCollectorHAHelper.java b/ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/availability/MetricCollectorHAHelper.java
index c6f6beb..3071cbc 100644
--- a/ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/availability/MetricCollectorHAHelper.java
+++ b/ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/availability/MetricCollectorHAHelper.java
@@ -38,7 +38,7 @@ import java.util.concurrent.Callable;
  * does not add a watcher on the znode.
  */
 public class MetricCollectorHAHelper {
-  private final String zookeeperQuorum;
+  private final String zookeeperConnectionURL;
   private final int tryCount;
   private final int sleepMsBetweenRetries;
 
@@ -52,8 +52,8 @@ public class MetricCollectorHAHelper {
 
   private static final Log LOG = LogFactory.getLog(MetricCollectorHAHelper.class);
 
-  public MetricCollectorHAHelper(String zookeeperQuorum, int tryCount, int sleepMsBetweenRetries) {
-    this.zookeeperQuorum = zookeeperQuorum;
+  public MetricCollectorHAHelper(String zookeeperConnectionURL, int tryCount, int sleepMsBetweenRetries) {
+    this.zookeeperConnectionURL = zookeeperConnectionURL;
     this.tryCount = tryCount;
     this.sleepMsBetweenRetries = sleepMsBetweenRetries;
   }
@@ -66,7 +66,7 @@ public class MetricCollectorHAHelper {
     Set<String> collectors = new HashSet<>();
 
     RetryPolicy retryPolicy = new BoundedExponentialBackoffRetry(sleepMsBetweenRetries, 10*sleepMsBetweenRetries, tryCount);
-    final CuratorZookeeperClient client = new CuratorZookeeperClient(zookeeperQuorum,
+    final CuratorZookeeperClient client = new CuratorZookeeperClient(zookeeperConnectionURL,
       SESSION_TIMEOUT, CONNECTION_TIMEOUT, null, retryPolicy);
 
     List<String> liveInstances = null;
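
Note on the rename: a bare ZooKeeper quorum string may omit ports, while the
connection URL handed to CuratorZookeeperClient must carry host:port pairs. A
minimal sketch of constructing the helper with such a URL (hostnames are
hypothetical; the retry count and sleep values mirror the call site added
later in this commit):

    // Hypothetical hosts for illustration; each entry includes its port.
    String zkConnectionURL = "zk1.example.com:2181,zk2.example.com:2181";

    // 5 retries, 200 ms sleep between retries.
    MetricCollectorHAHelper helper =
        new MetricCollectorHAHelper(zkConnectionURL, 5, 200);
    Collection<String> liveCollectors = helper.findLiveCollectorHostsFromZNode();
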
diff --git a/ambari-metrics/ambari-metrics-timelineservice/pom.xml b/ambari-metrics/ambari-metrics-timelineservice/pom.xml
index f3e0041..d306ad3 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/pom.xml
+++ b/ambari-metrics/ambari-metrics-timelineservice/pom.xml
@@ -703,6 +703,16 @@
       <version>1.0.0.0-SNAPSHOT</version>
       <scope>test</scope>
     </dependency>
+    <dependency>
+      <groupId>org.apache.ignite</groupId>
+      <artifactId>ignite-core</artifactId>
+      <version>2.1.0</version>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.ignite</groupId>
+      <artifactId>ignite-log4j</artifactId>
+      <version>2.1.0</version>
+    </dependency>
   </dependencies>
 
   <profiles>
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/HBaseTimelineMetricsService.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/HBaseTimelineMetricsService.java
index 4318fd3..110b094 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/HBaseTimelineMetricsService.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/HBaseTimelineMetricsService.java
@@ -64,13 +64,13 @@ import java.util.Iterator;
 import java.util.List;
 import java.util.Map;
 import java.util.Set;
+import java.util.concurrent.ConcurrentHashMap;
 import java.util.concurrent.Executors;
 import java.util.concurrent.ScheduledExecutorService;
 import java.util.concurrent.ThreadFactory;
 import java.util.concurrent.TimeUnit;
 
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.DEFAULT_TOPN_HOSTS_LIMIT;
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.TIMELINE_METRICS_HOST_INMEMORY_AGGREGATION;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.USE_GROUPBY_AGGREGATOR_QUERIES;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.availability.AggregationTaskRunner.ACTUAL_AGGREGATOR_NAMES;
 
@@ -78,6 +78,7 @@ public class HBaseTimelineMetricsService extends AbstractService implements Time
 
   static final Log LOG = LogFactory.getLog(HBaseTimelineMetricsService.class);
   private final TimelineMetricConfiguration configuration;
+  private TimelineMetricDistributedCache cache;
   private PhoenixHBaseAccessor hBaseAccessor;
   private static volatile boolean isInitialized = false;
   private final ScheduledExecutorService watchdogExecutorService = Executors.newSingleThreadScheduledExecutor();
@@ -103,6 +104,12 @@ public class HBaseTimelineMetricsService extends AbstractService implements Time
     initializeSubsystem();
   }
 
+  private TimelineMetricDistributedCache startCacheNode() throws MalformedURLException, URISyntaxException {
+    //TODO make configurable
+    return new TimelineMetricsIgniteCache();
+  }
+
+
   private synchronized void initializeSubsystem() {
     if (!isInitialized) {
       hBaseAccessor = new PhoenixHBaseAccessor(null);
@@ -142,6 +149,15 @@ public class HBaseTimelineMetricsService extends AbstractService implements Time
         throw new ExceptionInInitializerError("Cannot initialize configuration.");
       }
 
+      if (configuration.isCollectorInMemoryAggregationEnabled()) {
+        try {
+          cache = startCacheNode();
+        } catch (Exception e) {
+          throw new MetricsSystemInitializationException("Unable to " +
+              "start cache node", e);
+        }
+      }
+
       defaultTopNHostsLimit = Integer.parseInt(metricsConf.get(DEFAULT_TOPN_HOSTS_LIMIT, "20"));
       if (Boolean.parseBoolean(metricsConf.get(USE_GROUPBY_AGGREGATOR_QUERIES, "true"))) {
         LOG.info("Using group by aggregators for aggregating host and cluster metrics.");
@@ -150,7 +166,7 @@ public class HBaseTimelineMetricsService extends AbstractService implements Time
       // Start the cluster aggregator second
       TimelineMetricAggregator secondClusterAggregator =
         TimelineMetricAggregatorFactory.createTimelineClusterAggregatorSecond(
-          hBaseAccessor, metricsConf, metricMetadataManager, haController);
+          hBaseAccessor, metricsConf, metricMetadataManager, haController, cache);
       scheduleAggregatorThread(secondClusterAggregator);
 
       // Start the minute cluster aggregator
@@ -172,7 +188,7 @@ public class HBaseTimelineMetricsService extends AbstractService implements Time
       scheduleAggregatorThread(dailyClusterAggregator);
 
       // Start the minute host aggregator
-      if (Boolean.parseBoolean(metricsConf.get(TIMELINE_METRICS_HOST_INMEMORY_AGGREGATION, "true"))) {
+      if (configuration.isHostInMemoryAggregationEnabled()) {
         LOG.info("timeline.metrics.host.inmemory.aggregation is set to True, switching to filtering host minute aggregation on collector");
         TimelineMetricAggregator minuteHostAggregator =
           TimelineMetricAggregatorFactory.createFilteringTimelineMetricAggregatorMinute(
@@ -383,6 +399,10 @@ public class HBaseTimelineMetricsService extends AbstractService implements Time
 
     hBaseAccessor.insertMetricRecordsWithMetadata(metricMetadataManager, metrics, false);
 
+    if (configuration.isCollectorInMemoryAggregationEnabled()) {
+      cache.putMetrics(metrics.getMetrics(), metricMetadataManager);
+    }
+
     return response;
   }
 
@@ -460,7 +480,7 @@ public class HBaseTimelineMetricsService extends AbstractService implements Time
     Map<String, TimelineMetricHostMetadata> hostsMetadata = metricMetadataManager.getHostedAppsCache();
     Map<String, Set<String>> hostAppMap = new HashMap<>();
     for (String hostname : hostsMetadata.keySet()) {
-      hostAppMap.put(hostname, hostsMetadata.get(hostname).getHostedApps());
+      hostAppMap.put(hostname, hostsMetadata.get(hostname).getHostedApps().keySet());
     }
     return hostAppMap;
   }
@@ -500,7 +520,7 @@ public class HBaseTimelineMetricsService extends AbstractService implements Time
     if (MapUtils.isEmpty(instanceHosts)) {
       Map<String, Set<String>> appHostMap = new HashMap<String, Set<String>>();
       for (String host : hostedApps.keySet()) {
-        for (String app : hostedApps.get(host).getHostedApps()) {
+        for (String app : hostedApps.get(host).getHostedApps().keySet()) {
           if (!appHostMap.containsKey(app)) {
             appHostMap.put(app, new HashSet<String>());
           }
@@ -519,7 +539,7 @@ public class HBaseTimelineMetricsService extends AbstractService implements Time
 
         Set<String> hostsWithInstance = instanceHosts.get(instance);
         for (String host : hostsWithInstance) {
-          for (String app : hostedApps.get(host).getHostedApps()) {
+          for (String app : hostedApps.get(host).getHostedApps().keySet()) {
             if (StringUtils.isNotEmpty(appId) && !app.equals(appId)) {
               continue;
             }
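
Taken together, the changes above tee every write: metrics are still persisted
to HBase through Phoenix and, when timeline.metrics.collector.inmemory.aggregation
is enabled, are also pushed into the distributed cache that backs the
second-level cluster aggregator. A condensed sketch of that write path, with
error handling and response bookkeeping elided:

    // Condensed from the putMetrics() path above; not a drop-in replacement.
    hBaseAccessor.insertMetricRecordsWithMetadata(metricMetadataManager, metrics, false);

    if (configuration.isCollectorInMemoryAggregationEnabled()) {
      // The Ignite-backed cache node is started in initializeSubsystem()
      // via startCacheNode().
      cache.putMetrics(metrics.getMetrics(), metricMetadataManager);
    }
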
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/PhoenixHBaseAccessor.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/PhoenixHBaseAccessor.java
index d207775..da14fd1 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/PhoenixHBaseAccessor.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/PhoenixHBaseAccessor.java
@@ -1570,7 +1570,7 @@ public class PhoenixHBaseAccessor {
         stmt.clearParameters();
         stmt.setString(1, hostedAppsEntry.getKey());
         stmt.setBytes(2, timelineMetricHostMetadata.getUuid());
-        stmt.setString(3, StringUtils.join(timelineMetricHostMetadata.getHostedApps(), ","));
+        stmt.setString(3, StringUtils.join(timelineMetricHostMetadata.getHostedApps().keySet(), ","));
         try {
           stmt.executeUpdate();
           rowCount++;
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricConfiguration.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricConfiguration.java
index 6083859..258e9c6 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricConfiguration.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricConfiguration.java
@@ -26,8 +26,10 @@ import java.net.MalformedURLException;
 import java.net.URISyntaxException;
 import java.net.URL;
 import java.net.UnknownHostException;
+import java.util.Arrays;
 import java.util.Collections;
 import java.util.HashSet;
+import java.util.List;
 import java.util.Set;
 
 import org.apache.commons.lang.StringUtils;
@@ -57,6 +59,7 @@ public class TimelineMetricConfiguration {
   public static final String HBASE_SITE_CONFIGURATION_FILE = "hbase-site.xml";
   public static final String METRICS_SITE_CONFIGURATION_FILE = "ams-site.xml";
   public static final String METRICS_ENV_CONFIGURATION_FILE = "ams-env.xml";
+  public static final String METRICS_SSL_SERVER_CONFIGURATION_FILE = "ssl-server.xml";
 
   public static final String TIMELINE_METRICS_AGGREGATOR_CHECKPOINT_DIR =
     "timeline.metrics.aggregator.checkpoint.dir";
@@ -118,6 +121,9 @@ public class TimelineMetricConfiguration {
   public static final String CLUSTER_AGGREGATOR_TIMESLICE_INTERVAL =
     "timeline.metrics.cluster.aggregator.second.timeslice.interval";
 
+  public static final String CLUSTER_CACHE_AGGREGATOR_TIMESLICE_INTERVAL =
+    "timeline.metrics.cluster.cache.aggregator.second.timeslice.interval";
+
   public static final String AGGREGATOR_CHECKPOINT_DELAY =
     "timeline.metrics.service.checkpointDelay";
 
@@ -262,6 +268,9 @@ public class TimelineMetricConfiguration {
   public static final String TIMELINE_METRICS_CLUSTER_AGGREGATOR_INTERPOLATION_ENABLED =
     "timeline.metrics.cluster.aggregator.interpolation.enabled";
 
+  public static final String TIMELINE_METRICS_SINK_COLLECTION_PERIOD =
+    "timeline.metrics.sink.collection.period";
+
   public static final String TIMELINE_METRICS_PRECISION_TABLE_DURABILITY =
     "timeline.metrics.precision.table.durability";
 
@@ -331,6 +340,13 @@ public class TimelineMetricConfiguration {
   public static final String AMSHBASE_METRICS_WHITESLIST_FILE = "amshbase_metrics_whitelist";
 
   public static final String TIMELINE_METRICS_HOST_INMEMORY_AGGREGATION = "timeline.metrics.host.inmemory.aggregation";
+
+  public static final String TIMELINE_METRICS_COLLECTOR_INMEMORY_AGGREGATION = "timeline.metrics.collector.inmemory.aggregation";
+
+  public static final String TIMELINE_METRICS_COLLECTOR_IGNITE_NODES = "timeline.metrics.collector.ignite.nodes.list";
+
+  public static final String TIMELINE_METRICS_COLLECTOR_IGNITE_BACKUPS = "timeline.metrics.collector.ignite.nodes.backups";
+
   public static final String INTERNAL_CACHE_HEAP_PERCENT =
     "timeline.metrics.service.cache.%s.heap.percent";
 
@@ -342,6 +358,7 @@ public class TimelineMetricConfiguration {
 
   private Configuration hbaseConf;
   private Configuration metricsConf;
+  private Configuration metricsSslConf;
   private Configuration amsEnvConf;
   private volatile boolean isInitialized = false;
 
@@ -386,6 +403,17 @@ public class TimelineMetricConfiguration {
       metricsConf = new Configuration(true);
       metricsConf.addResource(amsResUrl.toURI().toURL());
 
+      if (metricsConf.get("timeline.metrics.service.http.policy", "HTTP_ONLY").equalsIgnoreCase("HTTPS_ONLY")) {
+        URL amsSllResUrl = classLoader.getResource(METRICS_SSL_SERVER_CONFIGURATION_FILE);
+        LOG.info("Found metric ssl service configuration: " + amsResUrl);
+        if (amsSllResUrl == null) {
+          throw new IllegalStateException("Unable to initialize the metrics " +
+            "subsystem. No ams-ssl-server present in the classpath.");
+        }
+        metricsSslConf = new Configuration(true);
+        metricsSslConf.addResource(amsSllResUrl.toURI().toURL());
+      }
+
       isInitialized = true;
     }
   }
@@ -404,6 +432,13 @@ public class TimelineMetricConfiguration {
     return metricsConf;
   }
 
+  public Configuration getMetricsSslConf() throws URISyntaxException, MalformedURLException {
+    if (!isInitialized) {
+      initialize();
+    }
+    return metricsSslConf;
+  }
+
   public String getZKClientPort() throws MalformedURLException, URISyntaxException {
     return getHbaseConf().getTrimmed("hbase.zookeeper.property.clientPort", "2181");
   }
@@ -609,4 +644,45 @@ public class TimelineMetricConfiguration {
 
     return dirPath;
   }
+
+  public boolean isHostInMemoryAggregationEnabled() {
+    if (metricsConf != null) {
+      return Boolean.valueOf(metricsConf.get(TIMELINE_METRICS_HOST_INMEMORY_AGGREGATION, "false"));
+    } else {
+      return false;
+    }
+  }
+
+  public boolean isCollectorInMemoryAggregationEnabled() {
+    if (metricsConf != null) {
+      return Boolean.valueOf(metricsConf.get(TIMELINE_METRICS_COLLECTOR_INMEMORY_AGGREGATION, "false"));
+    } else {
+      return false;
+    }
+  }
+
+  public List<String> getAppIdsForHostAggregation() {
+    String appIds = metricsConf.get(CLUSTER_AGGREGATOR_APP_IDS);
+    if (!StringUtils.isEmpty(appIds)) {
+      return Arrays.asList(StringUtils.stripAll(appIds.split(",")));
+    }
+    return Collections.emptyList();
+  }
+
+  public String getZkConnectionUrl(String zkClientPort, String zkQuorum) {
+    StringBuilder sb = new StringBuilder();
+    String[] quorumParts = zkQuorum.split(",");
+    String prefix = "";
+    for (String part : quorumParts) {
+      sb.append(prefix);
+      sb.append(part.trim());
+      if (!part.contains(":")) {
+        sb.append(":");
+        sb.append(zkClientPort);
+      }
+      prefix = ",";
+    }
+
+    return sb.toString();
+  }
 }
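
getZkConnectionUrl() normalizes a quorum into connection-URL form: entries that
already carry a port are kept as-is, bare hosts get the client port appended,
and the parts are trimmed and re-joined with commas. A worked example
(hostnames hypothetical):

    TimelineMetricConfiguration conf = TimelineMetricConfiguration.getInstance();
    String url = conf.getZkConnectionUrl("2181", "zk1.example.com, zk2.example.com:2182");
    // Only the port-less entry gets ":2181" appended, yielding:
    //   "zk1.example.com:2181,zk2.example.com:2182"
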
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TimelineMetricHostMetadata.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricDistributedCache.java
similarity index 50%
copy from ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TimelineMetricHostMetadata.java
copy to ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricDistributedCache.java
index 06e9279..3480545 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TimelineMetricHostMetadata.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricDistributedCache.java
@@ -6,46 +6,27 @@
  * to you under the Apache License, Version 2.0 (the
  * "License"); you may not use this file except in compliance
  * with the License.  You may obtain a copy of the License at
- * <p/>
- * http://www.apache.org/licenses/LICENSE-2.0
- * <p/>
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
  * Unless required by applicable law or agreed to in writing, software
  * distributed under the License is distributed on an "AS IS" BASIS,
  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
+package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline;
 
-package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.discovery;
+import org.apache.hadoop.metrics2.sink.timeline.MetricClusterAggregate;
+import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.TimelineClusterMetric;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.discovery.TimelineMetricMetadataManager;
 
-import java.util.HashSet;
-import java.util.Set;
+import java.util.Collection;
+import java.util.Map;
 
-public class TimelineMetricHostMetadata {
-  private Set<String> hostedApps = new HashSet<>();
-  private byte[] uuid;
-
-  // Default constructor
-  public TimelineMetricHostMetadata() {
-  }
-
-  public TimelineMetricHostMetadata(Set<String> hostedApps) {
-    this.hostedApps = hostedApps;
-  }
-
-  public Set<String> getHostedApps() {
-    return hostedApps;
-  }
-
-  public void setHostedApps(Set<String> hostedApps) {
-    this.hostedApps = hostedApps;
-  }
-
-  public byte[] getUuid() {
-    return uuid;
-  }
-
-  public void setUuid(byte[] uuid) {
-    this.uuid = uuid;
-  }
+public interface TimelineMetricDistributedCache {
+  Map<TimelineClusterMetric, MetricClusterAggregate> evictMetricAggregates(Long startTime, Long endTime);
+  void putMetrics(Collection<TimelineMetric> elements, TimelineMetricMetadataManager metricMetadataManager);
+  Map<String, Double> getPointInTimeCacheMetrics();
 }
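
The interface splits the two sides of the exchange: the collector feeds sliced
values in through putMetrics(), and the aggregator drains a time window with
evictMetricAggregates(), which both returns and removes the matching entries.
A minimal sketch of one aggregation pass, assuming the default 120-second
second-aggregation window:

    TimelineMetricDistributedCache cache = new TimelineMetricsIgniteCache();

    long endTime = System.currentTimeMillis();
    long startTime = endTime - 120000L; // assumed aggregation window

    // Everything with a timestamp in (startTime, endTime] is returned
    // and scheduled for removal from the cache.
    Map<TimelineClusterMetric, MetricClusterAggregate> aggregates =
        cache.evictMetricAggregates(startTime, endTime);
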
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricsIgniteCache.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricsIgniteCache.java
new file mode 100644
index 0000000..aeaa4ba
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricsIgniteCache.java
@@ -0,0 +1,305 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline;
+
+import org.apache.commons.lang.StringUtils;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.metrics2.sink.timeline.MetricClusterAggregate;
+import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
+import org.apache.hadoop.metrics2.sink.timeline.availability.MetricCollectorHAHelper;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.TimelineClusterMetric;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.discovery.TimelineMetricHostMetadata;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.discovery.TimelineMetricMetadataManager;
+import org.apache.ignite.Ignite;
+import org.apache.ignite.IgniteCache;
+import org.apache.ignite.Ignition;
+import org.apache.ignite.cache.CacheAtomicityMode;
+import org.apache.ignite.cache.CacheMetrics;
+import org.apache.ignite.cache.CacheMode;
+import org.apache.ignite.cache.query.QueryCursor;
+import org.apache.ignite.cache.query.ScanQuery;
+import org.apache.ignite.configuration.CacheConfiguration;
+import org.apache.ignite.configuration.IgniteConfiguration;
+import org.apache.ignite.lang.IgniteBiPredicate;
+import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
+import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
+import org.apache.ignite.ssl.SslContextFactory;
+
+import javax.cache.Cache;
+import javax.cache.expiry.CreatedExpiryPolicy;
+import javax.cache.expiry.Duration;
+import java.net.MalformedURLException;
+import java.net.URISyntaxException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Date;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.locks.Lock;
+
+import static java.util.concurrent.TimeUnit.SECONDS;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.CLUSTER_AGGREGATOR_SECOND_SLEEP_INTERVAL;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.CLUSTER_CACHE_AGGREGATOR_TIMESLICE_INTERVAL;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.HOST_APP_ID;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.TIMELINE_METRICS_CLUSTER_AGGREGATOR_INTERPOLATION_ENABLED;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.TIMELINE_METRICS_COLLECTOR_IGNITE_BACKUPS;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.TIMELINE_METRICS_COLLECTOR_IGNITE_NODES;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.TIMELINE_METRICS_SINK_COLLECTION_PERIOD;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.TIMELINE_METRIC_AGGREGATION_SQL_FILTERS;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.TIMELINE_SERVICE_HTTP_POLICY;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.AggregatorUtils.getRoundedCheckPointTimeMillis;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.AggregatorUtils.getTimeSlices;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.AggregatorUtils.sliceFromTimelineMetric;
+
+public class TimelineMetricsIgniteCache implements TimelineMetricDistributedCache {
+  private static final Log LOG =
+      LogFactory.getLog(TimelineMetricsIgniteCache.class);
+  private IgniteCache<TimelineClusterMetric, MetricClusterAggregate> igniteCache;
+  private long cacheSliceIntervalMillis;
+  private int collectionPeriodMillis;
+  private boolean interpolationEnabled;
+  private List<String> skipAggrPatternStrings = new ArrayList<>();
+  private List<String> appIdsToAggregate;
+
+
+  public TimelineMetricsIgniteCache() throws MalformedURLException, URISyntaxException {
+    TimelineMetricConfiguration timelineMetricConfiguration = TimelineMetricConfiguration.getInstance();
+    Configuration metricConf = timelineMetricConfiguration.getMetricsConf();
+    Configuration sslConf = timelineMetricConfiguration.getMetricsSslConf();
+
+    IgniteConfiguration igniteConfiguration = new IgniteConfiguration();
+
+    //TODO add config to disable logging
+
+    //enable ssl for ignite requests
+    if (metricConf.get(TIMELINE_SERVICE_HTTP_POLICY) != null && metricConf.get(TIMELINE_SERVICE_HTTP_POLICY).equalsIgnoreCase("HTTPS_ONLY")) {
+      SslContextFactory sslContextFactory = new SslContextFactory();
+      String keyStorePath = sslConf.get("ssl.server.keystore.location");
+      String keyStorePassword = sslConf.get("ssl.server.keystore.password");
+      String trustStorePath = sslConf.get("ssl.server.truststore.location");
+      String trustStorePassword = sslConf.get("ssl.server.truststore.password");
+
+      sslContextFactory.setKeyStoreFilePath(keyStorePath);
+      sslContextFactory.setKeyStorePassword(keyStorePassword.toCharArray());
+      sslContextFactory.setTrustStoreFilePath(trustStorePath);
+      sslContextFactory.setTrustStorePassword(trustStorePassword.toCharArray());
+      igniteConfiguration.setSslContextFactory(sslContextFactory);
+    }
+
+    //aggregation parameters
+    appIdsToAggregate = timelineMetricConfiguration.getAppIdsForHostAggregation();
+    interpolationEnabled = Boolean.parseBoolean(metricConf.get(TIMELINE_METRICS_CLUSTER_AGGREGATOR_INTERPOLATION_ENABLED, "true"));
+    collectionPeriodMillis = (int) SECONDS.toMillis(metricConf.getInt(TIMELINE_METRICS_SINK_COLLECTION_PERIOD, 10));
+    cacheSliceIntervalMillis = SECONDS.toMillis(metricConf.getInt(CLUSTER_CACHE_AGGREGATOR_TIMESLICE_INTERVAL, 30));
+    Long aggregationInterval = metricConf.getLong(CLUSTER_AGGREGATOR_SECOND_SLEEP_INTERVAL, 120L);
+
+    String filteredMetricPatterns = metricConf.get(TIMELINE_METRIC_AGGREGATION_SQL_FILTERS);
+    if (!StringUtils.isEmpty(filteredMetricPatterns)) {
+      LOG.info("Skipping aggregation for metric patterns : " + filteredMetricPatterns);
+      skipAggrPatternStrings.addAll(Arrays.asList(filteredMetricPatterns.split(",")));
+    }
+
+    if (metricConf.get(TIMELINE_METRICS_COLLECTOR_IGNITE_NODES) != null) {
+      TcpDiscoverySpi discoverySpi = new TcpDiscoverySpi();
+      TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
+      ipFinder.setAddresses(Arrays.asList(metricConf.get(TIMELINE_METRICS_COLLECTOR_IGNITE_NODES).split(",")));
+      LOG.info("Setting ignite nodes to : " + ipFinder.getRegisteredAddresses());
+      discoverySpi.setIpFinder(ipFinder);
+      igniteConfiguration.setDiscoverySpi(discoverySpi);
+    } else {
+      //get live nodes from ZK
+      String zkClientPort = timelineMetricConfiguration.getClusterZKClientPort();
+      String zkQuorum = timelineMetricConfiguration.getClusterZKQuorum();
+      String zkConnectionURL = timelineMetricConfiguration.getZkConnectionUrl(zkClientPort, zkQuorum);
+      MetricCollectorHAHelper metricCollectorHAHelper = new MetricCollectorHAHelper(zkConnectionURL, 5, 200);
+      Collection<String> liveCollectors = metricCollectorHAHelper.findLiveCollectorHostsFromZNode();
+      if (liveCollectors != null && !liveCollectors.isEmpty()) {
+        TcpDiscoverySpi discoverySpi = new TcpDiscoverySpi();
+        TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
+        ipFinder.setAddresses(liveCollectors);
+        LOG.info("Setting ignite nodes to : " + ipFinder.getRegisteredAddresses());
+        discoverySpi.setIpFinder(ipFinder);
+        igniteConfiguration.setDiscoverySpi(discoverySpi);
+      }
+    }
+
+
+    //ignite cache configuration
+    CacheConfiguration<TimelineClusterMetric, MetricClusterAggregate> cacheConfiguration = new CacheConfiguration<>();
+    cacheConfiguration.setName("metrics_cache");
+    //set cache mode to partitioned with # of backups
+    cacheConfiguration.setCacheMode(CacheMode.PARTITIONED);
+    cacheConfiguration.setBackups(metricConf.getInt(TIMELINE_METRICS_COLLECTOR_IGNITE_BACKUPS, 1));
+    //disable throttling due to cpu impact
+    cacheConfiguration.setRebalanceThrottle(0);
+    //enable locks
+    cacheConfiguration.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
+    //expiry policy to remove lost keys, if any
+    cacheConfiguration.setEagerTtl(true);
+    cacheConfiguration.setExpiryPolicyFactory(CreatedExpiryPolicy.factoryOf(new Duration(TimeUnit.SECONDS, aggregationInterval * 3)));
+
+    Ignite igniteNode = Ignition.start(igniteConfiguration);
+    igniteCache = igniteNode.getOrCreateCache(cacheConfiguration);
+  }
+
+  /**
+   * Looks through the cache and evicts all elements within the (startTime, endTime] half-open interval.
+   * Every element that falls inside the interval is removed from the cache.
+   * @param startTime exclusive start of the eviction interval
+   * @param endTime inclusive end of the eviction interval
+   * @return the evicted aggregates, keyed by cluster metric
+   */
+  @Override
+  public Map<TimelineClusterMetric, MetricClusterAggregate> evictMetricAggregates(Long startTime, Long endTime) {
+    Map<TimelineClusterMetric, MetricClusterAggregate> aggregatedMetricsMap = new HashMap<>();
+
+    //construct filter
+    IgniteBiPredicate<TimelineClusterMetric, MetricClusterAggregate> filter =
+        (IgniteBiPredicate<TimelineClusterMetric, MetricClusterAggregate>) (key, value) -> key.getTimestamp() > startTime && key.getTimestamp() <= endTime;
+
+    //get values from cache
+    try (QueryCursor<Cache.Entry<TimelineClusterMetric, MetricClusterAggregate>> cursor = igniteCache.query(new ScanQuery(filter))) {
+      for (Cache.Entry<TimelineClusterMetric, MetricClusterAggregate> e : cursor) {
+        aggregatedMetricsMap.put(e.getKey(), e.getValue());
+      }
+    }
+
+    //remove values from cache
+    igniteCache.removeAllAsync(aggregatedMetricsMap.keySet());
+
+    return aggregatedMetricsMap;
+  }
+
+  /**
+   * Iterates through the elements, skipping metrics that match the skip-aggregation patterns;
+   * calculates the average value for each slice of each metric (the last slice may be ignored if values for it could still arrive in the next post);
+   * updates/adds the value in the cache;
+   * calculates per-application host metrics based on the hosted-apps metadata and
+   * updates the hosted-apps metadata if needed.
+   * @param elements the metrics to slice and cache
+   * @param metadataManager source of the hosted-apps metadata
+   */
+  @Override
+  public void putMetrics(Collection<TimelineMetric> elements, TimelineMetricMetadataManager metadataManager) {
+    Map<String, TimelineMetricHostMetadata> hostMetadata = metadataManager.getHostedAppsCache();
+    for (TimelineMetric metric : elements) {
+      if (shouldBeSkipped(metric.getMetricName())) {
+        if (LOG.isDebugEnabled()) {
+          LOG.debug(String.format("Skipping %s metric from being aggregated", metric.getMetricName()));
+        }
+        continue;
+      }
+      List<Long[]> timeSlices = getTimeSlices(getRoundedCheckPointTimeMillis(metric.getMetricValues().firstKey(), cacheSliceIntervalMillis), metric.getMetricValues().lastKey(), cacheSliceIntervalMillis);
+      Map<TimelineClusterMetric, Double> slicedClusterMetrics = sliceFromTimelineMetric(metric, timeSlices, interpolationEnabled);
+
+      if (slicedClusterMetrics != null) {
+        for (Map.Entry<TimelineClusterMetric, Double> metricDoubleEntry : slicedClusterMetrics.entrySet()) {
+          if (metricDoubleEntry.getKey().getTimestamp() == timeSlices.get(timeSlices.size()-1)[1] && metricDoubleEntry.getKey().getTimestamp() - metric.getMetricValues().lastKey() > collectionPeriodMillis) {
+            if(LOG.isDebugEnabled()) {
+              LOG.debug("Last skipped timestamp @ " + new Date(metric.getMetricValues().lastKey()) + " slice timestamp @ " + new Date(metricDoubleEntry.getKey().getTimestamp()));
+            }
+            continue;
+          }
+          MetricClusterAggregate newMetricClusterAggregate  = new MetricClusterAggregate(
+              metricDoubleEntry.getValue(), 1, null, metricDoubleEntry.getValue(), metricDoubleEntry.getValue());
+          //put app metric into cache
+          putMetricIntoCache(metricDoubleEntry.getKey(), newMetricClusterAggregate);
+          if (hostMetadata != null) {
+            //calculate app host metric
+            if (metric.getAppId().equalsIgnoreCase(HOST_APP_ID)) {
+              // Candidate metric, update app aggregates
+              if (hostMetadata.containsKey(metric.getHostName())) {
+                updateAppAggregatesFromHostMetric(metricDoubleEntry.getKey(), newMetricClusterAggregate, hostMetadata.get(metric.getHostName()));
+              }
+            } else {
+              // Build the hostedapps map if not a host metric
+              // Check app candidacy for host aggregation
+              //TODO better to lock the TimelineMetricHostMetadata instance to avoid data loss, but generally data could be lost only during the initial collector start
+              if (appIdsToAggregate.contains(metric.getAppId())) {
+                TimelineMetricHostMetadata timelineMetricHostMetadata = hostMetadata.get(metric.getHostName());
+                ConcurrentHashMap<String, String> appIdsMap;
+                if (timelineMetricHostMetadata == null) {
+                  appIdsMap = new ConcurrentHashMap<>();
+                  hostMetadata.put(metric.getHostName(), new TimelineMetricHostMetadata(appIdsMap));
+                } else {
+                  appIdsMap = timelineMetricHostMetadata.getHostedApps();
+                }
+                if (!appIdsMap.containsKey(metric.getAppId())) {
+                  appIdsMap.put(metric.getAppId(), metric.getAppId());
+                  LOG.info("Adding appId to hosted apps: appId = " +
+                      metric.getAppId() + ", hostname = " + metric.getHostName());
+                }
+              }
+            }
+          }
+        }
+      }
+    }
+  }
+
+  private void updateAppAggregatesFromHostMetric(TimelineClusterMetric key, MetricClusterAggregate newMetricClusterAggregate, TimelineMetricHostMetadata timelineMetricHostMetadata) {
+    for (String appId : timelineMetricHostMetadata.getHostedApps().keySet()) {
+      TimelineClusterMetric timelineClusterMetric = new TimelineClusterMetric(key.getMetricName(), appId, key.getInstanceId(), key.getTimestamp());
+      putMetricIntoCache(timelineClusterMetric, newMetricClusterAggregate);
+    }
+  }
+
+  private void putMetricIntoCache(TimelineClusterMetric metricKey, MetricClusterAggregate metricValue) {
+    Lock lock = igniteCache.lock(metricKey);
+    lock.lock();
+    try {
+      MetricClusterAggregate metricClusterAggregateFromCache = igniteCache.get(metricKey);
+      if (metricClusterAggregateFromCache == null) {
+        igniteCache.put(metricKey, metricValue);
+      } else {
+        metricClusterAggregateFromCache.updateAggregates(metricValue);
+        igniteCache.put(metricKey, metricClusterAggregateFromCache);
+      }
+    } catch (Exception e) {
+      LOG.error("Exception : ", e);
+    } finally {
+      lock.unlock();
+    }
+  }
+
+  @Override
+  public Map<String, Double> getPointInTimeCacheMetrics() {
+    CacheMetrics clusterIgniteMetrics = igniteCache.metrics();
+    Map<String, Double> metricsMap = new HashMap<>();
+    metricsMap.put("Cluster_AverageGetTime", (double) clusterIgniteMetrics.getAverageGetTime());
+    metricsMap.put("Cluster_AveragePutTime", (double) clusterIgniteMetrics.getAveragePutTime());
+    metricsMap.put("Cluster_KeySize", (double) clusterIgniteMetrics.getKeySize());
+    metricsMap.put("Cluster_OffHeapAllocatedSize", (double) clusterIgniteMetrics.getOffHeapAllocatedSize());
+    return metricsMap;
+  }
+
+  private boolean shouldBeSkipped(String metricName) {
+    for (String pattern : skipAggrPatternStrings) {
+      if (metricName.matches(pattern)) {
+        return true;
+      }
+    }
+    return false;
+  }
+}
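
putMetricIntoCache() above is a get-merge-put sequence guarded by a per-key
Ignite lock, which is why the cache is configured with TRANSACTIONAL atomicity.
The same read-modify-write shape sketched against a local map, purely for
illustration (the real code locks the key cluster-wide with igniteCache.lock()):

    ConcurrentHashMap<TimelineClusterMetric, MetricClusterAggregate> local =
        new ConcurrentHashMap<>();

    local.compute(metricKey, (key, existing) -> {
      if (existing == null) {
        return metricValue;                     // first value seen for this slice
      }
      existing.updateAggregates(metricValue);   // fold the new value into the running aggregate
      return existing;
    });
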
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/AbstractTimelineAggregator.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/AbstractTimelineAggregator.java
index b2edb73..89428c0 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/AbstractTimelineAggregator.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/AbstractTimelineAggregator.java
@@ -43,6 +43,8 @@ import java.util.List;
 import static java.util.concurrent.TimeUnit.SECONDS;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.AGGREGATOR_CHECKPOINT_DELAY;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.RESULTSET_FETCH_SIZE;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.AggregatorUtils.getRoundedAggregateTimeMillis;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.AggregatorUtils.getRoundedCheckPointTimeMillis;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.availability.AggregationTaskRunner.ACTUAL_AGGREGATOR_NAMES;
 
 /**
@@ -389,15 +391,6 @@ public abstract class AbstractTimelineAggregator implements TimelineMetricAggreg
     return checkpointLocation;
   }
 
-  public static long getRoundedCheckPointTimeMillis(long referenceTime, long aggregatorPeriod) {
-    return referenceTime - (referenceTime % aggregatorPeriod);
-  }
-
-  public static long getRoundedAggregateTimeMillis(long aggregatorPeriod) {
-    long currentTime = System.currentTimeMillis();
-    return currentTime - (currentTime % aggregatorPeriod);
-  }
-
   /**
    * Run 1 downsampler query.
    * @param conn
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/AggregatorUtils.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/AggregatorUtils.java
index 20f72c6..b12cb86 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/AggregatorUtils.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/AggregatorUtils.java
@@ -18,9 +18,19 @@
 package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators;
 
 
+import java.util.ArrayList;
+import java.util.Date;
+import java.util.HashMap;
+import java.util.List;
 import java.util.Map;
+import java.util.TreeMap;
+
+import org.apache.commons.collections.MapUtils;
+import org.apache.commons.lang.StringUtils;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.metrics2.sink.timeline.PostProcessingUtil;
+import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
 
 /**
  *
@@ -59,4 +69,186 @@ public class AggregatorUtils {
 
     return values;
   }
+
+  public static Map<TimelineClusterMetric, Double> sliceFromTimelineMetric(
+      TimelineMetric timelineMetric, List<Long[]> timeSlices, boolean interpolationEnabled) {
+
+    if (timelineMetric.getMetricValues().isEmpty()) {
+      return null;
+    }
+
+    Map<TimelineClusterMetric, Double> timelineClusterMetricMap =
+        new HashMap<>();
+
+    Long prevTimestamp = -1l;
+    TimelineClusterMetric prevMetric = null;
+    int count = 0;
+    double sum = 0.0;
+
+    Map<Long,Double> timeSliceValueMap = new HashMap<>();
+    for (Map.Entry<Long, Double> metric : timelineMetric.getMetricValues().entrySet()) {
+      if (metric.getValue() == null) {
+        continue;
+      }
+
+      Long timestamp = getSliceTimeForMetric(timeSlices, Long.parseLong(metric.getKey().toString()));
+      if (timestamp != -1) {
+        // Metric is within desired time range
+        TimelineClusterMetric clusterMetric = new TimelineClusterMetric(
+            timelineMetric.getMetricName(),
+            timelineMetric.getAppId(),
+            timelineMetric.getInstanceId(),
+            timestamp);
+
+        if (prevTimestamp < 0 || timestamp.equals(prevTimestamp)) {
+          Double newValue = metric.getValue();
+          if (newValue > 0.0) {
+            sum += newValue;
+            count++;
+          }
+        } else {
+          double metricValue = (count > 0) ? (sum / count) : 0.0;
+          timelineClusterMetricMap.put(prevMetric, metricValue);
+          timeSliceValueMap.put(prevMetric.getTimestamp(), metricValue);
+          sum = metric.getValue();
+          count = sum > 0.0 ? 1 : 0;
+        }
+
+        prevTimestamp = timestamp;
+        prevMetric = clusterMetric;
+      }
+    }
+
+    if (prevTimestamp > 0) {
+      double metricValue = (count > 0) ? (sum / count) : 0.0;
+      timelineClusterMetricMap.put(prevMetric, metricValue);
+      timeSliceValueMap.put(prevTimestamp, metricValue);
+    }
+
+    if (interpolationEnabled) {
+      Map<Long, Double> interpolatedValues = interpolateMissingPeriods(timelineMetric.getMetricValues(), timeSlices, timeSliceValueMap, timelineMetric.getType());
+      for (Map.Entry<Long, Double> entry : interpolatedValues.entrySet()) {
+        TimelineClusterMetric timelineClusterMetric = new TimelineClusterMetric(timelineMetric.getMetricName(), timelineMetric.getAppId(), timelineMetric.getInstanceId(), entry.getKey());
+        timelineClusterMetricMap.putIfAbsent(timelineClusterMetric, entry.getValue());
+      }
+    }
+
+    return timelineClusterMetricMap;
+  }
+
+  private static Map<Long, Double> interpolateMissingPeriods(TreeMap<Long, Double> metricValues,
+                                               List<Long[]> timeSlices,
+                                               Map<Long, Double> timeSliceValueMap, String type) {
+    Map<Long, Double> resultClusterMetricMap = new HashMap<>();
+
+    if (StringUtils.isNotEmpty(type) && "COUNTER".equalsIgnoreCase(type)) {
+      //For Counter Based metrics, ok to do interpolation and extrapolation
+
+      List<Long> requiredTimestamps = new ArrayList<>();
+      for (Long[] timeSlice : timeSlices) {
+        if (!timeSliceValueMap.containsKey(timeSlice[1])) {
+          requiredTimestamps.add(timeSlice[1]);
+        }
+      }
+      Map<Long, Double> interpolatedValuesMap = PostProcessingUtil.interpolate(metricValues, requiredTimestamps);
+
+      if (interpolatedValuesMap != null) {
+        for (Map.Entry<Long, Double> entry : interpolatedValuesMap.entrySet()) {
+          Double interpolatedValue = entry.getValue();
+
+          if (interpolatedValue != null) {
+            resultClusterMetricMap.put( entry.getKey(), interpolatedValue);
+          } else {
+            LOG.debug("Cannot compute interpolated value, hence skipping.");
+          }
+        }
+      }
+    } else {
+      //For other metrics, ok to do only interpolation
+
+      Double defaultNextSeenValue = null;
+      if (MapUtils.isEmpty(timeSliceValueMap) && MapUtils.isNotEmpty(metricValues)) {
+        //If no value was found within the start_time based slices, but the metric has value in the server_time range,
+        // use that.
+
+        Map.Entry<Long,Double> firstEntry  = metricValues.firstEntry();
+        defaultNextSeenValue = firstEntry.getValue();
+        LOG.debug("Found a data point outside timeslice range: " + new Date(firstEntry.getKey()) + ": " + defaultNextSeenValue);
+      }
+
+      for (int sliceNum = 0; sliceNum < timeSlices.size(); sliceNum++) {
+        Long[] timeSlice = timeSlices.get(sliceNum);
+
+        if (!timeSliceValueMap.containsKey(timeSlice[1])) {
+          LOG.debug("Found an empty slice : " + new Date(timeSlice[0]) + ", " + new Date(timeSlice[1]));
+
+          Double lastSeenValue = null;
+          int index = sliceNum - 1;
+          Long[] prevTimeSlice = null;
+          while (lastSeenValue == null && index >= 0) {
+            prevTimeSlice = timeSlices.get(index--);
+            lastSeenValue = timeSliceValueMap.get(prevTimeSlice[1]);
+          }
+
+          Double nextSeenValue = null;
+          index = sliceNum + 1;
+          Long[] nextTimeSlice = null;
+          while (nextSeenValue == null && index < timeSlices.size()) {
+            nextTimeSlice = timeSlices.get(index++);
+            nextSeenValue = timeSliceValueMap.get(nextTimeSlice[1]);
+          }
+
+          if (nextSeenValue == null) {
+            nextSeenValue = defaultNextSeenValue;
+          }
+
+          Double interpolatedValue = PostProcessingUtil.interpolate(timeSlice[1],
+              (prevTimeSlice != null ? prevTimeSlice[1] : null), lastSeenValue,
+              (nextTimeSlice != null ? nextTimeSlice[1] : null), nextSeenValue);
+
+          if (interpolatedValue != null) {
+            LOG.debug("Interpolated value : " + interpolatedValue);
+            resultClusterMetricMap.put(timeSlice[1], interpolatedValue);
+          } else {
+            LOG.debug("Cannot compute interpolated value, hence skipping.");
+          }
+        }
+      }
+    }
+    return resultClusterMetricMap;
+  }
+
+  /**
+   * Return end of the time slice into which the metric fits.
+   */
+  public static Long getSliceTimeForMetric(List<Long[]> timeSlices, Long timestamp) {
+    for (Long[] timeSlice : timeSlices) {
+      if (timestamp > timeSlice[0] && timestamp <= timeSlice[1]) {
+        return timeSlice[1];
+      }
+    }
+    return -1l;
+  }
+
+  /**
+   * Return time slices to normalize the timeseries data.
+   */
+  public static  List<Long[]> getTimeSlices(long startTime, long endTime, long timeSliceIntervalMillis) {
+    List<Long[]> timeSlices = new ArrayList<Long[]>();
+    long sliceStartTime = startTime;
+    while (sliceStartTime < endTime) {
+      timeSlices.add(new Long[] { sliceStartTime, sliceStartTime + timeSliceIntervalMillis });
+      sliceStartTime += timeSliceIntervalMillis;
+    }
+    return timeSlices;
+  }
+
+  public static long getRoundedCheckPointTimeMillis(long referenceTime, long aggregatorPeriod) {
+    return referenceTime - (referenceTime % aggregatorPeriod);
+  }
+
+  public static long getRoundedAggregateTimeMillis(long aggregatorPeriod) {
+    long currentTime = System.currentTimeMillis();
+    return currentTime - (currentTime % aggregatorPeriod);
+  }
 }
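
The relocated helpers are pure arithmetic, so their behavior is easy to verify
by hand. A small worked example (millisecond values chosen for readability):

    // Round down to a 30 s boundary: 67,000 ms -> 60,000 ms.
    long rounded = AggregatorUtils.getRoundedCheckPointTimeMillis(67000L, 30000L);

    // Three half-open slices covering 60-150 s: {60-90, 90-120, 120-150}.
    List<Long[]> slices = AggregatorUtils.getTimeSlices(60000L, 150000L, 30000L);

    // A datapoint at 95 s falls in (90 s, 120 s], so it maps to slice end 120 s.
    Long sliceEnd = AggregatorUtils.getSliceTimeForMetric(slices, 95000L); // 120000
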
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricAggregatorFactory.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricAggregatorFactory.java
index e90fa84..c27d712 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricAggregatorFactory.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricAggregatorFactory.java
@@ -20,6 +20,8 @@ package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline
 import org.apache.commons.io.FilenameUtils;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.PhoenixHBaseAccessor;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricDistributedCache;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.availability.MetricCollectorHAController;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.discovery.TimelineMetricMetadataManager;
 
@@ -39,6 +41,7 @@ import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.ti
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.CLUSTER_AGGREGATOR_SECOND_DISABLED;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.CLUSTER_AGGREGATOR_SECOND_SLEEP_INTERVAL;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.CLUSTER_AGGREGATOR_TIMESLICE_INTERVAL;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.CLUSTER_CACHE_AGGREGATOR_TIMESLICE_INTERVAL;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.DEFAULT_CHECKPOINT_LOCATION;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.HOST_AGGREGATOR_DAILY_CHECKPOINT_CUTOFF_MULTIPLIER;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.HOST_AGGREGATOR_DAILY_DISABLED;
@@ -255,7 +258,8 @@ public class TimelineMetricAggregatorFactory {
   public static TimelineMetricAggregator createTimelineClusterAggregatorSecond(
     PhoenixHBaseAccessor hBaseAccessor, Configuration metricsConf,
     TimelineMetricMetadataManager metadataManager,
-    MetricCollectorHAController haController) {
+    MetricCollectorHAController haController,
+    TimelineMetricDistributedCache distributedCache) {
 
     String checkpointDir = metricsConf.get(
       TIMELINE_METRICS_AGGREGATOR_CHECKPOINT_DIR, DEFAULT_CHECKPOINT_LOCATION);
@@ -269,14 +273,36 @@ public class TimelineMetricAggregatorFactory {
     long timeSliceIntervalMillis = SECONDS.toMillis(metricsConf.getInt
       (CLUSTER_AGGREGATOR_TIMESLICE_INTERVAL, 30));
 
+    long cacheTimeSliceIntervalMillis = SECONDS.toMillis(metricsConf.getInt
+      (CLUSTER_CACHE_AGGREGATOR_TIMESLICE_INTERVAL, 30));
+
     int checkpointCutOffMultiplier =
       metricsConf.getInt(CLUSTER_AGGREGATOR_SECOND_CHECKPOINT_CUTOFF_MULTIPLIER, 2);
 
-    String inputTableName = METRICS_RECORD_TABLE_NAME;
     String outputTableName = METRICS_CLUSTER_AGGREGATE_TABLE_NAME;
     String aggregatorDisabledParam = CLUSTER_AGGREGATOR_SECOND_DISABLED;
 
     // Second based aggregation have added responsibility of time slicing
+    if (TimelineMetricConfiguration.getInstance().isCollectorInMemoryAggregationEnabled()) {
+      return new TimelineMetricClusterAggregatorSecondWithCacheSource(
+        METRIC_AGGREGATE_SECOND,
+        metadataManager,
+        hBaseAccessor, metricsConf,
+        checkpointLocation,
+        sleepIntervalMillis,
+        checkpointCutOffMultiplier,
+        aggregatorDisabledParam,
+        null,
+        outputTableName,
+        120000L,
+        timeSliceIntervalMillis,
+        haController,
+        distributedCache,
+        cacheTimeSliceIntervalMillis
+      );
+    }
+
+    String inputTableName = METRICS_RECORD_TABLE_NAME;
     return new TimelineMetricClusterAggregatorSecond(
       METRIC_AGGREGATE_SECOND,
       metadataManager,
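
Note on the factory change above: when TimelineMetricConfiguration reports that collector in-memory aggregation is enabled, the second aggregator is built against the distributed cache (null input table) instead of scanning METRIC_RECORD, and it carries its own cache slice interval. A minimal, self-contained sketch of that decision follows; the property names are stand-ins for the real TimelineMetricConfiguration keys, so treat them as assumptions.

import java.util.concurrent.TimeUnit;

public class SecondAggregatorSelectionSketch {
  public static void main(String[] args) {
    // Assumption: these property names stand in for the real
    // TimelineMetricConfiguration keys.
    boolean inMemoryAggregationEnabled =
        Boolean.getBoolean("collector.inmemory.aggregation.enabled");
    long timeSliceIntervalMillis = TimeUnit.SECONDS.toMillis(
        Integer.getInteger("cluster.aggregator.timeslice.interval", 30));
    long cacheTimeSliceIntervalMillis = TimeUnit.SECONDS.toMillis(
        Integer.getInteger("cluster.cache.aggregator.timeslice.interval", 30));

    if (inMemoryAggregationEnabled) {
      // Cache-sourced aggregator: no input table; slices are evicted
      // from the distributed cache instead of read from METRIC_RECORD.
      System.out.println("cache source, slice = " + cacheTimeSliceIntervalMillis + " ms");
    } else {
      System.out.println("HBase source, slice = " + timeSliceIntervalMillis + " ms");
    }
  }
}
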
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricAppAggregator.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricAppAggregator.java
index 55104de..09fbe81 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricAppAggregator.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricAppAggregator.java
@@ -31,10 +31,9 @@ import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.
 import java.util.Arrays;
 import java.util.Collections;
 import java.util.HashMap;
-import java.util.HashSet;
 import java.util.List;
 import java.util.Map;
-import java.util.Set;
+import java.util.concurrent.ConcurrentHashMap;
 
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.CLUSTER_AGGREGATOR_APP_IDS;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.HOST_APP_ID;
@@ -104,15 +103,15 @@ public class TimelineMetricAppAggregator {
       // Check app candidacy for host aggregation
       if (appIdsToAggregate.contains(appId)) {
         TimelineMetricHostMetadata timelineMetricHostMetadata = hostMetadata.get(hostname);
-        Set<String> appIds;
+        ConcurrentHashMap<String, String> appIds;
         if (timelineMetricHostMetadata == null) {
-          appIds = new HashSet<>();
+          appIds = new ConcurrentHashMap<>();
           hostMetadata.put(hostname, new TimelineMetricHostMetadata(appIds));
         } else {
           appIds = timelineMetricHostMetadata.getHostedApps();
         }
-        if (!appIds.contains(appId)) {
-          appIds.add(appId);
+        if (!appIds.containsKey(appId)) {
+          appIds.put(appId, appId);
           LOG.info("Adding appId to hosted apps: appId = " +
             clusterMetric.getAppId() + ", hostname = " + hostname);
         }
@@ -132,8 +131,8 @@ public class TimelineMetricAppAggregator {
     }
 
     TimelineMetricMetadataKey appKey =  new TimelineMetricMetadataKey(clusterMetric.getMetricName(), HOST_APP_ID, clusterMetric.getInstanceId());
-    Set<String> apps = hostMetadata.get(hostname).getHostedApps();
-    for (String appId : apps) {
+    ConcurrentHashMap<String, String> apps = hostMetadata.get(hostname).getHostedApps();
+    for (String appId : apps.keySet()) {
       if (appIdsToAggregate.contains(appId)) {
 
         appKey.setAppId(appId);
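
The hosted-apps Set<String> becomes a ConcurrentHashMap<String, String> whose values simply repeat the keys, so concurrent readers and writers no longer race on a HashSet. A small sketch of the idiom, assuming Java 8; ConcurrentHashMap.newKeySet() is shown only for comparison and is not what the patch uses.

import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class HostedAppsSetSketch {
  public static void main(String[] args) {
    // Map-as-set, as in the patch: the value just repeats the key.
    ConcurrentHashMap<String, String> appIds = new ConcurrentHashMap<>();
    if (!appIds.containsKey("hbase")) {
      appIds.put("hbase", "hbase");             // check-then-put, as in the patch
    }
    appIds.putIfAbsent("datanode", "datanode"); // single atomic alternative

    // For comparison only (Java 8+): a Set view backed by ConcurrentHashMap.
    Set<String> appIdSet = ConcurrentHashMap.newKeySet();
    appIdSet.add("hbase");

    System.out.println(appIds.keySet() + " / " + appIdSet);
  }
}
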
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricClusterAggregatorSecond.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricClusterAggregatorSecond.java
index a2f23de..773e372 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricClusterAggregatorSecond.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricClusterAggregatorSecond.java
@@ -22,6 +22,8 @@ import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.ti
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.TIMELINE_METRICS_CLUSTER_AGGREGATOR_INTERPOLATION_ENABLED;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.TIMELINE_METRICS_EVENT_METRIC_PATTERNS;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.TIMELINE_METRIC_AGGREGATION_SQL_FILTERS;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.AggregatorUtils.getTimeSlices;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.AggregatorUtils.sliceFromTimelineMetric;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.GET_METRIC_SQL;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.METRICS_RECORD_TABLE_NAME;
 
@@ -30,7 +32,6 @@ import java.sql.ResultSet;
 import java.sql.SQLException;
 import java.util.ArrayList;
 import java.util.Arrays;
-import java.util.Date;
 import java.util.HashMap;
 import java.util.HashSet;
 import java.util.List;
@@ -39,12 +40,10 @@ import java.util.Set;
 import java.util.regex.Matcher;
 import java.util.regex.Pattern;
 
-import org.apache.commons.collections.MapUtils;
 import org.apache.commons.lang.StringUtils;
 import org.apache.commons.lang.mutable.MutableInt;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.metrics2.sink.timeline.MetricClusterAggregate;
-import org.apache.hadoop.metrics2.sink.timeline.PostProcessingUtil;
 import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
 import org.apache.hadoop.metrics2.sink.timeline.TimelineMetricMetadata;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.PhoenixHBaseAccessor;
@@ -65,16 +64,13 @@ public class TimelineMetricClusterAggregatorSecond extends AbstractTimelineAggre
   // Aggregator to perform app-level aggregates for host metrics
   private final TimelineMetricAppAggregator appAggregator;
   // 1 minute client side buffering adjustment
-  private final Long serverTimeShiftAdjustment;
-  private final boolean interpolationEnabled;
+  protected final Long serverTimeShiftAdjustment;
+  protected final boolean interpolationEnabled;
   private TimelineMetricMetadataManager metadataManagerInstance;
   private String skipAggrPatternStrings;
-<<<<<<< HEAD
   private String skipInterpolationMetricPatternStrings;
   private Set<Pattern> skipInterpolationMetricPatterns = new HashSet<>();
-=======
   private final static String liveHostsMetricName = "live_hosts";
->>>>>>> AMBARI-21214 : Use a uuid vs long row key for metrics in AMS schema. (avijayan)
 
   public TimelineMetricClusterAggregatorSecond(AGGREGATOR_NAME aggregatorName,
                                                TimelineMetricMetadataManager metadataManager,
@@ -99,7 +95,6 @@ public class TimelineMetricClusterAggregatorSecond extends AbstractTimelineAggre
     this.serverTimeShiftAdjustment = Long.parseLong(metricsConf.get(SERVER_SIDE_TIMESIFT_ADJUSTMENT, "90000"));
     this.interpolationEnabled = Boolean.parseBoolean(metricsConf.get(TIMELINE_METRICS_CLUSTER_AGGREGATOR_INTERPOLATION_ENABLED, "true"));
     this.skipAggrPatternStrings = metricsConf.get(TIMELINE_METRIC_AGGREGATION_SQL_FILTERS);
-<<<<<<< HEAD
     this.skipInterpolationMetricPatternStrings = metricsConf.get(TIMELINE_METRICS_EVENT_METRIC_PATTERNS, "");
 
     if (StringUtils.isNotEmpty(skipInterpolationMetricPatternStrings)) {
@@ -109,9 +104,7 @@ public class TimelineMetricClusterAggregatorSecond extends AbstractTimelineAggre
         skipInterpolationMetricPatterns.add(Pattern.compile(javaPatternString));
       }
     }
-=======
     this.timelineMetricReadHelper = new TimelineMetricReadHelper(metadataManager, true);
->>>>>>> AMBARI-21214 : Use a uuid vs long row key for metrics in AMS schema. (avijayan)
   }
 
   @Override
@@ -120,7 +113,7 @@ public class TimelineMetricClusterAggregatorSecond extends AbstractTimelineAggre
     // timestamps with the difference between server time and series start time
     // Also, we do not want to look at the shift time period from the end as well since we can interpolate those points
     // that come earlier than the expected, during the next run.
-    List<Long[]> timeSlices = getTimeSlices(startTime - serverTimeShiftAdjustment, endTime - serverTimeShiftAdjustment);
+    List<Long[]> timeSlices = getTimeSlices(startTime - serverTimeShiftAdjustment, endTime - serverTimeShiftAdjustment, timeSliceIntervalMillis);
     // Initialize app aggregates for host metrics
     appAggregator.init();
     Map<TimelineClusterMetric, MetricClusterAggregate> aggregateClusterMetrics =
@@ -156,19 +149,6 @@ public class TimelineMetricClusterAggregatorSecond extends AbstractTimelineAggre
     return condition;
   }
 
-  /**
-   * Return time slices to normalize the timeseries data.
-   */
-  List<Long[]> getTimeSlices(long startTime, long endTime) {
-    List<Long[]> timeSlices = new ArrayList<Long[]>();
-    long sliceStartTime = startTime;
-    while (sliceStartTime < endTime) {
-      timeSlices.add(new Long[] { sliceStartTime, sliceStartTime + timeSliceIntervalMillis });
-      sliceStartTime += timeSliceIntervalMillis;
-    }
-    return timeSlices;
-  }
-
   Map<TimelineClusterMetric, MetricClusterAggregate> aggregateMetricsFromResultSet(ResultSet rs, List<Long[]> timeSlices)
       throws SQLException, IOException {
     Map<TimelineClusterMetric, MetricClusterAggregate> aggregateClusterMetrics =
@@ -241,9 +221,25 @@ public class TimelineMetricClusterAggregatorSecond extends AbstractTimelineAggre
       return 0;
     }
 
-    Map<TimelineClusterMetric, Double> clusterMetrics = sliceFromTimelineMetric(metric, timeSlices);
-    int numHosts = 0;
+    boolean skipInterpolationForMetric = false;
+    for (Pattern pattern : skipInterpolationMetricPatterns) {
+      Matcher m = pattern.matcher(metric.getMetricName());
+      if (m.matches()) {
+        skipInterpolationForMetric = true;
+        LOG.debug("Skipping interpolation for " + metric.getMetricName());
+      }
+    }
 
+    Map<TimelineClusterMetric, Double> clusterMetrics = sliceFromTimelineMetric(metric, timeSlices, !skipInterpolationForMetric && interpolationEnabled);
+
+    return aggregateClusterMetricsFromSlices(clusterMetrics, aggregateClusterMetrics, metric.getHostName());
+  }
+
+  protected int aggregateClusterMetricsFromSlices(Map<TimelineClusterMetric, Double> clusterMetrics,
+                                                  Map<TimelineClusterMetric, MetricClusterAggregate> aggregateClusterMetrics,
+                                                  String hostname) {
+
+    int numHosts = 0;
     if (clusterMetrics != null && !clusterMetrics.isEmpty()) {
       for (Map.Entry<TimelineClusterMetric, Double> clusterMetricEntry : clusterMetrics.entrySet()) {
 
@@ -264,191 +260,14 @@ public class TimelineMetricClusterAggregatorSecond extends AbstractTimelineAggre
 
         numHosts = aggregate.getNumberOfHosts();
         // Update app level aggregates
-        appAggregator.processTimelineClusterMetric(clusterMetric, metric.getHostName(), avgValue);
+        appAggregator.processTimelineClusterMetric(clusterMetric, hostname, avgValue);
       }
     }
     return numHosts;
   }
 
-  protected Map<TimelineClusterMetric, Double> sliceFromTimelineMetric(
-    TimelineMetric timelineMetric, List<Long[]> timeSlices) {
-
-    if (timelineMetric.getMetricValues().isEmpty()) {
-      return null;
-    }
-
-    Map<TimelineClusterMetric, Double> timelineClusterMetricMap =
-      new HashMap<TimelineClusterMetric, Double>();
-
-    Long prevTimestamp = -1l;
-    TimelineClusterMetric prevMetric = null;
-    int count = 0;
-    double sum = 0.0;
-
-    Map<Long,Double> timeSliceValueMap = new HashMap<>();
-    for (Map.Entry<Long, Double> metric : timelineMetric.getMetricValues().entrySet()) {
-      // TODO: investigate null values - pre filter
-      if (metric.getValue() == null) {
-        continue;
-      }
-
-      Long timestamp = getSliceTimeForMetric(timeSlices, Long.parseLong(metric.getKey().toString()));
-      if (timestamp != -1) {
-        // Metric is within desired time range
-        TimelineClusterMetric clusterMetric = new TimelineClusterMetric(
-          timelineMetric.getMetricName(),
-          timelineMetric.getAppId(),
-          timelineMetric.getInstanceId(),
-          timestamp);
-
-        if (prevTimestamp < 0 || timestamp.equals(prevTimestamp)) {
-          Double newValue = metric.getValue();
-          if (newValue > 0.0) {
-            sum += newValue;
-            count++;
-          }
-        } else {
-          double metricValue = (count > 0) ? (sum / count) : 0.0;
-            timelineClusterMetricMap.put(prevMetric, metricValue);
-          timeSliceValueMap.put(prevMetric.getTimestamp(), metricValue);
-          sum = metric.getValue();
-          count = sum > 0.0 ? 1 : 0;
-        }
-
-        prevTimestamp = timestamp;
-        prevMetric = clusterMetric;
-      }
-    }
-
-    if (prevTimestamp > 0) {
-      double metricValue = (count > 0) ? (sum / count) : 0.0;
-      timelineClusterMetricMap.put(prevMetric, metricValue);
-      timeSliceValueMap.put(prevTimestamp, metricValue);
-    }
-
-    if (interpolationEnabled) {
-      interpolateMissingPeriods(timelineClusterMetricMap, timelineMetric, timeSlices, timeSliceValueMap);
-    }
-
-    return timelineClusterMetricMap;
-  }
-
-  private void interpolateMissingPeriods(Map<TimelineClusterMetric, Double> timelineClusterMetricMap,
-                                         TimelineMetric timelineMetric,
-                                         List<Long[]> timeSlices,
-                                         Map<Long, Double> timeSliceValueMap) {
-
-    for (Pattern pattern : skipInterpolationMetricPatterns) {
-      Matcher m = pattern.matcher(timelineMetric.getMetricName());
-      if (m.matches()) {
-        LOG.debug("Skipping interpolation for " + timelineMetric.getMetricName());
-        return;
-      }
-    }
-
-    if (StringUtils.isNotEmpty(timelineMetric.getType()) && "COUNTER".equalsIgnoreCase(timelineMetric.getType())) {
-      //For Counter Based metrics, ok to do interpolation and extrapolation
-
-      List<Long> requiredTimestamps = new ArrayList<>();
-      for (Long[] timeSlice : timeSlices) {
-        if (!timeSliceValueMap.containsKey(timeSlice[1])) {
-          requiredTimestamps.add(timeSlice[1]);
-        }
-      }
-      Map<Long, Double> interpolatedValuesMap = PostProcessingUtil.interpolate(timelineMetric.getMetricValues(), requiredTimestamps);
-
-      if (interpolatedValuesMap != null) {
-        for (Map.Entry<Long, Double> entry : interpolatedValuesMap.entrySet()) {
-          Double interpolatedValue = entry.getValue();
-
-          if (interpolatedValue != null) {
-            TimelineClusterMetric clusterMetric = new TimelineClusterMetric(
-              timelineMetric.getMetricName(),
-              timelineMetric.getAppId(),
-              timelineMetric.getInstanceId(),
-              entry.getKey());
-
-            timelineClusterMetricMap.put(clusterMetric, interpolatedValue);
-          } else {
-            LOG.debug("Cannot compute interpolated value, hence skipping.");
-          }
-        }
-      }
-    } else {
-      //For other metrics, ok to do only interpolation
-
-      Double defaultNextSeenValue = null;
-      if (MapUtils.isEmpty(timeSliceValueMap) && MapUtils.isNotEmpty(timelineMetric.getMetricValues())) {
-        //If no value was found within the start_time based slices, but the metric has value in the server_time range,
-        // use that.
-
-        LOG.debug("No value found within range for metric : " + timelineMetric.getMetricName());
-        Map.Entry<Long,Double> firstEntry  = timelineMetric.getMetricValues().firstEntry();
-        defaultNextSeenValue = firstEntry.getValue();
-        LOG.debug("Found a data point outside timeslice range: " + new Date(firstEntry.getKey()) + ": " + defaultNextSeenValue);
-      }
-
-      for (int sliceNum = 0; sliceNum < timeSlices.size(); sliceNum++) {
-        Long[] timeSlice = timeSlices.get(sliceNum);
-
-        if (!timeSliceValueMap.containsKey(timeSlice[1])) {
-          LOG.debug("Found an empty slice : " + new Date(timeSlice[0]) + ", " + new Date(timeSlice[1]));
-
-          Double lastSeenValue = null;
-          int index = sliceNum - 1;
-          Long[] prevTimeSlice = null;
-          while (lastSeenValue == null && index >= 0) {
-            prevTimeSlice = timeSlices.get(index--);
-            lastSeenValue = timeSliceValueMap.get(prevTimeSlice[1]);
-          }
-
-          Double nextSeenValue = null;
-          index = sliceNum + 1;
-          Long[] nextTimeSlice = null;
-          while (nextSeenValue == null && index < timeSlices.size()) {
-            nextTimeSlice = timeSlices.get(index++);
-            nextSeenValue = timeSliceValueMap.get(nextTimeSlice[1]);
-          }
-
-          if (nextSeenValue == null) {
-            nextSeenValue = defaultNextSeenValue;
-          }
-
-          Double interpolatedValue = PostProcessingUtil.interpolate(timeSlice[1],
-            (prevTimeSlice != null ? prevTimeSlice[1] : null), lastSeenValue,
-            (nextTimeSlice != null ? nextTimeSlice[1] : null), nextSeenValue);
-
-          if (interpolatedValue != null) {
-            TimelineClusterMetric clusterMetric = new TimelineClusterMetric(
-              timelineMetric.getMetricName(),
-              timelineMetric.getAppId(),
-              timelineMetric.getInstanceId(),
-              timeSlice[1]);
-
-            LOG.debug("Interpolated value : " + interpolatedValue);
-            timelineClusterMetricMap.put(clusterMetric, interpolatedValue);
-          } else {
-            LOG.debug("Cannot compute interpolated value, hence skipping.");
-          }
-        }
-      }
-    }
-  }
-
-  /**
-   * Return end of the time slice into which the metric fits.
-   */
-  private Long getSliceTimeForMetric(List<Long[]> timeSlices, Long timestamp) {
-    for (Long[] timeSlice : timeSlices) {
-      if (timestamp > timeSlice[0] && timestamp <= timeSlice[1]) {
-        return timeSlice[1];
-      }
-    }
-    return -1l;
-  }
-
   /* Add cluster metric for number of hosts that are hosting an appId */
-  private void processLiveAppCountMetrics(Map<TimelineClusterMetric, MetricClusterAggregate> aggregateClusterMetrics,
+  protected void processLiveAppCountMetrics(Map<TimelineClusterMetric, MetricClusterAggregate> aggregateClusterMetrics,
       Map<String, MutableInt> appHostsCount, long timestamp) {
 
     for (Map.Entry<String, MutableInt> appHostsEntry : appHostsCount.entrySet()) {
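
The slicing and interpolation helpers deleted above move into AggregatorUtils, and getTimeSlices now takes the slice interval as an explicit argument so both the HBase-sourced and the cache-sourced aggregators can reuse it. A sketch of the slicing helper, reconstructed from the deleted method; the real AggregatorUtils signature may differ.

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class TimeSliceSketch {
  // Reconstructed from the method deleted above; the interval is now an
  // explicit argument instead of aggregator state.
  static List<Long[]> getTimeSlices(long startTime, long endTime, long sliceMillis) {
    List<Long[]> timeSlices = new ArrayList<>();
    for (long sliceStart = startTime; sliceStart < endTime; sliceStart += sliceMillis) {
      timeSlices.add(new Long[] { sliceStart, sliceStart + sliceMillis });
    }
    return timeSlices;
  }

  public static void main(String[] args) {
    for (Long[] slice : getTimeSlices(0L, 90_000L, 30_000L)) {
      System.out.println(Arrays.toString(slice)); // [0, 30000], [30000, 60000], [60000, 90000]
    }
  }
}
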
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricClusterAggregatorSecondWithCacheSource.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricClusterAggregatorSecondWithCacheSource.java
new file mode 100644
index 0000000..0c030b6
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricClusterAggregatorSecondWithCacheSource.java
@@ -0,0 +1,132 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators;
+
+import org.apache.commons.lang.mutable.MutableInt;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.metrics2.sink.timeline.MetricClusterAggregate;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.PhoenixHBaseAccessor;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricDistributedCache;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.availability.AggregationTaskRunner;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.availability.MetricCollectorHAController;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.discovery.TimelineMetricMetadataManager;
+
+import java.util.Date;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.AggregatorUtils.getSliceTimeForMetric;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.AggregatorUtils.getTimeSlices;
+
+public class TimelineMetricClusterAggregatorSecondWithCacheSource extends TimelineMetricClusterAggregatorSecond {
+  private TimelineMetricDistributedCache distributedCache;
+  private Long cacheTimeSliceIntervalMillis;
+  public TimelineMetricClusterAggregatorSecondWithCacheSource(AggregationTaskRunner.AGGREGATOR_NAME metricAggregateSecond,
+                                                              TimelineMetricMetadataManager metricMetadataManager,
+                                                              PhoenixHBaseAccessor hBaseAccessor, Configuration metricsConf,
+                                                              String checkpointLocation, long sleepIntervalMillis,
+                                                              int checkpointCutOffMultiplier, String aggregatorDisabledParam,
+                                                              String inputTableName, String outputTableName,
+                                                              Long nativeTimeRangeDelay, Long timeSliceInterval,
+                                                              MetricCollectorHAController haController,
+                                                              TimelineMetricDistributedCache distributedCache,
+                                                              Long cacheTimeSliceIntervalMillis) {
+    super(metricAggregateSecond, metricMetadataManager, hBaseAccessor, metricsConf, checkpointLocation, sleepIntervalMillis, checkpointCutOffMultiplier, aggregatorDisabledParam, inputTableName, outputTableName, nativeTimeRangeDelay, timeSliceInterval, haController);
+    this.distributedCache = distributedCache;
+    this.cacheTimeSliceIntervalMillis = cacheTimeSliceIntervalMillis;
+  }
+
+  @Override
+  public boolean doWork(long startTime, long endTime) {
+    LOG.info("Start aggregation cycle @ " + new Date() + ", " +
+          "startTime = " + new Date(startTime) + ", endTime = " + new Date(endTime));
+    try {
+      Map<String, Double> cacheMetrics;
+      if (LOG.isDebugEnabled()) {
+        cacheMetrics = distributedCache.getPointInTimeCacheMetrics();
+        LOG.debug("Ignite metrics before eviction : " + cacheMetrics);
+      }
+
+      LOG.info("Trying to evict elements from cache");
+      Map<TimelineClusterMetric, MetricClusterAggregate> metricsFromCache = distributedCache.evictMetricAggregates(startTime - serverTimeShiftAdjustment, endTime - serverTimeShiftAdjustment);
+      LOG.info(String.format("Evicted %s elements from cache.", metricsFromCache.size()));
+
+      if (LOG.isDebugEnabled()) {
+        cacheMetrics = distributedCache.getPointInTimeCacheMetrics();
+        LOG.debug("Ignite metrics after eviction : " + cacheMetrics);
+      }
+
+      List<Long[]> timeSlices = getTimeSlices(startTime - serverTimeShiftAdjustment, endTime - serverTimeShiftAdjustment, timeSliceIntervalMillis);
+      Map<TimelineClusterMetric, MetricClusterAggregate> result = aggregateMetricsFromMetricClusterAggregates(metricsFromCache, timeSlices);
+
+      LOG.info("Saving " + result.size() + " metric aggregates.");
+      hBaseAccessor.saveClusterAggregateRecords(result);
+      LOG.info("End aggregation cycle @ " + new Date());
+      return true;
+    } catch (Exception e) {
+      LOG.error("Exception during aggregation. ", e);
+      return false;
+    }
+  }
+
+  // Slices in the cache may differ from the aggregation slices, so they need to be recalculated. Also counts hosted apps.
+  Map<TimelineClusterMetric, MetricClusterAggregate> aggregateMetricsFromMetricClusterAggregates(Map<TimelineClusterMetric, MetricClusterAggregate> metricsFromCache, List<Long[]> timeSlices) {
+    Map<TimelineClusterMetric, MetricClusterAggregate> result = new HashMap<>();
+
+    // Normalize if the slices in the cache differ from the aggregation slices.
+    // TODO: add basic interpolation; the current implementation assumes that cacheTimeSliceIntervalMillis <= timeSliceIntervalMillis.
+    if (cacheTimeSliceIntervalMillis.equals(timeSliceIntervalMillis)) {
+      result = metricsFromCache;
+    } else {
+      for (Map.Entry<TimelineClusterMetric, MetricClusterAggregate> clusterMetricAggregateEntry : metricsFromCache.entrySet()) {
+        Long timestamp = getSliceTimeForMetric(timeSlices, clusterMetricAggregateEntry.getKey().getTimestamp());
+        if (timestamp <= 0) {
+          LOG.warn("Entry doesn't match any slice. Slices : " + timeSlices + " metric timestamp : " + clusterMetricAggregateEntry.getKey().getTimestamp());
+          continue;
+        }
+        TimelineClusterMetric timelineClusterMetric = new TimelineClusterMetric(clusterMetricAggregateEntry.getKey().getMetricName(), clusterMetricAggregateEntry.getKey().getAppId(), clusterMetricAggregateEntry.getKey().getInstanceId(), timestamp);
+        if (result.containsKey(timelineClusterMetric)) {
+          MetricClusterAggregate metricClusterAggregate = result.get(timelineClusterMetric);
+          metricClusterAggregate.updateMax(clusterMetricAggregateEntry.getValue().getMax());
+          metricClusterAggregate.updateMin(clusterMetricAggregateEntry.getValue().getMin());
+          metricClusterAggregate.setSum((metricClusterAggregate.getSum() + clusterMetricAggregateEntry.getValue().getSum()) / 2D);
+          metricClusterAggregate.setNumberOfHosts(Math.max(metricClusterAggregate.getNumberOfHosts(), clusterMetricAggregateEntry.getValue().getNumberOfHosts()));
+        } else {
+          result.put(timelineClusterMetric, clusterMetricAggregateEntry.getValue());
+        }
+      }
+    }
+
+    // TODO: investigate whether this is needed; maybe add a config to enable/disable it.
+    // Count hosted apps.
+    Map<String, MutableInt> hostedAppCounter = new HashMap<>();
+    for (Map.Entry<TimelineClusterMetric, MetricClusterAggregate> clusterMetricAggregateEntry : result.entrySet()) {
+      int numHosts = clusterMetricAggregateEntry.getValue().getNumberOfHosts();
+      String appId = clusterMetricAggregateEntry.getKey().getAppId();
+      if (!hostedAppCounter.containsKey(appId)) {
+        hostedAppCounter.put(appId, new MutableInt(numHosts));
+      } else {
+        int currentHostCount = hostedAppCounter.get(appId).intValue();
+        if (currentHostCount < numHosts) {
+          hostedAppCounter.put(appId, new MutableInt(numHosts));
+        }
+      }
+    }
+
+    // Add liveHosts per AppId metrics.
+    processLiveAppCountMetrics(result, hostedAppCounter, timeSlices.get(timeSlices.size() - 1)[1]);
+
+    return result;
+  }
+
+}
\ No newline at end of file
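
A note on the merge rule in aggregateMetricsFromMetricClusterAggregates: when two finer cache slices land in the same aggregation slice, max and min are widened, the two sums are averaged rather than added, and the host count takes the maximum. A plain-Java sketch of that rule, with simple fields standing in for MetricClusterAggregate:

public class SliceMergeSketch {
  double sum, max, min;
  int numberOfHosts;

  SliceMergeSketch(double sum, double max, double min, int numberOfHosts) {
    this.sum = sum; this.max = max; this.min = min; this.numberOfHosts = numberOfHosts;
  }

  // The merge rule from the patch: widen max/min, average the sums,
  // keep the larger host count.
  void merge(SliceMergeSketch other) {
    max = Math.max(max, other.max);
    min = Math.min(min, other.min);
    sum = (sum + other.sum) / 2d;
    numberOfHosts = Math.max(numberOfHosts, other.numberOfHosts);
  }

  public static void main(String[] args) {
    SliceMergeSketch merged = new SliceMergeSketch(6.0, 3.0, 1.0, 2);
    merged.merge(new SliceMergeSketch(15.0, 6.0, 4.0, 2));
    System.out.println(merged.sum + " " + merged.max + " " + merged.min + " " + merged.numberOfHosts);
    // -> 10.5 6.0 1.0 2
  }
}

Averaging keeps a doubled-up slice from inflating the cluster sum, though it is exact only when at most two cache slices map onto one aggregation slice, consistent with the TODO's assumption that cacheTimeSliceIntervalMillis <= timeSliceIntervalMillis.
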
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/availability/MetricCollectorHAController.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/availability/MetricCollectorHAController.java
index a06f4e8..d387394 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/availability/MetricCollectorHAController.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/availability/MetricCollectorHAController.java
@@ -94,7 +94,7 @@ public class MetricCollectorHAController {
           + zkClientPort +", quorum = " + zkQuorum);
       }
 
-      zkConnectUrl = getZkConnectionUrl(zkClientPort, zkQuorum);
+      zkConnectUrl = configuration.getZkConnectionUrl(zkClientPort, zkQuorum);
 
     } catch (Exception e) {
       LOG.error("Unable to load hbase-site from classpath.", e);
@@ -228,23 +228,6 @@ public class MetricCollectorHAController {
     manager.addLiveInstanceChangeListener(controller);
   }
 
-  private String getZkConnectionUrl(String zkClientPort, String zkQuorum) {
-    StringBuilder sb = new StringBuilder();
-    String[] quorumParts = zkQuorum.split(",");
-    String prefix = "";
-    for (String part : quorumParts) {
-      sb.append(prefix);
-      sb.append(part.trim());
-      if (!part.contains(":")) {
-        sb.append(":");
-        sb.append(zkClientPort);
-      }
-      prefix = ",";
-    }
-
-    return sb.toString();
-  }
-
   public AggregationTaskRunner getAggregationTaskRunner() {
     return aggregationTaskRunner;
   }
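
getZkConnectionUrl moves from the HA controller onto TimelineMetricConfiguration so other callers can share it. For reference, a self-contained sketch of the normalization it performs, taken from the method deleted above: quorum entries without an explicit port get the client port appended.

public class ZkConnectUrlSketch {
  static String getZkConnectionUrl(String zkClientPort, String zkQuorum) {
    StringBuilder sb = new StringBuilder();
    String prefix = "";
    for (String part : zkQuorum.split(",")) {
      sb.append(prefix).append(part.trim());
      if (!part.contains(":")) {
        sb.append(':').append(zkClientPort); // no port given, use the client port
      }
      prefix = ",";
    }
    return sb.toString();
  }

  public static void main(String[] args) {
    System.out.println(getZkConnectionUrl("2181", "zk1, zk2:2182, zk3"));
    // -> zk1:2181,zk2:2182,zk3:2181
  }
}
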
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TimelineMetricHostMetadata.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TimelineMetricHostMetadata.java
index 06e9279..37c6394 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TimelineMetricHostMetadata.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TimelineMetricHostMetadata.java
@@ -18,26 +18,35 @@
 
 package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.discovery;
 
-import java.util.HashSet;
 import java.util.Set;
+import java.util.concurrent.ConcurrentHashMap;
 
 public class TimelineMetricHostMetadata {
-  private Set<String> hostedApps = new HashSet<>();
+  // A concurrent data structure is needed; only the keys are used.
+  private ConcurrentHashMap<String, String> hostedApps = new ConcurrentHashMap<>();
   private byte[] uuid;
 
   // Default constructor
   public TimelineMetricHostMetadata() {
   }
 
-  public TimelineMetricHostMetadata(Set<String> hostedApps) {
+  public TimelineMetricHostMetadata(ConcurrentHashMap<String, String> hostedApps) {
     this.hostedApps = hostedApps;
   }
 
-  public Set<String> getHostedApps() {
+  public TimelineMetricHostMetadata(Set<String> hostedApps) {
+    ConcurrentHashMap<String, String> appIdsMap = new ConcurrentHashMap<>();
+    for (String appId : hostedApps) {
+      appIdsMap.put(appId, appId);
+    }
+    this.hostedApps = appIdsMap;
+  }
+
+  public ConcurrentHashMap<String, String> getHostedApps() {
     return hostedApps;
   }
 
-  public void setHostedApps(Set<String> hostedApps) {
+  public void setHostedApps(ConcurrentHashMap<String, String> hostedApps) {
     this.hostedApps = hostedApps;
   }
 
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TimelineMetricMetadataManager.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TimelineMetricMetadataManager.java
index bd508c4..f9ad773 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TimelineMetricMetadataManager.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TimelineMetricMetadataManager.java
@@ -213,9 +213,9 @@ public class TimelineMetricMetadataManager {
    */
   public void putIfModifiedHostedAppsMetadata(String hostname, String appId) {
     TimelineMetricHostMetadata timelineMetricHostMetadata = HOSTED_APPS_MAP.get(hostname);
-    Set<String> apps = (timelineMetricHostMetadata != null) ? timelineMetricHostMetadata.getHostedApps() : null;
+    ConcurrentHashMap<String, String> apps = (timelineMetricHostMetadata != null) ? timelineMetricHostMetadata.getHostedApps() : null;
     if (apps == null) {
-      apps = new HashSet<>();
+      apps = new ConcurrentHashMap<>();
       if (timelineMetricHostMetadata == null) {
         HOSTED_APPS_MAP.put(hostname, new TimelineMetricHostMetadata(apps));
       } else {
@@ -223,8 +223,8 @@ public class TimelineMetricMetadataManager {
       }
     }
 
-    if (!apps.contains(appId)) {
-      apps.add(appId);
+    if (!apps.containsKey(appId)) {
+      apps.put(appId, appId);
       SYNC_HOSTED_APPS_METADATA.set(true);
     }
   }
@@ -362,8 +362,9 @@ public class TimelineMetricMetadataManager {
 
     String uuidStr = new String(uuid);
     if (uuidHostMap.containsKey(uuidStr)) {
+      //TODO fix the collisions
       LOG.error("Duplicate key computed for " + hostname +", Collides with  " + uuidHostMap.get(uuidStr));
-      return null;
+      return uuid;
     }
 
     if (timelineMetricHostMetadata == null) {
@@ -398,8 +399,15 @@ public class TimelineMetricMetadataManager {
     String uuidStr = new String(uuid);
     if (uuidKeyMap.containsKey(uuidStr) && !uuidKeyMap.get(uuidStr).equals(key)) {
       TimelineMetricMetadataKey collidingKey = (TimelineMetricMetadataKey)uuidKeyMap.get(uuidStr);
+      //TODO fix the collisions
+      /**
+       * 2017-08-23 14:12:35,922 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.discovery.TimelineMetricMetadataManager:
+       * Duplicate key [52, 50, 51, 53, 50, 53, 53, 53, 49, 54, 57, 50, 50, 54, 0, 0]([B@278a93f9) computed for
+       * TimelineClusterMetric{metricName='sdisk_dm-11_write_count', appId='hbase', instanceId='', timestamp=1503497400000}, Collides with
+       * TimelineMetricMetadataKey{metricName='sdisk_dm-20_write_count', appId='hbase', instanceId=''}
+       */
       LOG.error("Duplicate key " + Arrays.toString(uuid) + "(" + uuid +  ") computed for " + timelineClusterMetric.toString() + ", Collides with  " + collidingKey.toString());
-      return null;
+      return uuid;
     }
 
     if (timelineMetricMetadata == null) {
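
Behavior change worth noting in this hunk: on a uuid collision the manager used to return null, effectively dropping the metric; it now logs the clash and returns the computed uuid anyway, as a stop-gap until the TODO is addressed. A minimal sketch of that flow, with a plain map standing in for uuidKeyMap:

import java.util.HashMap;
import java.util.Map;

public class UuidCollisionSketch {
  private final Map<String, String> uuidKeyMap = new HashMap<>();

  byte[] register(byte[] uuid, String key) {
    String uuidStr = new String(uuid);
    String existing = uuidKeyMap.get(uuidStr);
    if (existing != null && !existing.equals(key)) {
      System.err.println("Duplicate key computed for " + key
          + ", collides with " + existing);
      return uuid; // previously: return null, which dropped the metric
    }
    uuidKeyMap.put(uuidStr, key);
    return uuid;
  }
}
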
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TimelineMetricMetadataSync.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TimelineMetricMetadataSync.java
index f808cd7..96af877 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TimelineMetricMetadataSync.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TimelineMetricMetadataSync.java
@@ -134,7 +134,7 @@ public class TimelineMetricMetadataSync implements Runnable {
           // No persistence / stale data in store
           if (persistedData == null || persistedData.isEmpty() ||
             !persistedData.containsKey(cacheEntry.getKey()) ||
-            !persistedData.get(cacheEntry.getKey()).getHostedApps().containsAll(cacheEntry.getValue().getHostedApps())) {
+            !persistedData.get(cacheEntry.getKey()).getHostedApps().keySet().containsAll(cacheEntry.getValue().getHostedApps().keySet())) {
             dataToSync.put(cacheEntry.getKey(), cacheEntry.getValue());
           }
         }
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/uuid/HashBasedUuidGenStrategy.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/uuid/HashBasedUuidGenStrategy.java
index f35c23a..10e9c61 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/uuid/HashBasedUuidGenStrategy.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/uuid/HashBasedUuidGenStrategy.java
@@ -192,6 +192,10 @@ public class HashBasedUuidGenStrategy implements MetricUuidGenStrategy {
       }
     }
 
+    if (numericValue != 0) {
+      seed += numericValue;
+    }
+
     String seedStr = String.valueOf(seed);
     if (seedStr.length() < maxLength) {
       return null;
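
The new seed += numericValue step folds the digits parsed out of a metric name back into the seed, which separates names that differ only in their numeric component, e.g. the sdisk_dm-11_write_count vs sdisk_dm-20_write_count clash quoted in the TimelineMetricMetadataManager hunk above. The sketch below is illustrative only; the real strategy weighs characters differently.

public class NumericSeedSketch {
  static long seed(String metricName) {
    long seed = 0;
    long numericValue = 0;
    for (char c : metricName.toCharArray()) {
      if (Character.isDigit(c)) {
        numericValue = numericValue * 10 + (c - '0'); // collect digits
      } else {
        seed += c;                                    // hash the rest
      }
    }
    if (numericValue != 0) {
      seed += numericValue; // the step added by the patch
    }
    return seed;
  }

  public static void main(String[] args) {
    // Without the numeric step these two seeds would be identical.
    System.out.println(seed("sdisk_dm-11_write_count"));
    System.out.println(seed("sdisk_dm-20_write_count"));
  }
}
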
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/ITPhoenixHBaseAccessor.java b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/ITPhoenixHBaseAccessor.java
index c60554c..57f9796 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/ITPhoenixHBaseAccessor.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/ITPhoenixHBaseAccessor.java
@@ -202,7 +202,7 @@ public class ITPhoenixHBaseAccessor extends AbstractMiniHBaseClusterTest {
     // GIVEN
     TimelineMetricAggregator agg =
       TimelineMetricAggregatorFactory.createTimelineClusterAggregatorSecond(
-        hdb, new Configuration(), new TimelineMetricMetadataManager(new Configuration(), hdb), null);
+        hdb, new Configuration(), new TimelineMetricMetadataManager(new Configuration(), hdb), null, null);
 
     long startTime = System.currentTimeMillis();
     long ctime = startTime + 1;
@@ -242,7 +242,7 @@ public class ITPhoenixHBaseAccessor extends AbstractMiniHBaseClusterTest {
     // GIVEN
     TimelineMetricAggregator agg =
       TimelineMetricAggregatorFactory.createTimelineClusterAggregatorSecond(hdb,
-        new Configuration(), new TimelineMetricMetadataManager(new Configuration(), hdb), null);
+        new Configuration(), new TimelineMetricMetadataManager(new Configuration(), hdb), null, null);
 
     long startTime = System.currentTimeMillis();
     long ctime = startTime + 1;
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricsIgniteCacheTest.java b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricsIgniteCacheTest.java
new file mode 100644
index 0000000..d3c6061
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricsIgniteCacheTest.java
@@ -0,0 +1,296 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline;
+
+
+import junit.framework.Assert;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.metrics2.sink.timeline.MetricClusterAggregate;
+import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.TimelineClusterMetric;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.discovery.TimelineMetricMetadataManager;
+import org.junit.BeforeClass;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.powermock.core.classloader.annotations.PowerMockIgnore;
+import org.powermock.core.classloader.annotations.PrepareForTest;
+import org.powermock.modules.junit4.PowerMockRunner;
+
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.TreeMap;
+
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.CLUSTER_AGGREGATOR_APP_IDS;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.HOST_APP_ID;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.TIMELINE_METRICS_COLLECTOR_IGNITE_NODES;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.AggregatorUtils.getRoundedCheckPointTimeMillis;
+import static org.easymock.EasyMock.anyObject;
+import static org.easymock.EasyMock.createNiceMock;
+import static org.easymock.EasyMock.expect;
+import static org.easymock.EasyMock.replay;
+import static org.powermock.api.easymock.PowerMock.mockStatic;
+import static org.powermock.api.easymock.PowerMock.replayAll;
+
+@RunWith(PowerMockRunner.class)
+@PrepareForTest(TimelineMetricConfiguration.class)
+@PowerMockIgnore("javax.management.*")
+public class TimelineMetricsIgniteCacheTest {
+  private static TimelineMetricsIgniteCache timelineMetricsIgniteCache;
+  @BeforeClass
+  public static void setupConf() throws Exception {
+    TimelineMetricConfiguration conf = new TimelineMetricConfiguration(new
+      Configuration(), new Configuration());
+    mockStatic(TimelineMetricConfiguration.class);
+    expect(TimelineMetricConfiguration.getInstance()).andReturn(conf).anyTimes();
+    conf.getMetricsConf().set(CLUSTER_AGGREGATOR_APP_IDS, "appIdForHostsAggr");
+    conf.getMetricsConf().set(TIMELINE_METRICS_COLLECTOR_IGNITE_NODES, "localhost");
+    replayAll();
+
+    timelineMetricsIgniteCache = new TimelineMetricsIgniteCache();
+  }
+
+  @Test
+  public void putEvictMetricsFromCacheSlicesMerging() throws Exception {
+    long cacheSliceIntervalMillis = 30000L;
+
+    TimelineMetricMetadataManager metricMetadataManagerMock = createNiceMock(TimelineMetricMetadataManager.class);
+    expect(metricMetadataManagerMock.getUuid(anyObject(TimelineClusterMetric.class))).andReturn(new byte[16]).once();
+    replay(metricMetadataManagerMock);
+
+    long startTime = getRoundedCheckPointTimeMillis(System.currentTimeMillis(), cacheSliceIntervalMillis);
+
+    long seconds = 1000;
+    TreeMap<Long, Double> metricValues = new TreeMap<Long, Double>();
+    /*
+
+    0        +30s      +60s
+    |         |         |
+     (1)(2)(3) (4)(5)(6)  h1
+
+    */
+    // Case 1 : data points are distributed equally, no values are lost, single host.
+    metricValues.put(startTime + 4*seconds, 1.0);
+    metricValues.put(startTime + 14*seconds, 2.0);
+    metricValues.put(startTime + 24*seconds, 3.0);
+    metricValues.put(startTime + 34*seconds, 4.0);
+    metricValues.put(startTime + 44*seconds, 5.0);
+    metricValues.put(startTime + 54*seconds, 6.0);
+
+    TimelineMetric timelineMetric = new TimelineMetric("metric1", "host1", "app1", "instance1");
+    timelineMetric.setStartTime(metricValues.firstKey());
+    timelineMetric.addMetricValues(metricValues);
+
+    Collection<TimelineMetric> timelineMetrics = new ArrayList<>();
+    timelineMetrics.add(timelineMetric);
+    timelineMetricsIgniteCache.putMetrics(timelineMetrics, metricMetadataManagerMock);
+    Map<TimelineClusterMetric, MetricClusterAggregate> aggregateMap = timelineMetricsIgniteCache.evictMetricAggregates(startTime, startTime + 120*seconds);
+
+    Assert.assertEquals(aggregateMap.size(), 2);
+    TimelineClusterMetric timelineClusterMetric = new TimelineClusterMetric(timelineMetric.getMetricName(),
+      timelineMetric.getAppId(), timelineMetric.getInstanceId(), startTime + 30*seconds);
+
+    Assert.assertTrue(aggregateMap.containsKey(timelineClusterMetric));
+    Assert.assertEquals(2.0, aggregateMap.get(timelineClusterMetric).getSum());
+
+    timelineClusterMetric.setTimestamp(startTime + 2*30*seconds);
+    Assert.assertTrue(aggregateMap.containsKey(timelineClusterMetric));
+    Assert.assertEquals(5.0, aggregateMap.get(timelineClusterMetric).getSum());
+
+    metricValues.clear();
+    timelineMetrics.clear();
+
+    /*
+
+    0        +30s      +60s
+    |         |         |
+     (1)(2)(3) (4)(5)(6)   h1, h2
+
+    */
+    // Case 2 : data points are distributed equally, no values are lost, two hosts.
+    metricValues.put(startTime + 4*seconds, 1.0);
+    metricValues.put(startTime + 14*seconds, 2.0);
+    metricValues.put(startTime + 24*seconds, 3.0);
+    metricValues.put(startTime + 34*seconds, 4.0);
+    metricValues.put(startTime + 44*seconds, 5.0);
+    metricValues.put(startTime + 54*seconds, 6.0);
+
+    timelineMetric = new TimelineMetric("metric1", "host1", "app1", "instance1");
+    timelineMetric.setMetricValues(metricValues);
+
+    metricValues = new TreeMap<>();
+    metricValues.put(startTime + 5*seconds, 2.0);
+    metricValues.put(startTime + 15*seconds, 4.0);
+    metricValues.put(startTime + 25*seconds, 6.0);
+    metricValues.put(startTime + 35*seconds, 8.0);
+    metricValues.put(startTime + 45*seconds, 10.0);
+    metricValues.put(startTime + 55*seconds, 12.0);
+    TimelineMetric timelineMetric2 = new TimelineMetric("metric1", "host2", "app1", "instance1");
+    timelineMetric2.setMetricValues(metricValues);
+
+    timelineMetrics = new ArrayList<>();
+    timelineMetrics.add(timelineMetric);
+    timelineMetrics.add(timelineMetric2);
+    timelineMetricsIgniteCache.putMetrics(timelineMetrics, metricMetadataManagerMock);
+    aggregateMap = timelineMetricsIgniteCache.evictMetricAggregates(startTime, startTime + 120*seconds);
+
+    Assert.assertEquals(aggregateMap.size(), 2);
+    timelineClusterMetric = new TimelineClusterMetric(timelineMetric.getMetricName(),
+      timelineMetric.getAppId(), timelineMetric.getInstanceId(), startTime + 30*seconds);
+
+    Assert.assertTrue(aggregateMap.containsKey(timelineClusterMetric));
+    Assert.assertEquals(6.0, aggregateMap.get(timelineClusterMetric).getSum());
+
+    timelineClusterMetric.setTimestamp(startTime + 2*30*seconds);
+    Assert.assertTrue(aggregateMap.containsKey(timelineClusterMetric));
+    Assert.assertEquals(15.0, aggregateMap.get(timelineClusterMetric).getSum());
+
+    metricValues.clear();
+    timelineMetrics.clear();
+
+    /*
+
+    0      +30s    +60s    +90s
+    |       |       |       |
+     (1)      (2)                h1
+                (3)       (4)    h2
+                 (5)      (6)    h1
+
+    */
+    // Case 3 : merging host data points, ignore (2) for h1 as it will conflict with (5), two hosts.
+    metricValues = new TreeMap<>();
+    metricValues.put(startTime + 15*seconds, 1.0);
+    metricValues.put(startTime + 45*seconds, 2.0);
+    timelineMetric = new TimelineMetric("metric1", "host1", "app1", "instance1");
+    timelineMetric.setMetricValues(metricValues);
+    timelineMetrics.add(timelineMetric);
+
+    metricValues = new TreeMap<>();
+    metricValues.put(startTime + 45*seconds, 3.0);
+    metricValues.put(startTime + 85*seconds, 4.0);
+    timelineMetric = new TimelineMetric("metric1", "host2", "app1", "instance1");
+    timelineMetric.setMetricValues(metricValues);
+    timelineMetrics.add(timelineMetric);
+
+    metricValues = new TreeMap<>();
+    metricValues.put(startTime + 55*seconds, 5.0);
+    metricValues.put(startTime + 85*seconds, 6.0);
+    timelineMetric = new TimelineMetric("metric1", "host1", "app1", "instance1");
+    timelineMetric.setMetricValues(metricValues);
+    timelineMetrics.add(timelineMetric);
+
+    timelineMetricsIgniteCache.putMetrics(timelineMetrics, metricMetadataManagerMock);
+
+    aggregateMap = timelineMetricsIgniteCache.evictMetricAggregates(startTime, startTime + 120*seconds);
+
+    Assert.assertEquals(aggregateMap.size(), 3);
+    timelineClusterMetric = new TimelineClusterMetric(timelineMetric.getMetricName(),
+      timelineMetric.getAppId(), timelineMetric.getInstanceId(), startTime + 30*seconds);
+
+    Assert.assertTrue(aggregateMap.containsKey(timelineClusterMetric));
+    Assert.assertEquals(1.0, aggregateMap.get(timelineClusterMetric).getSum());
+    Assert.assertEquals(1, aggregateMap.get(timelineClusterMetric).getNumberOfHosts());
+
+    timelineClusterMetric.setTimestamp(startTime + 2*30*seconds);
+    Assert.assertTrue(aggregateMap.containsKey(timelineClusterMetric));
+    Assert.assertEquals(8.0, aggregateMap.get(timelineClusterMetric).getSum());
+    Assert.assertEquals(2, aggregateMap.get(timelineClusterMetric).getNumberOfHosts());
+
+    timelineClusterMetric.setTimestamp(startTime + 3*30*seconds);
+    Assert.assertTrue(aggregateMap.containsKey(timelineClusterMetric));
+    Assert.assertEquals(10.0, aggregateMap.get(timelineClusterMetric).getSum());
+    Assert.assertEquals(2, aggregateMap.get(timelineClusterMetric).getNumberOfHosts());
+
+    metricValues.clear();
+    timelineMetrics.clear();
+
+    Assert.assertEquals(0d, timelineMetricsIgniteCache.getPointInTimeCacheMetrics().get("Cluster_KeySize"));
+  }
+
+  @Test
+  public void updateAppAggregatesFromHostMetricTest() {
+    // Make sure host metrics are aggregated for appIds from "timeline.metrics.service.cluster.aggregator.appIds".
+
+    long cacheSliceIntervalMillis = 30000L;
+
+    TimelineMetricMetadataManager metricMetadataManagerMock = createNiceMock(TimelineMetricMetadataManager.class);
+    expect(metricMetadataManagerMock.getUuid(anyObject(TimelineClusterMetric.class))).andReturn(new byte[16]).once();
+    expect(metricMetadataManagerMock.getHostedAppsCache()).andReturn(new HashMap<>()).anyTimes();
+    replay(metricMetadataManagerMock);
+
+    long startTime = getRoundedCheckPointTimeMillis(System.currentTimeMillis(), cacheSliceIntervalMillis);
+
+    long seconds = 1000;
+
+    TreeMap<Long, Double> metricValues = new TreeMap<Long, Double>();
+    List<TimelineMetric> timelineMetrics = new ArrayList<>();
+    TimelineMetric timelineMetric;
+
+    metricValues = new TreeMap<>();
+    metricValues.put(startTime + 15*seconds, 1.0);
+    metricValues.put(startTime + 55*seconds, 2.0);
+    timelineMetric = new TimelineMetric("host_metric", "host1", HOST_APP_ID, "instance1");
+    timelineMetric.setMetricValues(metricValues);
+    timelineMetrics.add(timelineMetric);
+
+    metricValues = new TreeMap<>();
+    metricValues.put(startTime + 45*seconds, 3.0);
+    metricValues.put(startTime + 85*seconds, 4.0);
+    timelineMetric = new TimelineMetric("app_metric", "host1", "appIdForHostsAggr", "instance1");
+    timelineMetric.setMetricValues(metricValues);
+    timelineMetrics.add(timelineMetric);
+
+    metricValues = new TreeMap<>();
+    metricValues.put(startTime + 85*seconds, 5.0);
+    timelineMetric = new TimelineMetric("host_metric", "host1", HOST_APP_ID, "instance1");
+    timelineMetric.setMetricValues(metricValues);
+    timelineMetrics.add(timelineMetric);
+
+    metricValues = new TreeMap<>();
+    metricValues.put(startTime + 85*seconds, 6.0);
+    timelineMetric = new TimelineMetric("host_metric", "host2", HOST_APP_ID, "instance1");
+    timelineMetric.setMetricValues(metricValues);
+    timelineMetrics.add(timelineMetric);
+
+    timelineMetricsIgniteCache.putMetrics(timelineMetrics, metricMetadataManagerMock);
+
+    Map<TimelineClusterMetric, MetricClusterAggregate> aggregateMap = timelineMetricsIgniteCache.evictMetricAggregates(startTime, startTime + 120*seconds);
+    Assert.assertEquals(aggregateMap.size(), 6);
+    TimelineClusterMetric timelineClusterMetric = new TimelineClusterMetric(timelineMetric.getMetricName(),
+        timelineMetric.getAppId(), timelineMetric.getInstanceId(), startTime + 90*seconds);
+
+    Assert.assertTrue(aggregateMap.containsKey(timelineClusterMetric));
+    Assert.assertEquals(11.0, aggregateMap.get(timelineClusterMetric).getSum());
+
+    timelineClusterMetric = new TimelineClusterMetric("app_metric",
+        "appIdForHostsAggr", "instance1", startTime + 90*seconds);
+    Assert.assertTrue(aggregateMap.containsKey(timelineClusterMetric));
+    Assert.assertEquals(4.0, aggregateMap.get(timelineClusterMetric).getSum());
+
+    timelineClusterMetric = new TimelineClusterMetric("host_metric",
+        "appIdForHostsAggr", "instance1", startTime + 90*seconds);
+    Assert.assertTrue(aggregateMap.containsKey(timelineClusterMetric));
+    Assert.assertEquals(5.0, aggregateMap.get(timelineClusterMetric).getSum());
+
+    Assert.assertEquals(0d, timelineMetricsIgniteCache.getPointInTimeCacheMetrics().get("Cluster_KeySize"));
+  }
+}
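
For readers following the asserts in the first two test cases: within each 30-second slice a host's points are averaged, and the per-host averages are then summed across hosts into the cluster aggregate. A sketch of that arithmetic, matching the expected sums of 2.0/5.0 (one host) and 6.0/15.0 (two hosts):

public class SliceArithmeticSketch {
  static double hostSliceAverage(double... points) {
    double sum = 0;
    for (double p : points) {
      sum += p;
    }
    return sum / points.length;
  }

  public static void main(String[] args) {
    // Case 1, single host: slice1 avg(1,2,3) = 2.0, slice2 avg(4,5,6) = 5.0
    System.out.println(hostSliceAverage(1, 2, 3));                                // 2.0
    System.out.println(hostSliceAverage(4, 5, 6));                                // 5.0
    // Case 2, two hosts: slice sums are the per-host averages added together.
    System.out.println(hostSliceAverage(1, 2, 3) + hostSliceAverage(2, 4, 6));    // 6.0
    System.out.println(hostSliceAverage(4, 5, 6) + hostSliceAverage(8, 10, 12));  // 15.0
  }
}
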
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/AbstractTimelineAggregatorTest.java b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/AbstractTimelineAggregatorTest.java
index ea947d0..b4d0f0a 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/AbstractTimelineAggregatorTest.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/AbstractTimelineAggregatorTest.java
@@ -30,6 +30,8 @@ import java.util.concurrent.atomic.AtomicLong;
 import static junit.framework.Assert.assertEquals;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.AGGREGATOR_CHECKPOINT_DELAY;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.RESULTSET_FETCH_SIZE;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.AggregatorUtils.getRoundedAggregateTimeMillis;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.AggregatorUtils.getRoundedCheckPointTimeMillis;
 
 public class AbstractTimelineAggregatorTest {
 
@@ -114,7 +116,7 @@ public class AbstractTimelineAggregatorTest {
   public void testDoWorkOnZeroDelay() throws Exception {
 
     long currentTime = System.currentTimeMillis();
-    long roundedOffAggregatorTime = AbstractTimelineAggregator.getRoundedCheckPointTimeMillis(currentTime,
+    long roundedOffAggregatorTime = getRoundedCheckPointTimeMillis(currentTime,
       sleepIntervalMillis);
 
     //Test first run of aggregator with no checkpoint
@@ -138,7 +140,7 @@ public class AbstractTimelineAggregatorTest {
     currentTime = System.currentTimeMillis();
     checkPoint.set(currentTime - 16*60*1000); //Old checkpoint
     agg.runOnce(sleepIntervalMillis);
-    long checkPointTime = AbstractTimelineAggregator.getRoundedAggregateTimeMillis(sleepIntervalMillis);
+    long checkPointTime = getRoundedAggregateTimeMillis(sleepIntervalMillis);
     assertEquals("startTime should be zero", checkPointTime - sleepIntervalMillis, startTimeInDoWork.get());
     assertEquals("endTime  should be zero", checkPointTime, endTimeInDoWork.get());
     assertEquals(roundedOffAggregatorTime, checkPoint.get());
@@ -147,10 +149,10 @@ public class AbstractTimelineAggregatorTest {
 
     //Test first run with perfect checkpoint (sleepIntervalMillis back)
     currentTime = System.currentTimeMillis();
-    roundedOffAggregatorTime = AbstractTimelineAggregator.getRoundedCheckPointTimeMillis(currentTime,
+    roundedOffAggregatorTime = getRoundedCheckPointTimeMillis(currentTime,
       sleepIntervalMillis);
     checkPointTime = roundedOffAggregatorTime - sleepIntervalMillis;
-    long expectedCheckPoint = AbstractTimelineAggregator.getRoundedCheckPointTimeMillis(checkPointTime, sleepIntervalMillis);
+    long expectedCheckPoint = getRoundedCheckPointTimeMillis(checkPointTime, sleepIntervalMillis);
     checkPoint.set(checkPointTime);
     agg.runOnce(sleepIntervalMillis);
     assertEquals("startTime should the lower rounded time of the checkpoint time",
@@ -165,7 +167,7 @@ public class AbstractTimelineAggregatorTest {
     currentTime = System.currentTimeMillis();
     checkPoint.set(currentTime - 2*sleepIntervalMillis + 5000);
     agg.runOnce(sleepIntervalMillis);
-    long expectedStartTime = AbstractTimelineAggregator.getRoundedCheckPointTimeMillis(currentTime - 2*sleepIntervalMillis + 5000, sleepIntervalMillis);
+    long expectedStartTime = getRoundedCheckPointTimeMillis(currentTime - 2*sleepIntervalMillis + 5000, sleepIntervalMillis);
     assertEquals("startTime should the lower rounded time of the checkpoint time",
       expectedStartTime, startTimeInDoWork.get());
     assertEquals("startTime should the lower rounded time of the checkpoint time + sleepIntervalMillis",
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/ITClusterAggregator.java b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/ITClusterAggregator.java
index a9f2b4d..1c5f41f 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/ITClusterAggregator.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/ITClusterAggregator.java
@@ -69,7 +69,7 @@ public class ITClusterAggregator extends AbstractMiniHBaseClusterTest {
     // GIVEN
     TimelineMetricAggregator agg =
       TimelineMetricAggregatorFactory.createTimelineClusterAggregatorSecond(hdb,
-        getConfigurationForTest(false), new TimelineMetricMetadataManager(new Configuration(), hdb), null);
+        getConfigurationForTest(false), new TimelineMetricMetadataManager(new Configuration(), hdb), null, null);
     TimelineMetricReadHelper readHelper = new TimelineMetricReadHelper(false);
 
     long startTime = System.currentTimeMillis();
@@ -121,7 +121,7 @@ public class ITClusterAggregator extends AbstractMiniHBaseClusterTest {
     // GIVEN
     TimelineMetricAggregator agg =
       TimelineMetricAggregatorFactory.createTimelineClusterAggregatorSecond(hdb,
-        getConfigurationForTest(false), new TimelineMetricMetadataManager(new Configuration(), hdb), null);
+        getConfigurationForTest(false), new TimelineMetricMetadataManager(new Configuration(), hdb), null, null);
     TimelineMetricReadHelper readHelper = new TimelineMetricReadHelper(false);
 
     long startTime = System.currentTimeMillis();
@@ -196,7 +196,7 @@ public class ITClusterAggregator extends AbstractMiniHBaseClusterTest {
     // GIVEN
     TimelineMetricAggregator agg =
       TimelineMetricAggregatorFactory.createTimelineClusterAggregatorSecond(hdb,
-        getConfigurationForTest(false), new TimelineMetricMetadataManager(new Configuration(), hdb), null);
+        getConfigurationForTest(false), new TimelineMetricMetadataManager(new Configuration(), hdb), null, null);
     TimelineMetricReadHelper readHelper = new TimelineMetricReadHelper(false);
 
     // here we put some metrics that will be aggregated
@@ -479,7 +479,7 @@ public class ITClusterAggregator extends AbstractMiniHBaseClusterTest {
     conf.set(CLUSTER_AGGREGATOR_APP_IDS, "app1");
     TimelineMetricAggregator agg =
       TimelineMetricAggregatorFactory.createTimelineClusterAggregatorSecond(hdb,
-        conf, new TimelineMetricMetadataManager(new Configuration(), hdb), null);
+        conf, new TimelineMetricMetadataManager(new Configuration(), hdb), null, null);
     TimelineMetricReadHelper readHelper = new TimelineMetricReadHelper(false);
 
     long startTime = System.currentTimeMillis();
@@ -529,7 +529,7 @@ public class ITClusterAggregator extends AbstractMiniHBaseClusterTest {
   public void testClusterAggregateMetricNormalization() throws Exception {
     TimelineMetricAggregator agg =
       TimelineMetricAggregatorFactory.createTimelineClusterAggregatorSecond(hdb,
-        getConfigurationForTest(false), new TimelineMetricMetadataManager(new Configuration(), hdb), null);
+        getConfigurationForTest(false), new TimelineMetricMetadataManager(new Configuration(), hdb), null, null);
     TimelineMetricReadHelper readHelper = new TimelineMetricReadHelper(false);
 
     // Sample data
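
The only change threaded through these ITClusterAggregator tests is the extra
trailing null passed to createTimelineClusterAggregatorSecond, which appears to
correspond to the new cache-source parameter introduced by this patch series; the
aggregation behavior under test is unchanged.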
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricClusterAggregatorSecondTest.java b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricClusterAggregatorSecondTest.java
index 937dd80..eb38625 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricClusterAggregatorSecondTest.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricClusterAggregatorSecondTest.java
@@ -18,12 +18,14 @@
 package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators;
 
 
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.AggregatorUtils.getRoundedAggregateTimeMillis;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.AggregatorUtils.getRoundedCheckPointTimeMillis;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.AggregatorUtils.getTimeSlices;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.AggregatorUtils.sliceFromTimelineMetric;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.availability.AggregationTaskRunner.AGGREGATOR_NAME.METRIC_AGGREGATE_SECOND;
-import static org.easymock.EasyMock.anyBoolean;
 import static org.easymock.EasyMock.anyObject;
 import static org.easymock.EasyMock.createNiceMock;
 import static org.easymock.EasyMock.expect;
-import static org.easymock.EasyMock.expectLastCall;
 import static org.easymock.EasyMock.replay;
 
 import java.sql.ResultSet;
@@ -51,23 +53,16 @@ public class TimelineMetricClusterAggregatorSecondTest {
     long sliceInterval = 30000L;
     long metricInterval = 10000L;
 
-    Configuration configuration = new Configuration();
     TimelineMetricMetadataManager metricMetadataManagerMock = createNiceMock(TimelineMetricMetadataManager.class);
     expect(metricMetadataManagerMock.getUuid(anyObject(TimelineClusterMetric.class))).andReturn(new byte[16]).once();
     replay(metricMetadataManagerMock);
 
-    TimelineMetricClusterAggregatorSecond secondAggregator = new TimelineMetricClusterAggregatorSecond(
-      METRIC_AGGREGATE_SECOND, metricMetadataManagerMock, null,
-      configuration, null, aggregatorInterval, 2, "false", "", "",
-      aggregatorInterval, sliceInterval, null);
-
-    secondAggregator.timeSliceIntervalMillis = sliceInterval;
-    long roundedEndTime = AbstractTimelineAggregator.getRoundedAggregateTimeMillis(aggregatorInterval);
+    long roundedEndTime = getRoundedAggregateTimeMillis(aggregatorInterval);
     long roundedStartTime = roundedEndTime - aggregatorInterval;
-    List<Long[]> timeSlices = secondAggregator.getTimeSlices(roundedStartTime ,
-      roundedEndTime);
+    List<Long[]> timeSlices = getTimeSlices(roundedStartTime,
+      roundedEndTime, sliceInterval);
 
-    TreeMap<Long, Double> metricValues = new TreeMap<Long, Double>();
+    TreeMap<Long, Double> metricValues = new TreeMap<>();
 
     long startTime = roundedEndTime - aggregatorInterval;
 
@@ -85,7 +80,7 @@ public class TimelineMetricClusterAggregatorSecondTest {
     counterMetric.setMetricValues(metricValues);
     counterMetric.setType("COUNTER");
 
-    Map<TimelineClusterMetric, Double> timelineClusterMetricMap = secondAggregator.sliceFromTimelineMetric(counterMetric, timeSlices);
+    Map<TimelineClusterMetric, Double> timelineClusterMetricMap = sliceFromTimelineMetric(counterMetric, timeSlices, true);
 
     TimelineClusterMetric timelineClusterMetric = new TimelineClusterMetric(counterMetric.getMetricName(), counterMetric.getAppId(),
       counterMetric.getInstanceId(), 0l);
@@ -104,7 +99,7 @@ public class TimelineMetricClusterAggregatorSecondTest {
     metric.setAppId("TestAppId");
     metric.setMetricValues(metricValues);
 
-    timelineClusterMetricMap = secondAggregator.sliceFromTimelineMetric(metric, timeSlices);
+    timelineClusterMetricMap = sliceFromTimelineMetric(metric, timeSlices, true);
 
     timelineClusterMetric = new TimelineClusterMetric(metric.getMetricName(), metric.getAppId(),
       metric.getInstanceId(), 0l);
@@ -116,7 +111,6 @@ public class TimelineMetricClusterAggregatorSecondTest {
     timelineClusterMetric.setTimestamp(roundedStartTime + 4*sliceInterval);
     Assert.assertTrue(timelineClusterMetricMap.containsKey(timelineClusterMetric));
     Assert.assertEquals(timelineClusterMetricMap.get(timelineClusterMetric), 7.5);
-
   }
 
   @Test
@@ -137,8 +131,8 @@ public class TimelineMetricClusterAggregatorSecondTest {
       aggregatorInterval, 2, "false", "", "", aggregatorInterval, sliceInterval, null
     );
 
-    long startTime = AbstractTimelineAggregator.getRoundedCheckPointTimeMillis(System.currentTimeMillis(),aggregatorInterval);
-    List<Long[]> timeslices = secondAggregator.getTimeSlices(startTime, startTime + aggregatorInterval);
+    long startTime = getRoundedCheckPointTimeMillis(System.currentTimeMillis(), aggregatorInterval);
+    List<Long[]> timeslices = getTimeSlices(startTime, startTime + aggregatorInterval, sliceInterval);
 
     Map<TimelineClusterMetric, MetricClusterAggregate> aggregateClusterMetrics = new HashMap<>();
     long seconds = 1000;
@@ -367,7 +361,7 @@ public class TimelineMetricClusterAggregatorSecondTest {
     long now = System.currentTimeMillis();
     long startTime = now - 120000;
     long seconds = 1000;
-    List<Long[]> slices = secondAggregator.getTimeSlices(startTime, now);
+    List<Long[]> slices = getTimeSlices(startTime, now, sliceInterval);
     ResultSet rs = createNiceMock(ResultSet.class);
 
     TreeMap<Long, Double> metricValues = new TreeMap<>();
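
Taken together, the hunks above show the shape of the refactor: getTimeSlices and
sliceFromTimelineMetric are now static AggregatorUtils functions, with the slice
interval and the interpolation flag passed explicitly rather than read off the
aggregator instance. The resulting call pattern, using only names that appear in
the diff:

    // The boolean is the interpolation flag, inferred from the
    // sliceFromTimelineMetric(metric, timeSlices, true) calls above.
    List<Long[]> slices = AggregatorUtils.getTimeSlices(startTime, endTime, sliceInterval);
    Map<TimelineClusterMetric, Double> sliced =
        AggregatorUtils.sliceFromTimelineMetric(metric, slices, true);
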
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricClusterAggregatorSecondWithCacheSourceTest.java b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricClusterAggregatorSecondWithCacheSourceTest.java
new file mode 100644
index 0000000..7cddb00
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricClusterAggregatorSecondWithCacheSourceTest.java
@@ -0,0 +1,178 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators;
+
+import junit.framework.Assert;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.metrics2.sink.timeline.MetricClusterAggregate;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricsIgniteCache;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.discovery.TimelineMetricMetadataKey;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.discovery.TimelineMetricMetadataManager;
+import org.junit.BeforeClass;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.powermock.core.classloader.annotations.PowerMockIgnore;
+import org.powermock.core.classloader.annotations.PrepareForTest;
+import org.powermock.modules.junit4.PowerMockRunner;
+
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.TIMELINE_METRICS_COLLECTOR_IGNITE_NODES;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.AggregatorUtils.getRoundedCheckPointTimeMillis;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.AggregatorUtils.getTimeSlices;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.availability.AggregationTaskRunner.AGGREGATOR_NAME.METRIC_AGGREGATE_SECOND;
+import static org.easymock.EasyMock.anyObject;
+import static org.easymock.EasyMock.createNiceMock;
+import static org.easymock.EasyMock.expect;
+import static org.easymock.EasyMock.replay;
+import static org.powermock.api.easymock.PowerMock.mockStatic;
+import static org.powermock.api.easymock.PowerMock.replayAll;
+
+@RunWith(PowerMockRunner.class)
+@PrepareForTest(TimelineMetricConfiguration.class)
+@PowerMockIgnore("javax.management.*")
+public class TimelineMetricClusterAggregatorSecondWithCacheSourceTest {
+
+  private static TimelineMetricsIgniteCache timelineMetricsIgniteCache;
+
+  @BeforeClass
+  public static void setupConf() throws Exception {
+    TimelineMetricConfiguration conf = new TimelineMetricConfiguration(new
+        Configuration(), new Configuration());
+    mockStatic(TimelineMetricConfiguration.class);
+    expect(TimelineMetricConfiguration.getInstance()).andReturn(conf).anyTimes();
+    conf.getMetricsConf().set(TIMELINE_METRICS_COLLECTOR_IGNITE_NODES, "localhost");
+    replayAll();
+
+    timelineMetricsIgniteCache = new TimelineMetricsIgniteCache();
+  }
+
+  @Test
+  public void testLiveHostCounterMetrics() throws Exception {
+    long aggregatorInterval = 120000;
+    long sliceInterval = 30000;
+
+    Configuration configuration = new Configuration();
+
+    TimelineMetricMetadataManager metricMetadataManagerMock = createNiceMock(TimelineMetricMetadataManager.class);
+    expect(metricMetadataManagerMock.getMetadataCacheValue((TimelineMetricMetadataKey) anyObject())).andReturn(null).anyTimes();
+    replay(metricMetadataManagerMock);
+
+    TimelineMetricClusterAggregatorSecondWithCacheSource secondAggregator = new TimelineMetricClusterAggregatorSecondWithCacheSource(
+        METRIC_AGGREGATE_SECOND, metricMetadataManagerMock, null, configuration, null,
+        aggregatorInterval, 2, "false", "", "", aggregatorInterval,
+        sliceInterval, null, timelineMetricsIgniteCache, 30L);
+
+    long now = System.currentTimeMillis();
+    long startTime = now - 120000;
+    long seconds = 1000;
+
+    Map<TimelineClusterMetric, MetricClusterAggregate> metricsFromCache = new HashMap<>();
+    metricsFromCache.put(new TimelineClusterMetric("m1", "a1", "i1",startTime + 15 * seconds),
+        new MetricClusterAggregate(1.0, 2, 1.0, 1.0, 1.0));
+    metricsFromCache.put(new TimelineClusterMetric("m2", "a2", "i1",startTime + 18 * seconds),
+        new MetricClusterAggregate(1.0, 5, 1.0, 1.0, 1.0));
+
+    List<Long[]> timeslices = getTimeSlices(startTime, startTime + 120*seconds, 30*seconds);
+    Map<TimelineClusterMetric, MetricClusterAggregate> aggregates = secondAggregator.aggregateMetricsFromMetricClusterAggregates(metricsFromCache, timeslices);
+
+    Assert.assertNotNull(aggregates);
+
+    MetricClusterAggregate a1 = null, a2 = null;
+
+    for (Map.Entry<TimelineClusterMetric, MetricClusterAggregate> m : aggregates.entrySet()) {
+      if (m.getKey().getMetricName().equals("live_hosts") && m.getKey().getAppId().equals("a1")) {
+        a1 = m.getValue();
+      }
+      if (m.getKey().getMetricName().equals("live_hosts") && m.getKey().getAppId().equals("a2")) {
+        a2 = m.getValue();
+      }
+    }
+
+    Assert.assertNotNull(a1);
+    Assert.assertNotNull(a2);
+    Assert.assertEquals(2d, a1.getSum());
+    Assert.assertEquals(5d, a2.getSum());
+  }
+
+  @Test
+  public void testSlicesRecalculation() throws Exception {
+    long aggregatorInterval = 120000;
+    long sliceInterval = 30000;
+
+    Configuration configuration = new Configuration();
+
+    TimelineMetricMetadataManager metricMetadataManagerMock = createNiceMock(TimelineMetricMetadataManager.class);
+    expect(metricMetadataManagerMock.getMetadataCacheValue((TimelineMetricMetadataKey) anyObject())).andReturn(null).anyTimes();
+    replay(metricMetadataManagerMock);
+
+    TimelineMetricClusterAggregatorSecondWithCacheSource secondAggregator = new TimelineMetricClusterAggregatorSecondWithCacheSource(
+        METRIC_AGGREGATE_SECOND, metricMetadataManagerMock, null, configuration, null,
+        aggregatorInterval, 2, "false", "", "", aggregatorInterval,
+        sliceInterval, null, timelineMetricsIgniteCache, 30L);
+
+    long seconds = 1000;
+    long now = getRoundedCheckPointTimeMillis(System.currentTimeMillis(), 120*seconds);
+    long startTime = now - 120*seconds;
+
+    Map<TimelineClusterMetric, MetricClusterAggregate> metricsFromCache = new HashMap<>();
+    metricsFromCache.put(new TimelineClusterMetric("m1", "a1", "i1",startTime + 5 * seconds),
+        new MetricClusterAggregate(1.0, 2, 1.0, 1.0, 1.0));
+    metricsFromCache.put(new TimelineClusterMetric("m1", "a1", "i1",startTime + 25 * seconds),
+        new MetricClusterAggregate(2.0, 2, 1.0, 2.0, 2.0));
+    metricsFromCache.put(new TimelineClusterMetric("m1", "a1", "i1",startTime + 45 * seconds),
+        new MetricClusterAggregate(3.0, 2, 1.0, 1.0, 1.0));
+    metricsFromCache.put(new TimelineClusterMetric("m1", "a1", "i1",startTime + 65 * seconds),
+        new MetricClusterAggregate(4.0, 2, 1.0, 4.0, 4.0));
+    metricsFromCache.put(new TimelineClusterMetric("m1", "a1", "i1",startTime + 85 * seconds),
+        new MetricClusterAggregate(5.0, 2, 1.0, 5.0, 5.0));
+
+    List<Long[]> timeslices = getTimeSlices(startTime, startTime + 120*seconds, 30*seconds);
+
+    Map<TimelineClusterMetric, MetricClusterAggregate> aggregates = secondAggregator.aggregateMetricsFromMetricClusterAggregates(metricsFromCache, timeslices);
+
+    Assert.assertNotNull(aggregates);
+    Assert.assertEquals(4, aggregates.size());
+
+    TimelineClusterMetric timelineClusterMetric = new TimelineClusterMetric("m1", "a1", "i1", startTime + 30*seconds);
+    MetricClusterAggregate metricClusterAggregate = aggregates.get(timelineClusterMetric);
+    Assert.assertNotNull(metricClusterAggregate);
+    Assert.assertEquals(1.5, metricClusterAggregate.getSum());
+    Assert.assertEquals(1d, metricClusterAggregate.getMin());
+    Assert.assertEquals(2d, metricClusterAggregate.getMax());
+    Assert.assertEquals(2, metricClusterAggregate.getNumberOfHosts());
+
+    timelineClusterMetric.setTimestamp(startTime + 60*seconds);
+    metricClusterAggregate = aggregates.get(timelineClusterMetric);
+    Assert.assertNotNull(metricClusterAggregate);
+    Assert.assertEquals(3d, metricClusterAggregate.getSum());
+
+    timelineClusterMetric.setTimestamp(startTime + 90*seconds);
+    metricClusterAggregate = aggregates.get(timelineClusterMetric);
+    Assert.assertNotNull(metricClusterAggregate);
+    Assert.assertEquals(4.5d, metricClusterAggregate.getSum());
+
+    timelineClusterMetric = new TimelineClusterMetric("live_hosts", "a1", null, startTime + 120*seconds);
+    metricClusterAggregate = aggregates.get(timelineClusterMetric);
+    Assert.assertNotNull(metricClusterAggregate);
+    Assert.assertEquals(2d, metricClusterAggregate.getSum());
+  }
+}
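
testSlicesRecalculation fixes the merge rule for cache entries that land in the
same 30-second slice: sums are averaged (1.0 and 2.0 become the asserted 1.5),
min/max span the inputs, and the host count carries through. A rough fold
consistent with those assertions (illustrative; the host-count rule is inferred,
since every input reports 2 hosts):

    // Getters match the ones used in the assertions above.
    static MetricClusterAggregate merge(List<MetricClusterAggregate> inSlice) {
      double sum = 0, min = Double.MAX_VALUE, max = -Double.MAX_VALUE;
      int hosts = 0;
      for (MetricClusterAggregate a : inSlice) {
        sum += a.getSum();
        min = Math.min(min, a.getMin());
        max = Math.max(max, a.getMax());
        hosts = Math.max(hosts, a.getNumberOfHosts());
      }
      return new MetricClusterAggregate(sum / inSlice.size(), hosts, null, max, min);
    }
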
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/availability/MetricCollectorHAControllerTest.java b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/availability/MetricCollectorHAControllerTest.java
index a0bc77f..e14d069 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/availability/MetricCollectorHAControllerTest.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/availability/MetricCollectorHAControllerTest.java
@@ -52,6 +52,7 @@ public class MetricCollectorHAControllerTest extends AbstractMiniHBaseClusterTes
 
     expect(configuration.getClusterZKClientPort()).andReturn(port);
     expect(configuration.getClusterZKQuorum()).andReturn(quorum);
+    expect(configuration.getZkConnectionUrl(port, quorum)).andReturn(quorum + ":" + port);
 
     replay(configuration);
   }
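
The added expectation pins getZkConnectionUrl to a quorum:port shape; a sketch
consistent with the mocked value (an assumption only, since the real method may
expand a comma-separated quorum so each host carries the port):

    // Minimal form implied by the mock above.
    String getZkConnectionUrl(String port, String quorum) {
      return quorum + ":" + port;
    }
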
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TestMetadataManager.java b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TestMetadataManager.java
index f9a1036..3e3b91f 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TestMetadataManager.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TestMetadataManager.java
@@ -120,10 +120,10 @@ public class TestMetadataManager extends AbstractMiniHBaseClusterTest {
     Map<String, TimelineMetricHostMetadata> cachedHostData = metadataManager.getHostedAppsCache();
     Map<String, TimelineMetricHostMetadata> savedHostData = metadataManager.getHostedAppsFromStore();
     Assert.assertEquals(cachedData.size(), savedData.size());
-    Assert.assertEquals("dummy_app1", cachedHostData.get("dummy_host1").getHostedApps().iterator().next());
-    Assert.assertEquals("dummy_app2", cachedHostData.get("dummy_host2").getHostedApps().iterator().next());
-    Assert.assertEquals("dummy_app1", savedHostData.get("dummy_host1").getHostedApps().iterator().next());
-    Assert.assertEquals("dummy_app2", savedHostData.get("dummy_host2").getHostedApps().iterator().next());
+    Assert.assertEquals("dummy_app1", cachedHostData.get("dummy_host1").getHostedApps().keySet().iterator().next());
+    Assert.assertEquals("dummy_app2", cachedHostData.get("dummy_host2").getHostedApps().keySet().iterator().next());
+    Assert.assertEquals("dummy_app1", savedHostData.get("dummy_host1").getHostedApps().keySet().iterator().next());
+    Assert.assertEquals("dummy_app2", savedHostData.get("dummy_host2").getHostedApps().keySet().iterator().next());
 
     Map<String, Set<String>> cachedHostInstanceData = metadataManager.getHostedInstanceCache();
     Map<String, Set<String>> savedHostInstanceData = metadataManager.getHostedInstancesFromStore();
diff --git a/ambari-metrics/pom.xml b/ambari-metrics/pom.xml
index b4b070a..0d4767d 100644
--- a/ambari-metrics/pom.xml
+++ b/ambari-metrics/pom.xml
@@ -206,8 +206,8 @@
         <artifactId>maven-compiler-plugin</artifactId>
         <version>3.2</version>
         <configuration>
-          <source>1.7</source>
-          <target>1.7</target>
+          <source>1.8</source>
+          <target>1.8</target>
         </configuration>
       </plugin>
       <plugin>


[ambari] 04/39: AMBARI-21106 : Ambari Metrics Anomaly detection prototype (Commit 3). (avijayan)

avijayan pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/ambari.git

commit b1ec22e38d60ffdda5cb7a65f9779da9de8b8a3c
Author: Aravindan Vijayan <av...@hortonworks.com>
AuthorDate: Tue May 30 13:35:54 2017 -0700

    AMBARI-21106 : Ambari Metrics Anomaly detection prototype (Commit 3). (avijayan)
---
 .../metrics/alertservice/R/RFunctionInvoker.java   | 15 ++--
 .../src/main/resources/R-scripts/ema.R             | 79 ++++++++++++++++++++++
 .../src/main/resources/R-scripts/hsdev.r           | 60 ++++++++++++++++
 .../src/main/resources/R-scripts/iforest.R         | 35 ++++++++++
 .../src/main/resources/R-scripts/kstest.r          | 21 ++++++
 .../src/main/resources/R-scripts/test.R            | 67 ++++++++++++++++++
 .../src/main/resources/R-scripts/tukeys.r          | 26 +++++++
 .../src/main/resources/R-scripts/util.R            | 19 ++++++
 8 files changed, 312 insertions(+), 10 deletions(-)

diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/R/RFunctionInvoker.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/R/RFunctionInvoker.java
index 8d1e520..71ad66d 100644
--- a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/R/RFunctionInvoker.java
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/R/RFunctionInvoker.java
@@ -31,8 +31,7 @@ public class RFunctionInvoker {
 
     public static ResultSet tukeys(DataSet trainData, DataSet testData, Map<String, String> configs) {
         try {
-            r.eval("library(ambarimetricsAD)");
-            r.eval("source('~/dev/AMS/AD/ambarimetricsAD/org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.alerting.R/tukeys.org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.alerting.R', echo=TRUE)");
+            r.eval("source('tukeys.r', echo=TRUE)");
 
             int n = Integer.parseInt(configs.get("tukeys.n"));
             r.eval("n <- " + n);
@@ -57,8 +56,7 @@ public class RFunctionInvoker {
 
     public static ResultSet ema_global(DataSet trainData, DataSet testData, Map<String, String> configs) {
         try {
-            r.eval("library(ambarimetricsAD)");
-            r.eval("source('~/dev/AMS/AD/ambarimetricsAD/org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.alerting.R/ema.org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.alerting.R', echo=TRUE)");
+            r.eval("source('ema.R', echo=TRUE)");
 
             int n = Integer.parseInt(configs.get("ema.n"));
             r.eval("n <- " + n);
@@ -87,8 +85,7 @@ public class RFunctionInvoker {
 
     public static ResultSet ema_daily(DataSet trainData, DataSet testData, Map<String, String> configs) {
         try {
-            r.eval("library(ambarimetricsAD)");
-            r.eval("source('~/dev/AMS/AD/ambarimetricsAD/org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.alerting.R/ema.org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.alerting.R', echo=TRUE)");
+            r.eval("source('ema.R', echo=TRUE)");
 
             int n = Integer.parseInt(configs.get("ema.n"));
             r.eval("n <- " + n);
@@ -117,8 +114,7 @@ public class RFunctionInvoker {
 
     public static ResultSet ksTest(DataSet trainData, DataSet testData, Map<String, String> configs) {
         try {
-            r.eval("library(ambarimetricsAD)");
-            r.eval("source('~/dev/AMS/AD/ambarimetricsAD/org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.alerting.R/kstest.org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.alerting.R', echo=TRUE)");
+            r.eval("source('kstest.r', echo=TRUE)");
 
             double p_value = Double.parseDouble(configs.get("ks.p_value"));
             r.eval("p_value <- " + p_value);
@@ -144,8 +140,7 @@ public class RFunctionInvoker {
 
     public static ResultSet hsdev(DataSet trainData, DataSet testData, Map<String, String> configs) {
         try {
-            r.eval("library(ambarimetricsAD)");
-            r.eval("source('~/dev/AMS/AD/ambarimetricsAD/org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.alerting.R/hsdev.org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.alerting.R', echo=TRUE)");
+            r.eval("source('hsdev.r', echo=TRUE)");
 
             int n = Integer.parseInt(configs.get("hsdev.n"));
             r.eval("n <- " + n);
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/ema.R b/ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/ema.R
new file mode 100644
index 0000000..d3188f0
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/ema.R
@@ -0,0 +1,79 @@
+# EMA <- w * EMA + (1 - w) * x
+# EMS <- sqrt( w * EMS^2 + (1 - w) * (x - EMA)^2 )
+# Alarm = abs(x - EMA) > n * EMS
+
+ema_global <- function(train_data, test_data, w, n) {
+  
+#  res <- get_data(url)
+#  data <- data.frame(as.numeric(names(res$metrics[[1]]$metrics)), as.numeric(res$metrics[[1]]$metrics))
+#  names(data) <- c("TS", res$metrics[[1]]$metricname)
+#  train_data <- data[which(data$TS >= train_start & data$TS <= train_end), 2]
+#  test_data <- data[which(data$TS >= test_start & data$TS <= test_end), ]
+  
+  anomalies <- data.frame()
+  ema <- 0
+  ems <- 0
+
+  #Train Step
+  for (x in train_data) {
+    ema <- w*ema + (1-w)*x
+    ems <- sqrt(w* ems^2 + (1 - w)*(x - ema)^2)
+  }
+  
+  for ( i in 1:length(test_data[,1])) {
+    x <- test_data[i,2]
+    if (abs(x - ema) > n*ems) {
+      anomaly <- c(as.numeric(test_data[i,1]), x)
+      # print (anomaly)
+      anomalies <- rbind(anomalies, anomaly)
+    }
+    ema <- w*ema + (1-w)*x
+    ems <- sqrt(w* ems^2 + (1 - w)*(x - ema)^2)
+  }
+  
+  if(length(anomalies) > 0) {
+    names(anomalies) <- c("TS", "Value")
+  }
+  return (anomalies)
+}
+
+ema_daily <- function(train_data, test_data, w, n) {
+  
+#  res <- get_data(url)
+#  data <- data.frame(as.numeric(names(res$metrics[[1]]$metrics)), as.numeric(res$metrics[[1]]$metrics))
+#  names(data) <- c("TS", res$metrics[[1]]$metricname)
+#  train_data <- data[which(data$TS >= train_start & data$TS <= train_end), ]
+#  test_data <- data[which(data$TS >= test_start & data$TS <= test_end), ]
+  
+  anomalies <- data.frame()
+  ema <- vector("numeric", 7)
+  ems <- vector("numeric", 7)
+  
+  #Train Step
+  for ( i in 1:length(train_data[,1])) {
+    x <- train_data[i,2]
+    time <- as.POSIXlt(as.numeric(train_data[i,1])/1000, origin = "1970-01-01" ,tz = "GMT")
+    index <- time$wday
+    ema[index] <- w*ema[index] + (1-w)*x
+    ems[index] <- sqrt(w* ems[index]^2 + (1 - w)*(x - ema[index])^2)
+  }
+  
+  for ( i in 1:length(test_data[,1])) {
+    x <- test_data[i,2]
+    time <- as.POSIXlt(as.numeric(test_data[i,1])/1000, origin = "1970-01-01" ,tz = "GMT")
+    index <- time$wday
+    
+    if (abs(x - ema[index+1]) > n*ems[index+1]) {
+      anomaly <- c(as.numeric(test_data[i,1]), x)
+      # print (anomaly)
+      anomalies <- rbind(anomalies, anomaly)
+    }
+    ema[index+1] <- w*ema[index+1] + (1-w)*x
+    ems[index+1] <- sqrt(w* ems[index+1]^2 + (1 - w)*(x - ema[index+1])^2)
+  }
+  
+  if(length(anomalies) > 0) {
+    names(anomalies) <- c("TS", "Value")
+  }
+  return(anomalies)
+}
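
The header comments are the whole method: train an exponentially weighted mean
(EMA) and deviation (EMS) over history, then flag any point sitting more than n
deviations from the mean, updating the state as each test point streams through.
The same recurrence in Java (a sketch; the project's Java port lives under
alertservice.methods.ema):

    // w is the decay weight and n the sensitivity multiplier, as in ema.R.
    double ema = 0, ems = 0;
    for (double x : trainData) {
      ema = w * ema + (1 - w) * x;
      ems = Math.sqrt(w * ems * ems + (1 - w) * (x - ema) * (x - ema));
    }
    for (double x : testData) {
      if (Math.abs(x - ema) > n * ems) {
        // anomaly: x is more than n EMS units from the running mean
      }
      // update after testing, exactly as the R loop does
      ema = w * ema + (1 - w) * x;
      ems = Math.sqrt(w * ems * ems + (1 - w) * (x - ema) * (x - ema));
    }
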
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/hsdev.r b/ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/hsdev.r
new file mode 100644
index 0000000..ff8a8f7
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/hsdev.r
@@ -0,0 +1,60 @@
+hsdev_daily <- function(train_data, test_data, n, num_historic_periods, interval, period) {
+
+  #res <- get_data(url)
+  #data <- data.frame(as.numeric(names(res$metrics[[1]]$metrics)), as.numeric(res$metrics[[1]]$metrics))
+  #names(data) <- c("TS", res$metrics[[1]]$metricname)
+  anomalies <- data.frame()
+
+  granularity <- train_data[2,1] - train_data[1,1]
+  test_start <- test_data[1,1]
+  test_end <- test_data[length(test_data[,1]),1]
+  cat ("\n test_start : ", as.numeric(test_start))
+  train_start <- test_start - num_historic_periods*period
+  cat ("\n train_start : ", as.numeric(train_start))
+  # round to start of day
+  train_start <- train_start - (train_start %% interval)
+  cat ("\n train_start after rounding: ", as.numeric(train_start))
+
+  time <- as.POSIXlt(as.numeric(test_data[1,1])/1000, origin = "1970-01-01" ,tz = "GMT")
+  test_data_day <- time$wday
+
+  h_data <- c()
+  for ( i in length(train_data[,1]):1) {
+    ts <- train_data[i,1]
+    if ( ts < train_start) {
+      cat ("\n Breaking out of loop : ", ts)
+      break
+    }
+    time <- as.POSIXlt(as.numeric(ts)/1000, origin = "1970-01-01" ,tz = "GMT")
+    if (time$wday == test_data_day) {
+      x <- train_data[i,2]
+      h_data <- c(h_data, x)
+    }
+  }
+
+  cat ("\n Train data length : ", length(train_data[,1]))
+  cat ("\n Test data length : ", length(test_data[,1]))
+  cat ("\n Historic data length : ", length(h_data))
+  if (length(h_data) < 2*length(test_data[,1])) {
+    cat ("\nNot enough training data")
+    return (anomalies)
+  }
+
+  past_median <- median(h_data)
+  cat ("\npast_median : ", past_median)
+  past_sd <- sd(h_data)
+  cat ("\npast_sd : ", past_sd)
+  curr_median <- median(test_data[,2])
+  cat ("\ncurr_median : ", curr_median)
+
+  if (abs(curr_median - past_median) > n * past_sd) {
+    anomaly <- c(test_start, test_end, curr_median, past_median, past_sd)
+    anomalies <- rbind(anomalies, anomaly)
+  }
+
+  if(length(anomalies) > 0) {
+    names(anomalies) <- c("TS Start", "TS End", "Current Median", "Past Median", " Past SD")
+  }
+
+  return (anomalies)
+}
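
hsdev_daily is a same-weekday comparison: it gathers historic samples taken on
the same day of week as the test window, then flags the window when its median
drifts more than n historic standard deviations from the historic median. The
core check restated in Java (median/stddev helpers assumed):

    // historic holds the same-weekday training values; window the test values.
    double pastMedian = median(historic);
    double pastSd = stddev(historic);
    double currMedian = median(window);
    boolean anomaly = Math.abs(currMedian - pastMedian) > n * pastSd;
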
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/iforest.R b/ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/iforest.R
new file mode 100644
index 0000000..1e0c534
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/iforest.R
@@ -0,0 +1,35 @@
+ams_iforest <- function(url, train_start, train_end, test_start, test_end, threshold_score) {
+  
+  res <- get_data(url)
+  num_metrics <- length(res$metrics)
+  anomalies <- data.frame()
+  
+  metricname <- res$metrics[[1]]$metricname
+  data <- data.frame(as.numeric(names(res$metrics[[1]]$metrics)), as.numeric(res$metrics[[1]]$metrics))
+  names(data) <- c("TS", res$metrics[[1]]$metricname)
+
+  for (i in 2:num_metrics) {
+    metricname <- res$metrics[[i]]$metricname
+    df <- data.frame(as.numeric(names(res$metrics[[i]]$metrics)), as.numeric(res$metrics[[i]]$metrics))
+    names(df) <- c("TS", res$metrics[[i]]$metricname)
+    data <- merge(data, df)
+  }
+  
+  algo_data <- data[ which(data$TS >= train_start & data$TS <= train_end) , ][c(1:num_metrics+1)]
+  iForest <- IsolationTrees(algo_data)
+  test_data <- data[ which(data$TS >= test_start & data$TS <= test_end) , ]
+  
+  if_res <- AnomalyScore(test_data[c(1:num_metrics+1)], iForest)
+  for (i in 1:length(if_res$outF)) {
+    index <- test_start+i-1
+    if (if_res$outF[i] > threshold_score) {
+      anomaly <- c(test_data[i,1], if_res$outF[i], if_res$pathLength[i])
+      anomalies <- rbind(anomalies, anomaly)
+    } 
+  }
+  
+  if(length(anomalies) > 0) {
+    names(anomalies) <- c("TS", "Anomaly Score", "Path length")
+  }
+  return (anomalies)
+}
\ No newline at end of file
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/kstest.r b/ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/kstest.r
new file mode 100644
index 0000000..af21038
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/kstest.r
@@ -0,0 +1,21 @@
+ams_ks <- function(train_data, test_data, p_value) {
+  
+#  res <- get_data(url)
+#  data <- data.frame(as.numeric(names(res$metrics[[1]]$metrics)), as.numeric(res$metrics[[1]]$metrics))
+#  names(data) <- c("TS", res$metrics[[1]]$metricname)
+#  train_data <- data[which(data$TS >= train_start & data$TS <= train_end), 2]
+#  test_data <- data[which(data$TS >= test_start & data$TS <= test_end), 2]
+  
+  anomalies <- data.frame()
+  res <- ks.test(train_data, test_data[,2])
+  
+  if (res$p.value < p_value) {
+    anomaly <- c(test_data[1,1], test_data[length(test_data[,1]),1], res$statistic, res$p.value)
+    anomalies <- rbind(anomalies, anomaly)
+  }
+ 
+  if(length(anomalies) > 0) {
+    names(anomalies) <- c("TS Start", "TS End", "D", "p-value")
+  }
+  return (anomalies)
+}
\ No newline at end of file
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/test.R b/ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/test.R
new file mode 100644
index 0000000..e66049f
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/test.R
@@ -0,0 +1,67 @@
+tukeys_anomalies <- data.frame()
+ema_global_anomalies <- data.frame()
+ema_daily_anomalies <- data.frame()
+ks_anomalies <- data.frame()
+hsdev_anomalies <- data.frame()
+
+init <- function() {
+  tukeys_anomalies <<- data.frame()
+  ema_global_anomalies <<- data.frame()
+  ema_daily_anomalies <<- data.frame()
+  ks_anomalies <<- data.frame()
+  hsdev_anomalies <<- data.frame()
+}
+
+test_methods <- function(data) {
+
+  init()
+  #res <- get_data(url)
+  #data <- data.frame(as.numeric(names(res$metrics[[1]]$metrics)), as.numeric(res$metrics[[1]]$metrics))
+  #names(data) <- c("TS", res$metrics[[1]]$metricname)
+
+  limit <- data[length(data[,1]),1]
+  step <- data[2,1] - data[1,1]
+
+  train_start <- data[1,1]
+  train_end <- get_next_day_boundary(train_start, step, limit)
+  test_start <- train_end + step
+  test_end <- get_next_day_boundary(test_start, step, limit)
+  i <- 1
+  day <- 24*60*60*1000
+
+  while (test_start < limit) {
+
+    print (i)
+    i <- i + 1
+    train_data <- data[which(data$TS >= train_start & data$TS <= train_end),]
+    test_data <- data[which(data$TS >= test_start & data$TS <= test_end), ]
+
+    #tukeys_anomalies <<- rbind(tukeys_anomalies, ams_tukeys(train_data, test_data, 3))
+    #ema_global_anomalies <<- rbind(ema_global_anomalies, ema_global(train_data, test_data, 0.9, 3))
+    #ema_daily_anomalies <<- rbind(ema_daily_anomalies, ema_daily(train_data, test_data, 0.9, 3))
+    #ks_anomalies <<- rbind(ks_anomalies, ams_ks(train_data, test_data, 0.05))
+    hsdev_train_data <- data[which(data$TS < test_start),]
+    hsdev_anomalies <<- rbind(hsdev_anomalies, hsdev_daily(hsdev_train_data, test_data, 3, 3, day, 7*day))
+
+    train_start <- test_start
+    train_end <- get_next_day_boundary(train_start, step, limit)
+    test_start <- train_end + step
+    test_end <- get_next_day_boundary(test_start, step, limit)
+  }
+  return (hsdev_anomalies)
+}
+
+get_next_day_boundary <- function(start, step, limit) {
+
+  if (start > limit) {
+    return (-1)
+  }
+
+  while (start <= limit) {
+    if (((start %% (24*60*60*1000)) - 28800000) == 0) {
+      return (start)
+    }
+    start <- start + step
+  }
+  return (start)
+}
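
One constant worth noting: 28800000 ms is 8 hours, so get_next_day_boundary steps
timestamps forward until they land on 08:00 GMT, which appears to be the
prototype's working "day boundary" of midnight US Pacific.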
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/tukeys.r b/ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/tukeys.r
new file mode 100644
index 0000000..38f71f2
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/tukeys.r
@@ -0,0 +1,26 @@
+ams_tukeys <- function(train_data, test_data, n) {
+
+#  res <- get_data(url)
+#  data <- data.frame(as.numeric(names(res$metrics[[1]]$metrics)), as.numeric(res$metrics[[1]]$metrics))
+#  names(data) <- c("TS", res$metrics[[1]]$metricname)
+#  train_data <- data[which(data$TS >= train_start & data$TS <= train_end), 2]
+#  test_data <- data[which(data$TS >= test_start & data$TS <= test_end), ]
+
+  anomalies <- data.frame()
+  quantiles <- quantile(train_data[,2])
+  iqr <- quantiles[4] - quantiles[2]
+
+  for ( i in 1:length(test_data[,1])) {
+    x <- test_data[i,2]
+    lb <- quantiles[2] - n*iqr
+    ub <- quantiles[4] + n*iqr
+    if ( (x < lb)  || (x > ub) ) {
+      anomaly <- c(test_data[i,1], x)
+      anomalies <- rbind(anomalies, anomaly)
+    }
+  }
+  if(length(anomalies) > 0) {
+    names(anomalies) <- c("TS", "Value")
+  }
+  return (anomalies)
+}
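
tukeys.r is the textbook Tukey's-fences test: the training data fixes the
quartiles, and any test point outside [Q1 - n*IQR, Q3 + n*IQR] is flagged. The
bounds in Java (quantile computation assumed; n = 1.5 gives the classic outlier
rule, while test.R passes n = 3):

    double iqr = q3 - q1;            // q1/q3: training 25th/75th percentiles
    double lb = q1 - n * iqr;        // lower fence
    double ub = q3 + n * iqr;        // upper fence
    boolean anomaly = (x < lb) || (x > ub);
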
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/util.R b/ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/util.R
new file mode 100644
index 0000000..eb19d37
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/util.R
@@ -0,0 +1,19 @@
+#url_prefix = 'http://104.196.95.78:3000/api/datasources/proxy/1/ws/v1/timeline/metrics?'
+#url_suffix = '&startTime=1459972944&endTime=1491508944&precision=MINUTES'
+#data_url <- paste(url_prefix, query, sep ="")
+#data_url <- paste(data_url, url_suffix, sep="")
+
+get_data <- function(url) {
+  library(rjson)
+  res <- fromJSON(readLines(url)[1])
+  return (res)
+}
+
+find_index <- function(data, ts) {
+  for (i in 1:length(data)) {
+    if (as.numeric(ts) == as.numeric(data[i])) {
+      return (i)
+    }
+  }
+  return (-1)
+}
\ No newline at end of file


[ambari] 06/39: Fixing rat check failures and compilation issues. (avijayan)

avijayan pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/ambari.git

commit d81c072ca7413d92ed6db83edfe83f4d75b41a0a
Author: Aravindan Vijayan <av...@hortonworks.com>
AuthorDate: Wed May 31 11:31:57 2017 -0700

    Fixing rat check failures and compilation issues. (avijayan)
---
 ambari-metrics/ambari-metrics-alertservice/pom.xml | 23 ++++--
 .../ambari/metrics/alertservice/R/AmsRTest.java    | 17 +++++
 .../metrics/alertservice/R/RFunctionInvoker.java   | 17 +++++
 .../metrics/alertservice/common/DataSet.java       | 17 +++++
 .../metrics/alertservice/common/MethodResult.java  | 17 +++++
 .../metrics/alertservice/common/MetricAnomaly.java | 17 +++++
 .../metrics/alertservice/common/ResultSet.java     | 17 +++++
 .../common/SingleValuedTimelineMetric.java         | 17 +++++
 .../alertservice/common/StatisticUtils.java        | 17 +++++
 .../alertservice/common/TimelineMetric.java        | 17 +++++
 .../alertservice/common/TimelineMetrics.java       | 17 +++++
 .../alertservice/methods/MetricAnomalyModel.java   | 17 +++++
 .../metrics/alertservice/methods/ema/EmaDS.java    | 20 +++++-
 .../metrics/alertservice/methods/ema/EmaModel.java | 21 +++++-
 .../alertservice/methods/ema/EmaModelLoader.java   | 17 +++++
 .../alertservice/methods/ema/EmaResult.java        | 17 +++++
 .../alertservice/methods/ema/TestEmaModel.java     | 17 +++++
 .../alertservice/spark/AmsKafkaProducer.java       | 17 +++++
 .../alertservice/spark/AnomalyMetricPublisher.java | 19 ++++-
 .../alertservice/spark/MetricAnomalyDetector.java  | 21 ++++--
 .../src/main/resources/R-scripts/ema.R             | 19 ++++-
 .../src/main/resources/R-scripts/hsdev.r           | 17 +++++
 .../src/main/resources/R-scripts/iforest.R         | 17 +++++
 .../src/main/resources/R-scripts/kstest.r          | 17 +++++
 .../src/main/resources/R-scripts/test.R            | 18 +++++
 .../src/main/resources/R-scripts/tukeys.r          | 17 +++++
 .../src/main/resources/R-scripts/util.R            | 17 +++++
 ambari-metrics/ambari-metrics-spark/pom.xml        | 20 +++++-
 .../metrics/spark/MetricAnomalyDetector.scala      | 16 +++++
 .../ambari/metrics/spark/SparkPhoenixReader.scala  | 17 +++++
 .../ambari-metrics-timelineservice/pom.xml         | 11 +--
 .../timeline/HBaseTimelineMetricsService.java      |  5 +-
 .../cache/InternalMetricsCacheSizeOfEngine.java    | 81 +++++++++++-----------
 ambari-metrics/pom.xml                             |  2 +-
 34 files changed, 566 insertions(+), 67 deletions(-)

diff --git a/ambari-metrics/ambari-metrics-alertservice/pom.xml b/ambari-metrics/ambari-metrics-alertservice/pom.xml
index 10f920a..4afc80f 100644
--- a/ambari-metrics/ambari-metrics-alertservice/pom.xml
+++ b/ambari-metrics/ambari-metrics-alertservice/pom.xml
@@ -1,4 +1,22 @@
 <?xml version="1.0" encoding="UTF-8"?>
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~     http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing, software
+  ~ distributed under the License is distributed on an "AS IS" BASIS,
+  ~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  ~ See the License for the specific language governing permissions and
+  ~ limitations under the License.
+  -->
+
 <project xmlns="http://maven.apache.org/POM/4.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
@@ -26,11 +44,6 @@
     <packaging>jar</packaging>
 
     <dependencies>
-        <dependency>
-            <groupId>org.apache.ambari</groupId>
-            <artifactId>ambari-metrics-common</artifactId>
-            <version>${project.version}</version>
-        </dependency>
 
         <dependency>
             <groupId>commons-lang</groupId>
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/R/AmsRTest.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/R/AmsRTest.java
index 0929f4c..2bbc250 100644
--- a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/R/AmsRTest.java
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/R/AmsRTest.java
@@ -1,3 +1,20 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
 package org.apache.ambari.metrics.alertservice.R;
 
 import org.apache.ambari.metrics.alertservice.common.ResultSet;
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/R/RFunctionInvoker.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/R/RFunctionInvoker.java
index 71ad66d..2713b71 100644
--- a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/R/RFunctionInvoker.java
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/R/RFunctionInvoker.java
@@ -1,3 +1,20 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
 package org.apache.ambari.metrics.alertservice.R;
 
 
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/DataSet.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/DataSet.java
index 47bf9b6..a709c73 100644
--- a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/DataSet.java
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/DataSet.java
@@ -1,3 +1,20 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
 package org.apache.ambari.metrics.alertservice.common;
 
 import java.util.Arrays;
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/MethodResult.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/MethodResult.java
index 915da4c..6bf58df 100644
--- a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/MethodResult.java
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/MethodResult.java
@@ -1,3 +1,20 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
 package org.apache.ambari.metrics.alertservice.common;
 
 public abstract class MethodResult {
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/MetricAnomaly.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/MetricAnomaly.java
index d237bee..4dbb425 100644
--- a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/MetricAnomaly.java
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/MetricAnomaly.java
@@ -1,3 +1,20 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
 package org.apache.ambari.metrics.alertservice.common;
 
 public class MetricAnomaly {
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/ResultSet.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/ResultSet.java
index 96b74e0..9415c1b 100644
--- a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/ResultSet.java
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/ResultSet.java
@@ -1,3 +1,20 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
 package org.apache.ambari.metrics.alertservice.common;
 
 
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/SingleValuedTimelineMetric.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/SingleValuedTimelineMetric.java
index 5118225..acd4452 100644
--- a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/SingleValuedTimelineMetric.java
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/SingleValuedTimelineMetric.java
@@ -1,3 +1,20 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
 package org.apache.ambari.metrics.alertservice.common;
 
 
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/StatisticUtils.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/StatisticUtils.java
index dff56e6..81bd77b 100644
--- a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/StatisticUtils.java
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/StatisticUtils.java
@@ -1,3 +1,20 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
 package org.apache.ambari.metrics.alertservice.common;
 
 
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/TimelineMetric.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/TimelineMetric.java
index 2a73855..88ad834 100644
--- a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/TimelineMetric.java
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/TimelineMetric.java
@@ -1,3 +1,20 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
 package org.apache.ambari.metrics.alertservice.common;
 
 /**
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/TimelineMetrics.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/TimelineMetrics.java
index 500e1e9..7df6a9c 100644
--- a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/TimelineMetrics.java
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/TimelineMetrics.java
@@ -1,3 +1,20 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
 package org.apache.ambari.metrics.alertservice.common;
 
 import org.apache.hadoop.classification.InterfaceAudience;
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/methods/MetricAnomalyModel.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/methods/MetricAnomalyModel.java
index 7ae91a3..af33d26 100644
--- a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/methods/MetricAnomalyModel.java
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/methods/MetricAnomalyModel.java
@@ -1,3 +1,20 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
 package org.apache.ambari.metrics.alertservice.methods;
 
 import org.apache.ambari.metrics.alertservice.common.MetricAnomaly;
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/methods/ema/EmaDS.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/methods/ema/EmaDS.java
index ec548c8..32cd96b 100644
--- a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/methods/ema/EmaDS.java
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/methods/ema/EmaDS.java
@@ -1,3 +1,20 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
 package org.apache.ambari.metrics.alertservice.methods.ema;
 
 import org.apache.commons.logging.Log;
@@ -35,8 +52,6 @@ public class EmaDS implements Serializable {
 
         ema = weight * ema + (1 - weight) * metricValue;
         ems = Math.sqrt(weight * Math.pow(ems, 2.0) + (1 - weight) * Math.pow(metricValue - ema, 2.0));
-
-        System.out.println(ema + ", " + ems);
         LOG.info(ema + ", " + ems);
         return diff > 0 ? new EmaResult(diff) : null;
     }
@@ -44,7 +59,6 @@ public class EmaDS implements Serializable {
     public void update(double metricValue) {
         ema = weight * ema + (1 - weight) * metricValue;
         ems = Math.sqrt(weight * Math.pow(ems, 2.0) + (1 - weight) * Math.pow(metricValue - ema, 2.0));
-        System.out.println(ema + ", " + ems);
         LOG.info(ema + ", " + ems);
     }
 
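
For context on this hunk: EmaDS tracks an exponentially weighted moving average (ema) and moving deviation (ems), and flags a sample whose distance from ema exceeds timessdev moving deviations. Below is a self-contained sketch of the whole test-and-update step, a plausible reconstruction from the visible lines with assumed field names rather than the file's exact code; it also explains the EmaModel change further down from test() to testAndUpdate(), since each tested value now folds into the tracked state.

    // Sketch only: a plausible reconstruction of EmaDS's test-and-update step.
    // Field names (ema, ems, weight, timessdev) are taken from the hunk above.
    public class EmaSketch {
        private double ema;
        private double ems;
        private final double weight;
        private final double timessdev;

        public EmaSketch(double initialValue, double weight, double timessdev) {
            this.ema = initialValue;
            this.weight = weight;
            this.timessdev = timessdev;
        }

        /** Returns the overshoot past timessdev moving deviations, or NaN if normal. */
        public double testAndUpdate(double x) {
            // Deviation beyond the allowed band, measured against the current state.
            double diff = Math.abs(x - ema) - timessdev * ems;
            // Fold the new sample into the tracked average and deviation either way.
            ema = weight * ema + (1 - weight) * x;
            ems = Math.sqrt(weight * ems * ems + (1 - weight) * (x - ema) * (x - ema));
            return diff > 0 ? diff : Double.NaN;
        }
    }
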
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/methods/ema/EmaModel.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/methods/ema/EmaModel.java
index 4aae543..13a0f55 100644
--- a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/methods/ema/EmaModel.java
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/methods/ema/EmaModel.java
@@ -1,3 +1,20 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
 package org.apache.ambari.metrics.alertservice.methods.ema;
 
 import com.google.gson.Gson;
@@ -59,8 +76,6 @@ public class EmaModel implements MetricAnomalyModel, Saveable, Serializable {
         EmaDS emaDS = new EmaDS(metric.getMetricName(), metric.getAppId(), metric.getHostName(), weight, timessdev);
         LOG.info("In EMA Train step");
         for (Long timestamp : metric.getMetricValues().keySet()) {
-            System.out.println(timestamp + " : " + metric.getMetricValues().get(timestamp));
-            LOG.info(timestamp + " : " + metric.getMetricValues().get(timestamp));
             emaDS.update(metric.getMetricValues().get(timestamp));
         }
         trackedEmas.put(key, emaDS);
@@ -83,7 +98,7 @@ public class EmaModel implements MetricAnomalyModel, Saveable, Serializable {
 
         for (Long timestamp : metric.getMetricValues().keySet()) {
             double metricValue = metric.getMetricValues().get(timestamp);
-            MethodResult result = emaDS.test(metricValue);
+            MethodResult result = emaDS.testAndUpdate(metricValue);
             if (result != null) {
                 MetricAnomaly metricAnomaly = new MetricAnomaly(key,timestamp, metricValue, result);
                 anomalies.add(metricAnomaly);
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/methods/ema/EmaModelLoader.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/methods/ema/EmaModelLoader.java
index f0ef340..0205844 100644
--- a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/methods/ema/EmaModelLoader.java
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/methods/ema/EmaModelLoader.java
@@ -1,3 +1,20 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
 package org.apache.ambari.metrics.alertservice.methods.ema;
 
 import com.google.gson.Gson;
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/methods/ema/EmaResult.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/methods/ema/EmaResult.java
index 23f1793..2d24a9c 100644
--- a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/methods/ema/EmaResult.java
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/methods/ema/EmaResult.java
@@ -1,3 +1,20 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
 package org.apache.ambari.metrics.alertservice.methods.ema;
 
 import org.apache.ambari.metrics.alertservice.common.MethodResult;
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/methods/ema/TestEmaModel.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/methods/ema/TestEmaModel.java
index a090786..b851dab 100644
--- a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/methods/ema/TestEmaModel.java
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/methods/ema/TestEmaModel.java
@@ -1,3 +1,20 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
 package org.apache.ambari.metrics.alertservice.methods.ema;
 
 import com.fasterxml.jackson.databind.ObjectMapper;
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/spark/AmsKafkaProducer.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/spark/AmsKafkaProducer.java
index de56825..daaee5c 100644
--- a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/spark/AmsKafkaProducer.java
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/spark/AmsKafkaProducer.java
@@ -1,3 +1,20 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
 package org.apache.ambari.metrics.alertservice.spark;
 
 import com.fasterxml.jackson.databind.JsonNode;
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/spark/AnomalyMetricPublisher.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/spark/AnomalyMetricPublisher.java
index 5a6bb61..d65790e 100644
--- a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/spark/AnomalyMetricPublisher.java
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/spark/AnomalyMetricPublisher.java
@@ -1,3 +1,20 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
 package org.apache.ambari.metrics.alertservice.spark;
 
 import org.apache.ambari.metrics.alertservice.common.MetricAnomaly;
@@ -58,7 +75,6 @@ public class AnomalyMetricPublisher implements Serializable {
     public void publish(List<MetricAnomaly> metricAnomalies) {
         LOG.info("Sending metric anomalies of size : " + metricAnomalies.size());
         List<TimelineMetric> metricList = getTimelineMetricList(metricAnomalies);
-        LOG.info("Sending TimelineMetric list of size : " + metricList.size());
         if (!metricList.isEmpty()) {
             TimelineMetrics timelineMetrics = new TimelineMetrics();
             timelineMetrics.setMetrics(metricList);
@@ -132,7 +148,6 @@ public class AnomalyMetricPublisher implements Serializable {
     }
 
     private boolean emitMetricsJson(String connectUrl, String jsonData) {
-        LOG.info("Metrics Data : " + jsonData);
         int timeout = 10000;
         HttpURLConnection connection = null;
         try {
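
A note on emitMetricsJson above: it POSTs the serialized TimelineMetrics JSON to the collector endpoint with a 10-second timeout. The patch only trims logging here, so the following is just a sketch of that style of emit (the class name, header value, and 200-only success check are assumptions, not the patch's exact body):

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    final class MetricsEmitSketch {
        // Mirrors the emitMetricsJson(String, String) signature seen in the hunk.
        static boolean emitMetricsJson(String connectUrl, String jsonData) {
            HttpURLConnection connection = null;
            try {
                connection = (HttpURLConnection) new URL(connectUrl).openConnection();
                connection.setConnectTimeout(10000);   // the 'timeout' local above
                connection.setReadTimeout(10000);
                connection.setRequestMethod("POST");
                connection.setRequestProperty("Content-Type", "application/json");
                connection.setDoOutput(true);
                try (OutputStream out = connection.getOutputStream()) {
                    out.write(jsonData.getBytes(StandardCharsets.UTF_8));
                }
                return connection.getResponseCode() == 200;
            } catch (Exception e) {
                return false;
            } finally {
                if (connection != null) {
                    connection.disconnect();
                }
            }
        }
    }
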
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/spark/MetricAnomalyDetector.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/spark/MetricAnomalyDetector.java
index ab87a95..3989c67 100644
--- a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/spark/MetricAnomalyDetector.java
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/spark/MetricAnomalyDetector.java
@@ -1,3 +1,20 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
 package org.apache.ambari.metrics.alertservice.spark;
 
 import com.fasterxml.jackson.databind.ObjectMapper;
@@ -76,7 +93,6 @@ public class MetricAnomalyDetector {
         JavaDStream<TimelineMetrics> timelineMetricsStream = messages.map(new Function<Tuple2<String, String>, TimelineMetrics>() {
             @Override
             public TimelineMetrics call(Tuple2<String, String> message) throws Exception {
-                LOG.info(message._2());
                 ObjectMapper mapper = new ObjectMapper();
                 TimelineMetrics metrics = mapper.readValue(message._2, TimelineMetrics.class);
                 return metrics;
@@ -104,15 +120,12 @@ public class MetricAnomalyDetector {
             rdd.foreach(
                     tuple2 -> {
                         TimelineMetrics metrics = tuple2._2();
-                        LOG.info("Received Metric : " + metrics.getMetrics().get(0).getMetricName());
                         for (TimelineMetric metric : metrics.getMetrics()) {
 
                             TimelineMetric timelineMetric =
                                     new TimelineMetric(metric.getMetricName(), metric.getAppId(), metric.getHostName(), metric.getMetricValues());
-                            LOG.info("Models size : " + anomalyDetectionModels.size());
 
                             for (MetricAnomalyModel model : anomalyDetectionModels) {
-                                LOG.info("Testing against Model : " + model.getClass().getCanonicalName());
                                 List<MetricAnomaly> anomalies = model.test(timelineMetric);
                                 anomalyMetricPublisher.publish(anomalies);
                                 for (MetricAnomaly anomaly : anomalies) {
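
One design note on the streaming path above: the map function constructs a new ObjectMapper for every Kafka record, which is comparatively expensive. A common alternative (a sketch of the idiom, not something this patch does) is a single shared mapper, since ObjectMapper is thread-safe once configured:

    import com.fasterxml.jackson.databind.ObjectMapper;

    final class MapperHolder {
        // One configured ObjectMapper can be reused across records and threads.
        static final ObjectMapper MAPPER = new ObjectMapper();
    }

    // Inside call():
    //   TimelineMetrics metrics = MapperHolder.MAPPER.readValue(message._2, TimelineMetrics.class);
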
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/ema.R b/ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/ema.R
index d3188f0..0b66095 100644
--- a/ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/ema.R
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/ema.R
@@ -1,4 +1,21 @@
-# EMA <- w * EMA + (1 - w) * x
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+#  EMA <- w * EMA + (1 - w) * x
 # EMS <- sqrt( w * EMS^2 + (1 - w) * (x - EMA)^2 )
 # Alarm = abs(x - EMA) > n * EMS
 
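# To make the three formulas concrete: with w = 0.9, EMA = 10, EMS = 1 and a
# new sample x = 14 (applying the updates first, then testing the alarm
# against the updated values), EMA becomes 0.9*10 + 0.1*14 = 10.4 and EMS
# becomes sqrt(0.9*1 + 0.1*(14 - 10.4)^2) = sqrt(2.196) ~= 1.48, so the
# deviation |x - EMA| = 3.6 raises an alarm for n = 2 (3.6 > 2.96) but not
# for n = 3 (3.6 < 4.45).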
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/hsdev.r b/ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/hsdev.r
index ff8a8f7..b25e79d 100644
--- a/ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/hsdev.r
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/hsdev.r
@@ -1,3 +1,20 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
 hsdev_daily <- function(train_data, test_data, n, num_historic_periods, interval, period) {
 
   #res <- get_data(url)
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/iforest.R b/ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/iforest.R
index 1e0c534..8956400 100644
--- a/ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/iforest.R
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/iforest.R
@@ -1,3 +1,20 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
 ams_iforest <- function(url, train_start, train_end, test_start, test_end, threshold_score) {
   
   res <- get_data(url)
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/kstest.r b/ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/kstest.r
index af21038..b4dfdcb 100644
--- a/ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/kstest.r
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/kstest.r
@@ -1,3 +1,20 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
 ams_ks <- function(train_data, test_data, p_value) {
   
 #  res <- get_data(url)
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/test.R b/ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/test.R
index e66049f..7650356 100644
--- a/ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/test.R
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/test.R
@@ -1,3 +1,21 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+
 tukeys_anomalies <- data.frame()
 ema_global_anomalies <- data.frame()
 ema_daily_anomalies <- data.frame()
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/tukeys.r b/ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/tukeys.r
index 38f71f2..7fffbdd 100644
--- a/ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/tukeys.r
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/tukeys.r
@@ -1,3 +1,20 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
 ams_tukeys <- function(train_data, test_data, n) {
 
 #  res <- get_data(url)
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/util.R b/ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/util.R
index eb19d37..3827006 100644
--- a/ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/util.R
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/util.R
@@ -1,3 +1,20 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
 #url_prefix = 'http://104.196.95.78:3000/api/datasources/proxy/1/ws/v1/timeline/metrics?'
 #url_suffix = '&startTime=1459972944&endTime=1491508944&precision=MINUTES'
 #data_url <- paste(url_prefix, query, sep ="")
diff --git a/ambari-metrics/ambari-metrics-spark/pom.xml b/ambari-metrics/ambari-metrics-spark/pom.xml
index f1c8a13..4732cb5 100644
--- a/ambari-metrics/ambari-metrics-spark/pom.xml
+++ b/ambari-metrics/ambari-metrics-spark/pom.xml
@@ -1,3 +1,21 @@
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~     http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing, software
+  ~ distributed under the License is distributed on an "AS IS" BASIS,
+  ~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  ~ See the License for the specific language governing permissions and
+  ~ limitations under the License.
+  -->
+
 <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
     <parent>
@@ -65,7 +83,7 @@
         <dependency>
             <groupId>org.apache.ambari</groupId>
             <artifactId>ambari-metrics-alertservice</artifactId>
-            <version>2.5.1.0.0</version>
+            <version>2.0.0.0-SNAPSHOT</version>
         </dependency>
         <dependency>
             <groupId>org.apache.logging.log4j</groupId>
diff --git a/ambari-metrics/ambari-metrics-spark/src/main/scala/org/apache/ambari/metrics/spark/MetricAnomalyDetector.scala b/ambari-metrics/ambari-metrics-spark/src/main/scala/org/apache/ambari/metrics/spark/MetricAnomalyDetector.scala
index d4ed31a..bff094b 100644
--- a/ambari-metrics/ambari-metrics-spark/src/main/scala/org/apache/ambari/metrics/spark/MetricAnomalyDetector.scala
+++ b/ambari-metrics/ambari-metrics-spark/src/main/scala/org/apache/ambari/metrics/spark/MetricAnomalyDetector.scala
@@ -1,3 +1,19 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
 package org.apache.ambari.metrics.spark
 
 
diff --git a/ambari-metrics/ambari-metrics-spark/src/main/scala/org/apache/ambari/metrics/spark/SparkPhoenixReader.scala b/ambari-metrics/ambari-metrics-spark/src/main/scala/org/apache/ambari/metrics/spark/SparkPhoenixReader.scala
index 5ca7b17..3c8e1ed 100644
--- a/ambari-metrics/ambari-metrics-spark/src/main/scala/org/apache/ambari/metrics/spark/SparkPhoenixReader.scala
+++ b/ambari-metrics/ambari-metrics-spark/src/main/scala/org/apache/ambari/metrics/spark/SparkPhoenixReader.scala
@@ -1,3 +1,20 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
 package org.apache.ambari.metrics.spark
 
 import org.apache.ambari.metrics.alertservice.common.TimelineMetric
diff --git a/ambari-metrics/ambari-metrics-timelineservice/pom.xml b/ambari-metrics/ambari-metrics-timelineservice/pom.xml
index 67b7f4b..f3e0041 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/pom.xml
+++ b/ambari-metrics/ambari-metrics-timelineservice/pom.xml
@@ -346,6 +346,12 @@
     </dependency>
 
     <dependency>
+      <groupId>org.apache.ambari</groupId>
+      <artifactId>ambari-metrics-alertservice</artifactId>
+      <version>2.0.0.0-SNAPSHOT</version>
+    </dependency>
+
+    <dependency>
       <groupId>javax.servlet</groupId>
       <artifactId>servlet-api</artifactId>
       <version>2.5</version>
@@ -697,11 +703,6 @@
       <version>1.0.0.0-SNAPSHOT</version>
       <scope>test</scope>
     </dependency>
-      <dependency>
-          <groupId>org.apache.ambari</groupId>
-          <artifactId>ambari-metrics-alertservice</artifactId>
-          <version>2.5.1.0.0</version>
-      </dependency>
   </dependencies>
 
   <profiles>
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/HBaseTimelineMetricsService.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/HBaseTimelineMetricsService.java
index 3558f87..1ba01bc 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/HBaseTimelineMetricsService.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/HBaseTimelineMetricsService.java
@@ -83,8 +83,7 @@ public class HBaseTimelineMetricsService extends AbstractService implements Time
   private Integer defaultTopNHostsLimit;
   private MetricCollectorHAController haController;
   private boolean containerMetricsDisabled = false;
-  private AmsKafkaProducer kafkaProducer = new AmsKafkaProducer("104.196.85.21:6667");
-
+  private AmsKafkaProducer kafkaProducer;
   /**
    * Construct the service.
    *
@@ -144,6 +143,8 @@ public class HBaseTimelineMetricsService extends AbstractService implements Time
         LOG.info("Using group by aggregators for aggregating host and cluster metrics.");
       }
 
+      kafkaProducer = new AmsKafkaProducer(metricsConf.get("kafka.bootstrap.servers")); //104.196.85.21:6667
+
       // Start the cluster aggregator second
       TimelineMetricAggregator secondClusterAggregator =
         TimelineMetricAggregatorFactory.createTimelineClusterAggregatorSecond(
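
Since the broker list now comes from configuration instead of the hardcoded address, a slightly more defensive variant (a sketch; the property name kafka.bootstrap.servers is the one read above, while the null guard is an addition) would skip producer creation when the property is unset:

    // Sketch: only build the Kafka producer when brokers are configured,
    // so collectors without a Kafka sink never construct one against null.
    String brokers = metricsConf.get("kafka.bootstrap.servers");
    if (brokers != null && !brokers.trim().isEmpty()) {
      kafkaProducer = new AmsKafkaProducer(brokers);
    }
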
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/source/cache/InternalMetricsCacheSizeOfEngine.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/source/cache/InternalMetricsCacheSizeOfEngine.java
index d1a1a89..071dcd4 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/source/cache/InternalMetricsCacheSizeOfEngine.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/source/cache/InternalMetricsCacheSizeOfEngine.java
@@ -23,44 +23,45 @@ import org.slf4j.LoggerFactory;
 import net.sf.ehcache.pool.Size;
 import net.sf.ehcache.pool.SizeOfEngine;
 
-public class InternalMetricsCacheSizeOfEngine extends TimelineMetricsEhCacheSizeOfEngine {
-  private final static Logger LOG = LoggerFactory.getLogger(InternalMetricsCacheSizeOfEngine.class);
-
-  private InternalMetricsCacheSizeOfEngine(SizeOfEngine underlying) {
-    super(underlying);
-  }
-
-  public InternalMetricsCacheSizeOfEngine() {
-    // Invoke default constructor in base class
-  }
-
-  @Override
-  public Size sizeOf(Object key, Object value, Object container) {
-    try {
-      LOG.debug("BEGIN - Sizeof, key: {}, value: {}", key, value);
-      long size = 0;
-      if (key instanceof InternalMetricCacheKey) {
-        InternalMetricCacheKey metricCacheKey = (InternalMetricCacheKey) key;
-        size += reflectionSizeOf.sizeOf(metricCacheKey.getMetricName());
-        size += reflectionSizeOf.sizeOf(metricCacheKey.getAppId());
-        size += reflectionSizeOf.sizeOf(metricCacheKey.getInstanceId()); // null safe
-        size += reflectionSizeOf.sizeOf(metricCacheKey.getHostname());
-      }
-      if (value instanceof InternalMetricCacheValue) {
-        size += getValueMapSize(((InternalMetricCacheValue) value).getMetricValues());
-      }
-      // Mark size as not being exact
-      return new Size(size, false);
-    } finally {
-      LOG.debug("END - Sizeof, key: {}", key);
-    }
-  }
-
-  @Override
-  public SizeOfEngine copyWith(int maxDepth, boolean abortWhenMaxDepthExceeded) {
-    LOG.debug("Copying tracing sizeof engine, maxdepth: {}, abort: {}",
-      maxDepth, abortWhenMaxDepthExceeded);
-
-    return new InternalMetricsCacheSizeOfEngine(underlying.copyWith(maxDepth, abortWhenMaxDepthExceeded));
-  }
+public class InternalMetricsCacheSizeOfEngine {
+// extends TimelineMetricsEhCacheSizeOfEngine {
+//  private final static Logger LOG = LoggerFactory.getLogger(InternalMetricsCacheSizeOfEngine.class);
+//
+//  private InternalMetricsCacheSizeOfEngine(SizeOfEngine underlying) {
+//    super(underlying);
+//  }
+//
+//  public InternalMetricsCacheSizeOfEngine() {
+//    // Invoke default constructor in base class
+//  }
+//
+//  @Override
+//  public Size sizeOf(Object key, Object value, Object container) {
+//    try {
+//      LOG.debug("BEGIN - Sizeof, key: {}, value: {}", key, value);
+//      long size = 0;
+//      if (key instanceof InternalMetricCacheKey) {
+//        InternalMetricCacheKey metricCacheKey = (InternalMetricCacheKey) key;
+//        size += reflectionSizeOf.sizeOf(metricCacheKey.getMetricName());
+//        size += reflectionSizeOf.sizeOf(metricCacheKey.getAppId());
+//        size += reflectionSizeOf.sizeOf(metricCacheKey.getInstanceId()); // null safe
+//        size += reflectionSizeOf.sizeOf(metricCacheKey.getHostname());
+//      }
+//      if (value instanceof InternalMetricCacheValue) {
+//        size += getValueMapSize(((InternalMetricCacheValue) value).getMetricValues());
+//      }
+//      // Mark size as not being exact
+//      return new Size(size, false);
+//    } finally {
+//      LOG.debug("END - Sizeof, key: {}", key);
+//    }
+//  }
+//
+//  @Override
+//  public SizeOfEngine copyWith(int maxDepth, boolean abortWhenMaxDepthExceeded) {
+//    LOG.debug("Copying tracing sizeof engine, maxdepth: {}, abort: {}",
+//      maxDepth, abortWhenMaxDepthExceeded);
+//
+//    return new InternalMetricsCacheSizeOfEngine(underlying.copyWith(maxDepth, abortWhenMaxDepthExceeded));
+//  }
 }
diff --git a/ambari-metrics/pom.xml b/ambari-metrics/pom.xml
index 79ea06f..b4b070a 100644
--- a/ambari-metrics/pom.xml
+++ b/ambari-metrics/pom.xml
@@ -29,12 +29,12 @@
     <module>ambari-metrics-kafka-sink</module>
     <module>ambari-metrics-storm-sink</module>
     <module>ambari-metrics-storm-sink-legacy</module>
+    <module>ambari-metrics-alertservice</module>
     <module>ambari-metrics-timelineservice</module>
     <module>ambari-metrics-host-monitoring</module>
     <module>ambari-metrics-grafana</module>
     <module>ambari-metrics-assembly</module>
     <module>ambari-metrics-host-aggregator</module>
-    <module>ambari-metrics-alertservice</module>
     <module>ambari-metrics-spark</module>
   </modules>
   <properties>


[ambari] 31/39: AMBARI-22688. Fix AMS compilation issues and unit test with hbase, hadoop and phoenix upgraded. (swagle)

avijayan pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/ambari.git

commit 61772212a0c9502e95c06d957d9df9326178b2d5
Author: Siddharth Wagle <sw...@hortonworks.com>
AuthorDate: Thu Dec 21 15:56:53 2017 -0800

    AMBARI-22688. Fix AMS compilation issues and unit test with hbase, hadoop and phoenix upgraded. (swagle)
---
 .../conf/unix/ambari-metrics-collector             |    6 +-
 .../ambari-metrics-timelineservice/pom.xml         |    6 +-
 ...istoryServer.java => AMSApplicationServer.java} |  111 +-
 .../ApplicationHistoryClientService.java           |  215 ---
 .../ApplicationHistoryManager.java                 |  146 --
 .../ApplicationHistoryManagerImpl.java             |  250 ----
 .../ApplicationHistoryReader.java                  |  117 --
 .../ApplicationHistoryStore.java                   |   37 -
 .../ApplicationHistoryWriter.java                  |  112 --
 .../FileSystemApplicationHistoryStore.java         |  784 -----------
 .../MemoryApplicationHistoryStore.java             |  274 ----
 .../NullApplicationHistoryStore.java               |  127 --
 .../metrics/timeline/PhoenixHBaseAccessor.java     |  141 +-
 .../timeline/TimelineMetricConfiguration.java      |    3 -
 .../timeline/query/PhoenixConnectionProvider.java  |    3 +-
 .../metrics/timeline/query/PhoenixTransactSQL.java |    3 +
 .../timeline/EntityIdentifier.java                 |  100 --
 .../timeline/LeveldbTimelineStore.java             | 1473 --------------------
 .../timeline/MemoryTimelineStore.java              |  360 -----
 .../webapp/AHSController.java                      |   55 -
 .../webapp/AHSLogsPage.java                        |   55 -
 .../applicationhistoryservice/webapp/AHSView.java  |   90 --
 .../webapp/AHSWebApp.java                          |   66 -
 .../webapp/AHSWebServices.java                     |  162 ---
 .../AMSController.java}                            |   23 +-
 .../webapp/{ContainerPage.java => AMSWebApp.java}  |   29 +-
 .../webapp/AppAttemptPage.java                     |   69 -
 .../applicationhistoryservice/webapp/AppPage.java  |   71 -
 .../applicationhistoryservice/webapp/NavBlock.java |   51 -
 .../webapp/TimelineWebServices.java                |  250 +---
 .../ApplicationHistoryStoreTestUtils.java          |   84 --
 .../TestApplicationHistoryClientService.java       |  209 ---
 .../TestApplicationHistoryManagerImpl.java         |   76 -
 .../TestApplicationHistoryServer.java              |  267 ----
 .../TestFileSystemApplicationHistoryStore.java     |  233 ----
 .../TestMemoryApplicationHistoryStore.java         |  206 ---
 .../timeline/AbstractMiniHBaseClusterTest.java     |    6 +-
 .../metrics/timeline/ITPhoenixHBaseAccessor.java   |   47 +-
 .../timeline/TestLeveldbTimelineStore.java         |  253 ----
 .../timeline/TestMemoryTimelineStore.java          |   83 --
 .../timeline/TimelineStoreTestUtils.java           |  789 -----------
 .../webapp/TestAHSWebApp.java                      |  199 ---
 .../webapp/TestAHSWebServices.java                 |  302 ----
 .../webapp/TestTimelineWebServices.java            |  297 +---
 ambari-metrics/pom.xml                             |   20 +-
 45 files changed, 241 insertions(+), 8019 deletions(-)

diff --git a/ambari-metrics/ambari-metrics-timelineservice/conf/unix/ambari-metrics-collector b/ambari-metrics/ambari-metrics-timelineservice/conf/unix/ambari-metrics-collector
index 552be48..de764ec 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/conf/unix/ambari-metrics-collector
+++ b/ambari-metrics/ambari-metrics-timelineservice/conf/unix/ambari-metrics-collector
@@ -25,7 +25,7 @@ HBASE_RS_PID=/var/run/ams-hbase/hbase-${USER}-regionserver.pid
 
 HBASE_DIR=/usr/lib/ams-hbase
 
-DAEMON_NAME=timelineserver
+DAEMON_NAME=ams-metrics-collector
 
 COLLECTOR_CONF_DIR=/etc/ambari-metrics-collector/conf
 HBASE_CONF_DIR=/etc/ams-hbase/conf
@@ -238,7 +238,7 @@ function start()
     echo "$(date) Launching in distributed mode. Assuming Hbase daemons up and running." | tee -a $STARTUPFILE
   fi
 
-	CLASS='org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryServer'
+	CLASS='org.apache.hadoop.yarn.server.applicationhistoryservice.AMSApplicationServer'
 	# YARN_OPTS="${YARN_OPTS} ${YARN_TIMELINESERVER_OPTS}"
 	# if [[ -n "${YARN_TIMELINESERVER_HEAPSIZE}" ]]; then
 	#   JAVA_HEAP_MAX="-Xmx${YARN_TIMELINESERVER_HEAPSIZE}m"
@@ -263,7 +263,7 @@ function start()
   sleep 2
 
   echo "Verifying ${METRIC_COLLECTOR} process status..." | tee -a $STARTUPFILE
-  if [ -z "`ps ax | grep -w ${PID} | grep ApplicationHistoryServer`" ]; then
+  if [ -z "`ps ax | grep -w ${PID} | grep AMSApplicationServer`" ]; then
     if [ -s ${OUTFILE} ]; then
       echo "ERROR: ${METRIC_COLLECTOR} start failed. For more details, see ${OUTFILE}:" | tee -a $STARTUPFILE
       echo "===================="
diff --git a/ambari-metrics/ambari-metrics-timelineservice/pom.xml b/ambari-metrics/ambari-metrics-timelineservice/pom.xml
index 7794a11..e6a7e64 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/pom.xml
+++ b/ambari-metrics/ambari-metrics-timelineservice/pom.xml
@@ -34,9 +34,9 @@
     <!-- Needed for generating FindBugs warnings using parent pom -->
     <!--<yarn.basedir>${project.parent.parent.basedir}</yarn.basedir>-->
     <protobuf.version>2.5.0</protobuf.version>
-    <hadoop.version>2.7.3.2.6.4.0-91</hadoop.version>
-    <phoenix.version>4.7.0.2.6.4.0-91</phoenix.version>
-    <hbase.version>1.1.2.2.6.4.0-91</hbase.version>
+    <hadoop.version>3.0.0.3.0.0.0-623</hadoop.version>
+    <phoenix.version>5.0.0.3.0.0.0-623</phoenix.version>
+    <hbase.version>2.0.0.3.0.0.0-623</hbase.version>
   </properties>
 
   <build>
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryServer.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/AMSApplicationServer.java
similarity index 54%
rename from ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryServer.java
rename to ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/AMSApplicationServer.java
index 331670d..f576362 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryServer.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/AMSApplicationServer.java
@@ -26,9 +26,7 @@ import org.apache.hadoop.http.HttpConfig;
 import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
 import org.apache.hadoop.metrics2.source.JvmMetrics;
 import org.apache.hadoop.service.CompositeService;
-import org.apache.hadoop.service.Service;
 import org.apache.hadoop.util.ExitUtil;
-import org.apache.hadoop.util.ReflectionUtils;
 import org.apache.hadoop.util.ShutdownHookManager;
 import org.apache.hadoop.util.StringUtils;
 import org.apache.hadoop.yarn.YarnUncaughtExceptionHandler;
@@ -37,57 +35,42 @@ import org.apache.hadoop.yarn.exceptions.YarnRuntimeException;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.HBaseTimelineMetricsService;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricStore;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.timeline.MemoryTimelineStore;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.timeline.TimelineStore;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.timeline.LeveldbTimelineStore;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.webapp.AHSWebApp;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.webapp.AMSWebApp;
 import org.apache.hadoop.yarn.webapp.WebApp;
 import org.apache.hadoop.yarn.webapp.WebApps;
 
 import com.google.common.annotations.VisibleForTesting;
 
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.DISABLE_APPLICATION_TIMELINE_STORE;
-
 /**
- * History server that keeps track of all types of history in the cluster.
- * Application specific history to start with.
+ * Metrics collector web server
  */
-public class ApplicationHistoryServer extends CompositeService {
+public class AMSApplicationServer extends CompositeService {
 
   public static final int SHUTDOWN_HOOK_PRIORITY = 30;
-  private static final Log LOG =
-    LogFactory.getLog(ApplicationHistoryServer.class);
+  private static final Log LOG = LogFactory.getLog(AMSApplicationServer.class);
 
-  ApplicationHistoryClientService ahsClientService;
-  ApplicationHistoryManager historyManager;
-  TimelineStore timelineStore;
   TimelineMetricStore timelineMetricStore;
   private WebApp webApp;
   private TimelineMetricConfiguration metricConfiguration;
 
-  public ApplicationHistoryServer() {
-    super(ApplicationHistoryServer.class.getName());
+  public AMSApplicationServer() {
+    super(AMSApplicationServer.class.getName());
   }
 
   @Override
   protected void serviceInit(Configuration conf) throws Exception {
     metricConfiguration = TimelineMetricConfiguration.getInstance();
     metricConfiguration.initialize();
-    historyManager = createApplicationHistory();
-    ahsClientService = createApplicationHistoryClientService(historyManager);
-    addService(ahsClientService);
-    addService((Service) historyManager);
-    timelineStore = createTimelineStore(conf);
     timelineMetricStore = createTimelineMetricStore(conf);
-    addIfService(timelineStore);
     addIfService(timelineMetricStore);
     super.serviceInit(conf);
   }
 
   @Override
   protected void serviceStart() throws Exception {
-    DefaultMetricsSystem.initialize("ApplicationHistoryServer");
-    JvmMetrics.initSingleton("ApplicationHistoryServer", null);
+    DefaultMetricsSystem.initialize("AmbariMetricsSystem");
+    JvmMetrics.initSingleton("AmbariMetricsSystem", null);
 
     startWebApp();
     super.serviceStart();
@@ -102,66 +85,30 @@ public class ApplicationHistoryServer extends CompositeService {
     DefaultMetricsSystem.shutdown();
     super.serviceStop();
   }
-
-  @Private
-  @VisibleForTesting
-  public ApplicationHistoryClientService getClientService() {
-    return this.ahsClientService;
-  }
-
-  protected ApplicationHistoryClientService createApplicationHistoryClientService(
-          ApplicationHistoryManager historyManager) {
-    return new ApplicationHistoryClientService(historyManager, metricConfiguration);
-  }
-
-  protected ApplicationHistoryManager createApplicationHistory() {
-    return new ApplicationHistoryManagerImpl();
-  }
-
-  protected ApplicationHistoryManager getApplicationHistory() {
-    return this.historyManager;
-  }
-
-  static ApplicationHistoryServer launchAppHistoryServer(String[] args) {
-    Thread
-      .setDefaultUncaughtExceptionHandler(new YarnUncaughtExceptionHandler());
-    StringUtils.startupShutdownMessage(ApplicationHistoryServer.class, args,
-      LOG);
-    ApplicationHistoryServer appHistoryServer = null;
+  
+  static AMSApplicationServer launchAppHistoryServer(String[] args) {
+    Thread.setDefaultUncaughtExceptionHandler(new YarnUncaughtExceptionHandler());
+    StringUtils.startupShutdownMessage(AMSApplicationServer.class, args, LOG);
+    AMSApplicationServer amsApplicationServer = null;
     try {
-      appHistoryServer = new ApplicationHistoryServer();
+      amsApplicationServer = new AMSApplicationServer();
       ShutdownHookManager.get().addShutdownHook(
-        new CompositeServiceShutdownHook(appHistoryServer),
+        new CompositeServiceShutdownHook(amsApplicationServer),
         SHUTDOWN_HOOK_PRIORITY);
       YarnConfiguration conf = new YarnConfiguration();
-      appHistoryServer.init(conf);
-      appHistoryServer.start();
+      amsApplicationServer.init(conf);
+      amsApplicationServer.start();
     } catch (Throwable t) {
-      LOG.fatal("Error starting ApplicationHistoryServer", t);
-      ExitUtil.terminate(-1, "Error starting ApplicationHistoryServer");
+      LOG.fatal("Error starting AMSApplicationServer", t);
+      ExitUtil.terminate(-1, "Error starting AMSApplicationServer");
     }
-    return appHistoryServer;
+    return amsApplicationServer;
   }
 
   public static void main(String[] args) {
     launchAppHistoryServer(args);
   }
 
-  protected ApplicationHistoryManager createApplicationHistoryManager(
-      Configuration conf) {
-    return new ApplicationHistoryManagerImpl();
-  }
-
-  protected TimelineStore createTimelineStore(Configuration conf) {
-    if (conf.getBoolean(DISABLE_APPLICATION_TIMELINE_STORE, true)) {
-      LOG.info("Explicitly disabled application timeline store.");
-      return new MemoryTimelineStore();
-    }
-    return ReflectionUtils.newInstance(conf.getClass(
-        YarnConfiguration.TIMELINE_SERVICE_STORE, LeveldbTimelineStore.class,
-        TimelineStore.class), conf);
-  }
-
   protected TimelineMetricStore createTimelineMetricStore(Configuration conf) {
     LOG.info("Creating metrics store.");
     return new HBaseTimelineMetricsService(metricConfiguration);
@@ -174,7 +121,7 @@ public class ApplicationHistoryServer extends CompositeService {
     } catch (Exception e) {
       throw new ExceptionInInitializerError("Cannot find bind address");
     }
-    LOG.info("Instantiating AHSWebApp at " + bindAddress);
+    LOG.info("Instantiating metrics collector at " + bindAddress);
     try {
       Configuration conf = metricConfiguration.getMetricsConf();
       conf.set("hadoop.http.max.threads", String.valueOf(metricConfiguration
@@ -184,25 +131,15 @@ public class ApplicationHistoryServer extends CompositeService {
           HttpConfig.Policy.HTTP_ONLY.name()));
       webApp =
           WebApps
-            .$for("applicationhistory", ApplicationHistoryClientService.class,
-              ahsClientService, "ws")
+            .$for("ambarimetrics", null, null, "ws")
             .withHttpPolicy(conf, policy)
             .at(bindAddress)
-            .start(new AHSWebApp(timelineStore, timelineMetricStore,
-              ahsClientService));
+            .start(new AMSWebApp(timelineMetricStore));
     } catch (Exception e) {
-      String msg = "AHSWebApp failed to start.";
+      String msg = "AMSWebApp failed to start.";
       LOG.error(msg, e);
       throw new YarnRuntimeException(msg, e);
     }
   }
-
-  /**
-   * @return ApplicationTimelineStore
-   */
-  @Private
-  @VisibleForTesting
-  public TimelineStore getTimelineStore() {
-    return timelineStore;
-  }
+
 }
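
AMSApplicationServer is a thin CompositeService: every child registered through addService()/addIfService() in serviceInit() is initialized with the parent, started in registration order, and stopped in reverse. Below is a minimal, runnable sketch of that lifecycle, assuming only hadoop-common on the classpath; SketchCompositeServer and SketchStore are illustrative names, not part of this patch.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.service.AbstractService;
    import org.apache.hadoop.service.CompositeService;

    public class SketchCompositeServer extends CompositeService {

      // Stand-in child service, analogous to the timeline metric store above.
      static class SketchStore extends AbstractService {
        SketchStore() { super("SketchStore"); }
        @Override protected void serviceStart() throws Exception {
          System.out.println("store started");
          super.serviceStart();
        }
      }

      public SketchCompositeServer() {
        super(SketchCompositeServer.class.getName());
      }

      @Override protected void serviceInit(Configuration conf) throws Exception {
        addService(new SketchStore()); // children inherit init/start/stop
        super.serviceInit(conf);
      }

      public static void main(String[] args) {
        SketchCompositeServer server = new SketchCompositeServer();
        server.init(new Configuration()); // INITED: children initialized
        server.start();                   // STARTED: children started in order
        server.stop();                    // STOPPED: children stopped in reverse
      }
    }
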
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryClientService.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryClientService.java
deleted file mode 100644
index 08beb5d..0000000
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryClientService.java
+++ /dev/null
@@ -1,215 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.yarn.server.applicationhistoryservice;
-
-import java.io.IOException;
-import java.net.InetSocketAddress;
-import java.util.ArrayList;
-
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-import org.apache.hadoop.classification.InterfaceAudience.Private;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.ipc.Server;
-import org.apache.hadoop.net.NetUtils;
-import org.apache.hadoop.service.AbstractService;
-import org.apache.hadoop.yarn.api.ApplicationHistoryProtocol;
-import org.apache.hadoop.yarn.api.protocolrecords.CancelDelegationTokenRequest;
-import org.apache.hadoop.yarn.api.protocolrecords.CancelDelegationTokenResponse;
-import org.apache.hadoop.yarn.api.protocolrecords.GetApplicationAttemptReportRequest;
-import org.apache.hadoop.yarn.api.protocolrecords.GetApplicationAttemptReportResponse;
-import org.apache.hadoop.yarn.api.protocolrecords.GetApplicationAttemptsRequest;
-import org.apache.hadoop.yarn.api.protocolrecords.GetApplicationAttemptsResponse;
-import org.apache.hadoop.yarn.api.protocolrecords.GetApplicationReportRequest;
-import org.apache.hadoop.yarn.api.protocolrecords.GetApplicationReportResponse;
-import org.apache.hadoop.yarn.api.protocolrecords.GetApplicationsRequest;
-import org.apache.hadoop.yarn.api.protocolrecords.GetApplicationsResponse;
-import org.apache.hadoop.yarn.api.protocolrecords.GetContainerReportRequest;
-import org.apache.hadoop.yarn.api.protocolrecords.GetContainerReportResponse;
-import org.apache.hadoop.yarn.api.protocolrecords.GetContainersRequest;
-import org.apache.hadoop.yarn.api.protocolrecords.GetContainersResponse;
-import org.apache.hadoop.yarn.api.protocolrecords.GetDelegationTokenRequest;
-import org.apache.hadoop.yarn.api.protocolrecords.GetDelegationTokenResponse;
-import org.apache.hadoop.yarn.api.protocolrecords.RenewDelegationTokenRequest;
-import org.apache.hadoop.yarn.api.protocolrecords.RenewDelegationTokenResponse;
-import org.apache.hadoop.yarn.api.records.ApplicationAttemptReport;
-import org.apache.hadoop.yarn.api.records.ApplicationId;
-import org.apache.hadoop.yarn.api.records.ApplicationReport;
-import org.apache.hadoop.yarn.api.records.ContainerReport;
-import org.apache.hadoop.yarn.conf.YarnConfiguration;
-import org.apache.hadoop.yarn.exceptions.ApplicationAttemptNotFoundException;
-import org.apache.hadoop.yarn.exceptions.ApplicationNotFoundException;
-import org.apache.hadoop.yarn.exceptions.ContainerNotFoundException;
-import org.apache.hadoop.yarn.exceptions.YarnException;
-import org.apache.hadoop.yarn.ipc.YarnRPC;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration;
-
-public class ApplicationHistoryClientService extends AbstractService implements
-  ApplicationHistoryProtocol {
-  private static final Log LOG = LogFactory
-    .getLog(ApplicationHistoryClientService.class);
-  private ApplicationHistoryManager history;
-  private Server server;
-  private InetSocketAddress bindAddress;
-  private TimelineMetricConfiguration metricConfiguration;
-
-  public ApplicationHistoryClientService(ApplicationHistoryManager history) {
-    super("ApplicationHistoryClientService");
-    this.history = history;
-  }
-
-  public ApplicationHistoryClientService(ApplicationHistoryManager history,
-                           TimelineMetricConfiguration metricConfiguration) {
-    this(history);
-    this.metricConfiguration = metricConfiguration;
-  }
-
-  protected void serviceStart() throws Exception {
-    Configuration conf = getConfig();
-    YarnRPC rpc = YarnRPC.create(conf);
-    InetSocketAddress address =
-      NetUtils.createSocketAddr(metricConfiguration.getTimelineServiceRpcAddress(),
-        YarnConfiguration.DEFAULT_TIMELINE_SERVICE_PORT);
-
-    server =
-        rpc.getServer(ApplicationHistoryProtocol.class, this,
-          address, conf, null, metricConfiguration.getTimelineMetricsServiceHandlerThreadCount());
-
-    server.start();
-    this.bindAddress =
-        conf.updateConnectAddr(YarnConfiguration.TIMELINE_SERVICE_ADDRESS,
-          server.getListenerAddress());
-    LOG.info("Instantiated ApplicationHistoryClientService at "
-        + this.bindAddress);
-
-    super.serviceStart();
-  }
-
-  @Override
-  protected void serviceStop() throws Exception {
-    if (server != null) {
-      server.stop();
-    }
-    super.serviceStop();
-  }
-
-  @Private
-  public ApplicationHistoryProtocol getClientHandler() {
-    return this;
-  }
-
-  @Private
-  public InetSocketAddress getBindAddress() {
-    return this.bindAddress;
-  }
-
-
-
-  @Override
-  public CancelDelegationTokenResponse cancelDelegationToken(
-    CancelDelegationTokenRequest request) throws YarnException, IOException {
-    // TODO Auto-generated method stub
-    return null;
-  }
-
-  @Override
-  public GetApplicationAttemptReportResponse getApplicationAttemptReport(
-    GetApplicationAttemptReportRequest request) throws YarnException,
-    IOException {
-    try {
-      GetApplicationAttemptReportResponse response =
-        GetApplicationAttemptReportResponse.newInstance(history
-          .getApplicationAttempt(request.getApplicationAttemptId()));
-      return response;
-    } catch (IOException e) {
-      throw new ApplicationAttemptNotFoundException(e.getMessage());
-    }
-  }
-
-  @Override
-  public GetApplicationAttemptsResponse getApplicationAttempts(
-    GetApplicationAttemptsRequest request) throws YarnException,
-    IOException {
-    GetApplicationAttemptsResponse response =
-      GetApplicationAttemptsResponse
-        .newInstance(new ArrayList<ApplicationAttemptReport>(history
-          .getApplicationAttempts(request.getApplicationId()).values()));
-    return response;
-  }
-
-  @Override
-  public GetApplicationReportResponse getApplicationReport(
-    GetApplicationReportRequest request) throws YarnException, IOException {
-    try {
-      ApplicationId applicationId = request.getApplicationId();
-      GetApplicationReportResponse response =
-        GetApplicationReportResponse.newInstance(history
-          .getApplication(applicationId));
-      return response;
-    } catch (IOException e) {
-      throw new ApplicationNotFoundException(e.getMessage());
-    }
-  }
-
-  @Override
-  public GetApplicationsResponse getApplications(
-    GetApplicationsRequest request) throws YarnException, IOException {
-    GetApplicationsResponse response =
-      GetApplicationsResponse.newInstance(new ArrayList<ApplicationReport>(
-        history.getApplications(request.getLimit()).values()));
-    return response;
-  }
-
-  @Override
-  public GetContainerReportResponse getContainerReport(
-    GetContainerReportRequest request) throws YarnException, IOException {
-    try {
-      GetContainerReportResponse response =
-        GetContainerReportResponse.newInstance(history.getContainer(request
-          .getContainerId()));
-      return response;
-    } catch (IOException e) {
-      throw new ContainerNotFoundException(e.getMessage());
-    }
-  }
-
-  @Override
-  public GetContainersResponse getContainers(GetContainersRequest request)
-    throws YarnException, IOException {
-    GetContainersResponse response =
-      GetContainersResponse.newInstance(new ArrayList<ContainerReport>(
-        history.getContainers(request.getApplicationAttemptId()).values()));
-    return response;
-  }
-
-  @Override
-  public GetDelegationTokenResponse getDelegationToken(
-    GetDelegationTokenRequest request) throws YarnException, IOException {
-    // TODO Auto-generated method stub
-    return null;
-  }
-
-  @Override
-  public RenewDelegationTokenResponse renewDelegationToken(
-    RenewDelegationTokenRequest request) throws YarnException, IOException {
-    // TODO Auto-generated method stub
-    return null;
-  }
-
-}
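
The deleted client service resolved its RPC bind address with NetUtils.createSocketAddr(target, defaultPort), where the second argument is a fallback applied only when the configured value omits a port. A small standalone sketch of just that resolution step, assuming hadoop-common; the port literals 8188 and 10200 are illustrative (10200 mirrors the YARN timeline RPC default).

    import java.net.InetSocketAddress;

    import org.apache.hadoop.net.NetUtils;

    public class SketchBindAddress {
      public static void main(String[] args) {
        // Explicit port in the target string wins over the fallback.
        InetSocketAddress a = NetUtils.createSocketAddr("0.0.0.0:8188", 10200);
        // No port in the target string: the fallback port is applied.
        InetSocketAddress b = NetUtils.createSocketAddr("0.0.0.0", 10200);
        System.out.println(a.getPort() + " " + b.getPort()); // 8188 10200
      }
    }
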
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryManager.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryManager.java
deleted file mode 100644
index 5ddb3af..0000000
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryManager.java
+++ /dev/null
@@ -1,146 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.yarn.server.applicationhistoryservice;
-
-import java.io.IOException;
-import java.util.Map;
-
-import org.apache.hadoop.classification.InterfaceAudience.Private;
-import org.apache.hadoop.classification.InterfaceAudience.Public;
-import org.apache.hadoop.classification.InterfaceStability.Unstable;
-import org.apache.hadoop.yarn.api.records.ApplicationAttemptId;
-import org.apache.hadoop.yarn.api.records.ApplicationAttemptReport;
-import org.apache.hadoop.yarn.api.records.ApplicationId;
-import org.apache.hadoop.yarn.api.records.ApplicationReport;
-import org.apache.hadoop.yarn.api.records.ContainerId;
-import org.apache.hadoop.yarn.api.records.ContainerReport;
-import org.apache.hadoop.yarn.exceptions.YarnException;
-
-@Private
-@Unstable
-public interface ApplicationHistoryManager {
-  /**
-   * This method returns the {@link ApplicationReport} for the specified
-   * {@link ApplicationId}.
-   *
-   * @param appId
-   *
-   * @return {@link ApplicationReport} for the ApplicationId.
-   * @throws YarnException
-   * @throws IOException
-   */
-  @Public
-  @Unstable
-  ApplicationReport getApplication(ApplicationId appId) throws YarnException,
-    IOException;
-
-  /**
-   * This method returns the given number of
-   * {@link ApplicationReport}s.
-   *
-   * @param appsNum
-   *
-   * @return map of {@link ApplicationId} to {@link ApplicationReport}s.
-   * @throws YarnException
-   * @throws IOException
-   */
-  @Public
-  @Unstable
-  Map<ApplicationId, ApplicationReport>
-  getApplications(long appsNum) throws YarnException,
-    IOException;
-
-  /**
-   * Application can have multiple application attempts
-   * {@link ApplicationAttemptReport}. This method returns all
-   * {@link ApplicationAttemptReport}s for the Application.
-   *
-   * @param appId
-   *
-   * @return all {@link ApplicationAttemptReport}s for the Application.
-   * @throws YarnException
-   * @throws IOException
-   */
-  @Public
-  @Unstable
-  Map<ApplicationAttemptId, ApplicationAttemptReport> getApplicationAttempts(
-    ApplicationId appId) throws YarnException, IOException;
-
-  /**
-   * This method returns {@link ApplicationAttemptReport} for the specified
-   * {@link ApplicationAttemptId}.
-   *
-   * @param appAttemptId
-   *          {@link ApplicationAttemptId}
-   * @return {@link ApplicationAttemptReport} for ApplicationAttemptId
-   * @throws YarnException
-   * @throws IOException
-   */
-  @Public
-  @Unstable
-  ApplicationAttemptReport getApplicationAttempt(
-    ApplicationAttemptId appAttemptId) throws YarnException, IOException;
-
-  /**
-   * This method returns {@link ContainerReport} for specified
-   * {@link ContainerId}.
-   *
-   * @param containerId
-   *          {@link ContainerId}
-   * @return {@link ContainerReport} for ContainerId
-   * @throws YarnException
-   * @throws IOException
-   */
-  @Public
-  @Unstable
-  ContainerReport getContainer(ContainerId containerId) throws YarnException,
-    IOException;
-
-  /**
-   * This method returns {@link ContainerReport} for specified
-   * {@link ApplicationAttemptId}.
-   *
-   * @param appAttemptId
-   *          {@link ApplicationAttemptId}
-   * @return {@link ContainerReport} for ApplicationAttemptId
-   * @throws YarnException
-   * @throws IOException
-   */
-  @Public
-  @Unstable
-  ContainerReport getAMContainer(ApplicationAttemptId appAttemptId)
-    throws YarnException, IOException;
-
-  /**
-   * This method returns Map of {@link ContainerId} to {@link ContainerReport}
-   * for specified {@link ApplicationAttemptId}.
-   *
-   * @param appAttemptId
-   *          {@link ApplicationAttemptId}
-   * @return Map of {@link ContainerId} to {@link ContainerReport} for
-   *         ApplicationAttemptId
-   * @throws YarnException
-   * @throws IOException
-   */
-  @Public
-  @Unstable
-  Map<ContainerId, ContainerReport> getContainers(
-    ApplicationAttemptId appAttemptId) throws YarnException, IOException;
-
-}
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryManagerImpl.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryManagerImpl.java
deleted file mode 100644
index d699264..0000000
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryManagerImpl.java
+++ /dev/null
@@ -1,250 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.yarn.server.applicationhistoryservice;
-
-import java.io.IOException;
-import java.util.HashMap;
-import java.util.Map;
-import java.util.Map.Entry;
-
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-import org.apache.hadoop.classification.InterfaceAudience.Private;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.service.AbstractService;
-import org.apache.hadoop.util.ReflectionUtils;
-import org.apache.hadoop.yarn.api.records.ApplicationAttemptId;
-import org.apache.hadoop.yarn.api.records.ApplicationAttemptReport;
-import org.apache.hadoop.yarn.api.records.ApplicationId;
-import org.apache.hadoop.yarn.api.records.ApplicationReport;
-import org.apache.hadoop.yarn.api.records.ContainerId;
-import org.apache.hadoop.yarn.api.records.ContainerReport;
-import org.apache.hadoop.yarn.conf.YarnConfiguration;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.records.ApplicationAttemptHistoryData;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.records.ApplicationHistoryData;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.records.ContainerHistoryData;
-import org.apache.hadoop.yarn.webapp.util.WebAppUtils;
-
-import com.google.common.annotations.VisibleForTesting;
-
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.DISABLE_APPLICATION_TIMELINE_STORE;
-
-public class ApplicationHistoryManagerImpl extends AbstractService implements
-    ApplicationHistoryManager {
-  private static final Log LOG = LogFactory
-    .getLog(ApplicationHistoryManagerImpl.class);
-  private static final String UNAVAILABLE = "N/A";
-
-  private ApplicationHistoryStore historyStore;
-  private String serverHttpAddress;
-
-  public ApplicationHistoryManagerImpl() {
-    super(ApplicationHistoryManagerImpl.class.getName());
-  }
-
-  @Override
-  protected void serviceInit(Configuration conf) throws Exception {
-    LOG.info("ApplicationHistory Init");
-    historyStore = createApplicationHistoryStore(conf);
-    historyStore.init(conf);
-    serverHttpAddress = WebAppUtils.getHttpSchemePrefix(conf) +
-        WebAppUtils.getAHSWebAppURLWithoutScheme(conf);
-    super.serviceInit(conf);
-  }
-
-  @Override
-  protected void serviceStart() throws Exception {
-    LOG.info("Starting ApplicationHistory");
-    historyStore.start();
-    super.serviceStart();
-  }
-
-  @Override
-  protected void serviceStop() throws Exception {
-    LOG.info("Stopping ApplicationHistory");
-    historyStore.stop();
-    super.serviceStop();
-  }
-
-  protected ApplicationHistoryStore createApplicationHistoryStore(
-      Configuration conf) {
-    if (conf.getBoolean(DISABLE_APPLICATION_TIMELINE_STORE, true)) {
-      LOG.info("Explicitly disabled application timeline store.");
-      return new NullApplicationHistoryStore();
-    }
-    return ReflectionUtils.newInstance(conf.getClass(
-      YarnConfiguration.APPLICATION_HISTORY_STORE,
-      NullApplicationHistoryStore.class,
-      ApplicationHistoryStore.class), conf);
-  }
-
-  @Override
-  public ContainerReport getAMContainer(ApplicationAttemptId appAttemptId)
-      throws IOException {
-    ApplicationReport app =
-        getApplication(appAttemptId.getApplicationId());
-    return convertToContainerReport(historyStore.getAMContainer(appAttemptId),
-        app == null ? null : app.getUser());
-  }
-
-  @Override
-  public Map<ApplicationId, ApplicationReport> getApplications(long appsNum)
-    throws IOException {
-    Map<ApplicationId, ApplicationHistoryData> histData =
-      historyStore.getAllApplications();
-    HashMap<ApplicationId, ApplicationReport> applicationsReport =
-      new HashMap<ApplicationId, ApplicationReport>();
-    for (Entry<ApplicationId, ApplicationHistoryData> entry : histData
-      .entrySet()) {
-      applicationsReport.put(entry.getKey(),
-        convertToApplicationReport(entry.getValue()));
-    }
-    return applicationsReport;
-  }
-
-  @Override
-  public ApplicationReport getApplication(ApplicationId appId)
-      throws IOException {
-    return convertToApplicationReport(historyStore.getApplication(appId));
-  }
-
-  private ApplicationReport convertToApplicationReport(
-      ApplicationHistoryData appHistory) throws IOException {
-    ApplicationAttemptId currentApplicationAttemptId = null;
-    String trackingUrl = UNAVAILABLE;
-    String host = UNAVAILABLE;
-    int rpcPort = -1;
-
-    ApplicationAttemptHistoryData lastAttempt =
-        getLastAttempt(appHistory.getApplicationId());
-    if (lastAttempt != null) {
-      currentApplicationAttemptId = lastAttempt.getApplicationAttemptId();
-      trackingUrl = lastAttempt.getTrackingURL();
-      host = lastAttempt.getHost();
-      rpcPort = lastAttempt.getRPCPort();
-    }
-    return ApplicationReport.newInstance(appHistory.getApplicationId(),
-      currentApplicationAttemptId, appHistory.getUser(), appHistory.getQueue(),
-      appHistory.getApplicationName(), host, rpcPort, null,
-      appHistory.getYarnApplicationState(), appHistory.getDiagnosticsInfo(),
-      trackingUrl, appHistory.getStartTime(), appHistory.getFinishTime(),
-      appHistory.getFinalApplicationStatus(), null, "", 100,
-      appHistory.getApplicationType(), null);
-  }
-
-  private ApplicationAttemptHistoryData getLastAttempt(ApplicationId appId)
-      throws IOException {
-    Map<ApplicationAttemptId, ApplicationAttemptHistoryData> attempts =
-        historyStore.getApplicationAttempts(appId);
-    ApplicationAttemptId prevMaxAttemptId = null;
-    for (ApplicationAttemptId attemptId : attempts.keySet()) {
-      if (prevMaxAttemptId == null) {
-        prevMaxAttemptId = attemptId;
-      } else {
-        if (prevMaxAttemptId.getAttemptId() < attemptId.getAttemptId()) {
-          prevMaxAttemptId = attemptId;
-        }
-      }
-    }
-    return attempts.get(prevMaxAttemptId);
-  }
-
-  private ApplicationAttemptReport convertToApplicationAttemptReport(
-      ApplicationAttemptHistoryData appAttemptHistory) {
-    return ApplicationAttemptReport.newInstance(
-      appAttemptHistory.getApplicationAttemptId(), appAttemptHistory.getHost(),
-      appAttemptHistory.getRPCPort(), appAttemptHistory.getTrackingURL(),
-      null,
-      appAttemptHistory.getDiagnosticsInfo(),
-      appAttemptHistory.getYarnApplicationAttemptState(),
-      appAttemptHistory.getMasterContainerId());
-  }
-
-  @Override
-  public ApplicationAttemptReport getApplicationAttempt(
-      ApplicationAttemptId appAttemptId) throws IOException {
-    return convertToApplicationAttemptReport(historyStore
-      .getApplicationAttempt(appAttemptId));
-  }
-
-  @Override
-  public Map<ApplicationAttemptId, ApplicationAttemptReport>
-      getApplicationAttempts(ApplicationId appId) throws IOException {
-    Map<ApplicationAttemptId, ApplicationAttemptHistoryData> histData =
-        historyStore.getApplicationAttempts(appId);
-    HashMap<ApplicationAttemptId, ApplicationAttemptReport> applicationAttemptsReport =
-        new HashMap<ApplicationAttemptId, ApplicationAttemptReport>();
-    for (Entry<ApplicationAttemptId, ApplicationAttemptHistoryData> entry : histData
-      .entrySet()) {
-      applicationAttemptsReport.put(entry.getKey(),
-        convertToApplicationAttemptReport(entry.getValue()));
-    }
-    return applicationAttemptsReport;
-  }
-
-  @Override
-  public ContainerReport getContainer(ContainerId containerId)
-      throws IOException {
-    ApplicationReport app =
-        getApplication(containerId.getApplicationAttemptId().getApplicationId());
-    return convertToContainerReport(historyStore.getContainer(containerId),
-        app == null ? null: app.getUser());
-  }
-
-  private ContainerReport convertToContainerReport(
-      ContainerHistoryData containerHistory, String user) {
-    // If the container has the aggregated log, add the server root url
-    String logUrl = WebAppUtils.getAggregatedLogURL(
-        serverHttpAddress,
-        containerHistory.getAssignedNode().toString(),
-        containerHistory.getContainerId().toString(),
-        containerHistory.getContainerId().toString(),
-        user);
-    return ContainerReport.newInstance(containerHistory.getContainerId(),
-      containerHistory.getAllocatedResource(),
-      containerHistory.getAssignedNode(), containerHistory.getPriority(),
-      containerHistory.getStartTime(), containerHistory.getFinishTime(),
-      containerHistory.getDiagnosticsInfo(), logUrl,
-      containerHistory.getContainerExitStatus(),
-      containerHistory.getContainerState(), serverHttpAddress);
-  }
-
-  @Override
-  public Map<ContainerId, ContainerReport> getContainers(
-      ApplicationAttemptId appAttemptId) throws IOException {
-    ApplicationReport app =
-        getApplication(appAttemptId.getApplicationId());
-    Map<ContainerId, ContainerHistoryData> histData =
-        historyStore.getContainers(appAttemptId);
-    HashMap<ContainerId, ContainerReport> containersReport =
-        new HashMap<ContainerId, ContainerReport>();
-    for (Entry<ContainerId, ContainerHistoryData> entry : histData.entrySet()) {
-      containersReport.put(entry.getKey(),
-        convertToContainerReport(entry.getValue(),
-            app == null ? null : app.getUser()));
-    }
-    return containersReport;
-  }
-
-  @Private
-  @VisibleForTesting
-  public ApplicationHistoryStore getHistoryStore() {
-    return this.historyStore;
-  }
-}
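
The getLastAttempt() helper in the file above is a plain maximum scan over the attempt map, keyed on the numeric attempt id. The same selection, extracted into a self-contained sketch with Integer ids standing in for ApplicationAttemptId so it runs without the YARN record classes.

    import java.util.HashMap;
    import java.util.Map;

    public class SketchLastAttempt {
      // Returns the value whose key (attempt id) is largest; null for an empty
      // map, matching the behavior of the deleted getLastAttempt().
      static String lastAttempt(Map<Integer, String> attempts) {
        Integer max = null;
        for (Integer id : attempts.keySet()) {
          if (max == null || max < id) {
            max = id;
          }
        }
        return attempts.get(max);
      }

      public static void main(String[] args) {
        Map<Integer, String> attempts = new HashMap<>();
        attempts.put(1, "attempt_1");
        attempts.put(3, "attempt_3");
        attempts.put(2, "attempt_2");
        System.out.println(lastAttempt(attempts)); // attempt_3
      }
    }
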
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryReader.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryReader.java
deleted file mode 100644
index 590853a..0000000
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryReader.java
+++ /dev/null
@@ -1,117 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.yarn.server.applicationhistoryservice;
-
-import java.io.IOException;
-import java.util.Map;
-
-import org.apache.hadoop.classification.InterfaceAudience;
-import org.apache.hadoop.classification.InterfaceStability;
-import org.apache.hadoop.yarn.api.records.ApplicationAttemptId;
-import org.apache.hadoop.yarn.api.records.ApplicationId;
-import org.apache.hadoop.yarn.api.records.ContainerId;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.records.ApplicationAttemptHistoryData;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.records.ApplicationHistoryData;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.records.ContainerHistoryData;
-
-@InterfaceAudience.Public
-@InterfaceStability.Unstable
-public interface ApplicationHistoryReader {
-
-  /**
-   * This method returns the {@link ApplicationHistoryData} for the
-   * specified {@link ApplicationId}.
-   * 
-   * @param appId
-   * 
-   * @return {@link ApplicationHistoryData} for the ApplicationId.
-   * @throws IOException
-   */
-  ApplicationHistoryData getApplication(ApplicationId appId) throws IOException;
-
-  /**
-   * This method returns all Application {@link ApplicationHistoryData}s.
-   * 
-   * @return map of {@link ApplicationId} to {@link ApplicationHistoryData}s.
-   * @throws IOException
-   */
-  Map<ApplicationId, ApplicationHistoryData> getAllApplications()
-      throws IOException;
-
-  /**
-   * Application can have multiple application attempts
-   * {@link ApplicationAttemptHistoryData}. This method returns all
-   * {@link ApplicationAttemptHistoryData}s for the Application.
-   * 
-   * @param appId
-   * 
-   * @return all {@link ApplicationAttemptHistoryData}s for the Application.
-   * @throws IOException
-   */
-  Map<ApplicationAttemptId, ApplicationAttemptHistoryData>
-      getApplicationAttempts(ApplicationId appId) throws IOException;
-
-  /**
-   * This method returns {@link ApplicationAttemptHistoryData} for the specified
-   * {@link ApplicationAttemptId}.
-   * 
-   * @param appAttemptId
-   *          {@link ApplicationAttemptId}
-   * @return {@link ApplicationAttemptHistoryData} for ApplicationAttemptId
-   * @throws IOException
-   */
-  ApplicationAttemptHistoryData getApplicationAttempt(
-      ApplicationAttemptId appAttemptId) throws IOException;
-
-  /**
-   * This method returns {@link ContainerHistoryData} for specified
-   * {@link ContainerId}.
-   * 
-   * @param containerId
-   *          {@link ContainerId}
-   * @return {@link ContainerHistoryData} for ContainerId
-   * @throws IOException
-   */
-  ContainerHistoryData getContainer(ContainerId containerId) throws IOException;
-
-  /**
-   * This method returns {@link ContainerHistoryData} for specified
-   * {@link ApplicationAttemptId}.
-   * 
-   * @param appAttemptId
-   *          {@link ApplicationAttemptId}
-   * @return {@link ContainerHistoryData} for ApplicationAttemptId
-   * @throws IOException
-   */
-  ContainerHistoryData getAMContainer(ApplicationAttemptId appAttemptId)
-      throws IOException;
-
-  /**
-   * This method returns a Map of {@link ContainerId} to {@link ContainerHistoryData}
-   * for specified {@link ApplicationAttemptId}.
-   * 
-   * @param appAttemptId
-   *          {@link ApplicationAttemptId}
-   * @return Map of {@link ContainerId} to {@link ContainerHistoryData} for
-   *         ApplicationAttemptId
-   * @throws IOException
-   */
-  Map<ContainerId, ContainerHistoryData> getContainers(
-      ApplicationAttemptId appAttemptId) throws IOException;
-}
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryStore.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryStore.java
deleted file mode 100644
index c26faef..0000000
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryStore.java
+++ /dev/null
@@ -1,37 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.yarn.server.applicationhistoryservice;
-
-import org.apache.hadoop.classification.InterfaceAudience;
-import org.apache.hadoop.classification.InterfaceStability;
-import org.apache.hadoop.service.Service;
-
-/**
- * This interface is the abstraction of the storage of the application history data. It
- * is a {@link Service}, such that the implementation of this class can make use
- * of the service life cycle to initialize and cleanup the storage. Users can
- * access the storage via {@link ApplicationHistoryReader} and
- * {@link ApplicationHistoryWriter} interfaces.
- * 
- */
-@InterfaceAudience.Public
-@InterfaceStability.Unstable
-public interface ApplicationHistoryStore extends Service,
-    ApplicationHistoryReader, ApplicationHistoryWriter {
-}
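
The interface above composes three concerns: lifecycle (Service), reads (ApplicationHistoryReader), and writes (ApplicationHistoryWriter). A compact sketch of the same composition with toy stand-ins (Lifecycle replaces the Hadoop Service interface here) so it runs standalone.

    import java.util.HashMap;
    import java.util.Map;

    public class SketchStoreComposition {
      interface Lifecycle { default void init() {} default void stop() {} }
      interface Reader { String read(String appId); }
      interface Writer { void write(String appId, String data); }
      // The storage contract combines lifecycle management with read/write
      // access, mirroring ApplicationHistoryStore extends Service, Reader, Writer.
      interface Store extends Lifecycle, Reader, Writer {}

      public static void main(String[] args) {
        Map<String, String> backing = new HashMap<>();
        Store store = new Store() {
          public String read(String appId) { return backing.get(appId); }
          public void write(String appId, String data) { backing.put(appId, data); }
        };
        store.init();
        store.write("app_1", "finished");
        System.out.println(store.read("app_1")); // finished
        store.stop();
      }
    }
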
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryWriter.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryWriter.java
deleted file mode 100644
index 09ba36d..0000000
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryWriter.java
+++ /dev/null
@@ -1,112 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.yarn.server.applicationhistoryservice;
-
-import java.io.IOException;
-
-import org.apache.hadoop.classification.InterfaceAudience.Private;
-import org.apache.hadoop.classification.InterfaceStability.Unstable;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.records.ApplicationAttemptFinishData;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.records.ApplicationAttemptStartData;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.records.ApplicationFinishData;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.records.ApplicationStartData;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.records.ContainerFinishData;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.records.ContainerStartData;
-
-/**
- * It is the interface for writing the application history, exposing the methods
- * for writing {@link ApplicationStartData}, {@link ApplicationFinishData},
- * {@link ApplicationAttemptStartData}, {@link ApplicationAttemptFinishData},
- * {@link ContainerStartData} and {@link ContainerFinishData}.
- */
-@Private
-@Unstable
-public interface ApplicationHistoryWriter {
-
-  /**
-   * This method writes the information of <code>RMApp</code> that is available
-   * when it starts.
-   * 
-   * @param appStart
-   *          the record of the information of <code>RMApp</code> that is
-   *          available when it starts
-   * @throws IOException
-   */
-  void applicationStarted(ApplicationStartData appStart) throws IOException;
-
-  /**
-   * This method writes the information of <code>RMApp</code> that is available
-   * when it finishes.
-   * 
-   * @param appFinish
-   *          the record of the information of <code>RMApp</code> that is
-   *          available when it finishes
-   * @throws IOException
-   */
-  void applicationFinished(ApplicationFinishData appFinish) throws IOException;
-
-  /**
-   * This method writes the information of <code>RMAppAttempt</code> that is
-   * available when it starts.
-   * 
-   * @param appAttemptStart
-   *          the record of the information of <code>RMAppAttempt</code> that is
-   *          available when it starts
-   * @throws IOException
-   */
-  void applicationAttemptStarted(ApplicationAttemptStartData appAttemptStart)
-      throws IOException;
-
-  /**
-   * This method writes the information of <code>RMAppAttempt</code> that is
-   * available when it finishes.
-   * 
-   * @param appAttemptFinish
-   *          the record of the information of <code>RMAppAttempt</code> that is
-   *          available when it finishes
-   * @throws IOException
-   */
-  void
-      applicationAttemptFinished(ApplicationAttemptFinishData appAttemptFinish)
-          throws IOException;
-
-  /**
-   * This method writes the information of <code>RMContainer</code> that is
-   * available when it starts.
-   * 
-   * @param containerStart
-   *          the record of the information of <code>RMContainer</code> that is
-   *          available when it starts
-   * @throws IOException
-   */
-  void containerStarted(ContainerStartData containerStart) throws IOException;
-
-  /**
-   * This method writes the information of <code>RMContainer</code> that is
-   * available when it finishes.
-   * 
-   * @param containerFinish
-   *          the record of the information of <code>RMContainer</code> that is
-   *          available when it finishes
-   * @throws IOException
-   */
-  void containerFinished(ContainerFinishData containerFinish)
-      throws IOException;
-
-}
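
The writer interface implies a strict event order, made explicit by the FileSystemApplicationHistoryStore javadoc further down: applicationStarted() opens the per-application record, attempt and container events land in between, and applicationFinished() closes it. A toy sketch of that ordering contract; the Writer functional interface here is a simplified stand-in for the one deleted above.

    import java.io.IOException;

    public class SketchHistoryWriteOrder {
      interface Writer { void write(String event) throws IOException; }

      public static void main(String[] args) throws IOException {
        Writer w = System.out::println; // toy sink instead of a history file
        w.write("applicationStarted");         // first: opens the per-app file
        w.write("applicationAttemptStarted");
        w.write("containerStarted");
        w.write("containerFinished");
        w.write("applicationAttemptFinished");
        w.write("applicationFinished");        // last: closes the per-app file
      }
    }
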
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/FileSystemApplicationHistoryStore.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/FileSystemApplicationHistoryStore.java
deleted file mode 100644
index 4c8d745..0000000
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/FileSystemApplicationHistoryStore.java
+++ /dev/null
@@ -1,784 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.yarn.server.applicationhistoryservice;
-
-import java.io.DataInput;
-import java.io.DataInputStream;
-import java.io.DataOutput;
-import java.io.DataOutputStream;
-import java.io.IOException;
-import java.util.HashMap;
-import java.util.Map;
-import java.util.Map.Entry;
-import java.util.concurrent.ConcurrentHashMap;
-import java.util.concurrent.ConcurrentMap;
-
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-import org.apache.hadoop.classification.InterfaceAudience.Public;
-import org.apache.hadoop.classification.InterfaceStability.Unstable;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.fs.FSDataInputStream;
-import org.apache.hadoop.fs.FSDataOutputStream;
-import org.apache.hadoop.fs.FileStatus;
-import org.apache.hadoop.fs.FileSystem;
-import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.fs.permission.FsPermission;
-import org.apache.hadoop.io.IOUtils;
-import org.apache.hadoop.io.Writable;
-import org.apache.hadoop.io.file.tfile.TFile;
-import org.apache.hadoop.service.AbstractService;
-import org.apache.hadoop.yarn.api.records.ApplicationAttemptId;
-import org.apache.hadoop.yarn.api.records.ApplicationId;
-import org.apache.hadoop.yarn.api.records.ContainerId;
-import org.apache.hadoop.yarn.api.records.FinalApplicationStatus;
-import org.apache.hadoop.yarn.conf.YarnConfiguration;
-import org.apache.hadoop.yarn.proto.ApplicationHistoryServerProtos.ApplicationAttemptFinishDataProto;
-import org.apache.hadoop.yarn.proto.ApplicationHistoryServerProtos.ApplicationAttemptStartDataProto;
-import org.apache.hadoop.yarn.proto.ApplicationHistoryServerProtos.ApplicationFinishDataProto;
-import org.apache.hadoop.yarn.proto.ApplicationHistoryServerProtos.ApplicationStartDataProto;
-import org.apache.hadoop.yarn.proto.ApplicationHistoryServerProtos.ContainerFinishDataProto;
-import org.apache.hadoop.yarn.proto.ApplicationHistoryServerProtos.ContainerStartDataProto;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.records.ApplicationAttemptFinishData;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.records.ApplicationAttemptHistoryData;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.records.ApplicationAttemptStartData;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.records.ApplicationFinishData;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.records.ApplicationHistoryData;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.records.ApplicationStartData;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.records.ContainerFinishData;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.records.ContainerHistoryData;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.records.ContainerStartData;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.records.impl.pb.ApplicationAttemptFinishDataPBImpl;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.records.impl.pb.ApplicationAttemptStartDataPBImpl;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.records.impl.pb.ApplicationFinishDataPBImpl;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.records.impl.pb.ApplicationStartDataPBImpl;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.records.impl.pb.ContainerFinishDataPBImpl;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.records.impl.pb.ContainerStartDataPBImpl;
-import org.apache.hadoop.yarn.util.ConverterUtils;
-
-import com.google.protobuf.InvalidProtocolBufferException;
-
-/**
- * File system implementation of {@link ApplicationHistoryStore}. In this
- * implementation, each application has exactly one file in the file system,
- * which contains all the history data of the application, its attempts, and
- * its containers. {@link #applicationStarted(ApplicationStartData)} is supposed
- * to be invoked first when writing any history data of an application, and it
- * will open the file, while {@link #applicationFinished(ApplicationFinishData)}
- * is supposed to be the last writing operation and will close the file.
- */
-@Public
-@Unstable
-public class FileSystemApplicationHistoryStore extends AbstractService
-    implements ApplicationHistoryStore {
-
-  private static final Log LOG = LogFactory
-    .getLog(FileSystemApplicationHistoryStore.class);
-
-  private static final String ROOT_DIR_NAME = "ApplicationHistoryDataRoot";
-  private static final int MIN_BLOCK_SIZE = 256 * 1024;
-  private static final String START_DATA_SUFFIX = "_start";
-  private static final String FINISH_DATA_SUFFIX = "_finish";
-  private static final FsPermission ROOT_DIR_UMASK = FsPermission
-    .createImmutable((short) 0740);
-  private static final FsPermission HISTORY_FILE_UMASK = FsPermission
-    .createImmutable((short) 0640);
-
-  private FileSystem fs;
-  private Path rootDirPath;
-
-  private ConcurrentMap<ApplicationId, HistoryFileWriter> outstandingWriters =
-      new ConcurrentHashMap<ApplicationId, HistoryFileWriter>();
-
-  public FileSystemApplicationHistoryStore() {
-    super(FileSystemApplicationHistoryStore.class.getName());
-  }
-
-  @Override
-  public void serviceInit(Configuration conf) throws Exception {
-    Path fsWorkingPath =
-        new Path(conf.get(YarnConfiguration.FS_APPLICATION_HISTORY_STORE_URI));
-    rootDirPath = new Path(fsWorkingPath, ROOT_DIR_NAME);
-    try {
-      fs = fsWorkingPath.getFileSystem(conf);
-      fs.mkdirs(rootDirPath);
-      fs.setPermission(rootDirPath, ROOT_DIR_UMASK);
-    } catch (IOException e) {
-      LOG.error("Error when initializing FileSystemHistoryStorage", e);
-      throw e;
-    }
-    super.serviceInit(conf);
-  }
-
-  @Override
-  public void serviceStop() throws Exception {
-    try {
-      for (Entry<ApplicationId, HistoryFileWriter> entry : outstandingWriters
-        .entrySet()) {
-        entry.getValue().close();
-      }
-      outstandingWriters.clear();
-    } finally {
-      IOUtils.cleanup(LOG, fs);
-    }
-    super.serviceStop();
-  }
-
-  @Override
-  public ApplicationHistoryData getApplication(ApplicationId appId)
-      throws IOException {
-    HistoryFileReader hfReader = getHistoryFileReader(appId);
-    try {
-      boolean readStartData = false;
-      boolean readFinishData = false;
-      ApplicationHistoryData historyData =
-          ApplicationHistoryData.newInstance(appId, null, null, null, null,
-            Long.MIN_VALUE, Long.MIN_VALUE, Long.MAX_VALUE, null,
-            FinalApplicationStatus.UNDEFINED, null);
-      while ((!readStartData || !readFinishData) && hfReader.hasNext()) {
-        HistoryFileReader.Entry entry = hfReader.next();
-        if (entry.key.id.equals(appId.toString())) {
-          if (entry.key.suffix.equals(START_DATA_SUFFIX)) {
-            ApplicationStartData startData =
-                parseApplicationStartData(entry.value);
-            mergeApplicationHistoryData(historyData, startData);
-            readStartData = true;
-          } else if (entry.key.suffix.equals(FINISH_DATA_SUFFIX)) {
-            ApplicationFinishData finishData =
-                parseApplicationFinishData(entry.value);
-            mergeApplicationHistoryData(historyData, finishData);
-            readFinishData = true;
-          }
-        }
-      }
-      if (!readStartData && !readFinishData) {
-        return null;
-      }
-      if (!readStartData) {
-        LOG.warn("Start information is missing for application " + appId);
-      }
-      if (!readFinishData) {
-        LOG.warn("Finish information is missing for application " + appId);
-      }
-      LOG.info("Completed reading history information of application " + appId);
-      return historyData;
-    } catch (IOException e) {
-      LOG.error("Error when reading history file of application " + appId);
-      throw e;
-    } finally {
-      hfReader.close();
-    }
-  }
-
-  @Override
-  public Map<ApplicationId, ApplicationHistoryData> getAllApplications()
-      throws IOException {
-    Map<ApplicationId, ApplicationHistoryData> historyDataMap =
-        new HashMap<ApplicationId, ApplicationHistoryData>();
-    FileStatus[] files = fs.listStatus(rootDirPath);
-    for (FileStatus file : files) {
-      ApplicationId appId =
-          ConverterUtils.toApplicationId(file.getPath().getName());
-      try {
-        ApplicationHistoryData historyData = getApplication(appId);
-        if (historyData != null) {
-          historyDataMap.put(appId, historyData);
-        }
-      } catch (IOException e) {
-        // Swallow the exception so it does not disturb reading the next
-        // ApplicationHistoryData
-        LOG.error("History information of application " + appId
-            + " is not included in the result due to the exception", e);
-      }
-    }
-    return historyDataMap;
-  }
-
-  @Override
-  public Map<ApplicationAttemptId, ApplicationAttemptHistoryData>
-      getApplicationAttempts(ApplicationId appId) throws IOException {
-    Map<ApplicationAttemptId, ApplicationAttemptHistoryData> historyDataMap =
-        new HashMap<ApplicationAttemptId, ApplicationAttemptHistoryData>();
-    HistoryFileReader hfReader = getHistoryFileReader(appId);
-    try {
-      while (hfReader.hasNext()) {
-        HistoryFileReader.Entry entry = hfReader.next();
-        if (entry.key.id.startsWith(
-            ConverterUtils.APPLICATION_ATTEMPT_PREFIX)) {
-          ApplicationAttemptId appAttemptId = 
-              ConverterUtils.toApplicationAttemptId(entry.key.id);
-          if (appAttemptId.getApplicationId().equals(appId)) {
-            ApplicationAttemptHistoryData historyData = 
-                historyDataMap.get(appAttemptId);
-            if (historyData == null) {
-              historyData = ApplicationAttemptHistoryData.newInstance(
-                  appAttemptId, null, -1, null, null, null,
-                  FinalApplicationStatus.UNDEFINED, null);
-              historyDataMap.put(appAttemptId, historyData);
-            }
-            if (entry.key.suffix.equals(START_DATA_SUFFIX)) {
-              mergeApplicationAttemptHistoryData(historyData,
-                  parseApplicationAttemptStartData(entry.value));
-            } else if (entry.key.suffix.equals(FINISH_DATA_SUFFIX)) {
-              mergeApplicationAttemptHistoryData(historyData,
-                  parseApplicationAttemptFinishData(entry.value));
-            }
-          }
-        }
-      }
-      LOG.info("Completed reading history information of all application"
-          + " attempts of application " + appId);
-    } catch (IOException e) {
-      LOG.info("Error when reading history information of some application"
-          + " attempts of application " + appId);
-    } finally {
-      hfReader.close();
-    }
-    return historyDataMap;
-  }
-
-  @Override
-  public ApplicationAttemptHistoryData getApplicationAttempt(
-      ApplicationAttemptId appAttemptId) throws IOException {
-    HistoryFileReader hfReader =
-        getHistoryFileReader(appAttemptId.getApplicationId());
-    try {
-      boolean readStartData = false;
-      boolean readFinishData = false;
-      ApplicationAttemptHistoryData historyData =
-          ApplicationAttemptHistoryData.newInstance(appAttemptId, null, -1,
-            null, null, null, FinalApplicationStatus.UNDEFINED, null);
-      while ((!readStartData || !readFinishData) && hfReader.hasNext()) {
-        HistoryFileReader.Entry entry = hfReader.next();
-        if (entry.key.id.equals(appAttemptId.toString())) {
-          if (entry.key.suffix.equals(START_DATA_SUFFIX)) {
-            ApplicationAttemptStartData startData =
-                parseApplicationAttemptStartData(entry.value);
-            mergeApplicationAttemptHistoryData(historyData, startData);
-            readStartData = true;
-          } else if (entry.key.suffix.equals(FINISH_DATA_SUFFIX)) {
-            ApplicationAttemptFinishData finishData =
-                parseApplicationAttemptFinishData(entry.value);
-            mergeApplicationAttemptHistoryData(historyData, finishData);
-            readFinishData = true;
-          }
-        }
-      }
-      if (!readStartData && !readFinishData) {
-        return null;
-      }
-      if (!readStartData) {
-        LOG.warn("Start information is missing for application attempt "
-            + appAttemptId);
-      }
-      if (!readFinishData) {
-        LOG.warn("Finish information is missing for application attempt "
-            + appAttemptId);
-      }
-      LOG.info("Completed reading history information of application attempt "
-          + appAttemptId);
-      return historyData;
-    } catch (IOException e) {
-      LOG.error("Error when reading history file of application attempt"
-          + appAttemptId);
-      throw e;
-    } finally {
-      hfReader.close();
-    }
-  }
-
-  @Override
-  public ContainerHistoryData getContainer(ContainerId containerId)
-      throws IOException {
-    HistoryFileReader hfReader =
-        getHistoryFileReader(containerId.getApplicationAttemptId()
-          .getApplicationId());
-    try {
-      boolean readStartData = false;
-      boolean readFinishData = false;
-      ContainerHistoryData historyData =
-          ContainerHistoryData
-            .newInstance(containerId, null, null, null, Long.MIN_VALUE,
-              Long.MAX_VALUE, null, Integer.MAX_VALUE, null);
-      while ((!readStartData || !readFinishData) && hfReader.hasNext()) {
-        HistoryFileReader.Entry entry = hfReader.next();
-        if (entry.key.id.equals(containerId.toString())) {
-          if (entry.key.suffix.equals(START_DATA_SUFFIX)) {
-            ContainerStartData startData = parseContainerStartData(entry.value);
-            mergeContainerHistoryData(historyData, startData);
-            readStartData = true;
-          } else if (entry.key.suffix.equals(FINISH_DATA_SUFFIX)) {
-            ContainerFinishData finishData =
-                parseContainerFinishData(entry.value);
-            mergeContainerHistoryData(historyData, finishData);
-            readFinishData = true;
-          }
-        }
-      }
-      if (!readStartData && !readFinishData) {
-        return null;
-      }
-      if (!readStartData) {
-        LOG.warn("Start information is missing for container " + containerId);
-      }
-      if (!readFinishData) {
-        LOG.warn("Finish information is missing for container " + containerId);
-      }
-      LOG.info("Completed reading history information of container "
-          + containerId);
-      return historyData;
-    } catch (IOException e) {
-      LOG.error("Error when reading history file of container " + containerId);
-      throw e;
-    } finally {
-      hfReader.close();
-    }
-  }
-
-  @Override
-  public ContainerHistoryData getAMContainer(ApplicationAttemptId appAttemptId)
-      throws IOException {
-    ApplicationAttemptHistoryData attemptHistoryData =
-        getApplicationAttempt(appAttemptId);
-    if (attemptHistoryData == null
-        || attemptHistoryData.getMasterContainerId() == null) {
-      return null;
-    }
-    return getContainer(attemptHistoryData.getMasterContainerId());
-  }
-
-  @Override
-  public Map<ContainerId, ContainerHistoryData> getContainers(
-      ApplicationAttemptId appAttemptId) throws IOException {
-    Map<ContainerId, ContainerHistoryData> historyDataMap =
-        new HashMap<ContainerId, ContainerHistoryData>();
-    HistoryFileReader hfReader =
-        getHistoryFileReader(appAttemptId.getApplicationId());
-    try {
-      while (hfReader.hasNext()) {
-        HistoryFileReader.Entry entry = hfReader.next();
-        if (entry.key.id.startsWith(ConverterUtils.CONTAINER_PREFIX)) {
-          ContainerId containerId =
-              ConverterUtils.toContainerId(entry.key.id);
-          if (containerId.getApplicationAttemptId().equals(appAttemptId)) {
-            ContainerHistoryData historyData =
-                historyDataMap.get(containerId);
-            if (historyData == null) {
-              historyData = ContainerHistoryData.newInstance(
-                  containerId, null, null, null, Long.MIN_VALUE,
-                  Long.MAX_VALUE, null, Integer.MAX_VALUE, null);
-              historyDataMap.put(containerId, historyData);
-            }
-            if (entry.key.suffix.equals(START_DATA_SUFFIX)) {
-              mergeContainerHistoryData(historyData,
-                  parseContainerStartData(entry.value));
-            } else if (entry.key.suffix.equals(FINISH_DATA_SUFFIX)) {
-              mergeContainerHistoryData(historyData,
-                  parseContainerFinishData(entry.value));
-            }
-          }
-        }
-      }
-      LOG.info("Completed reading history information of all conatiners"
-          + " of application attempt " + appAttemptId);
-    } catch (IOException e) {
-      LOG.info("Error when reading history information of some containers"
-          + " of application attempt " + appAttemptId);
-    } finally {
-      hfReader.close();
-    }
-    return historyDataMap;
-  }
-
-  @Override
-  public void applicationStarted(ApplicationStartData appStart)
-      throws IOException {
-    HistoryFileWriter hfWriter =
-        outstandingWriters.get(appStart.getApplicationId());
-    if (hfWriter == null) {
-      Path applicationHistoryFile =
-          new Path(rootDirPath, appStart.getApplicationId().toString());
-      try {
-        hfWriter = new HistoryFileWriter(applicationHistoryFile);
-        LOG.info("Opened history file of application "
-            + appStart.getApplicationId());
-      } catch (IOException e) {
-        LOG.error("Error when openning history file of application "
-            + appStart.getApplicationId());
-        throw e;
-      }
-      outstandingWriters.put(appStart.getApplicationId(), hfWriter);
-    } else {
-      throw new IOException("History file of application "
-          + appStart.getApplicationId() + " is already opened");
-    }
-    assert appStart instanceof ApplicationStartDataPBImpl;
-    try {
-      hfWriter.writeHistoryData(new HistoryDataKey(appStart.getApplicationId()
-        .toString(), START_DATA_SUFFIX),
-        ((ApplicationStartDataPBImpl) appStart).getProto().toByteArray());
-      LOG.info("Start information of application "
-          + appStart.getApplicationId() + " is written");
-    } catch (IOException e) {
-      LOG.error("Error when writing start information of application "
-          + appStart.getApplicationId());
-      throw e;
-    }
-  }
-
-  @Override
-  public void applicationFinished(ApplicationFinishData appFinish)
-      throws IOException {
-    HistoryFileWriter hfWriter =
-        getHistoryFileWriter(appFinish.getApplicationId());
-    assert appFinish instanceof ApplicationFinishDataPBImpl;
-    try {
-      hfWriter.writeHistoryData(new HistoryDataKey(appFinish.getApplicationId()
-        .toString(), FINISH_DATA_SUFFIX),
-        ((ApplicationFinishDataPBImpl) appFinish).getProto().toByteArray());
-      LOG.info("Finish information of application "
-          + appFinish.getApplicationId() + " is written");
-    } catch (IOException e) {
-      LOG.error("Error when writing finish information of application "
-          + appFinish.getApplicationId());
-      throw e;
-    } finally {
-      hfWriter.close();
-      outstandingWriters.remove(appFinish.getApplicationId());
-    }
-  }
-
-  @Override
-  public void applicationAttemptStarted(
-      ApplicationAttemptStartData appAttemptStart) throws IOException {
-    HistoryFileWriter hfWriter =
-        getHistoryFileWriter(appAttemptStart.getApplicationAttemptId()
-          .getApplicationId());
-    assert appAttemptStart instanceof ApplicationAttemptStartDataPBImpl;
-    try {
-      hfWriter.writeHistoryData(new HistoryDataKey(appAttemptStart
-        .getApplicationAttemptId().toString(), START_DATA_SUFFIX),
-        ((ApplicationAttemptStartDataPBImpl) appAttemptStart).getProto()
-          .toByteArray());
-      LOG.info("Start information of application attempt "
-          + appAttemptStart.getApplicationAttemptId() + " is written");
-    } catch (IOException e) {
-      LOG.error("Error when writing start information of application attempt "
-          + appAttemptStart.getApplicationAttemptId());
-      throw e;
-    }
-  }
-
-  @Override
-  public void applicationAttemptFinished(
-      ApplicationAttemptFinishData appAttemptFinish) throws IOException {
-    HistoryFileWriter hfWriter =
-        getHistoryFileWriter(appAttemptFinish.getApplicationAttemptId()
-          .getApplicationId());
-    assert appAttemptFinish instanceof ApplicationAttemptFinishDataPBImpl;
-    try {
-      hfWriter.writeHistoryData(new HistoryDataKey(appAttemptFinish
-        .getApplicationAttemptId().toString(), FINISH_DATA_SUFFIX),
-        ((ApplicationAttemptFinishDataPBImpl) appAttemptFinish).getProto()
-          .toByteArray());
-      LOG.info("Finish information of application attempt "
-          + appAttemptFinish.getApplicationAttemptId() + " is written");
-    } catch (IOException e) {
-      LOG.error("Error when writing finish information of application attempt "
-          + appAttemptFinish.getApplicationAttemptId());
-      throw e;
-    }
-  }
-
-  @Override
-  public void containerStarted(ContainerStartData containerStart)
-      throws IOException {
-    HistoryFileWriter hfWriter =
-        getHistoryFileWriter(containerStart.getContainerId()
-          .getApplicationAttemptId().getApplicationId());
-    assert containerStart instanceof ContainerStartDataPBImpl;
-    try {
-      hfWriter.writeHistoryData(new HistoryDataKey(containerStart
-        .getContainerId().toString(), START_DATA_SUFFIX),
-        ((ContainerStartDataPBImpl) containerStart).getProto().toByteArray());
-      LOG.info("Start information of container "
-          + containerStart.getContainerId() + " is written");
-    } catch (IOException e) {
-      LOG.error("Error when writing start information of container "
-          + containerStart.getContainerId());
-      throw e;
-    }
-  }
-
-  @Override
-  public void containerFinished(ContainerFinishData containerFinish)
-      throws IOException {
-    HistoryFileWriter hfWriter =
-        getHistoryFileWriter(containerFinish.getContainerId()
-          .getApplicationAttemptId().getApplicationId());
-    assert containerFinish instanceof ContainerFinishDataPBImpl;
-    try {
-      hfWriter.writeHistoryData(new HistoryDataKey(containerFinish
-        .getContainerId().toString(), FINISH_DATA_SUFFIX),
-        ((ContainerFinishDataPBImpl) containerFinish).getProto().toByteArray());
-      LOG.info("Finish information of container "
-          + containerFinish.getContainerId() + " is written");
-    } catch (IOException e) {
-      LOG.error("Error when writing finish information of container "
-          + containerFinish.getContainerId());
-    }
-  }
-
-  private static ApplicationStartData parseApplicationStartData(byte[] value)
-      throws InvalidProtocolBufferException {
-    return new ApplicationStartDataPBImpl(
-      ApplicationStartDataProto.parseFrom(value));
-  }
-
-  private static ApplicationFinishData parseApplicationFinishData(byte[] value)
-      throws InvalidProtocolBufferException {
-    return new ApplicationFinishDataPBImpl(
-      ApplicationFinishDataProto.parseFrom(value));
-  }
-
-  private static ApplicationAttemptStartData parseApplicationAttemptStartData(
-      byte[] value) throws InvalidProtocolBufferException {
-    return new ApplicationAttemptStartDataPBImpl(
-      ApplicationAttemptStartDataProto.parseFrom(value));
-  }
-
-  private static ApplicationAttemptFinishData
-      parseApplicationAttemptFinishData(byte[] value)
-          throws InvalidProtocolBufferException {
-    return new ApplicationAttemptFinishDataPBImpl(
-      ApplicationAttemptFinishDataProto.parseFrom(value));
-  }
-
-  private static ContainerStartData parseContainerStartData(byte[] value)
-      throws InvalidProtocolBufferException {
-    return new ContainerStartDataPBImpl(
-      ContainerStartDataProto.parseFrom(value));
-  }
-
-  private static ContainerFinishData parseContainerFinishData(byte[] value)
-      throws InvalidProtocolBufferException {
-    return new ContainerFinishDataPBImpl(
-      ContainerFinishDataProto.parseFrom(value));
-  }
-
-  private static void mergeApplicationHistoryData(
-      ApplicationHistoryData historyData, ApplicationStartData startData) {
-    historyData.setApplicationName(startData.getApplicationName());
-    historyData.setApplicationType(startData.getApplicationType());
-    historyData.setQueue(startData.getQueue());
-    historyData.setUser(startData.getUser());
-    historyData.setSubmitTime(startData.getSubmitTime());
-    historyData.setStartTime(startData.getStartTime());
-  }
-
-  private static void mergeApplicationHistoryData(
-      ApplicationHistoryData historyData, ApplicationFinishData finishData) {
-    historyData.setFinishTime(finishData.getFinishTime());
-    historyData.setDiagnosticsInfo(finishData.getDiagnosticsInfo());
-    historyData.setFinalApplicationStatus(finishData
-      .getFinalApplicationStatus());
-    historyData.setYarnApplicationState(finishData.getYarnApplicationState());
-  }
-
-  private static void mergeApplicationAttemptHistoryData(
-      ApplicationAttemptHistoryData historyData,
-      ApplicationAttemptStartData startData) {
-    historyData.setHost(startData.getHost());
-    historyData.setRPCPort(startData.getRPCPort());
-    historyData.setMasterContainerId(startData.getMasterContainerId());
-  }
-
-  private static void mergeApplicationAttemptHistoryData(
-      ApplicationAttemptHistoryData historyData,
-      ApplicationAttemptFinishData finishData) {
-    historyData.setDiagnosticsInfo(finishData.getDiagnosticsInfo());
-    historyData.setTrackingURL(finishData.getTrackingURL());
-    historyData.setFinalApplicationStatus(finishData
-      .getFinalApplicationStatus());
-    historyData.setYarnApplicationAttemptState(finishData
-      .getYarnApplicationAttemptState());
-  }
-
-  private static void mergeContainerHistoryData(
-      ContainerHistoryData historyData, ContainerStartData startData) {
-    historyData.setAllocatedResource(startData.getAllocatedResource());
-    historyData.setAssignedNode(startData.getAssignedNode());
-    historyData.setPriority(startData.getPriority());
-    historyData.setStartTime(startData.getStartTime());
-  }
-
-  private static void mergeContainerHistoryData(
-      ContainerHistoryData historyData, ContainerFinishData finishData) {
-    historyData.setFinishTime(finishData.getFinishTime());
-    historyData.setDiagnosticsInfo(finishData.getDiagnosticsInfo());
-    historyData.setContainerExitStatus(finishData.getContainerExitStatus());
-    historyData.setContainerState(finishData.getContainerState());
-  }
-
-  private HistoryFileWriter getHistoryFileWriter(ApplicationId appId)
-      throws IOException {
-    HistoryFileWriter hfWriter = outstandingWriters.get(appId);
-    if (hfWriter == null) {
-      throw new IOException("History file of application " + appId
-          + " is not opened");
-    }
-    return hfWriter;
-  }
-
-  private HistoryFileReader getHistoryFileReader(ApplicationId appId)
-      throws IOException {
-    Path applicationHistoryFile = new Path(rootDirPath, appId.toString());
-    if (!fs.exists(applicationHistoryFile)) {
-      throw new IOException("History file for application " + appId
-          + " is not found");
-    }
-    // The history file is still under writing
-    if (outstandingWriters.containsKey(appId)) {
-      throw new IOException("History file for application " + appId
-          + " is under writing");
-    }
-    return new HistoryFileReader(applicationHistoryFile);
-  }
-
-  private class HistoryFileReader {
-
-    private class Entry {
-
-      private HistoryDataKey key;
-      private byte[] value;
-
-      public Entry(HistoryDataKey key, byte[] value) {
-        this.key = key;
-        this.value = value;
-      }
-    }
-
-    private TFile.Reader reader;
-    private TFile.Reader.Scanner scanner;
-
-    public HistoryFileReader(Path historyFile) throws IOException {
-      FSDataInputStream fsdis = fs.open(historyFile);
-      reader =
-          new TFile.Reader(fsdis, fs.getFileStatus(historyFile).getLen(),
-            getConfig());
-      reset();
-    }
-
-    public boolean hasNext() {
-      return !scanner.atEnd();
-    }
-
-    public Entry next() throws IOException {
-      TFile.Reader.Scanner.Entry entry = scanner.entry();
-      DataInputStream dis = entry.getKeyStream();
-      HistoryDataKey key = new HistoryDataKey();
-      key.readFields(dis);
-      dis = entry.getValueStream();
-      byte[] value = new byte[entry.getValueLength()];
-      dis.read(value);
-      scanner.advance();
-      return new Entry(key, value);
-    }
-
-    public void reset() throws IOException {
-      IOUtils.cleanup(LOG, scanner);
-      scanner = reader.createScanner();
-    }
-
-    public void close() {
-      IOUtils.cleanup(LOG, scanner, reader);
-    }
-
-  }
-
-  private class HistoryFileWriter {
-
-    private FSDataOutputStream fsdos;
-    private TFile.Writer writer;
-
-    public HistoryFileWriter(Path historyFile) throws IOException {
-      if (fs.exists(historyFile)) {
-        fsdos = fs.append(historyFile);
-      } else {
-        fsdos = fs.create(historyFile);
-      }
-      fs.setPermission(historyFile, HISTORY_FILE_UMASK);
-      writer =
-          new TFile.Writer(fsdos, MIN_BLOCK_SIZE, getConfig().get(
-            YarnConfiguration.FS_APPLICATION_HISTORY_STORE_COMPRESSION_TYPE,
-            YarnConfiguration.DEFAULT_FS_APPLICATION_HISTORY_STORE_COMPRESSION_TYPE), null,
-            getConfig());
-    }
-
-    public synchronized void close() {
-      IOUtils.cleanup(LOG, writer, fsdos);
-    }
-
-    public synchronized void writeHistoryData(HistoryDataKey key, byte[] value)
-        throws IOException {
-      DataOutputStream dos = null;
-      try {
-        dos = writer.prepareAppendKey(-1);
-        key.write(dos);
-      } finally {
-        IOUtils.cleanup(LOG, dos);
-      }
-      try {
-        dos = writer.prepareAppendValue(value.length);
-        dos.write(value);
-      } finally {
-        IOUtils.cleanup(LOG, dos);
-      }
-    }
-
-  }
-
-  private static class HistoryDataKey implements Writable {
-
-    private String id;
-
-    private String suffix;
-
-    public HistoryDataKey() {
-      this(null, null);
-    }
-
-    public HistoryDataKey(String id, String suffix) {
-      this.id = id;
-      this.suffix = suffix;
-    }
-
-    @Override
-    public void write(DataOutput out) throws IOException {
-      out.writeUTF(id);
-      out.writeUTF(suffix);
-    }
-
-    @Override
-    public void readFields(DataInput in) throws IOException {
-      id = in.readUTF();
-      suffix = in.readUTF();
-    }
-  }
-}
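
For reference: the deleted FileSystemApplicationHistoryStore above persists each
application's history as a Hadoop TFile of key/value records, keyed by entity id
plus a start/finish suffix, and rebuilds a record by merging both halves on read.
Below is a minimal, self-contained sketch of that TFile round trip; the
append/scan helpers and the "none" compression choice are illustrative, not part
of the original code.

    import java.io.DataOutputStream;
    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.file.tfile.TFile;

    public class TFileRoundTrip {
      // Append one key/value record, mirroring HistoryFileWriter.writeHistoryData.
      static void append(FileSystem fs, Path file, byte[] key, byte[] value,
                         Configuration conf) throws IOException {
        FSDataOutputStream out = fs.exists(file) ? fs.append(file) : fs.create(file);
        TFile.Writer writer = new TFile.Writer(out, 256 * 1024, "none", null, conf);
        DataOutputStream k = writer.prepareAppendKey(key.length);
        k.write(key);
        k.close();
        DataOutputStream v = writer.prepareAppendValue(value.length);
        v.write(value);
        v.close();
        writer.close();
        out.close();
      }

      // Scan every record's value, mirroring HistoryFileReader.next().
      static void scan(FileSystem fs, Path file, Configuration conf) throws IOException {
        FSDataInputStream in = fs.open(file);
        TFile.Reader reader = new TFile.Reader(in, fs.getFileStatus(file).getLen(), conf);
        TFile.Reader.Scanner scanner = reader.createScanner();
        while (!scanner.atEnd()) {
          TFile.Reader.Scanner.Entry entry = scanner.entry();
          byte[] value = new byte[entry.getValueLength()];
          entry.getValueStream().readFully(value); // readFully fills the buffer;
          scanner.advance();                       // a bare read() may stop short
        }
        scanner.close();
        reader.close();
        in.close();
      }
    }

One detail worth noting from the deleted reader: HistoryFileReader.next() used a
bare dis.read(value), which does not guarantee the buffer is filled;
DataInputStream.readFully is the safer idiom.
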
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/MemoryApplicationHistoryStore.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/MemoryApplicationHistoryStore.java
deleted file mode 100644
index c226ad3..0000000
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/MemoryApplicationHistoryStore.java
+++ /dev/null
@@ -1,274 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.yarn.server.applicationhistoryservice;
-
-import java.io.IOException;
-import java.util.Collections;
-import java.util.HashMap;
-import java.util.Map;
-import java.util.concurrent.ConcurrentHashMap;
-import java.util.concurrent.ConcurrentMap;
-
-import org.apache.hadoop.classification.InterfaceAudience.Private;
-import org.apache.hadoop.classification.InterfaceStability.Unstable;
-import org.apache.hadoop.service.AbstractService;
-import org.apache.hadoop.yarn.api.records.ApplicationAttemptId;
-import org.apache.hadoop.yarn.api.records.ApplicationId;
-import org.apache.hadoop.yarn.api.records.ContainerId;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.records.ApplicationAttemptFinishData;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.records.ApplicationAttemptHistoryData;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.records.ApplicationAttemptStartData;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.records.ApplicationFinishData;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.records.ApplicationHistoryData;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.records.ApplicationStartData;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.records.ContainerFinishData;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.records.ContainerHistoryData;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.records.ContainerStartData;
-
-/**
- * In-memory implementation of {@link ApplicationHistoryStore}. This
- * implementation is for test purposes only. If users instantiate it improperly,
- * they may end up reading and writing history data in different memory
- * stores.
- * 
- */
-@Private
-@Unstable
-public class MemoryApplicationHistoryStore extends AbstractService implements
-    ApplicationHistoryStore {
-
-  private final ConcurrentMap<ApplicationId, ApplicationHistoryData> applicationData =
-      new ConcurrentHashMap<ApplicationId, ApplicationHistoryData>();
-  private final ConcurrentMap<ApplicationId, ConcurrentMap<ApplicationAttemptId, ApplicationAttemptHistoryData>> applicationAttemptData =
-      new ConcurrentHashMap<ApplicationId, ConcurrentMap<ApplicationAttemptId, ApplicationAttemptHistoryData>>();
-  private final ConcurrentMap<ApplicationAttemptId, ConcurrentMap<ContainerId, ContainerHistoryData>> containerData =
-      new ConcurrentHashMap<ApplicationAttemptId, ConcurrentMap<ContainerId, ContainerHistoryData>>();
-
-  public MemoryApplicationHistoryStore() {
-    super(MemoryApplicationHistoryStore.class.getName());
-  }
-
-  @Override
-  public Map<ApplicationId, ApplicationHistoryData> getAllApplications() {
-    return new HashMap<ApplicationId, ApplicationHistoryData>(applicationData);
-  }
-
-  @Override
-  public ApplicationHistoryData getApplication(ApplicationId appId) {
-    return applicationData.get(appId);
-  }
-
-  @Override
-  public Map<ApplicationAttemptId, ApplicationAttemptHistoryData>
-      getApplicationAttempts(ApplicationId appId) {
-    ConcurrentMap<ApplicationAttemptId, ApplicationAttemptHistoryData> subMap =
-        applicationAttemptData.get(appId);
-    if (subMap == null) {
-      return Collections
-        .<ApplicationAttemptId, ApplicationAttemptHistoryData> emptyMap();
-    } else {
-      return new HashMap<ApplicationAttemptId, ApplicationAttemptHistoryData>(
-        subMap);
-    }
-  }
-
-  @Override
-  public ApplicationAttemptHistoryData getApplicationAttempt(
-      ApplicationAttemptId appAttemptId) {
-    ConcurrentMap<ApplicationAttemptId, ApplicationAttemptHistoryData> subMap =
-        applicationAttemptData.get(appAttemptId.getApplicationId());
-    if (subMap == null) {
-      return null;
-    } else {
-      return subMap.get(appAttemptId);
-    }
-  }
-
-  @Override
-  public ContainerHistoryData getAMContainer(ApplicationAttemptId appAttemptId) {
-    ApplicationAttemptHistoryData appAttempt =
-        getApplicationAttempt(appAttemptId);
-    if (appAttempt == null || appAttempt.getMasterContainerId() == null) {
-      return null;
-    } else {
-      return getContainer(appAttempt.getMasterContainerId());
-    }
-  }
-
-  @Override
-  public ContainerHistoryData getContainer(ContainerId containerId) {
-    Map<ContainerId, ContainerHistoryData> subMap =
-        containerData.get(containerId.getApplicationAttemptId());
-    if (subMap == null) {
-      return null;
-    } else {
-      return subMap.get(containerId);
-    }
-  }
-
-  @Override
-  public Map<ContainerId, ContainerHistoryData> getContainers(
-      ApplicationAttemptId appAttemptId) throws IOException {
-    ConcurrentMap<ContainerId, ContainerHistoryData> subMap =
-        containerData.get(appAttemptId);
-    if (subMap == null) {
-      return Collections.<ContainerId, ContainerHistoryData> emptyMap();
-    } else {
-      return new HashMap<ContainerId, ContainerHistoryData>(subMap);
-    }
-  }
-
-  @Override
-  public void applicationStarted(ApplicationStartData appStart)
-      throws IOException {
-    ApplicationHistoryData oldData =
-        applicationData.putIfAbsent(appStart.getApplicationId(),
-          ApplicationHistoryData.newInstance(appStart.getApplicationId(),
-            appStart.getApplicationName(), appStart.getApplicationType(),
-            appStart.getQueue(), appStart.getUser(), appStart.getSubmitTime(),
-            appStart.getStartTime(), Long.MAX_VALUE, null, null, null));
-    if (oldData != null) {
-      throw new IOException("The start information of application "
-          + appStart.getApplicationId() + " is already stored.");
-    }
-  }
-
-  @Override
-  public void applicationFinished(ApplicationFinishData appFinish)
-      throws IOException {
-    ApplicationHistoryData data =
-        applicationData.get(appFinish.getApplicationId());
-    if (data == null) {
-      throw new IOException("The finish information of application "
-          + appFinish.getApplicationId() + " is stored before the start"
-          + " information.");
-    }
-    // Make the assumption that YarnApplicationState should not be null if
-    // the finish information is already recorded
-    if (data.getYarnApplicationState() != null) {
-      throw new IOException("The finish information of application "
-          + appFinish.getApplicationId() + " is already stored.");
-    }
-    data.setFinishTime(appFinish.getFinishTime());
-    data.setDiagnosticsInfo(appFinish.getDiagnosticsInfo());
-    data.setFinalApplicationStatus(appFinish.getFinalApplicationStatus());
-    data.setYarnApplicationState(appFinish.getYarnApplicationState());
-  }
-
-  @Override
-  public void applicationAttemptStarted(
-      ApplicationAttemptStartData appAttemptStart) throws IOException {
-    ConcurrentMap<ApplicationAttemptId, ApplicationAttemptHistoryData> subMap =
-        getSubMap(appAttemptStart.getApplicationAttemptId().getApplicationId());
-    ApplicationAttemptHistoryData oldData =
-        subMap.putIfAbsent(appAttemptStart.getApplicationAttemptId(),
-          ApplicationAttemptHistoryData.newInstance(
-            appAttemptStart.getApplicationAttemptId(),
-            appAttemptStart.getHost(), appAttemptStart.getRPCPort(),
-            appAttemptStart.getMasterContainerId(), null, null, null, null));
-    if (oldData != null) {
-      throw new IOException("The start information of application attempt "
-          + appAttemptStart.getApplicationAttemptId() + " is already stored.");
-    }
-  }
-
-  @Override
-  public void applicationAttemptFinished(
-      ApplicationAttemptFinishData appAttemptFinish) throws IOException {
-    ConcurrentMap<ApplicationAttemptId, ApplicationAttemptHistoryData> subMap =
-        getSubMap(appAttemptFinish.getApplicationAttemptId().getApplicationId());
-    ApplicationAttemptHistoryData data =
-        subMap.get(appAttemptFinish.getApplicationAttemptId());
-    if (data == null) {
-      throw new IOException("The finish information of application attempt "
-          + appAttemptFinish.getApplicationAttemptId() + " is stored before"
-          + " the start information.");
-    }
-    // Make the assumption that YarnApplicationAttemptState should not be null
-    // if the finish information is already recorded
-    if (data.getYarnApplicationAttemptState() != null) {
-      throw new IOException("The finish information of application attempt "
-          + appAttemptFinish.getApplicationAttemptId() + " is already stored.");
-    }
-    data.setTrackingURL(appAttemptFinish.getTrackingURL());
-    data.setDiagnosticsInfo(appAttemptFinish.getDiagnosticsInfo());
-    data
-      .setFinalApplicationStatus(appAttemptFinish.getFinalApplicationStatus());
-    data.setYarnApplicationAttemptState(appAttemptFinish
-      .getYarnApplicationAttemptState());
-  }
-
-  private ConcurrentMap<ApplicationAttemptId, ApplicationAttemptHistoryData>
-      getSubMap(ApplicationId appId) {
-    applicationAttemptData
-      .putIfAbsent(
-        appId,
-        new ConcurrentHashMap<ApplicationAttemptId, ApplicationAttemptHistoryData>());
-    return applicationAttemptData.get(appId);
-  }
-
-  @Override
-  public void containerStarted(ContainerStartData containerStart)
-      throws IOException {
-    ConcurrentMap<ContainerId, ContainerHistoryData> subMap =
-        getSubMap(containerStart.getContainerId().getApplicationAttemptId());
-    ContainerHistoryData oldData =
-        subMap.putIfAbsent(containerStart.getContainerId(),
-          ContainerHistoryData.newInstance(containerStart.getContainerId(),
-            containerStart.getAllocatedResource(),
-            containerStart.getAssignedNode(), containerStart.getPriority(),
-            containerStart.getStartTime(), Long.MAX_VALUE, null,
-            Integer.MAX_VALUE, null));
-    if (oldData != null) {
-      throw new IOException("The start information of container "
-          + containerStart.getContainerId() + " is already stored.");
-    }
-  }
-
-  @Override
-  public void containerFinished(ContainerFinishData containerFinish)
-      throws IOException {
-    ConcurrentMap<ContainerId, ContainerHistoryData> subMap =
-        getSubMap(containerFinish.getContainerId().getApplicationAttemptId());
-    ContainerHistoryData data = subMap.get(containerFinish.getContainerId());
-    if (data == null) {
-      throw new IOException("The finish information of container "
-          + containerFinish.getContainerId() + " is stored before"
-          + " the start information.");
-    }
-    // Make the assumption that ContainerState should not be null if
-    // the finish information is already recorded
-    if (data.getContainerState() != null) {
-      throw new IOException("The finish information of container "
-          + containerFinish.getContainerId() + " is already stored.");
-    }
-    data.setFinishTime(containerFinish.getFinishTime());
-    data.setDiagnosticsInfo(containerFinish.getDiagnosticsInfo());
-    data.setContainerExitStatus(containerFinish.getContainerExitStatus());
-    data.setContainerState(containerFinish.getContainerState());
-  }
-
-  private ConcurrentMap<ContainerId, ContainerHistoryData> getSubMap(
-      ApplicationAttemptId appAttemptId) {
-    containerData.putIfAbsent(appAttemptId,
-      new ConcurrentHashMap<ContainerId, ContainerHistoryData>());
-    return containerData.get(appAttemptId);
-  }
-
-}
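
The in-memory store above guards "start is recorded exactly once" with
ConcurrentMap.putIfAbsent rather than a check-then-act sequence, so two
concurrent writers cannot both succeed. A minimal sketch of that idiom with
illustrative names (not the YARN types):

    import java.io.IOException;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;

    final class WriteOnceRegistry<K, V> {
      private final ConcurrentMap<K, V> map = new ConcurrentHashMap<>();

      // putIfAbsent is an atomic check-and-insert: the first caller wins,
      // every later caller sees the prior value and fails loudly.
      void recordStart(K id, V data) throws IOException {
        V previous = map.putIfAbsent(id, data);
        if (previous != null) {
          throw new IOException("Start information of " + id + " is already stored.");
        }
      }
    }

The getSubMap helpers use the same idiom to create nested maps on demand; on
Java 8+ computeIfAbsent expresses this without allocating a throwaway map on
every call.
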
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/NullApplicationHistoryStore.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/NullApplicationHistoryStore.java
deleted file mode 100644
index 3660c10..0000000
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/NullApplicationHistoryStore.java
+++ /dev/null
@@ -1,127 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.yarn.server.applicationhistoryservice;
-
-import java.io.IOException;
-import java.util.Collections;
-import java.util.Map;
-
-import org.apache.hadoop.classification.InterfaceAudience.Private;
-import org.apache.hadoop.classification.InterfaceStability.Unstable;
-import org.apache.hadoop.service.AbstractService;
-import org.apache.hadoop.yarn.api.records.ApplicationAttemptId;
-import org.apache.hadoop.yarn.api.records.ApplicationId;
-import org.apache.hadoop.yarn.api.records.ContainerId;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.records.ApplicationAttemptFinishData;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.records.ApplicationAttemptHistoryData;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.records.ApplicationAttemptStartData;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.records.ApplicationFinishData;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.records.ApplicationHistoryData;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.records.ApplicationStartData;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.records.ContainerFinishData;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.records.ContainerHistoryData;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.records.ContainerStartData;
-
-/**
- * Dummy implementation of {@link ApplicationHistoryStore}. If this
- * implementation is used, no history data will be persisted.
- * 
- */
-@Unstable
-@Private
-public class NullApplicationHistoryStore extends AbstractService implements
-    ApplicationHistoryStore {
-
-  public NullApplicationHistoryStore() {
-    super(NullApplicationHistoryStore.class.getName());
-  }
-
-  @Override
-  public void applicationStarted(ApplicationStartData appStart)
-      throws IOException {
-  }
-
-  @Override
-  public void applicationFinished(ApplicationFinishData appFinish)
-      throws IOException {
-  }
-
-  @Override
-  public void applicationAttemptStarted(
-      ApplicationAttemptStartData appAttemptStart) throws IOException {
-  }
-
-  @Override
-  public void applicationAttemptFinished(
-      ApplicationAttemptFinishData appAttemptFinish) throws IOException {
-  }
-
-  @Override
-  public void containerStarted(ContainerStartData containerStart)
-      throws IOException {
-  }
-
-  @Override
-  public void containerFinished(ContainerFinishData containerFinish)
-      throws IOException {
-  }
-
-  @Override
-  public ApplicationHistoryData getApplication(ApplicationId appId)
-      throws IOException {
-    return null;
-  }
-
-  @Override
-  public Map<ApplicationId, ApplicationHistoryData> getAllApplications()
-      throws IOException {
-    return Collections.emptyMap();
-  }
-
-  @Override
-  public Map<ApplicationAttemptId, ApplicationAttemptHistoryData>
-      getApplicationAttempts(ApplicationId appId) throws IOException {
-    return Collections.emptyMap();
-  }
-
-  @Override
-  public ApplicationAttemptHistoryData getApplicationAttempt(
-      ApplicationAttemptId appAttemptId) throws IOException {
-    return null;
-  }
-
-  @Override
-  public ContainerHistoryData getContainer(ContainerId containerId)
-      throws IOException {
-    return null;
-  }
-
-  @Override
-  public ContainerHistoryData getAMContainer(ApplicationAttemptId appAttemptId)
-      throws IOException {
-    return null;
-  }
-
-  @Override
-  public Map<ContainerId, ContainerHistoryData> getContainers(
-      ApplicationAttemptId appAttemptId) throws IOException {
-    return Collections.emptyMap();
-  }
-
-}
\ No newline at end of file
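
NullApplicationHistoryStore is a null-object implementation: every write is a
no-op and every read returns null or an empty collection, so history persistence
can be switched off without null checks at call sites. A compact sketch of the
pattern with illustrative names:

    import java.util.Collections;
    import java.util.Map;

    interface HistoryStore {
      void applicationStarted(String appId);
      Map<String, String> getAllApplications();
    }

    // "Disabled" is just another implementation behind the shared interface;
    // callers never branch on whether history is enabled.
    final class NullHistoryStore implements HistoryStore {
      @Override public void applicationStarted(String appId) { /* no-op */ }
      @Override public Map<String, String> getAllApplications() {
        return Collections.emptyMap();  // empty rather than null collections
      }
    }
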
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/PhoenixHBaseAccessor.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/PhoenixHBaseAccessor.java
index 0626e8e..fc26f5d 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/PhoenixHBaseAccessor.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/PhoenixHBaseAccessor.java
@@ -50,11 +50,11 @@ import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.ti
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.ALTER_METRICS_METADATA_TABLE;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.ANOMALY_METRICS_TABLE_NAME;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.CONTAINER_METRICS_TABLE_NAME;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.CREATE_ANOMALY_METRICS_TABLE_SQL;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.CREATE_CONTAINER_METRICS_TABLE_SQL;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.CREATE_HOSTED_APPS_METADATA_TABLE_SQL;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.CREATE_INSTANCE_HOST_TABLE_SQL;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.CREATE_METRICS_AGGREGATE_TABLE_SQL;
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.CREATE_ANOMALY_METRICS_TABLE_SQL;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.CREATE_METRICS_CLUSTER_AGGREGATE_GROUPED_TABLE_SQL;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.CREATE_METRICS_CLUSTER_AGGREGATE_TABLE_SQL;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.CREATE_METRICS_METADATA_TABLE_SQL;
@@ -74,6 +74,7 @@ import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.ti
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.METRICS_CLUSTER_AGGREGATE_TABLE_NAME;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.METRICS_RECORD_TABLE_NAME;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.PHOENIX_TABLES;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.PHOENIX_TABLES_REGEX_PATTERN;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.TREND_ANOMALY_METRICS_TABLE_NAME;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.UPSERT_AGGREGATE_RECORD_SQL;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.UPSERT_ANOMALY_METRICS_SQL;
@@ -103,6 +104,7 @@ import java.util.HashSet;
 import java.util.Iterator;
 import java.util.List;
 import java.util.Map;
+import java.util.Optional;
 import java.util.Set;
 import java.util.TreeMap;
 import java.util.concurrent.ArrayBlockingQueue;
@@ -116,10 +118,14 @@ import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.DoNotRetryIOException;
-import org.apache.hadoop.hbase.HColumnDescriptor;
-import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.Admin;
+import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
+import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
 import org.apache.hadoop.hbase.client.Durability;
 import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.TableDescriptor;
+import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
 import org.apache.hadoop.hbase.util.RetryCounter;
 import org.apache.hadoop.hbase.util.RetryCounterFactory;
 import org.apache.hadoop.metrics2.sink.timeline.ContainerMetric;
@@ -153,7 +159,6 @@ import org.apache.phoenix.exception.PhoenixIOException;
 import org.codehaus.jackson.map.ObjectMapper;
 import org.codehaus.jackson.type.TypeReference;
 
-import com.google.common.collect.Maps;
 import com.google.common.collect.Multimap;
 
 
@@ -208,7 +213,7 @@ public class PhoenixHBaseAccessor {
   static final String BLOCKING_STORE_FILES_KEY =
     "hbase.hstore.blockingStoreFiles";
 
-  private HashMap<String, String> tableTTL = new HashMap<>();
+  private Map<String, Integer> tableTTL = new HashMap<>();
 
   private final TimelineMetricConfiguration configuration;
   private List<InternalMetricsSource> rawMetricsSources = new ArrayList<>();
@@ -253,15 +258,15 @@ public class PhoenixHBaseAccessor {
     this.timelineMetricsTablesDurability = metricsConf.get(TIMELINE_METRICS_AGGREGATE_TABLES_DURABILITY, "");
     this.timelineMetricsPrecisionTableDurability = metricsConf.get(TIMELINE_METRICS_PRECISION_TABLE_DURABILITY, "");
 
-    tableTTL.put(METRICS_RECORD_TABLE_NAME, metricsConf.get(PRECISION_TABLE_TTL, String.valueOf(1 * 86400)));  // 1 day
-    tableTTL.put(CONTAINER_METRICS_TABLE_NAME, metricsConf.get(CONTAINER_METRICS_TTL, String.valueOf(30 * 86400)));  // 30 days
-    tableTTL.put(METRICS_AGGREGATE_MINUTE_TABLE_NAME, metricsConf.get(HOST_MINUTE_TABLE_TTL, String.valueOf(7 * 86400))); //7 days
-    tableTTL.put(METRICS_AGGREGATE_HOURLY_TABLE_NAME, metricsConf.get(HOST_HOUR_TABLE_TTL, String.valueOf(30 * 86400))); //30 days
-    tableTTL.put(METRICS_AGGREGATE_DAILY_TABLE_NAME, metricsConf.get(HOST_DAILY_TABLE_TTL, String.valueOf(365 * 86400))); //1 year
-    tableTTL.put(METRICS_CLUSTER_AGGREGATE_TABLE_NAME, metricsConf.get(CLUSTER_SECOND_TABLE_TTL, String.valueOf(7 * 86400))); //7 days
-    tableTTL.put(METRICS_CLUSTER_AGGREGATE_MINUTE_TABLE_NAME, metricsConf.get(CLUSTER_MINUTE_TABLE_TTL, String.valueOf(30 * 86400))); //30 days
-    tableTTL.put(METRICS_CLUSTER_AGGREGATE_HOURLY_TABLE_NAME, metricsConf.get(CLUSTER_HOUR_TABLE_TTL, String.valueOf(365 * 86400))); //1 year
-    tableTTL.put(METRICS_CLUSTER_AGGREGATE_DAILY_TABLE_NAME, metricsConf.get(CLUSTER_DAILY_TABLE_TTL, String.valueOf(730 * 86400))); //2 years
+    tableTTL.put(METRICS_RECORD_TABLE_NAME, metricsConf.getInt(PRECISION_TABLE_TTL, 1 * 86400));  // 1 day
+    tableTTL.put(CONTAINER_METRICS_TABLE_NAME, metricsConf.getInt(CONTAINER_METRICS_TTL, 30 * 86400));  // 30 days
+    tableTTL.put(METRICS_AGGREGATE_MINUTE_TABLE_NAME, metricsConf.getInt(HOST_MINUTE_TABLE_TTL, 7 * 86400)); //7 days
+    tableTTL.put(METRICS_AGGREGATE_HOURLY_TABLE_NAME, metricsConf.getInt(HOST_HOUR_TABLE_TTL, 30 * 86400)); //30 days
+    tableTTL.put(METRICS_AGGREGATE_DAILY_TABLE_NAME, metricsConf.getInt(HOST_DAILY_TABLE_TTL, 365 * 86400)); //1 year
+    tableTTL.put(METRICS_CLUSTER_AGGREGATE_TABLE_NAME, metricsConf.getInt(CLUSTER_SECOND_TABLE_TTL, 7 * 86400)); //7 days
+    tableTTL.put(METRICS_CLUSTER_AGGREGATE_MINUTE_TABLE_NAME, metricsConf.getInt(CLUSTER_MINUTE_TABLE_TTL, 30 * 86400)); //30 days
+    tableTTL.put(METRICS_CLUSTER_AGGREGATE_HOURLY_TABLE_NAME, metricsConf.getInt(CLUSTER_HOUR_TABLE_TTL, 365 * 86400)); //1 year
+    tableTTL.put(METRICS_CLUSTER_AGGREGATE_DAILY_TABLE_NAME, metricsConf.getInt(CLUSTER_DAILY_TABLE_TTL, 730 * 86400)); //2 years
 
     if (cacheEnabled) {
       LOG.debug("Initialising and starting metrics cache committer thread...");
@@ -495,7 +500,7 @@ public class PhoenixHBaseAccessor {
    * @return @HBaseAdmin
    * @throws IOException
    */
-  HBaseAdmin getHBaseAdmin() throws IOException {
+  Admin getHBaseAdmin() throws IOException {
     return dataSource.getHBaseAdmin();
   }
 
@@ -612,55 +617,85 @@ public class PhoenixHBaseAccessor {
   }
 
   protected void initPoliciesAndTTL() {
-
-    HBaseAdmin hBaseAdmin = null;
+    Admin hBaseAdmin = null;
     try {
       hBaseAdmin = dataSource.getHBaseAdmin();
     } catch (IOException e) {
       LOG.warn("Unable to initialize HBaseAdmin for setting policies.", e);
     }
 
+    TableName[] tableNames = null;
     if (hBaseAdmin != null) {
+      try {
+        tableNames = hBaseAdmin.listTableNames(PHOENIX_TABLES_REGEX_PATTERN, false);
+      } catch (IOException e) {
+        LOG.warn("Unable to get table names from HBaseAdmin for setting policies.", e);
+        return;
+      }
+      if (tableNames == null || tableNames.length == 0) {
+        LOG.warn("Unable to get table names from HBaseAdmin for setting policies.");
+        return;
+      }
       for (String tableName : PHOENIX_TABLES) {
         try {
           boolean modifyTable = false;
-          HTableDescriptor tableDescriptor = hBaseAdmin.getTableDescriptor(tableName.getBytes());
+          Optional<TableName> tableNameOptional = Arrays.stream(tableNames)
+            .filter(t -> tableName.equals(t.getNameAsString())).findFirst();
+
+          TableDescriptor tableDescriptor = null;
+          if (tableNameOptional.isPresent()) {
+            tableDescriptor = hBaseAdmin.getTableDescriptor(tableNameOptional.get());
+          }
+
+          if (tableDescriptor == null) {
+            LOG.warn("Unable to get table descriptor for " + tableName);
+            continue;
+          }
+
+          // @TableDescriptor is immutable by design
+          TableDescriptorBuilder tableDescriptorBuilder =
+            TableDescriptorBuilder.newBuilder(tableDescriptor);
 
           //Set normalizer preferences
           boolean enableNormalizer = hbaseConf.getBoolean("hbase.normalizer.enabled", false);
           if (enableNormalizer ^ tableDescriptor.isNormalizationEnabled()) {
-            tableDescriptor.setNormalizationEnabled(enableNormalizer);
+            tableDescriptorBuilder.setNormalizationEnabled(enableNormalizer);
             LOG.info("Normalizer set to " + enableNormalizer + " for " + tableName);
             modifyTable = true;
           }
 
           //Set durability preferences
-          boolean durabilitySettingsModified = setDurabilityForTable(tableName, tableDescriptor);
+          boolean durabilitySettingsModified = setDurabilityForTable(tableName, tableDescriptorBuilder);
           modifyTable = modifyTable || durabilitySettingsModified;
 
           //Set compaction policy preferences
           boolean compactionPolicyModified = false;
-          compactionPolicyModified = setCompactionPolicyForTable(tableName, tableDescriptor);
+          compactionPolicyModified = setCompactionPolicyForTable(tableName, tableDescriptorBuilder);
           modifyTable = modifyTable || compactionPolicyModified;
 
           // Change TTL setting to match user configuration
-          HColumnDescriptor[] columnFamilies = tableDescriptor.getColumnFamilies();
-          if (columnFamilies != null) {
-            for (HColumnDescriptor family : columnFamilies) {
-              String ttlValue = family.getValue("TTL");
-              if (StringUtils.isEmpty(ttlValue) ||
-                  !ttlValue.trim().equals(tableTTL.get(tableName))) {
-                family.setValue("TTL", tableTTL.get(tableName));
+          ColumnFamilyDescriptor[] columnFamilyDescriptors = tableDescriptor.getColumnFamilies();
+          if (columnFamilyDescriptors != null) {
+            for (ColumnFamilyDescriptor familyDescriptor : columnFamilyDescriptors) {
+              int ttlValue = familyDescriptor.getTimeToLive();
+              if (ttlValue != tableTTL.get(tableName)) {
+                ColumnFamilyDescriptorBuilder familyDescriptorBuilder =
+                  ColumnFamilyDescriptorBuilder.newBuilder(familyDescriptor);
+
+                familyDescriptorBuilder.setTimeToLive(tableTTL.get(tableName));
+
                 LOG.info("Setting TTL on table: " + tableName + " to : " +
                   tableTTL.get(tableName) + " seconds.");
-                modifyTable = true;
+
+                hBaseAdmin.modifyColumnFamily(tableNameOptional.get(), familyDescriptorBuilder.build());
+                // modifyTable = true;
               }
             }
           }
 
           // Persist only if anything changed
           if (modifyTable) {
-            hBaseAdmin.modifyTable(tableName.getBytes(), tableDescriptor);
+            hBaseAdmin.modifyTable(tableNameOptional.get(), tableDescriptorBuilder.build());
           }
 
         } catch (IOException e) {
@@ -675,10 +710,10 @@ public class PhoenixHBaseAccessor {
     }
   }
 
-  private boolean setDurabilityForTable(String tableName, HTableDescriptor tableDescriptor) {
+  private boolean setDurabilityForTable(String tableName, TableDescriptorBuilder tableDescriptor) {
 
     boolean modifyTable = false;
-    //Set WAL preferences
+    // Set WAL preferences
     if (METRICS_RECORD_TABLE_NAME.equals(tableName)) {
       if (!timelineMetricsPrecisionTableDurability.isEmpty()) {
         LOG.info("Setting WAL option " + timelineMetricsPrecisionTableDurability + " for table : " + tableName);
@@ -723,7 +758,9 @@ public class PhoenixHBaseAccessor {
     return modifyTable;
   }
 
-  private boolean setCompactionPolicyForTable(String tableName, HTableDescriptor tableDescriptor) {
+  private boolean setCompactionPolicyForTable(String tableName, TableDescriptorBuilder tableDescriptorBuilder) {
+
+    boolean modifyTable = false;
 
     String compactionPolicyKey = metricsConf.get(TIMELINE_METRICS_HBASE_AGGREGATE_TABLE_COMPACTION_POLICY_KEY,
       HSTORE_ENGINE_CLASS);
@@ -738,38 +775,32 @@ public class PhoenixHBaseAccessor {
         FIFO_COMPACTION_POLICY_CLASS);
       blockingStoreFiles = hbaseConf.getInt(TIMELINE_METRICS_PRECISION_TABLE_HBASE_BLOCKING_STORE_FILES, 1000);
     }
-
-    Map<String, String> config = new HashMap(tableDescriptor.getConfiguration());
-
+    
     if (StringUtils.isEmpty(compactionPolicyKey) || StringUtils.isEmpty(compactionPolicyClass)) {
-      config.remove(HSTORE_COMPACTION_CLASS_KEY);
-      config.remove(HSTORE_ENGINE_CLASS);
-      //Default blockingStoreFiles = 300
-      setHbaseBlockingStoreFiles(tableDescriptor, tableName, 300);
+      // Default blockingStoreFiles = 300
+      modifyTable = setHbaseBlockingStoreFiles(tableDescriptorBuilder, tableName, 300);
     } else {
-      tableDescriptor.setConfiguration(compactionPolicyKey, compactionPolicyClass);
-      setHbaseBlockingStoreFiles(tableDescriptor, tableName, blockingStoreFiles);
-    }
-
-    if (!compactionPolicyKey.equals(HSTORE_ENGINE_CLASS)) {
-      tableDescriptor.removeConfiguration(HSTORE_ENGINE_CLASS);
-    }
-    if (!compactionPolicyKey.equals(HSTORE_COMPACTION_CLASS_KEY)) {
-      tableDescriptor.removeConfiguration(HSTORE_COMPACTION_CLASS_KEY);
+      tableDescriptorBuilder.setValue(compactionPolicyKey, compactionPolicyClass);
+      tableDescriptorBuilder.removeValue(HSTORE_ENGINE_CLASS.getBytes());
+      tableDescriptorBuilder.removeValue(HSTORE_COMPACTION_CLASS_KEY.getBytes());
+      setHbaseBlockingStoreFiles(tableDescriptorBuilder, tableName, blockingStoreFiles);
+      modifyTable = true;
     }
 
-    Map<String, String> newConfig = tableDescriptor.getConfiguration();
-    return !Maps.difference(config, newConfig).areEqual();
+    return modifyTable;
   }
 
-  private void setHbaseBlockingStoreFiles(HTableDescriptor tableDescriptor, String tableName, int value) {
+  private boolean setHbaseBlockingStoreFiles(TableDescriptorBuilder tableDescriptor,
+                                             String tableName, int value) {
     int blockingStoreFiles = hbaseConf.getInt(HBASE_BLOCKING_STORE_FILES, value);
     if (blockingStoreFiles != value) {
       blockingStoreFiles = value;
+      tableDescriptor.setValue(BLOCKING_STORE_FILES_KEY, String.valueOf(value));
+      LOG.info("Setting config property " + BLOCKING_STORE_FILES_KEY +
+        " = " + blockingStoreFiles + " for " + tableName);
+      return true;
     }
-    tableDescriptor.setConfiguration(BLOCKING_STORE_FILES_KEY, String.valueOf(value));
-    LOG.info("Setting config property " + BLOCKING_STORE_FILES_KEY +
-      " = " + blockingStoreFiles + " for " + tableName);
+    return false;
   }
 
   protected String getSplitPointsStr(String splitPoints) {
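
The PhoenixHBaseAccessor changes above track the HBase 2.x client API:
HTableDescriptor/HColumnDescriptor were mutable, while TableDescriptor and
ColumnFamilyDescriptor are immutable, so every change now goes through a builder
plus an explicit Admin call. A minimal sketch of the TTL update under that API;
the connection handling is simplified relative to the accessor, and
"METRIC_RECORD" is one of the Phoenix tables named above.

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class TtlUpdater {
      static void setTtl(Configuration conf, int ttlSeconds) throws IOException {
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          TableName table = TableName.valueOf("METRIC_RECORD");
          for (ColumnFamilyDescriptor family :
               admin.getDescriptor(table).getColumnFamilies()) {
            if (family.getTimeToLive() != ttlSeconds) {  // only modify on change
              ColumnFamilyDescriptor updated = ColumnFamilyDescriptorBuilder
                  .newBuilder(family)          // copy the immutable descriptor
                  .setTimeToLive(ttlSeconds)
                  .build();
              admin.modifyColumnFamily(table, updated);  // explicit server call
            }
          }
        }
      }
    }
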
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricConfiguration.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricConfiguration.java
index 395ec7b..7c6f62b 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricConfiguration.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricConfiguration.java
@@ -198,9 +198,6 @@ public class TimelineMetricConfiguration {
   public static final String CLUSTER_AGGREGATOR_DAILY_DISABLED =
     "timeline.metrics.cluster.aggregator.daily.disabled";
 
-  public static final String DISABLE_APPLICATION_TIMELINE_STORE =
-    "timeline.service.disable.application.timeline.store";
-
   public static final String WEBAPP_HTTP_ADDRESS =
     "timeline.metrics.service.webapp.address";
 
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/PhoenixConnectionProvider.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/PhoenixConnectionProvider.java
index cacbcfb..a7a20fd 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/PhoenixConnectionProvider.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/PhoenixConnectionProvider.java
@@ -1,5 +1,6 @@
 package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query;
 
+import org.apache.hadoop.hbase.client.Admin;
 import org.apache.hadoop.hbase.client.HBaseAdmin;
 
 import java.io.IOException;
@@ -27,5 +28,5 @@ public interface PhoenixConnectionProvider extends ConnectionProvider {
    * @return
    * @throws IOException
    */
-  HBaseAdmin getHBaseAdmin() throws IOException;
+  Admin getHBaseAdmin() throws IOException;
 }
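
Returning the Admin interface instead of the concrete HBaseAdmin class lets
providers hand back whatever administrative client the connection supplies. A
sketch of an implementation under the widened contract (the class name is
illustrative):

    import java.io.IOException;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;

    final class SimplePhoenixAdminProvider {
      private final Connection connection;

      SimplePhoenixAdminProvider(Connection connection) {
        this.connection = connection;
      }

      // Any Admin obtained from the live Connection satisfies the interface;
      // no cast to HBaseAdmin is needed, and the caller closes the Admin.
      public Admin getHBaseAdmin() throws IOException {
        return connection.getAdmin();
      }
    }
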
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/PhoenixTransactSQL.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/PhoenixTransactSQL.java
index 75a9d28..a1755f0 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/PhoenixTransactSQL.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/PhoenixTransactSQL.java
@@ -30,6 +30,7 @@ import java.sql.SQLException;
 import java.sql.Statement;
 import java.util.List;
 import java.util.concurrent.TimeUnit;
+import java.util.regex.Pattern;
 
 /**
  * Encapsulate all metrics related SQL queries.
@@ -424,6 +425,8 @@ public class PhoenixTransactSQL {
   public static final String METRICS_CLUSTER_AGGREGATE_DAILY_TABLE_NAME =
     "METRIC_AGGREGATE_DAILY";
 
+  public static final Pattern PHOENIX_TABLES_REGEX_PATTERN = Pattern.compile("METRIC_");
+
   public static final String[] PHOENIX_TABLES = {
     METRICS_RECORD_TABLE_NAME,
     METRICS_AGGREGATE_MINUTE_TABLE_NAME,
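
PHOENIX_TABLES_REGEX_PATTERN feeds Admin.listTableNames in PhoenixHBaseAccessor
above, which filters names server-side so each table in PHOENIX_TABLES can then
be resolved by exact match. A sketch of that lookup; note that whether
Pattern.compile("METRIC_") behaves as a substring filter or requires a full
match depends on the Admin implementation, so the anchored "METRIC_.*" used
here is the unambiguous spelling under full-match semantics.

    import java.io.IOException;
    import java.util.Arrays;
    import java.util.Optional;
    import java.util.regex.Pattern;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    public class PhoenixTableLookup {
      // Fetch candidate names once, then resolve a known table by exact name.
      static Optional<TableName> resolve(Admin admin, String wanted)
          throws IOException {
        TableName[] candidates =
            admin.listTableNames(Pattern.compile("METRIC_.*"), false);
        return Arrays.stream(candidates)
            .filter(t -> wanted.equals(t.getNameAsString()))
            .findFirst();
      }
    }
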
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/timeline/EntityIdentifier.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/timeline/EntityIdentifier.java
deleted file mode 100644
index 4b202d8..0000000
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/timeline/EntityIdentifier.java
+++ /dev/null
@@ -1,100 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.yarn.server.applicationhistoryservice.timeline;
-
-import org.apache.hadoop.classification.InterfaceAudience.Private;
-import org.apache.hadoop.classification.InterfaceStability.Unstable;
-
-/**
- * The unique identifier for an entity
- */
-@Private
-@Unstable
-public class EntityIdentifier implements Comparable<EntityIdentifier> {
-
-  private String id;
-  private String type;
-
-  public EntityIdentifier(String id, String type) {
-    this.id = id;
-    this.type = type;
-  }
-
-  /**
-   * Get the entity Id.
-   * @return The entity Id.
-   */
-  public String getId() {
-    return id;
-  }
-
-  /**
-   * Get the entity type.
-   * @return The entity type.
-   */
-  public String getType() {
-    return type;
-  }
-
-  @Override
-  public int compareTo(EntityIdentifier other) {
-    int c = type.compareTo(other.type);
-    if (c != 0) return c;
-    return id.compareTo(other.id);
-  }
-
-  @Override
-  public int hashCode() {
-    // generated by eclipse
-    final int prime = 31;
-    int result = 1;
-    result = prime * result + ((id == null) ? 0 : id.hashCode());
-    result = prime * result + ((type == null) ? 0 : type.hashCode());
-    return result;
-  }
-
-  @Override
-  public boolean equals(Object obj) {
-    // generated by eclipse
-    if (this == obj)
-      return true;
-    if (obj == null)
-      return false;
-    if (getClass() != obj.getClass())
-      return false;
-    EntityIdentifier other = (EntityIdentifier) obj;
-    if (id == null) {
-      if (other.id != null)
-        return false;
-    } else if (!id.equals(other.id))
-      return false;
-    if (type == null) {
-      if (other.type != null)
-        return false;
-    } else if (!type.equals(other.type))
-      return false;
-    return true;
-  }
-
-  @Override
-  public String toString() {
-    return "{ id: " + id + ", type: "+ type + " }";
-  }
-
-}
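
The removed EntityIdentifier is a plain (id, type) value class whose Eclipse-generated equals/hashCode can be expressed far more compactly with java.util.Objects. For reference only (this sketch is not part of the patch), an equivalent would be:

    import java.util.Objects;

    public final class EntityIdentifier implements Comparable<EntityIdentifier> {
      private final String id;
      private final String type;

      public EntityIdentifier(String id, String type) {
        this.id = id;
        this.type = type;
      }

      @Override
      public int compareTo(EntityIdentifier other) {
        // Sort by type first, then by id, as the original did.
        int c = type.compareTo(other.type);
        return c != 0 ? c : id.compareTo(other.id);
      }

      @Override
      public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof EntityIdentifier)) return false;
        EntityIdentifier that = (EntityIdentifier) o;
        return Objects.equals(id, that.id) && Objects.equals(type, that.type);
      }

      @Override
      public int hashCode() {
        return Objects.hash(id, type);
      }
    }
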
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/timeline/LeveldbTimelineStore.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/timeline/LeveldbTimelineStore.java
deleted file mode 100644
index edd4842..0000000
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/timeline/LeveldbTimelineStore.java
+++ /dev/null
@@ -1,1473 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.yarn.server.applicationhistoryservice.timeline;
-
-import java.io.ByteArrayOutputStream;
-import java.io.File;
-import java.io.IOException;
-import java.util.ArrayList;
-import java.util.Collection;
-import java.util.Collections;
-import java.util.Comparator;
-import java.util.EnumSet;
-import java.util.HashMap;
-import java.util.List;
-import java.util.Map;
-import java.util.Map.Entry;
-import java.util.Set;
-import java.util.SortedSet;
-import java.util.TreeMap;
-import java.util.concurrent.locks.ReentrantLock;
-import java.util.concurrent.locks.ReentrantReadWriteLock;
-
-import com.google.common.annotations.VisibleForTesting;
-import org.apache.commons.collections.map.LRUMap;
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-import org.apache.hadoop.classification.InterfaceAudience;
-import org.apache.hadoop.classification.InterfaceStability;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.io.IOUtils;
-import org.apache.hadoop.io.WritableComparator;
-import org.apache.hadoop.service.AbstractService;
-import org.apache.hadoop.yarn.api.records.timeline.TimelineEntities;
-import org.apache.hadoop.yarn.api.records.timeline.TimelineEntity;
-import org.apache.hadoop.yarn.api.records.timeline.TimelineEvent;
-import org.apache.hadoop.yarn.api.records.timeline.TimelineEvents;
-import org.apache.hadoop.yarn.api.records.timeline.TimelinePutResponse;
-import org.apache.hadoop.yarn.api.records.timeline.TimelineEvents.EventsOfOneEntity;
-import org.apache.hadoop.yarn.api.records.timeline.TimelinePutResponse.TimelinePutError;
-import org.apache.hadoop.yarn.conf.YarnConfiguration;
-import org.fusesource.leveldbjni.JniDBFactory;
-import org.iq80.leveldb.DB;
-import org.iq80.leveldb.DBIterator;
-import org.iq80.leveldb.Options;
-import org.iq80.leveldb.ReadOptions;
-import org.iq80.leveldb.WriteBatch;
-import org.iq80.leveldb.WriteOptions;
-
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.timeline.GenericObjectMapper.readReverseOrderedLong;
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.timeline.GenericObjectMapper.writeReverseOrderedLong;
-
-/**
- * <p>An implementation of an application timeline store backed by leveldb.</p>
- *
- * <p>There are three sections of the db, the start time section,
- * the entity section, and the indexed entity section.</p>
- *
- * <p>The start time section is used to retrieve the unique start time for
- * a given entity. Its values each contain a start time while its keys are of
- * the form:</p>
- * <pre>
- *   START_TIME_LOOKUP_PREFIX + entity type + entity id</pre>
- *
- * <p>The entity section is ordered by entity type, then entity start time
- * descending, then entity ID. There are four sub-sections of the entity
- * section: events, primary filters, related entities,
- * and other info. The event entries have event info serialized into their
- * values. The other info entries have values corresponding to the values of
- * the other info name/value map for the entry (note the names are contained
- * in the key). All other entries have empty values. The key structure is as
- * follows:</p>
- * <pre>
- *   ENTITY_ENTRY_PREFIX + entity type + revstarttime + entity id
- *
- *   ENTITY_ENTRY_PREFIX + entity type + revstarttime + entity id +
- *     EVENTS_COLUMN + reveventtimestamp + eventtype
- *
- *   ENTITY_ENTRY_PREFIX + entity type + revstarttime + entity id +
- *     PRIMARY_FILTERS_COLUMN + name + value
- *
- *   ENTITY_ENTRY_PREFIX + entity type + revstarttime + entity id +
- *     OTHER_INFO_COLUMN + name
- *
- *   ENTITY_ENTRY_PREFIX + entity type + revstarttime + entity id +
- *     RELATED_ENTITIES_COLUMN + relatedentity type + relatedentity id
- *
- *   ENTITY_ENTRY_PREFIX + entity type + revstarttime + entity id +
- *     INVISIBLE_REVERSE_RELATED_ENTITIES_COLUMN + relatedentity type +
- *     relatedentity id</pre>
- *
- * <p>The indexed entity section contains a primary filter name and primary
- * filter value as the prefix. Within a given name/value, entire entity
- * entries are stored in the same format as described in the entity section
- * above (below, "key" represents any one of the possible entity entry keys
- * described above).</p>
- * <pre>
- *   INDEXED_ENTRY_PREFIX + primaryfilter name + primaryfilter value +
- *     key</pre>
- */
-@InterfaceAudience.Private
-@InterfaceStability.Unstable
-public class LeveldbTimelineStore extends AbstractService
-    implements TimelineStore {
-  private static final Log LOG = LogFactory
-      .getLog(LeveldbTimelineStore.class);
-
-  private static final String FILENAME = "leveldb-timeline-store.ldb";
-
-  private static final byte[] START_TIME_LOOKUP_PREFIX = "k".getBytes();
-  private static final byte[] ENTITY_ENTRY_PREFIX = "e".getBytes();
-  private static final byte[] INDEXED_ENTRY_PREFIX = "i".getBytes();
-
-  private static final byte[] EVENTS_COLUMN = "e".getBytes();
-  private static final byte[] PRIMARY_FILTERS_COLUMN = "f".getBytes();
-  private static final byte[] OTHER_INFO_COLUMN = "i".getBytes();
-  private static final byte[] RELATED_ENTITIES_COLUMN = "r".getBytes();
-  private static final byte[] INVISIBLE_REVERSE_RELATED_ENTITIES_COLUMN =
-      "z".getBytes();
-
-  private static final byte[] EMPTY_BYTES = new byte[0];
-
-  private Map<EntityIdentifier, StartAndInsertTime> startTimeWriteCache;
-  private Map<EntityIdentifier, Long> startTimeReadCache;
-
-  /**
-   * Per-entity locks are obtained when writing.
-   */
-  private final LockMap<EntityIdentifier> writeLocks =
-      new LockMap<EntityIdentifier>();
-
-  private final ReentrantReadWriteLock deleteLock =
-      new ReentrantReadWriteLock();
-
-  private DB db;
-
-  private Thread deletionThread;
-
-  public LeveldbTimelineStore() {
-    super(LeveldbTimelineStore.class.getName());
-  }
-
-  @Override
-  @SuppressWarnings("unchecked")
-  protected void serviceInit(Configuration conf) throws Exception {
-    Options options = new Options();
-    options.createIfMissing(true);
-    options.cacheSize(conf.getLong(
-        YarnConfiguration.TIMELINE_SERVICE_LEVELDB_READ_CACHE_SIZE,
-        YarnConfiguration.DEFAULT_TIMELINE_SERVICE_LEVELDB_READ_CACHE_SIZE));
-    JniDBFactory factory = new JniDBFactory();
-    String path = conf.get(YarnConfiguration.TIMELINE_SERVICE_LEVELDB_PATH);
-    File p = new File(path);
-    if (!p.exists()) {
-      if (!p.mkdirs()) {
-        throw new IOException("Couldn't create directory for leveldb " +
-            "timeline store " + path);
-      }
-    }
-    LOG.info("Using leveldb path " + path);
-    db = factory.open(new File(path, FILENAME), options);
-    startTimeWriteCache =
-        Collections.synchronizedMap(new LRUMap(getStartTimeWriteCacheSize(
-            conf)));
-    startTimeReadCache =
-        Collections.synchronizedMap(new LRUMap(getStartTimeReadCacheSize(
-            conf)));
-
-    if (conf.getBoolean(YarnConfiguration.TIMELINE_SERVICE_TTL_ENABLE, true)) {
-      deletionThread = new EntityDeletionThread(conf);
-      deletionThread.start();
-    }
-
-    super.serviceInit(conf);
-  }
-
-  @Override
-  protected void serviceStop() throws Exception {
-    if (deletionThread != null) {
-      deletionThread.interrupt();
-      LOG.info("Waiting for deletion thread to complete its current action");
-      try {
-        deletionThread.join();
-      } catch (InterruptedException e) {
-        LOG.warn("Interrupted while waiting for deletion thread to complete," +
-            " closing db now", e);
-      }
-    }
-    IOUtils.cleanup(LOG, db);
-    super.serviceStop();
-  }
-
-  private static class StartAndInsertTime {
-    final long startTime;
-    final long insertTime;
-
-    public StartAndInsertTime(long startTime, long insertTime) {
-      this.startTime = startTime;
-      this.insertTime = insertTime;
-    }
-  }
-
-  private class EntityDeletionThread extends Thread {
-    private final long ttl;
-    private final long ttlInterval;
-
-    public EntityDeletionThread(Configuration conf) {
-      ttl  = conf.getLong(YarnConfiguration.TIMELINE_SERVICE_TTL_MS,
-          YarnConfiguration.DEFAULT_TIMELINE_SERVICE_TTL_MS);
-      ttlInterval = conf.getLong(
-          YarnConfiguration.TIMELINE_SERVICE_LEVELDB_TTL_INTERVAL_MS,
-          YarnConfiguration.DEFAULT_TIMELINE_SERVICE_LEVELDB_TTL_INTERVAL_MS);
-      LOG.info("Starting deletion thread with ttl " + ttl + " and cycle " +
-          "interval " + ttlInterval);
-    }
-
-    @Override
-    public void run() {
-      while (true) {
-        long timestamp = System.currentTimeMillis() - ttl;
-        try {
-          discardOldEntities(timestamp);
-          Thread.sleep(ttlInterval);
-        } catch (IOException e) {
-          LOG.error(e);
-        } catch (InterruptedException e) {
-          LOG.info("Deletion thread received interrupt, exiting");
-          break;
-        }
-      }
-    }
-  }
-
-  private static class LockMap<K> {
-    private static class CountingReentrantLock<K> extends ReentrantLock {
-      private static final long serialVersionUID = 1L;
-      private int count;
-      private K key;
-
-      CountingReentrantLock(K key) {
-        super();
-        this.count = 0;
-        this.key = key;
-      }
-    }
-
-    private Map<K, CountingReentrantLock<K>> locks =
-        new HashMap<K, CountingReentrantLock<K>>();
-
-    synchronized CountingReentrantLock<K> getLock(K key) {
-      CountingReentrantLock<K> lock = locks.get(key);
-      if (lock == null) {
-        lock = new CountingReentrantLock<K>(key);
-        locks.put(key, lock);
-      }
-
-      lock.count++;
-      return lock;
-    }
-
-    synchronized void returnLock(CountingReentrantLock<K> lock) {
-      if (lock.count == 0) {
-        throw new IllegalStateException("Returned lock more times than it " +
-            "was retrieved");
-      }
-      lock.count--;
-
-      if (lock.count == 0) {
-        locks.remove(lock.key);
-      }
-    }
-  }
-
-  private static class KeyBuilder {
-    private static final int MAX_NUMBER_OF_KEY_ELEMENTS = 10;
-    private byte[][] b;
-    private boolean[] useSeparator;
-    private int index;
-    private int length;
-
-    public KeyBuilder(int size) {
-      b = new byte[size][];
-      useSeparator = new boolean[size];
-      index = 0;
-      length = 0;
-    }
-
-    public static KeyBuilder newInstance() {
-      return new KeyBuilder(MAX_NUMBER_OF_KEY_ELEMENTS);
-    }
-
-    public KeyBuilder add(String s) {
-      return add(s.getBytes(), true);
-    }
-
-    public KeyBuilder add(byte[] t) {
-      return add(t, false);
-    }
-
-    public KeyBuilder add(byte[] t, boolean sep) {
-      b[index] = t;
-      useSeparator[index] = sep;
-      length += t.length;
-      if (sep) {
-        length++;
-      }
-      index++;
-      return this;
-    }
-
-    public byte[] getBytes() throws IOException {
-      ByteArrayOutputStream baos = new ByteArrayOutputStream(length);
-      for (int i = 0; i < index; i++) {
-        baos.write(b[i]);
-        if (i < index-1 && useSeparator[i]) {
-          baos.write(0x0);
-        }
-      }
-      return baos.toByteArray();
-    }
-
-    public byte[] getBytesForLookup() throws IOException {
-      ByteArrayOutputStream baos = new ByteArrayOutputStream(length);
-      for (int i = 0; i < index; i++) {
-        baos.write(b[i]);
-        if (useSeparator[i]) {
-          baos.write(0x0);
-        }
-      }
-      return baos.toByteArray();
-    }
-  }
-
-  private static class KeyParser {
-    private final byte[] b;
-    private int offset;
-
-    public KeyParser(byte[] b, int offset) {
-      this.b = b;
-      this.offset = offset;
-    }
-
-    public String getNextString() throws IOException {
-      if (offset >= b.length) {
-        throw new IOException(
-            "tried to read nonexistent string from byte array");
-      }
-      int i = 0;
-      while (offset+i < b.length && b[offset+i] != 0x0) {
-        i++;
-      }
-      String s = new String(b, offset, i);
-      offset = offset + i + 1;
-      return s;
-    }
-
-    public long getNextLong() throws IOException {
-      if (offset+8 >= b.length) {
-        throw new IOException("byte array ran out when trying to read long");
-      }
-      long l = readReverseOrderedLong(b, offset);
-      offset += 8;
-      return l;
-    }
-
-    public int getOffset() {
-      return offset;
-    }
-  }
-
-  @Override
-  public TimelineEntity getEntity(String entityId, String entityType,
-      EnumSet<Field> fields) throws IOException {
-    Long revStartTime = getStartTimeLong(entityId, entityType);
-    if (revStartTime == null) {
-      return null;
-    }
-    byte[] prefix = KeyBuilder.newInstance().add(ENTITY_ENTRY_PREFIX)
-        .add(entityType).add(writeReverseOrderedLong(revStartTime))
-        .add(entityId).getBytesForLookup();
-
-    DBIterator iterator = null;
-    try {
-      iterator = db.iterator();
-      iterator.seek(prefix);
-
-      return getEntity(entityId, entityType, revStartTime, fields, iterator,
-          prefix, prefix.length);
-    } finally {
-      IOUtils.cleanup(LOG, iterator);
-    }
-  }
-
-  /**
-   * Read entity from a db iterator.  If no information is found in the
-   * specified fields for this entity, return null.
-   */
-  private static TimelineEntity getEntity(String entityId, String entityType,
-      Long startTime, EnumSet<Field> fields, DBIterator iterator,
-      byte[] prefix, int prefixlen) throws IOException {
-    if (fields == null) {
-      fields = EnumSet.allOf(Field.class);
-    }
-
-    TimelineEntity entity = new TimelineEntity();
-    boolean events = false;
-    boolean lastEvent = false;
-    if (fields.contains(Field.EVENTS)) {
-      events = true;
-    } else if (fields.contains(Field.LAST_EVENT_ONLY)) {
-      lastEvent = true;
-    } else {
-      entity.setEvents(null);
-    }
-    boolean relatedEntities = false;
-    if (fields.contains(Field.RELATED_ENTITIES)) {
-      relatedEntities = true;
-    } else {
-      entity.setRelatedEntities(null);
-    }
-    boolean primaryFilters = false;
-    if (fields.contains(Field.PRIMARY_FILTERS)) {
-      primaryFilters = true;
-    } else {
-      entity.setPrimaryFilters(null);
-    }
-    boolean otherInfo = false;
-    if (fields.contains(Field.OTHER_INFO)) {
-      otherInfo = true;
-    } else {
-      entity.setOtherInfo(null);
-    }
-
-    // iterate through the entity's entry, parsing information if it is part
-    // of a requested field
-    for (; iterator.hasNext(); iterator.next()) {
-      byte[] key = iterator.peekNext().getKey();
-      if (!prefixMatches(prefix, prefixlen, key)) {
-        break;
-      }
-      if (key.length == prefixlen) {
-        continue;
-      }
-      if (key[prefixlen] == PRIMARY_FILTERS_COLUMN[0]) {
-        if (primaryFilters) {
-          addPrimaryFilter(entity, key,
-              prefixlen + PRIMARY_FILTERS_COLUMN.length);
-        }
-      } else if (key[prefixlen] == OTHER_INFO_COLUMN[0]) {
-        if (otherInfo) {
-          entity.addOtherInfo(parseRemainingKey(key,
-              prefixlen + OTHER_INFO_COLUMN.length),
-              GenericObjectMapper.read(iterator.peekNext().getValue()));
-        }
-      } else if (key[prefixlen] == RELATED_ENTITIES_COLUMN[0]) {
-        if (relatedEntities) {
-          addRelatedEntity(entity, key,
-              prefixlen + RELATED_ENTITIES_COLUMN.length);
-        }
-      } else if (key[prefixlen] == EVENTS_COLUMN[0]) {
-        if (events || (lastEvent &&
-            entity.getEvents().size() == 0)) {
-          TimelineEvent event = getEntityEvent(null, key, prefixlen +
-              EVENTS_COLUMN.length, iterator.peekNext().getValue());
-          if (event != null) {
-            entity.addEvent(event);
-          }
-        }
-      } else {
-        if (key[prefixlen] !=
-            INVISIBLE_REVERSE_RELATED_ENTITIES_COLUMN[0]) {
-          LOG.warn(String.format("Found unexpected column for entity %s of " +
-              "type %s (0x%02x)", entityId, entityType, key[prefixlen]));
-        }
-      }
-    }
-
-    entity.setEntityId(entityId);
-    entity.setEntityType(entityType);
-    entity.setStartTime(startTime);
-
-    return entity;
-  }
-
-  @Override
-  public TimelineEvents getEntityTimelines(String entityType,
-      SortedSet<String> entityIds, Long limit, Long windowStart,
-      Long windowEnd, Set<String> eventType) throws IOException {
-    TimelineEvents events = new TimelineEvents();
-    if (entityIds == null || entityIds.isEmpty()) {
-      return events;
-    }
-    // create a lexicographically-ordered map from start time to entities
-    Map<byte[], List<EntityIdentifier>> startTimeMap = new TreeMap<byte[],
-        List<EntityIdentifier>>(new Comparator<byte[]>() {
-          @Override
-          public int compare(byte[] o1, byte[] o2) {
-            return WritableComparator.compareBytes(o1, 0, o1.length, o2, 0,
-                o2.length);
-          }
-        });
-    DBIterator iterator = null;
-    try {
-      // look up start times for the specified entities
-      // skip entities with no start time
-      for (String entityId : entityIds) {
-        byte[] startTime = getStartTime(entityId, entityType);
-        if (startTime != null) {
-          List<EntityIdentifier> entities = startTimeMap.get(startTime);
-          if (entities == null) {
-            entities = new ArrayList<EntityIdentifier>();
-            startTimeMap.put(startTime, entities);
-          }
-          entities.add(new EntityIdentifier(entityId, entityType));
-        }
-      }
-      for (Entry<byte[], List<EntityIdentifier>> entry :
-          startTimeMap.entrySet()) {
-        // look up the events matching the given parameters (limit,
-        // start time, end time, event types) for entities whose start times
-        // were found and add the entities to the return list
-        byte[] revStartTime = entry.getKey();
-        for (EntityIdentifier entityIdentifier : entry.getValue()) {
-          EventsOfOneEntity entity = new EventsOfOneEntity();
-          entity.setEntityId(entityIdentifier.getId());
-          entity.setEntityType(entityType);
-          events.addEvent(entity);
-          KeyBuilder kb = KeyBuilder.newInstance().add(ENTITY_ENTRY_PREFIX)
-              .add(entityType).add(revStartTime).add(entityIdentifier.getId())
-              .add(EVENTS_COLUMN);
-          byte[] prefix = kb.getBytesForLookup();
-          if (windowEnd == null) {
-            windowEnd = Long.MAX_VALUE;
-          }
-          byte[] revts = writeReverseOrderedLong(windowEnd);
-          kb.add(revts);
-          byte[] first = kb.getBytesForLookup();
-          byte[] last = null;
-          if (windowStart != null) {
-            last = KeyBuilder.newInstance().add(prefix)
-                .add(writeReverseOrderedLong(windowStart)).getBytesForLookup();
-          }
-          if (limit == null) {
-            limit = DEFAULT_LIMIT;
-          }
-          iterator = db.iterator();
-          for (iterator.seek(first); entity.getEvents().size() < limit &&
-              iterator.hasNext(); iterator.next()) {
-            byte[] key = iterator.peekNext().getKey();
-            if (!prefixMatches(prefix, prefix.length, key) || (last != null &&
-                WritableComparator.compareBytes(key, 0, key.length, last, 0,
-                    last.length) > 0)) {
-              break;
-            }
-            TimelineEvent event = getEntityEvent(eventType, key, prefix.length,
-                iterator.peekNext().getValue());
-            if (event != null) {
-              entity.addEvent(event);
-            }
-          }
-        }
-      }
-    } finally {
-      IOUtils.cleanup(LOG, iterator);
-    }
-    return events;
-  }
-
-  /**
-   * Returns true if the byte array begins with the specified prefix.
-   */
-  private static boolean prefixMatches(byte[] prefix, int prefixlen,
-      byte[] b) {
-    if (b.length < prefixlen) {
-      return false;
-    }
-    return WritableComparator.compareBytes(prefix, 0, prefixlen, b, 0,
-        prefixlen) == 0;
-  }
-
-  @Override
-  public TimelineEntities getEntities(String entityType,
-      Long limit, Long windowStart, Long windowEnd, String fromId, Long fromTs,
-      NameValuePair primaryFilter, Collection<NameValuePair> secondaryFilters,
-      EnumSet<Field> fields) throws IOException {
-    if (primaryFilter == null) {
-      // if no primary filter is specified, prefix the lookup with
-      // ENTITY_ENTRY_PREFIX
-      return getEntityByTime(ENTITY_ENTRY_PREFIX, entityType, limit,
-          windowStart, windowEnd, fromId, fromTs, secondaryFilters, fields);
-    } else {
-      // if a primary filter is specified, prefix the lookup with
-      // INDEXED_ENTRY_PREFIX + primaryFilterName + primaryFilterValue +
-      // ENTITY_ENTRY_PREFIX
-      byte[] base = KeyBuilder.newInstance().add(INDEXED_ENTRY_PREFIX)
-          .add(primaryFilter.getName())
-          .add(GenericObjectMapper.write(primaryFilter.getValue()), true)
-          .add(ENTITY_ENTRY_PREFIX).getBytesForLookup();
-      return getEntityByTime(base, entityType, limit, windowStart, windowEnd,
-          fromId, fromTs, secondaryFilters, fields);
-    }
-  }
-
-  /**
-   * Retrieves a list of entities satisfying given parameters.
-   *
-   * @param base A byte array prefix for the lookup
-   * @param entityType The type of the entity
-   * @param limit A limit on the number of entities to return
-   * @param starttime The earliest entity start time to retrieve (exclusive)
-   * @param endtime The latest entity start time to retrieve (inclusive)
-   * @param fromId Retrieve entities starting with this entity
-   * @param fromTs Ignore entities with insert timestamp later than this ts
-   * @param secondaryFilters Filter pairs that the entities should match
-   * @param fields The set of fields to retrieve
-   * @return A list of entities
-   * @throws IOException
-   */
-  private TimelineEntities getEntityByTime(byte[] base,
-      String entityType, Long limit, Long starttime, Long endtime,
-      String fromId, Long fromTs, Collection<NameValuePair> secondaryFilters,
-      EnumSet<Field> fields) throws IOException {
-    DBIterator iterator = null;
-    try {
-      KeyBuilder kb = KeyBuilder.newInstance().add(base).add(entityType);
-      // only db keys matching the prefix (base + entity type) will be parsed
-      byte[] prefix = kb.getBytesForLookup();
-      if (endtime == null) {
-        // if end time is null, place no restriction on end time
-        endtime = Long.MAX_VALUE;
-      }
-      // construct a first key that will be seeked to using end time or fromId
-      byte[] first = null;
-      if (fromId != null) {
-        Long fromIdStartTime = getStartTimeLong(fromId, entityType);
-        if (fromIdStartTime == null) {
-          // no start time for provided id, so return empty entities
-          return new TimelineEntities();
-        }
-        if (fromIdStartTime <= endtime) {
-          // if provided id's start time falls before the end of the window,
-          // use it to construct the seek key
-          first = kb.add(writeReverseOrderedLong(fromIdStartTime))
-              .add(fromId).getBytesForLookup();
-        }
-      }
-      // if seek key wasn't constructed using fromId, construct it using end ts
-      if (first == null) {
-        first = kb.add(writeReverseOrderedLong(endtime)).getBytesForLookup();
-      }
-      byte[] last = null;
-      if (starttime != null) {
-        // if start time is not null, set a last key that will not be
-        // iterated past
-        last = KeyBuilder.newInstance().add(base).add(entityType)
-            .add(writeReverseOrderedLong(starttime)).getBytesForLookup();
-      }
-      if (limit == null) {
-        // if limit is not specified, use the default
-        limit = DEFAULT_LIMIT;
-      }
-
-      TimelineEntities entities = new TimelineEntities();
-      iterator = db.iterator();
-      iterator.seek(first);
-      // iterate until one of the following conditions is met: limit is
-      // reached, there are no more keys, the key prefix no longer matches,
-      // or a start time has been specified and reached/exceeded
-      while (entities.getEntities().size() < limit && iterator.hasNext()) {
-        byte[] key = iterator.peekNext().getKey();
-        if (!prefixMatches(prefix, prefix.length, key) || (last != null &&
-            WritableComparator.compareBytes(key, 0, key.length, last, 0,
-                last.length) > 0)) {
-          break;
-        }
-        // read the start time and entity id from the current key
-        KeyParser kp = new KeyParser(key, prefix.length);
-        Long startTime = kp.getNextLong();
-        String entityId = kp.getNextString();
-
-        if (fromTs != null) {
-          long insertTime = readReverseOrderedLong(iterator.peekNext()
-              .getValue(), 0);
-          if (insertTime > fromTs) {
-            byte[] firstKey = key;
-            while (iterator.hasNext() && prefixMatches(firstKey,
-                kp.getOffset(), key)) {
-              iterator.next();
-              key = iterator.peekNext().getKey();
-            }
-            continue;
-          }
-        }
-
-        // parse the entity that owns this key, iterating over all keys for
-        // the entity
-        TimelineEntity entity = getEntity(entityId, entityType, startTime,
-            fields, iterator, key, kp.getOffset());
-        // determine if the retrieved entity matches the provided secondary
-        // filters, and if so add it to the list of entities to return
-        boolean filterPassed = true;
-        if (secondaryFilters != null) {
-          for (NameValuePair filter : secondaryFilters) {
-            Object v = entity.getOtherInfo().get(filter.getName());
-            if (v == null) {
-              Set<Object> vs = entity.getPrimaryFilters()
-                  .get(filter.getName());
-              if (vs != null && !vs.contains(filter.getValue())) {
-                filterPassed = false;
-                break;
-              }
-            } else if (!v.equals(filter.getValue())) {
-              filterPassed = false;
-              break;
-            }
-          }
-        }
-        if (filterPassed) {
-          entities.addEntity(entity);
-        }
-      }
-      return entities;
-    } finally {
-      IOUtils.cleanup(LOG, iterator);
-    }
-  }
-
-  /**
-   * Put a single entity.  If there is an error, add a TimelinePutError to the
-   * given response.
-   */
-  private void put(TimelineEntity entity, TimelinePutResponse response) {
-    LockMap.CountingReentrantLock<EntityIdentifier> lock =
-        writeLocks.getLock(new EntityIdentifier(entity.getEntityId(),
-            entity.getEntityType()));
-    lock.lock();
-    WriteBatch writeBatch = null;
-    List<EntityIdentifier> relatedEntitiesWithoutStartTimes =
-        new ArrayList<EntityIdentifier>();
-    byte[] revStartTime = null;
-    try {
-      writeBatch = db.createWriteBatch();
-      List<TimelineEvent> events = entity.getEvents();
-      // look up the start time for the entity
-      StartAndInsertTime startAndInsertTime = getAndSetStartTime(
-          entity.getEntityId(), entity.getEntityType(),
-          entity.getStartTime(), events);
-      if (startAndInsertTime == null) {
-        // if no start time is found, add an error and return
-        TimelinePutError error = new TimelinePutError();
-        error.setEntityId(entity.getEntityId());
-        error.setEntityType(entity.getEntityType());
-        error.setErrorCode(TimelinePutError.NO_START_TIME);
-        response.addError(error);
-        return;
-      }
-      revStartTime = writeReverseOrderedLong(startAndInsertTime
-          .startTime);
-
-      Map<String, Set<Object>> primaryFilters = entity.getPrimaryFilters();
-
-      // write entity marker
-      byte[] markerKey = createEntityMarkerKey(entity.getEntityId(),
-          entity.getEntityType(), revStartTime);
-      byte[] markerValue = writeReverseOrderedLong(startAndInsertTime
-          .insertTime);
-      writeBatch.put(markerKey, markerValue);
-      writePrimaryFilterEntries(writeBatch, primaryFilters, markerKey,
-          markerValue);
-
-      // write event entries
-      if (events != null && !events.isEmpty()) {
-        for (TimelineEvent event : events) {
-          byte[] revts = writeReverseOrderedLong(event.getTimestamp());
-          byte[] key = createEntityEventKey(entity.getEntityId(),
-              entity.getEntityType(), revStartTime, revts,
-              event.getEventType());
-          byte[] value = GenericObjectMapper.write(event.getEventInfo());
-          writeBatch.put(key, value);
-          writePrimaryFilterEntries(writeBatch, primaryFilters, key, value);
-        }
-      }
-
-      // write related entity entries
-      Map<String, Set<String>> relatedEntities =
-          entity.getRelatedEntities();
-      if (relatedEntities != null && !relatedEntities.isEmpty()) {
-        for (Entry<String, Set<String>> relatedEntityList :
-            relatedEntities.entrySet()) {
-          String relatedEntityType = relatedEntityList.getKey();
-          for (String relatedEntityId : relatedEntityList.getValue()) {
-            // invisible "reverse" entries (entity -> related entity)
-            byte[] key = createReverseRelatedEntityKey(entity.getEntityId(),
-                entity.getEntityType(), revStartTime, relatedEntityId,
-                relatedEntityType);
-            writeBatch.put(key, EMPTY_BYTES);
-            // look up start time of related entity
-            byte[] relatedEntityStartTime = getStartTime(relatedEntityId,
-                relatedEntityType);
-            // delay writing the related entity if no start time is found
-            if (relatedEntityStartTime == null) {
-              relatedEntitiesWithoutStartTimes.add(
-                  new EntityIdentifier(relatedEntityId, relatedEntityType));
-              continue;
-            }
-            // write "forward" entry (related entity -> entity)
-            key = createRelatedEntityKey(relatedEntityId,
-                relatedEntityType, relatedEntityStartTime,
-                entity.getEntityId(), entity.getEntityType());
-            writeBatch.put(key, EMPTY_BYTES);
-          }
-        }
-      }
-
-      // write primary filter entries
-      if (primaryFilters != null && !primaryFilters.isEmpty()) {
-        for (Entry<String, Set<Object>> primaryFilter :
-            primaryFilters.entrySet()) {
-          for (Object primaryFilterValue : primaryFilter.getValue()) {
-            byte[] key = createPrimaryFilterKey(entity.getEntityId(),
-                entity.getEntityType(), revStartTime,
-                primaryFilter.getKey(), primaryFilterValue);
-            writeBatch.put(key, EMPTY_BYTES);
-            writePrimaryFilterEntries(writeBatch, primaryFilters, key,
-                EMPTY_BYTES);
-          }
-        }
-      }
-
-      // write other info entries
-      Map<String, Object> otherInfo = entity.getOtherInfo();
-      if (otherInfo != null && !otherInfo.isEmpty()) {
-        for (Entry<String, Object> i : otherInfo.entrySet()) {
-          byte[] key = createOtherInfoKey(entity.getEntityId(),
-              entity.getEntityType(), revStartTime, i.getKey());
-          byte[] value = GenericObjectMapper.write(i.getValue());
-          writeBatch.put(key, value);
-          writePrimaryFilterEntries(writeBatch, primaryFilters, key, value);
-        }
-      }
-      db.write(writeBatch);
-    } catch (IOException e) {
-      LOG.error("Error putting entity " + entity.getEntityId() +
-          " of type " + entity.getEntityType(), e);
-      TimelinePutError error = new TimelinePutError();
-      error.setEntityId(entity.getEntityId());
-      error.setEntityType(entity.getEntityType());
-      error.setErrorCode(TimelinePutError.IO_EXCEPTION);
-      response.addError(error);
-    } finally {
-      lock.unlock();
-      writeLocks.returnLock(lock);
-      IOUtils.cleanup(LOG, writeBatch);
-    }
-
-    for (EntityIdentifier relatedEntity : relatedEntitiesWithoutStartTimes) {
-      lock = writeLocks.getLock(relatedEntity);
-      lock.lock();
-      try {
-        StartAndInsertTime relatedEntityStartAndInsertTime =
-            getAndSetStartTime(relatedEntity.getId(), relatedEntity.getType(),
-            readReverseOrderedLong(revStartTime, 0), null);
-        if (relatedEntityStartAndInsertTime == null) {
-          throw new IOException("Error setting start time for related entity");
-        }
-        byte[] relatedEntityStartTime = writeReverseOrderedLong(
-            relatedEntityStartAndInsertTime.startTime);
-        db.put(createRelatedEntityKey(relatedEntity.getId(),
-            relatedEntity.getType(), relatedEntityStartTime,
-            entity.getEntityId(), entity.getEntityType()), EMPTY_BYTES);
-        db.put(createEntityMarkerKey(relatedEntity.getId(),
-            relatedEntity.getType(), relatedEntityStartTime),
-            writeReverseOrderedLong(relatedEntityStartAndInsertTime
-                .insertTime));
-      } catch (IOException e) {
-        LOG.error("Error putting related entity " + relatedEntity.getId() +
-            " of type " + relatedEntity.getType() + " for entity " +
-            entity.getEntityId() + " of type " + entity.getEntityType(), e);
-        TimelinePutError error = new TimelinePutError();
-        error.setEntityId(entity.getEntityId());
-        error.setEntityType(entity.getEntityType());
-        error.setErrorCode(TimelinePutError.IO_EXCEPTION);
-        response.addError(error);
-      } finally {
-        lock.unlock();
-        writeLocks.returnLock(lock);
-      }
-    }
-  }
-
-  /**
-   * For a given key / value pair that has been written to the db,
-   * write additional entries to the db for each primary filter.
-   */
-  private static void writePrimaryFilterEntries(WriteBatch writeBatch,
-      Map<String, Set<Object>> primaryFilters, byte[] key, byte[] value)
-      throws IOException {
-    if (primaryFilters != null && !primaryFilters.isEmpty()) {
-      for (Entry<String, Set<Object>> pf : primaryFilters.entrySet()) {
-        for (Object pfval : pf.getValue()) {
-          writeBatch.put(addPrimaryFilterToKey(pf.getKey(), pfval,
-              key), value);
-        }
-      }
-    }
-  }
-
-  @Override
-  public TimelinePutResponse put(TimelineEntities entities) {
-    try {
-      deleteLock.readLock().lock();
-      TimelinePutResponse response = new TimelinePutResponse();
-      for (TimelineEntity entity : entities.getEntities()) {
-        put(entity, response);
-      }
-      return response;
-    } finally {
-      deleteLock.readLock().unlock();
-    }
-  }
-
-  /**
-   * Get the unique start time for a given entity as a byte array that sorts
-   * the timestamps in reverse order (see {@link
-   * GenericObjectMapper#writeReverseOrderedLong(long)}).
-   *
-   * @param entityId The id of the entity
-   * @param entityType The type of the entity
-   * @return A byte array, null if not found
-   * @throws IOException
-   */
-  private byte[] getStartTime(String entityId, String entityType)
-      throws IOException {
-    Long l = getStartTimeLong(entityId, entityType);
-    return l == null ? null : writeReverseOrderedLong(l);
-  }
-
-  /**
-   * Get the unique start time for a given entity as a Long.
-   *
-   * @param entityId The id of the entity
-   * @param entityType The type of the entity
-   * @return A Long, null if not found
-   * @throws IOException
-   */
-  private Long getStartTimeLong(String entityId, String entityType)
-      throws IOException {
-    EntityIdentifier entity = new EntityIdentifier(entityId, entityType);
-    // start time is not provided, so try to look it up
-    if (startTimeReadCache.containsKey(entity)) {
-      // found the start time in the cache
-      return startTimeReadCache.get(entity);
-    } else {
-      // try to look up the start time in the db
-      byte[] b = createStartTimeLookupKey(entity.getId(), entity.getType());
-      byte[] v = db.get(b);
-      if (v == null) {
-        // did not find the start time in the db
-        return null;
-      } else {
-        // found the start time in the db
-        Long l = readReverseOrderedLong(v, 0);
-        startTimeReadCache.put(entity, l);
-        return l;
-      }
-    }
-  }
-
-  /**
-   * Get the unique start time for a given entity as a byte array that sorts
-   * the timestamps in reverse order (see {@link
-   * GenericObjectMapper#writeReverseOrderedLong(long)}). If the start time
-   * doesn't exist, set it based on the information provided. Should only be
-   * called when a lock has been obtained on the entity.
-   *
-   * @param entityId The id of the entity
-   * @param entityType The type of the entity
-   * @param startTime The start time of the entity, or null
-   * @param events A list of events for the entity, or null
-   * @return A StartAndInsertTime
-   * @throws IOException
-   */
-  private StartAndInsertTime getAndSetStartTime(String entityId,
-      String entityType, Long startTime, List<TimelineEvent> events)
-      throws IOException {
-    EntityIdentifier entity = new EntityIdentifier(entityId, entityType);
-    if (startTime == null) {
-      // start time is not provided, so try to look it up
-      if (startTimeWriteCache.containsKey(entity)) {
-        // found the start time in the cache
-        return startTimeWriteCache.get(entity);
-      } else {
-        if (events != null) {
-          // prepare a start time from events in case it is needed
-          Long min = Long.MAX_VALUE;
-          for (TimelineEvent e : events) {
-            if (min > e.getTimestamp()) {
-              min = e.getTimestamp();
-            }
-          }
-          startTime = min;
-        }
-        return checkStartTimeInDb(entity, startTime);
-      }
-    } else {
-      // start time is provided
-      if (startTimeWriteCache.containsKey(entity)) {
-        // always use start time from cache if it exists
-        return startTimeWriteCache.get(entity);
-      } else {
-        // check the provided start time matches the db
-        return checkStartTimeInDb(entity, startTime);
-      }
-    }
-  }
-
-  /**
-   * Checks db for start time and returns it if it exists.  If it doesn't
-   * exist, writes the suggested start time (if it is not null).  This is
-   * only called when the start time is not found in the cache,
-   * so it adds it back into the cache if it is found. Should only be called
-   * when a lock has been obtained on the entity.
-   */
-  private StartAndInsertTime checkStartTimeInDb(EntityIdentifier entity,
-      Long suggestedStartTime) throws IOException {
-    StartAndInsertTime startAndInsertTime = null;
-    // create lookup key for start time
-    byte[] b = createStartTimeLookupKey(entity.getId(), entity.getType());
-    // retrieve value for key
-    byte[] v = db.get(b);
-    if (v == null) {
-      // start time doesn't exist in db
-      if (suggestedStartTime == null) {
-        return null;
-      }
-      startAndInsertTime = new StartAndInsertTime(suggestedStartTime,
-          System.currentTimeMillis());
-
-      // write suggested start time
-      v = new byte[16];
-      writeReverseOrderedLong(suggestedStartTime, v, 0);
-      writeReverseOrderedLong(startAndInsertTime.insertTime, v, 8);
-      WriteOptions writeOptions = new WriteOptions();
-      writeOptions.sync(true);
-      db.put(b, v, writeOptions);
-    } else {
-      // found start time in db, so ignore suggested start time
-      startAndInsertTime = new StartAndInsertTime(readReverseOrderedLong(v, 0),
-          readReverseOrderedLong(v, 8));
-    }
-    startTimeWriteCache.put(entity, startAndInsertTime);
-    startTimeReadCache.put(entity, startAndInsertTime.startTime);
-    return startAndInsertTime;
-  }
-
-  /**
-   * Creates a key for looking up the start time of a given entity,
-   * of the form START_TIME_LOOKUP_PREFIX + entity type + entity id.
-   */
-  private static byte[] createStartTimeLookupKey(String entityId,
-      String entityType) throws IOException {
-    return KeyBuilder.newInstance().add(START_TIME_LOOKUP_PREFIX)
-        .add(entityType).add(entityId).getBytes();
-  }
-
-  /**
-   * Creates an entity marker, serializing ENTITY_ENTRY_PREFIX + entity type +
-   * revstarttime + entity id.
-   */
-  private static byte[] createEntityMarkerKey(String entityId,
-      String entityType, byte[] revStartTime) throws IOException {
-    return KeyBuilder.newInstance().add(ENTITY_ENTRY_PREFIX)
-        .add(entityType).add(revStartTime).add(entityId).getBytesForLookup();
-  }
-
-  /**
-   * Creates an index entry for the given key of the form
-   * INDEXED_ENTRY_PREFIX + primaryfiltername + primaryfiltervalue + key.
-   */
-  private static byte[] addPrimaryFilterToKey(String primaryFilterName,
-      Object primaryFilterValue, byte[] key) throws IOException {
-    return KeyBuilder.newInstance().add(INDEXED_ENTRY_PREFIX)
-        .add(primaryFilterName)
-        .add(GenericObjectMapper.write(primaryFilterValue), true).add(key)
-        .getBytes();
-  }
-
-  /**
-   * Creates an event key, serializing ENTITY_ENTRY_PREFIX + entity type +
-   * revstarttime + entity id + EVENTS_COLUMN + reveventtimestamp + event type.
-   */
-  private static byte[] createEntityEventKey(String entityId,
-      String entityType, byte[] revStartTime, byte[] revEventTimestamp,
-      String eventType) throws IOException {
-    return KeyBuilder.newInstance().add(ENTITY_ENTRY_PREFIX)
-        .add(entityType).add(revStartTime).add(entityId).add(EVENTS_COLUMN)
-        .add(revEventTimestamp).add(eventType).getBytes();
-  }
-
-  /**
-   * Creates an event object from the given key, offset, and value.  If the
-   * event type is not contained in the specified set of event types,
-   * returns null.
-   */
-  private static TimelineEvent getEntityEvent(Set<String> eventTypes,
-      byte[] key, int offset, byte[] value) throws IOException {
-    KeyParser kp = new KeyParser(key, offset);
-    long ts = kp.getNextLong();
-    String tstype = kp.getNextString();
-    if (eventTypes == null || eventTypes.contains(tstype)) {
-      TimelineEvent event = new TimelineEvent();
-      event.setTimestamp(ts);
-      event.setEventType(tstype);
-      Object o = GenericObjectMapper.read(value);
-      if (o == null) {
-        event.setEventInfo(null);
-      } else if (o instanceof Map) {
-        @SuppressWarnings("unchecked")
-        Map<String, Object> m = (Map<String, Object>) o;
-        event.setEventInfo(m);
-      } else {
-        throw new IOException("Couldn't deserialize event info map");
-      }
-      return event;
-    }
-    return null;
-  }
-
-  /**
-   * Creates a primary filter key, serializing ENTITY_ENTRY_PREFIX +
-   * entity type + revstarttime + entity id + PRIMARY_FILTERS_COLUMN + name +
-   * value.
-   */
-  private static byte[] createPrimaryFilterKey(String entityId,
-      String entityType, byte[] revStartTime, String name, Object value)
-      throws IOException {
-    return KeyBuilder.newInstance().add(ENTITY_ENTRY_PREFIX).add(entityType)
-        .add(revStartTime).add(entityId).add(PRIMARY_FILTERS_COLUMN).add(name)
-        .add(GenericObjectMapper.write(value)).getBytes();
-  }
-
-  /**
-   * Parses the primary filter from the given key at the given offset and
-   * adds it to the given entity.
-   */
-  private static void addPrimaryFilter(TimelineEntity entity, byte[] key,
-      int offset) throws IOException {
-    KeyParser kp = new KeyParser(key, offset);
-    String name = kp.getNextString();
-    Object value = GenericObjectMapper.read(key, kp.getOffset());
-    entity.addPrimaryFilter(name, value);
-  }
-
-  /**
-   * Creates an other info key, serializing ENTITY_ENTRY_PREFIX + entity type +
-   * revstarttime + entity id + OTHER_INFO_COLUMN + name.
-   */
-  private static byte[] createOtherInfoKey(String entityId, String entityType,
-      byte[] revStartTime, String name) throws IOException {
-    return KeyBuilder.newInstance().add(ENTITY_ENTRY_PREFIX).add(entityType)
-        .add(revStartTime).add(entityId).add(OTHER_INFO_COLUMN).add(name)
-        .getBytes();
-  }
-
-  /**
-   * Creates a string representation of the byte array from the given offset
-   * to the end of the array (for parsing other info keys).
-   */
-  private static String parseRemainingKey(byte[] b, int offset) {
-    return new String(b, offset, b.length - offset);
-  }
-
-  /**
-   * Creates a related entity key, serializing ENTITY_ENTRY_PREFIX +
-   * entity type + revstarttime + entity id + RELATED_ENTITIES_COLUMN +
-   * relatedentity type + relatedentity id.
-   */
-  private static byte[] createRelatedEntityKey(String entityId,
-      String entityType, byte[] revStartTime, String relatedEntityId,
-      String relatedEntityType) throws IOException {
-    return KeyBuilder.newInstance().add(ENTITY_ENTRY_PREFIX).add(entityType)
-        .add(revStartTime).add(entityId).add(RELATED_ENTITIES_COLUMN)
-        .add(relatedEntityType).add(relatedEntityId).getBytes();
-  }
-
-  /**
-   * Parses the related entity from the given key at the given offset and
-   * adds it to the given entity.
-   */
-  private static void addRelatedEntity(TimelineEntity entity, byte[] key,
-      int offset) throws IOException {
-    KeyParser kp = new KeyParser(key, offset);
-    String type = kp.getNextString();
-    String id = kp.getNextString();
-    entity.addRelatedEntity(type, id);
-  }
-
-  /**
-   * Creates a reverse related entity key, serializing ENTITY_ENTRY_PREFIX +
-   * entity type + revstarttime + entity id +
-   * INVISIBLE_REVERSE_RELATED_ENTITIES_COLUMN +
-   * relatedentity type + relatedentity id.
-   */
-  private static byte[] createReverseRelatedEntityKey(String entityId,
-      String entityType, byte[] revStartTime, String relatedEntityId,
-      String relatedEntityType) throws IOException {
-    return KeyBuilder.newInstance().add(ENTITY_ENTRY_PREFIX).add(entityType)
-        .add(revStartTime).add(entityId)
-        .add(INVISIBLE_REVERSE_RELATED_ENTITIES_COLUMN)
-        .add(relatedEntityType).add(relatedEntityId).getBytes();
-  }
-
-  /**
-   * Clears the cache to test reloading start times from leveldb (only for
-   * testing).
-   */
-  @VisibleForTesting
-  void clearStartTimeCache() {
-    startTimeWriteCache.clear();
-    startTimeReadCache.clear();
-  }
-
-  @VisibleForTesting
-  static int getStartTimeReadCacheSize(Configuration conf) {
-    return conf.getInt(
-        YarnConfiguration.TIMELINE_SERVICE_LEVELDB_START_TIME_READ_CACHE_SIZE,
-        YarnConfiguration.
-            DEFAULT_TIMELINE_SERVICE_LEVELDB_START_TIME_READ_CACHE_SIZE);
-  }
-
-  @VisibleForTesting
-  static int getStartTimeWriteCacheSize(Configuration conf) {
-    return conf.getInt(
-        YarnConfiguration.TIMELINE_SERVICE_LEVELDB_START_TIME_WRITE_CACHE_SIZE,
-        YarnConfiguration.
-            DEFAULT_TIMELINE_SERVICE_LEVELDB_START_TIME_WRITE_CACHE_SIZE);
-  }
-
-  // warning is suppressed to prevent eclipse from noting unclosed resource
-  @SuppressWarnings("resource")
-  @VisibleForTesting
-  List<String> getEntityTypes() throws IOException {
-    DBIterator iterator = null;
-    try {
-      iterator = getDbIterator(false);
-      List<String> entityTypes = new ArrayList<String>();
-      iterator.seek(ENTITY_ENTRY_PREFIX);
-      while (iterator.hasNext()) {
-        byte[] key = iterator.peekNext().getKey();
-        if (key[0] != ENTITY_ENTRY_PREFIX[0]) {
-          break;
-        }
-        KeyParser kp = new KeyParser(key,
-            ENTITY_ENTRY_PREFIX.length);
-        String entityType = kp.getNextString();
-        entityTypes.add(entityType);
-        byte[] lookupKey = KeyBuilder.newInstance().add(ENTITY_ENTRY_PREFIX)
-            .add(entityType).getBytesForLookup();
-        if (lookupKey[lookupKey.length - 1] != 0x0) {
-          throw new IOException("Found unexpected end byte in lookup key");
-        }
-        lookupKey[lookupKey.length - 1] = 0x1;
-        iterator.seek(lookupKey);
-      }
-      return entityTypes;
-    } finally {
-      IOUtils.cleanup(LOG, iterator);
-    }
-  }
-
-  /**
-   * Finds all keys in the db that have a given prefix and deletes them on
-   * the given write batch.
-   */
-  private void deleteKeysWithPrefix(WriteBatch writeBatch, byte[] prefix,
-      DBIterator iterator) {
-    for (iterator.seek(prefix); iterator.hasNext(); iterator.next()) {
-      byte[] key = iterator.peekNext().getKey();
-      if (!prefixMatches(prefix, prefix.length, key)) {
-        break;
-      }
-      writeBatch.delete(key);
-    }
-  }
-
-  @VisibleForTesting
-  boolean deleteNextEntity(String entityType, byte[] reverseTimestamp,
-      DBIterator iterator, DBIterator pfIterator, boolean seeked)
-      throws IOException {
-    WriteBatch writeBatch = null;
-    try {
-      KeyBuilder kb = KeyBuilder.newInstance().add(ENTITY_ENTRY_PREFIX)
-          .add(entityType);
-      byte[] typePrefix = kb.getBytesForLookup();
-      kb.add(reverseTimestamp);
-      if (!seeked) {
-        iterator.seek(kb.getBytesForLookup());
-      }
-      if (!iterator.hasNext()) {
-        return false;
-      }
-      byte[] entityKey = iterator.peekNext().getKey();
-      if (!prefixMatches(typePrefix, typePrefix.length, entityKey)) {
-        return false;
-      }
-
-      // read the start time and entity id from the current key
-      KeyParser kp = new KeyParser(entityKey, typePrefix.length + 8);
-      String entityId = kp.getNextString();
-      int prefixlen = kp.getOffset();
-      byte[] deletePrefix = new byte[prefixlen];
-      System.arraycopy(entityKey, 0, deletePrefix, 0, prefixlen);
-
-      writeBatch = db.createWriteBatch();
-
-      if (LOG.isDebugEnabled()) {
-        LOG.debug("Deleting entity type:" + entityType + " id:" + entityId);
-      }
-      // remove start time from cache and db
-      writeBatch.delete(createStartTimeLookupKey(entityId, entityType));
-      EntityIdentifier entityIdentifier =
-          new EntityIdentifier(entityId, entityType);
-      startTimeReadCache.remove(entityIdentifier);
-      startTimeWriteCache.remove(entityIdentifier);
-
-      // delete current entity
-      for (; iterator.hasNext(); iterator.next()) {
-        byte[] key = iterator.peekNext().getKey();
-        if (!prefixMatches(entityKey, prefixlen, key)) {
-          break;
-        }
-        writeBatch.delete(key);
-
-        if (key.length == prefixlen) {
-          continue;
-        }
-        if (key[prefixlen] == PRIMARY_FILTERS_COLUMN[0]) {
-          kp = new KeyParser(key,
-              prefixlen + PRIMARY_FILTERS_COLUMN.length);
-          String name = kp.getNextString();
-          Object value = GenericObjectMapper.read(key, kp.getOffset());
-          deleteKeysWithPrefix(writeBatch, addPrimaryFilterToKey(name, value,
-              deletePrefix), pfIterator);
-          if (LOG.isDebugEnabled()) {
-            LOG.debug("Deleting entity type:" + entityType + " id:" +
-                entityId + " primary filter entry " + name + " " +
-                value);
-          }
-        } else if (key[prefixlen] == RELATED_ENTITIES_COLUMN[0]) {
-          kp = new KeyParser(key,
-              prefixlen + RELATED_ENTITIES_COLUMN.length);
-          String type = kp.getNextString();
-          String id = kp.getNextString();
-          byte[] relatedEntityStartTime = getStartTime(id, type);
-          if (relatedEntityStartTime == null) {
-            LOG.warn("Found no start time for " +
-                "related entity " + id + " of type " + type + " while " +
-                "deleting " + entityId + " of type " + entityType);
-            continue;
-          }
-          writeBatch.delete(createReverseRelatedEntityKey(id, type,
-              relatedEntityStartTime, entityId, entityType));
-          if (LOG.isDebugEnabled()) {
-            LOG.debug("Deleting entity type:" + entityType + " id:" +
-                entityId + " from invisible reverse related entity " +
-                "entry of type:" + type + " id:" + id);
-          }
-        } else if (key[prefixlen] ==
-            INVISIBLE_REVERSE_RELATED_ENTITIES_COLUMN[0]) {
-          kp = new KeyParser(key, prefixlen +
-              INVISIBLE_REVERSE_RELATED_ENTITIES_COLUMN.length);
-          String type = kp.getNextString();
-          String id = kp.getNextString();
-          byte[] relatedEntityStartTime = getStartTime(id, type);
-          if (relatedEntityStartTime == null) {
-            LOG.warn("Found no start time for reverse " +
-                "related entity " + id + " of type " + type + " while " +
-                "deleting " + entityId + " of type " + entityType);
-            continue;
-          }
-          writeBatch.delete(createRelatedEntityKey(id, type,
-              relatedEntityStartTime, entityId, entityType));
-          if (LOG.isDebugEnabled()) {
-            LOG.debug("Deleting entity type:" + entityType + " id:" +
-                entityId + " from related entity entry of type:" +
-                type + " id:" + id);
-          }
-        }
-      }
-      WriteOptions writeOptions = new WriteOptions();
-      writeOptions.sync(true);
-      db.write(writeBatch, writeOptions);
-      return true;
-    } finally {
-      IOUtils.cleanup(LOG, writeBatch);
-    }
-  }
-
-  /**
-   * Discards entities with start timestamp less than or equal to the given
-   * timestamp.
-   */
-  @VisibleForTesting
-  void discardOldEntities(long timestamp)
-      throws IOException, InterruptedException {
-    byte[] reverseTimestamp = writeReverseOrderedLong(timestamp);
-    long totalCount = 0;
-    long t1 = System.currentTimeMillis();
-    try {
-      List<String> entityTypes = getEntityTypes();
-      for (String entityType : entityTypes) {
-        DBIterator iterator = null;
-        DBIterator pfIterator = null;
-        long typeCount = 0;
-        try {
-          deleteLock.writeLock().lock();
-          iterator = getDbIterator(false);
-          pfIterator = getDbIterator(false);
-
-          if (deletionThread != null && deletionThread.isInterrupted()) {
-            throw new InterruptedException();
-          }
-          boolean seeked = false;
-          while (deleteNextEntity(entityType, reverseTimestamp, iterator,
-              pfIterator, seeked)) {
-            typeCount++;
-            totalCount++;
-            seeked = true;
-            if (deletionThread != null && deletionThread.isInterrupted()) {
-              throw new InterruptedException();
-            }
-          }
-        } catch (IOException e) {
-          LOG.error("Got IOException while deleting entities for type " +
-              entityType + ", continuing to next type", e);
-        } finally {
-          IOUtils.cleanup(LOG, iterator, pfIterator);
-          deleteLock.writeLock().unlock();
-          if (typeCount > 0) {
-            LOG.info("Deleted " + typeCount + " entities of type " +
-                entityType);
-          }
-        }
-      }
-    } finally {
-      long t2 = System.currentTimeMillis();
-      LOG.info("Discarded " + totalCount + " entities for timestamp " +
-          timestamp + " and earlier in " + (t2 - t1) / 1000.0 + " seconds");
-    }
-  }
-
-  @VisibleForTesting
-  DBIterator getDbIterator(boolean fillCache) {
-    ReadOptions readOptions = new ReadOptions();
-    readOptions.fillCache(fillCache);
-    return db.iterator(readOptions);
-  }
-}
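The deletion path above uses a standard LevelDB idiom: seek an iterator to a key prefix, stage a delete for every key still under that prefix, then commit the staged deletes as one synchronous batch. A minimal self-contained sketch of that idiom against the org.iq80.leveldb API follows; the database handle, the prefix, and the class wiring are illustrative assumptions, not the store's actual key schema.

    import org.iq80.leveldb.DB;
    import org.iq80.leveldb.DBIterator;
    import org.iq80.leveldb.WriteBatch;
    import org.iq80.leveldb.WriteOptions;

    import java.io.IOException;
    import java.util.Arrays;

    public final class PrefixDeleteSketch {

      // True when the first prefix.length bytes of key equal the prefix.
      static boolean prefixMatches(byte[] prefix, byte[] key) {
        return key.length >= prefix.length
            && Arrays.equals(Arrays.copyOfRange(key, 0, prefix.length), prefix);
      }

      static void deleteKeysWithPrefix(DB db, byte[] prefix) throws IOException {
        try (DBIterator it = db.iterator();
             WriteBatch batch = db.createWriteBatch()) {
          it.seek(prefix);                 // jump to the first candidate key
          for (; it.hasNext(); it.next()) {
            byte[] key = it.peekNext().getKey();
            if (!prefixMatches(prefix, key)) {
              break;                       // past the prefix range, stop scanning
            }
            batch.delete(key);             // stage the delete; nothing written yet
          }
          // Commit all staged deletes atomically; sync(true) forces the batch
          // to disk before returning, matching the WriteOptions used above.
          db.write(batch, new WriteOptions().sync(true));
        }
      }
    }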
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/timeline/MemoryTimelineStore.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/timeline/MemoryTimelineStore.java
deleted file mode 100644
index 86ac1f8..0000000
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/timeline/MemoryTimelineStore.java
+++ /dev/null
@@ -1,360 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.yarn.server.applicationhistoryservice.timeline;
-
-import java.util.ArrayList;
-import java.util.Arrays;
-import java.util.Collection;
-import java.util.Collections;
-import java.util.EnumSet;
-import java.util.HashMap;
-import java.util.Iterator;
-import java.util.List;
-import java.util.Map;
-import java.util.Map.Entry;
-import java.util.PriorityQueue;
-import java.util.Set;
-import java.util.SortedSet;
-import java.util.TreeSet;
-
-import org.apache.hadoop.classification.InterfaceAudience.Private;
-import org.apache.hadoop.classification.InterfaceStability.Unstable;
-import org.apache.hadoop.service.AbstractService;
-import org.apache.hadoop.yarn.api.records.timeline.TimelineEntities;
-import org.apache.hadoop.yarn.api.records.timeline.TimelineEntity;
-import org.apache.hadoop.yarn.api.records.timeline.TimelineEvent;
-import org.apache.hadoop.yarn.api.records.timeline.TimelineEvents;
-import org.apache.hadoop.yarn.api.records.timeline.TimelinePutResponse;
-import org.apache.hadoop.yarn.api.records.timeline.TimelineEvents.EventsOfOneEntity;
-import org.apache.hadoop.yarn.api.records.timeline.TimelinePutResponse.TimelinePutError;
-
-/**
- * In-memory implementation of {@link TimelineStore}. This
- * implementation is for test purposes only. If users improperly instantiate
- * it, they may end up reading and writing history data in different memory
- * stores.
- *
- */
-@Private
-@Unstable
-public class MemoryTimelineStore
-    extends AbstractService implements TimelineStore {
-
-  private Map<EntityIdentifier, TimelineEntity> entities =
-      new HashMap<EntityIdentifier, TimelineEntity>();
-  private Map<EntityIdentifier, Long> entityInsertTimes =
-      new HashMap<EntityIdentifier, Long>();
-
-  public MemoryTimelineStore() {
-    super(MemoryTimelineStore.class.getName());
-  }
-
-  @Override
-  public TimelineEntities getEntities(String entityType, Long limit,
-      Long windowStart, Long windowEnd, String fromId, Long fromTs,
-      NameValuePair primaryFilter, Collection<NameValuePair> secondaryFilters,
-      EnumSet<Field> fields) {
-    if (limit == null) {
-      limit = DEFAULT_LIMIT;
-    }
-    if (windowStart == null) {
-      windowStart = Long.MIN_VALUE;
-    }
-    if (windowEnd == null) {
-      windowEnd = Long.MAX_VALUE;
-    }
-    if (fields == null) {
-      fields = EnumSet.allOf(Field.class);
-    }
-
-    Iterator<TimelineEntity> entityIterator = null;
-    if (fromId != null) {
-      TimelineEntity firstEntity = entities.get(new EntityIdentifier(fromId,
-          entityType));
-      if (firstEntity == null) {
-        return new TimelineEntities();
-      } else {
-        entityIterator = new TreeSet<TimelineEntity>(entities.values())
-            .tailSet(firstEntity, true).iterator();
-      }
-    }
-    if (entityIterator == null) {
-      entityIterator = new PriorityQueue<TimelineEntity>(entities.values())
-          .iterator();
-    }
-
-    List<TimelineEntity> entitiesSelected = new ArrayList<TimelineEntity>();
-    while (entityIterator.hasNext()) {
-      TimelineEntity entity = entityIterator.next();
-      if (entitiesSelected.size() >= limit) {
-        break;
-      }
-      if (!entity.getEntityType().equals(entityType)) {
-        continue;
-      }
-      if (entity.getStartTime() <= windowStart) {
-        continue;
-      }
-      if (entity.getStartTime() > windowEnd) {
-        continue;
-      }
-      if (fromTs != null && entityInsertTimes.get(new EntityIdentifier(
-          entity.getEntityId(), entity.getEntityType())) > fromTs) {
-        continue;
-      }
-      if (primaryFilter != null &&
-          !matchPrimaryFilter(entity.getPrimaryFilters(), primaryFilter)) {
-        continue;
-      }
-      if (secondaryFilters != null) { // AND logic
-        boolean flag = true;
-        for (NameValuePair secondaryFilter : secondaryFilters) {
-          if (secondaryFilter != null && !matchPrimaryFilter(
-              entity.getPrimaryFilters(), secondaryFilter) &&
-              !matchFilter(entity.getOtherInfo(), secondaryFilter)) {
-            flag = false;
-            break;
-          }
-        }
-        if (!flag) {
-          continue;
-        }
-      }
-      entitiesSelected.add(entity);
-    }
-    List<TimelineEntity> entitiesToReturn = new ArrayList<TimelineEntity>();
-    for (TimelineEntity entitySelected : entitiesSelected) {
-      entitiesToReturn.add(maskFields(entitySelected, fields));
-    }
-    Collections.sort(entitiesToReturn);
-    TimelineEntities entitiesWrapper = new TimelineEntities();
-    entitiesWrapper.setEntities(entitiesToReturn);
-    return entitiesWrapper;
-  }
-
-  @Override
-  public TimelineEntity getEntity(String entityId, String entityType,
-      EnumSet<Field> fieldsToRetrieve) {
-    if (fieldsToRetrieve == null) {
-      fieldsToRetrieve = EnumSet.allOf(Field.class);
-    }
-    TimelineEntity entity = entities.get(new EntityIdentifier(entityId, entityType));
-    if (entity == null) {
-      return null;
-    } else {
-      return maskFields(entity, fieldsToRetrieve);
-    }
-  }
-
-  @Override
-  public TimelineEvents getEntityTimelines(String entityType,
-      SortedSet<String> entityIds, Long limit, Long windowStart,
-      Long windowEnd,
-      Set<String> eventTypes) {
-    TimelineEvents allEvents = new TimelineEvents();
-    if (entityIds == null) {
-      return allEvents;
-    }
-    if (limit == null) {
-      limit = DEFAULT_LIMIT;
-    }
-    if (windowStart == null) {
-      windowStart = Long.MIN_VALUE;
-    }
-    if (windowEnd == null) {
-      windowEnd = Long.MAX_VALUE;
-    }
-    for (String entityId : entityIds) {
-      EntityIdentifier entityID = new EntityIdentifier(entityId, entityType);
-      TimelineEntity entity = entities.get(entityID);
-      if (entity == null) {
-        continue;
-      }
-      EventsOfOneEntity events = new EventsOfOneEntity();
-      events.setEntityId(entityId);
-      events.setEntityType(entityType);
-      for (TimelineEvent event : entity.getEvents()) {
-        if (events.getEvents().size() >= limit) {
-          break;
-        }
-        if (event.getTimestamp() <= windowStart) {
-          continue;
-        }
-        if (event.getTimestamp() > windowEnd) {
-          continue;
-        }
-        if (eventTypes != null && !eventTypes.contains(event.getEventType())) {
-          continue;
-        }
-        events.addEvent(event);
-      }
-      allEvents.addEvent(events);
-    }
-    return allEvents;
-  }
-
-  @Override
-  public TimelinePutResponse put(TimelineEntities data) {
-    TimelinePutResponse response = new TimelinePutResponse();
-    for (TimelineEntity entity : data.getEntities()) {
-      EntityIdentifier entityId =
-          new EntityIdentifier(entity.getEntityId(), entity.getEntityType());
-      // store entity info in memory
-      TimelineEntity existingEntity = entities.get(entityId);
-      if (existingEntity == null) {
-        existingEntity = new TimelineEntity();
-        existingEntity.setEntityId(entity.getEntityId());
-        existingEntity.setEntityType(entity.getEntityType());
-        existingEntity.setStartTime(entity.getStartTime());
-        entities.put(entityId, existingEntity);
-        entityInsertTimes.put(entityId, System.currentTimeMillis());
-      }
-      if (entity.getEvents() != null) {
-        if (existingEntity.getEvents() == null) {
-          existingEntity.setEvents(entity.getEvents());
-        } else {
-          existingEntity.addEvents(entity.getEvents());
-        }
-        Collections.sort(existingEntity.getEvents());
-      }
-      // check startTime
-      if (existingEntity.getStartTime() == null) {
-        if (existingEntity.getEvents() == null
-            || existingEntity.getEvents().isEmpty()) {
-          TimelinePutError error = new TimelinePutError();
-          error.setEntityId(entityId.getId());
-          error.setEntityType(entityId.getType());
-          error.setErrorCode(TimelinePutError.NO_START_TIME);
-          response.addError(error);
-          entities.remove(entityId);
-          entityInsertTimes.remove(entityId);
-          continue;
-        } else {
-          Long min = Long.MAX_VALUE;
-          for (TimelineEvent e : entity.getEvents()) {
-            if (min > e.getTimestamp()) {
-              min = e.getTimestamp();
-            }
-          }
-          existingEntity.setStartTime(min);
-        }
-      }
-      if (entity.getPrimaryFilters() != null) {
-        if (existingEntity.getPrimaryFilters() == null) {
-          existingEntity.setPrimaryFilters(new HashMap<String, Set<Object>>());
-        }
-        for (Entry<String, Set<Object>> pf :
-            entity.getPrimaryFilters().entrySet()) {
-          for (Object pfo : pf.getValue()) {
-            existingEntity.addPrimaryFilter(pf.getKey(), maybeConvert(pfo));
-          }
-        }
-      }
-      if (entity.getOtherInfo() != null) {
-        if (existingEntity.getOtherInfo() == null) {
-          existingEntity.setOtherInfo(new HashMap<String, Object>());
-        }
-        for (Entry<String, Object> info : entity.getOtherInfo().entrySet()) {
-          existingEntity.addOtherInfo(info.getKey(),
-              maybeConvert(info.getValue()));
-        }
-      }
-      // relate it to other entities
-      if (entity.getRelatedEntities() == null) {
-        continue;
-      }
-      for (Map.Entry<String, Set<String>> partRelatedEntities : entity
-          .getRelatedEntities().entrySet()) {
-        if (partRelatedEntities == null) {
-          continue;
-        }
-        for (String idStr : partRelatedEntities.getValue()) {
-          EntityIdentifier relatedEntityId =
-              new EntityIdentifier(idStr, partRelatedEntities.getKey());
-          TimelineEntity relatedEntity = entities.get(relatedEntityId);
-          if (relatedEntity != null) {
-            relatedEntity.addRelatedEntity(
-                existingEntity.getEntityType(), existingEntity.getEntityId());
-          } else {
-            relatedEntity = new TimelineEntity();
-            relatedEntity.setEntityId(relatedEntityId.getId());
-            relatedEntity.setEntityType(relatedEntityId.getType());
-            relatedEntity.setStartTime(existingEntity.getStartTime());
-            relatedEntity.addRelatedEntity(existingEntity.getEntityType(),
-                existingEntity.getEntityId());
-            entities.put(relatedEntityId, relatedEntity);
-            entityInsertTimes.put(relatedEntityId, System.currentTimeMillis());
-          }
-        }
-      }
-    }
-    return response;
-  }
-
-  private static TimelineEntity maskFields(
-      TimelineEntity entity, EnumSet<Field> fields) {
-    // Conceal the fields that are not going to be exposed
-    TimelineEntity entityToReturn = new TimelineEntity();
-    entityToReturn.setEntityId(entity.getEntityId());
-    entityToReturn.setEntityType(entity.getEntityType());
-    entityToReturn.setStartTime(entity.getStartTime());
-    entityToReturn.setEvents(fields.contains(Field.EVENTS) ?
-        entity.getEvents() : fields.contains(Field.LAST_EVENT_ONLY) ?
-            Arrays.asList(entity.getEvents().get(0)) : null);
-    entityToReturn.setRelatedEntities(fields.contains(Field.RELATED_ENTITIES) ?
-        entity.getRelatedEntities() : null);
-    entityToReturn.setPrimaryFilters(fields.contains(Field.PRIMARY_FILTERS) ?
-        entity.getPrimaryFilters() : null);
-    entityToReturn.setOtherInfo(fields.contains(Field.OTHER_INFO) ?
-        entity.getOtherInfo() : null);
-    return entityToReturn;
-  }
-
-  private static boolean matchFilter(Map<String, Object> tags,
-      NameValuePair filter) {
-    Object value = tags.get(filter.getName());
-    if (value == null) { // doesn't have the filter
-      return false;
-    } else if (!value.equals(filter.getValue())) { // doesn't match the filter
-      return false;
-    }
-    return true;
-  }
-
-  private static boolean matchPrimaryFilter(Map<String, Set<Object>> tags,
-      NameValuePair filter) {
-    Set<Object> value = tags.get(filter.getName());
-    if (value == null) { // doesn't have the filter
-      return false;
-    } else {
-      return value.contains(filter.getValue());
-    }
-  }
-
-  private static Object maybeConvert(Object o) {
-    if (o instanceof Long) {
-      Long l = (Long)o;
-      if (l >= Integer.MIN_VALUE && l <= Integer.MAX_VALUE) {
-        return l.intValue();
-      }
-    }
-    return o;
-  }
-
-}
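One subtlety in the store removed above: maybeConvert() narrows any Long that fits in an int down to Integer before storing primary filters and other info. A plausible reason, inferred rather than stated in the patch, is that numbers arriving through JSON deserialization are boxed as Integer when small, and a boxed Long never equals a boxed Integer, so the narrowing keeps matchFilter/matchPrimaryFilter comparisons consistent. A short demonstration of the pitfall:

    // Boxed Long never equals boxed Integer, even for the same numeric value.
    Long asLong = 42L;
    Integer asInt = 42;
    System.out.println(asLong.equals(asInt));                             // false
    System.out.println(Integer.valueOf(asLong.intValue()).equals(asInt)); // true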
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/AHSController.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/AHSController.java
deleted file mode 100644
index 4e00bc8..0000000
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/AHSController.java
+++ /dev/null
@@ -1,55 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.yarn.server.applicationhistoryservice.webapp;
-
-import org.apache.hadoop.yarn.webapp.Controller;
-
-import com.google.inject.Inject;
-
-public class AHSController extends Controller {
-
-  @Inject
-  AHSController(RequestContext ctx) {
-    super(ctx);
-  }
-
-  @Override
-  public void index() {
-    setTitle("Application History");
-  }
-
-  public void app() {
-    render(AppPage.class);
-  }
-
-  public void appattempt() {
-    render(AppAttemptPage.class);
-  }
-
-  public void container() {
-    render(ContainerPage.class);
-  }
-
-  /**
-   * Render the logs page.
-   */
-  public void logs() {
-    render(AHSLogsPage.class);
-  }
-}
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/AHSLogsPage.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/AHSLogsPage.java
deleted file mode 100644
index 8821bc0..0000000
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/AHSLogsPage.java
+++ /dev/null
@@ -1,55 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.hadoop.yarn.server.applicationhistoryservice.webapp;
-
-import static org.apache.hadoop.yarn.webapp.YarnWebParams.CONTAINER_ID;
-import static org.apache.hadoop.yarn.webapp.YarnWebParams.ENTITY_STRING;
-
-import org.apache.hadoop.yarn.webapp.SubView;
-import org.apache.hadoop.yarn.webapp.log.AggregatedLogsBlock;
-
-public class AHSLogsPage extends AHSView {
-  /*
-   * (non-Javadoc)
-   * 
-   * @see
-   * org.apache.hadoop.yarn.server.applicationhistoryservice.webapp.AHSView#
-   * preHead(org.apache.hadoop.yarn.webapp.hamlet.Hamlet.HTML)
-   */
-  @Override
-  protected void preHead(Page.HTML<_> html) {
-    String logEntity = $(ENTITY_STRING);
-    if (logEntity == null || logEntity.isEmpty()) {
-      logEntity = $(CONTAINER_ID);
-    }
-    if (logEntity == null || logEntity.isEmpty()) {
-      logEntity = "UNKNOWN";
-    }
-    commonPreHead(html);
-  }
-
-  /**
-   * The content of this page is the AggregatedLogsBlock.
-   * 
-   * @return AggregatedLogsBlock.class
-   */
-  @Override
-  protected Class<? extends SubView> content() {
-    return AggregatedLogsBlock.class;
-  }
-}
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/AHSView.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/AHSView.java
deleted file mode 100644
index 4baa75d..0000000
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/AHSView.java
+++ /dev/null
@@ -1,90 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.yarn.server.applicationhistoryservice.webapp;
-
-import static org.apache.hadoop.yarn.util.StringHelper.sjoin;
-import static org.apache.hadoop.yarn.webapp.YarnWebParams.APP_STATE;
-import static org.apache.hadoop.yarn.webapp.view.JQueryUI.ACCORDION;
-import static org.apache.hadoop.yarn.webapp.view.JQueryUI.ACCORDION_ID;
-import static org.apache.hadoop.yarn.webapp.view.JQueryUI.DATATABLES;
-import static org.apache.hadoop.yarn.webapp.view.JQueryUI.DATATABLES_ID;
-import static org.apache.hadoop.yarn.webapp.view.JQueryUI.initID;
-import static org.apache.hadoop.yarn.webapp.view.JQueryUI.tableInit;
-
-import org.apache.hadoop.yarn.server.webapp.AppsBlock;
-import org.apache.hadoop.yarn.webapp.SubView;
-import org.apache.hadoop.yarn.webapp.view.TwoColumnLayout;
-
-// Do NOT rename/refactor this to AHSView as it will wreak havoc
-// on Mac OS HFS
-public class AHSView extends TwoColumnLayout {
-  static final int MAX_DISPLAY_ROWS = 100; // direct table rendering
-  static final int MAX_FAST_ROWS = 1000; // inline js array
-
-  @Override
-  protected void preHead(Page.HTML<_> html) {
-    commonPreHead(html);
-    set(DATATABLES_ID, "apps");
-    set(initID(DATATABLES, "apps"), appsTableInit());
-    setTableStyles(html, "apps", ".queue {width:6em}", ".ui {width:8em}");
-
-    // Set the correct title.
-    String reqState = $(APP_STATE);
-    reqState = (reqState == null || reqState.isEmpty() ? "All" : reqState);
-    setTitle(sjoin(reqState, "Applications"));
-  }
-
-  protected void commonPreHead(Page.HTML<_> html) {
-    set(ACCORDION_ID, "nav");
-    set(initID(ACCORDION, "nav"), "{autoHeight:false, active:0}");
-  }
-
-  @Override
-  protected Class<? extends SubView> nav() {
-    return NavBlock.class;
-  }
-
-  @Override
-  protected Class<? extends SubView> content() {
-    return AppsBlock.class;
-  }
-
-  private String appsTableInit() {
-    // id, user, name, queue, starttime, finishtime, state, status, progress, ui
-    return tableInit().append(", 'aaData': appsTableData")
-      .append(", bDeferRender: true").append(", bProcessing: true")
-
-      .append("\n, aoColumnDefs: ").append(getAppsTableColumnDefs())
-
-      // Sort by id upon page load
-      .append(", aaSorting: [[0, 'desc']]}").toString();
-  }
-
-  protected String getAppsTableColumnDefs() {
-    StringBuilder sb = new StringBuilder();
-    return sb.append("[\n").append("{'sType':'numeric', 'aTargets': [0]")
-      .append(", 'mRender': parseHadoopID }")
-
-      .append("\n, {'sType':'numeric', 'aTargets': [5, 6]")
-      .append(", 'mRender': renderHadoopDate }")
-
-      .append("\n, {'sType':'numeric', bSearchable:false, 'aTargets': [9]")
-      .append(", 'mRender': parseHadoopProgress }]").toString();
-  }
-}
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/AHSWebApp.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/AHSWebApp.java
deleted file mode 100644
index 72facce..0000000
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/AHSWebApp.java
+++ /dev/null
@@ -1,66 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.hadoop.yarn.server.applicationhistoryservice.webapp;
-
-import static org.apache.hadoop.yarn.util.StringHelper.pajoin;
-
-import org.apache.hadoop.yarn.api.ApplicationBaseProtocol;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryClientService;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryManager;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricStore;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.timeline.TimelineStore;
-import org.apache.hadoop.yarn.webapp.GenericExceptionHandler;
-import org.apache.hadoop.yarn.webapp.WebApp;
-import org.apache.hadoop.yarn.webapp.YarnJacksonJaxbJsonProvider;
-import org.apache.hadoop.yarn.webapp.YarnWebParams;
-
-public class AHSWebApp extends WebApp implements YarnWebParams {
-
-  private final TimelineStore timelineStore;
-  private final TimelineMetricStore timelineMetricStore;
-  private final ApplicationHistoryClientService historyClientService;
-
-  public AHSWebApp(TimelineStore timelineStore,
-    TimelineMetricStore timelineMetricStore,
-    ApplicationHistoryClientService historyClientService) {
-
-    this.timelineStore = timelineStore;
-    this.timelineMetricStore = timelineMetricStore;
-    this.historyClientService = historyClientService;
-  }
-
-  @Override
-  public void setup() {
-    bind(YarnJacksonJaxbJsonProvider.class);
-    bind(AHSWebServices.class);
-    bind(TimelineWebServices.class);
-    bind(GenericExceptionHandler.class);
-    bind(ApplicationBaseProtocol.class).toInstance(historyClientService);
-    bind(TimelineStore.class).toInstance(timelineStore);
-    bind(TimelineMetricStore.class).toInstance(timelineMetricStore);
-    route("/", AHSController.class);
-    route(pajoin("/apps", APP_STATE), AHSController.class);
-    route(pajoin("/app", APPLICATION_ID), AHSController.class, "app");
-    route(pajoin("/appattempt", APPLICATION_ATTEMPT_ID), AHSController.class,
-      "appattempt");
-    route(pajoin("/container", CONTAINER_ID), AHSController.class, "container");
-    route(
-      pajoin("/logs", NM_NODENAME, CONTAINER_ID, ENTITY_STRING, APP_OWNER,
-        CONTAINER_LOG_TYPE), AHSController.class, "logs");
-  }
-}
\ No newline at end of file
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/AHSWebServices.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/AHSWebServices.java
deleted file mode 100644
index 3064d2d..0000000
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/AHSWebServices.java
+++ /dev/null
@@ -1,162 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.yarn.server.applicationhistoryservice.webapp;
-
-import java.util.Collections;
-import java.util.Set;
-
-import javax.servlet.http.HttpServletRequest;
-import javax.servlet.http.HttpServletResponse;
-import javax.ws.rs.GET;
-import javax.ws.rs.Path;
-import javax.ws.rs.PathParam;
-import javax.ws.rs.Produces;
-import javax.ws.rs.QueryParam;
-import javax.ws.rs.core.Context;
-import javax.ws.rs.core.MediaType;
-
-import org.apache.hadoop.yarn.api.ApplicationBaseProtocol;
-import org.apache.hadoop.yarn.api.records.YarnApplicationState;
-import org.apache.hadoop.yarn.server.webapp.WebServices;
-import org.apache.hadoop.yarn.server.webapp.dao.AppAttemptInfo;
-import org.apache.hadoop.yarn.server.webapp.dao.AppAttemptsInfo;
-import org.apache.hadoop.yarn.server.webapp.dao.AppInfo;
-import org.apache.hadoop.yarn.server.webapp.dao.AppsInfo;
-import org.apache.hadoop.yarn.server.webapp.dao.ContainerInfo;
-import org.apache.hadoop.yarn.server.webapp.dao.ContainersInfo;
-import org.apache.hadoop.yarn.webapp.BadRequestException;
-
-import com.google.inject.Inject;
-import com.google.inject.Singleton;
-
-@Singleton
-@Path("/ws/v1/applicationhistory")
-public class AHSWebServices extends WebServices {
-
-  @Inject
-  public AHSWebServices(ApplicationBaseProtocol appBaseProt) {
-    super(appBaseProt);
-  }
-
-  @GET
-  @Produces({ MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML })
-  public AppsInfo get(@Context HttpServletRequest req,
-      @Context HttpServletResponse res) {
-    return getApps(req, res, null, Collections.<String> emptySet(), null, null,
-      null, null, null, null, null, null, Collections.<String> emptySet());
-  }
-
-  @GET
-  @Path("/apps")
-  @Produces({ MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML })
-  @Override
-  public AppsInfo getApps(@Context HttpServletRequest req,
-      @Context HttpServletResponse res, @QueryParam("state") String stateQuery,
-      @QueryParam("states") Set<String> statesQuery,
-      @QueryParam("finalStatus") String finalStatusQuery,
-      @QueryParam("user") String userQuery,
-      @QueryParam("queue") String queueQuery,
-      @QueryParam("limit") String count,
-      @QueryParam("startedTimeBegin") String startedBegin,
-      @QueryParam("startedTimeEnd") String startedEnd,
-      @QueryParam("finishedTimeBegin") String finishBegin,
-      @QueryParam("finishedTimeEnd") String finishEnd,
-      @QueryParam("applicationTypes") Set<String> applicationTypes) {
-    init(res);
-    validateStates(stateQuery, statesQuery);
-    return super.getApps(req, res, stateQuery, statesQuery, finalStatusQuery,
-      userQuery, queueQuery, count, startedBegin, startedEnd, finishBegin,
-      finishEnd, applicationTypes);
-  }
-
-  @GET
-  @Path("/apps/{appid}")
-  @Produces({ MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML })
-  @Override
-  public AppInfo getApp(@Context HttpServletRequest req,
-      @Context HttpServletResponse res, @PathParam("appid") String appId) {
-    init(res);
-    return super.getApp(req, res, appId);
-  }
-
-  @GET
-  @Path("/apps/{appid}/appattempts")
-  @Produces({ MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML })
-  @Override
-  public AppAttemptsInfo getAppAttempts(@Context HttpServletRequest req,
-      @Context HttpServletResponse res, @PathParam("appid") String appId) {
-    init(res);
-    return super.getAppAttempts(req, res, appId);
-  }
-
-  @GET
-  @Path("/apps/{appid}/appattempts/{appattemptid}")
-  @Produces({ MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML })
-  @Override
-  public AppAttemptInfo getAppAttempt(@Context HttpServletRequest req,
-      @Context HttpServletResponse res, @PathParam("appid") String appId,
-      @PathParam("appattemptid") String appAttemptId) {
-    init(res);
-    return super.getAppAttempt(req, res, appId, appAttemptId);
-  }
-
-  @GET
-  @Path("/apps/{appid}/appattempts/{appattemptid}/containers")
-  @Produces({ MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML })
-  @Override
-  public ContainersInfo getContainers(@Context HttpServletRequest req,
-      @Context HttpServletResponse res, @PathParam("appid") String appId,
-      @PathParam("appattemptid") String appAttemptId) {
-    init(res);
-    return super.getContainers(req, res, appId, appAttemptId);
-  }
-
-  @GET
-  @Path("/apps/{appid}/appattempts/{appattemptid}/containers/{containerid}")
-  @Produces({ MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML })
-  @Override
-  public ContainerInfo getContainer(@Context HttpServletRequest req,
-      @Context HttpServletResponse res, @PathParam("appid") String appId,
-      @PathParam("appattemptid") String appAttemptId,
-      @PathParam("containerid") String containerId) {
-    init(res);
-    return super.getContainer(req, res, appId, appAttemptId, containerId);
-  }
-
-  private static void
-      validateStates(String stateQuery, Set<String> statesQuery) {
-    // stateQuery is deprecated.
-    if (stateQuery != null && !stateQuery.isEmpty()) {
-      statesQuery.add(stateQuery);
-    }
-    Set<String> appStates = parseQueries(statesQuery, true);
-    for (String appState : appStates) {
-      switch (YarnApplicationState.valueOf(appState.toUpperCase())) {
-        case FINISHED:
-        case FAILED:
-        case KILLED:
-          continue;
-        default:
-          throw new BadRequestException("Invalid application-state " + appState
-              + " specified. It should be a final state");
-      }
-    }
-  }
-
-}
\ No newline at end of file
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/timeline/package-info.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/AMSController.java
similarity index 74%
rename from ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/timeline/package-info.java
rename to ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/AMSController.java
index 970e868..0bf962e 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/timeline/package-info.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/AMSController.java
@@ -15,6 +15,23 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-@InterfaceAudience.Private
-package org.apache.hadoop.yarn.server.applicationhistoryservice.timeline;
-import org.apache.hadoop.classification.InterfaceAudience;
+
+package org.apache.hadoop.yarn.server.applicationhistoryservice.webapp;
+
+import org.apache.hadoop.yarn.webapp.Controller;
+
+import com.google.inject.Inject;
+
+public class AMSController extends Controller {
+
+  @Inject
+  AMSController(RequestContext ctx) {
+    super(ctx);
+  }
+
+  @Override
+  public void index() {
+    setTitle("Ambari Metrics Service");
+  }
+
+}
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/ContainerPage.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/AMSWebApp.java
similarity index 55%
rename from ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/ContainerPage.java
rename to ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/AMSWebApp.java
index 1be8a26..2f6eec7 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/ContainerPage.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/AMSWebApp.java
@@ -17,25 +17,26 @@
  */
 package org.apache.hadoop.yarn.server.applicationhistoryservice.webapp;
 
-import static org.apache.hadoop.yarn.util.StringHelper.join;
-
-import org.apache.hadoop.yarn.server.webapp.ContainerBlock;
-import org.apache.hadoop.yarn.webapp.SubView;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricStore;
+import org.apache.hadoop.yarn.webapp.GenericExceptionHandler;
+import org.apache.hadoop.yarn.webapp.WebApp;
+import org.apache.hadoop.yarn.webapp.YarnJacksonJaxbJsonProvider;
 import org.apache.hadoop.yarn.webapp.YarnWebParams;
 
-public class ContainerPage extends AHSView {
-
-  @Override
-  protected void preHead(Page.HTML<_> html) {
-    commonPreHead(html);
+public class AMSWebApp extends WebApp implements YarnWebParams {
+  
+  private final TimelineMetricStore timelineMetricStore;
 
-    String containerId = $(YarnWebParams.CONTAINER_ID);
-    set(TITLE, containerId.isEmpty() ? "Bad request: missing container ID"
-        : join("Container ", $(YarnWebParams.CONTAINER_ID)));
+  public AMSWebApp(TimelineMetricStore timelineMetricStore) {
+    this.timelineMetricStore = timelineMetricStore;
   }
 
   @Override
-  protected Class<? extends SubView> content() {
-    return ContainerBlock.class;
+  public void setup() {
+    bind(YarnJacksonJaxbJsonProvider.class);
+    bind(TimelineWebServices.class);
+    bind(GenericExceptionHandler.class);
+    bind(TimelineMetricStore.class).toInstance(timelineMetricStore);
+    route("/", AMSController.class);
   }
 }
\ No newline at end of file
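The setup() bindings above are what let the trimmed-down web services class be constructed: binding TimelineMetricStore to a concrete instance allows Guice to satisfy the @Inject constructor of TimelineWebServices. A minimal sketch of that wiring with a hand-built injector; the metricStore variable is an assumed pre-built instance, and the real service is bootstrapped through the YARN WebApp machinery rather than like this.

    // Uses com.google.inject.Guice, Injector, and AbstractModule.
    Injector injector = Guice.createInjector(new AbstractModule() {
      @Override
      protected void configure() {
        // Mirrors the bind(...).toInstance(...) call in AMSWebApp.setup().
        bind(TimelineMetricStore.class).toInstance(metricStore);
      }
    });
    // Guice can now invoke TimelineWebServices' @Inject constructor.
    TimelineWebServices services = injector.getInstance(TimelineWebServices.class);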
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/AppAttemptPage.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/AppAttemptPage.java
deleted file mode 100644
index 63b44bd..0000000
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/AppAttemptPage.java
+++ /dev/null
@@ -1,69 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.hadoop.yarn.server.applicationhistoryservice.webapp;
-
-import static org.apache.hadoop.yarn.util.StringHelper.join;
-import static org.apache.hadoop.yarn.webapp.view.JQueryUI.DATATABLES;
-import static org.apache.hadoop.yarn.webapp.view.JQueryUI.DATATABLES_ID;
-import static org.apache.hadoop.yarn.webapp.view.JQueryUI.initID;
-import static org.apache.hadoop.yarn.webapp.view.JQueryUI.tableInit;
-
-import org.apache.hadoop.yarn.server.webapp.AppAttemptBlock;
-import org.apache.hadoop.yarn.webapp.SubView;
-import org.apache.hadoop.yarn.webapp.YarnWebParams;
-
-public class AppAttemptPage extends AHSView {
-
-  @Override
-  protected void preHead(Page.HTML<_> html) {
-    commonPreHead(html);
-
-    String appAttemptId = $(YarnWebParams.APPLICATION_ATTEMPT_ID);
-    set(
-      TITLE,
-      appAttemptId.isEmpty() ? "Bad request: missing application attempt ID"
-          : join("Application Attempt ",
-            $(YarnWebParams.APPLICATION_ATTEMPT_ID)));
-
-    set(DATATABLES_ID, "containers");
-    set(initID(DATATABLES, "containers"), containersTableInit());
-    setTableStyles(html, "containers", ".queue {width:6em}", ".ui {width:8em}");
-  }
-
-  @Override
-  protected Class<? extends SubView> content() {
-    return AppAttemptBlock.class;
-  }
-
-  private String containersTableInit() {
-    return tableInit().append(", 'aaData': containersTableData")
-      .append(", bDeferRender: true").append(", bProcessing: true")
-
-      .append("\n, aoColumnDefs: ").append(getContainersTableColumnDefs())
-
-      // Sort by id upon page load
-      .append(", aaSorting: [[0, 'desc']]}").toString();
-  }
-
-  protected String getContainersTableColumnDefs() {
-    StringBuilder sb = new StringBuilder();
-    return sb.append("[\n").append("{'sType':'numeric', 'aTargets': [0]")
-      .append(", 'mRender': parseHadoopID }]").toString();
-  }
-
-}
\ No newline at end of file
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/AppPage.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/AppPage.java
deleted file mode 100644
index 96ca659..0000000
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/AppPage.java
+++ /dev/null
@@ -1,71 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.yarn.server.applicationhistoryservice.webapp;
-
-import static org.apache.hadoop.yarn.util.StringHelper.join;
-import static org.apache.hadoop.yarn.webapp.view.JQueryUI.DATATABLES;
-import static org.apache.hadoop.yarn.webapp.view.JQueryUI.DATATABLES_ID;
-import static org.apache.hadoop.yarn.webapp.view.JQueryUI.initID;
-import static org.apache.hadoop.yarn.webapp.view.JQueryUI.tableInit;
-
-import org.apache.hadoop.yarn.server.webapp.AppBlock;
-import org.apache.hadoop.yarn.webapp.SubView;
-import org.apache.hadoop.yarn.webapp.YarnWebParams;
-
-public class AppPage extends AHSView {
-
-  @Override
-  protected void preHead(Page.HTML<_> html) {
-    commonPreHead(html);
-
-    String appId = $(YarnWebParams.APPLICATION_ID);
-    set(
-      TITLE,
-      appId.isEmpty() ? "Bad request: missing application ID" : join(
-        "Application ", $(YarnWebParams.APPLICATION_ID)));
-
-    set(DATATABLES_ID, "attempts");
-    set(initID(DATATABLES, "attempts"), attemptsTableInit());
-    setTableStyles(html, "attempts", ".queue {width:6em}", ".ui {width:8em}");
-  }
-
-  @Override
-  protected Class<? extends SubView> content() {
-    return AppBlock.class;
-  }
-
-  private String attemptsTableInit() {
-    return tableInit().append(", 'aaData': attemptsTableData")
-      .append(", bDeferRender: true").append(", bProcessing: true")
-
-      .append("\n, aoColumnDefs: ").append(getAttemptsTableColumnDefs())
-
-      // Sort by id upon page load
-      .append(", aaSorting: [[0, 'desc']]}").toString();
-  }
-
-  protected String getAttemptsTableColumnDefs() {
-    StringBuilder sb = new StringBuilder();
-    return sb.append("[\n").append("{'sType':'numeric', 'aTargets': [0]")
-      .append(", 'mRender': parseHadoopID }")
-
-      .append("\n, {'sType':'numeric', 'aTargets': [1]")
-      .append(", 'mRender': renderHadoopDate }]").toString();
-  }
-}
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/NavBlock.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/NavBlock.java
deleted file mode 100644
index e84ddec..0000000
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/NavBlock.java
+++ /dev/null
@@ -1,51 +0,0 @@
-/**
-* Licensed to the Apache Software Foundation (ASF) under one
-* or more contributor license agreements.  See the NOTICE file
-* distributed with this work for additional information
-* regarding copyright ownership.  The ASF licenses this file
-* to you under the Apache License, Version 2.0 (the
-* "License"); you may not use this file except in compliance
-* with the License.  You may obtain a copy of the License at
-*
-*     http://www.apache.org/licenses/LICENSE-2.0
-*
-* Unless required by applicable law or agreed to in writing, software
-* distributed under the License is distributed on an "AS IS" BASIS,
-* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-* See the License for the specific language governing permissions and
-* limitations under the License.
-*/
-
-package org.apache.hadoop.yarn.server.applicationhistoryservice.webapp;
-
-import org.apache.hadoop.yarn.api.records.YarnApplicationState;
-import org.apache.hadoop.yarn.webapp.view.HtmlBlock;
-
-public class NavBlock extends HtmlBlock {
-
-  @Override
-  public void render(Block html) {
-    html.
-        div("#nav").
-            h3("Application History").
-                ul().
-                    li().a(url("apps"), "Applications").
-                        ul().
-                            li().a(url("apps",
-                                YarnApplicationState.FINISHED.toString()),
-                                YarnApplicationState.FINISHED.toString()).
-                            _().
-                            li().a(url("apps",
-                                YarnApplicationState.FAILED.toString()),
-                                YarnApplicationState.FAILED.toString()).
-                            _().
-                            li().a(url("apps",
-                                YarnApplicationState.KILLED.toString()),
-                                YarnApplicationState.KILLED.toString()).
-                            _().
-                        _().
-                    _().
-                _().
-            _();
-  }
-}
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/TimelineWebServices.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/TimelineWebServices.java
index dc401e6..2930b33 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/TimelineWebServices.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/TimelineWebServices.java
@@ -18,44 +18,26 @@
 
 package org.apache.hadoop.yarn.server.applicationhistoryservice.webapp;
 
-import com.google.inject.Inject;
-import com.google.inject.Singleton;
-import org.apache.commons.lang.StringUtils;
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-import org.apache.hadoop.classification.InterfaceAudience.Public;
-import org.apache.hadoop.classification.InterfaceStability.Unstable;
-import org.apache.hadoop.metrics2.sink.timeline.AggregationResult;
-import org.apache.hadoop.metrics2.sink.timeline.ContainerMetric;
-import org.apache.hadoop.metrics2.sink.timeline.PrecisionLimitExceededException;
-import org.apache.hadoop.metrics2.sink.timeline.TimelineMetricMetadata;
-import org.apache.hadoop.metrics2.sink.timeline.TopNConfig;
-import org.apache.hadoop.yarn.api.records.timeline.TimelineEntities;
-import org.apache.hadoop.yarn.api.records.timeline.TimelineEntity;
-import org.apache.hadoop.yarn.api.records.timeline.TimelineEvents;
-import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
-import org.apache.hadoop.metrics2.sink.timeline.TimelineMetrics;
-import org.apache.hadoop.yarn.api.records.timeline.TimelinePutResponse;
-import org.apache.hadoop.metrics2.sink.timeline.Precision;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricStore;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.discovery.TimelineMetricMetadataKey;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.timeline.EntityIdentifier;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.timeline.GenericObjectMapper;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.timeline.NameValuePair;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.timeline.TimelineReader.Field;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.timeline.TimelineStore;
-import org.apache.hadoop.yarn.util.timeline.TimelineUtils;
-import org.apache.hadoop.yarn.webapp.BadRequestException;
+import java.io.IOException;
+import java.sql.SQLException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.EnumSet;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.SortedSet;
+import java.util.TreeSet;
 
 import javax.servlet.http.HttpServletRequest;
 import javax.servlet.http.HttpServletResponse;
 import javax.ws.rs.Consumes;
-import javax.ws.rs.DELETE;
-import javax.ws.rs.DefaultValue;
 import javax.ws.rs.GET;
 import javax.ws.rs.POST;
 import javax.ws.rs.Path;
-import javax.ws.rs.PathParam;
 import javax.ws.rs.Produces;
 import javax.ws.rs.QueryParam;
 import javax.ws.rs.WebApplicationException;
@@ -66,40 +48,40 @@ import javax.xml.bind.annotation.XmlAccessType;
 import javax.xml.bind.annotation.XmlAccessorType;
 import javax.xml.bind.annotation.XmlElement;
 import javax.xml.bind.annotation.XmlRootElement;
-import java.io.IOException;
-import java.sql.SQLException;
-import java.util.ArrayList;
-import java.util.Arrays;
-import java.util.Collection;
-import java.util.Collections;
-import java.util.EnumSet;
-import java.util.HashSet;
-import java.util.List;
-import java.util.Map;
-import java.util.Set;
-import java.util.SortedSet;
-import java.util.TreeSet;
-import java.util.concurrent.ExecutorService;
-import java.util.concurrent.Executors;
-import java.util.concurrent.ScheduledExecutorService;
-import java.util.concurrent.TimeUnit;
 
-import static org.apache.hadoop.yarn.util.StringHelper.CSV_JOINER;
+import org.apache.commons.lang.StringUtils;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.classification.InterfaceAudience.Public;
+import org.apache.hadoop.classification.InterfaceStability.Unstable;
+import org.apache.hadoop.metrics2.sink.timeline.AggregationResult;
+import org.apache.hadoop.metrics2.sink.timeline.ContainerMetric;
+import org.apache.hadoop.metrics2.sink.timeline.Precision;
+import org.apache.hadoop.metrics2.sink.timeline.PrecisionLimitExceededException;
+import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
+import org.apache.hadoop.metrics2.sink.timeline.TimelineMetricMetadata;
+import org.apache.hadoop.metrics2.sink.timeline.TimelineMetrics;
+import org.apache.hadoop.metrics2.sink.timeline.TopNConfig;
+import org.apache.hadoop.yarn.api.records.timeline.TimelinePutResponse;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricStore;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.timeline.GenericObjectMapper;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.timeline.NameValuePair;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.timeline.TimelineReader.Field;
+import org.apache.hadoop.yarn.util.timeline.TimelineUtils;
+import org.apache.hadoop.yarn.webapp.BadRequestException;
+
+import com.google.inject.Inject;
+import com.google.inject.Singleton;
 
 @Singleton
 @Path("/ws/v1/timeline")
-//TODO: support XML serialization/deserialization
 public class TimelineWebServices {
-
   private static final Log LOG = LogFactory.getLog(TimelineWebServices.class);
-
-  private TimelineStore store;
+  
   private TimelineMetricStore timelineMetricStore;
 
   @Inject
-  public TimelineWebServices(TimelineStore store,
-                             TimelineMetricStore timelineMetricStore) {
-    this.store = store;
+  public TimelineWebServices(TimelineMetricStore timelineMetricStore) {
     this.timelineMetricStore = timelineMetricStore;
   }
 
@@ -139,125 +121,7 @@ public class TimelineWebServices {
       @Context HttpServletRequest req,
       @Context HttpServletResponse res) {
     init(res);
-    return new AboutInfo("Timeline API");
-  }
-
-  /**
-   * Return a list of entities that match the given parameters.
-   */
-  @GET
-  @Path("/{entityType}")
-  @Produces({ MediaType.APPLICATION_JSON /* , MediaType.APPLICATION_XML */})
-  public TimelineEntities getEntities(
-      @Context HttpServletRequest req,
-      @Context HttpServletResponse res,
-      @PathParam("entityType") String entityType,
-      @QueryParam("primaryFilter") String primaryFilter,
-      @QueryParam("secondaryFilter") String secondaryFilter,
-      @QueryParam("windowStart") String windowStart,
-      @QueryParam("windowEnd") String windowEnd,
-      @QueryParam("fromId") String fromId,
-      @QueryParam("fromTs") String fromTs,
-      @QueryParam("limit") String limit,
-      @QueryParam("fields") String fields) {
-    init(res);
-    TimelineEntities entities = null;
-    try {
-      entities = store.getEntities(
-        parseStr(entityType),
-        parseLongStr(limit),
-        parseLongStr(windowStart),
-        parseLongStr(windowEnd),
-        parseStr(fromId),
-        parseLongStr(fromTs),
-        parsePairStr(primaryFilter, ":"),
-        parsePairsStr(secondaryFilter, ",", ":"),
-        parseFieldsStr(fields, ","));
-    } catch (NumberFormatException e) {
-      throw new BadRequestException(
-          "windowStart, windowEnd or limit is not a numeric value.");
-    } catch (IllegalArgumentException e) {
-      throw new BadRequestException("requested invalid field.");
-    } catch (IOException e) {
-      LOG.error("Error getting entities", e);
-      throw new WebApplicationException(e,
-          Response.Status.INTERNAL_SERVER_ERROR);
-    }
-    if (entities == null) {
-      return new TimelineEntities();
-    }
-    return entities;
-  }
-
-  /**
-   * Return a single entity of the given entity type and Id.
-   */
-  @GET
-  @Path("/{entityType}/{entityId}")
-  @Produces({ MediaType.APPLICATION_JSON /* , MediaType.APPLICATION_XML */})
-  public TimelineEntity getEntity(
-      @Context HttpServletRequest req,
-      @Context HttpServletResponse res,
-      @PathParam("entityType") String entityType,
-      @PathParam("entityId") String entityId,
-      @QueryParam("fields") String fields) {
-    init(res);
-    TimelineEntity entity = null;
-    try {
-      entity =
-          store.getEntity(parseStr(entityId), parseStr(entityType),
-            parseFieldsStr(fields, ","));
-    } catch (IllegalArgumentException e) {
-      throw new BadRequestException(
-          "requested invalid field.");
-    } catch (IOException e) {
-      LOG.error("Error getting entity", e);
-      throw new WebApplicationException(e,
-          Response.Status.INTERNAL_SERVER_ERROR);
-    }
-    if (entity == null) {
-      throw new WebApplicationException(Response.Status.NOT_FOUND);
-    }
-    return entity;
-  }
-
-  /**
-   * Return the events that match the given parameters.
-   */
-  @GET
-  @Path("/{entityType}/events")
-  @Produces({ MediaType.APPLICATION_JSON /* , MediaType.APPLICATION_XML */})
-  public TimelineEvents getEvents(
-      @Context HttpServletRequest req,
-      @Context HttpServletResponse res,
-      @PathParam("entityType") String entityType,
-      @QueryParam("entityId") String entityId,
-      @QueryParam("eventType") String eventType,
-      @QueryParam("windowStart") String windowStart,
-      @QueryParam("windowEnd") String windowEnd,
-      @QueryParam("limit") String limit) {
-    init(res);
-    TimelineEvents events = null;
-    try {
-      events = store.getEntityTimelines(
-        parseStr(entityType),
-        parseArrayStr(entityId, ","),
-        parseLongStr(limit),
-        parseLongStr(windowStart),
-        parseLongStr(windowEnd),
-        parseArrayStr(eventType, ","));
-    } catch (NumberFormatException e) {
-      throw new BadRequestException(
-          "windowStart, windowEnd or limit is not a numeric value.");
-    } catch (IOException e) {
-      LOG.error("Error getting entity timelines", e);
-      throw new WebApplicationException(e,
-          Response.Status.INTERNAL_SERVER_ERROR);
-    }
-    if (events == null) {
-      return new TimelineEvents();
-    }
-    return events;
+    return new AboutInfo("AMS API");
   }
 
   /**
@@ -559,42 +423,6 @@ public class TimelineWebServices {
     return timelineMetricStore.getLiveInstances();
   }
 
-  /**
-   * Store the given entities into the timeline store, and return the errors
-   * that happen during storing.
-   */
-  @POST
-  @Consumes({ MediaType.APPLICATION_JSON /* , MediaType.APPLICATION_XML */})
-  public TimelinePutResponse postEntities(
-      @Context HttpServletRequest req,
-      @Context HttpServletResponse res,
-      TimelineEntities entities) {
-    init(res);
-    if (entities == null) {
-      return new TimelinePutResponse();
-    }
-    try {
-      List<EntityIdentifier> entityIDs = new ArrayList<EntityIdentifier>();
-      for (TimelineEntity entity : entities.getEntities()) {
-        EntityIdentifier entityID =
-            new EntityIdentifier(entity.getEntityId(), entity.getEntityType());
-        entityIDs.add(entityID);
-        if (LOG.isDebugEnabled()) {
-          LOG.debug("Storing the entity " + entityID + ", JSON-style content: "
-              + TimelineUtils.dumpTimelineRecordtoJSON(entity));
-        }
-      }
-      if (LOG.isDebugEnabled()) {
-        LOG.debug("Storing entities: " + CSV_JOINER.join(entityIDs));
-      }
-      return store.put(entities);
-    } catch (IOException e) {
-      LOG.error("Error putting entities", e);
-      throw new WebApplicationException(e,
-          Response.Status.INTERNAL_SERVER_ERROR);
-    }
-  }
-
   private void init(HttpServletResponse response) {
     response.setContentType(null);
   }
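
With the TimelineStore dependency removed, TimelineWebServices is constructed
from a single TimelineMetricStore, so the dependency-injection wiring reduces
to one binding. A minimal Guice sketch of that wiring, assuming
HBaseTimelineMetricsService is still the implementation bound to
TimelineMetricStore (the module class and main method are illustrative, not
part of this patch; project-local imports are omitted):

    import com.google.inject.AbstractModule;
    import com.google.inject.Guice;

    // Illustrative module, assuming the usual AMS store implementation.
    public class MetricsWebServicesModule extends AbstractModule {
      @Override
      protected void configure() {
        // TimelineWebServices now declares exactly one @Inject dependency.
        bind(TimelineMetricStore.class).to(HBaseTimelineMetricsService.class);
      }

      public static void main(String[] args) {
        TimelineWebServices services =
            Guice.createInjector(new MetricsWebServicesModule())
                .getInstance(TimelineWebServices.class);
      }
    }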
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryStoreTestUtils.java b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryStoreTestUtils.java
deleted file mode 100644
index ec9b49d..0000000
--- a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryStoreTestUtils.java
+++ /dev/null
@@ -1,84 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.yarn.server.applicationhistoryservice;
-
-import java.io.IOException;
-
-import org.apache.hadoop.yarn.api.records.ApplicationAttemptId;
-import org.apache.hadoop.yarn.api.records.ApplicationId;
-import org.apache.hadoop.yarn.api.records.ContainerId;
-import org.apache.hadoop.yarn.api.records.ContainerState;
-import org.apache.hadoop.yarn.api.records.FinalApplicationStatus;
-import org.apache.hadoop.yarn.api.records.NodeId;
-import org.apache.hadoop.yarn.api.records.Priority;
-import org.apache.hadoop.yarn.api.records.Resource;
-import org.apache.hadoop.yarn.api.records.YarnApplicationAttemptState;
-import org.apache.hadoop.yarn.api.records.YarnApplicationState;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.records.ApplicationAttemptFinishData;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.records.ApplicationAttemptStartData;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.records.ApplicationFinishData;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.records.ApplicationStartData;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.records.ContainerFinishData;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.records.ContainerStartData;
-
-public class ApplicationHistoryStoreTestUtils {
-
-  protected ApplicationHistoryStore store;
-
-  protected void writeApplicationStartData(ApplicationId appId)
-      throws IOException {
-    store.applicationStarted(ApplicationStartData.newInstance(appId,
-      appId.toString(), "test type", "test queue", "test user", 0, 0));
-  }
-
-  protected void writeApplicationFinishData(ApplicationId appId)
-      throws IOException {
-    store.applicationFinished(ApplicationFinishData.newInstance(appId, 0,
-      appId.toString(), FinalApplicationStatus.UNDEFINED,
-      YarnApplicationState.FINISHED));
-  }
-
-  protected void writeApplicationAttemptStartData(
-      ApplicationAttemptId appAttemptId) throws IOException {
-    store.applicationAttemptStarted(ApplicationAttemptStartData.newInstance(
-      appAttemptId, appAttemptId.toString(), 0,
-      ContainerId.newContainerId(appAttemptId, 1)));
-  }
-
-  protected void writeApplicationAttemptFinishData(
-      ApplicationAttemptId appAttemptId) throws IOException {
-    store.applicationAttemptFinished(ApplicationAttemptFinishData.newInstance(
-      appAttemptId, appAttemptId.toString(), "test tracking url",
-      FinalApplicationStatus.UNDEFINED, YarnApplicationAttemptState.FINISHED));
-  }
-
-  protected void writeContainerStartData(ContainerId containerId)
-      throws IOException {
-    store.containerStarted(ContainerStartData.newInstance(containerId,
-      Resource.newInstance(0, 0), NodeId.newInstance("localhost", 0),
-      Priority.newInstance(containerId.getId()), 0));
-  }
-
-  protected void writeContainerFinishData(ContainerId containerId)
-      throws IOException {
-    store.containerFinished(ContainerFinishData.newInstance(containerId, 0,
-      containerId.toString(), 0, ContainerState.COMPLETE));
-  }
-
-}
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestApplicationHistoryClientService.java b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestApplicationHistoryClientService.java
deleted file mode 100644
index f93ac5e..0000000
--- a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestApplicationHistoryClientService.java
+++ /dev/null
@@ -1,209 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.yarn.server.applicationhistoryservice;
-
-import java.io.IOException;
-import java.util.List;
-
-import junit.framework.Assert;
-
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.yarn.api.protocolrecords.GetApplicationAttemptReportRequest;
-import org.apache.hadoop.yarn.api.protocolrecords.GetApplicationAttemptReportResponse;
-import org.apache.hadoop.yarn.api.protocolrecords.GetApplicationAttemptsRequest;
-import org.apache.hadoop.yarn.api.protocolrecords.GetApplicationAttemptsResponse;
-import org.apache.hadoop.yarn.api.protocolrecords.GetApplicationReportRequest;
-import org.apache.hadoop.yarn.api.protocolrecords.GetApplicationReportResponse;
-import org.apache.hadoop.yarn.api.protocolrecords.GetApplicationsRequest;
-import org.apache.hadoop.yarn.api.protocolrecords.GetApplicationsResponse;
-import org.apache.hadoop.yarn.api.protocolrecords.GetContainerReportRequest;
-import org.apache.hadoop.yarn.api.protocolrecords.GetContainerReportResponse;
-import org.apache.hadoop.yarn.api.protocolrecords.GetContainersRequest;
-import org.apache.hadoop.yarn.api.protocolrecords.GetContainersResponse;
-import org.apache.hadoop.yarn.api.records.ApplicationAttemptId;
-import org.apache.hadoop.yarn.api.records.ApplicationAttemptReport;
-import org.apache.hadoop.yarn.api.records.ApplicationId;
-import org.apache.hadoop.yarn.api.records.ApplicationReport;
-import org.apache.hadoop.yarn.api.records.ContainerId;
-import org.apache.hadoop.yarn.api.records.ContainerReport;
-import org.apache.hadoop.yarn.conf.YarnConfiguration;
-import org.apache.hadoop.yarn.exceptions.YarnException;
-import org.apache.hadoop.yarn.webapp.util.WebAppUtils;
-import org.junit.After;
-import org.junit.Before;
-import org.junit.Ignore;
-import org.junit.Test;
-
-// Timeline service client support is not enabled for AMBARI_METRICS
-@Ignore
-public class TestApplicationHistoryClientService extends
-    ApplicationHistoryStoreTestUtils {
-
-  ApplicationHistoryServer historyServer = null;
-  String expectedLogUrl = null;
-
-  @Before
-  public void setup() {
-    historyServer = new ApplicationHistoryServer();
-    Configuration config = new YarnConfiguration();
-    expectedLogUrl = WebAppUtils.getHttpSchemePrefix(config) +
-        WebAppUtils.getAHSWebAppURLWithoutScheme(config) +
-        "/applicationhistory/logs/localhost:0/container_0_0001_01_000001/" +
-        "container_0_0001_01_000001/test user";
-    config.setClass(YarnConfiguration.APPLICATION_HISTORY_STORE,
-      MemoryApplicationHistoryStore.class, ApplicationHistoryStore.class);
-    historyServer.init(config);
-    historyServer.start();
-    store =
-        ((ApplicationHistoryManagerImpl) historyServer.getApplicationHistory())
-          .getHistoryStore();
-  }
-
-  @After
-  public void tearDown() throws Exception {
-    historyServer.stop();
-  }
-
-  @Test
-  public void testApplicationReport() throws IOException, YarnException {
-    ApplicationId appId = null;
-    appId = ApplicationId.newInstance(0, 1);
-    writeApplicationStartData(appId);
-    writeApplicationFinishData(appId);
-    GetApplicationReportRequest request =
-        GetApplicationReportRequest.newInstance(appId);
-    GetApplicationReportResponse response =
-        historyServer.getClientService().getClientHandler()
-          .getApplicationReport(request);
-    ApplicationReport appReport = response.getApplicationReport();
-    Assert.assertNotNull(appReport);
-    Assert.assertEquals("application_0_0001", appReport.getApplicationId()
-      .toString());
-    Assert.assertEquals("test type", appReport.getApplicationType().toString());
-    Assert.assertEquals("test queue", appReport.getQueue().toString());
-  }
-
-  @Test
-  public void testApplications() throws IOException, YarnException {
-    ApplicationId appId = null;
-    appId = ApplicationId.newInstance(0, 1);
-    writeApplicationStartData(appId);
-    writeApplicationFinishData(appId);
-    ApplicationId appId1 = ApplicationId.newInstance(0, 2);
-    writeApplicationStartData(appId1);
-    writeApplicationFinishData(appId1);
-    GetApplicationsRequest request = GetApplicationsRequest.newInstance();
-    GetApplicationsResponse response =
-        historyServer.getClientService().getClientHandler()
-          .getApplications(request);
-    List<ApplicationReport> appReport = response.getApplicationList();
-    Assert.assertNotNull(appReport);
-    Assert.assertEquals(appId, appReport.get(0).getApplicationId());
-    Assert.assertEquals(appId1, appReport.get(1).getApplicationId());
-  }
-
-  @Test
-  public void testApplicationAttemptReport() throws IOException, YarnException {
-    ApplicationId appId = ApplicationId.newInstance(0, 1);
-    ApplicationAttemptId appAttemptId =
-        ApplicationAttemptId.newInstance(appId, 1);
-    writeApplicationAttemptStartData(appAttemptId);
-    writeApplicationAttemptFinishData(appAttemptId);
-    GetApplicationAttemptReportRequest request =
-        GetApplicationAttemptReportRequest.newInstance(appAttemptId);
-    GetApplicationAttemptReportResponse response =
-        historyServer.getClientService().getClientHandler()
-          .getApplicationAttemptReport(request);
-    ApplicationAttemptReport attemptReport =
-        response.getApplicationAttemptReport();
-    Assert.assertNotNull(attemptReport);
-    Assert.assertEquals("appattempt_0_0001_000001", attemptReport
-      .getApplicationAttemptId().toString());
-  }
-
-  @Test
-  public void testApplicationAttempts() throws IOException, YarnException {
-    ApplicationId appId = ApplicationId.newInstance(0, 1);
-    ApplicationAttemptId appAttemptId =
-        ApplicationAttemptId.newInstance(appId, 1);
-    ApplicationAttemptId appAttemptId1 =
-        ApplicationAttemptId.newInstance(appId, 2);
-    writeApplicationAttemptStartData(appAttemptId);
-    writeApplicationAttemptFinishData(appAttemptId);
-    writeApplicationAttemptStartData(appAttemptId1);
-    writeApplicationAttemptFinishData(appAttemptId1);
-    GetApplicationAttemptsRequest request =
-        GetApplicationAttemptsRequest.newInstance(appId);
-    GetApplicationAttemptsResponse response =
-        historyServer.getClientService().getClientHandler()
-          .getApplicationAttempts(request);
-    List<ApplicationAttemptReport> attemptReports =
-        response.getApplicationAttemptList();
-    Assert.assertNotNull(attemptReports);
-    Assert.assertEquals(appAttemptId, attemptReports.get(0)
-      .getApplicationAttemptId());
-    Assert.assertEquals(appAttemptId1, attemptReports.get(1)
-      .getApplicationAttemptId());
-  }
-
-  @Test
-  public void testContainerReport() throws IOException, YarnException {
-    ApplicationId appId = ApplicationId.newInstance(0, 1);
-    writeApplicationStartData(appId);
-    ApplicationAttemptId appAttemptId =
-        ApplicationAttemptId.newInstance(appId, 1);
-    ContainerId containerId = ContainerId.newContainerId(appAttemptId, 1);
-    writeContainerStartData(containerId);
-    writeContainerFinishData(containerId);
-    writeApplicationFinishData(appId);
-    GetContainerReportRequest request =
-        GetContainerReportRequest.newInstance(containerId);
-    GetContainerReportResponse response =
-        historyServer.getClientService().getClientHandler()
-          .getContainerReport(request);
-    ContainerReport container = response.getContainerReport();
-    Assert.assertNotNull(container);
-    Assert.assertEquals(containerId, container.getContainerId());
-    Assert.assertEquals(expectedLogUrl, container.getLogUrl());
-  }
-
-  @Test
-  public void testContainers() throws IOException, YarnException {
-    ApplicationId appId = ApplicationId.newInstance(0, 1);
-    writeApplicationStartData(appId);
-    ApplicationAttemptId appAttemptId =
-        ApplicationAttemptId.newInstance(appId, 1);
-    ContainerId containerId = ContainerId.newContainerId(appAttemptId, 1);
-    ContainerId containerId1 = ContainerId.newContainerId(appAttemptId, 2);
-    writeContainerStartData(containerId);
-    writeContainerFinishData(containerId);
-    writeContainerStartData(containerId1);
-    writeContainerFinishData(containerId1);
-    writeApplicationFinishData(appId);
-    GetContainersRequest request =
-        GetContainersRequest.newInstance(appAttemptId);
-    GetContainersResponse response =
-        historyServer.getClientService().getClientHandler()
-          .getContainers(request);
-    List<ContainerReport> containers = response.getContainerList();
-    Assert.assertNotNull(containers);
-    Assert.assertEquals(containerId, containers.get(1).getContainerId());
-    Assert.assertEquals(containerId1, containers.get(0).getContainerId());
-  }
-}
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestApplicationHistoryManagerImpl.java b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestApplicationHistoryManagerImpl.java
deleted file mode 100644
index aad23d9..0000000
--- a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestApplicationHistoryManagerImpl.java
+++ /dev/null
@@ -1,76 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.yarn.server.applicationhistoryservice;
-
-import java.io.IOException;
-
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.yarn.api.records.ApplicationAttemptId;
-import org.apache.hadoop.yarn.api.records.ApplicationId;
-import org.apache.hadoop.yarn.api.records.ApplicationReport;
-import org.apache.hadoop.yarn.conf.YarnConfiguration;
-import org.apache.hadoop.yarn.exceptions.YarnException;
-import org.junit.After;
-import org.junit.Assert;
-import org.junit.Before;
-import org.junit.Ignore;
-import org.junit.Test;
-
-public class TestApplicationHistoryManagerImpl extends
-    ApplicationHistoryStoreTestUtils {
-  ApplicationHistoryManagerImpl applicationHistoryManagerImpl = null;
-
-  @Before
-  public void setup() throws Exception {
-    Configuration config = new Configuration();
-    config.setClass(YarnConfiguration.APPLICATION_HISTORY_STORE,
-      MemoryApplicationHistoryStore.class, ApplicationHistoryStore.class);
-    applicationHistoryManagerImpl = new ApplicationHistoryManagerImpl();
-    applicationHistoryManagerImpl.init(config);
-    applicationHistoryManagerImpl.start();
-    store = applicationHistoryManagerImpl.getHistoryStore();
-  }
-
-  @After
-  public void tearDown() throws Exception {
-    applicationHistoryManagerImpl.stop();
-  }
-
-  @Test
-  @Ignore
-  public void testApplicationReport() throws IOException, YarnException {
-    ApplicationId appId = null;
-    appId = ApplicationId.newInstance(0, 1);
-    writeApplicationStartData(appId);
-    writeApplicationFinishData(appId);
-    ApplicationAttemptId appAttemptId =
-        ApplicationAttemptId.newInstance(appId, 1);
-    writeApplicationAttemptStartData(appAttemptId);
-    writeApplicationAttemptFinishData(appAttemptId);
-    ApplicationReport appReport =
-        applicationHistoryManagerImpl.getApplication(appId);
-    Assert.assertNotNull(appReport);
-    Assert.assertEquals(appId, appReport.getApplicationId());
-    Assert.assertEquals(appAttemptId,
-      appReport.getCurrentApplicationAttemptId());
-    Assert.assertEquals(appAttemptId.toString(), appReport.getHost());
-    Assert.assertEquals("test type", appReport.getApplicationType().toString());
-    Assert.assertEquals("test queue", appReport.getQueue().toString());
-  }
-}
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestApplicationHistoryServer.java b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestApplicationHistoryServer.java
deleted file mode 100644
index 03205e7..0000000
--- a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestApplicationHistoryServer.java
+++ /dev/null
@@ -1,267 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.yarn.server.applicationhistoryservice;
-
-import org.apache.commons.io.FileUtils;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.hbase.client.ConnectionFactory;
-import org.apache.hadoop.security.UserGroupInformation;
-import org.apache.hadoop.service.Service.STATE;
-import org.apache.hadoop.util.ExitUtil;
-import org.apache.hadoop.yarn.conf.YarnConfiguration;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.HBaseTimelineMetricsService;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.PhoenixHBaseAccessor;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.availability.MetricCollectorHAController;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.DefaultPhoenixDataSource;
-import org.apache.zookeeper.ClientCnxn;
-import org.easymock.EasyMock;
-import org.junit.After;
-import org.junit.Before;
-import org.junit.Ignore;
-import org.junit.Rule;
-import org.junit.Test;
-import org.junit.rules.TemporaryFolder;
-import org.junit.runner.RunWith;
-import org.powermock.api.easymock.PowerMock;
-import org.powermock.core.classloader.annotations.PowerMockIgnore;
-import org.powermock.core.classloader.annotations.PrepareForTest;
-import org.powermock.modules.junit4.PowerMockRunner;
-
-import java.io.File;
-import java.io.IOException;
-import java.net.MalformedURLException;
-import java.net.URISyntaxException;
-import java.net.URL;
-import java.net.URLClassLoader;
-import java.sql.Connection;
-import java.sql.DriverManager;
-import java.sql.PreparedStatement;
-import java.sql.ResultSet;
-import java.sql.Statement;
-
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.METRICS_SITE_CONFIGURATION_FILE;
-import static org.easymock.EasyMock.anyObject;
-import static org.easymock.EasyMock.anyString;
-import static org.easymock.EasyMock.createNiceMock;
-import static org.easymock.EasyMock.expect;
-import static org.easymock.EasyMock.expectLastCall;
-import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertNotNull;
-import static org.junit.Assert.fail;
-import static org.powermock.api.easymock.PowerMock.expectNew;
-import static org.powermock.api.easymock.PowerMock.mockStatic;
-import static org.powermock.api.easymock.PowerMock.replayAll;
-import static org.powermock.api.easymock.PowerMock.verifyAll;
-import static org.powermock.api.support.membermodification.MemberMatcher.method;
-import static org.powermock.api.support.membermodification.MemberModifier.suppress;
-
-@RunWith(PowerMockRunner.class)
-@PrepareForTest({ PhoenixHBaseAccessor.class, HBaseTimelineMetricsService.class, UserGroupInformation.class,
-  ClientCnxn.class, DefaultPhoenixDataSource.class, ConnectionFactory.class,
-  TimelineMetricConfiguration.class, ApplicationHistoryServer.class })
-@PowerMockIgnore( {"javax.management.*"})
-public class TestApplicationHistoryServer {
-
-  ApplicationHistoryServer historyServer = null;
-  Configuration metricsConf = null;
-
-  @Rule
-  public TemporaryFolder folder = new TemporaryFolder();
-
-  @Before
-  @SuppressWarnings("all")
-  public void setup() throws URISyntaxException, IOException {
-    folder.create();
-    File hbaseSite = folder.newFile("hbase-site.xml");
-    File amsSite = folder.newFile("ams-site.xml");
-
-    FileUtils.writeStringToFile(hbaseSite, "<configuration>\n" +
-      "  <property>\n" +
-      "    <name>hbase.defaults.for.version.skip</name>\n" +
-      "    <value>true</value>\n" +
-      "  </property>" +
-      "  <property> " +
-      "    <name>hbase.zookeeper.quorum</name>\n" +
-      "    <value>localhost</value>\n" +
-      "  </property>" +
-      "</configuration>");
-
-    FileUtils.writeStringToFile(amsSite, "<configuration>\n" +
-      "  <property>\n" +
-      "    <name>test</name>\n" +
-      "    <value>testReady</value>\n" +
-      "  </property>\n" +
-      "  <property>\n" +
-      "    <name>timeline.metrics.host.aggregator.hourly.disabled</name>\n" +
-      "    <value>true</value>\n" +
-      "    <description>\n" +
-      "      Disable host based hourly aggregations.\n" +
-      "    </description>\n" +
-      "  </property>\n" +
-      "  <property>\n" +
-      "    <name>timeline.metrics.host.aggregator.minute.disabled</name>\n" +
-      "    <value>true</value>\n" +
-      "    <description>\n" +
-      "      Disable host based minute aggregations.\n" +
-      "    </description>\n" +
-      "  </property>\n" +
-      "  <property>\n" +
-      "    <name>timeline.metrics.cluster.aggregator.hourly.disabled</name>\n" +
-      "    <value>true</value>\n" +
-      "    <description>\n" +
-      "      Disable cluster based hourly aggregations.\n" +
-      "    </description>\n" +
-      "  </property>\n" +
-      "  <property>\n" +
-      "    <name>timeline.metrics.cluster.aggregator.minute.disabled</name>\n" +
-      "    <value>true</value>\n" +
-      "    <description>\n" +
-      "      Disable cluster based minute aggregations.\n" +
-      "    </description>\n" +
-      "  </property>" +
-      "</configuration>");
-
-    ClassLoader currentClassLoader = Thread.currentThread().getContextClassLoader();
-
-    // Add the conf dir to the classpath
-    // Chain the current thread classloader
-    URLClassLoader urlClassLoader = null;
-    try {
-      urlClassLoader = new URLClassLoader(new URL[] {
-        folder.getRoot().toURI().toURL() }, currentClassLoader);
-    } catch (MalformedURLException e) {
-      e.printStackTrace();
-    }
-
-    Thread.currentThread().setContextClassLoader(urlClassLoader);
-    metricsConf = new Configuration(false);
-    metricsConf.addResource(Thread.currentThread().getContextClassLoader()
-      .getResource(METRICS_SITE_CONFIGURATION_FILE).toURI().toURL());
-    assertNotNull(metricsConf.get("test"));
-  }
-
-  // simple test init/start/stop ApplicationHistoryServer. Status should change.
-  @Ignore
-  @Test(timeout = 50000)
-  public void testStartStopServer() throws Exception {
-    Configuration config = new YarnConfiguration();
-    UserGroupInformation ugi =
-      UserGroupInformation.createUserForTesting("ambari", new String[] {"ambari"});
-
-    mockStatic(UserGroupInformation.class);
-    expect(UserGroupInformation.getCurrentUser()).andReturn(ugi).anyTimes();
-    expect(UserGroupInformation.isSecurityEnabled()).andReturn(false).anyTimes();
-    config.set(YarnConfiguration.APPLICATION_HISTORY_STORE,
-      "org.apache.hadoop.yarn.server.applicationhistoryservice.NullApplicationHistoryStore");
-    Configuration hbaseConf = new Configuration();
-    hbaseConf.set("hbase.zookeeper.quorum", "localhost");
-
-    TimelineMetricConfiguration metricConfiguration = PowerMock.createNiceMock(TimelineMetricConfiguration.class);
-    expectNew(TimelineMetricConfiguration.class).andReturn(metricConfiguration);
-    expect(metricConfiguration.getHbaseConf()).andReturn(hbaseConf);
-    Configuration metricsConf = new Configuration();
-    expect(metricConfiguration.getMetricsConf()).andReturn(metricsConf).anyTimes();
-    expect(metricConfiguration.isTimelineMetricsServiceWatcherDisabled()).andReturn(true);
-    expect(metricConfiguration.getTimelineMetricsServiceHandlerThreadCount()).andReturn(20).anyTimes();
-    expect(metricConfiguration.getWebappAddress()).andReturn("localhost:9990").anyTimes();
-    expect(metricConfiguration.getTimelineServiceRpcAddress()).andReturn("localhost:10299").anyTimes();
-    expect(metricConfiguration.getClusterZKQuorum()).andReturn("localhost").anyTimes();
-    expect(metricConfiguration.getClusterZKClientPort()).andReturn("2181").anyTimes();
-
-    Connection connection = createNiceMock(Connection.class);
-    Statement stmt = createNiceMock(Statement.class);
-    PreparedStatement preparedStatement = createNiceMock(PreparedStatement.class);
-    ResultSet rs = createNiceMock(ResultSet.class);
-    mockStatic(DriverManager.class);
-    expect(DriverManager.getConnection("jdbc:phoenix:localhost:2181:/ams-hbase-unsecure"))
-      .andReturn(connection).anyTimes();
-    expect(connection.createStatement()).andReturn(stmt).anyTimes();
-    expect(connection.prepareStatement(anyString())).andReturn(preparedStatement).anyTimes();
-    suppress(method(Statement.class, "executeUpdate", String.class));
-    expect(preparedStatement.executeQuery()).andReturn(rs).anyTimes();
-    expect(rs.next()).andReturn(false).anyTimes();
-    preparedStatement.close();
-    expectLastCall().anyTimes();
-    connection.close();
-    expectLastCall();
-
-    MetricCollectorHAController haControllerMock = PowerMock.createMock(MetricCollectorHAController.class);
-    expectNew(MetricCollectorHAController.class, metricConfiguration)
-      .andReturn(haControllerMock);
-
-    haControllerMock.initializeHAController();
-    expectLastCall().once();
-    expect(haControllerMock.isInitialized()).andReturn(false).anyTimes();
-
-    org.apache.hadoop.hbase.client.Connection conn = createNiceMock(org.apache.hadoop.hbase.client.Connection.class);
-    mockStatic(ConnectionFactory.class);
-    expect(ConnectionFactory.createConnection((Configuration) anyObject())).andReturn(conn);
-    expect(conn.getAdmin()).andReturn(null);
-
-    EasyMock.replay(connection, stmt, preparedStatement, rs);
-    replayAll();
-
-    historyServer = new ApplicationHistoryServer();
-    historyServer.init(config);
-
-    verifyAll();
-
-    assertEquals(STATE.INITED, historyServer.getServiceState());
-    assertEquals(4, historyServer.getServices().size());
-    ApplicationHistoryClientService historyService =
-      historyServer.getClientService();
-    assertNotNull(historyServer.getClientService());
-    assertEquals(STATE.INITED, historyService.getServiceState());
-
-    historyServer.start();
-    assertEquals(STATE.STARTED, historyServer.getServiceState());
-    assertEquals(STATE.STARTED, historyService.getServiceState());
-    historyServer.stop();
-    assertEquals(STATE.STOPPED, historyServer.getServiceState());
-  }
-
-  // test launch method
-  @Ignore
-  @Test(timeout = 60000)
-  public void testLaunch() throws Exception {
-
-    UserGroupInformation ugi =
-      UserGroupInformation.createUserForTesting("ambari", new String[]{"ambari"});
-    mockStatic(UserGroupInformation.class);
-    expect(UserGroupInformation.getCurrentUser()).andReturn(ugi).anyTimes();
-    expect(UserGroupInformation.isSecurityEnabled()).andReturn(false).anyTimes();
-
-    ExitUtil.disableSystemExit();
-    try {
-      historyServer = ApplicationHistoryServer.launchAppHistoryServer(new String[0]);
-    } catch (ExitUtil.ExitException e) {
-      assertEquals(0, e.status);
-      ExitUtil.resetFirstExitException();
-      fail();
-    }
-  }
-
-  @After
-  public void stop() {
-    if (historyServer != null) {
-      historyServer.stop();
-    }
-  }
-}
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestFileSystemApplicationHistoryStore.java b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestFileSystemApplicationHistoryStore.java
deleted file mode 100644
index 543c25b..0000000
--- a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestFileSystemApplicationHistoryStore.java
+++ /dev/null
@@ -1,233 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.yarn.server.applicationhistoryservice;
-
-import java.io.IOException;
-import java.net.URI;
-
-import junit.framework.Assert;
-
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.fs.FileSystem;
-import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.fs.RawLocalFileSystem;
-import org.apache.hadoop.yarn.api.records.ApplicationAttemptId;
-import org.apache.hadoop.yarn.api.records.ApplicationId;
-import org.apache.hadoop.yarn.api.records.ContainerId;
-import org.apache.hadoop.yarn.api.records.Priority;
-import org.apache.hadoop.yarn.conf.YarnConfiguration;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.records.ApplicationAttemptHistoryData;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.records.ApplicationHistoryData;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.records.ContainerHistoryData;
-import org.junit.After;
-import org.junit.Before;
-import org.junit.Test;
-
-public class TestFileSystemApplicationHistoryStore extends
-    ApplicationHistoryStoreTestUtils {
-
-  private FileSystem fs;
-  private Path fsWorkingPath;
-
-  @Before
-  public void setup() throws Exception {
-    fs = new RawLocalFileSystem();
-    Configuration conf = new Configuration();
-    fs.initialize(new URI("/"), conf);
-    fsWorkingPath = new Path("Test");
-    fs.delete(fsWorkingPath, true);
-    conf.set(YarnConfiguration.FS_APPLICATION_HISTORY_STORE_URI, fsWorkingPath.toString());
-    store = new FileSystemApplicationHistoryStore();
-    store.init(conf);
-    store.start();
-  }
-
-  @After
-  public void tearDown() throws Exception {
-    store.stop();
-    fs.delete(fsWorkingPath, true);
-    fs.close();
-  }
-
-  @Test
-  public void testReadWriteHistoryData() throws IOException {
-    testWriteHistoryData(5);
-    testReadHistoryData(5);
-  }
-
-  private void testWriteHistoryData(int num) throws IOException {
-    testWriteHistoryData(num, false, false);
-  }
-  
-  private void testWriteHistoryData(
-      int num, boolean missingContainer, boolean missingApplicationAttempt)
-          throws IOException {
-    // write application history data
-    for (int i = 1; i <= num; ++i) {
-      ApplicationId appId = ApplicationId.newInstance(0, i);
-      writeApplicationStartData(appId);
-
-      // write application attempt history data
-      for (int j = 1; j <= num; ++j) {
-        ApplicationAttemptId appAttemptId =
-            ApplicationAttemptId.newInstance(appId, j);
-        writeApplicationAttemptStartData(appAttemptId);
-
-        if (missingApplicationAttempt && j == num) {
-          continue;
-        }
-        // write container history data
-        for (int k = 1; k <= num; ++k) {
-          ContainerId containerId = ContainerId.newContainerId(appAttemptId, k);
-          writeContainerStartData(containerId);
-          if (missingContainer && k == num) {
-            continue;
-          }
-          writeContainerFinishData(containerId);
-        }
-        writeApplicationAttemptFinishData(appAttemptId);
-      }
-      writeApplicationFinishData(appId);
-    }
-  }
-
-  private void testReadHistoryData(int num) throws IOException {
-    testReadHistoryData(num, false, false);
-  }
-  
-  private void testReadHistoryData(
-      int num, boolean missingContainer, boolean missingApplicationAttempt)
-          throws IOException {
-    // read application history data
-    Assert.assertEquals(num, store.getAllApplications().size());
-    for (int i = 1; i <= num; ++i) {
-      ApplicationId appId = ApplicationId.newInstance(0, i);
-      ApplicationHistoryData appData = store.getApplication(appId);
-      Assert.assertNotNull(appData);
-      Assert.assertEquals(appId.toString(), appData.getApplicationName());
-      Assert.assertEquals(appId.toString(), appData.getDiagnosticsInfo());
-
-      // read application attempt history data
-      Assert.assertEquals(num, store.getApplicationAttempts(appId).size());
-      for (int j = 1; j <= num; ++j) {
-        ApplicationAttemptId appAttemptId =
-            ApplicationAttemptId.newInstance(appId, j);
-        ApplicationAttemptHistoryData attemptData =
-            store.getApplicationAttempt(appAttemptId);
-        Assert.assertNotNull(attemptData);
-        Assert.assertEquals(appAttemptId.toString(), attemptData.getHost());
-        
-        if (missingApplicationAttempt && j == num) {
-          Assert.assertNull(attemptData.getDiagnosticsInfo());
-          continue;
-        } else {
-          Assert.assertEquals(appAttemptId.toString(),
-              attemptData.getDiagnosticsInfo());
-        }
-
-        // read container history data
-        Assert.assertEquals(num, store.getContainers(appAttemptId).size());
-        for (int k = 1; k <= num; ++k) {
-          ContainerId containerId = ContainerId.newContainerId(appAttemptId, k);
-          ContainerHistoryData containerData = store.getContainer(containerId);
-          Assert.assertNotNull(containerData);
-          Assert.assertEquals(Priority.newInstance(containerId.getId()),
-            containerData.getPriority());
-          if (missingContainer && k == num) {
-            Assert.assertNull(containerData.getDiagnosticsInfo());
-          } else {
-            Assert.assertEquals(containerId.toString(),
-                containerData.getDiagnosticsInfo());
-          }
-        }
-        ContainerHistoryData masterContainer =
-            store.getAMContainer(appAttemptId);
-        Assert.assertNotNull(masterContainer);
-        Assert.assertEquals(ContainerId.newContainerId(appAttemptId, 1),
-          masterContainer.getContainerId());
-      }
-    }
-  }
-
-  @Test
-  public void testWriteAfterApplicationFinish() throws IOException {
-    ApplicationId appId = ApplicationId.newInstance(0, 1);
-    writeApplicationStartData(appId);
-    writeApplicationFinishData(appId);
-    // write application attempt history data
-    ApplicationAttemptId appAttemptId =
-        ApplicationAttemptId.newInstance(appId, 1);
-    try {
-      writeApplicationAttemptStartData(appAttemptId);
-      Assert.fail();
-    } catch (IOException e) {
-      Assert.assertTrue(e.getMessage().contains("is not opened"));
-    }
-    try {
-      writeApplicationAttemptFinishData(appAttemptId);
-      Assert.fail();
-    } catch (IOException e) {
-      Assert.assertTrue(e.getMessage().contains("is not opened"));
-    }
-    // write container history data
-    ContainerId containerId = ContainerId.newContainerId(appAttemptId, 1);
-    try {
-      writeContainerStartData(containerId);
-      Assert.fail();
-    } catch (IOException e) {
-      Assert.assertTrue(e.getMessage().contains("is not opened"));
-    }
-    try {
-      writeContainerFinishData(containerId);
-      Assert.fail();
-    } catch (IOException e) {
-      Assert.assertTrue(e.getMessage().contains("is not opened"));
-    }
-  }
-
-  @Test
-  public void testMassiveWriteContainerHistoryData() throws IOException {
-    long mb = 1024 * 1024;
-    long usedDiskBefore = fs.getContentSummary(fsWorkingPath).getLength() / mb;
-    ApplicationId appId = ApplicationId.newInstance(0, 1);
-    writeApplicationStartData(appId);
-    ApplicationAttemptId appAttemptId =
-        ApplicationAttemptId.newInstance(appId, 1);
-    for (int i = 1; i <= 1000; ++i) {
-      ContainerId containerId = ContainerId.newContainerId(appAttemptId, i);
-      writeContainerStartData(containerId);
-      writeContainerFinishData(containerId);
-    }
-    writeApplicationFinishData(appId);
-    long usedDiskAfter = fs.getContentSummary(fsWorkingPath).getLength() / mb;
-    Assert.assertTrue((usedDiskAfter - usedDiskBefore) < 20);
-  }
-
-  @Test
-  public void testMissingContainerHistoryData() throws IOException {
-    testWriteHistoryData(3, true, false);
-    testReadHistoryData(3, true, false);
-  }
-  
-  @Test
-  public void testMissingApplicationAttemptHistoryData() throws IOException {
-    testWriteHistoryData(3, false, true);
-    testReadHistoryData(3, false, true);
-  }
-}
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestMemoryApplicationHistoryStore.java b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestMemoryApplicationHistoryStore.java
deleted file mode 100644
index b4da01a..0000000
--- a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestMemoryApplicationHistoryStore.java
+++ /dev/null
@@ -1,206 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.yarn.server.applicationhistoryservice;
-
-import java.io.IOException;
-
-import junit.framework.Assert;
-
-import org.apache.hadoop.yarn.api.records.ApplicationAttemptId;
-import org.apache.hadoop.yarn.api.records.ApplicationId;
-import org.apache.hadoop.yarn.api.records.ContainerId;
-import org.apache.hadoop.yarn.api.records.Priority;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.records.ApplicationAttemptHistoryData;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.records.ApplicationHistoryData;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.records.ContainerHistoryData;
-import org.junit.Before;
-import org.junit.Ignore;
-import org.junit.Test;
-
-public class TestMemoryApplicationHistoryStore extends
-    ApplicationHistoryStoreTestUtils {
-
-  @Before
-  public void setup() {
-    store = new MemoryApplicationHistoryStore();
-  }
-
-  @Test
-  public void testReadWriteApplicationHistory() throws Exception {
-    // Out of order
-    ApplicationId appId = ApplicationId.newInstance(0, 1);
-    try {
-      writeApplicationFinishData(appId);
-      Assert.fail();
-    } catch (IOException e) {
-      Assert.assertTrue(e.getMessage().contains(
-        "is stored before the start information"));
-    }
-    // Normal
-    int numApps = 5;
-    for (int i = 1; i <= numApps; ++i) {
-      appId = ApplicationId.newInstance(0, i);
-      writeApplicationStartData(appId);
-      writeApplicationFinishData(appId);
-    }
-    Assert.assertEquals(numApps, store.getAllApplications().size());
-    for (int i = 1; i <= numApps; ++i) {
-      appId = ApplicationId.newInstance(0, i);
-      ApplicationHistoryData data = store.getApplication(appId);
-      Assert.assertNotNull(data);
-      Assert.assertEquals(appId.toString(), data.getApplicationName());
-      Assert.assertEquals(appId.toString(), data.getDiagnosticsInfo());
-    }
-    // Write again
-    appId = ApplicationId.newInstance(0, 1);
-    try {
-      writeApplicationStartData(appId);
-      Assert.fail();
-    } catch (IOException e) {
-      Assert.assertTrue(e.getMessage().contains("is already stored"));
-    }
-    try {
-      writeApplicationFinishData(appId);
-      Assert.fail();
-    } catch (IOException e) {
-      Assert.assertTrue(e.getMessage().contains("is already stored"));
-    }
-  }
-
-  @Test
-  public void testReadWriteApplicationAttemptHistory() throws Exception {
-    // Out of order
-    ApplicationId appId = ApplicationId.newInstance(0, 1);
-    ApplicationAttemptId appAttemptId =
-        ApplicationAttemptId.newInstance(appId, 1);
-    try {
-      writeApplicationAttemptFinishData(appAttemptId);
-      Assert.fail();
-    } catch (IOException e) {
-      Assert.assertTrue(e.getMessage().contains(
-        "is stored before the start information"));
-    }
-    // Normal
-    int numAppAttempts = 5;
-    writeApplicationStartData(appId);
-    for (int i = 1; i <= numAppAttempts; ++i) {
-      appAttemptId = ApplicationAttemptId.newInstance(appId, i);
-      writeApplicationAttemptStartData(appAttemptId);
-      writeApplicationAttemptFinishData(appAttemptId);
-    }
-    Assert.assertEquals(numAppAttempts, store.getApplicationAttempts(appId)
-      .size());
-    for (int i = 1; i <= numAppAttempts; ++i) {
-      appAttemptId = ApplicationAttemptId.newInstance(appId, i);
-      ApplicationAttemptHistoryData data =
-          store.getApplicationAttempt(appAttemptId);
-      Assert.assertNotNull(data);
-      Assert.assertEquals(appAttemptId.toString(), data.getHost());
-      Assert.assertEquals(appAttemptId.toString(), data.getDiagnosticsInfo());
-    }
-    writeApplicationFinishData(appId);
-    // Write again
-    appAttemptId = ApplicationAttemptId.newInstance(appId, 1);
-    try {
-      writeApplicationAttemptStartData(appAttemptId);
-      Assert.fail();
-    } catch (IOException e) {
-      Assert.assertTrue(e.getMessage().contains("is already stored"));
-    }
-    try {
-      writeApplicationAttemptFinishData(appAttemptId);
-      Assert.fail();
-    } catch (IOException e) {
-      Assert.assertTrue(e.getMessage().contains("is already stored"));
-    }
-  }
-
-  @Test
-  public void testReadWriteContainerHistory() throws Exception {
-    // Out of order
-    ApplicationId appId = ApplicationId.newInstance(0, 1);
-    ApplicationAttemptId appAttemptId =
-        ApplicationAttemptId.newInstance(appId, 1);
-    ContainerId containerId = ContainerId.newContainerId(appAttemptId, 1);
-    try {
-      writeContainerFinishData(containerId);
-      Assert.fail();
-    } catch (IOException e) {
-      Assert.assertTrue(e.getMessage().contains(
-        "is stored before the start information"));
-    }
-    // Normal
-    writeApplicationAttemptStartData(appAttemptId);
-    int numContainers = 5;
-    for (int i = 1; i <= numContainers; ++i) {
-      containerId = ContainerId.newContainerId(appAttemptId, i);
-      writeContainerStartData(containerId);
-      writeContainerFinishData(containerId);
-    }
-    Assert
-      .assertEquals(numContainers, store.getContainers(appAttemptId).size());
-    for (int i = 1; i <= numContainers; ++i) {
-      containerId = ContainerId.newContainerId(appAttemptId, i);
-      ContainerHistoryData data = store.getContainer(containerId);
-      Assert.assertNotNull(data);
-      Assert.assertEquals(Priority.newInstance(containerId.getId()),
-        data.getPriority());
-      Assert.assertEquals(containerId.toString(), data.getDiagnosticsInfo());
-    }
-    ContainerHistoryData masterContainer = store.getAMContainer(appAttemptId);
-    Assert.assertNotNull(masterContainer);
-    Assert.assertEquals(ContainerId.newContainerId(appAttemptId, 1),
-      masterContainer.getContainerId());
-    writeApplicationAttemptFinishData(appAttemptId);
-    // Write again
-    containerId = ContainerId.newContainerId(appAttemptId, 1);
-    try {
-      writeContainerStartData(containerId);
-      Assert.fail();
-    } catch (IOException e) {
-      Assert.assertTrue(e.getMessage().contains("is already stored"));
-    }
-    try {
-      writeContainerFinishData(containerId);
-      Assert.fail();
-    } catch (IOException e) {
-      Assert.assertTrue(e.getMessage().contains("is already stored"));
-    }
-  }
-
-  @Test
-  @Ignore
-  public void testMassiveWriteContainerHistory() throws IOException {
-    long mb = 1024 * 1024;
-    Runtime runtime = Runtime.getRuntime();
-    long usedMemoryBefore = (runtime.totalMemory() - runtime.freeMemory()) / mb;
-    int numContainers = 100000;
-    ApplicationId appId = ApplicationId.newInstance(0, 1);
-    ApplicationAttemptId appAttemptId =
-        ApplicationAttemptId.newInstance(appId, 1);
-    for (int i = 1; i <= numContainers; ++i) {
-      ContainerId containerId = ContainerId.newContainerId(appAttemptId, i);
-      writeContainerStartData(containerId);
-      writeContainerFinishData(containerId);
-    }
-    long usedMemoryAfter = (runtime.totalMemory() - runtime.freeMemory()) / mb;
-    Assert.assertTrue((usedMemoryAfter - usedMemoryBefore) < 200);
-  }
-
-}
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/AbstractMiniHBaseClusterTest.java b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/AbstractMiniHBaseClusterTest.java
index 741bb3c..c4cebd6 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/AbstractMiniHBaseClusterTest.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/AbstractMiniHBaseClusterTest.java
@@ -20,9 +20,9 @@ package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.OUT_OFF_BAND_DATA_TIME_ALLOWANCE;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.METRICS_RECORD_TABLE_NAME;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.UPSERT_METRICS_SQL;
+import static org.apache.phoenix.end2end.ParallelStatsDisabledIT.tearDownMiniCluster;
 import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
 import static org.assertj.core.api.Assertions.assertThat;
-import static org.powermock.api.easymock.PowerMock.mockStatic;
 
 import java.io.IOException;
 import java.sql.Connection;
@@ -40,8 +40,8 @@ import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.IntegrationTestingUtility;
+import org.apache.hadoop.hbase.client.Admin;
 import org.apache.hadoop.hbase.client.HBaseAdmin;
-import org.apache.hadoop.hbase.util.RetryCounterFactory;
 import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
 import org.apache.hadoop.metrics2.sink.timeline.TimelineMetrics;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.AggregatorUtils;
@@ -204,7 +204,7 @@ public abstract class AbstractMiniHBaseClusterTest extends BaseTest {
       new PhoenixHBaseAccessor(new TimelineMetricConfiguration(new Configuration(), metricsConf),
         new PhoenixConnectionProvider() {
           @Override
-          public HBaseAdmin getHBaseAdmin() throws IOException {
+          public Admin getHBaseAdmin() throws IOException {
             try {
               return driver.getConnectionQueryServices(null, null).getAdmin();
             } catch (SQLException e) {
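
The move from the concrete HBaseAdmin class to the Admin interface follows the
HBase 2.x client API, where table metadata is obtained from a Connection and
read through Admin and TableDescriptor instead of HTableDescriptor. A minimal
sketch of that style of lookup, assuming a reachable cluster; the class name
and the METRIC_RECORD table literal are illustrative:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptor;

    public class DescribeMetricTable {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          // Admin#getDescriptor replaces HBaseAdmin#getTableDescriptor(byte[]).
          TableDescriptor td = admin.getDescriptor(TableName.valueOf("METRIC_RECORD"));
          System.out.println("Durability:    " + td.getDurability());
          System.out.println("Normalization: " + td.isNormalizationEnabled());
        }
      }
    }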
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/ITPhoenixHBaseAccessor.java b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/ITPhoenixHBaseAccessor.java
index 57f9796..2a5dd0b 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/ITPhoenixHBaseAccessor.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/ITPhoenixHBaseAccessor.java
@@ -30,6 +30,7 @@ import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.ti
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.METRICS_AGGREGATE_MINUTE_TABLE_NAME;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.METRICS_RECORD_TABLE_NAME;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.PHOENIX_TABLES;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.PHOENIX_TABLES_REGEX_PATTERN;
 
 import java.io.IOException;
 import java.lang.reflect.Field;
@@ -43,12 +44,14 @@ import java.util.HashMap;
 import java.util.Iterator;
 import java.util.List;
 import java.util.Map;
+import java.util.Optional;
 
 import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.hbase.HColumnDescriptor;
-import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.Admin;
+import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
 import org.apache.hadoop.hbase.client.Durability;
-import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.TableDescriptor;
 import org.apache.hadoop.metrics2.sink.timeline.ContainerMetric;
 import org.apache.hadoop.metrics2.sink.timeline.MetricClusterAggregate;
 import org.apache.hadoop.metrics2.sink.timeline.MetricHostAggregate;
@@ -324,26 +327,13 @@ public class ITPhoenixHBaseAccessor extends AbstractMiniHBaseClusterTest {
 
   @Test
   public void testInitPoliciesAndTTL() throws Exception {
-    HBaseAdmin hBaseAdmin = hdb.getHBaseAdmin();
-    String precisionTtl = "";
-    // Verify policies are unset
-    for (String tableName : PHOENIX_TABLES) {
-      HTableDescriptor tableDescriptor = hBaseAdmin.getTableDescriptor(tableName.getBytes());
-      tableDescriptor.setNormalizationEnabled(true);
-      Assert.assertTrue("Normalizer enabled.", tableDescriptor.isNormalizationEnabled());
-
-      for (HColumnDescriptor family : tableDescriptor.getColumnFamilies()) {
-        if (tableName.equals(METRICS_RECORD_TABLE_NAME)) {
-          precisionTtl = family.getValue("TTL");
-        }
-      }
-      Assert.assertEquals("Precision TTL value.", "86400", precisionTtl);
-    }
+    Admin hBaseAdmin = hdb.getHBaseAdmin();
+    int precisionTtl = 2 * 86400;
 
     Field f = PhoenixHBaseAccessor.class.getDeclaredField("tableTTL");
     f.setAccessible(true);
-    Map<String, String> precisionValues = (Map<String, String>) f.get(hdb);
-    precisionValues.put(METRICS_RECORD_TABLE_NAME, String.valueOf(2 * 86400));
+    Map<String, Integer> precisionValues = (Map<String, Integer>) f.get(hdb);
+    precisionValues.put(METRICS_RECORD_TABLE_NAME, precisionTtl);
     f.set(hdb, precisionValues);
 
     Field f2 = PhoenixHBaseAccessor.class.getDeclaredField("timelineMetricsTablesDurability");
@@ -360,13 +350,18 @@ public class ITPhoenixHBaseAccessor extends AbstractMiniHBaseClusterTest {
     for (int i = 0; i < 10; i++) {
       LOG.warn("Policy check retry : " + i);
       for (String tableName : PHOENIX_TABLES) {
-        HTableDescriptor tableDescriptor = hBaseAdmin.getTableDescriptor(tableName.getBytes());
+        TableName[] tableNames = hBaseAdmin.listTableNames(PHOENIX_TABLES_REGEX_PATTERN, false);
+        Optional<TableName> tableNameOptional = Arrays.stream(tableNames)
+          .filter(t -> tableName.equals(t.getNameAsString())).findFirst();
+
+        TableDescriptor tableDescriptor = hBaseAdmin.getTableDescriptor(tableNameOptional.get());
+
         normalizerEnabled = tableDescriptor.isNormalizationEnabled();
         tableDurabilitySet = (Durability.ASYNC_WAL.equals(tableDescriptor.getDurability()));
         if (tableName.equals(METRICS_RECORD_TABLE_NAME)) {
-          precisionTableCompactionPolicy = tableDescriptor.getConfigurationValue(HSTORE_ENGINE_CLASS);
+          precisionTableCompactionPolicy = tableDescriptor.getValue(HSTORE_ENGINE_CLASS);
         } else {
-          aggregateTableCompactionPolicy = tableDescriptor.getConfigurationValue(HSTORE_COMPACTION_CLASS_KEY);
+          aggregateTableCompactionPolicy = tableDescriptor.getValue(HSTORE_COMPACTION_CLASS_KEY);
         }
         LOG.debug("Table: " + tableName + ", normalizerEnabled = " + normalizerEnabled);
         // Best effort for 20 seconds
@@ -374,8 +369,8 @@ public class ITPhoenixHBaseAccessor extends AbstractMiniHBaseClusterTest {
           Thread.sleep(20000l);
         }
         if (tableName.equals(METRICS_RECORD_TABLE_NAME)) {
-          for (HColumnDescriptor family : tableDescriptor.getColumnFamilies()) {
-            precisionTtl = family.getValue("TTL");
+          for (ColumnFamilyDescriptor family : tableDescriptor.getColumnFamilies()) {
+            precisionTtl = family.getTimeToLive();
           }
         }
       }
@@ -385,7 +380,7 @@ public class ITPhoenixHBaseAccessor extends AbstractMiniHBaseClusterTest {
     Assert.assertTrue("Durability Set.", tableDurabilitySet);
     Assert.assertEquals("FIFO compaction policy is set for METRIC_RECORD.", FIFO_COMPACTION_POLICY_CLASS, precisionTableCompactionPolicy);
     Assert.assertEquals("FIFO compaction policy is set for aggregate tables", DATE_TIERED_COMPACTION_POLICY, aggregateTableCompactionPolicy);
-    Assert.assertEquals("Precision TTL value not changed.", String.valueOf(2 * 86400), precisionTtl);
+    Assert.assertEquals("Precision TTL value as expected.", 2 * 86400, precisionTtl);
 
     hBaseAdmin.close();
   }
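
With this change, testInitPoliciesAndTTL reads every table property through
the HBase 2.x descriptor API: the TTL arrives as an int of seconds from
ColumnFamilyDescriptor.getTimeToLive(), and table-level keys move from
getConfigurationValue to getValue. A hedged sketch of that read path, with
an illustrative table name rather than anything taken from this patch:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptor;

    public class DescriptorReadSketch {
      // Print the same properties the test above asserts on.
      static void printPolicies(Admin admin) throws Exception {
        TableDescriptor td = admin.getDescriptor(TableName.valueOf("METRIC_RECORD"));
        // Table-level settings; getValue replaces the old getConfigurationValue.
        System.out.println("normalization = " + td.isNormalizationEnabled());
        System.out.println("durability    = " + td.getDurability());
        for (ColumnFamilyDescriptor family : td.getColumnFamilies()) {
          // TTL is an int (seconds) here, not the String the old API returned.
          System.out.println(family.getNameAsString() + " ttl = " + family.getTimeToLive());
        }
      }
    }
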
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/timeline/TestLeveldbTimelineStore.java b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/timeline/TestLeveldbTimelineStore.java
deleted file mode 100644
index 9b27309..0000000
--- a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/timeline/TestLeveldbTimelineStore.java
+++ /dev/null
@@ -1,253 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.hadoop.yarn.server.applicationhistoryservice.timeline;
-
-import java.io.File;
-import java.io.IOException;
-import java.util.Collections;
-import java.util.List;
-import java.util.Map;
-import java.util.Set;
-
-import org.apache.hadoop.classification.InterfaceAudience;
-import org.apache.hadoop.classification.InterfaceStability;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.fs.FileContext;
-import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.io.IOUtils;
-import org.apache.hadoop.yarn.api.records.timeline.TimelineEntities;
-import org.apache.hadoop.yarn.api.records.timeline.TimelineEntity;
-import org.apache.hadoop.yarn.api.records.timeline.TimelinePutResponse;
-import org.apache.hadoop.yarn.conf.YarnConfiguration;
-import org.iq80.leveldb.DBIterator;
-import org.junit.After;
-import org.junit.Before;
-import org.junit.Test;
-
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.timeline.GenericObjectMapper.writeReverseOrderedLong;
-import static org.junit.Assert.assertEquals;
-
-@InterfaceAudience.Private
-@InterfaceStability.Unstable
-public class TestLeveldbTimelineStore extends TimelineStoreTestUtils {
-  private FileContext fsContext;
-  private File fsPath;
-
-  @Before
-  public void setup() throws Exception {
-    fsContext = FileContext.getLocalFSFileContext();
-    Configuration conf = new Configuration();
-    fsPath = new File("target", this.getClass().getSimpleName() +
-        "-tmpDir").getAbsoluteFile();
-    fsContext.delete(new Path(fsPath.getAbsolutePath()), true);
-    conf.set(YarnConfiguration.TIMELINE_SERVICE_LEVELDB_PATH,
-        fsPath.getAbsolutePath());
-    conf.setBoolean(YarnConfiguration.TIMELINE_SERVICE_TTL_ENABLE, false);
-    store = new LeveldbTimelineStore();
-    store.init(conf);
-    store.start();
-    loadTestData();
-    loadVerificationData();
-  }
-
-  @After
-  public void tearDown() throws Exception {
-    store.stop();
-    fsContext.delete(new Path(fsPath.getAbsolutePath()), true);
-  }
-
-  @Test
-  public void testGetSingleEntity() throws IOException {
-    super.testGetSingleEntity();
-    ((LeveldbTimelineStore)store).clearStartTimeCache();
-    super.testGetSingleEntity();
-    loadTestData();
-  }
-
-  @Test
-  public void testGetEntities() throws IOException {
-    super.testGetEntities();
-  }
-
-  @Test
-  public void testGetEntitiesWithFromId() throws IOException {
-    super.testGetEntitiesWithFromId();
-  }
-
-  @Test
-  public void testGetEntitiesWithFromTs() throws IOException {
-    super.testGetEntitiesWithFromTs();
-  }
-
-  @Test
-  public void testGetEntitiesWithPrimaryFilters() throws IOException {
-    super.testGetEntitiesWithPrimaryFilters();
-  }
-
-  @Test
-  public void testGetEntitiesWithSecondaryFilters() throws IOException {
-    super.testGetEntitiesWithSecondaryFilters();
-  }
-
-  @Test
-  public void testGetEvents() throws IOException {
-    super.testGetEvents();
-  }
-
-  @Test
-  public void testCacheSizes() {
-    Configuration conf = new Configuration();
-    assertEquals(10000, LeveldbTimelineStore.getStartTimeReadCacheSize(conf));
-    assertEquals(10000, LeveldbTimelineStore.getStartTimeWriteCacheSize(conf));
-    conf.setInt(
-        YarnConfiguration.TIMELINE_SERVICE_LEVELDB_START_TIME_READ_CACHE_SIZE,
-        10001);
-    assertEquals(10001, LeveldbTimelineStore.getStartTimeReadCacheSize(conf));
-    conf = new Configuration();
-    conf.setInt(
-        YarnConfiguration.TIMELINE_SERVICE_LEVELDB_START_TIME_WRITE_CACHE_SIZE,
-        10002);
-    assertEquals(10002, LeveldbTimelineStore.getStartTimeWriteCacheSize(conf));
-  }
-
-  private boolean deleteNextEntity(String entityType, byte[] ts)
-      throws IOException, InterruptedException {
-    DBIterator iterator = null;
-    DBIterator pfIterator = null;
-    try {
-      iterator = ((LeveldbTimelineStore)store).getDbIterator(false);
-      pfIterator = ((LeveldbTimelineStore)store).getDbIterator(false);
-      return ((LeveldbTimelineStore)store).deleteNextEntity(entityType, ts,
-          iterator, pfIterator, false);
-    } finally {
-      IOUtils.cleanup(null, iterator, pfIterator);
-    }
-  }
-
-  @Test
-  public void testGetEntityTypes() throws IOException {
-    List<String> entityTypes = ((LeveldbTimelineStore)store).getEntityTypes();
-    assertEquals(4, entityTypes.size());
-    assertEquals(entityType1, entityTypes.get(0));
-    assertEquals(entityType2, entityTypes.get(1));
-    assertEquals(entityType4, entityTypes.get(2));
-    assertEquals(entityType5, entityTypes.get(3));
-  }
-
-  @Test
-  public void testDeleteEntities() throws IOException, InterruptedException {
-    assertEquals(2, getEntities("type_1").size());
-    assertEquals(1, getEntities("type_2").size());
-
-    assertEquals(false, deleteNextEntity(entityType1,
-        writeReverseOrderedLong(122l)));
-    assertEquals(2, getEntities("type_1").size());
-    assertEquals(1, getEntities("type_2").size());
-
-    assertEquals(true, deleteNextEntity(entityType1,
-        writeReverseOrderedLong(123l)));
-    List<TimelineEntity> entities = getEntities("type_2");
-    assertEquals(1, entities.size());
-    verifyEntityInfo(entityId2, entityType2, events2, Collections.singletonMap(
-        entityType1, Collections.singleton(entityId1b)), EMPTY_PRIMARY_FILTERS,
-        EMPTY_MAP, entities.get(0));
-    entities = getEntitiesWithPrimaryFilter("type_1", userFilter);
-    assertEquals(1, entities.size());
-    verifyEntityInfo(entityId1b, entityType1, events1, EMPTY_REL_ENTITIES,
-        primaryFilters, otherInfo, entities.get(0));
-
-    ((LeveldbTimelineStore)store).discardOldEntities(-123l);
-    assertEquals(1, getEntities("type_1").size());
-    assertEquals(0, getEntities("type_2").size());
-    assertEquals(3, ((LeveldbTimelineStore)store).getEntityTypes().size());
-
-    ((LeveldbTimelineStore)store).discardOldEntities(123l);
-    assertEquals(0, getEntities("type_1").size());
-    assertEquals(0, getEntities("type_2").size());
-    assertEquals(0, ((LeveldbTimelineStore)store).getEntityTypes().size());
-    assertEquals(0, getEntitiesWithPrimaryFilter("type_1", userFilter).size());
-  }
-
-  @Test
-  public void testDeleteEntitiesPrimaryFilters()
-      throws IOException, InterruptedException {
-    Map<String, Set<Object>> primaryFilter =
-        Collections.singletonMap("user", Collections.singleton(
-            (Object) "otheruser"));
-    TimelineEntities atsEntities = new TimelineEntities();
-    atsEntities.setEntities(Collections.singletonList(createEntity(entityId1b,
-        entityType1, 789l, Collections.singletonList(ev2), null, primaryFilter,
-        null)));
-    TimelinePutResponse response = store.put(atsEntities);
-    assertEquals(0, response.getErrors().size());
-
-    NameValuePair pfPair = new NameValuePair("user", "otheruser");
-    List<TimelineEntity> entities = getEntitiesWithPrimaryFilter("type_1",
-        pfPair);
-    assertEquals(1, entities.size());
-    verifyEntityInfo(entityId1b, entityType1, Collections.singletonList(ev2),
-        EMPTY_REL_ENTITIES, primaryFilter, EMPTY_MAP, entities.get(0));
-
-    entities = getEntitiesWithPrimaryFilter("type_1", userFilter);
-    assertEquals(2, entities.size());
-    verifyEntityInfo(entityId1, entityType1, events1, EMPTY_REL_ENTITIES,
-        primaryFilters, otherInfo, entities.get(0));
-    verifyEntityInfo(entityId1b, entityType1, events1, EMPTY_REL_ENTITIES,
-        primaryFilters, otherInfo, entities.get(1));
-
-    ((LeveldbTimelineStore)store).discardOldEntities(-123l);
-    assertEquals(1, getEntitiesWithPrimaryFilter("type_1", pfPair).size());
-    assertEquals(2, getEntitiesWithPrimaryFilter("type_1", userFilter).size());
-
-    ((LeveldbTimelineStore)store).discardOldEntities(123l);
-    assertEquals(0, getEntities("type_1").size());
-    assertEquals(0, getEntities("type_2").size());
-    assertEquals(0, ((LeveldbTimelineStore)store).getEntityTypes().size());
-
-    assertEquals(0, getEntitiesWithPrimaryFilter("type_1", pfPair).size());
-    assertEquals(0, getEntitiesWithPrimaryFilter("type_1", userFilter).size());
-  }
-
-  @Test
-  public void testFromTsWithDeletion()
-      throws IOException, InterruptedException {
-    long l = System.currentTimeMillis();
-    assertEquals(2, getEntitiesFromTs("type_1", l).size());
-    assertEquals(1, getEntitiesFromTs("type_2", l).size());
-    assertEquals(2, getEntitiesFromTsWithPrimaryFilter("type_1", userFilter,
-        l).size());
-    ((LeveldbTimelineStore)store).discardOldEntities(123l);
-    assertEquals(0, getEntitiesFromTs("type_1", l).size());
-    assertEquals(0, getEntitiesFromTs("type_2", l).size());
-    assertEquals(0, getEntitiesFromTsWithPrimaryFilter("type_1", userFilter,
-        l).size());
-    assertEquals(0, getEntities("type_1").size());
-    assertEquals(0, getEntities("type_2").size());
-    assertEquals(0, getEntitiesFromTsWithPrimaryFilter("type_1", userFilter,
-        l).size());
-    loadTestData();
-    assertEquals(0, getEntitiesFromTs("type_1", l).size());
-    assertEquals(0, getEntitiesFromTs("type_2", l).size());
-    assertEquals(0, getEntitiesFromTsWithPrimaryFilter("type_1", userFilter,
-        l).size());
-    assertEquals(2, getEntities("type_1").size());
-    assertEquals(1, getEntities("type_2").size());
-    assertEquals(2, getEntitiesWithPrimaryFilter("type_1", userFilter).size());
-  }
-
-}
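
The LevelDB-backed tests above drive deleteNextEntity with timestamps
encoded by GenericObjectMapper.writeReverseOrderedLong. The point of that
encoding is that LevelDB iterates keys in ascending unsigned byte order, so
longs are stored with their bits inverted to make newer (larger) timestamps
surface first. A conceptual sketch of such an encoding follows; it
illustrates the idea and is not necessarily the exact Hadoop implementation:

    public final class ReverseOrderedLongSketch {
      // Keep the sign bit of the top byte but invert every other bit; under
      // unsigned byte-wise comparison the result then sorts in descending
      // numeric order (positives first, then negatives, each descending).
      static byte[] writeReverseOrderedLong(long l) {
        byte[] b = new byte[8];
        b[0] = (byte) (0x7f ^ ((l >> 56) & 0xff));
        for (int i = 1; i < 8; i++) {
          b[i] = (byte) (0xff ^ ((l >> (8 * (7 - i))) & 0xff));
        }
        return b;
      }
    }
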
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/timeline/TestMemoryTimelineStore.java b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/timeline/TestMemoryTimelineStore.java
deleted file mode 100644
index 415de53..0000000
--- a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/timeline/TestMemoryTimelineStore.java
+++ /dev/null
@@ -1,83 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.yarn.server.applicationhistoryservice.timeline;
-
-import org.apache.hadoop.yarn.conf.YarnConfiguration;
-import org.junit.After;
-import org.junit.Before;
-import org.junit.Test;
-
-import java.io.IOException;
-
-public class TestMemoryTimelineStore extends TimelineStoreTestUtils {
-
-  @Before
-  public void setup() throws Exception {
-    store = new MemoryTimelineStore();
-    store.init(new YarnConfiguration());
-    store.start();
-    loadTestData();
-    loadVerificationData();
-  }
-
-  @After
-  public void tearDown() throws Exception {
-    store.stop();
-  }
-
-  public TimelineStore getTimelineStore() {
-    return store;
-  }
-
-  @Test
-  public void testGetSingleEntity() throws IOException {
-    super.testGetSingleEntity();
-  }
-
-  @Test
-  public void testGetEntities() throws IOException {
-    super.testGetEntities();
-  }
-
-  @Test
-  public void testGetEntitiesWithFromId() throws IOException {
-    super.testGetEntitiesWithFromId();
-  }
-
-  @Test
-  public void testGetEntitiesWithFromTs() throws IOException {
-    super.testGetEntitiesWithFromTs();
-  }
-
-  @Test
-  public void testGetEntitiesWithPrimaryFilters() throws IOException {
-    super.testGetEntitiesWithPrimaryFilters();
-  }
-
-  @Test
-  public void testGetEntitiesWithSecondaryFilters() throws IOException {
-    super.testGetEntitiesWithSecondaryFilters();
-  }
-
-  @Test
-  public void testGetEvents() throws IOException {
-    super.testGetEvents();
-  }
-
-}
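
Both deleted suites are thin shells over the shared TimelineStoreTestUtils
fixture below; only the store constructed in setup() differs. A hypothetical
third backend would plug into the same template, as in this sketch
(SomeOtherTimelineStore is an invented name, not a real class):

    public class TestSomeOtherTimelineStore extends TimelineStoreTestUtils {

      @Before
      public void setup() throws Exception {
        store = new SomeOtherTimelineStore(); // hypothetical implementation
        store.init(new YarnConfiguration());
        store.start();
        loadTestData();
        loadVerificationData();
      }

      @After
      public void tearDown() throws Exception {
        store.stop();
      }
    }
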
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/timeline/TimelineStoreTestUtils.java b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/timeline/TimelineStoreTestUtils.java
deleted file mode 100644
index d760536..0000000
--- a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/timeline/TimelineStoreTestUtils.java
+++ /dev/null
@@ -1,789 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.hadoop.yarn.server.applicationhistoryservice.timeline;
-
-import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertNull;
-import static org.junit.Assert.assertTrue;
-
-import java.io.IOException;
-import java.util.ArrayList;
-import java.util.Collection;
-import java.util.Collections;
-import java.util.EnumSet;
-import java.util.HashMap;
-import java.util.HashSet;
-import java.util.List;
-import java.util.Map;
-import java.util.Map.Entry;
-import java.util.Set;
-import java.util.SortedSet;
-import java.util.TreeSet;
-
-import org.apache.hadoop.yarn.api.records.timeline.TimelineEntities;
-import org.apache.hadoop.yarn.api.records.timeline.TimelineEntity;
-import org.apache.hadoop.yarn.api.records.timeline.TimelineEvent;
-import org.apache.hadoop.yarn.api.records.timeline.TimelinePutResponse;
-import org.apache.hadoop.yarn.api.records.timeline.TimelineEvents.EventsOfOneEntity;
-import org.apache.hadoop.yarn.api.records.timeline.TimelinePutResponse.TimelinePutError;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.timeline.TimelineReader.Field;
-
-public class TimelineStoreTestUtils {
-
-  protected static final List<TimelineEvent> EMPTY_EVENTS =
-      Collections.emptyList();
-  protected static final Map<String, Object> EMPTY_MAP =
-      Collections.emptyMap();
-  protected static final Map<String, Set<Object>> EMPTY_PRIMARY_FILTERS =
-      Collections.emptyMap();
-  protected static final Map<String, Set<String>> EMPTY_REL_ENTITIES =
-      Collections.emptyMap();
-
-  protected TimelineStore store;
-  protected String entityId1;
-  protected String entityType1;
-  protected String entityId1b;
-  protected String entityId2;
-  protected String entityType2;
-  protected String entityId4;
-  protected String entityType4;
-  protected String entityId5;
-  protected String entityType5;
-  protected Map<String, Set<Object>> primaryFilters;
-  protected Map<String, Object> secondaryFilters;
-  protected Map<String, Object> allFilters;
-  protected Map<String, Object> otherInfo;
-  protected Map<String, Set<String>> relEntityMap;
-  protected Map<String, Set<String>> relEntityMap2;
-  protected NameValuePair userFilter;
-  protected NameValuePair numericFilter1;
-  protected NameValuePair numericFilter2;
-  protected NameValuePair numericFilter3;
-  protected Collection<NameValuePair> goodTestingFilters;
-  protected Collection<NameValuePair> badTestingFilters;
-  protected TimelineEvent ev1;
-  protected TimelineEvent ev2;
-  protected TimelineEvent ev3;
-  protected TimelineEvent ev4;
-  protected Map<String, Object> eventInfo;
-  protected List<TimelineEvent> events1;
-  protected List<TimelineEvent> events2;
-  protected long beforeTs;
-
-  /**
-   * Load test data into the given store
-   */
-  protected void loadTestData() throws IOException {
-    beforeTs = System.currentTimeMillis()-1;
-    TimelineEntities entities = new TimelineEntities();
-    Map<String, Set<Object>> primaryFilters =
-        new HashMap<String, Set<Object>>();
-    Set<Object> l1 = new HashSet<Object>();
-    l1.add("username");
-    Set<Object> l2 = new HashSet<Object>();
-    l2.add((long)Integer.MAX_VALUE);
-    Set<Object> l3 = new HashSet<Object>();
-    l3.add("123abc");
-    Set<Object> l4 = new HashSet<Object>();
-    l4.add((long)Integer.MAX_VALUE + 1l);
-    primaryFilters.put("user", l1);
-    primaryFilters.put("appname", l2);
-    primaryFilters.put("other", l3);
-    primaryFilters.put("long", l4);
-    Map<String, Object> secondaryFilters = new HashMap<String, Object>();
-    secondaryFilters.put("startTime", 123456l);
-    secondaryFilters.put("status", "RUNNING");
-    Map<String, Object> otherInfo1 = new HashMap<String, Object>();
-    otherInfo1.put("info1", "val1");
-    otherInfo1.putAll(secondaryFilters);
-
-    String entityId1 = "id_1";
-    String entityType1 = "type_1";
-    String entityId1b = "id_2";
-    String entityId2 = "id_2";
-    String entityType2 = "type_2";
-    String entityId4 = "id_4";
-    String entityType4 = "type_4";
-    String entityId5 = "id_5";
-    String entityType5 = "type_5";
-
-    Map<String, Set<String>> relatedEntities =
-        new HashMap<String, Set<String>>();
-    relatedEntities.put(entityType2, Collections.singleton(entityId2));
-
-    TimelineEvent ev3 = createEvent(789l, "launch_event", null);
-    TimelineEvent ev4 = createEvent(-123l, "init_event", null);
-    List<TimelineEvent> events = new ArrayList<TimelineEvent>();
-    events.add(ev3);
-    events.add(ev4);
-    entities.setEntities(Collections.singletonList(createEntity(entityId2,
-        entityType2, null, events, null, null, null)));
-    TimelinePutResponse response = store.put(entities);
-    assertEquals(0, response.getErrors().size());
-
-    TimelineEvent ev1 = createEvent(123l, "start_event", null);
-    entities.setEntities(Collections.singletonList(createEntity(entityId1,
-        entityType1, 123l, Collections.singletonList(ev1),
-        relatedEntities, primaryFilters, otherInfo1)));
-    response = store.put(entities);
-    assertEquals(0, response.getErrors().size());
-    entities.setEntities(Collections.singletonList(createEntity(entityId1b,
-        entityType1, null, Collections.singletonList(ev1), relatedEntities,
-        primaryFilters, otherInfo1)));
-    response = store.put(entities);
-    assertEquals(0, response.getErrors().size());
-
-    Map<String, Object> eventInfo = new HashMap<String, Object>();
-    eventInfo.put("event info 1", "val1");
-    TimelineEvent ev2 = createEvent(456l, "end_event", eventInfo);
-    Map<String, Object> otherInfo2 = new HashMap<String, Object>();
-    otherInfo2.put("info2", "val2");
-    entities.setEntities(Collections.singletonList(createEntity(entityId1,
-        entityType1, null, Collections.singletonList(ev2), null,
-        primaryFilters, otherInfo2)));
-    response = store.put(entities);
-    assertEquals(0, response.getErrors().size());
-    entities.setEntities(Collections.singletonList(createEntity(entityId1b,
-        entityType1, 789l, Collections.singletonList(ev2), null,
-        primaryFilters, otherInfo2)));
-    response = store.put(entities);
-    assertEquals(0, response.getErrors().size());
-
-    entities.setEntities(Collections.singletonList(createEntity(
-        "badentityid", "badentity", null, null, null, null, otherInfo1)));
-    response = store.put(entities);
-    assertEquals(1, response.getErrors().size());
-    TimelinePutError error = response.getErrors().get(0);
-    assertEquals("badentityid", error.getEntityId());
-    assertEquals("badentity", error.getEntityType());
-    assertEquals(TimelinePutError.NO_START_TIME, error.getErrorCode());
-
-    relatedEntities.clear();
-    relatedEntities.put(entityType5, Collections.singleton(entityId5));
-    entities.setEntities(Collections.singletonList(createEntity(entityId4,
-        entityType4, 42l, null, relatedEntities, null, null)));
-    response = store.put(entities);
-    assertEquals(0, response.getErrors().size());
-  }
-
-  /**
-   * Load verification data
-   */
-  protected void loadVerificationData() throws Exception {
-    userFilter = new NameValuePair("user", "username");
-    numericFilter1 = new NameValuePair("appname", Integer.MAX_VALUE);
-    numericFilter2 = new NameValuePair("long", (long)Integer.MAX_VALUE + 1l);
-    numericFilter3 = new NameValuePair("other", "123abc");
-    goodTestingFilters = new ArrayList<NameValuePair>();
-    goodTestingFilters.add(new NameValuePair("appname", Integer.MAX_VALUE));
-    goodTestingFilters.add(new NameValuePair("status", "RUNNING"));
-    badTestingFilters = new ArrayList<NameValuePair>();
-    badTestingFilters.add(new NameValuePair("appname", Integer.MAX_VALUE));
-    badTestingFilters.add(new NameValuePair("status", "FINISHED"));
-
-    primaryFilters = new HashMap<String, Set<Object>>();
-    Set<Object> l1 = new HashSet<Object>();
-    l1.add("username");
-    Set<Object> l2 = new HashSet<Object>();
-    l2.add(Integer.MAX_VALUE);
-    Set<Object> l3 = new HashSet<Object>();
-    l3.add("123abc");
-    Set<Object> l4 = new HashSet<Object>();
-    l4.add((long)Integer.MAX_VALUE + 1l);
-    primaryFilters.put("user", l1);
-    primaryFilters.put("appname", l2);
-    primaryFilters.put("other", l3);
-    primaryFilters.put("long", l4);
-    secondaryFilters = new HashMap<String, Object>();
-    secondaryFilters.put("startTime", 123456);
-    secondaryFilters.put("status", "RUNNING");
-    allFilters = new HashMap<String, Object>();
-    allFilters.putAll(secondaryFilters);
-    for (Entry<String, Set<Object>> pf : primaryFilters.entrySet()) {
-      for (Object o : pf.getValue()) {
-        allFilters.put(pf.getKey(), o);
-      }
-    }
-    otherInfo = new HashMap<String, Object>();
-    otherInfo.put("info1", "val1");
-    otherInfo.put("info2", "val2");
-    otherInfo.putAll(secondaryFilters);
-
-    entityId1 = "id_1";
-    entityType1 = "type_1";
-    entityId1b = "id_2";
-    entityId2 = "id_2";
-    entityType2 = "type_2";
-    entityId4 = "id_4";
-    entityType4 = "type_4";
-    entityId5 = "id_5";
-    entityType5 = "type_5";
-
-    ev1 = createEvent(123l, "start_event", null);
-
-    eventInfo = new HashMap<String, Object>();
-    eventInfo.put("event info 1", "val1");
-    ev2 = createEvent(456l, "end_event", eventInfo);
-    events1 = new ArrayList<TimelineEvent>();
-    events1.add(ev2);
-    events1.add(ev1);
-
-    relEntityMap =
-        new HashMap<String, Set<String>>();
-    Set<String> ids = new HashSet<String>();
-    ids.add(entityId1);
-    ids.add(entityId1b);
-    relEntityMap.put(entityType1, ids);
-
-    relEntityMap2 =
-        new HashMap<String, Set<String>>();
-    relEntityMap2.put(entityType4, Collections.singleton(entityId4));
-
-    ev3 = createEvent(789l, "launch_event", null);
-    ev4 = createEvent(-123l, "init_event", null);
-    events2 = new ArrayList<TimelineEvent>();
-    events2.add(ev3);
-    events2.add(ev4);
-  }
-
-  public void testGetSingleEntity() throws IOException {
-    // test getting entity info
-    verifyEntityInfo(null, null, null, null, null, null,
-        store.getEntity("id_1", "type_2", EnumSet.allOf(Field.class)));
-
-    verifyEntityInfo(entityId1, entityType1, events1, EMPTY_REL_ENTITIES,
-        primaryFilters, otherInfo, 123l, store.getEntity(entityId1,
-        entityType1, EnumSet.allOf(Field.class)));
-
-    verifyEntityInfo(entityId1b, entityType1, events1, EMPTY_REL_ENTITIES,
-        primaryFilters, otherInfo, 123l, store.getEntity(entityId1b,
-        entityType1, EnumSet.allOf(Field.class)));
-
-    verifyEntityInfo(entityId2, entityType2, events2, relEntityMap,
-        EMPTY_PRIMARY_FILTERS, EMPTY_MAP, -123l, store.getEntity(entityId2,
-        entityType2, EnumSet.allOf(Field.class)));
-
-    verifyEntityInfo(entityId4, entityType4, EMPTY_EVENTS, EMPTY_REL_ENTITIES,
-        EMPTY_PRIMARY_FILTERS, EMPTY_MAP, 42l, store.getEntity(entityId4,
-        entityType4, EnumSet.allOf(Field.class)));
-
-    verifyEntityInfo(entityId5, entityType5, EMPTY_EVENTS, relEntityMap2,
-        EMPTY_PRIMARY_FILTERS, EMPTY_MAP, 42l, store.getEntity(entityId5,
-        entityType5, EnumSet.allOf(Field.class)));
-
-    // test getting single fields
-    verifyEntityInfo(entityId1, entityType1, events1, null, null, null,
-        store.getEntity(entityId1, entityType1, EnumSet.of(Field.EVENTS)));
-
-    verifyEntityInfo(entityId1, entityType1, Collections.singletonList(ev2),
-        null, null, null, store.getEntity(entityId1, entityType1,
-        EnumSet.of(Field.LAST_EVENT_ONLY)));
-
-    verifyEntityInfo(entityId1b, entityType1, events1, EMPTY_REL_ENTITIES,
-        primaryFilters, otherInfo, store.getEntity(entityId1b, entityType1,
-        null));
-
-    verifyEntityInfo(entityId1, entityType1, null, null, primaryFilters, null,
-        store.getEntity(entityId1, entityType1,
-            EnumSet.of(Field.PRIMARY_FILTERS)));
-
-    verifyEntityInfo(entityId1, entityType1, null, null, null, otherInfo,
-        store.getEntity(entityId1, entityType1, EnumSet.of(Field.OTHER_INFO)));
-
-    verifyEntityInfo(entityId2, entityType2, null, relEntityMap, null, null,
-        store.getEntity(entityId2, entityType2,
-            EnumSet.of(Field.RELATED_ENTITIES)));
-  }
-
-  protected List<TimelineEntity> getEntities(String entityType)
-      throws IOException {
-    return store.getEntities(entityType, null, null, null, null, null,
-        null, null, null).getEntities();
-  }
-
-  protected List<TimelineEntity> getEntitiesWithPrimaryFilter(
-      String entityType, NameValuePair primaryFilter) throws IOException {
-    return store.getEntities(entityType, null, null, null, null, null,
-        primaryFilter, null, null).getEntities();
-  }
-
-  protected List<TimelineEntity> getEntitiesFromId(String entityType,
-      String fromId) throws IOException {
-    return store.getEntities(entityType, null, null, null, fromId, null,
-        null, null, null).getEntities();
-  }
-
-  protected List<TimelineEntity> getEntitiesFromTs(String entityType,
-      long fromTs) throws IOException {
-    return store.getEntities(entityType, null, null, null, null, fromTs,
-        null, null, null).getEntities();
-  }
-
-  protected List<TimelineEntity> getEntitiesFromIdWithPrimaryFilter(
-      String entityType, NameValuePair primaryFilter, String fromId)
-      throws IOException {
-    return store.getEntities(entityType, null, null, null, fromId, null,
-        primaryFilter, null, null).getEntities();
-  }
-
-  protected List<TimelineEntity> getEntitiesFromTsWithPrimaryFilter(
-      String entityType, NameValuePair primaryFilter, long fromTs)
-      throws IOException {
-    return store.getEntities(entityType, null, null, null, null, fromTs,
-        primaryFilter, null, null).getEntities();
-  }
-
-  protected List<TimelineEntity> getEntitiesFromIdWithWindow(String entityType,
-      Long windowEnd, String fromId) throws IOException {
-    return store.getEntities(entityType, null, null, windowEnd, fromId, null,
-        null, null, null).getEntities();
-  }
-
-  protected List<TimelineEntity> getEntitiesFromIdWithPrimaryFilterAndWindow(
-      String entityType, Long windowEnd, String fromId,
-      NameValuePair primaryFilter) throws IOException {
-    return store.getEntities(entityType, null, null, windowEnd, fromId, null,
-        primaryFilter, null, null).getEntities();
-  }
-
-  protected List<TimelineEntity> getEntitiesWithFilters(String entityType,
-      NameValuePair primaryFilter, Collection<NameValuePair> secondaryFilters)
-      throws IOException {
-    return store.getEntities(entityType, null, null, null, null, null,
-        primaryFilter, secondaryFilters, null).getEntities();
-  }
-
-  protected List<TimelineEntity> getEntities(String entityType, Long limit,
-      Long windowStart, Long windowEnd, NameValuePair primaryFilter,
-      EnumSet<Field> fields) throws IOException {
-    return store.getEntities(entityType, limit, windowStart, windowEnd, null,
-        null, primaryFilter, null, fields).getEntities();
-  }
-
-  public void testGetEntities() throws IOException {
-    // test getting entities
-    assertEquals("nonzero entities size for nonexistent type", 0,
-        getEntities("type_0").size());
-    assertEquals("nonzero entities size for nonexistent type", 0,
-        getEntities("type_3").size());
-    assertEquals("nonzero entities size for nonexistent type", 0,
-        getEntities("type_6").size());
-    assertEquals("nonzero entities size for nonexistent type", 0,
-        getEntitiesWithPrimaryFilter("type_0", userFilter).size());
-    assertEquals("nonzero entities size for nonexistent type", 0,
-        getEntitiesWithPrimaryFilter("type_3", userFilter).size());
-    assertEquals("nonzero entities size for nonexistent type", 0,
-        getEntitiesWithPrimaryFilter("type_6", userFilter).size());
-
-    List<TimelineEntity> entities = getEntities("type_1");
-    assertEquals(2, entities.size());
-    verifyEntityInfo(entityId1, entityType1, events1, EMPTY_REL_ENTITIES,
-        primaryFilters, otherInfo, entities.get(0));
-    verifyEntityInfo(entityId1b, entityType1, events1, EMPTY_REL_ENTITIES,
-        primaryFilters, otherInfo, entities.get(1));
-
-    entities = getEntities("type_2");
-    assertEquals(1, entities.size());
-    verifyEntityInfo(entityId2, entityType2, events2, relEntityMap,
-        EMPTY_PRIMARY_FILTERS, EMPTY_MAP, entities.get(0));
-
-    entities = getEntities("type_1", 1l, null, null, null,
-        EnumSet.allOf(Field.class));
-    assertEquals(1, entities.size());
-    verifyEntityInfo(entityId1, entityType1, events1, EMPTY_REL_ENTITIES,
-        primaryFilters, otherInfo, entities.get(0));
-
-    entities = getEntities("type_1", 1l, 0l, null, null,
-        EnumSet.allOf(Field.class));
-    assertEquals(1, entities.size());
-    verifyEntityInfo(entityId1, entityType1, events1, EMPTY_REL_ENTITIES,
-        primaryFilters, otherInfo, entities.get(0));
-
-    entities = getEntities("type_1", null, 234l, null, null,
-        EnumSet.allOf(Field.class));
-    assertEquals(0, entities.size());
-
-    entities = getEntities("type_1", null, 123l, null, null,
-        EnumSet.allOf(Field.class));
-    assertEquals(0, entities.size());
-
-    entities = getEntities("type_1", null, 234l, 345l, null,
-        EnumSet.allOf(Field.class));
-    assertEquals(0, entities.size());
-
-    entities = getEntities("type_1", null, null, 345l, null,
-        EnumSet.allOf(Field.class));
-    assertEquals(2, entities.size());
-    verifyEntityInfo(entityId1, entityType1, events1, EMPTY_REL_ENTITIES,
-        primaryFilters, otherInfo, entities.get(0));
-    verifyEntityInfo(entityId1b, entityType1, events1, EMPTY_REL_ENTITIES,
-        primaryFilters, otherInfo, entities.get(1));
-
-    entities = getEntities("type_1", null, null, 123l, null,
-        EnumSet.allOf(Field.class));
-    assertEquals(2, entities.size());
-    verifyEntityInfo(entityId1, entityType1, events1, EMPTY_REL_ENTITIES,
-        primaryFilters, otherInfo, entities.get(0));
-    verifyEntityInfo(entityId1b, entityType1, events1, EMPTY_REL_ENTITIES,
-        primaryFilters, otherInfo, entities.get(1));
-  }
-
-  public void testGetEntitiesWithFromId() throws IOException {
-    List<TimelineEntity> entities = getEntitiesFromId("type_1", entityId1);
-    assertEquals(2, entities.size());
-    verifyEntityInfo(entityId1, entityType1, events1, EMPTY_REL_ENTITIES,
-        primaryFilters, otherInfo, entities.get(0));
-    verifyEntityInfo(entityId1b, entityType1, events1, EMPTY_REL_ENTITIES,
-        primaryFilters, otherInfo, entities.get(1));
-
-    entities = getEntitiesFromId("type_1", entityId1b);
-    assertEquals(1, entities.size());
-    verifyEntityInfo(entityId1b, entityType1, events1, EMPTY_REL_ENTITIES,
-        primaryFilters, otherInfo, entities.get(0));
-
-    entities = getEntitiesFromIdWithWindow("type_1", 0l, entityId1);
-    assertEquals(0, entities.size());
-
-    entities = getEntitiesFromId("type_2", "a");
-    assertEquals(0, entities.size());
-
-    entities = getEntitiesFromId("type_2", entityId2);
-    assertEquals(1, entities.size());
-    verifyEntityInfo(entityId2, entityType2, events2, relEntityMap,
-        EMPTY_PRIMARY_FILTERS, EMPTY_MAP, entities.get(0));
-
-    entities = getEntitiesFromIdWithWindow("type_2", -456l, null);
-    assertEquals(0, entities.size());
-
-    entities = getEntitiesFromIdWithWindow("type_2", -456l, "a");
-    assertEquals(0, entities.size());
-
-    entities = getEntitiesFromIdWithWindow("type_2", 0l, null);
-    assertEquals(1, entities.size());
-
-    entities = getEntitiesFromIdWithWindow("type_2", 0l, entityId2);
-    assertEquals(1, entities.size());
-
-    // same tests with primary filters
-    entities = getEntitiesFromIdWithPrimaryFilter("type_1", userFilter,
-        entityId1);
-    assertEquals(2, entities.size());
-    verifyEntityInfo(entityId1, entityType1, events1, EMPTY_REL_ENTITIES,
-        primaryFilters, otherInfo, entities.get(0));
-    verifyEntityInfo(entityId1b, entityType1, events1, EMPTY_REL_ENTITIES,
-        primaryFilters, otherInfo, entities.get(1));
-
-    entities = getEntitiesFromIdWithPrimaryFilter("type_1", userFilter,
-        entityId1b);
-    assertEquals(1, entities.size());
-    verifyEntityInfo(entityId1b, entityType1, events1, EMPTY_REL_ENTITIES,
-        primaryFilters, otherInfo, entities.get(0));
-
-    entities = getEntitiesFromIdWithPrimaryFilterAndWindow("type_1", 0l,
-        entityId1, userFilter);
-    assertEquals(0, entities.size());
-
-    entities = getEntitiesFromIdWithPrimaryFilter("type_2", userFilter, "a");
-    assertEquals(0, entities.size());
-  }
-
-  public void testGetEntitiesWithFromTs() throws IOException {
-    assertEquals(0, getEntitiesFromTs("type_1", beforeTs).size());
-    assertEquals(0, getEntitiesFromTs("type_2", beforeTs).size());
-    assertEquals(0, getEntitiesFromTsWithPrimaryFilter("type_1", userFilter,
-        beforeTs).size());
-    long afterTs = System.currentTimeMillis();
-    assertEquals(2, getEntitiesFromTs("type_1", afterTs).size());
-    assertEquals(1, getEntitiesFromTs("type_2", afterTs).size());
-    assertEquals(2, getEntitiesFromTsWithPrimaryFilter("type_1", userFilter,
-        afterTs).size());
-    assertEquals(2, getEntities("type_1").size());
-    assertEquals(1, getEntities("type_2").size());
-    assertEquals(2, getEntitiesWithPrimaryFilter("type_1", userFilter).size());
-    // check insert time is not overwritten
-    long beforeTs = this.beforeTs;
-    loadTestData();
-    assertEquals(0, getEntitiesFromTs("type_1", beforeTs).size());
-    assertEquals(0, getEntitiesFromTs("type_2", beforeTs).size());
-    assertEquals(0, getEntitiesFromTsWithPrimaryFilter("type_1", userFilter,
-        beforeTs).size());
-    assertEquals(2, getEntitiesFromTs("type_1", afterTs).size());
-    assertEquals(1, getEntitiesFromTs("type_2", afterTs).size());
-    assertEquals(2, getEntitiesFromTsWithPrimaryFilter("type_1", userFilter,
-        afterTs).size());
-  }
-
-  public void testGetEntitiesWithPrimaryFilters() throws IOException {
-    // test using primary filter
-    assertEquals("nonzero entities size for primary filter", 0,
-        getEntitiesWithPrimaryFilter("type_1",
-            new NameValuePair("none", "none")).size());
-    assertEquals("nonzero entities size for primary filter", 0,
-        getEntitiesWithPrimaryFilter("type_2",
-            new NameValuePair("none", "none")).size());
-    assertEquals("nonzero entities size for primary filter", 0,
-        getEntitiesWithPrimaryFilter("type_3",
-            new NameValuePair("none", "none")).size());
-
-    List<TimelineEntity> entities = getEntitiesWithPrimaryFilter("type_1",
-        userFilter);
-    assertEquals(2, entities.size());
-    verifyEntityInfo(entityId1, entityType1, events1, EMPTY_REL_ENTITIES,
-        primaryFilters, otherInfo, entities.get(0));
-    verifyEntityInfo(entityId1b, entityType1, events1, EMPTY_REL_ENTITIES,
-        primaryFilters, otherInfo, entities.get(1));
-
-    entities = getEntitiesWithPrimaryFilter("type_1", numericFilter1);
-    assertEquals(2, entities.size());
-    verifyEntityInfo(entityId1, entityType1, events1, EMPTY_REL_ENTITIES,
-        primaryFilters, otherInfo, entities.get(0));
-    verifyEntityInfo(entityId1b, entityType1, events1, EMPTY_REL_ENTITIES,
-        primaryFilters, otherInfo, entities.get(1));
-
-    entities = getEntitiesWithPrimaryFilter("type_1", numericFilter2);
-    assertEquals(2, entities.size());
-    verifyEntityInfo(entityId1, entityType1, events1, EMPTY_REL_ENTITIES,
-        primaryFilters, otherInfo, entities.get(0));
-    verifyEntityInfo(entityId1b, entityType1, events1, EMPTY_REL_ENTITIES,
-        primaryFilters, otherInfo, entities.get(1));
-
-    entities = getEntitiesWithPrimaryFilter("type_1", numericFilter3);
-    assertEquals(2, entities.size());
-    verifyEntityInfo(entityId1, entityType1, events1, EMPTY_REL_ENTITIES,
-        primaryFilters, otherInfo, entities.get(0));
-    verifyEntityInfo(entityId1b, entityType1, events1, EMPTY_REL_ENTITIES,
-        primaryFilters, otherInfo, entities.get(1));
-
-    entities = getEntitiesWithPrimaryFilter("type_2", userFilter);
-    assertEquals(0, entities.size());
-
-    entities = getEntities("type_1", 1l, null, null, userFilter, null);
-    assertEquals(1, entities.size());
-    verifyEntityInfo(entityId1, entityType1, events1, EMPTY_REL_ENTITIES,
-        primaryFilters, otherInfo, entities.get(0));
-
-    entities = getEntities("type_1", 1l, 0l, null, userFilter, null);
-    assertEquals(1, entities.size());
-    verifyEntityInfo(entityId1, entityType1, events1, EMPTY_REL_ENTITIES,
-        primaryFilters, otherInfo, entities.get(0));
-
-    entities = getEntities("type_1", null, 234l, null, userFilter, null);
-    assertEquals(0, entities.size());
-
-    entities = getEntities("type_1", null, 234l, 345l, userFilter, null);
-    assertEquals(0, entities.size());
-
-    entities = getEntities("type_1", null, null, 345l, userFilter, null);
-    assertEquals(2, entities.size());
-    verifyEntityInfo(entityId1, entityType1, events1, EMPTY_REL_ENTITIES,
-        primaryFilters, otherInfo, entities.get(0));
-    verifyEntityInfo(entityId1b, entityType1, events1, EMPTY_REL_ENTITIES,
-        primaryFilters, otherInfo, entities.get(1));
-  }
-
-  public void testGetEntitiesWithSecondaryFilters() throws IOException {
-    // test using secondary filter
-    List<TimelineEntity> entities = getEntitiesWithFilters("type_1", null,
-        goodTestingFilters);
-    assertEquals(2, entities.size());
-    verifyEntityInfo(entityId1, entityType1, events1, EMPTY_REL_ENTITIES,
-        primaryFilters, otherInfo, entities.get(0));
-    verifyEntityInfo(entityId1b, entityType1, events1, EMPTY_REL_ENTITIES,
-        primaryFilters, otherInfo, entities.get(1));
-
-    entities = getEntitiesWithFilters("type_1", userFilter, goodTestingFilters);
-    assertEquals(2, entities.size());
-    verifyEntityInfo(entityId1, entityType1, events1, EMPTY_REL_ENTITIES,
-        primaryFilters, otherInfo, entities.get(0));
-    verifyEntityInfo(entityId1b, entityType1, events1, EMPTY_REL_ENTITIES,
-        primaryFilters, otherInfo, entities.get(1));
-
-    entities = getEntitiesWithFilters("type_1", null,
-        Collections.singleton(new NameValuePair("user", "none")));
-    assertEquals(0, entities.size());
-
-    entities = getEntitiesWithFilters("type_1", null, badTestingFilters);
-    assertEquals(0, entities.size());
-
-    entities = getEntitiesWithFilters("type_1", userFilter, badTestingFilters);
-    assertEquals(0, entities.size());
-  }
-
-  public void testGetEvents() throws IOException {
-    // test getting entity timelines
-    SortedSet<String> sortedSet = new TreeSet<String>();
-    sortedSet.add(entityId1);
-    List<EventsOfOneEntity> timelines =
-        store.getEntityTimelines(entityType1, sortedSet, null, null,
-            null, null).getAllEvents();
-    assertEquals(1, timelines.size());
-    verifyEntityTimeline(timelines.get(0), entityId1, entityType1, ev2, ev1);
-
-    sortedSet.add(entityId1b);
-    timelines = store.getEntityTimelines(entityType1, sortedSet, null,
-        null, null, null).getAllEvents();
-    assertEquals(2, timelines.size());
-    verifyEntityTimeline(timelines.get(0), entityId1, entityType1, ev2, ev1);
-    verifyEntityTimeline(timelines.get(1), entityId1b, entityType1, ev2, ev1);
-
-    timelines = store.getEntityTimelines(entityType1, sortedSet, 1l,
-        null, null, null).getAllEvents();
-    assertEquals(2, timelines.size());
-    verifyEntityTimeline(timelines.get(0), entityId1, entityType1, ev2);
-    verifyEntityTimeline(timelines.get(1), entityId1b, entityType1, ev2);
-
-    timelines = store.getEntityTimelines(entityType1, sortedSet, null,
-        345l, null, null).getAllEvents();
-    assertEquals(2, timelines.size());
-    verifyEntityTimeline(timelines.get(0), entityId1, entityType1, ev2);
-    verifyEntityTimeline(timelines.get(1), entityId1b, entityType1, ev2);
-
-    timelines = store.getEntityTimelines(entityType1, sortedSet, null,
-        123l, null, null).getAllEvents();
-    assertEquals(2, timelines.size());
-    verifyEntityTimeline(timelines.get(0), entityId1, entityType1, ev2);
-    verifyEntityTimeline(timelines.get(1), entityId1b, entityType1, ev2);
-
-    timelines = store.getEntityTimelines(entityType1, sortedSet, null,
-        null, 345l, null).getAllEvents();
-    assertEquals(2, timelines.size());
-    verifyEntityTimeline(timelines.get(0), entityId1, entityType1, ev1);
-    verifyEntityTimeline(timelines.get(1), entityId1b, entityType1, ev1);
-
-    timelines = store.getEntityTimelines(entityType1, sortedSet, null,
-        null, 123l, null).getAllEvents();
-    assertEquals(2, timelines.size());
-    verifyEntityTimeline(timelines.get(0), entityId1, entityType1, ev1);
-    verifyEntityTimeline(timelines.get(1), entityId1b, entityType1, ev1);
-
-    timelines = store.getEntityTimelines(entityType1, sortedSet, null,
-        null, null, Collections.singleton("end_event")).getAllEvents();
-    assertEquals(2, timelines.size());
-    verifyEntityTimeline(timelines.get(0), entityId1, entityType1, ev2);
-    verifyEntityTimeline(timelines.get(1), entityId1b, entityType1, ev2);
-
-    sortedSet.add(entityId2);
-    timelines = store.getEntityTimelines(entityType2, sortedSet, null,
-        null, null, null).getAllEvents();
-    assertEquals(1, timelines.size());
-    verifyEntityTimeline(timelines.get(0), entityId2, entityType2, ev3, ev4);
-  }
-
-  /**
-   * Verify a single entity and its start time
-   */
-  protected static void verifyEntityInfo(String entityId, String entityType,
-      List<TimelineEvent> events, Map<String, Set<String>> relatedEntities,
-      Map<String, Set<Object>> primaryFilters, Map<String, Object> otherInfo,
-      Long startTime, TimelineEntity retrievedEntityInfo) {
-
-    verifyEntityInfo(entityId, entityType, events, relatedEntities,
-        primaryFilters, otherInfo, retrievedEntityInfo);
-    assertEquals(startTime, retrievedEntityInfo.getStartTime());
-  }
-
-  /**
-   * Verify a single entity
-   */
-  protected static void verifyEntityInfo(String entityId, String entityType,
-      List<TimelineEvent> events, Map<String, Set<String>> relatedEntities,
-      Map<String, Set<Object>> primaryFilters, Map<String, Object> otherInfo,
-      TimelineEntity retrievedEntityInfo) {
-    if (entityId == null) {
-      assertNull(retrievedEntityInfo);
-      return;
-    }
-    assertEquals(entityId, retrievedEntityInfo.getEntityId());
-    assertEquals(entityType, retrievedEntityInfo.getEntityType());
-    if (events == null) {
-      assertNull(retrievedEntityInfo.getEvents());
-    } else {
-      assertEquals(events, retrievedEntityInfo.getEvents());
-    }
-    if (relatedEntities == null) {
-      assertNull(retrievedEntityInfo.getRelatedEntities());
-    } else {
-      assertEquals(relatedEntities, retrievedEntityInfo.getRelatedEntities());
-    }
-    if (primaryFilters == null) {
-      assertNull(retrievedEntityInfo.getPrimaryFilters());
-    } else {
-      assertTrue(primaryFilters.equals(
-          retrievedEntityInfo.getPrimaryFilters()));
-    }
-    if (otherInfo == null) {
-      assertNull(retrievedEntityInfo.getOtherInfo());
-    } else {
-      assertTrue(otherInfo.equals(retrievedEntityInfo.getOtherInfo()));
-    }
-  }
-
-  /**
-   * Verify timeline events
-   */
-  private static void verifyEntityTimeline(
-      EventsOfOneEntity retrievedEvents, String entityId, String entityType,
-      TimelineEvent... actualEvents) {
-    assertEquals(entityId, retrievedEvents.getEntityId());
-    assertEquals(entityType, retrievedEvents.getEntityType());
-    assertEquals(actualEvents.length, retrievedEvents.getEvents().size());
-    for (int i = 0; i < actualEvents.length; i++) {
-      assertEquals(actualEvents[i], retrievedEvents.getEvents().get(i));
-    }
-  }
-
-  /**
-   * Create a test entity
-   */
-  protected static TimelineEntity createEntity(String entityId, String entityType,
-      Long startTime, List<TimelineEvent> events,
-      Map<String, Set<String>> relatedEntities,
-      Map<String, Set<Object>> primaryFilters,
-      Map<String, Object> otherInfo) {
-    TimelineEntity entity = new TimelineEntity();
-    entity.setEntityId(entityId);
-    entity.setEntityType(entityType);
-    entity.setStartTime(startTime);
-    entity.setEvents(events);
-    if (relatedEntities != null) {
-      for (Entry<String, Set<String>> e : relatedEntities.entrySet()) {
-        for (String v : e.getValue()) {
-          entity.addRelatedEntity(e.getKey(), v);
-        }
-      }
-    } else {
-      entity.setRelatedEntities(null);
-    }
-    entity.setPrimaryFilters(primaryFilters);
-    entity.setOtherInfo(otherInfo);
-    return entity;
-  }
-
-  /**
-   * Create a test event
-   */
-  private static TimelineEvent createEvent(long timestamp, String type, Map<String,
-      Object> info) {
-    TimelineEvent event = new TimelineEvent();
-    event.setTimestamp(timestamp);
-    event.setEventType(type);
-    event.setEventInfo(info);
-    return event;
-  }
-
-}
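
All of the helper methods in the utils above funnel into one nine-argument
TimelineStore.getEntities call, in the order (entityType, limit,
windowStart, windowEnd, fromId, fromTs, primaryFilter, secondaryFilters,
fields). A short usage sketch in the same style; the method name is an
invented illustration, while the filter values come from the tests above:

    // Fetch at most one "type_1" entity matching the user primary filter,
    // with every field populated.
    protected List<TimelineEntity> getFirstUserEntity() throws IOException {
      return store.getEntities("type_1", 1l, null, null, null, null,
          new NameValuePair("user", "username"), null,
          EnumSet.allOf(Field.class)).getEntities();
    }
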
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/TestAHSWebApp.java b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/TestAHSWebApp.java
deleted file mode 100644
index 605358f..0000000
--- a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/TestAHSWebApp.java
+++ /dev/null
@@ -1,199 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.yarn.server.applicationhistoryservice.webapp;
-
-import static org.apache.hadoop.yarn.webapp.Params.TITLE;
-import static org.mockito.Mockito.mock;
-
-import org.junit.Assert;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.yarn.api.ApplicationBaseProtocol;
-import org.apache.hadoop.yarn.api.records.ApplicationAttemptId;
-import org.apache.hadoop.yarn.api.records.ApplicationId;
-import org.apache.hadoop.yarn.api.records.ContainerId;
-import org.apache.hadoop.yarn.api.records.YarnApplicationState;
-import org.apache.hadoop.yarn.conf.YarnConfiguration;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryClientService;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryManager;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryManagerImpl;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryStore;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryStoreTestUtils;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.MemoryApplicationHistoryStore;
-import org.apache.hadoop.yarn.util.StringHelper;
-import org.apache.hadoop.yarn.webapp.YarnWebParams;
-import org.apache.hadoop.yarn.webapp.test.WebAppTests;
-import org.junit.Before;
-import org.junit.Test;
-
-import com.google.inject.Injector;
-
-public class TestAHSWebApp extends ApplicationHistoryStoreTestUtils {
-
-  public void setApplicationHistoryStore(ApplicationHistoryStore store) {
-    this.store = store;
-  }
-
-  @Before
-  public void setup() {
-    store = new MemoryApplicationHistoryStore();
-  }
-
-  @Test
-  public void testAppControllerIndex() throws Exception {
-    ApplicationHistoryManager ahManager = mock(ApplicationHistoryManager.class);
-    Injector injector =
-      WebAppTests.createMockInjector(ApplicationHistoryManager.class,
-        ahManager);
-    AHSController controller = injector.getInstance(AHSController.class);
-    controller.index();
-    Assert
-      .assertEquals("Application History", controller.get(TITLE, "unknown"));
-  }
-
-  @Test
-  public void testView() throws Exception {
-    Injector injector =
-      WebAppTests.createMockInjector(ApplicationBaseProtocol.class,
-        mockApplicationHistoryClientService(5, 1, 1));
-    AHSView ahsViewInstance = injector.getInstance(AHSView.class);
-
-    ahsViewInstance.render();
-    WebAppTests.flushOutput(injector);
-
-    ahsViewInstance.set(YarnWebParams.APP_STATE,
-      YarnApplicationState.FAILED.toString());
-    ahsViewInstance.render();
-    WebAppTests.flushOutput(injector);
-
-    ahsViewInstance.set(YarnWebParams.APP_STATE, StringHelper.cjoin(
-      YarnApplicationState.FAILED.toString(), YarnApplicationState.KILLED));
-    ahsViewInstance.render();
-    WebAppTests.flushOutput(injector);
-  }
-
-  @Test
-  public void testAboutPage() throws Exception {
-    Injector injector =
-      WebAppTests.createMockInjector(ApplicationBaseProtocol.class,
-        mockApplicationHistoryClientService(0, 0, 0));
-    AboutPage aboutPageInstance = injector.getInstance(AboutPage.class);
-
-    aboutPageInstance.render();
-    WebAppTests.flushOutput(injector);
-
-    aboutPageInstance.render();
-    WebAppTests.flushOutput(injector);
-  }
-
-  @Test
-  public void testAppPage() throws Exception {
-    Injector injector =
-      WebAppTests.createMockInjector(ApplicationBaseProtocol.class,
-        mockApplicationHistoryClientService(1, 5, 1));
-    AppPage appPageInstance = injector.getInstance(AppPage.class);
-
-    appPageInstance.render();
-    WebAppTests.flushOutput(injector);
-
-    appPageInstance.set(YarnWebParams.APPLICATION_ID, ApplicationId
-      .newInstance(0, 1).toString());
-    appPageInstance.render();
-    WebAppTests.flushOutput(injector);
-  }
-
-  @Test
-  public void testAppAttemptPage() throws Exception {
-    Injector injector =
-      WebAppTests.createMockInjector(ApplicationBaseProtocol.class,
-        mockApplicationHistoryClientService(1, 1, 5));
-    AppAttemptPage appAttemptPageInstance =
-      injector.getInstance(AppAttemptPage.class);
-
-    appAttemptPageInstance.render();
-    WebAppTests.flushOutput(injector);
-
-    appAttemptPageInstance.set(YarnWebParams.APPLICATION_ATTEMPT_ID,
-      ApplicationAttemptId.newInstance(ApplicationId.newInstance(0, 1), 1)
-        .toString());
-    appAttemptPageInstance.render();
-    WebAppTests.flushOutput(injector);
-  }
-
-  @Test
-  public void testContainerPage() throws Exception {
-    Injector injector =
-      WebAppTests.createMockInjector(ApplicationBaseProtocol.class,
-        mockApplicationHistoryClientService(1, 1, 1));
-    ContainerPage containerPageInstance =
-      injector.getInstance(ContainerPage.class);
-
-    containerPageInstance.render();
-    WebAppTests.flushOutput(injector);
-
-    containerPageInstance.set(
-      YarnWebParams.CONTAINER_ID,
-      ContainerId
-        .newContainerId(
-          ApplicationAttemptId.newInstance(ApplicationId.newInstance(0, 1), 1),
-          1).toString());
-    containerPageInstance.render();
-    WebAppTests.flushOutput(injector);
-  }
-
-  ApplicationHistoryClientService mockApplicationHistoryClientService(int numApps,
-                                                                      int numAppAttempts, int numContainers) throws Exception {
-    ApplicationHistoryManager ahManager =
-      new MockApplicationHistoryManagerImpl(store);
-    ApplicationHistoryClientService historyClientService =
-      new ApplicationHistoryClientService(ahManager);
-    for (int i = 1; i <= numApps; ++i) {
-      ApplicationId appId = ApplicationId.newInstance(0, i);
-      writeApplicationStartData(appId);
-      for (int j = 1; j <= numAppAttempts; ++j) {
-        ApplicationAttemptId appAttemptId =
-          ApplicationAttemptId.newInstance(appId, j);
-        writeApplicationAttemptStartData(appAttemptId);
-        for (int k = 1; k <= numContainers; ++k) {
-          ContainerId containerId = ContainerId.newContainerId(appAttemptId, k);
-          writeContainerStartData(containerId);
-          writeContainerFinishData(containerId);
-        }
-        writeApplicationAttemptFinishData(appAttemptId);
-      }
-      writeApplicationFinishData(appId);
-    }
-    return historyClientService;
-  }
-
-  class MockApplicationHistoryManagerImpl extends ApplicationHistoryManagerImpl {
-
-    public MockApplicationHistoryManagerImpl(ApplicationHistoryStore store) {
-      super();
-      init(new YarnConfiguration());
-      start();
-    }
-
-    @Override
-    protected ApplicationHistoryStore createApplicationHistoryStore(
-      Configuration conf) {
-      return store;
-    }
-  };
-
-}
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/TestAHSWebServices.java b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/TestAHSWebServices.java
deleted file mode 100644
index 44b3f65..0000000
--- a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/TestAHSWebServices.java
+++ /dev/null
@@ -1,302 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.yarn.server.applicationhistoryservice.webapp;
-
-import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.fail;
-
-import javax.ws.rs.core.MediaType;
-
-import junit.framework.Assert;
-
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.yarn.api.ApplicationBaseProtocol;
-import org.apache.hadoop.yarn.api.records.ApplicationAttemptId;
-import org.apache.hadoop.yarn.api.records.ApplicationId;
-import org.apache.hadoop.yarn.api.records.ContainerId;
-import org.apache.hadoop.yarn.api.records.ContainerState;
-import org.apache.hadoop.yarn.api.records.FinalApplicationStatus;
-import org.apache.hadoop.yarn.api.records.NodeId;
-import org.apache.hadoop.yarn.api.records.Priority;
-import org.apache.hadoop.yarn.api.records.YarnApplicationAttemptState;
-import org.apache.hadoop.yarn.api.records.YarnApplicationState;
-import org.apache.hadoop.yarn.conf.YarnConfiguration;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryClientService;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryManager;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryStore;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.MemoryApplicationHistoryStore;
-import org.apache.hadoop.yarn.webapp.GenericExceptionHandler;
-import org.apache.hadoop.yarn.webapp.WebServicesTestUtils;
-import org.apache.hadoop.yarn.webapp.util.WebAppUtils;
-import org.codehaus.jettison.json.JSONArray;
-import org.codehaus.jettison.json.JSONException;
-import org.codehaus.jettison.json.JSONObject;
-import org.junit.Before;
-import org.junit.Test;
-
-import com.google.inject.Guice;
-import com.google.inject.Injector;
-import com.google.inject.servlet.GuiceServletContextListener;
-import com.google.inject.servlet.ServletModule;
-import com.sun.jersey.api.client.ClientResponse;
-import com.sun.jersey.api.client.ClientResponse.Status;
-import com.sun.jersey.api.client.UniformInterfaceException;
-import com.sun.jersey.api.client.WebResource;
-import com.sun.jersey.guice.spi.container.servlet.GuiceContainer;
-import com.sun.jersey.test.framework.JerseyTest;
-import com.sun.jersey.test.framework.WebAppDescriptor;
-
-public class TestAHSWebServices extends JerseyTest {
-
-  private static ApplicationHistoryClientService historyClientService;
-
-  private Injector injector = Guice.createInjector(new ServletModule() {
-
-    @Override
-    protected void configureServlets() {
-      bind(JAXBContextResolver.class);
-      bind(AHSWebServices.class);
-      bind(GenericExceptionHandler.class);
-      try {
-        historyClientService = mockApplicationHistoryManager();
-      } catch (Exception e) {
-        Assert.fail();
-      }
-      bind(ApplicationBaseProtocol.class).toInstance(historyClientService);
-      serve("/*").with(GuiceContainer.class);
-    }
-  });
-
-  public class GuiceServletConfig extends GuiceServletContextListener {
-
-    @Override
-    protected Injector getInjector() {
-      return injector;
-    }
-  }
-
-  private ApplicationHistoryClientService mockApplicationHistoryManager()
-      throws Exception {
-    ApplicationHistoryStore store = new MemoryApplicationHistoryStore();
-    TestAHSWebApp testAHSWebApp = new TestAHSWebApp();
-    testAHSWebApp.setApplicationHistoryStore(store);
-    return testAHSWebApp.mockApplicationHistoryClientService(5, 5, 5);
-  }
-
-  public TestAHSWebServices() {
-    super(new WebAppDescriptor.Builder(
-      "org.apache.hadoop.yarn.server.applicationhistoryservice.webapp")
-      .contextListenerClass(GuiceServletConfig.class)
-      .filterClass(com.google.inject.servlet.GuiceFilter.class)
-      .contextPath("jersey-guice-filter").servletPath("/").build());
-  }
-
-  @Before
-  @Override
-  public void setUp() throws Exception {
-    super.setUp();
-  }
-
-  @Test
-  public void testInvalidUri() throws JSONException, Exception {
-    WebResource r = resource();
-    String responseStr = "";
-    try {
-      responseStr =
-          r.path("ws").path("v1").path("applicationhistory").path("bogus")
-            .accept(MediaType.APPLICATION_JSON).get(String.class);
-      fail("should have thrown exception on invalid uri");
-    } catch (UniformInterfaceException ue) {
-      ClientResponse response = ue.getResponse();
-      assertEquals(Status.NOT_FOUND, response.getClientResponseStatus());
-
-      WebServicesTestUtils.checkStringMatch(
-        "error string exists and shouldn't", "", responseStr);
-    }
-  }
-
-  @Test
-  public void testInvalidUri2() throws JSONException, Exception {
-    WebResource r = resource();
-    String responseStr = "";
-    try {
-      responseStr = r.accept(MediaType.APPLICATION_JSON).get(String.class);
-      fail("should have thrown exception on invalid uri");
-    } catch (UniformInterfaceException ue) {
-      ClientResponse response = ue.getResponse();
-      assertEquals(Status.NOT_FOUND, response.getClientResponseStatus());
-      WebServicesTestUtils.checkStringMatch(
-        "error string exists and shouldn't", "", responseStr);
-    }
-  }
-
-  @Test
-  public void testInvalidAccept() throws JSONException, Exception {
-    WebResource r = resource();
-    String responseStr = "";
-    try {
-      responseStr =
-          r.path("ws").path("v1").path("applicationhistory")
-            .accept(MediaType.TEXT_PLAIN).get(String.class);
-      fail("should have thrown exception on invalid uri");
-    } catch (UniformInterfaceException ue) {
-      ClientResponse response = ue.getResponse();
-      assertEquals(Status.INTERNAL_SERVER_ERROR,
-        response.getClientResponseStatus());
-      WebServicesTestUtils.checkStringMatch(
-        "error string exists and shouldn't", "", responseStr);
-    }
-  }
-
-  @Test
-  public void testAppsQuery() throws Exception {
-    WebResource r = resource();
-    ClientResponse response =
-        r.path("ws").path("v1").path("applicationhistory").path("apps")
-          .queryParam("state", YarnApplicationState.FINISHED.toString())
-          .accept(MediaType.APPLICATION_JSON).get(ClientResponse.class);
-    assertEquals(MediaType.APPLICATION_JSON_TYPE, response.getType());
-    JSONObject json = response.getEntity(JSONObject.class);
-    assertEquals("incorrect number of elements", 1, json.length());
-    JSONObject apps = json.getJSONObject("apps");
-    assertEquals("incorrect number of elements", 1, apps.length());
-    JSONArray array = apps.getJSONArray("app");
-    assertEquals("incorrect number of elements", 5, array.length());
-  }
-
-  @Test
-  public void testSingleApp() throws Exception {
-    ApplicationId appId = ApplicationId.newInstance(0, 1);
-    WebResource r = resource();
-    ClientResponse response =
-        r.path("ws").path("v1").path("applicationhistory").path("apps")
-          .path(appId.toString()).accept(MediaType.APPLICATION_JSON)
-          .get(ClientResponse.class);
-    assertEquals(MediaType.APPLICATION_JSON_TYPE, response.getType());
-    JSONObject json = response.getEntity(JSONObject.class);
-    assertEquals("incorrect number of elements", 1, json.length());
-    JSONObject app = json.getJSONObject("app");
-    assertEquals(appId.toString(), app.getString("appId"));
-    assertEquals(appId.toString(), app.get("name"));
-    assertEquals(appId.toString(), app.get("diagnosticsInfo"));
-    assertEquals("test queue", app.get("queue"));
-    assertEquals("test user", app.get("user"));
-    assertEquals("test type", app.get("type"));
-    assertEquals(FinalApplicationStatus.UNDEFINED.toString(),
-      app.get("finalAppStatus"));
-    assertEquals(YarnApplicationState.FINISHED.toString(), app.get("appState"));
-  }
-
-  @Test
-  public void testMultipleAttempts() throws Exception {
-    ApplicationId appId = ApplicationId.newInstance(0, 1);
-    WebResource r = resource();
-    ClientResponse response =
-        r.path("ws").path("v1").path("applicationhistory").path("apps")
-          .path(appId.toString()).path("appattempts")
-          .accept(MediaType.APPLICATION_JSON).get(ClientResponse.class);
-    assertEquals(MediaType.APPLICATION_JSON_TYPE, response.getType());
-    JSONObject json = response.getEntity(JSONObject.class);
-    assertEquals("incorrect number of elements", 1, json.length());
-    JSONObject appAttempts = json.getJSONObject("appAttempts");
-    assertEquals("incorrect number of elements", 1, appAttempts.length());
-    JSONArray array = appAttempts.getJSONArray("appAttempt");
-    assertEquals("incorrect number of elements", 5, array.length());
-  }
-
-  @Test
-  public void testSingleAttempt() throws Exception {
-    ApplicationId appId = ApplicationId.newInstance(0, 1);
-    ApplicationAttemptId appAttemptId =
-        ApplicationAttemptId.newInstance(appId, 1);
-    WebResource r = resource();
-    ClientResponse response =
-        r.path("ws").path("v1").path("applicationhistory").path("apps")
-          .path(appId.toString()).path("appattempts")
-          .path(appAttemptId.toString()).accept(MediaType.APPLICATION_JSON)
-          .get(ClientResponse.class);
-    assertEquals(MediaType.APPLICATION_JSON_TYPE, response.getType());
-    JSONObject json = response.getEntity(JSONObject.class);
-    assertEquals("incorrect number of elements", 1, json.length());
-    JSONObject appAttempt = json.getJSONObject("appAttempt");
-    assertEquals(appAttemptId.toString(), appAttempt.getString("appAttemptId"));
-    assertEquals(appAttemptId.toString(), appAttempt.getString("host"));
-    assertEquals(appAttemptId.toString(),
-      appAttempt.getString("diagnosticsInfo"));
-    assertEquals("test tracking url", appAttempt.getString("trackingUrl"));
-    assertEquals(YarnApplicationAttemptState.FINISHED.toString(),
-      appAttempt.get("appAttemptState"));
-  }
-
-  @Test
-  public void testMultipleContainers() throws Exception {
-    ApplicationId appId = ApplicationId.newInstance(0, 1);
-    ApplicationAttemptId appAttemptId =
-        ApplicationAttemptId.newInstance(appId, 1);
-    WebResource r = resource();
-    ClientResponse response =
-        r.path("ws").path("v1").path("applicationhistory").path("apps")
-          .path(appId.toString()).path("appattempts")
-          .path(appAttemptId.toString()).path("containers")
-          .accept(MediaType.APPLICATION_JSON).get(ClientResponse.class);
-    assertEquals(MediaType.APPLICATION_JSON_TYPE, response.getType());
-    JSONObject json = response.getEntity(JSONObject.class);
-    assertEquals("incorrect number of elements", 1, json.length());
-    JSONObject containers = json.getJSONObject("containers");
-    assertEquals("incorrect number of elements", 1, containers.length());
-    JSONArray array = containers.getJSONArray("container");
-    assertEquals("incorrect number of elements", 5, array.length());
-  }
-
-  @Test
-  public void testSingleContainer() throws Exception {
-    ApplicationId appId = ApplicationId.newInstance(0, 1);
-    ApplicationAttemptId appAttemptId =
-        ApplicationAttemptId.newInstance(appId, 1);
-    ContainerId containerId = ContainerId.newContainerId(appAttemptId, 1);
-    WebResource r = resource();
-    ClientResponse response =
-        r.path("ws").path("v1").path("applicationhistory").path("apps")
-          .path(appId.toString()).path("appattempts")
-          .path(appAttemptId.toString()).path("containers")
-          .path(containerId.toString()).accept(MediaType.APPLICATION_JSON)
-          .get(ClientResponse.class);
-    assertEquals(MediaType.APPLICATION_JSON_TYPE, response.getType());
-    JSONObject json = response.getEntity(JSONObject.class);
-    assertEquals("incorrect number of elements", 1, json.length());
-    JSONObject container = json.getJSONObject("container");
-    assertEquals(containerId.toString(), container.getString("containerId"));
-    assertEquals(containerId.toString(), container.getString("diagnosticsInfo"));
-    assertEquals("0", container.getString("allocatedMB"));
-    assertEquals("0", container.getString("allocatedVCores"));
-    assertEquals(NodeId.newInstance("localhost", 0).toString(),
-      container.getString("assignedNodeId"));
-    assertEquals(Priority.newInstance(containerId.getId()).toString(),
-      container.getString("priority"));
-    Configuration conf = new YarnConfiguration();
-    assertEquals(WebAppUtils.getHttpSchemePrefix(conf) +
-        WebAppUtils.getAHSWebAppURLWithoutScheme(conf) +
-        "/applicationhistory/logs/localhost:0/container_0_0001_01_000001/" +
-        "container_0_0001_01_000001/test user",
-        container.getString("logUrl"));
-    assertEquals(ContainerState.COMPLETE.toString(),
-      container.getString("containerState"));
-  }
-
-}
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/TestTimelineWebServices.java b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/TestTimelineWebServices.java
index b093a2a..83e2a27 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/TestTimelineWebServices.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/TestTimelineWebServices.java
@@ -22,8 +22,6 @@ import static org.junit.Assert.assertEquals;
 
 import javax.ws.rs.core.MediaType;
 
-import junit.framework.Assert;
-
 import org.apache.hadoop.metrics2.sink.timeline.TimelineMetrics;
 import org.apache.hadoop.yarn.api.records.timeline.TimelineEntities;
 import org.apache.hadoop.yarn.api.records.timeline.TimelineEntity;
@@ -33,7 +31,6 @@ import org.apache.hadoop.yarn.api.records.timeline.TimelinePutResponse;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TestTimelineMetricStore;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricStore;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.timeline.TimelineStore;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.timeline.TestMemoryTimelineStore;
 import org.apache.hadoop.yarn.webapp.GenericExceptionHandler;
 import org.apache.hadoop.yarn.webapp.YarnJacksonJaxbJsonProvider;
 import org.junit.Test;
@@ -49,10 +46,10 @@ import com.sun.jersey.guice.spi.container.servlet.GuiceContainer;
 import com.sun.jersey.test.framework.JerseyTest;
 import com.sun.jersey.test.framework.WebAppDescriptor;
 
+import junit.framework.Assert;
 
-public class TestTimelineWebServices extends JerseyTest {
 
-  private static TimelineStore store;
+public class TestTimelineWebServices extends JerseyTest {
   private static TimelineMetricStore metricStore;
   private long beforeTime;
 
@@ -63,13 +60,11 @@ public class TestTimelineWebServices extends JerseyTest {
       bind(YarnJacksonJaxbJsonProvider.class);
       bind(TimelineWebServices.class);
       bind(GenericExceptionHandler.class);
-      try{
-        store = mockTimelineStore();
+      try {
         metricStore = new TestTimelineMetricStore();
       } catch (Exception e) {
         Assert.fail();
       }
-      bind(TimelineStore.class).toInstance(store);
       bind(TimelineMetricStore.class).toInstance(metricStore);
       serve("/*").with(GuiceContainer.class);
     }
@@ -84,59 +79,30 @@ public class TestTimelineWebServices extends JerseyTest {
     }
   }
 
-  private TimelineStore mockTimelineStore()
-      throws Exception {
-    beforeTime = System.currentTimeMillis() - 1;
-    TestMemoryTimelineStore store = new TestMemoryTimelineStore();
-    store.setup();
-    return store.getTimelineStore();
-  }
-
   public TestTimelineWebServices() {
     super(new WebAppDescriptor.Builder(
-        "org.apache.hadoop.yarn.server.applicationhistoryservice.webapp")
-        .contextListenerClass(GuiceServletConfig.class)
-        .filterClass(com.google.inject.servlet.GuiceFilter.class)
-        .contextPath("jersey-guice-filter")
-        .servletPath("/")
-        .clientConfig(new DefaultClientConfig(YarnJacksonJaxbJsonProvider.class))
-        .build());
+      "org.apache.hadoop.yarn.server.applicationhistoryservice.webapp")
+      .contextListenerClass(GuiceServletConfig.class)
+      .filterClass(com.google.inject.servlet.GuiceFilter.class)
+      .contextPath("jersey-guice-filter")
+      .servletPath("/")
+      .clientConfig(new DefaultClientConfig(YarnJacksonJaxbJsonProvider.class))
+      .build());
   }
 
   @Test
   public void testAbout() throws Exception {
     WebResource r = resource();
     ClientResponse response = r.path("ws").path("v1").path("timeline")
-        .accept(MediaType.APPLICATION_JSON)
-        .get(ClientResponse.class);
+      .accept(MediaType.APPLICATION_JSON)
+      .get(ClientResponse.class);
     assertEquals(MediaType.APPLICATION_JSON_TYPE, response.getType());
     TimelineWebServices.AboutInfo about =
-        response.getEntity(TimelineWebServices.AboutInfo.class);
+      response.getEntity(TimelineWebServices.AboutInfo.class);
     Assert.assertNotNull(about);
-    Assert.assertEquals("Timeline API", about.getAbout());
-  }
-
-  private static void verifyEntities(TimelineEntities entities) {
-    Assert.assertNotNull(entities);
-    Assert.assertEquals(2, entities.getEntities().size());
-    TimelineEntity entity1 = entities.getEntities().get(0);
-    Assert.assertNotNull(entity1);
-    Assert.assertEquals("id_1", entity1.getEntityId());
-    Assert.assertEquals("type_1", entity1.getEntityType());
-    Assert.assertEquals(123l, entity1.getStartTime().longValue());
-    Assert.assertEquals(2, entity1.getEvents().size());
-    Assert.assertEquals(4, entity1.getPrimaryFilters().size());
-    Assert.assertEquals(4, entity1.getOtherInfo().size());
-    TimelineEntity entity2 = entities.getEntities().get(1);
-    Assert.assertNotNull(entity2);
-    Assert.assertEquals("id_2", entity2.getEntityId());
-    Assert.assertEquals("type_1", entity2.getEntityType());
-    Assert.assertEquals(123l, entity2.getStartTime().longValue());
-    Assert.assertEquals(2, entity2.getEvents().size());
-    Assert.assertEquals(4, entity2.getPrimaryFilters().size());
-    Assert.assertEquals(4, entity2.getOtherInfo().size());
+    Assert.assertEquals("AMS API", about.getAbout());
   }
-
+  
   private static void verifyMetrics(TimelineMetrics metrics) {
     Assert.assertNotNull(metrics);
     Assert.assertEquals("cpu_user", metrics.getMetrics().get(0).getMetricName());
@@ -146,239 +112,6 @@ public class TestTimelineWebServices extends JerseyTest {
   }
 
   @Test
-  public void testGetEntities() throws Exception {
-    WebResource r = resource();
-    ClientResponse response = r.path("ws").path("v1").path("timeline")
-        .path("type_1")
-        .accept(MediaType.APPLICATION_JSON)
-        .get(ClientResponse.class);
-    assertEquals(MediaType.APPLICATION_JSON_TYPE, response.getType());
-    verifyEntities(response.getEntity(TimelineEntities.class));
-  }
-
-  @Test
-  public void testFromId() throws Exception {
-    WebResource r = resource();
-    ClientResponse response = r.path("ws").path("v1").path("timeline")
-        .path("type_1").queryParam("fromId", "id_2")
-        .accept(MediaType.APPLICATION_JSON)
-        .get(ClientResponse.class);
-    assertEquals(MediaType.APPLICATION_JSON_TYPE, response.getType());
-    assertEquals(1, response.getEntity(TimelineEntities.class).getEntities()
-        .size());
-
-    response = r.path("ws").path("v1").path("timeline")
-        .path("type_1").queryParam("fromId", "id_1")
-        .accept(MediaType.APPLICATION_JSON)
-        .get(ClientResponse.class);
-    assertEquals(MediaType.APPLICATION_JSON_TYPE, response.getType());
-    assertEquals(2, response.getEntity(TimelineEntities.class).getEntities()
-        .size());
-  }
-
-  @Test
-  public void testFromTs() throws Exception {
-    WebResource r = resource();
-    ClientResponse response = r.path("ws").path("v1").path("timeline")
-        .path("type_1").queryParam("fromTs", Long.toString(beforeTime))
-        .accept(MediaType.APPLICATION_JSON)
-        .get(ClientResponse.class);
-    assertEquals(MediaType.APPLICATION_JSON_TYPE, response.getType());
-    assertEquals(0, response.getEntity(TimelineEntities.class).getEntities()
-        .size());
-
-    response = r.path("ws").path("v1").path("timeline")
-        .path("type_1").queryParam("fromTs", Long.toString(
-            System.currentTimeMillis()))
-        .accept(MediaType.APPLICATION_JSON)
-        .get(ClientResponse.class);
-    assertEquals(MediaType.APPLICATION_JSON_TYPE, response.getType());
-    assertEquals(2, response.getEntity(TimelineEntities.class).getEntities()
-        .size());
-  }
-
-  @Test
-  public void testPrimaryFilterString() {
-    WebResource r = resource();
-    ClientResponse response = r.path("ws").path("v1").path("timeline")
-        .path("type_1").queryParam("primaryFilter", "user:username")
-        .accept(MediaType.APPLICATION_JSON)
-        .get(ClientResponse.class);
-    assertEquals(MediaType.APPLICATION_JSON_TYPE, response.getType());
-    verifyEntities(response.getEntity(TimelineEntities.class));
-  }
-
-  @Test
-  public void testPrimaryFilterInteger() {
-    WebResource r = resource();
-    ClientResponse response = r.path("ws").path("v1").path("timeline")
-        .path("type_1").queryParam("primaryFilter",
-            "appname:" + Integer.toString(Integer.MAX_VALUE))
-        .accept(MediaType.APPLICATION_JSON)
-        .get(ClientResponse.class);
-    assertEquals(MediaType.APPLICATION_JSON_TYPE, response.getType());
-    verifyEntities(response.getEntity(TimelineEntities.class));
-  }
-
-  @Test
-  public void testPrimaryFilterLong() {
-    WebResource r = resource();
-    ClientResponse response = r.path("ws").path("v1").path("timeline")
-        .path("type_1").queryParam("primaryFilter",
-            "long:" + Long.toString((long)Integer.MAX_VALUE + 1l))
-        .accept(MediaType.APPLICATION_JSON)
-        .get(ClientResponse.class);
-    assertEquals(MediaType.APPLICATION_JSON_TYPE, response.getType());
-    verifyEntities(response.getEntity(TimelineEntities.class));
-  }
-
-  @Test
-  public void testPrimaryFilterNumericString() {
-    // without quotes, 123abc is interpreted as the number 123,
-    // which finds no entities
-    WebResource r = resource();
-    ClientResponse response = r.path("ws").path("v1").path("timeline")
-        .path("type_1").queryParam("primaryFilter", "other:123abc")
-        .accept(MediaType.APPLICATION_JSON)
-        .get(ClientResponse.class);
-    assertEquals(MediaType.APPLICATION_JSON_TYPE, response.getType());
-    assertEquals(0, response.getEntity(TimelineEntities.class).getEntities()
-        .size());
-  }
-
-  @Test
-  public void testPrimaryFilterNumericStringWithQuotes() {
-    WebResource r = resource();
-    ClientResponse response = r.path("ws").path("v1").path("timeline")
-        .path("type_1").queryParam("primaryFilter", "other:\"123abc\"")
-        .accept(MediaType.APPLICATION_JSON)
-        .get(ClientResponse.class);
-    assertEquals(MediaType.APPLICATION_JSON_TYPE, response.getType());
-    verifyEntities(response.getEntity(TimelineEntities.class));
-  }
-
-  @Test
-  public void testSecondaryFilters() {
-    WebResource r = resource();
-    ClientResponse response = r.path("ws").path("v1").path("timeline")
-        .path("type_1")
-        .queryParam("secondaryFilter",
-            "user:username,appname:" + Integer.toString(Integer.MAX_VALUE))
-        .accept(MediaType.APPLICATION_JSON)
-        .get(ClientResponse.class);
-    assertEquals(MediaType.APPLICATION_JSON_TYPE, response.getType());
-    verifyEntities(response.getEntity(TimelineEntities.class));
-  }
-
-  @Test
-  public void testGetEntity() throws Exception {
-    WebResource r = resource();
-    ClientResponse response = r.path("ws").path("v1").path("timeline")
-        .path("type_1").path("id_1")
-        .accept(MediaType.APPLICATION_JSON)
-        .get(ClientResponse.class);
-    assertEquals(MediaType.APPLICATION_JSON_TYPE, response.getType());
-    TimelineEntity entity = response.getEntity(TimelineEntity.class);
-    Assert.assertNotNull(entity);
-    Assert.assertEquals("id_1", entity.getEntityId());
-    Assert.assertEquals("type_1", entity.getEntityType());
-    Assert.assertEquals(123l, entity.getStartTime().longValue());
-    Assert.assertEquals(2, entity.getEvents().size());
-    Assert.assertEquals(4, entity.getPrimaryFilters().size());
-    Assert.assertEquals(4, entity.getOtherInfo().size());
-  }
-
-  @Test
-  public void testGetEntityFields1() throws Exception {
-    WebResource r = resource();
-    ClientResponse response = r.path("ws").path("v1").path("timeline")
-        .path("type_1").path("id_1").queryParam("fields", "events,otherinfo")
-        .accept(MediaType.APPLICATION_JSON)
-        .get(ClientResponse.class);
-    assertEquals(MediaType.APPLICATION_JSON_TYPE, response.getType());
-    TimelineEntity entity = response.getEntity(TimelineEntity.class);
-    Assert.assertNotNull(entity);
-    Assert.assertEquals("id_1", entity.getEntityId());
-    Assert.assertEquals("type_1", entity.getEntityType());
-    Assert.assertEquals(123l, entity.getStartTime().longValue());
-    Assert.assertEquals(2, entity.getEvents().size());
-    Assert.assertEquals(0, entity.getPrimaryFilters().size());
-    Assert.assertEquals(4, entity.getOtherInfo().size());
-  }
-
-  @Test
-  public void testGetEntityFields2() throws Exception {
-    WebResource r = resource();
-    ClientResponse response = r.path("ws").path("v1").path("timeline")
-        .path("type_1").path("id_1").queryParam("fields", "lasteventonly," +
-            "primaryfilters,relatedentities")
-        .accept(MediaType.APPLICATION_JSON)
-        .get(ClientResponse.class);
-    assertEquals(MediaType.APPLICATION_JSON_TYPE, response.getType());
-    TimelineEntity entity = response.getEntity(TimelineEntity.class);
-    Assert.assertNotNull(entity);
-    Assert.assertEquals("id_1", entity.getEntityId());
-    Assert.assertEquals("type_1", entity.getEntityType());
-    Assert.assertEquals(123l, entity.getStartTime().longValue());
-    Assert.assertEquals(1, entity.getEvents().size());
-    Assert.assertEquals(4, entity.getPrimaryFilters().size());
-    Assert.assertEquals(0, entity.getOtherInfo().size());
-  }
-
-  @Test
-  public void testGetEvents() throws Exception {
-    WebResource r = resource();
-    ClientResponse response = r.path("ws").path("v1").path("timeline")
-        .path("type_1").path("events")
-        .queryParam("entityId", "id_1")
-        .accept(MediaType.APPLICATION_JSON)
-        .get(ClientResponse.class);
-    assertEquals(MediaType.APPLICATION_JSON_TYPE, response.getType());
-    TimelineEvents events = response.getEntity(TimelineEvents.class);
-    Assert.assertNotNull(events);
-    Assert.assertEquals(1, events.getAllEvents().size());
-    TimelineEvents.EventsOfOneEntity partEvents = events.getAllEvents().get(0);
-    Assert.assertEquals(2, partEvents.getEvents().size());
-    TimelineEvent event1 = partEvents.getEvents().get(0);
-    Assert.assertEquals(456l, event1.getTimestamp());
-    Assert.assertEquals("end_event", event1.getEventType());
-    Assert.assertEquals(1, event1.getEventInfo().size());
-    TimelineEvent event2 = partEvents.getEvents().get(1);
-    Assert.assertEquals(123l, event2.getTimestamp());
-    Assert.assertEquals("start_event", event2.getEventType());
-    Assert.assertEquals(0, event2.getEventInfo().size());
-  }
-
-  @Test
-  public void testPostEntities() throws Exception {
-    TimelineEntities entities = new TimelineEntities();
-    TimelineEntity entity = new TimelineEntity();
-    entity.setEntityId("test id");
-    entity.setEntityType("test type");
-    entity.setStartTime(System.currentTimeMillis());
-    entities.addEntity(entity);
-    WebResource r = resource();
-    ClientResponse response = r.path("ws").path("v1").path("timeline")
-        .accept(MediaType.APPLICATION_JSON)
-        .type(MediaType.APPLICATION_JSON)
-        .post(ClientResponse.class, entities);
-    assertEquals(MediaType.APPLICATION_JSON_TYPE, response.getType());
-    TimelinePutResponse putResposne = response.getEntity(TimelinePutResponse.class);
-    Assert.assertNotNull(putResposne);
-    Assert.assertEquals(0, putResposne.getErrors().size());
-    // verify the entity exists in the store
-    response = r.path("ws").path("v1").path("timeline")
-        .path("test type").path("test id")
-        .accept(MediaType.APPLICATION_JSON)
-        .get(ClientResponse.class);
-    assertEquals(MediaType.APPLICATION_JSON_TYPE, response.getType());
-    entity = response.getEntity(TimelineEntity.class);
-    Assert.assertNotNull(entity);
-    Assert.assertEquals("test id", entity.getEntityId());
-    Assert.assertEquals("test type", entity.getEntityType());
-  }
-
-  @Test
   public void testGetMetrics() throws Exception {
     WebResource r = resource();
     ClientResponse response = r.path("ws").path("v1").path("timeline")
diff --git a/ambari-metrics/pom.xml b/ambari-metrics/pom.xml
index 98559e6..c91f2f9 100644
--- a/ambari-metrics/pom.xml
+++ b/ambari-metrics/pom.xml
@@ -41,23 +41,17 @@
     <python.ver>python &gt;= 2.6</python.ver>
     <deb.python.ver>python (&gt;= 2.6)</deb.python.ver>
     <!--TODO change to HDP URL-->
-    <hbase.tar>https://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.6.4.0/tars/hbase/hbase-1.1.2.2.6.4.0-91.tar.gz</hbase.tar>
-    <hbase.folder>hbase-1.1.2.2.6.4.0-91</hbase.folder>
-    <hadoop.tar>https://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.6.4.0/tars/hadoop/hadoop-2.7.3.2.6.4.0-91.tar.gz</hadoop.tar>
-    <hadoop.folder>hadoop-2.7.3.2.6.4.0-91</hadoop.folder>
-    <hbase.winpkg.zip>https://msibuilds.blob.core.windows.net/hdp/2.x/2.2.4.2/2/hbase-0.98.4.2.2.4.2-0002-hadoop2.winpkg.zip</hbase.winpkg.zip>
-    <hbase.winpkg.folder>hbase-0.98.4.2.2.4.2-0002-hadoop2</hbase.winpkg.folder>
-    <hadoop.winpkg.zip>https://msibuilds.blob.core.windows.net/hdp/2.x/2.2.4.2/2/hadoop-2.6.0.2.2.4.2-0002.winpkg.zip</hadoop.winpkg.zip>
-    <hadoop.winpkg.folder>hadoop-2.6.0.2.2.4.2-0002</hadoop.winpkg.folder>
+    <hbase.tar>http://dev.hortonworks.com.s3.amazonaws.com/HDP/centos7/3.x/BUILDS/3.0.0.0-623/tars/hbase/hbase-2.0.0.3.0.0.0-623-bin.tar.gz</hbase.tar>
+    <hbase.folder>hbase-1.1.2.2.6.0.3-8</hbase.folder>
+    <hadoop.tar>http://dev.hortonworks.com.s3.amazonaws.com/HDP/centos7/3.x/BUILDS/3.0.0.0-623/tars/hadoop/hadoop-3.0.0.3.0.0.0-623.tar.gz</hadoop.tar>
+    <hadoop.folder>hadoop-3.0.0.3.0.0.0-623</hadoop.folder>
     <grafana.folder>grafana-2.6.0</grafana.folder>
     <grafana.tar>https://grafanarel.s3.amazonaws.com/builds/grafana-2.6.0.linux-x64.tar.gz</grafana.tar>
-    <phoenix.tar>https://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.6.4.0/tars/phoenix/phoenix-4.7.0.2.6.4.0-91.tar.gz</phoenix.tar>
-    <phoenix.folder>phoenix-4.7.0.2.6.4.0-91</phoenix.folder>
+    <phoenix.tar>http://dev.hortonworks.com.s3.amazonaws.com/HDP/centos7/3.x/BUILDS/3.0.0.0-623/tars/phoenix/phoenix-5.0.0.3.0.0.0-623.tar.gz</phoenix.tar>
+    <phoenix.folder>phoenix-5.0.0.3.0.0.0-623</phoenix.folder>
     <spark.tar>http://dev.hortonworks.com.s3.amazonaws.com/HDP/centos7/3.x/BUILDS/3.0.0.0-439/tars/spark2/spark-2.1.0.3.0.0.0-439-bin-3.0.0.3.0.0.0-439.tgz</spark.tar>
     <spark.folder>spark-2.1.0.3.0.0.0-439-bin-3.0.0.3.0.0.0-439</spark.folder>
-    <resmonitor.install.dir>
-      /usr/lib/python2.6/site-packages/resource_monitoring
-    </resmonitor.install.dir>
+    <resmonitor.install.dir>/usr/lib/python2.6/site-packages/resource_monitoring</resmonitor.install.dir>
     <powermock.version>1.6.2</powermock.version>
     <distMgmtSnapshotsId>apache.snapshots.https</distMgmtSnapshotsId>
     <distMgmtSnapshotsName>Apache Development Snapshot Repository</distMgmtSnapshotsName>

[ambari] 35/39: Fix AMS Phoenix, HBase and Hadoop versions in pom.xml

avijayan pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/ambari.git

commit 003c522a277e4f29f803a5fbd57d71b3f60ad67b
Author: Aravindan Vijayan <av...@hortonworks.com>
AuthorDate: Tue Feb 27 11:52:24 2018 -0800

    Fix AMS Phoenix, HBase and Hadoop versions in pom.xml
---
 ambari-metrics/ambari-metrics-timelineservice/pom.xml |  6 +++---
 ambari-metrics/pom.xml                                | 14 +++++++-------
 2 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/ambari-metrics/ambari-metrics-timelineservice/pom.xml b/ambari-metrics/ambari-metrics-timelineservice/pom.xml
index d06c0ea..6a6dc3e 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/pom.xml
+++ b/ambari-metrics/ambari-metrics-timelineservice/pom.xml
@@ -34,9 +34,9 @@
     <!-- Needed for generating FindBugs warnings using parent pom -->
     <!--<yarn.basedir>${project.parent.parent.basedir}</yarn.basedir>-->
     <protobuf.version>2.5.0</protobuf.version>
-    <hadoop.version>3.0.0.3.0.0.0-623</hadoop.version>
-    <phoenix.version>5.0.0.3.0.0.0-623</phoenix.version>
-    <hbase.version>2.0.0.3.0.0.0-623</hbase.version>
+    <hadoop.version>3.0.0.3.0.0.2-97</hadoop.version>
+    <phoenix.version>5.0.0.3.0.0.2-97</phoenix.version>
+    <hbase.version>2.0.0.3.0.0.2-97</hbase.version>
   </properties>
 
   <build>
diff --git a/ambari-metrics/pom.xml b/ambari-metrics/pom.xml
index d52f93d..32f7ab2 100644
--- a/ambari-metrics/pom.xml
+++ b/ambari-metrics/pom.xml
@@ -40,14 +40,14 @@
     <python.ver>python &gt;= 2.6</python.ver>
     <deb.python.ver>python (&gt;= 2.6)</deb.python.ver>
     <!--TODO change to HDP URL-->
-    <hbase.tar>http://dev.hortonworks.com.s3.amazonaws.com/HDP/centos7/3.x/BUILDS/3.0.0.0-623/tars/hbase/hbase-2.0.0.3.0.0.0-623-bin.tar.gz</hbase.tar>
-    <hbase.folder>hbase-2.0.0.3.0.0.0-623</hbase.folder>
-    <hadoop.tar>http://dev.hortonworks.com.s3.amazonaws.com/HDP/centos7/3.x/BUILDS/3.0.0.0-623/tars/hadoop/hadoop-3.0.0.3.0.0.0-623.tar.gz</hadoop.tar>
-    <hadoop.folder>hadoop-3.0.0.3.0.0.0-623</hadoop.folder>
+    <hbase.tar>http://dev.hortonworks.com.s3.amazonaws.com/HDP/centos7/3.x/BUILDS/3.0.0.2-97/tars/hbase/hbase-2.0.0.3.0.0.2-97-bin.tar.gz</hbase.tar>
+    <hbase.folder>hbase-2.0.0.3.0.0.2-97</hbase.folder>
+    <hadoop.tar>http://dev.hortonworks.com.s3.amazonaws.com/HDP/centos7/3.x/BUILDS/3.0.0.2-97/tars/hadoop/hadoop-3.0.0.3.0.0.2-97.tar.gz</hadoop.tar>
+    <hadoop.folder>hadoop-3.0.0.3.0.0.2-97</hadoop.folder>
     <grafana.folder>grafana-2.6.0</grafana.folder>
     <grafana.tar>https://grafanarel.s3.amazonaws.com/builds/grafana-2.6.0.linux-x64.tar.gz</grafana.tar>
-    <phoenix.tar>http://dev.hortonworks.com.s3.amazonaws.com/HDP/centos7/3.x/BUILDS/3.0.0.0-623/tars/phoenix/phoenix-5.0.0.3.0.0.0-623.tar.gz</phoenix.tar>
-    <phoenix.folder>phoenix-5.0.0.3.0.0.0-623</phoenix.folder>
+    <phoenix.tar>http://dev.hortonworks.com.s3.amazonaws.com/HDP/centos7/3.x/BUILDS/3.0.0.2-97/tars/phoenix/phoenix-5.0.0.3.0.0.2-97.tar.gz</phoenix.tar>
+    <phoenix.folder>phoenix-5.0.0.3.0.0.2-97</phoenix.folder>
     <resmonitor.install.dir>/usr/lib/python2.6/site-packages/resource_monitoring</resmonitor.install.dir>
     <powermock.version>1.6.2</powermock.version>
     <distMgmtSnapshotsId>apache.snapshots.https</distMgmtSnapshotsId>
@@ -73,7 +73,7 @@
     <repository>
       <id>apache-hadoop</id>
       <name>hdp</name>
-      <url>http://repo.hortonworks.com/content/groups/public/</url>
+      <url>http://nexus-private.hortonworks.com/nexus/content/groups/public</url>
     </repository>
     <repository>
       <id>apache-snapshots</id>

[ambari] 13/39: AMBARI-21686 : Implement a test driver that provides a set of metric series with different kinds of metric behavior. (avijayan)

avijayan pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/ambari.git

commit e466cc4fd06a30c9fd62eb99923576664233f06e
Author: Aravindan Vijayan <av...@hortonworks.com>
AuthorDate: Thu Sep 21 15:40:35 2017 -0700

    AMBARI-21686 : Implement a test driver that provides a set of metric series with different kinds of metric behavior. (avijayan)
---
 ambari-metrics/ambari-metrics-alertservice/pom.xml |  17 +-
 .../ambari/metrics/alertservice/R/AmsRTest.java    | 147 --------
 .../metrics/alertservice/R/RFunctionInvoker.java   | 192 -----------
 .../metrics/alertservice/common/MetricAnomaly.java |  69 ----
 .../common/SingleValuedTimelineMetric.java         | 103 ------
 .../alertservice/common/TimelineMetric.java        | 238 -------------
 .../alertservice/common/TimelineMetrics.java       | 129 -------
 .../metrics/alertservice/methods/ema/EmaDS.java    |  70 ----
 .../metrics/alertservice/methods/ema/EmaModel.java | 129 -------
 .../alertservice/methods/ema/TestEmaModel.java     |  68 ----
 .../prototype/AmbariServerInterface.java           | 122 +++++++
 .../prototype/MetricAnomalyDetectorTestInput.java  | 126 +++++++
 .../prototype/MetricAnomalyTester.java             | 163 +++++++++
 .../MetricKafkaProducer.java}                      |  54 +--
 .../prototype/MetricSparkConsumer.java             | 178 ++++++++++
 .../prototype/MetricsCollectorInterface.java       | 237 +++++++++++++
 .../prototype/PointInTimeADSystem.java             | 256 ++++++++++++++
 .../alertservice/prototype/RFunctionInvoker.java   | 222 ++++++++++++
 .../prototype/TestSeriesInputRequest.java          |  88 +++++
 .../alertservice/prototype/TrendADSystem.java      | 331 ++++++++++++++++++
 .../TrendMetric.java}                              |  18 +-
 .../common/DataSeries.java}                        |  12 +-
 .../{ => prototype}/common/ResultSet.java          |   4 +-
 .../{ => prototype}/common/StatisticUtils.java     |  41 +--
 .../methods/AnomalyDetectionTechnique.java}        |  26 +-
 .../prototype/methods/MetricAnomaly.java           |  86 +++++
 .../prototype/methods/ema/EmaModel.java            | 124 +++++++
 .../methods/ema/EmaModelLoader.java                |  28 +-
 .../prototype/methods/ema/EmaTechnique.java        | 142 ++++++++
 .../prototype/methods/hsdev/HsdevTechnique.java    |  77 +++++
 .../prototype/methods/kstest/KSTechnique.java      | 101 ++++++
 .../AbstractMetricSeries.java}                     |  12 +-
 .../seriesgenerator/DualBandMetricSeries.java      |  88 +++++
 .../MetricSeriesGeneratorFactory.java              | 379 +++++++++++++++++++++
 .../seriesgenerator/MonotonicMetricSeries.java     | 101 ++++++
 .../seriesgenerator/NormalMetricSeries.java        |  81 +++++
 .../SteadyWithTurbulenceMetricSeries.java          | 115 +++++++
 .../seriesgenerator/StepFunctionMetricSeries.java  | 107 ++++++
 .../seriesgenerator/UniformMetricSeries.java       |  95 ++++++
 .../alertservice/spark/AnomalyMetricPublisher.java | 196 -----------
 .../alertservice/spark/MetricAnomalyDetector.java  | 147 --------
 .../src/main/resources/R-scripts/hsdev.r           |  12 +-
 .../src/main/resources/R-scripts/kstest.r          |   2 +-
 .../src/main/resources/R-scripts/tukeys.r          |   9 +-
 .../src/main/resources/R-scripts/util.R            |  36 --
 .../alertservice/prototype/TestEmaTechnique.java   |  86 +++++
 .../prototype/TestRFunctionInvoker.java            | 161 +++++++++
 .../metrics/alertservice/prototype/TestTukeys.java | 100 ++++++
 .../seriesgenerator/MetricSeriesGeneratorTest.java | 108 ++++++
 .../metrics2/sink/timeline/TimelineMetric.java     |   5 +-
 .../metrics2/sink/timeline/TimelineMetrics.java    |   3 +-
 .../metrics/spark/MetricAnomalyDetector.scala      |  18 +-
 .../ambari/metrics/spark/SparkPhoenixReader.scala  |  18 +-
 .../ambari-metrics-timelineservice/pom.xml         |   2 +-
 .../timeline/HBaseTimelineMetricsService.java      |  24 --
 .../metrics/timeline/MetricsPaddingMethodTest.java |   7 -
 56 files changed, 3794 insertions(+), 1716 deletions(-)
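
This commit's body is just the diffstat above, so the intent is worth spelling out: the test driver synthesizes metric series with controlled shapes (normal, dual band, monotonic, step function, steady-with-turbulence, uniform) so the anomaly-detection techniques (EMA, HSdev, KS) can be exercised against known behavior. Below is a minimal sketch of one such generator; the class and method names are hypothetical, and the actual implementations are the seriesgenerator classes listed in the diffstat.

import java.util.Random;

/** Sketch only: a noisy series around a fixed mean with periodic spikes. */
public class NormalSeriesSketch {

  /**
   * Returns n Gaussian samples around mean with the given deviation,
   * scaling every outlierInterval-th point by outlierFactor so that
   * detectors have a known anomaly to find.
   */
  public static double[] generate(int n, double mean, double deviation,
                                  int outlierInterval, double outlierFactor,
                                  long seed) {
    Random random = new Random(seed);
    double[] values = new double[n];
    for (int i = 0; i < n; i++) {
      double value = mean + deviation * random.nextGaussian();
      if (outlierInterval > 0 && (i + 1) % outlierInterval == 0) {
        value *= outlierFactor; // injected anomaly
      }
      values[i] = value;
    }
    return values;
  }

  public static void main(String[] args) {
    // 1000 points around 5.0 (sigma 0.5) with a 4x spike every 200 points.
    double[] series = generate(1000, 5.0, 0.5, 200, 4.0, 42L);
    System.out.println("points=" + series.length + ", first=" + series[0]);
  }
}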

diff --git a/ambari-metrics/ambari-metrics-alertservice/pom.xml b/ambari-metrics/ambari-metrics-alertservice/pom.xml
index 4afc80f..4db8a6a 100644
--- a/ambari-metrics/ambari-metrics-alertservice/pom.xml
+++ b/ambari-metrics/ambari-metrics-alertservice/pom.xml
@@ -31,7 +31,6 @@
     <build>
         <plugins>
             <plugin>
-                <groupId>org.apache.maven.plugins</groupId>
                 <artifactId>maven-compiler-plugin</artifactId>
                 <configuration>
                     <source>1.8</source>
@@ -130,5 +129,21 @@
             <artifactId>spark-mllib_2.10</artifactId>
             <version>1.3.0</version>
         </dependency>
+        <dependency>
+            <groupId>junit</groupId>
+            <artifactId>junit</artifactId>
+            <scope>test</scope>
+            <version>4.10</version>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.ambari</groupId>
+            <artifactId>ambari-metrics-common</artifactId>
+            <version>${project.version}</version>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.httpcomponents</groupId>
+            <artifactId>httpclient</artifactId>
+            <version>4.2.5</version>
+        </dependency>
     </dependencies>
 </project>
\ No newline at end of file
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/R/AmsRTest.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/R/AmsRTest.java
deleted file mode 100644
index 2bbc250..0000000
--- a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/R/AmsRTest.java
+++ /dev/null
@@ -1,147 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.ambari.metrics.alertservice.R;
-
-import org.apache.ambari.metrics.alertservice.common.ResultSet;
-import org.apache.ambari.metrics.alertservice.common.DataSet;
-import org.apache.commons.lang.ArrayUtils;
-import org.rosuda.JRI.REXP;
-import org.rosuda.JRI.RVector;
-import org.rosuda.JRI.Rengine;
-
-import java.util.HashMap;
-import java.util.Map;
-import java.util.Random;
-
-public class AmsRTest {
-
-    public static void main(String[] args) {
-
-        String metricName = "TestMetric";
-        double[] ts = getTS(1000);
-
-        double[] train_ts = ArrayUtils.subarray(ts, 0,750);
-        double[] train_x = getData(750);
-        DataSet trainData = new DataSet(metricName, train_ts, train_x);
-
-        double[] test_ts = ArrayUtils.subarray(ts, 750,1000);
-        double[] test_x = getData(250);
-        test_x[50] = 5.5; //Anomaly
-        DataSet testData = new DataSet(metricName, test_ts, test_x);
-        ResultSet rs;
-
-        Map<String, String> configs = new HashMap();
-
-        System.out.println("TUKEYS");
-        configs.put("tukeys.n", "3");
-        rs = RFunctionInvoker.tukeys(trainData, testData, configs);
-        rs.print();
-        System.out.println("--------------");
-
-        System.out.println("EMA Global");
-        configs.put("ema.n", "3");
-        configs.put("ema.w", "0.8");
-        rs = RFunctionInvoker.ema_global(trainData, testData, configs);
-        rs.print();
-        System.out.println("--------------");
-
-        System.out.println("EMA Daily");
-        rs = RFunctionInvoker.ema_daily(trainData, testData, configs);
-        rs.print();
-        System.out.println("--------------");
-
-        configs.put("ks.p_value", "0.05");
-        System.out.println("KS Test");
-        rs = RFunctionInvoker.ksTest(trainData, testData, configs);
-        rs.print();
-        System.out.println("--------------");
-
-        ts = getTS(5000);
-        train_ts = ArrayUtils.subarray(ts, 30,4800);
-        train_x = getData(4800);
-        trainData = new DataSet(metricName, train_ts, train_x);
-        test_ts = ArrayUtils.subarray(ts, 4800,5000);
-        test_x = getData(200);
-        for (int i =0; i<200;i++) {
-            test_x[i] = test_x[i]*5;
-        }
-        testData = new DataSet(metricName, test_ts, test_x);
-        configs.put("hsdev.n", "3");
-        configs.put("hsdev.nhp", "3");
-        configs.put("hsdev.interval", "86400000");
-        configs.put("hsdev.period", "604800000");
-        System.out.println("HSdev");
-        rs = RFunctionInvoker.hsdev(trainData, testData, configs);
-        rs.print();
-        System.out.println("--------------");
-
-    }
-
-    static double[] getTS(int n) {
-        long currentTime = System.currentTimeMillis();
-        double[] ts = new double[n];
-        currentTime = currentTime - (currentTime % (5*60*1000));
-
-        for (int i = 0,j=n-1; i<n; i++,j--) {
-            ts[j] = currentTime;
-            currentTime = currentTime - (5*60*1000);
-        }
-        return ts;
-    }
-
-    static void testBasic() {
-        Rengine r = new Rengine(new String[]{"--no-save"}, false, null);
-        try {
-            r.eval("library(ambarimetricsAD)");
-            r.eval("source('~/dev/AMS/AD/ambarimetricsAD/org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.alerting.R/test.org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.alerting.R', echo=TRUE)");
-            r.eval("source('~/dev/AMS/AD/ambarimetricsAD/org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.alerting.R/util.org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.alerting.R', echo=TRUE)");
-            r.eval("source('~/dev/AMS/AD/ambarimetricsAD/org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.alerting.R/tukeys.org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.alerting.R', echo=TRUE)");
-            double[] ts = getTS(5000);
-            double[] x = getData(5000);
-            r.assign("ts", ts);
-            r.assign("x", x);
-            r.eval("x[1000] <- 4.5");
-            r.eval("x[2000] <- 4.75");
-            r.eval("x[3000] <- 3.5");
-            r.eval("x[4000] <- 5.5");
-            r.eval("x[5000] <- 5.0");
-            r.eval("data <- data.frame(ts,x)");
-            r.eval("names(data) <- c(\"TS\", \"Metric\")");
-            System.out.println(r.eval("data"));
-            REXP exp = r.eval("t_an <- test_methods(data)");
-            exp = r.eval("t_an");
-            String strExp = exp.asString();
-            System.out.println("result:" + exp);
-            RVector cont = (RVector) exp.getContent();
-            double[] an_ts = cont.at(0).asDoubleArray();
-            double[] an_x = cont.at(1).asDoubleArray();
-            System.out.println("result:" + strExp);
-        }
-        finally {
-            r.end();
-        }
-    }
-    static double[] getData(int n) {
-        double[] metrics = new double[n];
-        Random random = new Random();
-        for (int i = 0; i<n; i++) {
-            metrics[i] = random.nextDouble();
-        }
-        return metrics;
-    }
-}
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/R/RFunctionInvoker.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/R/RFunctionInvoker.java
deleted file mode 100644
index 2713b71..0000000
--- a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/R/RFunctionInvoker.java
+++ /dev/null
@@ -1,192 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.ambari.metrics.alertservice.R;
-
-
-import org.apache.ambari.metrics.alertservice.common.ResultSet;
-import org.apache.ambari.metrics.alertservice.common.DataSet;
-import org.rosuda.JRI.REXP;
-import org.rosuda.JRI.RVector;
-import org.rosuda.JRI.Rengine;
-
-import java.util.ArrayList;
-import java.util.List;
-import java.util.Map;
-
-public class RFunctionInvoker {
-
-    public static Rengine r = new Rengine(new String[]{"--no-save"}, false, null);
-
-
-    private static void loadDataSets(Rengine r, DataSet trainData, DataSet testData) {
-        r.assign("train_ts", trainData.ts);
-        r.assign("train_x", trainData.values);
-        r.eval("train_data <- data.frame(train_ts,train_x)");
-        r.eval("names(train_data) <- c(\"TS\", " + trainData.metricName + ")");
-
-        r.assign("test_ts", testData.ts);
-        r.assign("test_x", testData.values);
-        r.eval("test_data <- data.frame(test_ts,test_x)");
-        r.eval("names(test_data) <- c(\"TS\", " + testData.metricName + ")");
-    }
-
-
-    public static ResultSet tukeys(DataSet trainData, DataSet testData, Map<String, String> configs) {
-        try {
-            r.eval("source('tukeys.r', echo=TRUE)");
-
-            int n = Integer.parseInt(configs.get("tukeys.n"));
-            r.eval("n <- " + n);
-
-            loadDataSets(r, trainData, testData);
-
-            r.eval("an <- ams_tukeys(train_data, test_data, n)");
-            REXP exp = r.eval("an");
-            RVector cont = (RVector) exp.getContent();
-            List<double[]> result = new ArrayList();
-            for (int i = 0; i< cont.size(); i++) {
-                result.add(cont.at(i).asDoubleArray());
-            }
-            return new ResultSet(result);
-        } catch(Exception e) {
-            e.printStackTrace();
-        } finally {
-            r.end();
-        }
-        return null;
-    }
-
-    public static ResultSet ema_global(DataSet trainData, DataSet testData, Map<String, String> configs) {
-        try {
-            r.eval("source('ema.R', echo=TRUE)");
-
-            int n = Integer.parseInt(configs.get("ema.n"));
-            r.eval("n <- " + n);
-
-            double w = Double.parseDouble(configs.get("ema.w"));
-            r.eval("w <- " + w);
-
-            loadDataSets(r, trainData, testData);
-
-            r.eval("an <- ema_global(train_data, test_data, w, n)");
-            REXP exp = r.eval("an");
-            RVector cont = (RVector) exp.getContent();
-            List<double[]> result = new ArrayList();
-            for (int i = 0; i< cont.size(); i++) {
-                result.add(cont.at(i).asDoubleArray());
-            }
-            return new ResultSet(result);
-
-        } catch(Exception e) {
-            e.printStackTrace();
-        } finally {
-            r.end();
-        }
-        return null;
-    }
-
-    public static ResultSet ema_daily(DataSet trainData, DataSet testData, Map<String, String> configs) {
-        try {
-            r.eval("source('ema.R', echo=TRUE)");
-
-            int n = Integer.parseInt(configs.get("ema.n"));
-            r.eval("n <- " + n);
-
-            double w = Double.parseDouble(configs.get("ema.w"));
-            r.eval("w <- " + w);
-
-            loadDataSets(r, trainData, testData);
-
-            r.eval("an <- ema_daily(train_data, test_data, w, n)");
-            REXP exp = r.eval("an");
-            RVector cont = (RVector) exp.getContent();
-            List<double[]> result = new ArrayList();
-            for (int i = 0; i< cont.size(); i++) {
-                result.add(cont.at(i).asDoubleArray());
-            }
-            return new ResultSet(result);
-
-        } catch(Exception e) {
-            e.printStackTrace();
-        } finally {
-            r.end();
-        }
-        return null;
-    }
-
-    public static ResultSet ksTest(DataSet trainData, DataSet testData, Map<String, String> configs) {
-        try {
-            r.eval("source('kstest.r', echo=TRUE)");
-
-            double p_value = Double.parseDouble(configs.get("ks.p_value"));
-            r.eval("p_value <- " + p_value);
-
-            loadDataSets(r, trainData, testData);
-
-            r.eval("an <- ams_ks(train_data, test_data, p_value)");
-            REXP exp = r.eval("an");
-            RVector cont = (RVector) exp.getContent();
-            List<double[]> result = new ArrayList();
-            for (int i = 0; i< cont.size(); i++) {
-                result.add(cont.at(i).asDoubleArray());
-            }
-            return new ResultSet(result);
-
-        } catch(Exception e) {
-            e.printStackTrace();
-        } finally {
-            r.end();
-        }
-        return null;
-    }
-
-    public static ResultSet hsdev(DataSet trainData, DataSet testData, Map<String, String> configs) {
-        try {
-            r.eval("source('hsdev.r', echo=TRUE)");
-
-            int n = Integer.parseInt(configs.get("hsdev.n"));
-            r.eval("n <- " + n);
-
-            int nhp = Integer.parseInt(configs.get("hsdev.nhp"));
-            r.eval("nhp <- " + nhp);
-
-            long interval = Long.parseLong(configs.get("hsdev.interval"));
-            r.eval("interval <- " + interval);
-
-            long period = Long.parseLong(configs.get("hsdev.period"));
-            r.eval("period <- " + period);
-
-            loadDataSets(r, trainData, testData);
-
-            r.eval("an2 <- hsdev_daily(train_data, test_data, n, nhp, interval, period)");
-            REXP exp = r.eval("an2");
-            RVector cont = (RVector) exp.getContent();
-
-            List<double[]> result = new ArrayList();
-            for (int i = 0; i< cont.size(); i++) {
-                result.add(cont.at(i).asDoubleArray());
-            }
-            return new ResultSet(result);
-        } catch(Exception e) {
-            e.printStackTrace();
-        } finally {
-            r.end();
-        }
-        return null;
-    }
-}
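
Every method in the deleted RFunctionInvoker above follows the same JRI round trip: assign Java arrays into the R session, source a script, evaluate an R function, and unpack the returned list into double arrays. A minimal self-contained sketch of that pattern, assuming a hypothetical script some_method.r in the working directory (the script and function names are illustrative, not part of this commit):

import java.util.ArrayList;
import java.util.List;

import org.rosuda.JRI.REXP;
import org.rosuda.JRI.RVector;
import org.rosuda.JRI.Rengine;

public class RBridgeSketch {
  public static List<double[]> invoke(double[] ts, double[] values) {
    // JRI supports only one engine per JVM, and an engine cannot be restarted
    // after end(); keeping it method-local makes the finally block safe here.
    Rengine r = new Rengine(new String[]{"--no-save"}, false, null);
    try {
      r.assign("ts", ts);                    // push Java arrays into R
      r.assign("x", values);
      r.eval("source('some_method.r')");     // hypothetical script
      REXP exp = r.eval("some_method(data.frame(ts, x))");
      RVector cont = (RVector) exp.getContent();
      List<double[]> result = new ArrayList<>();
      for (int i = 0; i < cont.size(); i++) {
        result.add(cont.at(i).asDoubleArray()); // each R list element -> double[]
      }
      return result;
    } finally {
      r.end();
    }
  }
}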
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/MetricAnomaly.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/MetricAnomaly.java
deleted file mode 100644
index 4dbb425..0000000
--- a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/MetricAnomaly.java
+++ /dev/null
@@ -1,69 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.ambari.metrics.alertservice.common;
-
-public class MetricAnomaly {
-
-    private String metricKey;
-    private long timestamp;
-    private double metricValue;
-    private MethodResult methodResult;
-
-    public MetricAnomaly(String metricKey, long timestamp, double metricValue, MethodResult methodResult) {
-        this.metricKey = metricKey;
-        this.timestamp = timestamp;
-        this.metricValue = metricValue;
-        this.methodResult = methodResult;
-    }
-
-    public String getMetricKey() {
-        return metricKey;
-    }
-
-    public void setMetricName(String metricName) {
-        this.metricKey = metricName;
-    }
-
-    public long getTimestamp() {
-        return timestamp;
-    }
-
-    public void setTimestamp(long timestamp) {
-        this.timestamp = timestamp;
-    }
-
-    public double getMetricValue() {
-        return metricValue;
-    }
-
-    public void setMetricValue(double metricValue) {
-        this.metricValue = metricValue;
-    }
-
-    public MethodResult getMethodResult() {
-        return methodResult;
-    }
-
-    public void setMethodResult(MethodResult methodResult) {
-        this.methodResult = methodResult;
-    }
-
-    public String getAnomalyAsString() {
-        return metricKey + ":" + timestamp + ":" + metricValue + ":" + methodResult.prettyPrint();
-    }
-}
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/SingleValuedTimelineMetric.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/SingleValuedTimelineMetric.java
deleted file mode 100644
index acd4452..0000000
--- a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/SingleValuedTimelineMetric.java
+++ /dev/null
@@ -1,103 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.ambari.metrics.alertservice.common;
-
-
-public class SingleValuedTimelineMetric {
-    private Long timestamp;
-    private Double value;
-    private String metricName;
-    private String appId;
-    private String instanceId;
-    private String hostName;
-    private Long startTime;
-    private String type;
-
-    public void setSingleTimeseriesValue(Long timestamp, Double value) {
-        this.timestamp = timestamp;
-        this.value = value;
-    }
-
-    public SingleValuedTimelineMetric(String metricName, String appId,
-                                      String instanceId, String hostName,
-                                      long timestamp, long startTime, String type) {
-        this.metricName = metricName;
-        this.appId = appId;
-        this.instanceId = instanceId;
-        this.hostName = hostName;
-        this.timestamp = timestamp;
-        this.startTime = startTime;
-        this.type = type;
-    }
-
-    public Long getTimestamp() {
-        return timestamp;
-    }
-
-    public long getStartTime() {
-        return startTime;
-    }
-
-    public String getType() {
-        return type;
-    }
-
-    public Double getValue() {
-        return value;
-    }
-
-    public String getMetricName() {
-        return metricName;
-    }
-
-    public String getAppId() {
-        return appId;
-    }
-
-    public String getInstanceId() {
-        return instanceId;
-    }
-
-    public String getHostName() {
-        return hostName;
-    }
-
-    public boolean equalsExceptTime(TimelineMetric metric) {
-        if (!metricName.equals(metric.getMetricName())) return false;
-        if (hostName != null ? !hostName.equals(metric.getHostName()) : metric.getHostName() != null)
-            return false;
-        if (appId != null ? !appId.equals(metric.getAppId()) : metric.getAppId() != null)
-            return false;
-        if (instanceId != null ? !instanceId.equals(metric.getInstanceId()) : metric.getInstanceId() != null) return false;
-
-        return true;
-    }
-
-    public TimelineMetric getTimelineMetric() {
-        TimelineMetric metric = new TimelineMetric();
-        metric.setMetricName(this.metricName);
-        metric.setAppId(this.appId);
-        metric.setHostName(this.hostName);
-        metric.setType(this.type);
-        metric.setInstanceId(this.instanceId);
-        metric.setStartTime(this.startTime);
-        metric.setTimestamp(this.timestamp);
-        metric.getMetricValues().put(timestamp, value);
-        return metric;
-    }
-}
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/TimelineMetric.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/TimelineMetric.java
deleted file mode 100644
index 88ad834..0000000
--- a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/TimelineMetric.java
+++ /dev/null
@@ -1,238 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.ambari.metrics.alertservice.common;
-
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-import org.apache.hadoop.classification.InterfaceAudience;
-import org.apache.hadoop.classification.InterfaceStability;
-
-import javax.xml.bind.annotation.XmlAccessType;
-import javax.xml.bind.annotation.XmlAccessorType;
-import javax.xml.bind.annotation.XmlElement;
-import javax.xml.bind.annotation.XmlRootElement;
-import java.io.Serializable;
-import java.util.HashMap;
-import java.util.Map;
-import java.util.TreeMap;
-
-@XmlRootElement(name = "metric")
-@XmlAccessorType(XmlAccessType.NONE)
-@InterfaceAudience.Public
-@InterfaceStability.Unstable
-public class TimelineMetric implements Comparable<TimelineMetric>, Serializable {
-
-    private String metricName;
-    private String appId;
-    private String instanceId;
-    private String hostName;
-    private long timestamp;
-    private long startTime;
-    private String type;
-    private String units;
-    private TreeMap<Long, Double> metricValues = new TreeMap<Long, Double>();
-    private Map<String, String> metadata = new HashMap<>();
-
-    // default
-    public TimelineMetric() {
-
-    }
-
-    public TimelineMetric(String metricName, String appId, String hostName, TreeMap<Long,Double> metricValues) {
-        this.metricName = metricName;
-        this.appId = appId;
-        this.hostName = hostName;
-        this.metricValues.putAll(metricValues);
-    }
-
-    // copy constructor
-    public TimelineMetric(TimelineMetric metric) {
-        setMetricName(metric.getMetricName());
-        setType(metric.getType());
-        setUnits(metric.getUnits());
-        setTimestamp(metric.getTimestamp());
-        setAppId(metric.getAppId());
-        setInstanceId(metric.getInstanceId());
-        setHostName(metric.getHostName());
-        setStartTime(metric.getStartTime());
-        setMetricValues(new TreeMap<Long, Double>(metric.getMetricValues()));
-    }
-
-    @XmlElement(name = "metricname")
-    public String getMetricName() {
-        return metricName;
-    }
-
-    public void setMetricName(String metricName) {
-        this.metricName = metricName;
-    }
-
-    @XmlElement(name = "appid")
-    public String getAppId() {
-        return appId;
-    }
-
-    public void setAppId(String appId) {
-        this.appId = appId;
-    }
-
-    @XmlElement(name = "instanceid")
-    public String getInstanceId() {
-        return instanceId;
-    }
-
-    public void setInstanceId(String instanceId) {
-        this.instanceId = instanceId;
-    }
-
-    @XmlElement(name = "hostname")
-    public String getHostName() {
-        return hostName;
-    }
-
-    public void setHostName(String hostName) {
-        this.hostName = hostName;
-    }
-
-    @XmlElement(name = "timestamp")
-    public long getTimestamp() {
-        return timestamp;
-    }
-
-    public void setTimestamp(long timestamp) {
-        this.timestamp = timestamp;
-    }
-
-    @XmlElement(name = "starttime")
-    public long getStartTime() {
-        return startTime;
-    }
-
-    public void setStartTime(long startTime) {
-        this.startTime = startTime;
-    }
-
-    @XmlElement(name = "type", defaultValue = "UNDEFINED")
-    public String getType() {
-        return type;
-    }
-
-    public void setType(String type) {
-        this.type = type;
-    }
-
-    @XmlElement(name = "units")
-    public String getUnits() {
-        return units;
-    }
-
-    public void setUnits(String units) {
-        this.units = units;
-    }
-
-    @XmlElement(name = "metrics")
-    public TreeMap<Long, Double> getMetricValues() {
-        return metricValues;
-    }
-
-    public void setMetricValues(TreeMap<Long, Double> metricValues) {
-        this.metricValues = metricValues;
-    }
-
-    public void addMetricValues(Map<Long, Double> metricValues) {
-        this.metricValues.putAll(metricValues);
-    }
-
-    @XmlElement(name = "metadata")
-    public Map<String,String> getMetadata () {
-        return metadata;
-    }
-
-    public void setMetadata (Map<String,String> metadata) {
-        this.metadata = metadata;
-    }
-
-    @Override
-    public boolean equals(Object o) {
-        if (this == o) return true;
-        if (o == null || getClass() != o.getClass()) return false;
-
-        TimelineMetric metric = (TimelineMetric) o;
-
-        if (!metricName.equals(metric.metricName)) return false;
-        if (hostName != null ? !hostName.equals(metric.hostName) : metric.hostName != null)
-            return false;
-        if (appId != null ? !appId.equals(metric.appId) : metric.appId != null)
-            return false;
-        if (instanceId != null ? !instanceId.equals(metric.instanceId) : metric.instanceId != null)
-            return false;
-        if (timestamp != metric.timestamp) return false;
-        if (startTime != metric.startTime) return false;
-
-        return true;
-    }
-
-    public boolean equalsExceptTime(TimelineMetric metric) {
-        if (!metricName.equals(metric.metricName)) return false;
-        if (hostName != null ? !hostName.equals(metric.hostName) : metric.hostName != null)
-            return false;
-        if (appId != null ? !appId.equals(metric.appId) : metric.appId != null)
-            return false;
-        if (instanceId != null ? !instanceId.equals(metric.instanceId) : metric.instanceId != null)
-            return false;
-
-        return true;
-    }
-
-    @Override
-    public int hashCode() {
-        int result = metricName.hashCode();
-        result = 31 * result + (appId != null ? appId.hashCode() : 0);
-        result = 31 * result + (instanceId != null ? instanceId.hashCode() : 0);
-        result = 31 * result + (hostName != null ? hostName.hashCode() : 0);
-        result = 31 * result + (int) (timestamp ^ (timestamp >>> 32));
-        return result;
-    }
-
-    @Override
-    public int compareTo(TimelineMetric other) {
-        if (timestamp > other.timestamp) {
-            return -1;
-        } else if (timestamp < other.timestamp) {
-            return 1;
-        } else {
-            return metricName.compareTo(other.metricName);
-        }
-    }
-}
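
A note on the deleted compareTo above: TimelineMetric sorts newest-first (a larger timestamp compares as smaller), with metricName breaking ties. The same ordering, expressed as a standalone comparator over a hypothetical (timestamp, name) pair:

import java.util.Arrays;
import java.util.Comparator;

class OrderingDemo {
  // Hypothetical stand-in for a TimelineMetric's (timestamp, metricName).
  static final class Point {
    final long timestamp;
    final String name;
    Point(long t, String n) { timestamp = t; name = n; }
    @Override public String toString() { return "(" + timestamp + "," + name + ")"; }
  }

  public static void main(String[] args) {
    Point[] points = { new Point(100, "b"), new Point(200, "a"), new Point(100, "a") };
    Arrays.sort(points, Comparator
        .comparingLong((Point p) -> p.timestamp).reversed() // descending time
        .thenComparing(p -> p.name));                       // ascending name on ties
    System.out.println(Arrays.toString(points)); // [(200,a), (100,a), (100,b)]
  }
}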
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/TimelineMetrics.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/TimelineMetrics.java
deleted file mode 100644
index 7df6a9c..0000000
--- a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/TimelineMetrics.java
+++ /dev/null
@@ -1,129 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.ambari.metrics.alertservice.common;
-
-import org.apache.hadoop.classification.InterfaceAudience;
-import org.apache.hadoop.classification.InterfaceStability;
-
-import javax.xml.bind.annotation.XmlAccessType;
-import javax.xml.bind.annotation.XmlAccessorType;
-import javax.xml.bind.annotation.XmlElement;
-import javax.xml.bind.annotation.XmlRootElement;
-import java.io.Serializable;
-import java.util.ArrayList;
-import java.util.List;
-
-/**
- * The class that hosts a list of timeline entities.
- */
-@XmlRootElement(name = "metrics")
-@XmlAccessorType(XmlAccessType.NONE)
-@InterfaceAudience.Public
-@InterfaceStability.Unstable
-public class TimelineMetrics implements Serializable {
-
-    private List<TimelineMetric> allMetrics = new ArrayList<TimelineMetric>();
-
-    public TimelineMetrics() {}
-
-    @XmlElement(name = "metrics")
-    public List<TimelineMetric> getMetrics() {
-        return allMetrics;
-    }
-
-    public void setMetrics(List<TimelineMetric> allMetrics) {
-        this.allMetrics = allMetrics;
-    }
-
-    private boolean isEqualTimelineMetrics(TimelineMetric metric1,
-                                           TimelineMetric metric2) {
-
-        boolean isEqual = true;
-
-        if (!metric1.getMetricName().equals(metric2.getMetricName())) {
-            return false;
-        }
-
-        if (metric1.getHostName() != null) {
-            isEqual = metric1.getHostName().equals(metric2.getHostName());
-        }
-
-        if (metric1.getAppId() != null) {
-            isEqual = metric1.getAppId().equals(metric2.getAppId());
-        }
-
-        return isEqual;
-    }
-
-    /**
-     * Merge with existing TimelineMetric if everything except startTime is
-     * the same.
-     * @param metric {@link TimelineMetric}
-     */
-    public void addOrMergeTimelineMetric(TimelineMetric metric) {
-        TimelineMetric metricToMerge = null;
-
-        if (!allMetrics.isEmpty()) {
-            for (TimelineMetric timelineMetric : allMetrics) {
-                if (timelineMetric.equalsExceptTime(metric)) {
-                    metricToMerge = timelineMetric;
-                    break;
-                }
-            }
-        }
-
-        if (metricToMerge != null) {
-            metricToMerge.addMetricValues(metric.getMetricValues());
-            if (metricToMerge.getTimestamp() > metric.getTimestamp()) {
-                metricToMerge.setTimestamp(metric.getTimestamp());
-            }
-            if (metricToMerge.getStartTime() > metric.getStartTime()) {
-                metricToMerge.setStartTime(metric.getStartTime());
-            }
-        } else {
-            allMetrics.add(metric);
-        }
-    }
-
-    // Optimization that addresses too many TreeMaps from getting created.
-    public void addOrMergeTimelineMetric(SingleValuedTimelineMetric metric) {
-        TimelineMetric metricToMerge = null;
-
-        if (!allMetrics.isEmpty()) {
-            for (TimelineMetric timelineMetric : allMetrics) {
-                if (metric.equalsExceptTime(timelineMetric)) {
-                    metricToMerge = timelineMetric;
-                    break;
-                }
-            }
-        }
-
-        if (metricToMerge != null) {
-            metricToMerge.getMetricValues().put(metric.getTimestamp(), metric.getValue());
-            if (metricToMerge.getTimestamp() > metric.getTimestamp()) {
-                metricToMerge.setTimestamp(metric.getTimestamp());
-            }
-            if (metricToMerge.getStartTime() > metric.getStartTime()) {
-                metricToMerge.setStartTime(metric.getStartTime());
-            }
-        } else {
-            allMetrics.add(metric.getTimelineMetric());
-        }
-    }
-}
-
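
The merge logic in the deleted addOrMergeTimelineMetric above folds a new metric's values into an existing series whenever everything but time matches, pulling timestamp and startTime back to the earliest seen. A small sketch of the observable behavior, assuming the deleted classes above are on the classpath:

import java.util.TreeMap;

class MergeDemo {
  public static void main(String[] args) {
    TimelineMetrics metrics = new TimelineMetrics();

    TreeMap<Long, Double> first = new TreeMap<>();
    first.put(2000L, 1.0);
    TimelineMetric m1 = new TimelineMetric("mem_free", "HOST", "h1", first);
    m1.setStartTime(2000L);
    metrics.addOrMergeTimelineMetric(m1);

    TreeMap<Long, Double> second = new TreeMap<>();
    second.put(1000L, 2.0);
    TimelineMetric m2 = new TimelineMetric("mem_free", "HOST", "h1", second);
    m2.setStartTime(1000L);
    metrics.addOrMergeTimelineMetric(m2);

    // Same name/app/host => one merged series holding both points,
    // with startTime pulled back to the earlier 1000L.
    System.out.println(metrics.getMetrics().size());                   // 1
    System.out.println(metrics.getMetrics().get(0).getMetricValues()); // {1000=2.0, 2000=1.0}
  }
}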
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/methods/ema/EmaDS.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/methods/ema/EmaDS.java
deleted file mode 100644
index 32cd96b..0000000
--- a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/methods/ema/EmaDS.java
+++ /dev/null
@@ -1,70 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.ambari.metrics.alertservice.methods.ema;
-
-import com.sun.org.apache.commons.logging.Log;
-import com.sun.org.apache.commons.logging.LogFactory;
-
-import javax.xml.bind.annotation.XmlRootElement;
-import java.io.Serializable;
-
-@XmlRootElement
-public class EmaDS implements Serializable {
-
-    String metricName;
-    String appId;
-    String hostname;
-    double ema;
-    double ems;
-    double weight;
-    int timessdev;
-    private static final Log LOG = LogFactory.getLog(EmaDS.class);
-
-    public EmaDS(String metricName, String appId, String hostname, double weight, int timessdev) {
-        this.metricName = metricName;
-        this.appId = appId;
-        this.hostname = hostname;
-        this.weight = weight;
-        this.timessdev = timessdev;
-        this.ema = 0.0;
-        this.ems = 0.0;
-    }
-
-
-    public EmaResult testAndUpdate(double metricValue) {
-
-        double diff  = Math.abs(ema - metricValue) - (timessdev * ems);
-
-        ema = weight * ema + (1 - weight) * metricValue;
-        ems = Math.sqrt(weight * Math.pow(ems, 2.0) + (1 - weight) * Math.pow(metricValue - ema, 2.0));
-        LOG.info(ema + ", " + ems);
-        return diff > 0 ? new EmaResult(diff) : null;
-    }
-
-    public void update(double metricValue) {
-        ema = weight * ema + (1 - weight) * metricValue;
-        ems = Math.sqrt(weight * Math.pow(ems, 2.0) + (1 - weight) * Math.pow(metricValue - ema, 2.0));
-        LOG.info(ema + ", " + ems);
-    }
-
-    public EmaResult test(double metricValue) {
-        double diff  = Math.abs(ema - metricValue) - (timessdev * ems);
-        return diff > 0 ? new EmaResult(diff) : null;
-    }
-
-}
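
Written out, the recurrence in testAndUpdate above is an exponentially weighted mean and deviation with weight $w$, thresholded at $n = \texttt{timessdev}$ deviations; the test runs against the pre-update state:

$$\mathrm{ema}_t = w\,\mathrm{ema}_{t-1} + (1-w)\,x_t$$
$$\mathrm{ems}_t = \sqrt{w\,\mathrm{ems}_{t-1}^2 + (1-w)\,(x_t - \mathrm{ema}_t)^2}$$
$$\text{flag } x_t \iff |\mathrm{ema}_{t-1} - x_t| > n\,\mathrm{ems}_{t-1}$$

Note that the deviation update plugs in the already-updated mean $\mathrm{ema}_t$, matching the order of assignments in the code.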
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/methods/ema/EmaModel.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/methods/ema/EmaModel.java
deleted file mode 100644
index 13a0f55..0000000
--- a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/methods/ema/EmaModel.java
+++ /dev/null
@@ -1,129 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.ambari.metrics.alertservice.methods.ema;
-
-import com.google.gson.Gson;
-import com.sun.org.apache.commons.logging.Log;
-import com.sun.org.apache.commons.logging.LogFactory;
-import org.apache.ambari.metrics.alertservice.common.MethodResult;
-import org.apache.ambari.metrics.alertservice.common.MetricAnomaly;
-import org.apache.ambari.metrics.alertservice.common.TimelineMetric;
-import org.apache.ambari.metrics.alertservice.methods.MetricAnomalyModel;
-import org.apache.spark.SparkContext;
-import org.apache.spark.mllib.util.Saveable;
-
-import javax.xml.bind.annotation.XmlElement;
-import javax.xml.bind.annotation.XmlRootElement;
-import java.io.*;
-import java.util.ArrayList;
-import java.util.HashMap;
-import java.util.List;
-import java.util.Map;
-
-@XmlRootElement
-public class EmaModel implements MetricAnomalyModel, Saveable, Serializable {
-
-    @XmlElement(name = "trackedEmas")
-    private Map<String, EmaDS> trackedEmas = new HashMap<>();
-    private static final Log LOG = LogFactory.getLog(EmaModel.class);
-
-    public List<MetricAnomaly> onNewMetric(TimelineMetric metric) {
-
-        String metricName = metric.getMetricName();
-        String appId = metric.getAppId();
-        String hostname = metric.getHostName();
-        String key = metricName + "_" + appId + "_" + hostname;
-        List<MetricAnomaly> anomalies = new ArrayList<>();
-
-        if (!trackedEmas.containsKey(metricName)) {
-            trackedEmas.put(key, new EmaDS(metricName, appId, hostname, 0.8, 3));
-        }
-
-        EmaDS emaDS = trackedEmas.get(key);
-        for (Long timestamp : metric.getMetricValues().keySet()) {
-            double metricValue = metric.getMetricValues().get(timestamp);
-            MethodResult result = emaDS.testAndUpdate(metricValue);
-            if (result != null) {
-                MetricAnomaly metricAnomaly = new MetricAnomaly(key,timestamp, metricValue, result);
-                anomalies.add(metricAnomaly);
-            }
-        }
-        return anomalies;
-    }
-
-    public EmaDS train(TimelineMetric metric, double weight, int timessdev) {
-
-        String metricName = metric.getMetricName();
-        String appId = metric.getAppId();
-        String hostname = metric.getHostName();
-        String key = metricName + "_" + appId + "_" + hostname;
-
-        EmaDS emaDS = new EmaDS(metric.getMetricName(), metric.getAppId(), metric.getHostName(), weight, timessdev);
-        LOG.info("In EMA Train step");
-        for (Long timestamp : metric.getMetricValues().keySet()) {
-            emaDS.update(metric.getMetricValues().get(timestamp));
-        }
-        trackedEmas.put(key, emaDS);
-        return emaDS;
-    }
-
-    public List<MetricAnomaly> test(TimelineMetric metric) {
-        String metricName = metric.getMetricName();
-        String appId = metric.getAppId();
-        String hostname = metric.getHostName();
-        String key = metricName + "_" + appId + "_" + hostname;
-
-        EmaDS emaDS = trackedEmas.get(key);
-
-        if (emaDS == null) {
-            return new ArrayList<>();
-        }
-
-        List<MetricAnomaly> anomalies = new ArrayList<>();
-
-        for (Long timestamp : metric.getMetricValues().keySet()) {
-            double metricValue = metric.getMetricValues().get(timestamp);
-            MethodResult result = emaDS.testAndUpdate(metricValue);
-            if (result != null) {
-                MetricAnomaly metricAnomaly = new MetricAnomaly(key,timestamp, metricValue, result);
-                anomalies.add(metricAnomaly);
-            }
-        }
-        return anomalies;
-    }
-
-    @Override
-    public void save(SparkContext sc, String path) {
-        Gson gson = new Gson();
-        try {
-            String json = gson.toJson(this);
-            try (Writer writer = new BufferedWriter(new OutputStreamWriter(
-                    new FileOutputStream(path), "utf-8"))) {
-                writer.write(json);
-            }        } catch (IOException e) {
-            LOG.error(e);
-        }
-    }
-
-    @Override
-    public String formatVersion() {
-        return "1.0";
-    }
-
-}
-
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/methods/ema/TestEmaModel.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/methods/ema/TestEmaModel.java
deleted file mode 100644
index b851dab..0000000
--- a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/methods/ema/TestEmaModel.java
+++ /dev/null
@@ -1,68 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.ambari.metrics.alertservice.methods.ema;
-
-import com.fasterxml.jackson.databind.ObjectMapper;
-import com.google.gson.Gson;
-import org.apache.ambari.metrics.alertservice.common.TimelineMetric;
-
-import java.io.*;
-import java.nio.charset.StandardCharsets;
-import java.nio.file.Files;
-import java.nio.file.Paths;
-import java.util.HashMap;
-import java.util.Map;
-import java.util.TreeMap;
-
-public class TestEmaModel {
-
-    public static void main(String[] args) throws IOException {
-
-        long now = System.currentTimeMillis();
-        TimelineMetric metric1 = new TimelineMetric();
-        metric1.setMetricName("dummy_metric");
-        metric1.setHostName("dummy_host");
-        metric1.setTimestamp(now);
-        metric1.setStartTime(now - 1000);
-        metric1.setAppId("HOST");
-        metric1.setType("Integer");
-
-        TreeMap<Long, Double> metricValues = new TreeMap<Long, Double>();
-
-        for (int i = 0; i<20;i++) {
-            double metric = 9 + Math.random();
-            metricValues.put(now - i*100, metric);
-        }
-        metric1.setMetricValues(metricValues);
-
-        EmaModel emaModel = new EmaModel();
-
-        emaModel.train(metric1, 0.8, 3);
-    }
-
-    /*
-     {{
-            put(now - 100, 1.20);
-            put(now - 200, 1.25);
-            put(now - 300, 1.30);
-            put(now - 400, 4.50);
-            put(now - 500, 1.35);
-            put(now - 400, 5.50);
-        }}
-     */
-}
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/AmbariServerInterface.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/AmbariServerInterface.java
new file mode 100644
index 0000000..0c1c6fc
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/AmbariServerInterface.java
@@ -0,0 +1,122 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.ambari.metrics.alertservice.prototype;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.codehaus.jettison.json.JSONArray;
+import org.codehaus.jettison.json.JSONObject;
+
+import java.io.BufferedReader;
+import java.io.IOException;
+import java.io.InputStreamReader;
+import java.io.Serializable;
+import java.net.HttpURLConnection;
+import java.net.URL;
+import java.nio.charset.StandardCharsets;
+import java.util.Base64;
+
+public class AmbariServerInterface implements Serializable{
+
+  private static final Log LOG = LogFactory.getLog(AmbariServerInterface.class);
+
+  private String ambariServerHost;
+  private String clusterName;
+
+  public AmbariServerInterface(String ambariServerHost, String clusterName) {
+    this.ambariServerHost = ambariServerHost;
+    this.clusterName = clusterName;
+  }
+
+  public int getPointInTimeSensitivity() {
+
+    String url = constructUri("http", ambariServerHost, "8080", "/api/v1/clusters/" + clusterName + "/alert_definitions?fields=*");
+
+    URL obj = null;
+    BufferedReader in = null;
+
+    try {
+      obj = new URL(url);
+      HttpURLConnection con = (HttpURLConnection) obj.openConnection();
+      con.setRequestMethod("GET");
+
+      String encoded = Base64.getEncoder().encodeToString(("admin:admin").getBytes(StandardCharsets.UTF_8));
+      con.setRequestProperty("Authorization", "Basic "+encoded);
+
+      int responseCode = con.getResponseCode();
+      LOG.info("Sending 'GET' request to URL : " + url);
+      LOG.info("Response Code : " + responseCode);
+
+      in = new BufferedReader(
+        new InputStreamReader(con.getInputStream()));
+
+      StringBuilder responseJsonSb = new StringBuilder();
+      String line;
+      while ((line = in.readLine()) != null) {
+        responseJsonSb.append(line);
+      }
+
+      JSONObject jsonObject = new JSONObject(responseJsonSb.toString());
+      JSONArray array = jsonObject.getJSONArray("items");
+      for(int i = 0 ; i < array.length() ; i++){
+        JSONObject alertDefn = array.getJSONObject(i).getJSONObject("AlertDefinition");
+        LOG.info("alertDefn : " + alertDefn.get("name"));
+        if (alertDefn.get("name") != null && alertDefn.get("name").equals("point_in_time_metrics_anomalies")) {
+          JSONObject sourceNode = alertDefn.getJSONObject("source");
+          JSONArray params = sourceNode.getJSONArray("parameters");
+          for(int j = 0 ; j < params.length() ; j++){
+            JSONObject param = params.getJSONObject(j);
+            if (param.get("name").equals("sensitivity")) {
+              return param.getInt("value");
+            }
+          }
+          break;
+        }
+      }
+
+    } catch (Exception e) {
+      LOG.error(e);
+    } finally {
+      if (in != null) {
+        try {
+          in.close();
+        } catch (IOException e) {
+          LOG.warn(e);
+        }
+      }
+    }
+
+    return -1;
+  }
+
+  private String constructUri(String protocol, String host, String port, String path) {
+    StringBuilder sb = new StringBuilder(protocol);
+    sb.append("://");
+    sb.append(host);
+    sb.append(":");
+    sb.append(port);
+    sb.append(path);
+    return sb.toString();
+  }
+
+//  public static void main(String[] args) {
+//    AmbariServerInterface ambariServerInterface = new AmbariServerInterface("avijayan-ams-1.openstacklocal", "c1");
+//    System.out.println(ambariServerInterface.getPointInTimeSensitivity());
+//  }
+}
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/MetricAnomalyDetectorTestInput.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/MetricAnomalyDetectorTestInput.java
new file mode 100644
index 0000000..490328a
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/MetricAnomalyDetectorTestInput.java
@@ -0,0 +1,126 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.ambari.metrics.alertservice.prototype;
+
+import javax.xml.bind.annotation.XmlRootElement;
+import java.util.List;
+import java.util.Map;
+
+@XmlRootElement
+public class MetricAnomalyDetectorTestInput {
+
+  public MetricAnomalyDetectorTestInput() {
+  }
+
+  //Train data
+  private String trainDataName;
+  private String trainDataType;
+  private Map<String, String> trainDataConfigs;
+  private int trainDataSize;
+
+  //Test data
+  private String testDataName;
+  private String testDataType;
+  private Map<String, String> testDataConfigs;
+  private int testDataSize;
+
+  //Algorithm data
+  private List<String> methods;
+  private Map<String, String> methodConfigs;
+
+  public String getTrainDataName() {
+    return trainDataName;
+  }
+
+  public void setTrainDataName(String trainDataName) {
+    this.trainDataName = trainDataName;
+  }
+
+  public String getTrainDataType() {
+    return trainDataType;
+  }
+
+  public void setTrainDataType(String trainDataType) {
+    this.trainDataType = trainDataType;
+  }
+
+  public Map<String, String> getTrainDataConfigs() {
+    return trainDataConfigs;
+  }
+
+  public void setTrainDataConfigs(Map<String, String> trainDataConfigs) {
+    this.trainDataConfigs = trainDataConfigs;
+  }
+
+  public String getTestDataName() {
+    return testDataName;
+  }
+
+  public void setTestDataName(String testDataName) {
+    this.testDataName = testDataName;
+  }
+
+  public String getTestDataType() {
+    return testDataType;
+  }
+
+  public void setTestDataType(String testDataType) {
+    this.testDataType = testDataType;
+  }
+
+  public Map<String, String> getTestDataConfigs() {
+    return testDataConfigs;
+  }
+
+  public void setTestDataConfigs(Map<String, String> testDataConfigs) {
+    this.testDataConfigs = testDataConfigs;
+  }
+
+  public Map<String, String> getMethodConfigs() {
+    return methodConfigs;
+  }
+
+  public void setMethodConfigs(Map<String, String> methodConfigs) {
+    this.methodConfigs = methodConfigs;
+  }
+
+  public int getTrainDataSize() {
+    return trainDataSize;
+  }
+
+  public void setTrainDataSize(int trainDataSize) {
+    this.trainDataSize = trainDataSize;
+  }
+
+  public int getTestDataSize() {
+    return testDataSize;
+  }
+
+  public void setTestDataSize(int testDataSize) {
+    this.testDataSize = testDataSize;
+  }
+
+  public List<String> getMethods() {
+    return methods;
+  }
+
+  public void setMethods(List<String> methods) {
+    this.methods = methods;
+  }
+}
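
A sketch of populating this input bean: the series types ("normal", "spike") and their configs are assumptions about MetricSeriesGeneratorFactory, while the method names and config keys match what the R invoker's tukeys/ema methods shown earlier read:

import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

class TestInputExample {
  static MetricAnomalyDetectorTestInput sampleInput() {
    MetricAnomalyDetectorTestInput input = new MetricAnomalyDetectorTestInput();

    input.setTrainDataName("train_series_1");
    input.setTrainDataType("normal");        // assumed generator type
    input.setTrainDataSize(500);
    input.setTrainDataConfigs(new HashMap<>());

    input.setTestDataName("test_series_1");
    input.setTestDataType("spike");          // assumed generator type
    input.setTestDataSize(100);
    input.setTestDataConfigs(new HashMap<>());

    input.setMethods(Arrays.asList("tukeys", "ema"));
    Map<String, String> methodConfigs = new HashMap<>();
    methodConfigs.put("tukeys.n", "3");      // keys as read by the R invoker
    methodConfigs.put("ema.n", "3");
    methodConfigs.put("ema.w", "0.8");
    input.setMethodConfigs(methodConfigs);
    return input;
  }
}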
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/MetricAnomalyTester.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/MetricAnomalyTester.java
new file mode 100644
index 0000000..bff8120
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/MetricAnomalyTester.java
@@ -0,0 +1,163 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.metrics.alertservice.prototype;
+
+import org.apache.ambari.metrics.alertservice.prototype.common.DataSeries;
+import org.apache.ambari.metrics.alertservice.prototype.common.ResultSet;
+import org.apache.ambari.metrics.alertservice.seriesgenerator.MetricSeriesGeneratorFactory;
+import org.apache.commons.collections.CollectionUtils;
+import org.apache.commons.lang.StringUtils;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
+import org.apache.hadoop.metrics2.sink.timeline.TimelineMetrics;
+
+import java.net.InetAddress;
+import java.net.UnknownHostException;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.TreeMap;
+
+public class MetricAnomalyTester {
+
+  public static String appId = MetricsCollectorInterface.serviceName;
+  static final Log LOG = LogFactory.getLog(MetricAnomalyTester.class);
+  static Map<String, TimelineMetric> timelineMetricMap = new HashMap<>();
+
+  public static TimelineMetrics runTestAnomalyRequest(MetricAnomalyDetectorTestInput input) throws UnknownHostException {
+
+    long currentTime = System.currentTimeMillis();
+    TimelineMetrics timelineMetrics = new TimelineMetrics();
+    String hostname = InetAddress.getLocalHost().getHostName();
+
+    //Train data
+    TimelineMetric metric1 = new TimelineMetric();
+    if (StringUtils.isNotEmpty(input.getTrainDataName())) {
+      metric1 = timelineMetricMap.get(input.getTrainDataName());
+      if (metric1 == null) {
+        metric1 = new TimelineMetric();
+        double[] trainSeries = MetricSeriesGeneratorFactory.generateSeries(input.getTrainDataType(), input.getTrainDataSize(), input.getTrainDataConfigs());
+        metric1.setMetricName(input.getTrainDataName());
+        metric1.setAppId(appId);
+        metric1.setHostName(hostname);
+        metric1.setStartTime(currentTime);
+        metric1.setInstanceId(null);
+        metric1.setMetricValues(getAsTimeSeries(currentTime, trainSeries));
+        timelineMetricMap.put(input.getTrainDataName(), metric1);
+      }
+      timelineMetrics.getMetrics().add(metric1);
+    } else {
+      LOG.error("No train data name specified");
+    }
+
+    //Test data
+    TimelineMetric metric2 = new TimelineMetric();
+    if (StringUtils.isNotEmpty(input.getTestDataName())) {
+      metric2 = timelineMetricMap.get(input.getTestDataName());
+      if (metric2 == null) {
+        metric2 = new TimelineMetric();
+        double[] testSeries = MetricSeriesGeneratorFactory.generateSeries(input.getTestDataType(), input.getTestDataSize(), input.getTestDataConfigs());
+        metric2.setMetricName(input.getTestDataName());
+        metric2.setAppId(appId);
+        metric2.setHostName(hostname);
+        metric2.setStartTime(currentTime);
+        metric2.setInstanceId(null);
+        metric2.setMetricValues(getAsTimeSeries(currentTime, testSeries));
+        timelineMetricMap.put(input.getTestDataName(), metric2);
+      }
+      timelineMetrics.getMetrics().add(metric2);
+    } else {
+      LOG.warn("No test data name specified");
+    }
+
+    //Invoke method
+    if (CollectionUtils.isNotEmpty(input.getMethods())) {
+      RFunctionInvoker.setScriptsDir("/etc/ambari-metrics-collector/conf/R-scripts");
+      for (String methodType : input.getMethods()) {
+        ResultSet result = RFunctionInvoker.executeMethod(methodType, getAsDataSeries(metric1), getAsDataSeries(metric2), input.getMethodConfigs());
+        TimelineMetric timelineMetric = getAsTimelineMetric(result, methodType, input, currentTime, hostname);
+        if (timelineMetric != null) {
+          timelineMetrics.getMetrics().add(timelineMetric);
+        }
+      }
+    } else {
+      LOG.warn("No anomaly method requested");
+    }
+
+    return timelineMetrics;
+  }
+
+
+  private static TimelineMetric getAsTimelineMetric(ResultSet result, String methodType, MetricAnomalyDetectorTestInput input, long currentTime, String hostname) {
+
+    if (result == null) {
+      return null;
+    }
+
+    TimelineMetric timelineMetric = new TimelineMetric();
+    if (methodType.equals("tukeys") || methodType.equals("ema")) {
+      timelineMetric.setMetricName(input.getTrainDataName() + "_" + input.getTestDataName() + "_" + methodType + "_" + currentTime);
+      timelineMetric.setHostName(hostname);
+      timelineMetric.setAppId(appId);
+      timelineMetric.setInstanceId(null);
+      timelineMetric.setStartTime(currentTime);
+
+      TreeMap<Long, Double> metricValues = new TreeMap<>();
+      if (result.resultset.size() > 1) {
+        double[] ts = result.resultset.get(0);
+        double[] metrics = result.resultset.get(1);
+        for (int i = 0; i < ts.length; i++) {
+          if (i == 0) {
+            timelineMetric.setStartTime((long) ts[i]);
+          }
+          metricValues.put((long) ts[i], metrics[i]);
+        }
+      }
+      timelineMetric.setMetricValues(metricValues);
+      return timelineMetric;
+    }
+    return null;
+  }
+
+
+  private static TreeMap<Long, Double> getAsTimeSeries(long currentTime, double[] values) {
+
+    long startTime = currentTime - (values.length - 1) * 60 * 1000;
+    TreeMap<Long, Double> metricValues = new TreeMap<>();
+
+    for (int i = 0; i < values.length; i++) {
+      metricValues.put(startTime, values[i]);
+      startTime += (60 * 1000);
+    }
+    return metricValues;
+  }
+
+  private static DataSeries getAsDataSeries(TimelineMetric timelineMetric) {
+
+    TreeMap<Long, Double> metricValues = timelineMetric.getMetricValues();
+    double[] timestamps = new double[metricValues.size()];
+    double[] values = new double[metricValues.size()];
+    int i = 0;
+
+    for (Long timestamp : metricValues.keySet()) {
+      timestamps[i] = timestamp;
+      values[i++] = metricValues.get(timestamp);
+    }
+    return new DataSeries(timelineMetric.getMetricName() + "_" + timelineMetric.getAppId() + "_" + timelineMetric.getHostName(), timestamps, values);
+  }
+}
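
For reference, getAsTimeSeries above lays the generated values out at one-minute spacing ending at the request time: for $n$ values and request time $T$,

$$t_i = T - (n - 1 - i)\cdot 60000, \qquad i = 0, \dots, n-1$$

so the last sample lands exactly on $T$ and the first $n-1$ minutes earlier.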
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/spark/AmsKafkaProducer.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/MetricKafkaProducer.java
similarity index 56%
rename from ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/spark/AmsKafkaProducer.java
rename to ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/MetricKafkaProducer.java
index daaee5c..8023d15 100644
--- a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/spark/AmsKafkaProducer.java
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/MetricKafkaProducer.java
@@ -15,25 +15,27 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.ambari.metrics.alertservice.spark;
+package org.apache.ambari.metrics.alertservice.prototype;
 
 import com.fasterxml.jackson.databind.JsonNode;
 import com.fasterxml.jackson.databind.ObjectMapper;
-import org.apache.ambari.metrics.alertservice.common.TimelineMetric;
-import org.apache.ambari.metrics.alertservice.common.TimelineMetrics;
-import org.apache.kafka.clients.producer.*;
+import org.apache.hadoop.metrics2.sink.timeline.TimelineMetrics;
+import org.apache.kafka.clients.producer.KafkaProducer;
+import org.apache.kafka.clients.producer.Producer;
+import org.apache.kafka.clients.producer.ProducerConfig;
+import org.apache.kafka.clients.producer.ProducerRecord;
+import org.apache.kafka.clients.producer.RecordMetadata;
 
 import java.util.Properties;
-import java.util.TreeMap;
 import java.util.concurrent.ExecutionException;
 import java.util.concurrent.Future;
 
-public class AmsKafkaProducer {
+public class MetricKafkaProducer {
 
     Producer producer;
     private static String topicName = "ambari-metrics-topic";
 
-    public AmsKafkaProducer(String kafkaServers) {
+    public MetricKafkaProducer(String kafkaServers) {
         Properties configProperties = new Properties();
         configProperties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaServers); //"avijayan-ams-2.openstacklocal:6667"
         configProperties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,"org.apache.kafka.common.serialization.ByteArraySerializer");
@@ -51,42 +53,4 @@ public class AmsKafkaProducer {
         System.out.println(kafkaFuture.isDone());
         System.out.println(kafkaFuture.get().topic());
     }
-
-    public static void main(String[] args) throws ExecutionException, InterruptedException {
-        final long now = System.currentTimeMillis();
-
-        TimelineMetrics timelineMetrics = new TimelineMetrics();
-        TimelineMetric metric1 = new TimelineMetric();
-        metric1.setMetricName("mem_free");
-        metric1.setHostName("avijayan-ams-3.openstacklocal");
-        metric1.setTimestamp(now);
-        metric1.setStartTime(now - 1000);
-        metric1.setAppId("HOST");
-        metric1.setType("Integer");
-
-        TreeMap<Long, Double> metricValues = new TreeMap<Long, Double>();
-
-        for (int i = 0; i<20;i++) {
-            double metric = 20000 + Math.random();
-            metricValues.put(now - i*100, metric);
-        }
-
-        metric1.setMetricValues(metricValues);
-
-//        metric1.setMetricValues(new TreeMap<Long, Double>() {{
-//            put(now - 100, 1.20);
-//            put(now - 200, 11.25);
-//            put(now - 300, 1.30);
-//            put(now - 400, 4.50);
-//            put(now - 500, 16.35);
-//            put(now - 400, 5.50);
-//        }});
-
-        timelineMetrics.getMetrics().add(metric1);
-
-        for (int i = 0; i<1; i++) {
-            new AmsKafkaProducer("avijayan-ams-2.openstacklocal:6667").sendMetrics(timelineMetrics);
-            Thread.sleep(1000);
-        }
-    }
 }
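
The main() removed above doubled as the usage example for this producer; a minimal equivalent sketch against the renamed class (broker address and host names are placeholders):

import java.util.TreeMap;
import java.util.concurrent.ExecutionException;

import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
import org.apache.hadoop.metrics2.sink.timeline.TimelineMetrics;

class ProducerDemo {
  public static void main(String[] args) throws ExecutionException, InterruptedException {
    long now = System.currentTimeMillis();

    TimelineMetric metric = new TimelineMetric();
    metric.setMetricName("mem_free");
    metric.setAppId("HOST");
    metric.setHostName("host1.example.com");   // placeholder host
    metric.setStartTime(now - 1000);

    TreeMap<Long, Double> values = new TreeMap<>();
    for (int i = 0; i < 20; i++) {
      values.put(now - i * 100L, 20000 + Math.random());
    }
    metric.setMetricValues(values);

    TimelineMetrics payload = new TimelineMetrics();
    payload.getMetrics().add(metric);

    new MetricKafkaProducer("broker1.example.com:6667").sendMetrics(payload);
  }
}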
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/MetricSparkConsumer.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/MetricSparkConsumer.java
new file mode 100644
index 0000000..7735d6c
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/MetricSparkConsumer.java
@@ -0,0 +1,178 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.metrics.alertservice.prototype;
+
+import com.fasterxml.jackson.databind.ObjectMapper;
+import org.apache.ambari.metrics.alertservice.prototype.methods.MetricAnomaly;
+import org.apache.ambari.metrics.alertservice.prototype.methods.ema.EmaTechnique;
+import org.apache.commons.lang.StringUtils;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
+import org.apache.hadoop.metrics2.sink.timeline.TimelineMetrics;
+import org.apache.spark.SparkConf;
+import org.apache.spark.api.java.function.Function;
+import org.apache.spark.broadcast.Broadcast;
+import org.apache.spark.streaming.Duration;
+import org.apache.spark.streaming.api.java.JavaDStream;
+import org.apache.spark.streaming.api.java.JavaPairDStream;
+import org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream;
+import org.apache.spark.streaming.api.java.JavaStreamingContext;
+import org.apache.spark.streaming.kafka.KafkaUtils;
+import scala.Tuple2;
+
+import java.util.*;
+
+public class MetricSparkConsumer {
+
+  private static final Log LOG = LogFactory.getLog(MetricSparkConsumer.class);
+  private static String groupId = "ambari-metrics-group";
+  private static String topicName = "ambari-metrics-topic";
+  private static int numThreads = 1;
+  private static long pitStartTime = System.currentTimeMillis();
+  private static long ksStartTime = pitStartTime;
+  private static long hdevStartTime = ksStartTime;
+
+  public MetricSparkConsumer() {
+  }
+
+  public static void main(String[] args) throws InterruptedException {
+
+    if (args.length < 17) {
+      System.err.println("Usage: MetricSparkConsumer <appid1,appid2> <collector_host> <port> <protocol> <zkQuorum> " +
+        "<emaW> <emaN> <tukeysN> <pitTestInterval> <pitTrainInterval> <inputFile> <ksTestInterval> <ksTrainInterval> " +
+        "<hsdevNhp> <hsdevInterval> <ambariServerHost> <clusterName>");
+      System.exit(1);
+    }
+
+    List<String> appIds = Arrays.asList(args[0].split(","));
+    String collectorHost = args[1];
+    String collectorPort = args[2];
+    String collectorProtocol = args[3];
+    String zkQuorum = args[4];
+
+    double emaW = StringUtils.isNotEmpty(args[5]) ? Double.parseDouble(args[5]) : 0.5;
+    double emaN = StringUtils.isNotEmpty(args[6]) ? Double.parseDouble(args[6]) : 3;
+    double tukeysN = StringUtils.isNotEmpty(args[7]) ? Double.parseDouble(args[7]) : 3;
+
+    long pitTestInterval = StringUtils.isNotEmpty(args[8]) ? Long.parseLong(args[8]) : 5 * 60 * 1000;
+    long pitTrainInterval = StringUtils.isNotEmpty(args[9]) ? Long.parseLong(args[9]) : 15 * 60 * 1000;
+
+    String fileName = args[10];
+    long ksTestInterval = StringUtils.isNotEmpty(args[11]) ? Long.parseLong(args[11]) : 10 * 60 * 1000;
+    long ksTrainInterval = StringUtils.isNotEmpty(args[12]) ? Long.parseLong(args[12]) : 10 * 60 * 1000;
+    int hsdevNhp = StringUtils.isNotEmpty(args[13]) ? Integer.parseInt(args[13]) : 3;
+    long hsdevInterval = StringUtils.isNotEmpty(args[14]) ? Long.parseLong(args[14]) : 30 * 60 * 1000;
+
+    String ambariServerHost = args[15];
+    String clusterName = args[16];
+
+    MetricsCollectorInterface metricsCollectorInterface = new MetricsCollectorInterface(collectorHost, collectorProtocol, collectorPort);
+
+    SparkConf sparkConf = new SparkConf().setAppName("AmbariMetricsAnomalyDetector");
+
+    JavaStreamingContext jssc = new JavaStreamingContext(sparkConf, new Duration(10000));
+
+    EmaTechnique emaTechnique = new EmaTechnique(emaW, emaN);
+    PointInTimeADSystem pointInTimeADSystem = new PointInTimeADSystem(metricsCollectorInterface,
+      tukeysN,
+      pitTestInterval,
+      pitTrainInterval,
+      ambariServerHost,
+      clusterName);
+
+    TrendADSystem trendADSystem = new TrendADSystem(metricsCollectorInterface,
+      ksTestInterval,
+      ksTrainInterval,
+      hsdevNhp,
+      fileName);
+
+    Broadcast<EmaTechnique> emaTechniqueBroadcast = jssc.sparkContext().broadcast(emaTechnique);
+    Broadcast<PointInTimeADSystem> pointInTimeADSystemBroadcast = jssc.sparkContext().broadcast(pointInTimeADSystem);
+    Broadcast<TrendADSystem> trendADSystemBroadcast = jssc.sparkContext().broadcast(trendADSystem);
+    Broadcast<MetricsCollectorInterface> metricsCollectorInterfaceBroadcast = jssc.sparkContext().broadcast(metricsCollectorInterface);
+
+    JavaPairReceiverInputDStream<String, String> messages =
+      KafkaUtils.createStream(jssc, zkQuorum, groupId, Collections.singletonMap(topicName, numThreads));
+
+    //Convert JSON string to TimelineMetrics.
+    JavaDStream<TimelineMetrics> timelineMetricsStream = messages.map(new Function<Tuple2<String, String>, TimelineMetrics>() {
+      @Override
+      public TimelineMetrics call(Tuple2<String, String> message) throws Exception {
+        ObjectMapper mapper = new ObjectMapper();
+        TimelineMetrics metrics = mapper.readValue(message._2, TimelineMetrics.class);
+        return metrics;
+      }
+    });
+
+    timelineMetricsStream.print();
+
+    //Group TimelineMetric by AppId.
+    JavaPairDStream<String, TimelineMetrics> appMetricStream = timelineMetricsStream.mapToPair(
+      timelineMetrics -> timelineMetrics.getMetrics().isEmpty()
+        ? new Tuple2<>("TEST", new TimelineMetrics())
+        : new Tuple2<>(timelineMetrics.getMetrics().get(0).getAppId(), timelineMetrics)
+    );
+
+    appMetricStream.print();
+
+    //Filter AppIds that are not needed.
+    JavaPairDStream<String, TimelineMetrics> filteredAppMetricStream = appMetricStream.filter(new Function<Tuple2<String, TimelineMetrics>, Boolean>() {
+      @Override
+      public Boolean call(Tuple2<String, TimelineMetrics> appMetricTuple) throws Exception {
+        return appIds.contains(appMetricTuple._1);
+      }
+    });
+
+    filteredAppMetricStream.print();
+
+    filteredAppMetricStream.foreachRDD(rdd -> {
+      rdd.foreach(
+        tuple2 -> {
+          long currentTime = System.currentTimeMillis();
+          EmaTechnique ema = emaTechniqueBroadcast.getValue();
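+          // NOTE: the static pitStartTime/ksStartTime/hdevStartTime fields are mutated
+          // here inside rdd.foreach(), which runs on Spark executors; in a distributed
+          // deployment the driver's copies never advance, so this bookkeeping effectively
+          // assumes local-mode execution.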
+          if (currentTime > pitStartTime + pitTestInterval) {
+            LOG.info("Running Tukeys....");
+            pointInTimeADSystemBroadcast.getValue().runTukeysAndRefineEma(ema, currentTime);
+            pitStartTime = pitStartTime + pitTestInterval;
+          }
+
+          if (currentTime > ksStartTime + ksTestInterval) {
+            LOG.info("Running KS Test....");
+            trendADSystemBroadcast.getValue().runKSTest(currentTime);
+            ksStartTime = ksStartTime + ksTestInterval;
+          }
+
+          if (currentTime > hdevStartTime + hsdevInterval) {
+            LOG.info("Running HSdev Test....");
+            trendADSystemBroadcast.getValue().runHsdevMethod();
+            hdevStartTime = hdevStartTime + hsdevInterval;
+          }
+
+          TimelineMetrics metrics = tuple2._2();
+          for (TimelineMetric timelineMetric : metrics.getMetrics()) {
+            List<MetricAnomaly> anomalies = ema.test(timelineMetric);
+            metricsCollectorInterfaceBroadcast.getValue().publish(anomalies);
+          }
+        });
+    });
+
+    jssc.start();
+    jssc.awaitTermination();
+  }
+}
+
+
+
+
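
The usage string above only names the first five parameters, but all seventeen are read unconditionally. A hypothetical launch, where every host name, path, and interval is a placeholder rather than a shipped default:

    public class MetricSparkConsumerDriver {
      public static void main(String[] args) throws InterruptedException {
        MetricSparkConsumer.main(new String[] {
          "HOST,namenode",                          // appIds to watch
          "collector.example.com",                  // collector host
          "6188",                                   // collector port
          "http",                                   // protocol
          "zk1.example.com:2181",                   // Kafka ZooKeeper quorum
          "0.9",                                    // emaW
          "3",                                      // emaN
          "3",                                      // tukeysN
          "300000",                                 // pitTestInterval (5 min)
          "900000",                                 // pitTrainInterval (15 min)
          "/etc/ambari-metrics/trend-metrics.txt",  // input file of trend metrics
          "600000",                                 // ksTestInterval (10 min)
          "600000",                                 // ksTrainInterval (10 min)
          "3",                                      // hsdevNhp
          "1800000",                                // hsdevInterval (30 min)
          "ambari.example.com",                     // Ambari server host
          "mycluster"                               // cluster name
        });
      }
    }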
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/MetricsCollectorInterface.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/MetricsCollectorInterface.java
new file mode 100644
index 0000000..7b3f63d
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/MetricsCollectorInterface.java
@@ -0,0 +1,237 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.metrics.alertservice.prototype;
+
+import org.apache.ambari.metrics.alertservice.prototype.methods.MetricAnomaly;
+import org.apache.commons.collections.CollectionUtils;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
+import org.apache.hadoop.metrics2.sink.timeline.TimelineMetrics;
+import org.codehaus.jackson.map.AnnotationIntrospector;
+import org.codehaus.jackson.map.ObjectMapper;
+import org.codehaus.jackson.map.ObjectReader;
+import org.codehaus.jackson.map.annotate.JsonSerialize;
+import org.codehaus.jackson.xc.JaxbAnnotationIntrospector;
+
+import java.io.BufferedReader;
+import java.io.IOException;
+import java.io.InputStreamReader;
+import java.io.OutputStream;
+import java.io.Serializable;
+import java.net.HttpURLConnection;
+import java.net.InetAddress;
+import java.net.URL;
+import java.net.UnknownHostException;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.TreeMap;
+
+public class MetricsCollectorInterface implements Serializable {
+
+  private static String hostName = null;
+  private String instanceId = null;
+  public final static String serviceName = "anomaly-engine";
+  private String collectorHost;
+  private String protocol;
+  private String port;
+  private static final String WS_V1_TIMELINE_METRICS = "/ws/v1/timeline/metrics";
+  private static final Log LOG = LogFactory.getLog(MetricsCollectorInterface.class);
+  private static ObjectMapper mapper;
+  private final static ObjectReader timelineObjectReader;
+
+  static {
+    mapper = new ObjectMapper();
+    AnnotationIntrospector introspector = new JaxbAnnotationIntrospector();
+    mapper.setAnnotationIntrospector(introspector);
+    mapper.setSerializationInclusion(JsonSerialize.Inclusion.NON_NULL);
+    timelineObjectReader = mapper.reader(TimelineMetrics.class);
+  }
+
+  public MetricsCollectorInterface(String collectorHost, String protocol, String port) {
+    this.collectorHost = collectorHost;
+    this.protocol = protocol;
+    this.port = port;
+    hostName = getDefaultLocalHostName();
+  }
+
+  public static String getDefaultLocalHostName() {
+
+    if (hostName != null) {
+      return hostName;
+    }
+
+    try {
+      hostName = InetAddress.getLocalHost().getCanonicalHostName();
+      return hostName;
+    } catch (UnknownHostException e) {
+      LOG.info("Error getting localhost address", e);
+    }
+    return null;
+  }
+
+  public void publish(List<MetricAnomaly> metricAnomalies) {
+    if (CollectionUtils.isNotEmpty(metricAnomalies)) {
+      LOG.info("Sending metric anomalies of size : " + metricAnomalies.size());
+      List<TimelineMetric> metricList = getTimelineMetricList(metricAnomalies);
+      if (!metricList.isEmpty()) {
+        TimelineMetrics timelineMetrics = new TimelineMetrics();
+        timelineMetrics.setMetrics(metricList);
+        emitMetrics(timelineMetrics);
+      }
+    } else {
+      LOG.info("No anomalies to send.");
+    }
+  }
+
+  private List<TimelineMetric> getTimelineMetricList(List<MetricAnomaly> metricAnomalies) {
+    List<TimelineMetric> metrics = new ArrayList<>();
+
+    if (metricAnomalies.isEmpty()) {
+      return metrics;
+    }
+
+    for (MetricAnomaly anomaly : metricAnomalies) {
+      TimelineMetric timelineMetric = new TimelineMetric();
+      timelineMetric.setMetricName(anomaly.getMetricKey());
+      timelineMetric.setAppId(serviceName + "-" + anomaly.getMethodType());
+      timelineMetric.setInstanceId(null);
+      timelineMetric.setHostName(getDefaultLocalHostName());
+      timelineMetric.setStartTime(anomaly.getTimestamp());
+      HashMap<String, String> metadata = new HashMap<>();
+      metadata.put("method", anomaly.getMethodType());
+      metadata.put("anomaly-score", String.valueOf(anomaly.getAnomalyScore()));
+      timelineMetric.setMetadata(metadata);
+      TreeMap<Long,Double> metricValues = new TreeMap<>();
+      metricValues.put(anomaly.getTimestamp(), anomaly.getMetricValue());
+      timelineMetric.setMetricValues(metricValues);
+
+      metrics.add(timelineMetric);
+    }
+    return metrics;
+  }
+
+  public boolean emitMetrics(TimelineMetrics metrics) {
+    String connectUrl = constructTimelineMetricUri();
+    String jsonData = null;
+    LOG.info("EmitMetrics connectUrl = " + connectUrl);
+    try {
+      jsonData = mapper.writeValueAsString(metrics);
+      LOG.info(jsonData);
+    } catch (IOException e) {
+      LOG.error("Unable to parse metrics", e);
+    }
+    if (jsonData != null) {
+      return emitMetricsJson(connectUrl, jsonData);
+    }
+    return false;
+  }
+
+  private HttpURLConnection getConnection(String spec) throws IOException {
+    return (HttpURLConnection) new URL(spec).openConnection();
+  }
+
+  private boolean emitMetricsJson(String connectUrl, String jsonData) {
+    int timeout = 10000;
+    HttpURLConnection connection = null;
+    try {
+      if (connectUrl == null) {
+        throw new IOException("Unknown URL. Unable to connect to metrics collector.");
+      }
+      connection = getConnection(connectUrl);
+
+      connection.setRequestMethod("POST");
+      connection.setRequestProperty("Content-Type", "application/json");
+      connection.setRequestProperty("Connection", "Keep-Alive");
+      connection.setConnectTimeout(timeout);
+      connection.setReadTimeout(timeout);
+      connection.setDoOutput(true);
+
+      if (jsonData != null) {
+        try (OutputStream os = connection.getOutputStream()) {
+          os.write(jsonData.getBytes("UTF-8"));
+        }
+      }
+
+      int statusCode = connection.getResponseCode();
+
+      if (statusCode != 200) {
+        LOG.info("Unable to POST metrics to collector, " + connectUrl + ", " +
+          "statusCode = " + statusCode);
+        return false;
+      }
+      LOG.info("Metrics posted to Collector " + connectUrl);
+      return true;
+    } catch (IOException ioe) {
+      LOG.error(ioe.getMessage());
+    }
+    return false;
+  }
+
+  private String constructTimelineMetricUri() {
+    StringBuilder sb = new StringBuilder(protocol);
+    sb.append("://");
+    sb.append(collectorHost);
+    sb.append(":");
+    sb.append(port);
+    sb.append(WS_V1_TIMELINE_METRICS);
+    return sb.toString();
+  }
+
+  public TimelineMetrics fetchMetrics(String metricName,
+                                      String appId,
+                                      String hostname,
+                                      long startTime,
+                                      long endTime) {
+
+    String url = constructTimelineMetricUri() + "?metricNames=" + metricName + "&appId=" + appId +
+      "&hostname=" + hostname + "&startTime=" + startTime + "&endTime=" + endTime;
+    LOG.info("Fetch metrics URL : " + url);
+
+    URL obj = null;
+    BufferedReader in = null;
+    TimelineMetrics timelineMetrics = new TimelineMetrics();
+
+    try {
+      obj = new URL(url);
+      HttpURLConnection con = (HttpURLConnection) obj.openConnection();
+      con.setRequestMethod("GET");
+      int responseCode = con.getResponseCode();
+      LOG.info("Sending 'GET' request to URL : " + url);
+      LOG.info("Response Code : " + responseCode);
+
+      in = new BufferedReader(
+        new InputStreamReader(con.getInputStream()));
+      timelineMetrics = timelineObjectReader.readValue(in);
+    } catch (Exception e) {
+      LOG.error(e);
+    } finally {
+      if (in != null) {
+        try {
+          in.close();
+        } catch (IOException e) {
+          LOG.warn(e);
+        }
+      }
+    }
+
+    LOG.info("Fetched " + timelineMetrics.getMetrics().size() + " metrics.");
+    return timelineMetrics;
+  }
+}
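
A minimal sketch of how the two halves of this client fit together (fetch raw series, publish computed anomalies); the collector host and port are placeholders:

    import java.util.ArrayList;
    import java.util.List;
    import org.apache.ambari.metrics.alertservice.prototype.methods.MetricAnomaly;
    import org.apache.hadoop.metrics2.sink.timeline.TimelineMetrics;

    public class CollectorInterfaceDemo {
      public static void main(String[] args) {
        MetricsCollectorInterface collector =
            new MetricsCollectorInterface("collector.example.com", "http", "6188");

        long now = System.currentTimeMillis();
        // Pull the last hour of one series for one host...
        TimelineMetrics fetched = collector.fetchMetrics(
            "mem_free", "HOST", "host-1.example.com", now - 60 * 60 * 1000, now);
        System.out.println("series fetched: " + fetched.getMetrics().size());

        // ...and push anomalies (normally produced by a detection technique) back.
        List<MetricAnomaly> anomalies = new ArrayList<>();
        collector.publish(anomalies); // logs "No anomalies to send." when empty
      }
    }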
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/PointInTimeADSystem.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/PointInTimeADSystem.java
new file mode 100644
index 0000000..b4a8593
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/PointInTimeADSystem.java
@@ -0,0 +1,256 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.metrics.alertservice.prototype;
+
+import org.apache.ambari.metrics.alertservice.prototype.common.DataSeries;
+import org.apache.ambari.metrics.alertservice.prototype.common.ResultSet;
+import org.apache.ambari.metrics.alertservice.prototype.methods.ema.EmaModel;
+import org.apache.ambari.metrics.alertservice.prototype.methods.ema.EmaTechnique;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
+import org.apache.hadoop.metrics2.sink.timeline.TimelineMetrics;
+
+import java.io.Serializable;
+import java.util.ArrayList;
+import java.util.Date;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.TreeMap;
+
+public class PointInTimeADSystem implements Serializable {
+
+  //private EmaTechnique emaTechnique;
+  private MetricsCollectorInterface metricsCollectorInterface;
+  private Map<String, Double> tukeysNMap;
+  private double defaultTukeysN = 3;
+
+  private long testIntervalMillis = 5*60*1000; //5mins
+  private long trainIntervalMillis = 15*60*1000; //15mins
+
+  private static final Log LOG = LogFactory.getLog(PointInTimeADSystem.class);
+
+  private AmbariServerInterface ambariServerInterface;
+  private int sensitivity = 50;
+  private int minSensitivity = 0;
+  private int maxSensitivity = 100;
+
+  public PointInTimeADSystem(MetricsCollectorInterface metricsCollectorInterface, double defaultTukeysN,
+                             long testIntervalMillis, long trainIntervalMillis, String ambariServerHost, String clusterName) {
+    this.metricsCollectorInterface = metricsCollectorInterface;
+    this.defaultTukeysN = defaultTukeysN;
+    this.tukeysNMap = new HashMap<>();
+    this.testIntervalMillis = testIntervalMillis;
+    this.trainIntervalMillis = trainIntervalMillis;
+    this.ambariServerInterface = new AmbariServerInterface(ambariServerHost, clusterName);
+    LOG.info("Starting PointInTimeADSystem...");
+  }
+
+  public void runTukeysAndRefineEma(EmaTechnique emaTechnique, long startTime) {
+    LOG.info("Running Tukeys for test data interval [" + new Date(startTime - testIntervalMillis) + " : " + new Date(startTime) + "], with train data period [" + new Date(startTime  - testIntervalMillis - trainIntervalMillis) + " : " + new Date(startTime - testIntervalMillis) + "]");
+
+    int requiredSensitivity = ambariServerInterface.getPointInTimeSensitivity();
+    if (requiredSensitivity == -1 || requiredSensitivity == sensitivity) {
+      LOG.info("No change in sensitivity needed.");
+    } else {
+      LOG.info("Current tukey's N value = " + defaultTukeysN);
+      if (requiredSensitivity > sensitivity) {
+        int targetSensitivity = Math.min(maxSensitivity, requiredSensitivity);
+        while (sensitivity < targetSensitivity) {
+          defaultTukeysN = defaultTukeysN + defaultTukeysN * 0.1;
+          sensitivity++;
+        }
+      } else {
+        int targetSensitivity = Math.max(minSensitivity, requiredSensitivity);
+        while (sensitivity > targetSensitivity) {
+          defaultTukeysN = defaultTukeysN - defaultTukeysN * 0.1;
+          sensitivity--;
+        }
+      }
+      LOG.info("New tukey's N value = " + defaultTukeysN);
+    }
+
+    TimelineMetrics timelineMetrics = new TimelineMetrics();
+    for (String metricKey : emaTechnique.getTrackedEmas().keySet()) {
+      LOG.info("EMA key = " + metricKey);
+      EmaModel emaModel = emaTechnique.getTrackedEmas().get(metricKey);
+      String metricName = emaModel.getMetricName();
+      String appId = emaModel.getAppId();
+      String hostname = emaModel.getHostname();
+
+      TimelineMetrics tukeysData = metricsCollectorInterface.fetchMetrics(metricName, appId, hostname, startTime - (testIntervalMillis + trainIntervalMillis),
+        startTime);
+
+      if (tukeysData.getMetrics().isEmpty()) {
+        LOG.info("No metrics fetched for Tukeys, metricKey = " + metricKey);
+        continue;
+      }
+
+      List<Double> trainTsList = new ArrayList<>();
+      List<Double> trainDataList = new ArrayList<>();
+      List<Double> testTsList = new ArrayList<>();
+      List<Double> testDataList = new ArrayList<>();
+
+      for (TimelineMetric metric : tukeysData.getMetrics()) {
+        for (Long timestamp : metric.getMetricValues().keySet()) {
+          if (timestamp <= (startTime - testIntervalMillis)) {
+            trainDataList.add(metric.getMetricValues().get(timestamp));
+            trainTsList.add((double)timestamp);
+          } else {
+            testDataList.add(metric.getMetricValues().get(timestamp));
+            testTsList.add((double)timestamp);
+          }
+        }
+      }
+
+      if (trainDataList.isEmpty() || testDataList.isEmpty() || trainDataList.size() < testDataList.size()) {
+        LOG.info("Not enough train/test data to perform analysis.");
+        continue;
+      }
+
+      String tukeysTrainSeries = "tukeysTrainSeries";
+      double[] trainTs = new double[trainTsList.size()];
+      double[] trainData = new double[trainTsList.size()];
+      for (int i = 0; i < trainTs.length; i++) {
+        trainTs[i] = trainTsList.get(i);
+        trainData[i] = trainDataList.get(i);
+      }
+
+      String tukeysTestSeries = "tukeysTestSeries";
+      double[] testTs = new double[testTsList.size()];
+      double[] testData = new double[testTsList.size()];
+      for (int i = 0; i < testTs.length; i++) {
+        testTs[i] = testTsList.get(i);
+        testData[i] = testDataList.get(i);
+      }
+
+      LOG.info("Train Size = " + trainTs.length + ", Test Size = " + testTs.length);
+
+      DataSeries tukeysTrainData = new DataSeries(tukeysTrainSeries, trainTs, trainData);
+      DataSeries tukeysTestData = new DataSeries(tukeysTestSeries, testTs, testData);
+
+      if (!tukeysNMap.containsKey(metricKey)) {
+        tukeysNMap.put(metricKey, defaultTukeysN);
+      }
+
+      Map<String, String> configs = new HashMap<>();
+      configs.put("tukeys.n", String.valueOf(tukeysNMap.get(metricKey)));
+
+      ResultSet rs = RFunctionInvoker.tukeys(tukeysTrainData, tukeysTestData, configs);
+
+      List<TimelineMetric> tukeysMetrics = getAsTimelineMetric(rs, metricName, appId, hostname);
+      LOG.info("Tukeys anomalies size : " + tukeysMetrics.size());
+      TreeMap<Long, Double> tukeysMetricValues = new TreeMap<>();
+
+      for (TimelineMetric tukeysMetric : tukeysMetrics) {
+        tukeysMetricValues.putAll(tukeysMetric.getMetricValues());
+        timelineMetrics.addOrMergeTimelineMetric(tukeysMetric);
+      }
+
+      TimelineMetrics emaData = metricsCollectorInterface.fetchMetrics(metricKey, MetricsCollectorInterface.serviceName+"-ema", MetricsCollectorInterface.getDefaultLocalHostName(), startTime - testIntervalMillis, startTime);
+      TreeMap<Long, Double> emaMetricValues = new TreeMap();
+      if (!emaData.getMetrics().isEmpty()) {
+        emaMetricValues = emaData.getMetrics().get(0).getMetricValues();
+      }
+
+      LOG.info("Ema anomalies size : " + emaMetricValues.size());
+      int tp = 0;
+      int tn = 0;
+      int fp = 0;
+      int fn = 0;
+
+      for (double ts : testTs) {
+        long timestamp = (long) ts;
+        if (tukeysMetricValues.containsKey(timestamp)) {
+          if (emaMetricValues.containsKey(timestamp)) {
+            tp++;
+          } else {
+            fn++;
+          }
+        } else {
+          if (emaMetricValues.containsKey(timestamp)) {
+            fp++;
+          } else {
+            tn++;
+          }
+        }
+      }
+
+      double recall = (tp + fn) > 0 ? (double) tp / (tp + fn) : 0;
+      double precision = (tp + fp) > 0 ? (double) tp / (tp + fp) : 0;
+      LOG.info("----------------------------");
+      LOG.info("Precision Recall values for " + metricKey);
+      LOG.info("tp=" + tp + ", fp=" + fp + ", tn=" + tn + ", fn=" + fn);
+      LOG.info("precision=" + precision + ", recall=" + recall);
+      LOG.info("----------------------------");
+
+      if (recall < 0.5) {
+        LOG.info("Increasing EMA sensitivity by 10%");
+        emaModel.updateModel(true, 10);
+      } else if (precision < 0.5) {
+        LOG.info("Decreasing EMA sensitivity by 10%");
+        emaModel.updateModel(false, 10);
+      }
+
+    }
+
+    if (emaTechnique.getTrackedEmas().isEmpty()) {
+      LOG.info("No EMA Technique keys tracked!");
+    }
+
+    if (!timelineMetrics.getMetrics().isEmpty()) {
+      metricsCollectorInterface.emitMetrics(timelineMetrics);
+    }
+  }
+
+  private static List<TimelineMetric> getAsTimelineMetric(ResultSet result, String metricName, String appId, String hostname) {
+
+    List<TimelineMetric> timelineMetrics = new ArrayList<>();
+
+    if (result == null) {
+      LOG.info("ResultSet from R call is null.");
+      return timelineMetrics;
+    }
+
+    if (result.resultset.size() > 0) {
+      double[] ts = result.resultset.get(0);
+      double[] metrics = result.resultset.get(1);
+      double[] anomalyScore = result.resultset.get(2);
+      for (int i = 0; i < ts.length; i++) {
+        TimelineMetric timelineMetric = new TimelineMetric();
+        timelineMetric.setMetricName(metricName + "_" + appId + "_" + hostname);
+        timelineMetric.setHostName(MetricsCollectorInterface.getDefaultLocalHostName());
+        timelineMetric.setAppId(MetricsCollectorInterface.serviceName + "-tukeys");
+        timelineMetric.setInstanceId(null);
+        timelineMetric.setStartTime((long) ts[i]);
+        TreeMap<Long, Double> metricValues = new TreeMap<>();
+        metricValues.put((long) ts[i], metrics[i]);
+
+        HashMap<String, String> metadata = new HashMap<>();
+        metadata.put("method", "tukeys");
+        metadata.put("anomaly-score", String.valueOf(anomalyScore[i]));
+        timelineMetric.setMetadata(metadata);
+
+        timelineMetric.setMetricValues(metricValues);
+        timelineMetrics.add(timelineMetric);
+      }
+    }
+
+    return timelineMetrics;
+  }
+}
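
The statistics themselves are delegated to an R script via RFunctionInvoker and are not part of this file. As a rough illustration of what the Tukey's test does, a test point is flagged when it falls outside the inter-quartile fences computed from the training data; a simplified sketch, not the R implementation:

    import java.util.Arrays;

    final class TukeysSketch {
      // Illustrative only: flags values outside [Q1 - n*IQR, Q3 + n*IQR],
      // the classic Tukey's fences, using a crude quantile estimate.
      static boolean isAnomaly(double[] trainData, double testValue, double n) {
        double[] sorted = Arrays.copyOf(trainData, trainData.length);
        Arrays.sort(sorted);
        double q1 = sorted[(int) (0.25 * (sorted.length - 1))];
        double q3 = sorted[(int) (0.75 * (sorted.length - 1))];
        double iqr = q3 - q1;
        return testValue < q1 - n * iqr || testValue > q3 + n * iqr;
      }
    }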
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/RFunctionInvoker.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/RFunctionInvoker.java
new file mode 100644
index 0000000..4fdf27d
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/RFunctionInvoker.java
@@ -0,0 +1,222 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.metrics.alertservice.prototype;
+
+
+import org.apache.ambari.metrics.alertservice.prototype.common.ResultSet;
+import org.apache.ambari.metrics.alertservice.prototype.common.DataSeries;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.rosuda.JRI.REXP;
+import org.rosuda.JRI.RVector;
+import org.rosuda.JRI.Rengine;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+
+public class RFunctionInvoker {
+
+  static final Log LOG = LogFactory.getLog(RFunctionInvoker.class);
+  public static Rengine r = new Rengine(new String[]{"--no-save"}, false, null);
+  private static String rScriptDir = "/usr/lib/ambari-metrics-collector/R-scripts";
+
+  private static void loadDataSets(Rengine r, DataSeries trainData, DataSeries testData) {
+    r.assign("train_ts", trainData.ts);
+    r.assign("train_x", trainData.values);
+    r.eval("train_data <- data.frame(train_ts,train_x)");
+    r.eval("names(train_data) <- c(\"TS\", " + trainData.seriesName + ")");
+
+    r.assign("test_ts", testData.ts);
+    r.assign("test_x", testData.values);
+    r.eval("test_data <- data.frame(test_ts,test_x)");
+    r.eval("names(test_data) <- c(\"TS\", " + testData.seriesName + ")");
+  }
+
+  public static void setScriptsDir(String dir) {
+    rScriptDir = dir;
+  }
+
+  public static ResultSet executeMethod(String methodType, DataSeries trainData, DataSeries testData, Map<String, String> configs) {
+
+    ResultSet result;
+    switch (methodType) {
+      case "tukeys":
+        result = tukeys(trainData, testData, configs);
+        break;
+      case "ema":
+        result = ema_global(trainData, testData, configs);
+        break;
+      case "ks":
+        result = ksTest(trainData, testData, configs);
+        break;
+      case "hsdev":
+        result = hsdev(trainData, testData, configs);
+        break;
+      default:
+        result = tukeys(trainData, testData, configs);
+        break;
+    }
+    return result;
+  }
+
+  public static ResultSet tukeys(DataSeries trainData, DataSeries testData, Map<String, String> configs) {
+    try {
+
+      REXP exp1 = r.eval("source('" + rScriptDir + "/tukeys.r" + "')");
+
+      double n = Double.parseDouble(configs.get("tukeys.n"));
+      r.eval("n <- " + n);
+
+      loadDataSets(r, trainData, testData);
+
+      r.eval("an <- ams_tukeys(train_data, test_data, n)");
+      REXP exp = r.eval("an");
+      RVector cont = (RVector) exp.getContent();
+      List<double[]> result = new ArrayList();
+      for (int i = 0; i < cont.size(); i++) {
+        result.add(cont.at(i).asDoubleArray());
+      }
+      return new ResultSet(result);
+    } catch (Exception e) {
+      LOG.error(e);
+    }
+    // Note: the shared static Rengine must stay alive across calls; invoking
+    // r.end() in a finally block here would shut it down after the first use.
+    return null;
+  }
+
+  public static ResultSet ema_global(DataSeries trainData, DataSeries testData, Map<String, String> configs) {
+    try {
+      r.eval("source('" + rScriptDir + "/ema.r" + "')");
+
+      int n = Integer.parseInt(configs.get("ema.n"));
+      r.eval("n <- " + n);
+
+      double w = Double.parseDouble(configs.get("ema.w"));
+      r.eval("w <- " + w);
+
+      loadDataSets(r, trainData, testData);
+
+      r.eval("an <- ema_global(train_data, test_data, w, n)");
+      REXP exp = r.eval("an");
+      RVector cont = (RVector) exp.getContent();
+      List<double[]> result = new ArrayList();
+      for (int i = 0; i < cont.size(); i++) {
+        result.add(cont.at(i).asDoubleArray());
+      }
+      return new ResultSet(result);
+
+    } catch (Exception e) {
+      LOG.error(e);
+    }
+    return null;
+  }
+
+  public static ResultSet ema_daily(DataSeries trainData, DataSeries testData, Map<String, String> configs) {
+    try {
+      r.eval("source('" + rScriptDir + "/ema.r" + "')");
+
+      int n = Integer.parseInt(configs.get("ema.n"));
+      r.eval("n <- " + n);
+
+      double w = Double.parseDouble(configs.get("ema.w"));
+      r.eval("w <- " + w);
+
+      loadDataSets(r, trainData, testData);
+
+      r.eval("an <- ema_daily(train_data, test_data, w, n)");
+      REXP exp = r.eval("an");
+      RVector cont = (RVector) exp.getContent();
+      List<double[]> result = new ArrayList();
+      for (int i = 0; i < cont.size(); i++) {
+        result.add(cont.at(i).asDoubleArray());
+      }
+      return new ResultSet(result);
+
+    } catch (Exception e) {
+      LOG.error(e);
+    }
+    return null;
+  }
+
+  public static ResultSet ksTest(DataSeries trainData, DataSeries testData, Map<String, String> configs) {
+    try {
+      r.eval("source('" + rScriptDir + "/kstest.r" + "')");
+
+      double p_value = Double.parseDouble(configs.get("ks.p_value"));
+      r.eval("p_value <- " + p_value);
+
+      loadDataSets(r, trainData, testData);
+
+      r.eval("an <- ams_ks(train_data, test_data, p_value)");
+      REXP exp = r.eval("an");
+      RVector cont = (RVector) exp.getContent();
+      List<double[]> result = new ArrayList();
+      for (int i = 0; i < cont.size(); i++) {
+        result.add(cont.at(i).asDoubleArray());
+      }
+      return new ResultSet(result);
+
+    } catch (Exception e) {
+      LOG.error(e);
+    }
+    return null;
+  }
+
+  public static ResultSet hsdev(DataSeries trainData, DataSeries testData, Map<String, String> configs) {
+    try {
+      r.eval("source('" + rScriptDir + "/hsdev.r" + "')");
+
+      int n = Integer.parseInt(configs.get("hsdev.n"));
+      r.eval("n <- " + n);
+
+      int nhp = Integer.parseInt(configs.get("hsdev.nhp"));
+      r.eval("nhp <- " + nhp);
+
+      long interval = Long.parseLong(configs.get("hsdev.interval"));
+      r.eval("interval <- " + interval);
+
+      long period = Long.parseLong(configs.get("hsdev.period"));
+      r.eval("period <- " + period);
+
+      loadDataSets(r, trainData, testData);
+
+      r.eval("an2 <- hsdev_daily(train_data, test_data, n, nhp, interval, period)");
+      REXP exp = r.eval("an2");
+      RVector cont = (RVector) exp.getContent();
+
+      List<double[]> result = new ArrayList();
+      for (int i = 0; i < cont.size(); i++) {
+        result.add(cont.at(i).asDoubleArray());
+      }
+      return new ResultSet(result);
+    } catch (Exception e) {
+      LOG.error(e);
+    }
+    return null;
+  }
+}
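
A minimal caller sketch, assuming R and the JRI native library are installed and the referenced scripts exist under the configured directory; the series values below are fabricated:

    import java.util.Collections;
    import org.apache.ambari.metrics.alertservice.prototype.common.DataSeries;
    import org.apache.ambari.metrics.alertservice.prototype.common.ResultSet;

    public class RFunctionInvokerDemo {
      public static void main(String[] args) {
        RFunctionInvoker.setScriptsDir("/usr/lib/ambari-metrics-collector/R-scripts");

        // Four stable training points, then one normal and one outlier test point.
        DataSeries train = new DataSeries("trainSeries",
            new double[]{1, 2, 3, 4}, new double[]{10.0, 11.0, 10.5, 10.8});
        DataSeries test = new DataSeries("testSeries",
            new double[]{5, 6}, new double[]{10.2, 55.0});

        ResultSet rs = RFunctionInvoker.tukeys(train, test,
            Collections.singletonMap("tukeys.n", "3"));
        if (rs != null) {
          System.out.println("result rows: " + rs.resultset.size());
        }
      }
    }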
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/TestSeriesInputRequest.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/TestSeriesInputRequest.java
new file mode 100644
index 0000000..7485f01
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/TestSeriesInputRequest.java
@@ -0,0 +1,88 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.metrics.alertservice.prototype;
+
+import org.apache.htrace.fasterxml.jackson.core.JsonProcessingException;
+import org.apache.htrace.fasterxml.jackson.databind.ObjectMapper;
+
+import javax.xml.bind.annotation.XmlRootElement;
+import java.util.Collections;
+import java.util.Map;
+
+@XmlRootElement
+public class TestSeriesInputRequest {
+
+  private String seriesName;
+  private String seriesType;
+  private Map<String, String> configs;
+
+  public TestSeriesInputRequest() {
+  }
+
+  public TestSeriesInputRequest(String seriesName, String seriesType, Map<String, String> configs) {
+    this.seriesName = seriesName;
+    this.seriesType = seriesType;
+    this.configs = configs;
+  }
+
+  public String getSeriesName() {
+    return seriesName;
+  }
+
+  public void setSeriesName(String seriesName) {
+    this.seriesName = seriesName;
+  }
+
+  public String getSeriesType() {
+    return seriesType;
+  }
+
+  public void setSeriesType(String seriesType) {
+    this.seriesType = seriesType;
+  }
+
+  public Map<String, String> getConfigs() {
+    return configs;
+  }
+
+  public void setConfigs(Map<String, String> configs) {
+    this.configs = configs;
+  }
+
+  @Override
+  public boolean equals(Object o) {
+    if (this == o) {
+      return true;
+    }
+    if (!(o instanceof TestSeriesInputRequest)) {
+      return false;
+    }
+    TestSeriesInputRequest anotherInput = (TestSeriesInputRequest) o;
+    return anotherInput.getSeriesName().equals(this.getSeriesName());
+  }
+
+  @Override
+  public int hashCode() {
+    return seriesName.hashCode();
+  }
+
+  public static void main(String[] args) {
+
+    ObjectMapper objectMapper = new ObjectMapper();
+    TestSeriesInputRequest testSeriesInputRequest = new TestSeriesInputRequest("test", "ema", Collections.singletonMap("key","value"));
+    try {
+      System.out.print(objectMapper.writeValueAsString(testSeriesInputRequest));
+    } catch (JsonProcessingException e) {
+      e.printStackTrace();
+    }
+  }
+}
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/TrendADSystem.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/TrendADSystem.java
new file mode 100644
index 0000000..1534b55
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/TrendADSystem.java
@@ -0,0 +1,331 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.metrics.alertservice.prototype;
+
+import org.apache.ambari.metrics.alertservice.prototype.common.DataSeries;
+import org.apache.ambari.metrics.alertservice.prototype.methods.MetricAnomaly;
+import org.apache.ambari.metrics.alertservice.prototype.methods.hsdev.HsdevTechnique;
+import org.apache.ambari.metrics.alertservice.prototype.methods.kstest.KSTechnique;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
+import org.apache.hadoop.metrics2.sink.timeline.TimelineMetrics;
+
+import java.io.BufferedReader;
+import java.io.FileReader;
+import java.io.IOException;
+import java.io.Serializable;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.Date;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.TreeMap;
+
+public class TrendADSystem implements Serializable {
+
+  private MetricsCollectorInterface metricsCollectorInterface;
+  private List<TrendMetric> trendMetrics;
+
+  private long ksTestIntervalMillis = 10 * 60 * 1000;
+  private long ksTrainIntervalMillis = 10 * 60 * 1000;
+  private KSTechnique ksTechnique;
+
+  private HsdevTechnique hsdevTechnique;
+  private int hsdevNumHistoricalPeriods = 3;
+
+  private Map<KsSingleRunKey, MetricAnomaly> trackedKsAnomalies = new HashMap<>();
+  private static final Log LOG = LogFactory.getLog(TrendADSystem.class);
+  private String inputFile = "";
+
+  public TrendADSystem(MetricsCollectorInterface metricsCollectorInterface,
+                       long ksTestIntervalMillis,
+                       long ksTrainIntervalMillis,
+                       int hsdevNumHistoricalPeriods,
+                       String inputFileName) {
+
+    this.metricsCollectorInterface = metricsCollectorInterface;
+    this.ksTestIntervalMillis = ksTestIntervalMillis;
+    this.ksTrainIntervalMillis = ksTrainIntervalMillis;
+    this.hsdevNumHistoricalPeriods = hsdevNumHistoricalPeriods;
+
+    this.ksTechnique = new KSTechnique();
+    this.hsdevTechnique = new HsdevTechnique();
+
+    trendMetrics = new ArrayList<>();
+    this.inputFile = inputFileName;
+    readInputFile(inputFileName);
+  }
+
+  public void runKSTest(long currentEndTime) {
+    readInputFile(inputFile);
+
+    long ksTestIntervalStartTime = currentEndTime - ksTestIntervalMillis;
+    LOG.info("Running KS Test for test data interval [" + new Date(ksTestIntervalStartTime) + " : " +
+      new Date(currentEndTime) + "], with train data period [" + new Date(ksTestIntervalStartTime - ksTrainIntervalMillis)
+      + " : " + new Date(ksTestIntervalStartTime) + "]");
+
+    for (TrendMetric metric : trendMetrics) {
+      String metricName = metric.metricName;
+      String appId = metric.appId;
+      String hostname = metric.hostname;
+      String key = metricName + "_" + appId + "_" + hostname;
+
+      TimelineMetrics ksData = metricsCollectorInterface.fetchMetrics(metricName, appId, hostname, ksTestIntervalStartTime - ksTrainIntervalMillis,
+        currentEndTime);
+
+      if (ksData.getMetrics().isEmpty()) {
+        LOG.info("No metrics fetched for KS, metricKey = " + key);
+        continue;
+      }
+
+      List<Double> trainTsList = new ArrayList<>();
+      List<Double> trainDataList = new ArrayList<>();
+      List<Double> testTsList = new ArrayList<>();
+      List<Double> testDataList = new ArrayList<>();
+
+      for (TimelineMetric timelineMetric : ksData.getMetrics()) {
+        for (Long timestamp : timelineMetric.getMetricValues().keySet()) {
+          if (timestamp <= ksTestIntervalStartTime) {
+            trainDataList.add(timelineMetric.getMetricValues().get(timestamp));
+            trainTsList.add((double) timestamp);
+          } else {
+            testDataList.add(timelineMetric.getMetricValues().get(timestamp));
+            testTsList.add((double) timestamp);
+          }
+        }
+      }
+
+      if (trainDataList.isEmpty() || testDataList.isEmpty() || trainDataList.size() < testDataList.size()) {
+        LOG.info("Not enough train/test data to perform KS analysis.");
+        continue;
+      }
+
+      String ksTrainSeries = "KSTrainSeries";
+      double[] trainTs = new double[trainTsList.size()];
+      double[] trainData = new double[trainTsList.size()];
+      for (int i = 0; i < trainTs.length; i++) {
+        trainTs[i] = trainTsList.get(i);
+        trainData[i] = trainDataList.get(i);
+      }
+
+      String ksTestSeries = "KSTestSeries";
+      double[] testTs = new double[testTsList.size()];
+      double[] testData = new double[testTsList.size()];
+      for (int i = 0; i < testTs.length; i++) {
+        testTs[i] = testTsList.get(i);
+        testData[i] = testDataList.get(i);
+      }
+
+      LOG.info("Train Size = " + trainTs.length + ", Test Size = " + testTs.length);
+
+      DataSeries ksTrainData = new DataSeries(ksTrainSeries, trainTs, trainData);
+      DataSeries ksTestData = new DataSeries(ksTestSeries, testTs, testData);
+
+      MetricAnomaly metricAnomaly = ksTechnique.runKsTest(key, ksTrainData, ksTestData);
+      if (metricAnomaly == null) {
+        LOG.info("No anomaly from KS test.");
+      } else {
+        LOG.info("Found Anomaly in KS Test. Publishing KS Anomaly metric....");
+        TimelineMetric timelineMetric = getAsTimelineMetric(metricAnomaly,
+          ksTestIntervalStartTime, currentEndTime, ksTestIntervalStartTime - ksTrainIntervalMillis, ksTestIntervalStartTime);
+        TimelineMetrics timelineMetrics = new TimelineMetrics();
+        timelineMetrics.addOrMergeTimelineMetric(timelineMetric);
+        metricsCollectorInterface.emitMetrics(timelineMetrics);
+
+        trackedKsAnomalies.put(new KsSingleRunKey(ksTestIntervalStartTime, currentEndTime, metricName, appId, hostname), metricAnomaly);
+      }
+    }
+
+    if (trendMetrics.isEmpty()) {
+      LOG.info("No Trend metrics tracked!!!!");
+    }
+
+  }
+
+  private TimelineMetric getAsTimelineMetric(MetricAnomaly metricAnomaly,
+                                   long testStart,
+                                   long testEnd,
+                                   long trainStart,
+                                   long trainEnd) {
+
+    TimelineMetric timelineMetric = new TimelineMetric();
+    timelineMetric.setMetricName(metricAnomaly.getMetricKey());
+    timelineMetric.setAppId(MetricsCollectorInterface.serviceName + "-" + metricAnomaly.getMethodType());
+    timelineMetric.setInstanceId(null);
+    timelineMetric.setHostName(MetricsCollectorInterface.getDefaultLocalHostName());
+    timelineMetric.setStartTime(testEnd);
+    HashMap<String, String> metadata = new HashMap<>();
+    metadata.put("method", metricAnomaly.getMethodType());
+    metadata.put("anomaly-score", String.valueOf(metricAnomaly.getAnomalyScore()));
+    metadata.put("test-start-time", String.valueOf(testStart));
+    metadata.put("train-start-time", String.valueOf(trainStart));
+    metadata.put("train-end-time", String.valueOf(trainEnd));
+    timelineMetric.setMetadata(metadata);
+    TreeMap<Long,Double> metricValues = new TreeMap<>();
+    metricValues.put(testEnd, metricAnomaly.getMetricValue());
+    timelineMetric.setMetricValues(metricValues);
+    return timelineMetric;
+
+  }
+  public void runHsdevMethod() {
+
+    List<TimelineMetric> hsdevMetricAnomalies = new ArrayList<>();
+
+    for (KsSingleRunKey ksSingleRunKey : trackedKsAnomalies.keySet()) {
+
+      long hsdevTestEnd = ksSingleRunKey.endTime;
+      long hsdevTestStart = ksSingleRunKey.startTime;
+
+      long period = hsdevTestEnd - hsdevTestStart;
+
+      long hsdevTrainStart = hsdevTestStart - (hsdevNumHistoricalPeriods) * period;
+      long hsdevTrainEnd = hsdevTestStart;
+
+      LOG.info("Running HSdev Test for test data interval [" + new Date(hsdevTestStart) + " : " +
+        new Date(hsdevTestEnd) + "], with train data period [" + new Date(hsdevTrainStart)
+        + " : " + new Date(hsdevTrainEnd) + "]");
+
+      String metricName = ksSingleRunKey.metricName;
+      String appId = ksSingleRunKey.appId;
+      String hostname = ksSingleRunKey.hostname;
+      String key = metricName + "_" + appId + "_" + hostname;
+
+      TimelineMetrics hsdevData = metricsCollectorInterface.fetchMetrics(
+        metricName,
+        appId,
+        hostname,
+        hsdevTrainStart,
+        hsdevTestEnd);
+
+      if (hsdevData.getMetrics().isEmpty()) {
+        LOG.info("No metrics fetched for HSDev, metricKey = " + key);
+        continue;
+      }
+
+      List<Double> trainTsList = new ArrayList<>();
+      List<Double> trainDataList = new ArrayList<>();
+      List<Double> testTsList = new ArrayList<>();
+      List<Double> testDataList = new ArrayList<>();
+
+      for (TimelineMetric timelineMetric : hsdevData.getMetrics()) {
+        for (Long timestamp : timelineMetric.getMetricValues().keySet()) {
+          if (timestamp <= hsdevTestStart) {
+            trainDataList.add(timelineMetric.getMetricValues().get(timestamp));
+            trainTsList.add((double) timestamp);
+          } else {
+            testDataList.add(timelineMetric.getMetricValues().get(timestamp));
+            testTsList.add((double) timestamp);
+          }
+        }
+      }
+
+      if (trainDataList.isEmpty() || testDataList.isEmpty() || trainDataList.size() < testDataList.size()) {
+        LOG.info("Not enough train/test data to perform Hsdev analysis.");
+        continue;
+      }
+
+      String hsdevTrainSeries = "HsdevTrainSeries";
+      double[] trainTs = new double[trainTsList.size()];
+      double[] trainData = new double[trainTsList.size()];
+      for (int i = 0; i < trainTs.length; i++) {
+        trainTs[i] = trainTsList.get(i);
+        trainData[i] = trainDataList.get(i);
+      }
+
+      String hsdevTestSeries = "HsdevTestSeries";
+      double[] testTs = new double[testTsList.size()];
+      double[] testData = new double[testTsList.size()];
+      for (int i = 0; i < testTs.length; i++) {
+        testTs[i] = testTsList.get(i);
+        testData[i] = testDataList.get(i);
+      }
+
+      LOG.info("Train Size = " + trainTs.length + ", Test Size = " + testTs.length);
+
+      DataSeries hsdevTrainData = new DataSeries(hsdevTrainSeries, trainTs, trainData);
+      DataSeries hsdevTestData = new DataSeries(hsdevTestSeries, testTs, testData);
+
+      MetricAnomaly metricAnomaly = hsdevTechnique.runHsdevTest(key, hsdevTrainData, hsdevTestData);
+      if (metricAnomaly == null) {
+        LOG.info("No anomaly from Hsdev test. Mismatch between KS and HSDev. ");
+        ksTechnique.updateModel(key, false, 10);
+      } else {
+        LOG.info("Found Anomaly in Hsdev Test. This confirms KS anomaly.");
+        hsdevMetricAnomalies.add(getAsTimelineMetric(metricAnomaly,
+          hsdevTestStart, hsdevTestEnd, hsdevTrainStart, hsdevTrainEnd));
+      }
+    }
+    clearTrackedKsRunKeys();
+
+    if (!hsdevMetricAnomalies.isEmpty()) {
+      LOG.info("Publishing Hsdev Anomalies....");
+      TimelineMetrics timelineMetrics = new TimelineMetrics();
+      timelineMetrics.setMetrics(hsdevMetricAnomalies);
+      metricsCollectorInterface.emitMetrics(timelineMetrics);
+    }
+  }
+
+  private void clearTrackedKsRunKeys() {
+    trackedKsAnomalies.clear();
+  }
+
+  private void readInputFile(String fileName) {
+    trendMetrics.clear();
+    try (BufferedReader br = new BufferedReader(new FileReader(fileName))) {
+      for (String line; (line = br.readLine()) != null; ) {
+        String[] splits = line.split(",");
+        LOG.info("Adding a new metric to track in Trend AD system : " + splits[0]);
+        trendMetrics.add(new TrendMetric(splits[0], splits[1], splits[2]));
+      }
+    } catch (IOException e) {
+      LOG.error("Error reading input file : " + e);
+    }
+  }
+
+  class KsSingleRunKey implements Serializable{
+
+    long startTime;
+    long endTime;
+    String metricName;
+    String appId;
+    String hostname;
+
+    public KsSingleRunKey(long startTime, long endTime, String metricName, String appId, String hostname) {
+      this.startTime = startTime;
+      this.endTime = endTime;
+      this.metricName = metricName;
+      this.appId = appId;
+      this.hostname = hostname;
+    }
+  }
+
+}
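
readInputFile() expects one metricName,appId,hostname triple per line. A sketch that produces such a file; the metric names, hosts, and path are placeholders:

    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.Arrays;

    public class TrendMetricsFileWriter {
      public static void main(String[] args) throws Exception {
        // One comma-separated metricName,appId,hostname triple per line.
        Files.write(Paths.get("/tmp/trend-metrics.txt"),
            Arrays.asList(
                "regionserver.Server.totalRequestCount,hbase,host-1.example.com",
                "mem_free,HOST,host-2.example.com"),
            StandardCharsets.UTF_8);
      }
    }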
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/methods/MetricAnomalyModel.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/TrendMetric.java
similarity index 68%
rename from ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/methods/MetricAnomalyModel.java
rename to ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/TrendMetric.java
index af33d26..3bead8b 100644
--- a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/methods/MetricAnomalyModel.java
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/TrendMetric.java
@@ -15,15 +15,19 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.ambari.metrics.alertservice.methods;
+package org.apache.ambari.metrics.alertservice.prototype;
 
-import org.apache.ambari.metrics.alertservice.common.MetricAnomaly;
-import org.apache.ambari.metrics.alertservice.common.TimelineMetric;
+import java.io.Serializable;
 
-import java.util.List;
+public class TrendMetric implements Serializable {
 
-public interface MetricAnomalyModel {
+  String metricName;
+  String appId;
+  String hostname;
 
-    public List<MetricAnomaly> onNewMetric(TimelineMetric metric);
-    public List<MetricAnomaly> test(TimelineMetric metric);
+  public TrendMetric(String metricName, String appId, String hostname) {
+    this.metricName = metricName;
+    this.appId = appId;
+    this.hostname = hostname;
+  }
 }
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/DataSet.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/common/DataSeries.java
similarity index 77%
rename from ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/DataSet.java
rename to ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/common/DataSeries.java
index a709c73..eb19857 100644
--- a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/DataSet.java
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/common/DataSeries.java
@@ -15,24 +15,24 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.ambari.metrics.alertservice.common;
+package org.apache.ambari.metrics.alertservice.prototype.common;
 
 import java.util.Arrays;
 
-public class DataSet {
+public class DataSeries {
 
-    public String metricName;
+    public String seriesName;
     public double[] ts;
     public double[] values;
 
-    public DataSet(String metricName, double[] ts, double[] values) {
-        this.metricName = metricName;
+    public DataSeries(String seriesName, double[] ts, double[] values) {
+        this.seriesName = seriesName;
         this.ts = ts;
         this.values = values;
     }
 
     @Override
     public String toString() {
-        return metricName + Arrays.toString(ts) + Arrays.toString(values);
+        return seriesName + Arrays.toString(ts) + Arrays.toString(values);
     }
 }
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/ResultSet.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/common/ResultSet.java
similarity index 91%
rename from ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/ResultSet.java
rename to ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/common/ResultSet.java
index 9415c1b..101b0e9 100644
--- a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/ResultSet.java
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/common/ResultSet.java
@@ -15,7 +15,7 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.ambari.metrics.alertservice.common;
+package org.apache.ambari.metrics.alertservice.prototype.common;
 
 
 import java.util.ArrayList;
@@ -23,7 +23,7 @@ import java.util.List;
 
 public class ResultSet {
 
-    List<double[]> resultset = new ArrayList<>();
+    public List<double[]> resultset = new ArrayList<>();
 
     public ResultSet(List<double[]> resultset) {
         this.resultset = resultset;
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/StatisticUtils.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/common/StatisticUtils.java
similarity index 55%
rename from ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/StatisticUtils.java
rename to ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/common/StatisticUtils.java
index 81bd77b..4ea4ac5 100644
--- a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/StatisticUtils.java
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/common/StatisticUtils.java
@@ -15,24 +15,25 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.ambari.metrics.alertservice.common;
+package org.apache.ambari.metrics.alertservice.prototype.common;
 
 
 import java.util.ArrayList;
+import java.util.Arrays;
 import java.util.Collection;
 import java.util.Collections;
 
 public class StatisticUtils {
 
-  public static double mean(Collection<Double> values) {
+  public static double mean(double[] values) {
     double sum = 0;
     for (double d : values) {
       sum += d;
     }
-    return sum / values.size();
+    return sum / values.length;
   }
 
-  public static double variance(Collection<Double> values) {
+  public static double variance(double[] values) {
     double avg =  mean(values);
     double variance = 0;
     for (double d : values) {
@@ -41,37 +42,21 @@ public class StatisticUtils {
     return variance;
   }
 
-  public static double sdev(Collection<Double> values, boolean useBesselsCorrection) {
+  public static double sdev(double[]  values, boolean useBesselsCorrection) {
     double variance = variance(values);
-    int n = (useBesselsCorrection) ? values.size() - 1 : values.size();
+    int n = (useBesselsCorrection) ? values.length - 1 : values.length;
     return Math.sqrt(variance / n);
   }
 
-  public static double median(Collection<Double> values) {
-    ArrayList<Double> clonedValues = new ArrayList<Double>(values);
-    Collections.sort(clonedValues);
-    int n = values.size();
+  public static double median(double[] values) {
+    double[] clonedValues = Arrays.copyOf(values, values.length);
+    Arrays.sort(clonedValues);
+    int n = values.length;
 
     if (n % 2 != 0) {
-      return clonedValues.get((n-1)/2);
+      return clonedValues[(n-1)/2];
     } else {
-      return ( clonedValues.get((n-1)/2) + clonedValues.get(n/2) ) / 2;
+      return ( clonedValues[(n-1)/2] + clonedValues[n/2] ) / 2;
     }
   }
-
-
-
-//  public static void main(String[] args) {
-//
-//    Collection<Double> values = new ArrayList<>();
-//    values.add(1.0);
-//    values.add(2.0);
-//    values.add(3.0);
-//    values.add(4.0);
-//    values.add(5.0);
-//
-//    System.out.println(mean(values));
-//    System.out.println(sdev(values, false));
-//    System.out.println(median(values));
-//  }
 }
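
For reference, the scratch main() deleted above was the only usage illustration for these helpers. A minimal sketch of the new primitive-array API (the driver class below is hypothetical; the expected sdev assumes variance() accumulates squared deviations from the mean, as the sdev(variance / n) code suggests):

import org.apache.ambari.metrics.alertservice.prototype.common.StatisticUtils;

public class StatisticUtilsExample {
  public static void main(String[] args) {
    // Same sample data the deleted scratch main() exercised, now as a primitive array.
    double[] values = {1.0, 2.0, 3.0, 4.0, 5.0};
    System.out.println(StatisticUtils.mean(values));        // 3.0
    System.out.println(StatisticUtils.sdev(values, false)); // ~1.414 (population sdev)
    System.out.println(StatisticUtils.median(values));      // 3.0
  }
}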
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/methods/ema/EmaResult.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/AnomalyDetectionTechnique.java
similarity index 62%
rename from ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/methods/ema/EmaResult.java
rename to ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/AnomalyDetectionTechnique.java
index 2d24a9c..0b10b4b 100644
--- a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/methods/ema/EmaResult.java
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/AnomalyDetectionTechnique.java
@@ -6,31 +6,27 @@
  * to you under the Apache License, Version 2.0 (the
  * "License"); you may not use this file except in compliance
  * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
  * Unless required by applicable law or agreed to in writing, software
  * distributed under the License is distributed on an "AS IS" BASIS,
  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.ambari.metrics.alertservice.methods.ema;
+package org.apache.ambari.metrics.alertservice.prototype.methods;
 
-import org.apache.ambari.metrics.alertservice.common.MethodResult;
+import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
 
-public class EmaResult extends MethodResult{
+import java.util.List;
 
-    double diff;
+public abstract class AnomalyDetectionTechnique {
 
-    public EmaResult(double diff) {
-        this.methodType = "EMA";
-        this.diff = diff;
-    }
+  protected String methodType;
 
+  public abstract List<MetricAnomaly> test(TimelineMetric metric);
 
-    @Override
-    public String prettyPrint() {
-        return methodType + "(` = " + diff + ")";
-    }
 }
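
The abstract class above defines the contract every detector implements: consume a TimelineMetric, emit zero or more MetricAnomaly objects. A hypothetical minimal subclass, purely to illustrate the contract (this fixed-threshold technique is not part of the patch):

import java.util.ArrayList;
import java.util.List;

import org.apache.ambari.metrics.alertservice.prototype.methods.AnomalyDetectionTechnique;
import org.apache.ambari.metrics.alertservice.prototype.methods.MetricAnomaly;
import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;

// Hypothetical detector: flags every datapoint above a fixed threshold.
public class ThresholdTechnique extends AnomalyDetectionTechnique {

  private final double threshold;

  public ThresholdTechnique(double threshold) {
    this.threshold = threshold;
    this.methodType = "threshold"; // inherited field
  }

  @Override
  public List<MetricAnomaly> test(TimelineMetric metric) {
    List<MetricAnomaly> anomalies = new ArrayList<>();
    String key = metric.getMetricName() + "_" + metric.getAppId() + "_" + metric.getHostName();
    metric.getMetricValues().forEach((timestamp, value) -> {
      if (value > threshold) {
        // Distance above the threshold doubles as a crude anomaly score.
        anomalies.add(new MetricAnomaly(key, timestamp, value, methodType, value - threshold));
      }
    });
    return anomalies;
  }
}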
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/MetricAnomaly.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/MetricAnomaly.java
new file mode 100644
index 0000000..da4f030
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/MetricAnomaly.java
@@ -0,0 +1,86 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.metrics.alertservice.prototype.methods;
+
+import java.io.Serializable;
+
+public class MetricAnomaly implements Serializable {
+
+  private String methodType;
+  private double anomalyScore;
+  private String metricKey;
+  private long timestamp;
+  private double metricValue;
+
+
+  public MetricAnomaly(String metricKey, long timestamp, double metricValue, String methodType, double anomalyScore) {
+    this.metricKey = metricKey;
+    this.timestamp = timestamp;
+    this.metricValue = metricValue;
+    this.methodType = methodType;
+    this.anomalyScore = anomalyScore;
+  }
+
+  public String getMethodType() {
+    return methodType;
+  }
+
+  public void setMethodType(String methodType) {
+    this.methodType = methodType;
+  }
+
+  public double getAnomalyScore() {
+    return anomalyScore;
+  }
+
+  public void setAnomalyScore(double anomalyScore) {
+    this.anomalyScore = anomalyScore;
+  }
+
+  public void setMetricKey(String metricKey) {
+    this.metricKey = metricKey;
+  }
+
+  public String getMetricKey() {
+    return metricKey;
+  }
+
+  // Alias for setMetricKey: anomalies are keyed by metricName_appId_hostname rather than by bare metric name.
+  public void setMetricName(String metricName) {
+    this.metricKey = metricName;
+  }
+
+  public long getTimestamp() {
+    return timestamp;
+  }
+
+  public void setTimestamp(long timestamp) {
+    this.timestamp = timestamp;
+  }
+
+  public double getMetricValue() {
+    return metricValue;
+  }
+
+  public void setMetricValue(double metricValue) {
+    this.metricValue = metricValue;
+  }
+
+}
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/ema/EmaModel.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/ema/EmaModel.java
new file mode 100644
index 0000000..5e1f76b
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/ema/EmaModel.java
@@ -0,0 +1,124 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.metrics.alertservice.prototype.methods.ema;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+import javax.xml.bind.annotation.XmlRootElement;
+import java.io.Serializable;
+
+// Tracks an exponentially weighted moving average (ema) and moving standard deviation (ems) for one metric series.
+@XmlRootElement
+public class EmaModel implements Serializable {
+
+  private String metricName;
+  private String hostname;
+  private String appId;
+  private double ema;
+  private double ems;
+  private double weight;
+  private double timessdev;
+
+  private int ctr = 0;
+  private static final int suppressAnomaliesThreshold = 30;
+
+  private static final Log LOG = LogFactory.getLog(EmaModel.class);
+
+  public EmaModel(String name, String hostname, String appId, double weight, double timessdev) {
+    this.metricName = name;
+    this.hostname = hostname;
+    this.appId = appId;
+    this.weight = weight;
+    this.timessdev = timessdev;
+    this.ema = 0.0;
+    this.ems = 0.0;
+  }
+
+  public String getMetricName() {
+    return metricName;
+  }
+
+  public String getHostname() {
+    return hostname;
+  }
+
+  public String getAppId() {
+    return appId;
+  }
+
+  public double testAndUpdate(double metricValue) {
+
+    double anomalyScore = 0.0;
+    if (ctr > suppressAnomaliesThreshold) {
+      anomalyScore = test(metricValue);
+    }
+    if (Math.abs(anomalyScore) < 2 * timessdev) {
+      update(metricValue);
+    } else {
+      LOG.info("Not updating model for this value");
+    }
+    ctr++;
+    LOG.info("Counter : " + ctr);
+    LOG.info("Anomaly Score for " + metricValue + " : " + anomalyScore);
+    return anomalyScore;
+  }
+
+  public void update(double metricValue) {
+    ema = weight * ema + (1 - weight) * metricValue;
+    ems = Math.sqrt(weight * Math.pow(ems, 2.0) + (1 - weight) * Math.pow(metricValue - ema, 2.0));
+    LOG.info("In update : ema = " + ema + ", ems = " + ems);
+  }
+
+  public double test(double metricValue) {
+    LOG.info("In test : ema = " + ema + ", ems = " + ems);
+    double diff = Math.abs(ema - metricValue) - (timessdev * ems);
+    LOG.info("diff = " + diff);
+    if (diff > 0) {
+      return Math.abs((metricValue - ema) / ems); //Z score
+    } else {
+      return 0.0;
+    }
+  }
+
+  public void updateModel(boolean increaseSensitivity, double percent) {
+    LOG.info("Updating model for " + metricName + " with increaseSensitivity = " + increaseSensitivity + ", percent = " + percent);
+    double delta = percent / 100;
+    if (increaseSensitivity) {
+      delta = delta * -1;
+    }
+    this.timessdev = timessdev + delta * timessdev;
+    this.weight = Math.min(1.0, weight + delta * weight);
+    LOG.info("New model parameters " + metricName + " : timessdev = " + timessdev + ", weight = " + weight);
+  }
+
+  public double getWeight() {
+    return weight;
+  }
+
+  public void setWeight(double weight) {
+    this.weight = weight;
+  }
+
+  public double getTimessdev() {
+    return timessdev;
+  }
+
+  public void setTimessdev(double timessdev) {
+    this.timessdev = timessdev;
+  }
+}
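
EmaModel keeps two running quantities per series: ema = weight * ema + (1 - weight) * value, and ems = sqrt(weight * ems^2 + (1 - weight) * (value - ema)^2). A point is only scored once the warm-up counter passes the suppression threshold (30 points), and the model skips updating itself on points it scores as anomalous. A small hypothetical driver (metric/host/app names invented):

import org.apache.ambari.metrics.alertservice.prototype.methods.ema.EmaModel;

public class EmaModelExample {
  public static void main(String[] args) {
    // Constructor order is (metricName, hostname, appId, weight, timessdev).
    EmaModel model = new EmaModel("cpu_user", "host1", "HOST", 0.8, 3.0);

    // Warm-up: scores are suppressed for the first 30 points.
    for (int i = 0; i < 40; i++) {
      model.testAndUpdate(10.0 + Math.random());
    }

    // A spike far outside ema +/- 3 * ems should yield a large Z-score.
    System.out.println(model.testAndUpdate(100.0));
  }
}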
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/methods/ema/EmaModelLoader.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/ema/EmaModelLoader.java
similarity index 64%
rename from ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/methods/ema/EmaModelLoader.java
rename to ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/ema/EmaModelLoader.java
index 0205844..62749c1 100644
--- a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/methods/ema/EmaModelLoader.java
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/ema/EmaModelLoader.java
@@ -15,32 +15,32 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.ambari.metrics.alertservice.methods.ema;
+package org.apache.ambari.metrics.alertservice.prototype.methods.ema;
 
 import com.google.gson.Gson;
-import com.sun.org.apache.commons.logging.Log;
-import com.sun.org.apache.commons.logging.LogFactory;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
 import org.apache.spark.SparkContext;
 import org.apache.spark.mllib.util.Loader;
 
-import java.io.File;
 import java.io.IOException;
 import java.nio.charset.StandardCharsets;
 import java.nio.file.Files;
 import java.nio.file.Paths;
 
-public class EmaModelLoader implements Loader<EmaModel> {
+public class EmaModelLoader implements Loader<EmaTechnique> {
     private static final Log LOG = LogFactory.getLog(EmaModelLoader.class);
 
     @Override
-    public EmaModel load(SparkContext sc, String path) {
-        Gson gson = new Gson();
-        try {
-            String fileString = new String(Files.readAllBytes(Paths.get(path)), StandardCharsets.UTF_8);
-            return gson.fromJson(fileString, EmaModel.class);
-        } catch (IOException e) {
-            LOG.error(e);
-        }
-        return null;
+    public EmaTechnique load(SparkContext sc, String path) {
+        // Prototype stub: returns a freshly initialized technique instead of deserializing the model (see the commented-out Gson path below).
+        return new EmaTechnique(0.5, 3);
+//        Gson gson = new Gson();
+//        try {
+//            String fileString = new String(Files.readAllBytes(Paths.get(path)), StandardCharsets.UTF_8);
+//            return gson.fromJson(fileString, EmaTechnique.class);
+//        } catch (IOException e) {
+//            LOG.error(e);
+//        }
+//        return null;
     }
 }
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/ema/EmaTechnique.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/ema/EmaTechnique.java
new file mode 100644
index 0000000..c005e6f
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/ema/EmaTechnique.java
@@ -0,0 +1,142 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.metrics.alertservice.prototype.methods.ema;
+
+import com.google.gson.Gson;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.ambari.metrics.alertservice.prototype.methods.MetricAnomaly;
+import org.apache.ambari.metrics.alertservice.prototype.methods.AnomalyDetectionTechnique;
+import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
+import org.apache.spark.SparkContext;
+import org.apache.spark.mllib.util.Saveable;
+
+import javax.xml.bind.annotation.XmlElement;
+import javax.xml.bind.annotation.XmlRootElement;
+import java.io.BufferedWriter;
+import java.io.FileOutputStream;
+import java.io.IOException;
+import java.io.OutputStreamWriter;
+import java.io.Serializable;
+import java.io.Writer;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+@XmlRootElement
+public class EmaTechnique extends AnomalyDetectionTechnique implements Serializable, Saveable {
+
+  @XmlElement(name = "trackedEmas")
+  private Map<String, EmaModel> trackedEmas;
+  private static final Log LOG = LogFactory.getLog(EmaTechnique.class);
+
+  private double startingWeight = 0.5;
+  private double startTimesSdev = 3.0;
+  private String methodType = "ema";
+
+  public EmaTechnique(double startingWeight, double startTimesSdev) {
+    trackedEmas = new HashMap<>();
+    this.startingWeight = startingWeight;
+    this.startTimesSdev = startTimesSdev;
+    LOG.info("New EmaTechnique......");
+  }
+
+  public List<MetricAnomaly> test(TimelineMetric metric) {
+    String metricName = metric.getMetricName();
+    String appId = metric.getAppId();
+    String hostname = metric.getHostName();
+    String key = metricName + "_" + appId + "_" + hostname;
+
+    EmaModel emaModel = trackedEmas.get(key);
+    if (emaModel == null) {
+      LOG.info("EmaModel not present for " + key);
+      LOG.info("Number of tracked Emas : " + trackedEmas.size());
+      emaModel  = new EmaModel(metricName, hostname, appId, startingWeight, startTimesSdev);
+      trackedEmas.put(key, emaModel);
+    } else {
+      LOG.info("EmaModel already present for " + key);
+    }
+
+    List<MetricAnomaly> anomalies = new ArrayList<>();
+
+    for (Long timestamp : metric.getMetricValues().keySet()) {
+      double metricValue = metric.getMetricValues().get(timestamp);
+      double anomalyScore = emaModel.testAndUpdate(metricValue);
+      if (anomalyScore > 0.0) {
+        LOG.info("Found anomaly for : " + key);
+        MetricAnomaly metricAnomaly = new MetricAnomaly(key, timestamp, metricValue, methodType, anomalyScore);
+        anomalies.add(metricAnomaly);
+      } else {
+        LOG.info("Discarding non-anomaly for : " + key);
+      }
+    }
+    return anomalies;
+  }
+
+  public boolean updateModel(TimelineMetric timelineMetric, boolean increaseSensitivity, double percent) {
+    String metricName = timelineMetric.getMetricName();
+    String appId = timelineMetric.getAppId();
+    String hostname = timelineMetric.getHostName();
+    String key = metricName + "_" + appId + "_" + hostname;
+
+
+    EmaModel emaModel = trackedEmas.get(key);
+
+    if (emaModel == null) {
+      LOG.warn("EMA Model for " + key + " not found");
+      return false;
+    }
+    emaModel.updateModel(increaseSensitivity, percent);
+
+    return true;
+  }
+
+  @Override
+  public void save(SparkContext sc, String path) {
+    Gson gson = new Gson();
+    try {
+      String json = gson.toJson(this);
+      try (Writer writer = new BufferedWriter(new OutputStreamWriter(
+        new FileOutputStream(path), "utf-8"))) {
+        writer.write(json);
+      }
+    } catch (IOException e) {
+      LOG.error(e);
+    }
+  }
+
+  @Override
+  public String formatVersion() {
+    return "1.0";
+  }
+
+  public Map<String, EmaModel> getTrackedEmas() {
+    return trackedEmas;
+  }
+
+  public double getStartingWeight() {
+    return startingWeight;
+  }
+
+  public double getStartTimesSdev() {
+    return startTimesSdev;
+  }
+
+}
+
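EmaTechnique lazily creates one EmaModel per metricName_appId_hostname key and scores every datapoint of the incoming TimelineMetric against it. A hypothetical end-to-end sketch, assuming TimelineMetric's TreeMap-based setters from ambari-metrics-common (the metric name and values are invented):

import java.util.List;
import java.util.TreeMap;

import org.apache.ambari.metrics.alertservice.prototype.methods.MetricAnomaly;
import org.apache.ambari.metrics.alertservice.prototype.methods.ema.EmaTechnique;
import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;

public class EmaTechniqueExample {
  public static void main(String[] args) {
    EmaTechnique ema = new EmaTechnique(0.9, 3.0);

    TimelineMetric metric = new TimelineMetric();
    metric.setMetricName("regionserver.Server.readRequestCount"); // invented example series
    metric.setAppId("hbase");
    metric.setHostName("host1");

    TreeMap<Long, Double> values = new TreeMap<>();
    long now = System.currentTimeMillis();
    for (int i = 0; i < 60; i++) {
      values.put(now + i * 1000L, 100.0 + (i % 5)); // mildly noisy steady state
    }
    values.put(now + 61_000L, 1000.0);              // injected spike at the end
    metric.setMetricValues(values);

    List<MetricAnomaly> anomalies = ema.test(metric);
    for (MetricAnomaly a : anomalies) {
      System.out.println(a.getMetricKey() + " @ " + a.getTimestamp() + " score=" + a.getAnomalyScore());
    }
  }
}
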
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/hsdev/HsdevTechnique.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/hsdev/HsdevTechnique.java
new file mode 100644
index 0000000..50bf9f2
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/hsdev/HsdevTechnique.java
@@ -0,0 +1,77 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.metrics.alertservice.prototype.methods.hsdev;
+
+import org.apache.ambari.metrics.alertservice.prototype.common.DataSeries;
+import org.apache.ambari.metrics.alertservice.prototype.methods.MetricAnomaly;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import static org.apache.ambari.metrics.alertservice.prototype.common.StatisticUtils.median;
+import static org.apache.ambari.metrics.alertservice.prototype.common.StatisticUtils.sdev;
+
+import java.io.Serializable;
+import java.util.Date;
+import java.util.HashMap;
+import java.util.Map;
+
+public class HsdevTechnique implements Serializable {
+
+  private Map<String, Double> hsdevMap;
+  private String methodType = "hsdev";
+  private static final Log LOG = LogFactory.getLog(HsdevTechnique.class);
+
+  public HsdevTechnique() {
+    hsdevMap = new HashMap<>();
+  }
+
+  public MetricAnomaly runHsdevTest(String key, DataSeries trainData, DataSeries testData) {
+    int testLength = testData.values.length;
+    int trainLength = trainData.values.length;
+
+    if (trainLength < testLength) {
+      LOG.info("Not enough train data.");
+      return null;
+    }
+
+    if (!hsdevMap.containsKey(key)) {
+      hsdevMap.put(key, 3.0);
+    }
+
+    double n = hsdevMap.get(key);
+
+    double historicSd = sdev(trainData.values, false);
+    double historicMedian = median(trainData.values);
+    double currentMedian = median(testData.values);
+
+    double diff = Math.abs(currentMedian - historicMedian);
+    LOG.info("Hsdev check for metric : " + key + " in the period ending " + new Date((long) testData.ts[testLength - 1]));
+    LOG.info("Current median = " + currentMedian + ", Historic Median = " + historicMedian + ", HistoricSd = " + historicSd);
+
+    if (diff > n * historicSd) {
+      double zScore = diff / historicSd;
+      LOG.info("Found anomaly for metric : " + key + ". Z Score of current series : " + zScore);
+      return new MetricAnomaly(key,
+        (long) testData.ts[testLength - 1],
+        testData.values[testLength - 1],
+        methodType,
+        zScore);
+    }
+    return null;
+  }
+
+}
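
The Hsdev test flags a test window whose median has drifted more than n historic standard deviations from the training window's median (n defaults to 3.0 per key). A hypothetical driver with a deliberately shifted test window:

import org.apache.ambari.metrics.alertservice.prototype.common.DataSeries;
import org.apache.ambari.metrics.alertservice.prototype.methods.MetricAnomaly;
import org.apache.ambari.metrics.alertservice.prototype.methods.hsdev.HsdevTechnique;

public class HsdevExample {
  public static void main(String[] args) {
    double[] trainTs = new double[60], trainValues = new double[60];
    for (int i = 0; i < 60; i++) {
      trainTs[i] = i * 60_000L;
      trainValues[i] = 10.0 + Math.random();  // historic median ~10.5
    }

    double[] testTs = new double[10], testValues = new double[10];
    for (int i = 0; i < 10; i++) {
      testTs[i] = (60 + i) * 60_000L;
      testValues[i] = 50.0 + Math.random();   // current median ~50.5: clear drift
    }

    HsdevTechnique hsdev = new HsdevTechnique();
    MetricAnomaly anomaly = hsdev.runHsdevTest("mem_free_demo",
        new DataSeries("train", trainTs, trainValues),
        new DataSeries("test", testTs, testValues));

    if (anomaly != null) {
      System.out.println("hsdev z-score = " + anomaly.getAnomalyScore());
    }
  }
}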
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/kstest/KSTechnique.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/kstest/KSTechnique.java
new file mode 100644
index 0000000..ff8dbcf
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/kstest/KSTechnique.java
@@ -0,0 +1,101 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.ambari.metrics.alertservice.prototype.methods.kstest;
+
+import org.apache.ambari.metrics.alertservice.prototype.RFunctionInvoker;
+import org.apache.ambari.metrics.alertservice.prototype.common.DataSeries;
+import org.apache.ambari.metrics.alertservice.prototype.common.ResultSet;
+import org.apache.ambari.metrics.alertservice.prototype.methods.MetricAnomaly;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+import java.io.Serializable;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.Map;
+
+public class KSTechnique implements Serializable {
+
+  private String methodType = "ks";
+  private Map<String, Double> pValueMap;
+  private static final Log LOG = LogFactory.getLog(KSTechnique.class);
+
+  public KSTechnique() {
+    pValueMap = new HashMap<>();
+  }
+
+  public MetricAnomaly runKsTest(String key, DataSeries trainData, DataSeries testData) {
+
+    int testLength = testData.values.length;
+    int trainLength = trainData.values.length;
+
+    if (trainLength < testLength) {
+      LOG.info("Not enough train data.");
+      return null;
+    }
+
+    if (!pValueMap.containsKey(key)) {
+      pValueMap.put(key, 0.05);
+    }
+    double pValue = pValueMap.get(key);
+
+    ResultSet result = RFunctionInvoker.ksTest(trainData, testData, Collections.singletonMap("ks.p_value", String.valueOf(pValue)));
+    if (result == null) {
+      LOG.error("Resultset is null when invoking KS R function...");
+      return null;
+    }
+
+    if (result.resultset.size() >= 4) { // guard the get(2)/get(3) reads below
+
+      LOG.info("KS result size = " + result.resultset.get(0).length);
+      LOG.info("p_value = " + result.resultset.get(3)[0]);
+      double dValue = result.resultset.get(2)[0];
+
+      return new MetricAnomaly(key,
+        (long) testData.ts[testLength - 1],
+        testData.values[testLength - 1],
+        methodType,
+        dValue);
+    }
+
+    return null;
+  }
+
+  public void updateModel(String metricKey, boolean increaseSensitivity, double percent) {
+
+    LOG.info("Updating KS model for " + metricKey + " with increaseSensitivity = " + increaseSensitivity + ", percent = " + percent);
+
+    if (!pValueMap.containsKey(metricKey)) {
+      LOG.error("Unknown metric key : " + metricKey);
+      LOG.info("pValueMap :" + pValueMap.toString());
+      return;
+    }
+
+    double delta = percent / 100;
+    if (!increaseSensitivity) {
+      delta = delta * -1;
+    }
+
+    double pValue = pValueMap.get(metricKey);
+    double newPValue = Math.min(1.0, pValue + delta * pValue);
+    pValueMap.put(metricKey, newPValue);
+    LOG.info("New pValue = " + newPValue);
+  }
+
+}
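
KSTechnique delegates the two-sample Kolmogorov-Smirnov test to RFunctionInvoker, so it requires the prototype's R bridge to be configured; without it, ksTest returns null and no anomaly is reported. A hedged sketch under that assumption (the series values and metric key are invented):

import org.apache.ambari.metrics.alertservice.prototype.common.DataSeries;
import org.apache.ambari.metrics.alertservice.prototype.methods.MetricAnomaly;
import org.apache.ambari.metrics.alertservice.prototype.methods.kstest.KSTechnique;

public class KsExample {
  public static void main(String[] args) {
    double[] ts = new double[30], train = new double[30], test = new double[30];
    for (int i = 0; i < 30; i++) {
      ts[i] = i;
      train[i] = Math.random();        // baseline distribution
      test[i] = 5.0 + Math.random();   // shifted distribution
    }

    KSTechnique ks = new KSTechnique();
    MetricAnomaly anomaly = ks.runKsTest("demo_metric",
        new DataSeries("train", ts, train),
        new DataSeries("test", ts, test));
    System.out.println(anomaly == null
        ? "no anomaly reported (or R bridge unavailable)"
        : "KS D statistic = " + anomaly.getAnomalyScore());

    // Once the key has been seen, the per-key p-value threshold can be tuned:
    ks.updateModel("demo_metric", true, 10); // raise threshold by 10% => more sensitive
  }
}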
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/MethodResult.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/AbstractMetricSeries.java
similarity index 77%
rename from ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/MethodResult.java
rename to ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/AbstractMetricSeries.java
index 6bf58df..a8e31bf 100644
--- a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/MethodResult.java
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/AbstractMetricSeries.java
@@ -15,13 +15,11 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.ambari.metrics.alertservice.common;
+package org.apache.ambari.metrics.alertservice.seriesgenerator;
 
-public abstract class MethodResult {
-    protected String methodType;
-    public abstract String prettyPrint();
+public interface AbstractMetricSeries {
+
+  public double nextValue();
+  public double[] getSeries(int n);
 
-    public String getMethodType() {
-        return methodType;
-    }
 }
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/DualBandMetricSeries.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/DualBandMetricSeries.java
new file mode 100644
index 0000000..4158ff4
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/DualBandMetricSeries.java
@@ -0,0 +1,88 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.metrics.alertservice.seriesgenerator;
+
+import java.util.Random;
+
+public class DualBandMetricSeries implements AbstractMetricSeries {
+
+  double lowBandValue = 0.0;
+  double lowBandDeviationPercentage = 0.0;
+  int lowBandPeriodSize = 10;
+  double highBandValue = 1.0;
+  double highBandDeviationPercentage = 0.0;
+  int highBandPeriodSize = 10;
+
+  Random random = new Random();
+  double lowBandValueLowerLimit, lowBandValueHigherLimit;
+  double highBandLowerLimit, highBandUpperLimit;
+  int l = 0, h = 0;
+
+  public DualBandMetricSeries(double lowBandValue,
+                              double lowBandDeviationPercentage,
+                              int lowBandPeriodSize,
+                              double highBandValue,
+                              double highBandDeviationPercentage,
+                              int highBandPeriodSize) {
+    this.lowBandValue = lowBandValue;
+    this.lowBandDeviationPercentage = lowBandDeviationPercentage;
+    this.lowBandPeriodSize = lowBandPeriodSize;
+    this.highBandValue = highBandValue;
+    this.highBandDeviationPercentage = highBandDeviationPercentage;
+    this.highBandPeriodSize = highBandPeriodSize;
+    init();
+  }
+
+  private void init() {
+    lowBandValueLowerLimit = lowBandValue - lowBandDeviationPercentage * lowBandValue;
+    lowBandValueHigherLimit = lowBandValue + lowBandDeviationPercentage * lowBandValue;
+    highBandLowerLimit = highBandValue - highBandDeviationPercentage * highBandValue;
+    highBandUpperLimit = highBandValue + highBandDeviationPercentage * highBandValue;
+  }
+
+  @Override
+  public double nextValue() {
+
+    double value = 0.0;
+
+    if (l < lowBandPeriodSize) {
+      value = lowBandValueLowerLimit + (lowBandValueHigherLimit - lowBandValueLowerLimit) * random.nextDouble();
+      l++;
+    } else if (h < highBandPeriodSize) {
+      value = highBandLowerLimit + (highBandUpperLimit - highBandLowerLimit) * random.nextDouble();
+      h++;
+    }
+
+    if (l == lowBandPeriodSize && h == highBandPeriodSize) {
+      l = 0;
+      h = 0;
+    }
+
+    return value;
+  }
+
+  @Override
+  public double[] getSeries(int n) {
+    double[] series = new double[n];
+    for (int i = 0; i < n; i++) {
+      series[i] = nextValue();
+    }
+    return series;
+  }
+
+}
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/MetricSeriesGeneratorFactory.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/MetricSeriesGeneratorFactory.java
new file mode 100644
index 0000000..1e37ff3
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/MetricSeriesGeneratorFactory.java
@@ -0,0 +1,379 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.ambari.metrics.alertservice.seriesgenerator;
+
+import java.util.Arrays;
+import java.util.Map;
+import java.util.Random;
+
+public class MetricSeriesGeneratorFactory {
+
+  /**
+   * Return a uniformly distributed data series with some deviation % and outliers.
+   *
+   * @param n                                size of the data series
+   * @param value                            the value around which the uniform data series is centered
+   * @param deviationPercentage              the allowed deviation % on either side of the value. For example, if value = 10 and deviation % is 0.1, the series values lie between 9 and 11.
+   * @param outlierProbability               the probability of finding an outlier in the series.
+   * @param outlierDeviationLowerPercentage  min % an outlier should be away from the value. If value = 10 and outlierDeviationLowerPercentage = 30%, the outlier bounds are 7 and 13.
+   * @param outlierDeviationHigherPercentage max % an outlier should be away from the value. If value = 10 and outlierDeviationHigherPercentage = 60%, the outlier bounds are 4 and 16.
+   * @param outliersAboveValue               whether outliers should be above (true) or below (false) the value.
+   * @return uniform series
+   */
+  public static double[] createUniformSeries(int n,
+                                             double value,
+                                             double deviationPercentage,
+                                             double outlierProbability,
+                                             double outlierDeviationLowerPercentage,
+                                             double outlierDeviationHigherPercentage,
+                                             boolean outliersAboveValue) {
+
+    UniformMetricSeries metricSeries = new UniformMetricSeries(value,
+      deviationPercentage,
+      outlierProbability,
+      outlierDeviationLowerPercentage,
+      outlierDeviationHigherPercentage,
+      outliersAboveValue);
+
+    return metricSeries.getSeries(n);
+  }
+
+
+  /**
+   * Returns a normally distributed series.
+   *
+   * @param n                             size of the data series
+   * @param mean                          mean of the distribution
+   * @param sd                            standard deviation of the distribution
+   * @param outlierProbability            the probability of finding an outlier in the series
+   * @param outlierDeviationSDTimesLower  lower limit of an outlier, in multiples of sd from the mean
+   * @param outlierDeviationSDTimesHigher higher limit of an outlier, in multiples of sd from the mean
+   * @param outlierOnRightEnd             whether outliers should be on the right end (true) or the left end (false) of the distribution
+   * @return normal series
+   */
+  public static double[] createNormalSeries(int n,
+                                            double mean,
+                                            double sd,
+                                            double outlierProbability,
+                                            double outlierDeviationSDTimesLower,
+                                            double outlierDeviationSDTimesHigher,
+                                            boolean outlierOnRightEnd) {
+
+
+    NormalMetricSeries metricSeries = new NormalMetricSeries(mean,
+      sd,
+      outlierProbability,
+      outlierDeviationSDTimesLower,
+      outlierDeviationSDTimesHigher,
+      outlierOnRightEnd);
+
+    return metricSeries.getSeries(n);
+  }
+
+
+  /**
+   * Returns a monotonically increasing / decreasing series.
+   *
+   * @param n                                size of the data series
+   * @param startValue                       start value of the monotonic sequence
+   * @param slope                            direction of monotonicity: m > 0 for increasing and m < 0 for decreasing.
+   * @param deviationPercentage              the allowed deviation % on either side of the current 'y' value. For example, if the current value is 10 according to the slope and deviation % is 0.1, the series value lies between 9 and 11.
+   * @param outlierProbability               the probability of finding an outlier in the series.
+   * @param outlierDeviationLowerPercentage  min % an outlier should be away from the current 'y' value. If y = 10 and outlierDeviationLowerPercentage = 30%, the outlier bounds are 7 and 13.
+   * @param outlierDeviationHigherPercentage max % an outlier should be away from the current 'y' value. If y = 10 and outlierDeviationHigherPercentage = 60%, the outlier bounds are 4 and 16.
+   * @param outliersAboveValue               whether outliers should be above (true) or below (false) the 'y' value.
+   * @return monotonic series
+   */
+  public static double[] createMonotonicSeries(int n,
+                                               double startValue,
+                                               double slope,
+                                               double deviationPercentage,
+                                               double outlierProbability,
+                                               double outlierDeviationLowerPercentage,
+                                               double outlierDeviationHigherPercentage,
+                                               boolean outliersAboveValue) {
+
+    MonotonicMetricSeries metricSeries = new MonotonicMetricSeries(startValue,
+      slope,
+      deviationPercentage,
+      outlierProbability,
+      outlierDeviationLowerPercentage,
+      outlierDeviationHigherPercentage,
+      outliersAboveValue);
+
+    return metricSeries.getSeries(n);
+  }
+
+
+  /**
+   * Returns a dual band series (lower and higher).
+   *
+   * @param n                           size of the data series
+   * @param lowBandValue                low band center value
+   * @param lowBandDeviationPercentage  low band deviation %
+   * @param lowBandPeriodSize           number of points in each low band period
+   * @param highBandValue               high band center value
+   * @param highBandDeviationPercentage high band deviation %
+   * @param highBandPeriodSize          number of points in each high band period
+   * @return dual band series
+   */
+  public static double[] getDualBandSeries(int n,
+                                           double lowBandValue,
+                                           double lowBandDeviationPercentage,
+                                           int lowBandPeriodSize,
+                                           double highBandValue,
+                                           double highBandDeviationPercentage,
+                                           int highBandPeriodSize) {
+
+    DualBandMetricSeries metricSeries  = new DualBandMetricSeries(lowBandValue,
+      lowBandDeviationPercentage,
+      lowBandPeriodSize,
+      highBandValue,
+      highBandDeviationPercentage,
+      highBandPeriodSize);
+
+    return metricSeries.getSeries(n);
+  }
+
+  /**
+   * Returns a step function series.
+   *
+   * @param n                              size of the data series
+   * @param startValue                     start steady value
+   * @param steadyValueDeviationPercentage required deviation % in the steady state value
+   * @param steadyPeriodSlope              direction of monotonicity: m > 0 for increasing, m < 0 for decreasing, m = 0 for neither.
+   * @param steadyPeriodMinSize            min size of a step period
+   * @param steadyPeriodMaxSize            max size of a step period
+   * @param stepChangePercentage           increase / decrease in the steady state that denotes a step, as a deviation percentage from the last value.
+   * @param upwardStep                     whether the step is upward (true) or downward (false)
+   * @return step function series
+   */
+  public static double[] getStepFunctionSeries(int n,
+                                               double startValue,
+                                               double steadyValueDeviationPercentage,
+                                               double steadyPeriodSlope,
+                                               int steadyPeriodMinSize,
+                                               int steadyPeriodMaxSize,
+                                               double stepChangePercentage,
+                                               boolean upwardStep) {
+
+    StepFunctionMetricSeries metricSeries = new StepFunctionMetricSeries(startValue,
+      steadyValueDeviationPercentage,
+      steadyPeriodSlope,
+      steadyPeriodMinSize,
+      steadyPeriodMaxSize,
+      stepChangePercentage,
+      upwardStep);
+
+    return metricSeries.getSeries(n);
+  }
+
+  /**
+   * Series with a small period of turbulence and then back to steady.
+   *
+   * @param n                                        size of the data series
+   * @param steadyStateValue                         steady state center value
+   * @param steadyStateDeviationPercentage           steady state deviation in percentage
+   * @param turbulentPeriodDeviationLowerPercentage  turbulent state lower limit in terms of percentage from centre value.
+   * @param turbulentPeriodDeviationHigherPercentage turbulent state higher limit in terms of percentage from centre value.
+   * @param turbulentPeriodLength                    turbulent period length (number of points)
+   * @param turbulentStatePosition                   position of the turbulent state: 0 = at the beginning, 1 = in the middle (25% - 50% of the series), 2 = at the end of the series.
+   * @return steady series with a turbulent period
+   */
+  public static double[] getSteadySeriesWithTurbulentPeriod(int n,
+                                                            double steadyStateValue,
+                                                            double steadyStateDeviationPercentage,
+                                                            double turbulentPeriodDeviationLowerPercentage,
+                                                            double turbulentPeriodDeviationHigherPercentage,
+                                                            int turbulentPeriodLength,
+                                                            int turbulentStatePosition
+  ) {
+
+
+    SteadyWithTurbulenceMetricSeries metricSeries = new SteadyWithTurbulenceMetricSeries(n,
+      steadyStateValue,
+      steadyStateDeviationPercentage,
+      turbulentPeriodDeviationLowerPercentage,
+      turbulentPeriodDeviationHigherPercentage,
+      turbulentPeriodLength,
+      turbulentStatePosition);
+
+    return metricSeries.getSeries(n);
+  }
+
+
+  public static double[] generateSeries(String type, int n, Map<String, String> configs) {
+
+    double[] series;
+    switch (type) {
+
+      case "normal":
+        series = createNormalSeries(n,
+          Double.parseDouble(configs.getOrDefault("mean", "0")),
+          Double.parseDouble(configs.getOrDefault("sd", "1")),
+          Double.parseDouble(configs.getOrDefault("outlierProbability", "0")),
+          Double.parseDouble(configs.getOrDefault("outlierDeviationSDTimesLower", "0")),
+          Double.parseDouble(configs.getOrDefault("outlierDeviationSDTimesHigher", "0")),
+          Boolean.parseBoolean(configs.getOrDefault("outlierOnRightEnd", "true")));
+        break;
+
+      case "uniform":
+        series = createUniformSeries(n,
+          Double.parseDouble(configs.getOrDefault("value", "10")),
+          Double.parseDouble(configs.getOrDefault("deviationPercentage", "0")),
+          Double.parseDouble(configs.getOrDefault("outlierProbability", "0")),
+          Double.parseDouble(configs.getOrDefault("outlierDeviationLowerPercentage", "0")),
+          Double.parseDouble(configs.getOrDefault("outlierDeviationHigherPercentage", "0")),
+          Boolean.parseBoolean(configs.getOrDefault("outliersAboveValue", "true")));
+        break;
+
+      case "monotonic":
+        series = createMonotonicSeries(n,
+          Double.parseDouble(configs.getOrDefault("startValue", "10")),
+          Double.parseDouble(configs.getOrDefault("slope", "0")),
+          Double.parseDouble(configs.getOrDefault("deviationPercentage", "0")),
+          Double.parseDouble(configs.getOrDefault("outlierProbability", "0")),
+          Double.parseDouble(configs.getOrDefault("outlierDeviationLowerPercentage", "0")),
+          Double.parseDouble(configs.getOrDefault("outlierDeviationHigherPercentage", "0")),
+          Boolean.parseBoolean(configs.getOrDefault("outliersAboveValue", "true")));
+        break;
+
+      case "dualband":
+        series = getDualBandSeries(n,
+          Double.parseDouble(configs.getOrDefault("lowBandValue", "10")),
+          Double.parseDouble(configs.getOrDefault("lowBandDeviationPercentage", "0")),
+          Integer.parseInt(configs.getOrDefault("lowBandPeriodSize", "0")),
+          Double.parseDouble(configs.getOrDefault("highBandValue", "10")),
+          Double.parseDouble(configs.getOrDefault("highBandDeviationPercentage", "0")),
+          Integer.parseInt(configs.getOrDefault("highBandPeriodSize", "0")));
+        break;
+
+      case "step":
+        series = getStepFunctionSeries(n,
+          Double.parseDouble(configs.getOrDefault("startValue", "10")),
+          Double.parseDouble(configs.getOrDefault("steadyValueDeviationPercentage", "0")),
+          Double.parseDouble(configs.getOrDefault("steadyPeriodSlope", "0")),
+          Integer.parseInt(configs.getOrDefault("steadyPeriodMinSize", "0")),
+          Integer.parseInt(configs.getOrDefault("steadyPeriodMaxSize", "0")),
+          Double.parseDouble(configs.getOrDefault("stepChangePercentage", "0")),
+          Boolean.parseBoolean(configs.getOrDefault("upwardStep", "true")));
+        break;
+
+      case "turbulence":
+        series = getSteadySeriesWithTurbulentPeriod(n,
+          Double.parseDouble(configs.getOrDefault("steadyStateValue", "10")),
+          Double.parseDouble(configs.getOrDefault("steadyStateDeviationPercentage", "0")),
+          Double.parseDouble(configs.getOrDefault("turbulentPeriodDeviationLowerPercentage", "0")),
+          Double.parseDouble(configs.getOrDefault("turbulentPeriodDeviationHigherPercentage", "10")),
+          Integer.parseInt(configs.getOrDefault("turbulentPeriodLength", "0")),
+          Integer.parseInt(configs.getOrDefault("turbulentStatePosition", "0")));
+        break;
+
+      default:
+        series = createNormalSeries(n,
+          0,
+          1,
+          0,
+          0,
+          0,
+          true);
+    }
+    return series;
+  }
+
+  public static AbstractMetricSeries generateSeries(String type, Map<String, String> configs) {
+
+    AbstractMetricSeries series;
+    switch (type) {
+
+      case "normal":
+        series = new NormalMetricSeries(Double.parseDouble(configs.getOrDefault("mean", "0")),
+          Double.parseDouble(configs.getOrDefault("sd", "1")),
+          Double.parseDouble(configs.getOrDefault("outlierProbability", "0")),
+          Double.parseDouble(configs.getOrDefault("outlierDeviationSDTimesLower", "0")),
+          Double.parseDouble(configs.getOrDefault("outlierDeviationSDTimesHigher", "0")),
+          Boolean.parseBoolean(configs.getOrDefault("outlierOnRightEnd", "true")));
+        break;
+
+      case "uniform":
+        series = new UniformMetricSeries(
+          Double.parseDouble(configs.getOrDefault("value", "10")),
+          Double.parseDouble(configs.getOrDefault("deviationPercentage", "0")),
+          Double.parseDouble(configs.getOrDefault("outlierProbability", "0")),
+          Double.parseDouble(configs.getOrDefault("outlierDeviationLowerPercentage", "0")),
+          Double.parseDouble(configs.getOrDefault("outlierDeviationHigherPercentage", "0")),
+          Boolean.parseBoolean(configs.getOrDefault("outliersAboveValue", "true")));
+        break;
+
+      case "monotonic":
+        series = new MonotonicMetricSeries(
+          Double.parseDouble(configs.getOrDefault("startValue", "10")),
+          Double.parseDouble(configs.getOrDefault("slope", "0")),
+          Double.parseDouble(configs.getOrDefault("deviationPercentage", "0")),
+          Double.parseDouble(configs.getOrDefault("outlierProbability", "0")),
+          Double.parseDouble(configs.getOrDefault("outlierDeviationLowerPercentage", "0")),
+          Double.parseDouble(configs.getOrDefault("outlierDeviationHigherPercentage", "0")),
+          Boolean.parseBoolean(configs.getOrDefault("outliersAboveValue", "true")));
+        break;
+
+      case "dualband":
+        series = new DualBandMetricSeries(
+          Double.parseDouble(configs.getOrDefault("lowBandValue", "10")),
+          Double.parseDouble(configs.getOrDefault("lowBandDeviationPercentage", "0")),
+          Integer.parseInt(configs.getOrDefault("lowBandPeriodSize", "0")),
+          Double.parseDouble(configs.getOrDefault("highBandValue", "10")),
+          Double.parseDouble(configs.getOrDefault("highBandDeviationPercentage", "0")),
+          Integer.parseInt(configs.getOrDefault("highBandPeriodSize", "0")));
+        break;
+
+      case "step":
+        series = new StepFunctionMetricSeries(
+          Double.parseDouble(configs.getOrDefault("startValue", "10")),
+          Double.parseDouble(configs.getOrDefault("steadyValueDeviationPercentage", "0")),
+          Double.parseDouble(configs.getOrDefault("steadyPeriodSlope", "0")),
+          Integer.parseInt(configs.getOrDefault("steadyPeriodMinSize", "0")),
+          Integer.parseInt(configs.getOrDefault("steadyPeriodMaxSize", "0")),
+          Double.parseDouble(configs.getOrDefault("stepChangePercentage", "0")),
+          Boolean.parseBoolean(configs.getOrDefault("upwardStep", "true")));
+        break;
+
+      case "turbulence":
+        series = new SteadyWithTurbulenceMetricSeries(
+          Integer.parseInt(configs.getOrDefault("approxSeriesLength", "100")),
+          Double.parseDouble(configs.getOrDefault("steadyStateValue", "10")),
+          Double.parseDouble(configs.getOrDefault("steadyStateDeviationPercentage", "0")),
+          Double.parseDouble(configs.getOrDefault("turbulentPeriodDeviationLowerPercentage", "0")),
+          Double.parseDouble(configs.getOrDefault("turbulentPeriodDeviationHigherPercentage", "10")),
+          Integer.parseInt(configs.getOrDefault("turbulentPeriodLength", "0")),
+          Integer.parseInt(configs.getOrDefault("turbulentStatePosition", "0")));
+        break;
+
+      default:
+        series = new NormalMetricSeries(0,
+          1,
+          0,
+          0,
+          0,
+          true);
+    }
+    return series;
+  }
+
+}
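
Each generator is driven by a string-keyed config map, with every missing key falling back to a default, which keeps test drivers short. A hypothetical invocation of the turbulence generator (config values invented):

import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

import org.apache.ambari.metrics.alertservice.seriesgenerator.MetricSeriesGeneratorFactory;

public class SeriesGeneratorExample {
  public static void main(String[] args) {
    Map<String, String> configs = new HashMap<>();
    configs.put("steadyStateValue", "50");
    configs.put("steadyStateDeviationPercentage", "0.05");
    configs.put("turbulentPeriodDeviationHigherPercentage", "0.5");
    configs.put("turbulentPeriodLength", "20");
    configs.put("turbulentStatePosition", "1"); // turbulence in the middle of the series

    // Unset keys (e.g. turbulentPeriodDeviationLowerPercentage) use their defaults.
    double[] series = MetricSeriesGeneratorFactory.generateSeries("turbulence", 120, configs);
    System.out.println(Arrays.toString(series));
  }
}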
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/MonotonicMetricSeries.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/MonotonicMetricSeries.java
new file mode 100644
index 0000000..a883d08
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/MonotonicMetricSeries.java
@@ -0,0 +1,101 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.metrics.alertservice.seriesgenerator;
+
+import java.util.Random;
+
+public class MonotonicMetricSeries implements AbstractMetricSeries {
+
+  double startValue = 0.0;
+  double slope = 0.5;
+  double deviationPercentage = 0.0;
+  double outlierProbability = 0.0;
+  double outlierDeviationLowerPercentage = 0.0;
+  double outlierDeviationHigherPercentage = 0.0;
+  boolean outliersAboveValue = true;
+
+  Random random = new Random();
+  double nonOutlierProbability;
+
+  // y = mx + c
+  double y;
+  double m;
+  double x;
+  double c;
+
+  public MonotonicMetricSeries(double startValue,
+                               double slope,
+                               double deviationPercentage,
+                               double outlierProbability,
+                               double outlierDeviationLowerPercentage,
+                               double outlierDeviationHigherPercentage,
+                               boolean outliersAboveValue) {
+    this.startValue = startValue;
+    this.slope = slope;
+    this.deviationPercentage = deviationPercentage;
+    this.outlierProbability = outlierProbability;
+    this.outlierDeviationLowerPercentage = outlierDeviationLowerPercentage;
+    this.outlierDeviationHigherPercentage = outlierDeviationHigherPercentage;
+    this.outliersAboveValue = outliersAboveValue;
+    init();
+  }
+
+  private void init() {
+    y = startValue;
+    m = slope;
+    x = 1;
+    c = y - (m * x);
+    nonOutlierProbability = 1.0 - outlierProbability;
+  }
+
+  @Override
+  public double nextValue() {
+
+    double value;
+    double probability = random.nextDouble();
+
+    y = m * x + c;
+    if (probability <= nonOutlierProbability) {
+      double valueDeviationLowerLimit = y - deviationPercentage * y;
+      double valueDeviationHigherLimit = y + deviationPercentage * y;
+      value = valueDeviationLowerLimit + (valueDeviationHigherLimit - valueDeviationLowerLimit) * random.nextDouble();
+    } else {
+      if (outliersAboveValue) {
+        double outlierLowerLimit = y + outlierDeviationLowerPercentage * y;
+        double outlierUpperLimit = y + outlierDeviationHigherPercentage * y;
+        value = outlierLowerLimit + (outlierUpperLimit - outlierLowerLimit) * random.nextDouble();
+      } else {
+        double outlierLowerLimit = y - outlierDeviationLowerPercentage * y;
+        double outlierUpperLimit = y - outlierDeviationHigherPercentage * y;
+        value = outlierUpperLimit + (outlierLowerLimit - outlierUpperLimit) * random.nextDouble();
+      }
+    }
+    x++;
+    return value;
+  }
+
+  @Override
+  public double[] getSeries(int n) {
+    double[] series = new double[n];
+    for (int i = 0; i < n; i++) {
+      series[i] = nextValue();
+    }
+    return series;
+  }
+
+}
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/NormalMetricSeries.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/NormalMetricSeries.java
new file mode 100644
index 0000000..cc83d2c
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/NormalMetricSeries.java
@@ -0,0 +1,81 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.metrics.alertservice.seriesgenerator;
+
+import java.util.Random;
+
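+/**
+ * Metric series drawn from a normal distribution with the configured mean and
+ * standard deviation. With probability outlierProbability, the point is instead
+ * placed between outlierDeviationSDTimesLower and outlierDeviationSDTimesHigher
+ * standard deviations from the mean, on the tail selected by outlierOnRightEnd.
+ */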
+public class NormalMetricSeries implements AbstractMetricSeries {
+
+  double mean = 0.0;
+  double sd = 1.0;
+  double outlierProbability = 0.0;
+  double outlierDeviationSDTimesLower = 0.0;
+  double outlierDeviationSDTimesHigher = 0.0;
+  boolean outlierOnRightEnd = true;
+
+  Random random = new Random();
+  double nonOutlierProbability;
+
+
+  public NormalMetricSeries(double mean,
+                            double sd,
+                            double outlierProbability,
+                            double outlierDeviationSDTimesLower,
+                            double outlierDeviationSDTimesHigher,
+                            boolean outlierOnRightEnd) {
+    this.mean = mean;
+    this.sd = sd;
+    this.outlierProbability = outlierProbability;
+    this.outlierDeviationSDTimesLower = outlierDeviationSDTimesLower;
+    this.outlierDeviationSDTimesHigher = outlierDeviationSDTimesHigher;
+    this.outlierOnRightEnd = outlierOnRightEnd;
+    init();
+  }
+
+  private void init() {
+    nonOutlierProbability = 1.0 - outlierProbability;
+  }
+
+  @Override
+  public double nextValue() {
+
+    double value;
+    double probability = random.nextDouble();
+
+    if (probability <= nonOutlierProbability) {
+      value = random.nextGaussian() * sd + mean;
+    } else {
+      if (outlierOnRightEnd) {
+        value = mean + (outlierDeviationSDTimesLower + (outlierDeviationSDTimesHigher - outlierDeviationSDTimesLower) * random.nextDouble()) * sd;
+      } else {
+        value = mean - (outlierDeviationSDTimesLower + (outlierDeviationSDTimesHigher - outlierDeviationSDTimesLower) * random.nextDouble()) * sd;
+      }
+    }
+    return value;
+  }
+
+  @Override
+  public double[] getSeries(int n) {
+    double[] series = new double[n];
+    for (int i = 0; i < n; i++) {
+      series[i] = nextValue();
+    }
+    return series;
+  }
+
+}
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/SteadyWithTurbulenceMetricSeries.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/SteadyWithTurbulenceMetricSeries.java
new file mode 100644
index 0000000..c4ed3ba
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/SteadyWithTurbulenceMetricSeries.java
@@ -0,0 +1,115 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.metrics.alertservice.seriesgenerator;
+
+import java.util.Random;
+
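+/**
+ * Metric series that stays within +/- steadyStateDeviationPercentage of a
+ * steady-state value, except for one turbulent window of turbulentPeriodLength
+ * points whose values lie between the lower and higher turbulent deviation
+ * percentages above the steady-state value. turbulentStatePosition selects the
+ * window start: 1 places it randomly between 25% and 50% of the series length,
+ * 2 places it at the end.
+ */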
+public class SteadyWithTurbulenceMetricSeries implements AbstractMetricSeries {
+
+  double steadyStateValue = 0.0;
+  double steadyStateDeviationPercentage = 0.0;
+  double turbulentPeriodDeviationLowerPercentage = 0.3;
+  double turbulentPeriodDeviationHigherPercentage = 0.5;
+  int turbulentPeriodLength = 5;
+  int turbulentStatePosition = 1;
+  int approximateSeriesLength = 10;
+
+  Random random = new Random();
+  double valueDeviationLowerLimit;
+  double valueDeviationHigherLimit;
+  double tPeriodLowerLimit;
+  double tPeriodUpperLimit;
+  int tPeriodStartIndex = 0;
+  int index = 0;
+
+  public SteadyWithTurbulenceMetricSeries(int approximateSeriesLength,
+                                          double steadyStateValue,
+                                          double steadyStateDeviationPercentage,
+                                          double turbulentPeriodDeviationLowerPercentage,
+                                          double turbulentPeriodDeviationHigherPercentage,
+                                          int turbulentPeriodLength,
+                                          int turbulentStatePosition) {
+    this.approximateSeriesLength = approximateSeriesLength;
+    this.steadyStateValue = steadyStateValue;
+    this.steadyStateDeviationPercentage = steadyStateDeviationPercentage;
+    this.turbulentPeriodDeviationLowerPercentage = turbulentPeriodDeviationLowerPercentage;
+    this.turbulentPeriodDeviationHigherPercentage = turbulentPeriodDeviationHigherPercentage;
+    this.turbulentPeriodLength = turbulentPeriodLength;
+    this.turbulentStatePosition = turbulentStatePosition;
+    init();
+  }
+
+  private void init() {
+
+    if (turbulentStatePosition == 1) {
+      tPeriodStartIndex = (int) (0.25 * approximateSeriesLength + (0.25 * approximateSeriesLength * random.nextDouble()));
+    } else if (turbulentStatePosition == 2) {
+      tPeriodStartIndex = approximateSeriesLength - turbulentPeriodLength;
+    }
+
+    valueDeviationLowerLimit = steadyStateValue - steadyStateDeviationPercentage * steadyStateValue;
+    valueDeviationHigherLimit = steadyStateValue + steadyStateDeviationPercentage * steadyStateValue;
+
+    tPeriodLowerLimit = steadyStateValue + turbulentPeriodDeviationLowerPercentage * steadyStateValue;
+    tPeriodUpperLimit = steadyStateValue + turbulentPeriodDeviationHigherPercentage * steadyStateValue;
+  }
+
+  @Override
+  public double nextValue() {
+
+    double value;
+
+    if (index >= tPeriodStartIndex && index < (tPeriodStartIndex + turbulentPeriodLength)) {
+      value = tPeriodLowerLimit + (tPeriodUpperLimit - tPeriodLowerLimit) * random.nextDouble();
+    } else {
+      value = valueDeviationLowerLimit + (valueDeviationHigherLimit - valueDeviationLowerLimit) * random.nextDouble();
+    }
+    index++;
+    return value;
+  }
+
+  @Override
+  public double[] getSeries(int n) {
+
+    double[] series = new double[n];
+    int turbulentPeriodStartIndex = 0;
+
+    if (turbulentStatePosition == 1) {
+      turbulentPeriodStartIndex = (int) (0.25 * n + (0.25 * n * random.nextDouble()));
+    } else if (turbulentStatePosition == 2) {
+      turbulentPeriodStartIndex = n - turbulentPeriodLength;
+    }
+
+    double valueDevLowerLimit = steadyStateValue - steadyStateDeviationPercentage * steadyStateValue;
+    double valueDevHigherLimit = steadyStateValue + steadyStateDeviationPercentage * steadyStateValue;
+
+    double turbulentPeriodLowerLimit = steadyStateValue + turbulentPeriodDeviationLowerPercentage * steadyStateValue;
+    double turbulentPeriodUpperLimit = steadyStateValue + turbulentPeriodDeviationHigherPercentage * steadyStateValue;
+
+    for (int i = 0; i < n; i++) {
+      if (i >= turbulentPeriodStartIndex && i < (turbulentPeriodStartIndex + turbulentPeriodLength)) {
+        series[i] = turbulentPeriodLowerLimit + (turbulentPeriodUpperLimit - turbulentPeriodLowerLimit) * random.nextDouble();
+      } else {
+        series[i] = valueDevLowerLimit + (valueDevHigherLimit - valueDevLowerLimit) * random.nextDouble();
+      }
+    }
+
+    return series;
+  }
+
+}
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/StepFunctionMetricSeries.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/StepFunctionMetricSeries.java
new file mode 100644
index 0000000..d5beb48
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/StepFunctionMetricSeries.java
@@ -0,0 +1,107 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.metrics.alertservice.seriesgenerator;
+
+import java.util.Random;
+
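+/**
+ * Metric series shaped like a step function: within each steady period (of
+ * random length between steadyPeriodMinSize and steadyPeriodMaxSize) values
+ * follow y = slope * x + intercept with uniform noise of
+ * +/- steadyValueDeviationPercentage, and at each period boundary the level
+ * steps up or down by stepChangePercentage, as chosen by upwardStep.
+ */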
+public class StepFunctionMetricSeries implements AbstractMetricSeries {
+
+  double startValue = 0.0;
+  double steadyValueDeviationPercentage = 0.0;
+  double steadyPeriodSlope = 0.5;
+  int steadyPeriodMinSize = 10;
+  int steadyPeriodMaxSize = 20;
+  double stepChangePercentage = 0.0;
+  boolean upwardStep = true;
+
+  Random random = new Random();
+
+  // y = mx + c
+  double y;
+  double m;
+  double x;
+  double c;
+  int currentStepSize;
+  int currentIndex;
+
+  public StepFunctionMetricSeries(double startValue,
+                                  double steadyValueDeviationPercentage,
+                                  double steadyPeriodSlope,
+                                  int steadyPeriodMinSize,
+                                  int steadyPeriodMaxSize,
+                                  double stepChangePercentage,
+                                  boolean upwardStep) {
+    this.startValue = startValue;
+    this.steadyValueDeviationPercentage = steadyValueDeviationPercentage;
+    this.steadyPeriodSlope = steadyPeriodSlope;
+    this.steadyPeriodMinSize = steadyPeriodMinSize;
+    this.steadyPeriodMaxSize = steadyPeriodMaxSize;
+    this.stepChangePercentage = stepChangePercentage;
+    this.upwardStep = upwardStep;
+    init();
+  }
+
+  private void init() {
+    y = startValue;
+    m = steadyPeriodSlope;
+    x = 1;
+    c = y - (m * x);
+
+    currentStepSize = (int) (steadyPeriodMinSize + (steadyPeriodMaxSize - steadyPeriodMinSize) * random.nextDouble());
+    currentIndex = 0;
+  }
+
+  @Override
+  public double nextValue() {
+
+    double value = 0.0;
+
+    if (currentIndex < currentStepSize) {
+      y = m * x + c;
+      double valueDeviationLowerLimit = y - steadyValueDeviationPercentage * y;
+      double valueDeviationHigherLimit = y + steadyValueDeviationPercentage * y;
+      value = valueDeviationLowerLimit + (valueDeviationHigherLimit - valueDeviationLowerLimit) * random.nextDouble();
+      x++;
+      currentIndex++;
+    }
+
+    if (currentIndex == currentStepSize) {
+      currentIndex = 0;
+      currentStepSize = (int) (steadyPeriodMinSize + (steadyPeriodMaxSize - steadyPeriodMinSize) * random.nextDouble());
+      if (upwardStep) {
+        y = y + stepChangePercentage * y;
+      } else {
+        y = y - stepChangePercentage * y;
+      }
+      x = 1;
+      c = y - (m * x);
+    }
+
+    return value;
+  }
+
+  @Override
+  public double[] getSeries(int n) {
+    double[] series = new double[n];
+    for (int i = 0; i < n; i++) {
+      series[i] = nextValue();
+    }
+    return series;
+  }
+
+}
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/UniformMetricSeries.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/UniformMetricSeries.java
new file mode 100644
index 0000000..a2b0eea
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/UniformMetricSeries.java
@@ -0,0 +1,95 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.metrics.alertservice.seriesgenerator;
+
+import java.util.Random;
+
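+/**
+ * Metric series distributed uniformly within +/- deviationPercentage of a fixed
+ * value. With probability outlierProbability, the point is instead drawn from
+ * the band between outlierDeviationLowerPercentage and
+ * outlierDeviationHigherPercentage away from the value, above or below it
+ * depending on outliersAboveValue.
+ */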
+public class UniformMetricSeries implements AbstractMetricSeries {
+
+  double value = 0.0;
+  double deviationPercentage = 0.0;
+  double outlierProbability = 0.0;
+  double outlierDeviationLowerPercentage = 0.0;
+  double outlierDeviationHigherPercentage = 0.0;
+  boolean outliersAboveValue = true;
+
+  Random random = new Random();
+  double valueDeviationLowerLimit;
+  double valueDeviationHigherLimit;
+  double outlierLeftLowerLimit;
+  double outlierLeftHigherLimit;
+  double outlierRightLowerLimit;
+  double outlierRightUpperLimit;
+  double nonOutlierProbability;
+
+
+  public UniformMetricSeries(double value,
+                             double deviationPercentage,
+                             double outlierProbability,
+                             double outlierDeviationLowerPercentage,
+                             double outlierDeviationHigherPercentage,
+                             boolean outliersAboveValue) {
+    this.value = value;
+    this.deviationPercentage = deviationPercentage;
+    this.outlierProbability = outlierProbability;
+    this.outlierDeviationLowerPercentage = outlierDeviationLowerPercentage;
+    this.outlierDeviationHigherPercentage = outlierDeviationHigherPercentage;
+    this.outliersAboveValue = outliersAboveValue;
+    init();
+  }
+
+  private void init() {
+    valueDeviationLowerLimit = value - deviationPercentage * value;
+    valueDeviationHigherLimit = value + deviationPercentage * value;
+
+    outlierLeftLowerLimit = value - outlierDeviationHigherPercentage * value;
+    outlierLeftHigherLimit = value - outlierDeviationLowerPercentage * value;
+    outlierRightLowerLimit = value + outlierDeviationLowerPercentage * value;
+    outlierRightUpperLimit = value + outlierDeviationHigherPercentage * value;
+
+    nonOutlierProbability = 1.0 - outlierProbability;
+  }
+
+  @Override
+  public double nextValue() {
+
+    double value;
+    double probability = random.nextDouble();
+
+    if (probability <= nonOutlierProbability) {
+      value = valueDeviationLowerLimit + (valueDeviationHigherLimit - valueDeviationLowerLimit) * random.nextDouble();
+    } else {
+      if (!outliersAboveValue) {
+        value = outlierLeftLowerLimit + (outlierLeftHigherLimit - outlierLeftLowerLimit) * random.nextDouble();
+      } else {
+        value = outlierRightLowerLimit + (outlierRightUpperLimit - outlierRightLowerLimit) * random.nextDouble();
+      }
+    }
+    return value;
+  }
+
+  @Override
+  public double[] getSeries(int n) {
+    double[] series = new double[n];
+    for (int i = 0; i < n; i++) {
+      series[i] = nextValue();
+    }
+    return series;
+  }
+
+}
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/spark/AnomalyMetricPublisher.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/spark/AnomalyMetricPublisher.java
deleted file mode 100644
index d65790e..0000000
--- a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/spark/AnomalyMetricPublisher.java
+++ /dev/null
@@ -1,196 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.ambari.metrics.alertservice.spark;
-
-import org.apache.ambari.metrics.alertservice.common.MetricAnomaly;
-import org.apache.ambari.metrics.alertservice.common.TimelineMetric;
-import org.apache.ambari.metrics.alertservice.common.TimelineMetrics;
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-import org.codehaus.jackson.map.AnnotationIntrospector;
-import org.codehaus.jackson.map.ObjectMapper;
-import org.codehaus.jackson.map.annotate.JsonSerialize;
-import org.codehaus.jackson.xc.JaxbAnnotationIntrospector;
-
-import java.io.IOException;
-import java.io.OutputStream;
-import java.io.Serializable;
-import java.net.HttpURLConnection;
-import java.net.InetAddress;
-import java.net.URL;
-import java.net.UnknownHostException;
-import java.util.*;
-
-public class AnomalyMetricPublisher implements Serializable {
-
-    private String hostName = "UNKNOWN.example.com";
-    private String instanceId = null;
-    private String serviceName = "anomaly-engine";
-    private String collectorHost;
-    private String protocol;
-    private String port;
-    private static final String WS_V1_TIMELINE_METRICS = "/ws/v1/timeline/metrics";
-    private static final Log LOG = LogFactory.getLog(AnomalyMetricPublisher.class);
-    private static ObjectMapper mapper;
-
-    static {
-        mapper = new ObjectMapper();
-        AnnotationIntrospector introspector = new JaxbAnnotationIntrospector();
-        mapper.setAnnotationIntrospector(introspector);
-        mapper.getSerializationConfig()
-                .withSerializationInclusion(JsonSerialize.Inclusion.NON_NULL);
-    }
-
-    public AnomalyMetricPublisher(String collectorHost, String protocol, String port) {
-        this.collectorHost = collectorHost;
-        this.protocol = protocol;
-        this.port = port;
-        this.hostName = getDefaultLocalHostName();
-    }
-
-    private String getDefaultLocalHostName() {
-        try {
-            return InetAddress.getLocalHost().getCanonicalHostName();
-        } catch (UnknownHostException e) {
-            LOG.info("Error getting host address");
-        }
-        return null;
-    }
-
-    public void publish(List<MetricAnomaly> metricAnomalies) {
-        LOG.info("Sending metric anomalies of size : " + metricAnomalies.size());
-        List<TimelineMetric> metricList = getTimelineMetricList(metricAnomalies);
-        if (!metricList.isEmpty()) {
-            TimelineMetrics timelineMetrics = new TimelineMetrics();
-            timelineMetrics.setMetrics(metricList);
-            emitMetrics(timelineMetrics);
-        }
-    }
-
-    private List<TimelineMetric> getTimelineMetricList(List<MetricAnomaly> metricAnomalies) {
-        List<TimelineMetric> metrics = new ArrayList<>();
-
-        if (metricAnomalies.isEmpty()) {
-            return metrics;
-        }
-
-        long currentTime = System.currentTimeMillis();
-        MetricAnomaly prevAnomaly = metricAnomalies.get(0);
-
-        TimelineMetric timelineMetric = new TimelineMetric();
-        timelineMetric.setMetricName(prevAnomaly.getMetricKey() + "_" + prevAnomaly.getMethodResult().getMethodType());
-        timelineMetric.setAppId(serviceName);
-        timelineMetric.setInstanceId(instanceId);
-        timelineMetric.setHostName(hostName);
-        timelineMetric.setStartTime(currentTime);
-
-        TreeMap<Long,Double> metricValues = new TreeMap<>();
-        metricValues.put(prevAnomaly.getTimestamp(), prevAnomaly.getMetricValue());
-        MetricAnomaly currentAnomaly;
-
-        for (int i = 1; i < metricAnomalies.size(); i++) {
-            currentAnomaly = metricAnomalies.get(i);
-            if (currentAnomaly.getMetricKey().equals(prevAnomaly.getMetricKey())) {
-                metricValues.put(currentAnomaly.getTimestamp(), currentAnomaly.getMetricValue());
-            } else {
-                timelineMetric.setMetricValues(metricValues);
-                metrics.add(timelineMetric);
-
-                timelineMetric = new TimelineMetric();
-                timelineMetric.setMetricName(currentAnomaly.getMetricKey() + "_" + currentAnomaly.getMethodResult().getMethodType());
-                timelineMetric.setAppId(serviceName);
-                timelineMetric.setInstanceId(instanceId);
-                timelineMetric.setHostName(hostName);
-                timelineMetric.setStartTime(currentTime);
-                metricValues = new TreeMap<>();
-                metricValues.put(currentAnomaly.getTimestamp(), currentAnomaly.getMetricValue());
-                prevAnomaly = currentAnomaly;
-            }
-        }
-
-        timelineMetric.setMetricValues(metricValues);
-        metrics.add(timelineMetric);
-        return metrics;
-    }
-
-    private boolean emitMetrics(TimelineMetrics metrics) {
-        String connectUrl = constructTimelineMetricUri();
-        String jsonData = null;
-        LOG.info("EmitMetrics connectUrl = "  + connectUrl);
-        try {
-            jsonData = mapper.writeValueAsString(metrics);
-        } catch (IOException e) {
-            LOG.error("Unable to parse metrics", e);
-        }
-        if (jsonData != null) {
-            return emitMetricsJson(connectUrl, jsonData);
-        }
-        return false;
-    }
-
-    private HttpURLConnection getConnection(String spec) throws IOException {
-        return (HttpURLConnection) new URL(spec).openConnection();
-    }
-
-    private boolean emitMetricsJson(String connectUrl, String jsonData) {
-        int timeout = 10000;
-        HttpURLConnection connection = null;
-        try {
-            if (connectUrl == null) {
-                throw new IOException("Unknown URL. Unable to connect to metrics collector.");
-            }
-            connection = getConnection(connectUrl);
-
-            connection.setRequestMethod("POST");
-            connection.setRequestProperty("Content-Type", "application/json");
-            connection.setRequestProperty("Connection", "Keep-Alive");
-            connection.setConnectTimeout(timeout);
-            connection.setReadTimeout(timeout);
-            connection.setDoOutput(true);
-
-            if (jsonData != null) {
-                try (OutputStream os = connection.getOutputStream()) {
-                    os.write(jsonData.getBytes("UTF-8"));
-                }
-            }
-
-            int statusCode = connection.getResponseCode();
-
-            if (statusCode != 200) {
-                LOG.info("Unable to POST metrics to collector, " + connectUrl + ", " +
-                        "statusCode = " + statusCode);
-            } else {
-                LOG.info("Metrics posted to Collector " + connectUrl);
-            }
-            return true;
-        } catch (IOException ioe) {
-            LOG.error(ioe.getMessage());
-        }
-        return false;
-    }
-
-    private String constructTimelineMetricUri() {
-        StringBuilder sb = new StringBuilder(protocol);
-        sb.append("://");
-        sb.append(collectorHost);
-        sb.append(":");
-        sb.append(port);
-        sb.append(WS_V1_TIMELINE_METRICS);
-        return sb.toString();
-    }
-}
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/spark/MetricAnomalyDetector.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/spark/MetricAnomalyDetector.java
deleted file mode 100644
index 3989c67..0000000
--- a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/spark/MetricAnomalyDetector.java
+++ /dev/null
@@ -1,147 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.ambari.metrics.alertservice.spark;
-
-import com.fasterxml.jackson.databind.ObjectMapper;
-import org.apache.ambari.metrics.alertservice.common.MetricAnomaly;
-import org.apache.ambari.metrics.alertservice.common.TimelineMetric;
-import org.apache.ambari.metrics.alertservice.common.TimelineMetrics;
-import org.apache.ambari.metrics.alertservice.methods.ema.EmaModel;
-import org.apache.ambari.metrics.alertservice.methods.MetricAnomalyModel;
-import org.apache.ambari.metrics.alertservice.methods.ema.EmaModelLoader;
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-import org.apache.spark.SparkConf;
-import org.apache.spark.api.java.function.Function;
-import org.apache.spark.streaming.Duration;
-import org.apache.spark.streaming.api.java.JavaDStream;
-import org.apache.spark.streaming.api.java.JavaPairDStream;
-import org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream;
-import org.apache.spark.streaming.api.java.JavaStreamingContext;
-import org.apache.spark.streaming.kafka.KafkaUtils;
-import scala.Tuple2;
-
-import java.util.*;
-
-public class MetricAnomalyDetector {
-
-    private static final Log LOG = LogFactory.getLog(MetricAnomalyDetector.class);
-    private static String groupId = "ambari-metrics-group";
-    private static String topicName = "ambari-metrics-topic";
-    private static int numThreads = 1;
-
-    //private static String zkQuorum = "avijayan-ams-1.openstacklocal:2181,avijayan-ams-2.openstacklocal:2181,avijayan-ams-3.openstacklocal:2181";
-    //private static Map<String, String> kafkaParams = new HashMap<>();
-    //static {
-    //    kafkaParams.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "avijayan-ams-2.openstacklocal:6667");
-    //    kafkaParams.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.ByteArraySerializer");
-    //    kafkaParams.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.connect.json.JsonSerializer");
-    //    kafkaParams.put("metadata.broker.list", "avijayan-ams-2.openstacklocal:6667");
-    //}
-
-    public MetricAnomalyDetector() {
-    }
-
-    public static void main(String[] args) throws InterruptedException {
-
-
-        if (args.length < 6) {
-            System.err.println("Usage: MetricAnomalyDetector <method1,method2> <appid1,appid2> <collector_host> <port> <protocol> <zkQuorum>");
-            System.exit(1);
-        }
-
-        List<String> appIds = Arrays.asList(args[1].split(","));
-        String collectorHost = args[2];
-        String collectorPort = args[3];
-        String collectorProtocol = args[4];
-        String zkQuorum = args[5];
-
-        List<MetricAnomalyModel> anomalyDetectionModels = new ArrayList<>();
-        AnomalyMetricPublisher anomalyMetricPublisher = new AnomalyMetricPublisher(collectorHost, collectorProtocol, collectorPort);
-
-        SparkConf sparkConf = new SparkConf().setAppName("AmbariMetricsAnomalyDetector");
-
-        JavaStreamingContext jssc = new JavaStreamingContext(sparkConf, new Duration(10000));
-
-        for (String method : args[0].split(",")) {
-            if (method.equals("ema")) {
-                LOG.info("Model EMA requested.");
-                EmaModel emaModel = new EmaModelLoader().load(jssc.sparkContext().sc(), "/tmp/model/ema");
-                anomalyDetectionModels.add(emaModel);
-            }
-        }
-
-        JavaPairReceiverInputDStream<String, String> messages =
-                KafkaUtils.createStream(jssc, zkQuorum, groupId, Collections.singletonMap(topicName, numThreads));
-
-        //Convert JSON string to TimelineMetrics.
-        JavaDStream<TimelineMetrics> timelineMetricsStream = messages.map(new Function<Tuple2<String, String>, TimelineMetrics>() {
-            @Override
-            public TimelineMetrics call(Tuple2<String, String> message) throws Exception {
-                ObjectMapper mapper = new ObjectMapper();
-                TimelineMetrics metrics = mapper.readValue(message._2, TimelineMetrics.class);
-                return metrics;
-            }
-        });
-
-        //Group TimelineMetric by AppId.
-        JavaPairDStream<String, TimelineMetrics> appMetricStream = timelineMetricsStream.mapToPair(
-                timelineMetrics -> new Tuple2<String, TimelineMetrics>(timelineMetrics.getMetrics().get(0).getAppId(),timelineMetrics)
-        );
-
-        appMetricStream.print();
-
-        //Filter AppIds that are not needed.
-        JavaPairDStream<String, TimelineMetrics> filteredAppMetricStream = appMetricStream.filter(new Function<Tuple2<String, TimelineMetrics>, Boolean>() {
-            @Override
-            public Boolean call(Tuple2<String, TimelineMetrics> appMetricTuple) throws Exception {
-                return appIds.contains(appMetricTuple._1);
-            }
-        });
-
-        filteredAppMetricStream.print();
-
-        filteredAppMetricStream.foreachRDD(rdd -> {
-            rdd.foreach(
-                    tuple2 -> {
-                        TimelineMetrics metrics = tuple2._2();
-                        for (TimelineMetric metric : metrics.getMetrics()) {
-
-                            TimelineMetric timelineMetric =
-                                    new TimelineMetric(metric.getMetricName(), metric.getAppId(), metric.getHostName(), metric.getMetricValues());
-
-                            for (MetricAnomalyModel model : anomalyDetectionModels) {
-                                List<MetricAnomaly> anomalies = model.test(timelineMetric);
-                                anomalyMetricPublisher.publish(anomalies);
-                                for (MetricAnomaly anomaly : anomalies) {
-                                    LOG.info(anomaly.getAnomalyAsString());
-                                }
-
-                            }
-                        }
-                    });
-        });
-
-        jssc.start();
-        jssc.awaitTermination();
-    }
-}
-
-
-
-
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/hsdev.r b/ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/hsdev.r
index b25e79d..bca3366 100644
--- a/ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/hsdev.r
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/hsdev.r
@@ -25,12 +25,9 @@ hsdev_daily <- function(train_data, test_data, n, num_historic_periods, interval
   granularity <- train_data[2,1] - train_data[1,1]
   test_start <- test_data[1,1]
   test_end <- test_data[length(test_data[1,]),1]
-  cat ("\n test_start : ", as.numeric(test_start))
   train_start <- test_start - num_historic_periods*period
-  cat ("\n train_start : ", as.numeric(train_start))
   # round to start of day
   train_start <- train_start - (train_start %% interval)
-  cat ("\n train_start after rounding: ", as.numeric(train_start))
 
   time <- as.POSIXlt(as.numeric(test_data[1,1])/1000, origin = "1970-01-01" ,tz = "GMT")
   test_data_day <- time$wday
@@ -39,7 +36,6 @@ hsdev_daily <- function(train_data, test_data, n, num_historic_periods, interval
   for ( i in length(train_data[,1]):1) {
     ts <- train_data[i,1]
     if ( ts < train_start) {
-      cat ("\n Breaking out of loop : ", ts)
       break
     }
     time <- as.POSIXlt(as.numeric(ts)/1000, origin = "1970-01-01" ,tz = "GMT")
@@ -49,20 +45,14 @@ hsdev_daily <- function(train_data, test_data, n, num_historic_periods, interval
     }
   }
 
-  cat ("\n Train data length : ", length(train_data[,1]))
-  cat ("\n Test data length : ", length(test_data[,1]))
-  cat ("\n Historic data length : ", length(h_data))
   if (length(h_data) < 2*length(test_data[,1])) {
     cat ("\nNot enough training data")
     return (anomalies)
   }
 
   past_median <- median(h_data)
-  cat ("\npast_median : ", past_median)
   past_sd <- sd(h_data)
-  cat ("\npast_sd : ", past_sd)
   curr_median <- median(test_data[,2])
-  cat ("\ncurr_median : ", curr_median)
 
   if (abs(curr_median - past_median) > n * past_sd) {
     anomaly <- c(test_start, test_end, curr_median, past_median, past_sd)
@@ -70,7 +60,7 @@ hsdev_daily <- function(train_data, test_data, n, num_historic_periods, interval
   }
 
   if(length(anomalies) > 0) {
-    names(anomalies) <- c("TS Start", "TS End", "Current Median", "Past Median", " Past SD")
+    names(anomalies) <- c("TS Start", "TS End", "Current Median", "Past Median", "Past SD")
   }
 
   return (anomalies)
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/kstest.r b/ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/kstest.r
index b4dfdcb..f22bc15 100644
--- a/ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/kstest.r
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/kstest.r
@@ -24,7 +24,7 @@ ams_ks <- function(train_data, test_data, p_value) {
 #  test_data <- data[which(data$TS >= test_start & data$TS <= test_end), 2]
   
   anomalies <- data.frame()
-  res <- ks.test(train_data, test_data[,2])
+  res <- ks.test(train_data[,2], test_data[,2])
   
   if (res[2] < p_value) {
     anomaly <- c(test_data[1,1], test_data[length(test_data),1], res[1], res[2])
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/tukeys.r b/ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/tukeys.r
index 7fffbdd..f33b6ec 100644
--- a/ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/tukeys.r
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/tukeys.r
@@ -32,12 +32,17 @@ ams_tukeys <- function(train_data, test_data, n) {
     lb <- quantiles[2] - n*iqr
     ub <- quantiles[4] + n*iqr
     if ( (x < lb)  || (x > ub) ) {
-      anomaly <- c(test_data[i,1], x)
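+      # niqr: how many IQRs the point lies beyond the nearer quartile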
+      if (x < lb) {
+        niqr <- (quantiles[2] - x) / iqr
+      } else {
+        niqr <- (x - quantiles[4]) / iqr
+      }
+      anomaly <- c(test_data[i,1], x, niqr)
       anomalies <- rbind(anomalies, anomaly)
     }
   }
   if(length(anomalies) > 0) {
-    names(anomalies) <- c("TS", "Value")
+    names(anomalies) <- c("TS", "Value", "niqr")
   }
   return (anomalies)
 }
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/util.R b/ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/util.R
deleted file mode 100644
index 3827006..0000000
--- a/ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/util.R
+++ /dev/null
@@ -1,36 +0,0 @@
-#
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#    http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-#
-
-#url_prefix = 'http://104.196.95.78:3000/api/datasources/proxy/1/ws/v1/timeline/metrics?'
-#url_suffix = '&startTime=1459972944&endTime=1491508944&precision=MINUTES'
-#data_url <- paste(url_prefix, query, sep ="")
-#data_url <- paste(data_url, url_suffix, sep="")
-
-get_data <- function(url) {
-  library(rjson)
-  res <- fromJSON(readLines(url)[1])
-  return (res)
-}
-
-find_index <- function(data, ts) {
-  for (i in 1:length(data)) {
-    if (as.numeric(ts) == as.numeric(data[i])) {
-      return (i)
-    }
-  }
-  return (-1)
-}
\ No newline at end of file
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/test/java/org/apache/ambari/metrics/alertservice/prototype/TestEmaTechnique.java b/ambari-metrics/ambari-metrics-alertservice/src/test/java/org/apache/ambari/metrics/alertservice/prototype/TestEmaTechnique.java
new file mode 100644
index 0000000..539ca40
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-alertservice/src/test/java/org/apache/ambari/metrics/alertservice/prototype/TestEmaTechnique.java
@@ -0,0 +1,86 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.metrics.alertservice.prototype;
+
+import org.apache.ambari.metrics.alertservice.prototype.methods.MetricAnomaly;
+import org.apache.ambari.metrics.alertservice.prototype.methods.ema.EmaTechnique;
+import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
+import org.junit.Assert;
+import org.junit.Test;
+
+import java.util.List;
+import java.util.TreeMap;
+
+public class TestEmaTechnique {
+
+  @Test
+  public void testEmaInitialization() {
+
+    EmaTechnique ema = new EmaTechnique(0.5, 3);
+    Assert.assertTrue(ema.getTrackedEmas().isEmpty());
+    Assert.assertTrue(ema.getStartingWeight() == 0.5);
+    Assert.assertTrue(ema.getStartTimesSdev() == 3);
+  }
+
+  @Test
+  public void testEma() {
+    EmaTechnique ema = new EmaTechnique(0.5, 3);
+
+    long now = System.currentTimeMillis();
+
+    TimelineMetric metric1 = new TimelineMetric();
+    metric1.setMetricName("M1");
+    metric1.setHostName("H1");
+    metric1.setStartTime(now - 1000);
+    metric1.setAppId("A1");
+    metric1.setInstanceId(null);
+    metric1.setType("Integer");
+
+    //Train
+    TreeMap<Long, Double> metricValues = new TreeMap<Long, Double>();
+    for (int i = 0; i < 50; i++) {
+      double metric = 20000 + Math.random();
+      metricValues.put(now - i * 100, metric);
+    }
+    metric1.setMetricValues(metricValues);
+    List<MetricAnomaly> anomalyList = ema.test(metric1);
+//    Assert.assertTrue(anomalyList.isEmpty());
+
+    metricValues = new TreeMap<Long, Double>();
+    for (int i = 0; i < 50; i++) {
+      double metric = 20000 + Math.random();
+      metricValues.put(now - i * 100, metric);
+    }
+    metric1.setMetricValues(metricValues);
+    anomalyList = ema.test(metric1);
+    Assert.assertFalse(anomalyList.isEmpty());
+    int l1 = anomalyList.size();
+
+    Assert.assertTrue(ema.updateModel(metric1, false, 20));
+    anomalyList = ema.test(metric1);
+    int l2 = anomalyList.size();
+    Assert.assertTrue(l2 < l1);
+
+    Assert.assertTrue(ema.updateModel(metric1, true, 50));
+    anomalyList = ema.test(metric1);
+    int l3 = anomalyList.size();
+    Assert.assertTrue(l3 > l2 && l3 > l1);
+
+  }
+}
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/test/java/org/apache/ambari/metrics/alertservice/prototype/TestRFunctionInvoker.java b/ambari-metrics/ambari-metrics-alertservice/src/test/java/org/apache/ambari/metrics/alertservice/prototype/TestRFunctionInvoker.java
new file mode 100644
index 0000000..9a102a0
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-alertservice/src/test/java/org/apache/ambari/metrics/alertservice/prototype/TestRFunctionInvoker.java
@@ -0,0 +1,161 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.metrics.alertservice.prototype;
+
+import org.apache.ambari.metrics.alertservice.prototype.common.ResultSet;
+import org.apache.ambari.metrics.alertservice.prototype.common.DataSeries;
+import org.apache.ambari.metrics.alertservice.seriesgenerator.UniformMetricSeries;
+import org.apache.commons.lang.ArrayUtils;
+import org.junit.Assert;
+import org.junit.Assume;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+import java.io.File;
+import java.net.URISyntaxException;
+import java.net.URL;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.Random;
+
+public class TestRFunctionInvoker {
+
+  private static String metricName = "TestMetric";
+  private static double[] ts;
+  private static String fullFilePath;
+
+  @BeforeClass
+  public static void init() throws URISyntaxException {
+
+    Assume.assumeTrue(System.getenv("R_HOME") != null);
+    ts = getTS(1000);
+    URL url = ClassLoader.getSystemResource("R-scripts");
+    fullFilePath = new File(url.toURI()).getAbsolutePath();
+    RFunctionInvoker.setScriptsDir(fullFilePath);
+  }
+
+  @Test
+  public void testTukeys() throws URISyntaxException {
+
+    double[] train_ts = ArrayUtils.subarray(ts, 0, 750);
+    double[] train_x = getRandomData(750);
+    DataSeries trainData = new DataSeries(metricName, train_ts, train_x);
+
+    double[] test_ts = ArrayUtils.subarray(ts, 750, 1000);
+    double[] test_x = getRandomData(250);
+    test_x[50] = 5.5; //Anomaly
+    DataSeries testData = new DataSeries(metricName, test_ts, test_x);
+    Map<String, String> configs = new HashMap<>();
+    configs.put("tukeys.n", "3");
+
+    ResultSet rs = RFunctionInvoker.tukeys(trainData, testData, configs);
+    Assert.assertEquals(2, rs.resultset.size());
+    Assert.assertEquals(5.5, rs.resultset.get(1)[0], 0.1);
+
+  }
+
+  public static void main(String[] args) throws URISyntaxException {
+
+    String metricName = "TestMetric";
+    double[] ts = getTS(1000);
+    URL url = ClassLoader.getSystemResource("R-scripts");
+    String fullFilePath = new File(url.toURI()).getAbsolutePath();
+    RFunctionInvoker.setScriptsDir(fullFilePath);
+
+    double[] train_ts = ArrayUtils.subarray(ts, 0, 750);
+    double[] train_x = getRandomData(750);
+    DataSeries trainData = new DataSeries(metricName, train_ts, train_x);
+
+    double[] test_ts = ArrayUtils.subarray(ts, 750, 1000);
+    double[] test_x = getRandomData(250);
+    test_x[50] = 5.5; //Anomaly
+    DataSeries testData = new DataSeries(metricName, test_ts, test_x);
+    ResultSet rs;
+
+    Map<String, String> configs = new HashMap<>();
+
+    System.out.println("TUKEYS");
+    configs.put("tukeys.n", "3");
+    rs = RFunctionInvoker.tukeys(trainData, testData, configs);
+    rs.print();
+    System.out.println("--------------");
+
+//    System.out.println("EMA Global");
+//    configs.put("ema.n", "3");
+//    configs.put("ema.w", "0.8");
+//    rs = RFunctionInvoker.ema_global(trainData, testData, configs);
+//    rs.print();
+//    System.out.println("--------------");
+//
+//    System.out.println("EMA Daily");
+//    rs = RFunctionInvoker.ema_daily(trainData, testData, configs);
+//    rs.print();
+//    System.out.println("--------------");
+//
+//    configs.put("ks.p_value", "0.00005");
+//    System.out.println("KS Test");
+//    rs = RFunctionInvoker.ksTest(trainData, testData, configs);
+//    rs.print();
+//    System.out.println("--------------");
+//
+    ts = getTS(5000);
+    train_ts = ArrayUtils.subarray(ts, 0, 4800);
+    train_x = getRandomData(4800);
+    trainData = new DataSeries(metricName, train_ts, train_x);
+    test_ts = ArrayUtils.subarray(ts, 4800, 5000);
+    test_x = getRandomData(200);
+    for (int i = 0; i < 200; i++) {
+      test_x[i] = test_x[i] * 5;
+    }
+    testData = new DataSeries(metricName, test_ts, test_x);
+    configs.put("hsdev.n", "3");
+    configs.put("hsdev.nhp", "3");
+    configs.put("hsdev.interval", "86400000");
+    configs.put("hsdev.period", "604800000");
+    System.out.println("HSdev");
+    rs = RFunctionInvoker.hsdev(trainData, testData, configs);
+    rs.print();
+    System.out.println("--------------");
+
+  }
+
+  static double[] getTS(int n) {
+    long currentTime = System.currentTimeMillis();
+    double[] ts = new double[n];
+    currentTime = currentTime - (currentTime % (5 * 60 * 1000));
+
+    for (int i = 0, j = n - 1; i < n; i++, j--) {
+      ts[j] = currentTime;
+      currentTime = currentTime - (5 * 60 * 1000);
+    }
+    return ts;
+  }
+
+  static double[] getRandomData(int n) {
+
+    UniformMetricSeries metricSeries = new UniformMetricSeries(10, 0.1, 0.05, 0.6, 0.8, true);
+    return metricSeries.getSeries(n);
+
+//    double[] metrics = new double[n];
+//    Random random = new Random();
+//    for (int i = 0; i < n; i++) {
+//      metrics[i] = random.nextDouble();
+//    }
+//    return metrics;
+  }
+}
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/test/java/org/apache/ambari/metrics/alertservice/prototype/TestTukeys.java b/ambari-metrics/ambari-metrics-alertservice/src/test/java/org/apache/ambari/metrics/alertservice/prototype/TestTukeys.java
new file mode 100644
index 0000000..bb409cf
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-alertservice/src/test/java/org/apache/ambari/metrics/alertservice/prototype/TestTukeys.java
@@ -0,0 +1,100 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.metrics.alertservice.prototype;
+
+import org.apache.ambari.metrics.alertservice.prototype.methods.MetricAnomaly;
+import org.apache.ambari.metrics.alertservice.prototype.methods.ema.EmaTechnique;
+import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
+import org.apache.hadoop.metrics2.sink.timeline.TimelineMetrics;
+import org.junit.Assert;
+import org.junit.Assume;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+import java.io.File;
+import java.net.URISyntaxException;
+import java.net.URL;
+import java.net.UnknownHostException;
+import java.util.List;
+import java.util.TreeMap;
+
+public class TestTukeys {
+
+  @BeforeClass
+  public static void init() throws URISyntaxException {
+    Assume.assumeTrue(System.getenv("R_HOME") != null);
+  }
+
+  @Test
+  public void testPointInTimeDetectionSystem() throws UnknownHostException, URISyntaxException {
+
+    URL url = ClassLoader.getSystemResource("R-scripts");
+    String fullFilePath = new File(url.toURI()).getAbsolutePath();
+    RFunctionInvoker.setScriptsDir(fullFilePath);
+
+    MetricsCollectorInterface metricsCollectorInterface = new MetricsCollectorInterface("avijayan-ams-1.openstacklocal","http", "6188");
+
+    EmaTechnique ema = new EmaTechnique(0.5, 3);
+    long now = System.currentTimeMillis();
+
+    TimelineMetric metric1 = new TimelineMetric();
+    metric1.setMetricName("mm9");
+    metric1.setHostName(MetricsCollectorInterface.getDefaultLocalHostName());
+    metric1.setStartTime(now);
+    metric1.setAppId("aa9");
+    metric1.setInstanceId(null);
+    metric1.setType("Integer");
+
+    //Train
+    TreeMap<Long, Double> metricValues = new TreeMap<Long, Double>();
+
+    //2hr data.
+    for (int i = 0; i < 120; i++) {
+      double metric = 20000 + Math.random();
+      metricValues.put(now - i * 60 * 1000, metric);
+    }
+    metric1.setMetricValues(metricValues);
+    TimelineMetrics timelineMetrics = new TimelineMetrics();
+    timelineMetrics.addOrMergeTimelineMetric(metric1);
+
+    metricsCollectorInterface.emitMetrics(timelineMetrics);
+
+    List<MetricAnomaly> anomalyList = ema.test(metric1);
+    metricsCollectorInterface.publish(anomalyList);
+//
+//    PointInTimeADSystem pointInTimeADSystem = new PointInTimeADSystem(ema, metricsCollectorInterface, 3, 5*60*1000, 15*60*1000);
+//    pointInTimeADSystem.runOnce();
+//
+//    List<MetricAnomaly> anomalyList2 = ema.test(metric1);
+//
+//    pointInTimeADSystem.runOnce();
+//    List<MetricAnomaly> anomalyList3 = ema.test(metric1);
+//
+//    pointInTimeADSystem.runOnce();
+//    List<MetricAnomaly> anomalyList4 = ema.test(metric1);
+//
+//    pointInTimeADSystem.runOnce();
+//    List<MetricAnomaly> anomalyList5 = ema.test(metric1);
+//
+//    pointInTimeADSystem.runOnce();
+//    List<MetricAnomaly> anomalyList6 = ema.test(metric1);
+//
+//    Assert.assertTrue(anomalyList6.size() < anomalyList.size());
+  }
+}
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/test/java/org/apache/ambari/metrics/alertservice/seriesgenerator/MetricSeriesGeneratorTest.java b/ambari-metrics/ambari-metrics-alertservice/src/test/java/org/apache/ambari/metrics/alertservice/seriesgenerator/MetricSeriesGeneratorTest.java
new file mode 100644
index 0000000..575ea8b
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-alertservice/src/test/java/org/apache/ambari/metrics/alertservice/seriesgenerator/MetricSeriesGeneratorTest.java
@@ -0,0 +1,108 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.metrics.alertservice.seriesgenerator;
+
+import org.junit.Assert;
+import org.junit.Test;
+
+public class MetricSeriesGeneratorTest {
+
+  @Test
+  public void testUniformSeries() {
+
+    UniformMetricSeries metricSeries = new UniformMetricSeries(5, 0.2, 0, 0, 0, true);
+    double value = metricSeries.nextValue();
+    Assert.assertTrue(value <= 6 && value >= 4);
+
+    double[] uniformSeries = MetricSeriesGeneratorFactory.createUniformSeries(50, 10, 0.2, 0.1, 0.4, 0.5, true);
+    Assert.assertTrue(uniformSeries.length == 50);
+
+    for (int i = 0; i < uniformSeries.length; i++) {
+      double value = uniformSeries[i];
+
+      if (value > 10 * 1.2) {
+        Assert.assertTrue(value >= 10 * 1.4 && value <= 10 * 1.6);
+      } else {
+        Assert.assertTrue(value >= 10 * 0.8 && value <= 10 * 1.2);
+      }
+    }
+  }
+
+  @Test
+  public void testNormalSeries() {
+    NormalMetricSeries metricSeries = new NormalMetricSeries(0, 1, 0, 0, 0, true);
+    double value = metricSeries.nextValue();
+    // Note: a draw from N(0, 1) falls within +/-3 standard deviations ~99.7% of the time.
+    Assert.assertTrue(value <= 3 && value >= -3);
+  }
+
+  @Test
+  public void testMonotonicSeries() {
+
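+    // Slope 0.5 from start value 0 with no noise: expected values 0, 0.5, 1.0, ...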
+    MonotonicMetricSeries metricSeries = new MonotonicMetricSeries(0, 0.5, 0, 0, 0, 0, true);
+    Assert.assertTrue(metricSeries.nextValue() == 0);
+    Assert.assertTrue(metricSeries.nextValue() == 0.5);
+
+    double[] incSeries = MetricSeriesGeneratorFactory.createMonotonicSeries(20, 0, 0.5, 0, 0, 0, 0, true);
+    Assert.assertTrue(incSeries.length == 20);
+    for (int i = 0; i < incSeries.length; i++) {
+      Assert.assertTrue(incSeries[i] == i * 0.5);
+    }
+  }
+
+  @Test
+  public void testDualBandSeries() {
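+    // Expected from the assertions below: a 5-point low band around 5 (+/- 20%)
+    // alternating with a 4-point high band around 15 (+/- 30%).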
+    double[] dualBandSeries = MetricSeriesGeneratorFactory.getDualBandSeries(30, 5, 0.2, 5, 15, 0.3, 4);
+    Assert.assertTrue(dualBandSeries[0] >= 4 && dualBandSeries[0] <= 6);
+    Assert.assertTrue(dualBandSeries[4] >= 4 && dualBandSeries[4] <= 6);
+    Assert.assertTrue(dualBandSeries[5] >= 10.5 && dualBandSeries[5] <= 19.5);
+    Assert.assertTrue(dualBandSeries[8] >= 10.5 && dualBandSeries[8] <= 19.5);
+    Assert.assertTrue(dualBandSeries[9] >= 4 && dualBandSeries[9] <= 6);
+  }
+
+  @Test
+  public void testStepSeries() {
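+    // Zero slope and zero deviation: each 5-point step raises the level by 50%
+    // (10, then 15, then 22.5, ...).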
+    double[] stepSeries = MetricSeriesGeneratorFactory.getStepFunctionSeries(30, 10, 0, 0, 5, 5, 0.5, true);
+
+    Assert.assertTrue(stepSeries[0] == 10);
+    Assert.assertTrue(stepSeries[4] == 10);
+
+    Assert.assertTrue(stepSeries[5] == 10 * 1.5);
+    Assert.assertTrue(stepSeries[9] == 10 * 1.5);
+
+    Assert.assertTrue(stepSeries[10] == 10 * 1.5 * 1.5);
+    Assert.assertTrue(stepSeries[14] == 10 * 1.5 * 1.5);
+  }
+
+  @Test
+  public void testSteadySeriesWithTurbulence() {
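+    // Steady value 5 with a turbulent period: exactly 5 samples are expected to spike to 10.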
+    double[] steadySeriesWithTurbulence = MetricSeriesGeneratorFactory.getSteadySeriesWithTurbulentPeriod(30, 5, 0, 1, 1, 5, 1);
+
+    int count = 0;
+    for (int i = 0; i < steadySeriesWithTurbulence.length; i++) {
+      if (steadySeriesWithTurbulence[i] == 10) {
+        count++;
+      }
+    }
+    Assert.assertTrue(count == 5);
+  }
+}
diff --git a/ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/TimelineMetric.java b/ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/TimelineMetric.java
index 1f03fe9..3dfcf4e 100644
--- a/ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/TimelineMetric.java
+++ b/ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/TimelineMetric.java
@@ -17,6 +17,7 @@
  */
 package org.apache.hadoop.metrics2.sink.timeline;
 
+import java.io.Serializable;
 import java.util.HashMap;
 import java.util.Map;
 import java.util.TreeMap;
@@ -34,11 +35,11 @@ import org.codehaus.jackson.map.annotate.JsonDeserialize;
 @XmlAccessorType(XmlAccessType.NONE)
 @InterfaceAudience.Public
 @InterfaceStability.Unstable
-public class TimelineMetric implements Comparable<TimelineMetric> {
+public class TimelineMetric implements Comparable<TimelineMetric>, Serializable {
 
   private String metricName;
   private String appId;
-  private String instanceId;
+  private String instanceId = null;
   private String hostName;
   private long startTime;
   private String type;
diff --git a/ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/TimelineMetrics.java b/ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/TimelineMetrics.java
index 0c5965c..a8d3da8 100644
--- a/ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/TimelineMetrics.java
+++ b/ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/TimelineMetrics.java
@@ -17,6 +17,7 @@
  */
 package org.apache.hadoop.metrics2.sink.timeline;
 
+import java.io.Serializable;
 import java.util.ArrayList;
 import java.util.List;
 
@@ -35,7 +36,7 @@ import org.apache.hadoop.classification.InterfaceStability;
 @XmlAccessorType(XmlAccessType.NONE)
 @InterfaceAudience.Public
 @InterfaceStability.Unstable
-public class TimelineMetrics {
+public class TimelineMetrics implements Serializable {
 
   private List<TimelineMetric> allMetrics = new ArrayList<TimelineMetric>();
 
diff --git a/ambari-metrics/ambari-metrics-spark/src/main/scala/org/apache/ambari/metrics/spark/MetricAnomalyDetector.scala b/ambari-metrics/ambari-metrics-spark/src/main/scala/org/apache/ambari/metrics/spark/MetricAnomalyDetector.scala
index bff094b..e51a47f 100644
--- a/ambari-metrics/ambari-metrics-spark/src/main/scala/org/apache/ambari/metrics/spark/MetricAnomalyDetector.scala
+++ b/ambari-metrics/ambari-metrics-spark/src/main/scala/org/apache/ambari/metrics/spark/MetricAnomalyDetector.scala
@@ -21,13 +21,13 @@ import java.util
 import java.util.logging.LogManager
 
 import com.fasterxml.jackson.databind.ObjectMapper
+import org.apache.ambari.metrics.alertservice.prototype.MetricsCollectorInterface
 import org.apache.spark.SparkConf
 import org.apache.spark.streaming._
 import org.apache.spark.streaming.kafka._
-import org.apache.ambari.metrics.alertservice.common.{MetricAnomaly, TimelineMetrics}
-import org.apache.ambari.metrics.alertservice.methods.MetricAnomalyModel
-import org.apache.ambari.metrics.alertservice.methods.ema.{EmaModel, EmaModelLoader}
-import org.apache.ambari.metrics.alertservice.spark.AnomalyMetricPublisher
+import org.apache.ambari.metrics.alertservice.prototype.methods.{AnomalyDetectionTechnique, MetricAnomaly}
+import org.apache.ambari.metrics.alertservice.prototype.methods.ema.{EmaModelLoader, EmaTechnique}
+import org.apache.hadoop.metrics2.sink.timeline.TimelineMetrics
 import org.apache.log4j.Logger
 import org.apache.spark.storage.StorageLevel
 
@@ -41,7 +41,7 @@ object MetricAnomalyDetector extends Logging {
   var groupId = "ambari-metrics-group"
   var topicName = "ambari-metrics-topic"
   var numThreads = 1
-  val anomalyDetectionModels: Array[MetricAnomalyModel] = Array[MetricAnomalyModel]()
+  var anomalyDetectionModels: Array[AnomalyDetectionTechnique] = Array[AnomalyDetectionTechnique]()
 
   def main(args: Array[String]): Unit = {
 
@@ -54,7 +54,7 @@ object MetricAnomalyDetector extends Logging {
     }
 
     for (method <- args(0).split(",")) {
-      if (method == "ema") anomalyDetectionModels :+ new EmaModel()
+      // Reassign: :+ returns a new array, so discarding the result would register nothing.
+      if (method == "ema") anomalyDetectionModels = anomalyDetectionModels :+ new EmaTechnique(0.5, 3)
     }
 
     val appIds = util.Arrays.asList(args(1).split(","))
@@ -63,7 +63,7 @@ object MetricAnomalyDetector extends Logging {
     val collectorPort = args(3)
     val collectorProtocol = args(4)
 
-    val anomalyMetricPublisher: AnomalyMetricPublisher = new AnomalyMetricPublisher(collectorHost, collectorProtocol, collectorPort)
+    val anomalyMetricPublisher: MetricsCollectorInterface = new MetricsCollectorInterface(collectorHost, collectorProtocol, collectorPort)
 
     val sparkConf = new SparkConf().setAppName("AmbariMetricsAnomalyDetector")
 
@@ -99,10 +99,6 @@ object MetricAnomalyDetector extends Logging {
         for (timelineMetric <- timelineMetrics.getMetrics) {
           var anomalies = emaModel.test(timelineMetric)
           anomalyMetricPublisher.publish(anomalies)
-          for (anomaly <- anomalies) {
-            var an = anomaly : MetricAnomaly
-            logger.info(an.getAnomalyAsString)
-          }
         }
       })
     })
diff --git a/ambari-metrics/ambari-metrics-spark/src/main/scala/org/apache/ambari/metrics/spark/SparkPhoenixReader.scala b/ambari-metrics/ambari-metrics-spark/src/main/scala/org/apache/ambari/metrics/spark/SparkPhoenixReader.scala
index 3c8e1ed..edd6366 100644
--- a/ambari-metrics/ambari-metrics-spark/src/main/scala/org/apache/ambari/metrics/spark/SparkPhoenixReader.scala
+++ b/ambari-metrics/ambari-metrics-spark/src/main/scala/org/apache/ambari/metrics/spark/SparkPhoenixReader.scala
@@ -17,8 +17,8 @@
 
 package org.apache.ambari.metrics.spark
 
-import org.apache.ambari.metrics.alertservice.common.TimelineMetric
-import org.apache.ambari.metrics.alertservice.methods.ema.EmaModel
+import org.apache.ambari.metrics.alertservice.prototype.methods.ema.EmaTechnique
+import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric
 import org.apache.spark.mllib.stat.Statistics
 import org.apache.spark.sql.SQLContext
 import org.apache.spark.{SparkConf, SparkContext}
@@ -61,15 +61,19 @@ object SparkPhoenixReader {
       t => metricValues.put(t.getLong(3), t.getDouble(4) / t.getInt(5))
     )
 
-    //val metricName = result.head().getString(0)
+    //val seriesName = result.head().getString(0)
     //val hostname = result.head().getString(1)
     //val appId = result.head().getString(2)
 
-    val timelineMetric = new TimelineMetric(metricName, appId, hostname, metricValues)
+    val timelineMetric = new TimelineMetric()
+    timelineMetric.setMetricName(metricName)
+    timelineMetric.setAppId(appId)
+    timelineMetric.setHostName(hostname)
+    timelineMetric.setMetricValues(metricValues)
 
-    var emaModel = new EmaModel()
-    emaModel.train(timelineMetric, weight, timessdev)
-    emaModel.save(sc, modelDir)
+//    var emaModel = new EmaTechnique()
+//    emaModel.train(timelineMetric, weight, timessdev)
+//    emaModel.save(sc, modelDir)
 
 //    var metricData:Seq[Double] = Seq.empty
 //    result.collect().foreach(
diff --git a/ambari-metrics/ambari-metrics-timelineservice/pom.xml b/ambari-metrics/ambari-metrics-timelineservice/pom.xml
index d306ad3..a8ac1da 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/pom.xml
+++ b/ambari-metrics/ambari-metrics-timelineservice/pom.xml
@@ -348,7 +348,7 @@
     <dependency>
       <groupId>org.apache.ambari</groupId>
       <artifactId>ambari-metrics-alertservice</artifactId>
-      <version>2.0.0.0-SNAPSHOT</version>
+      <version>${project.version}</version>
     </dependency>
 
     <dependency>
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/HBaseTimelineMetricsService.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/HBaseTimelineMetricsService.java
index 110b094..95682f9 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/HBaseTimelineMetricsService.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/HBaseTimelineMetricsService.java
@@ -407,30 +407,6 @@ public class HBaseTimelineMetricsService extends AbstractService implements Time
   }
 
 
-  private org.apache.ambari.metrics.alertservice.common.TimelineMetrics fromTimelineMetrics(TimelineMetrics timelineMetrics) {
-    org.apache.ambari.metrics.alertservice.common.TimelineMetrics otherMetrics = new org.apache.ambari.metrics.alertservice.common.TimelineMetrics();
-
-    List<org.apache.ambari.metrics.alertservice.common.TimelineMetric> timelineMetricList = new ArrayList<>();
-    for (TimelineMetric timelineMetric : timelineMetrics.getMetrics()) {
-      timelineMetricList.add(fromTimelineMetric(timelineMetric));
-    }
-    otherMetrics.setMetrics(timelineMetricList);
-    return otherMetrics;
-  }
-
-  private org.apache.ambari.metrics.alertservice.common.TimelineMetric fromTimelineMetric(TimelineMetric timelineMetric) {
-
-    org.apache.ambari.metrics.alertservice.common.TimelineMetric otherMetric = new org.apache.ambari.metrics.alertservice.common.TimelineMetric();
-    otherMetric.setMetricValues(timelineMetric.getMetricValues());
-    otherMetric.setStartTime(timelineMetric.getStartTime());
-    otherMetric.setHostName(timelineMetric.getHostName());
-    otherMetric.setInstanceId(timelineMetric.getInstanceId());
-    otherMetric.setAppId(timelineMetric.getAppId());
-    otherMetric.setMetricName(timelineMetric.getMetricName());
-
-    return otherMetric;
-  }
-
   @Override
   public TimelinePutResponse putContainerMetrics(List<ContainerMetric> metrics)
       throws SQLException, IOException {
diff --git a/ambari-server/src/test/java/org/apache/ambari/server/controller/metrics/timeline/MetricsPaddingMethodTest.java b/ambari-server/src/test/java/org/apache/ambari/server/controller/metrics/timeline/MetricsPaddingMethodTest.java
index 4ec624a..b57f7e9 100644
--- a/ambari-server/src/test/java/org/apache/ambari/server/controller/metrics/timeline/MetricsPaddingMethodTest.java
+++ b/ambari-server/src/test/java/org/apache/ambari/server/controller/metrics/timeline/MetricsPaddingMethodTest.java
@@ -39,7 +39,6 @@ public class MetricsPaddingMethodTest {
     timelineMetric.setMetricName("m1");
     timelineMetric.setHostName("h1");
     timelineMetric.setAppId("a1");
-    timelineMetric.setTimestamp(now);
     TreeMap<Long, Double> inputValues = new TreeMap<>();
     inputValues.put(now - 1000, 1.0d);
     inputValues.put(now - 2000, 2.0d);
@@ -67,7 +66,6 @@ public class MetricsPaddingMethodTest {
     timelineMetric.setMetricName("m1");
     timelineMetric.setHostName("h1");
     timelineMetric.setAppId("a1");
-    timelineMetric.setTimestamp(now);
     TreeMap<Long, Double> inputValues = new TreeMap<>();
     inputValues.put(now - 1000, 1.0d);
     inputValues.put(now - 2000, 2.0d);
@@ -95,7 +93,6 @@ public class MetricsPaddingMethodTest {
     timelineMetric.setMetricName("m1");
     timelineMetric.setHostName("h1");
     timelineMetric.setAppId("a1");
-    timelineMetric.setTimestamp(now);
     TreeMap<Long, Double> inputValues = new TreeMap<>();
     inputValues.put(now, 0.0d);
     inputValues.put(now - 1000, 1.0d);
@@ -123,7 +120,6 @@ public class MetricsPaddingMethodTest {
     timelineMetric.setMetricName("m1");
     timelineMetric.setHostName("h1");
     timelineMetric.setAppId("a1");
-    timelineMetric.setTimestamp(now);
     TreeMap<Long, Double> inputValues = new TreeMap<>();
     inputValues.put(now - 1000, 1.0d);
     timelineMetric.setMetricValues(inputValues);
@@ -149,7 +145,6 @@ public class MetricsPaddingMethodTest {
     timelineMetric.setMetricName("m1");
     timelineMetric.setHostName("h1");
     timelineMetric.setAppId("a1");
-    timelineMetric.setTimestamp(now);
     TreeMap<Long, Double> inputValues = new TreeMap<>();
     inputValues.put(now - 1000, 1.0d);
     timelineMetric.setMetricValues(inputValues);
@@ -173,7 +168,6 @@ public class MetricsPaddingMethodTest {
     timelineMetric.setMetricName("m1");
     timelineMetric.setHostName("h1");
     timelineMetric.setAppId("a1");
-    timelineMetric.setTimestamp(now);
     TreeMap<Long, Double> inputValues = new TreeMap<>();
 
     long seconds = 1000;
@@ -234,7 +228,6 @@ public class MetricsPaddingMethodTest {
     timelineMetric.setMetricName("m1");
     timelineMetric.setHostName("h1");
     timelineMetric.setAppId("a1");
-    timelineMetric.setTimestamp(now);
     TreeMap<Long, Double> inputValues = new TreeMap<>();
     inputValues.put(now - 100, 1.0d);
     inputValues.put(now - 200, 2.0d);


[ambari] 11/39: AMBARI-21279 Handle scenario when host in-memory aggregation is not working (dsen)

avijayan pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/ambari.git

commit 4efcf9d9f22131e0afcd1e46de14660f18d2f00e
Author: Dmytro Sen <ds...@apache.org>
AuthorDate: Tue Jul 11 14:17:58 2017 +0300

    AMBARI-21279 Handle scenario when host in-memory aggregation is not working (dsen)
---
 .../timeline/HBaseTimelineMetricsService.java      |  15 +-
 .../TimelineMetricAggregatorFactory.java           |  49 ++++
 .../TimelineMetricFilteringHostAggregator.java     |  94 ++++++++
 .../v2/TimelineMetricFilteringHostAggregator.java  | 119 ++++++++++
 .../discovery/TimelineMetricMetadataManager.java   |  76 +++++--
 .../metrics/timeline/query/Condition.java          |   2 +
 .../metrics/timeline/query/DefaultCondition.java   | 246 ++++++++++-----------
 .../metrics/timeline/query/EmptyCondition.java     |  11 +
 .../metrics/timeline/query/PhoenixTransactSQL.java |  13 +-
 .../query/SplitByMetricNamesCondition.java         |  10 +
 .../metrics/timeline/TestPhoenixTransactSQL.java   |   4 +-
 .../timeline/query/DefaultConditionTest.java       | 194 ++++++++++------
 12 files changed, 608 insertions(+), 225 deletions(-)

diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/HBaseTimelineMetricsService.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/HBaseTimelineMetricsService.java
index 2d890c0..4318fd3 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/HBaseTimelineMetricsService.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/HBaseTimelineMetricsService.java
@@ -82,6 +82,7 @@ public class HBaseTimelineMetricsService extends AbstractService implements Time
   private static volatile boolean isInitialized = false;
   private final ScheduledExecutorService watchdogExecutorService = Executors.newSingleThreadScheduledExecutor();
   private final Map<AGGREGATOR_NAME, ScheduledExecutorService> scheduledExecutors = new HashMap<>();
+  private final ConcurrentHashMap<String, Long> postedAggregatedMap = new ConcurrentHashMap<>();
   private TimelineMetricMetadataManager metricMetadataManager;
   private Integer defaultTopNHostsLimit;
   private MetricCollectorHAController haController;
@@ -172,7 +173,11 @@ public class HBaseTimelineMetricsService extends AbstractService implements Time
 
       // Start the minute host aggregator
       if (Boolean.parseBoolean(metricsConf.get(TIMELINE_METRICS_HOST_INMEMORY_AGGREGATION, "true"))) {
-        LOG.info("timeline.metrics.host.inmemory.aggregation is set to True, disabling host minute aggregation on collector");
+        LOG.info("timeline.metrics.host.inmemory.aggregation is set to True, switching to filtering host minute aggregation on collector");
+        TimelineMetricAggregator minuteHostAggregator =
+          TimelineMetricAggregatorFactory.createFilteringTimelineMetricAggregatorMinute(
+            hBaseAccessor, metricsConf, metricMetadataManager, haController, postedAggregatedMap);
+        scheduleAggregatorThread(minuteHostAggregator);
       } else {
         TimelineMetricAggregator minuteHostAggregator =
           TimelineMetricAggregatorFactory.createTimelineMetricAggregatorMinute(
@@ -463,8 +468,16 @@ public class HBaseTimelineMetricsService extends AbstractService implements Time
   @Override
   public TimelinePutResponse putHostAggregatedMetrics(AggregationResult aggregationResult) throws SQLException, IOException {
     Map<TimelineMetric, MetricHostAggregate> aggregateMap = new HashMap<>();
+    String hostname = null;
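+    // An AggregationResult is posted per host, so the hostname can be taken from the first entry.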
     for (TimelineMetricWithAggregatedValues entry : aggregationResult.getResult()) {
       aggregateMap.put(entry.getTimelineMetric(), entry.getMetricAggregate());
+      hostname = hostname == null ? entry.getTimelineMetric().getHostName() : hostname;
+    }
+    long timestamp = aggregationResult.getTimeInMilis();
+    if (hostname != null) {
+      postedAggregatedMap.put(hostname, timestamp);
+    }
+    if (LOG.isDebugEnabled()) {
+      LOG.debug(String.format("Adding host %s to aggregated by in-memory aggregator. Timestamp : %s", hostname, timestamp));
     }
     hBaseAccessor.saveHostAggregateRecords(aggregateMap, PhoenixTransactSQL.METRICS_AGGREGATE_MINUTE_TABLE_NAME);
 
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricAggregatorFactory.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricAggregatorFactory.java
index 081e610..e90fa84 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricAggregatorFactory.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricAggregatorFactory.java
@@ -23,6 +23,8 @@ import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.availability.MetricCollectorHAController;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.discovery.TimelineMetricMetadataManager;
 
+import java.util.concurrent.ConcurrentHashMap;
+
 import static java.util.concurrent.TimeUnit.SECONDS;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.CLUSTER_AGGREGATOR_DAILY_CHECKPOINT_CUTOFF_MULTIPLIER;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.CLUSTER_AGGREGATOR_DAILY_DISABLED;
@@ -455,4 +457,51 @@ public class TimelineMetricAggregatorFactory {
       haController
     );
   }
+
+  public static TimelineMetricAggregator createFilteringTimelineMetricAggregatorMinute(
+      PhoenixHBaseAccessor hBaseAccessor, Configuration metricsConf,
+      TimelineMetricMetadataManager metricMetadataManager,
+      MetricCollectorHAController haController,
+      ConcurrentHashMap<String, Long> postedAggregatedMap) {
+    String checkpointDir = metricsConf.get(
+      TIMELINE_METRICS_AGGREGATOR_CHECKPOINT_DIR, DEFAULT_CHECKPOINT_LOCATION);
+    String checkpointLocation = FilenameUtils.concat(checkpointDir,
+      HOST_AGGREGATE_MINUTE_CHECKPOINT_FILE);
+    long sleepIntervalMillis = SECONDS.toMillis(metricsConf.getLong
+      (HOST_AGGREGATOR_MINUTE_SLEEP_INTERVAL, 300L));  // 5 mins
+
+    int checkpointCutOffMultiplier = metricsConf.getInt
+      (HOST_AGGREGATOR_MINUTE_CHECKPOINT_CUTOFF_MULTIPLIER, 3);
+    String hostAggregatorDisabledParam = HOST_AGGREGATOR_MINUTE_DISABLED;
+
+    String inputTableName = METRICS_RECORD_TABLE_NAME;
+    String outputTableName = METRICS_AGGREGATE_MINUTE_TABLE_NAME;
+
+    if (useGroupByAggregator(metricsConf)) {
+      return new org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.v2.TimelineMetricFilteringHostAggregator(
+        METRIC_RECORD_MINUTE,
+        metricMetadataManager,
+        hBaseAccessor, metricsConf,
+        checkpointLocation,
+        sleepIntervalMillis,
+        checkpointCutOffMultiplier,
+        hostAggregatorDisabledParam,
+        inputTableName,
+        outputTableName,
+        120000L,
+        haController,
+        postedAggregatedMap
+      );
+    }
+
+    return new TimelineMetricFilteringHostAggregator(
+      METRIC_RECORD_MINUTE,
+      metricMetadataManager,
+      hBaseAccessor, metricsConf,
+      checkpointLocation,
+      sleepIntervalMillis,
+      checkpointCutOffMultiplier,
+      hostAggregatorDisabledParam,
+      inputTableName,
+      outputTableName,
+      120000L,
+      haController,
+      postedAggregatedMap);
+  }
 }
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricFilteringHostAggregator.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricFilteringHostAggregator.java
new file mode 100644
index 0000000..6e766e9
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricFilteringHostAggregator.java
@@ -0,0 +1,94 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.PhoenixHBaseAccessor;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.availability.AggregationTaskRunner;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.availability.MetricCollectorHAController;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.discovery.TimelineMetricMetadataManager;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.Condition;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.DefaultCondition;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.ConcurrentHashMap;
+
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.GET_METRIC_AGGREGATE_ONLY_SQL;
+
+public class TimelineMetricFilteringHostAggregator extends TimelineMetricHostAggregator {
+  private static final Log LOG = LogFactory.getLog(TimelineMetricFilteringHostAggregator.class);
+  private TimelineMetricMetadataManager metricMetadataManager;
+  private ConcurrentHashMap<String, Long> postedAggregatedMap;
+
+  public TimelineMetricFilteringHostAggregator(AggregationTaskRunner.AGGREGATOR_NAME aggregatorName,
+                                               TimelineMetricMetadataManager metricMetadataManager,
+                                               PhoenixHBaseAccessor hBaseAccessor,
+                                               Configuration metricsConf,
+                                               String checkpointLocation,
+                                               Long sleepIntervalMillis,
+                                               Integer checkpointCutOffMultiplier,
+                                               String hostAggregatorDisabledParam,
+                                               String tableName,
+                                               String outputTableName,
+                                               Long nativeTimeRangeDelay,
+                                               MetricCollectorHAController haController,
+                                               ConcurrentHashMap<String, Long> postedAggregatedMap) {
+    super(aggregatorName, metricMetadataManager,
+      hBaseAccessor, metricsConf,
+      checkpointLocation,
+      sleepIntervalMillis,
+      checkpointCutOffMultiplier,
+      hostAggregatorDisabledParam,
+      tableName,
+      outputTableName,
+      nativeTimeRangeDelay,
+      haController);
+    this.metricMetadataManager = metricMetadataManager;
+    this.postedAggregatedMap = postedAggregatedMap;
+  }
+
+  @Override
+  protected Condition prepareMetricQueryCondition(long startTime, long endTime) {
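+    // Hosts that posted in-memory aggregates inside this window are already covered;
+    // build the query over the remaining (not yet aggregated) hosts only.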
+    List<String> aggregatedHostnames = new ArrayList<>();
+    for (Map.Entry<String, Long> entry : postedAggregatedMap.entrySet()) {
+      if (entry.getValue() > startTime && entry.getValue() <= endTime) {
+        aggregatedHostnames.add(entry.getKey());
+      }
+    }
+    List<String> notAggregatedHostnames = metricMetadataManager.getNotLikeHostnames(aggregatedHostnames);
+    if (LOG.isDebugEnabled()) {
+      LOG.debug("Already aggregated hostnames based on postedAggregatedMap : " + aggregatedHostnames);
+      LOG.debug("Hostnames that will be aggregated : " + notAggregatedHostnames);
+    }
+    List<byte[]> uuids = metricMetadataManager.getUuids(new ArrayList<String>(), notAggregatedHostnames, "", "");
+
+    Condition condition = new DefaultCondition(uuids, null, null, null, null, startTime,
+      endTime, null, null, true);
+    condition.setNoLimit();
+    condition.setFetchSize(resultsetFetchSize);
+    condition.setStatement(String.format(GET_METRIC_AGGREGATE_ONLY_SQL, tableName));
+    // Retaining order of the row-key avoids client side merge sort.
+    condition.addOrderByColumn("UUID");
+    condition.addOrderByColumn("SERVER_TIME");
+    return condition;
+  }
+}
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/v2/TimelineMetricFilteringHostAggregator.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/v2/TimelineMetricFilteringHostAggregator.java
new file mode 100644
index 0000000..f6e0d8f
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/v2/TimelineMetricFilteringHostAggregator.java
@@ -0,0 +1,119 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.v2;
+
+import org.apache.commons.collections.CollectionUtils;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.PhoenixHBaseAccessor;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.availability.AggregationTaskRunner;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.availability.MetricCollectorHAController;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.discovery.TimelineMetricMetadataManager;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.Condition;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.EmptyCondition;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.ConcurrentHashMap;
+
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.GET_AGGREGATED_HOST_METRIC_GROUPBY_SQL;
+
+public class TimelineMetricFilteringHostAggregator extends TimelineMetricHostAggregator {
+  private TimelineMetricMetadataManager metricMetadataManager;
+  private ConcurrentHashMap<String, Long> postedAggregatedMap;
+
+  public TimelineMetricFilteringHostAggregator(AggregationTaskRunner.AGGREGATOR_NAME aggregatorName,
+                                               TimelineMetricMetadataManager metricMetadataManager,
+                                               PhoenixHBaseAccessor hBaseAccessor,
+                                               Configuration metricsConf,
+                                               String checkpointLocation,
+                                               Long sleepIntervalMillis,
+                                               Integer checkpointCutOffMultiplier,
+                                               String hostAggregatorDisabledParam,
+                                               String tableName,
+                                               String outputTableName,
+                                               Long nativeTimeRangeDelay,
+                                               MetricCollectorHAController haController,
+                                               ConcurrentHashMap<String, Long> postedAggregatedMap) {
+    super(aggregatorName,
+      hBaseAccessor, metricsConf,
+      checkpointLocation,
+      sleepIntervalMillis,
+      checkpointCutOffMultiplier,
+      hostAggregatorDisabledParam,
+      tableName,
+      outputTableName,
+      nativeTimeRangeDelay,
+      haController);
+    this.metricMetadataManager = metricMetadataManager;
+    this.postedAggregatedMap = postedAggregatedMap;
+  }
+
+  @Override
+  protected Condition prepareMetricQueryCondition(long startTime, long endTime) {
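+    // Hosts that posted in-memory aggregates inside this window are already covered;
+    // build the query over the remaining (not yet aggregated) hosts only.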
+    List<String> aggregatedHostnames = new ArrayList<>();
+    for (Map.Entry<String, Long> entry : postedAggregatedMap.entrySet()) {
+      if (entry.getValue() > startTime && entry.getValue() <= endTime) {
+        aggregatedHostnames.add(entry.getKey());
+      }
+    }
+    List<String> notAggregatedHostnames = metricMetadataManager.getNotLikeHostnames(aggregatedHostnames);
+    if (LOG.isDebugEnabled()) {
+      LOG.debug("Already aggregated hostnames based on postedAggregatedMap : " + aggregatedHostnames);
+      LOG.debug("Hostnames that will be aggregated : " + notAggregatedHostnames);
+    }
+    List<byte[]> uuids = metricMetadataManager.getUuids(new ArrayList<String>(), notAggregatedHostnames, "", "");
+
+    EmptyCondition condition = new EmptyCondition();
+    condition.setDoUpdate(true);
+
+    condition.setStatement(String.format(GET_AGGREGATED_HOST_METRIC_GROUPBY_SQL,
+      outputTableName, endTime, tableName,
+      getDownsampledMetricSkipClause() + getIncludedUuidsClause(uuids), startTime, endTime));
+
+    if (LOG.isDebugEnabled()) {
+      LOG.debug("Condition: " + condition.toString());
+    }
+
+    return condition;
+  }
+
+  private String getIncludedUuidsClause(List<byte[]> uuids) {
+    // Guard: with no uuids, return an empty clause rather than an unbalanced "(".
+    if (CollectionUtils.isEmpty(uuids)) {
+      return "";
+    }
+    StringBuilder sb = new StringBuilder();
+    sb.append("(");
+
+    //LIKE clause
+    // (UUID LIKE ? OR UUID LIKE ?) AND
+    if (CollectionUtils.isNotEmpty(uuids)) {
+      for (int i = 0; i < uuids.size(); i++) {
+        sb.append("UUID");
+        sb.append(" LIKE ");
+        sb.append("'%");
+        sb.append(new String(uuids.get(i)));
+        sb.append("'");
+
+        if (i == uuids.size() - 1) {
+          sb.append(") AND ");
+        } else {
+          sb.append(" OR ");
+        }
+      }
+    }
+    return sb.toString();
+  }
+}
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TimelineMetricMetadataManager.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TimelineMetricMetadataManager.java
index e00c045..bd508c4 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TimelineMetricMetadataManager.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TimelineMetricMetadataManager.java
@@ -69,8 +69,8 @@ public class TimelineMetricMetadataManager {
   AtomicBoolean SYNC_HOSTED_APPS_METADATA = new AtomicBoolean(false);
   AtomicBoolean SYNC_HOSTED_INSTANCES_METADATA = new AtomicBoolean(false);
   private MetricUuidGenStrategy uuidGenStrategy = new HashBasedUuidGenStrategy();
-  private static final int timelineMetricUuidLength = 16;
-  private static final int hostnameUuidLength = 4;
+  public static final int TIMELINE_METRIC_UUID_LENGTH = 16;
+  public static final int HOSTNAME_UUID_LENGTH = 4;
 
   // Single thread to sync back new writes to the store
   private final ScheduledExecutorService executorService = Executors.newSingleThreadScheduledExecutor();
@@ -344,9 +344,9 @@ public class TimelineMetricMetadataManager {
   }
 
   /**
-   * Given the hostname, generates a byte array of length 'hostnameUuidLength'
+   * Given the hostname, generates a byte array of length 'HOSTNAME_UUID_LENGTH'
    * @param hostname
-   * @return uuid byte array of length 'hostnameUuidLength'
+   * @return uuid byte array of length 'HOSTNAME_UUID_LENGTH'
    */
   private byte[] getUuidForHostname(String hostname) {
 
@@ -358,7 +358,7 @@ public class TimelineMetricMetadataManager {
       }
     }
 
-    byte[] uuid = uuidGenStrategy.computeUuid(hostname, hostnameUuidLength);
+    byte[] uuid = uuidGenStrategy.computeUuid(hostname, HOSTNAME_UUID_LENGTH);
 
     String uuidStr = new String(uuid);
     if (uuidHostMap.containsKey(uuidStr)) {
@@ -379,7 +379,7 @@ public class TimelineMetricMetadataManager {
   /**
    * Given a timelineClusterMetric instance, generates a UUID for Metric-App-Instance combination.
    * @param timelineClusterMetric
-   * @return uuid byte array of length 'timelineMetricUuidLength'
+   * @return uuid byte array of length 'TIMELINE_METRIC_UUID_LENGTH'
    */
   public byte[] getUuid(TimelineClusterMetric timelineClusterMetric) {
     TimelineMetricMetadataKey key = new TimelineMetricMetadataKey(timelineClusterMetric.getMetricName(),
@@ -393,7 +393,7 @@ public class TimelineMetricMetadataManager {
       }
     }
 
-    byte[] uuid = uuidGenStrategy.computeUuid(timelineClusterMetric, timelineMetricUuidLength);
+    byte[] uuid = uuidGenStrategy.computeUuid(timelineClusterMetric, TIMELINE_METRIC_UUID_LENGTH);
 
     String uuidStr = new String(uuid);
     if (uuidKeyMap.containsKey(uuidStr) && !uuidKeyMap.get(uuidStr).equals(key)) {
@@ -419,7 +419,7 @@ public class TimelineMetricMetadataManager {
   /**
    * Given a timelineMetric instance, generates a UUID for Metric-App-Instance combination.
    * @param timelineMetric
-   * @return uuid byte array of length 'timelineMetricUuidLength' + 'hostnameUuidLength'
+   * @return uuid byte array of length 'TIMELINE_METRIC_UUID_LENGTH' + 'HOSTNAME_UUID_LENGTH'
    */
   public byte[] getUuid(TimelineMetric timelineMetric) {
 
@@ -433,8 +433,8 @@ public class TimelineMetricMetadataManager {
   public String getMetricNameFromUuid(byte[]  uuid) {
 
     byte[] metricUuid = uuid;
-    if (uuid.length == timelineMetricUuidLength + hostnameUuidLength) {
-      metricUuid = ArrayUtils.subarray(uuid, 0, timelineMetricUuidLength);
+    if (uuid.length == TIMELINE_METRIC_UUID_LENGTH + HOSTNAME_UUID_LENGTH) {
+      metricUuid = ArrayUtils.subarray(uuid, 0, TIMELINE_METRIC_UUID_LENGTH);
     }
 
     TimelineMetricMetadataKey key = uuidKeyMap.get(new String(metricUuid));
@@ -446,11 +446,11 @@ public class TimelineMetricMetadataManager {
       return null;
     }
 
-    if (uuid.length == timelineMetricUuidLength) {
+    if (uuid.length == TIMELINE_METRIC_UUID_LENGTH) {
       TimelineMetricMetadataKey key = uuidKeyMap.get(new String(uuid));
       return key != null ? new TimelineMetric(key.metricName, null, key.appId, key.instanceId) : null;
     } else {
-      byte[] metricUuid = ArrayUtils.subarray(uuid, 0, timelineMetricUuidLength);
+      byte[] metricUuid = ArrayUtils.subarray(uuid, 0, TIMELINE_METRIC_UUID_LENGTH);
       TimelineMetricMetadataKey key = uuidKeyMap.get(new String(metricUuid));
       if (key == null) {
         LOG.error("TimelineMetricMetadataKey is null for : " + Arrays.toString(uuid));
@@ -461,7 +461,7 @@ public class TimelineMetricMetadataManager {
       timelineMetric.setAppId(key.appId);
       timelineMetric.setInstanceId(key.instanceId);
 
-      byte[] hostUuid = ArrayUtils.subarray(uuid, timelineMetricUuidLength, hostnameUuidLength + timelineMetricUuidLength);
+      byte[] hostUuid = ArrayUtils.subarray(uuid, TIMELINE_METRIC_UUID_LENGTH, HOSTNAME_UUID_LENGTH + TIMELINE_METRIC_UUID_LENGTH);
       timelineMetric.setHostName(uuidHostMap.get(new String(hostUuid)));
       return timelineMetric;
     }
@@ -525,14 +525,23 @@ public class TimelineMetricMetadataManager {
       appId = appId.toLowerCase();
     }
     if (CollectionUtils.isNotEmpty(sanitizedHostNames)) {
-      for (String metricName : sanitizedMetricNames) {
-        TimelineMetric metric = new TimelineMetric();
-        metric.setMetricName(metricName);
-        metric.setAppId(appId);
-        metric.setInstanceId(instanceId);
+      if (CollectionUtils.isNotEmpty(sanitizedMetricNames)) {
+        for (String metricName : sanitizedMetricNames) {
+          TimelineMetric metric = new TimelineMetric();
+          metric.setMetricName(metricName);
+          metric.setAppId(appId);
+          metric.setInstanceId(instanceId);
+          for (String hostname : sanitizedHostNames) {
+            metric.setHostName(hostname);
+            byte[] uuid = getUuid(metric);
+            if (uuid != null) {
+              uuids.add(uuid);
+            }
+          }
+        }
+      } else {
         for (String hostname : sanitizedHostNames) {
-          metric.setHostName(hostname);
-          byte[] uuid = getUuid(metric);
+          byte[] uuid = getUuidForHostname(hostname);
           if (uuid != null) {
             uuids.add(uuid);
           }
@@ -554,4 +563,31 @@ public class TimelineMetricMetadataManager {
   public Map<String, TimelineMetricMetadataKey> getUuidKeyMap() {
     return uuidKeyMap;
   }
+
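+  /**
+   * Returns all known hostnames that are NOT matched by the given list.
+   * Entries may contain '%' wildcards, which are expanded against the hosted hosts.
+   */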
+  public List<String> getNotLikeHostnames(List<String> hostnames) {
+    List<String> result = new ArrayList<>();
+    Set<String> sanitizedHostNames = new HashSet<>();
+    if (CollectionUtils.isNotEmpty(hostnames)) {
+      for (String hostname : hostnames) {
+        if (hostname.contains("%")) {
+          String hostRegEx = hostname.replace("%", ".*");
+          for (String host : HOSTED_APPS_MAP.keySet()) {
+            if (host.matches(hostRegEx)) {
+              sanitizedHostNames.add(host);
+            }
+          }
+        } else {
+          sanitizedHostNames.add(hostname);
+        }
+      }
+    }
+
+    for (String hostname: HOSTED_APPS_MAP.keySet()) {
+      if (!sanitizedHostNames.contains(hostname)) {
+        result.add(hostname);
+      }
+    }
+    return result;
+  }
 }
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/Condition.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/Condition.java
index 9714e1a..4e04e6c 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/Condition.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/Condition.java
@@ -46,4 +46,6 @@ public interface Condition {
   void setNoLimit();
   boolean doUpdate();
   void setMetricNamesNotCondition(boolean metricNamesNotCondition);
+  void setHostnamesNotCondition(boolean hostNamesNotCondition);
+  void setUuidNotCondition(boolean uuidNotCondition);
 }
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/DefaultCondition.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/DefaultCondition.java
index 3c03dca..763e4c7 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/DefaultCondition.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/DefaultCondition.java
@@ -19,9 +19,9 @@ package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline
 import org.apache.commons.collections.CollectionUtils;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
-import org.apache.hadoop.metrics2.sink.timeline.TopNConfig;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.PhoenixHBaseAccessor;
 import org.apache.hadoop.metrics2.sink.timeline.Precision;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.discovery.TimelineMetricMetadataManager;
 
 import java.util.ArrayList;
 import java.util.LinkedHashSet;
@@ -43,6 +43,8 @@ public class DefaultCondition implements Condition {
   String statement;
   Set<String> orderByColumns = new LinkedHashSet<String>();
   boolean metricNamesNotCondition = false;
+  boolean hostNamesNotCondition = false;
+  boolean uuidNotCondition = false;
   List<byte[]> uuids = new ArrayList<>();
 
   private static final Log LOG = LogFactory.getLog(DefaultCondition.class);
@@ -230,164 +232,142 @@ public class DefaultCondition implements Condition {
     boolean appendConjunction = false;
 
     if (CollectionUtils.isNotEmpty(uuids)) {
-      // Put a '(' first
-      sb.append("(");
-
-      //IN clause
-      // UUID (NOT) IN (?,?,?,?)
-      if (CollectionUtils.isNotEmpty(uuids)) {
-        sb.append("UUID");
-        if (metricNamesNotCondition) {
-          sb.append(" NOT");
-        }
-        sb.append(" IN (");
-        //Append ?,?,?,?
-        for (int i = 0; i < uuids.size(); i++) {
-          sb.append("?");
-          if (i < uuids.size() - 1) {
-            sb.append(", ");
-          }
-        }
-        sb.append(")");
-      }
-      appendConjunction = true;
-      sb.append(")");
-    }
 
-    return appendConjunction;
-  }
-
-  protected boolean appendMetricNameClause(StringBuilder sb) {
-    boolean appendConjunction = false;
-    List<String> metricsLike = new ArrayList<>();
-    List<String> metricsIn = new ArrayList<>();
-
-    if (getMetricNames() != null) {
-      for (String name : getMetricNames()) {
-        if (name.contains("%")) {
-          metricsLike.add(name);
-        } else {
-          metricsIn.add(name);
+      List<byte[]> uuidsHost = new ArrayList<>();
+      List<byte[]> uuidsMetric = new ArrayList<>();
+      List<byte[]> uuidsFull = new ArrayList<>();
+
+      if (getUuids() != null) {
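+        // Partition uuids by length: metric-only (16 bytes), host-only (4 bytes),
+        // and full metric+host keys; the partial ones are matched with LIKE below.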
+        for (byte[] uuid : uuids) {
+          if (uuid.length == TimelineMetricMetadataManager.TIMELINE_METRIC_UUID_LENGTH) {
+            uuidsMetric.add(uuid);
+          } else if (uuid.length == TimelineMetricMetadataManager.HOSTNAME_UUID_LENGTH) {
+            uuidsHost.add(uuid);
+          } else {
+            uuidsFull.add(uuid);
+          }
         }
-      }
 
-      // Put a '(' first
-      sb.append("(");
+        // Put a '(' first
+        sb.append("(");
 
-      //IN clause
-      // METRIC_NAME (NOT) IN (?,?,?,?)
-      if (CollectionUtils.isNotEmpty(metricsIn)) {
-        sb.append("METRIC_NAME");
-        if (metricNamesNotCondition) {
-          sb.append(" NOT");
-        }
-        sb.append(" IN (");
-        //Append ?,?,?,?
-        for (int i = 0; i < metricsIn.size(); i++) {
-          sb.append("?");
-          if (i < metricsIn.size() - 1) {
-            sb.append(", ");
+        //IN clause
+        // UUID (NOT) IN (?,?,?,?)
+        if (CollectionUtils.isNotEmpty(uuidsFull)) {
+          sb.append("UUID");
+          if (uuidNotCondition) {
+            sb.append(" NOT");
+          }
+          sb.append(" IN (");
+          //Append ?,?,?,?
+          for (int i = 0; i < uuidsFull.size(); i++) {
+            sb.append("?");
+            if (i < uuidsFull.size() - 1) {
+              sb.append(", ");
+            }
           }
+          sb.append(")");
+          appendConjunction = true;
         }
-        sb.append(")");
-        appendConjunction = true;
-      }
 
-      //Put an OR/AND if both types are present
-      if (CollectionUtils.isNotEmpty(metricsIn) &&
-        CollectionUtils.isNotEmpty(metricsLike)) {
-        if (metricNamesNotCondition) {
+        //Put an AND if both types are present
+        if (CollectionUtils.isNotEmpty(uuidsFull) &&
+          CollectionUtils.isNotEmpty(uuidsMetric)) {
           sb.append(" AND ");
-        } else {
-          sb.append(" OR ");
         }
-      }
 
-      //LIKE clause
-      // METRIC_NAME (NOT) LIKE ? OR(AND) METRIC_NAME LIKE ?
-      if (CollectionUtils.isNotEmpty(metricsLike)) {
+        // ( for OR
+        if (!metricNamesNotCondition && uuidsMetric.size() > 1 && (CollectionUtils.isNotEmpty(uuidsFull) || CollectionUtils.isNotEmpty(uuidsHost))) {
+          sb.append("(");
+        }
 
-        for (int i = 0; i < metricsLike.size(); i++) {
-          sb.append("METRIC_NAME");
-          if (metricNamesNotCondition) {
-            sb.append(" NOT");
-          }
-          sb.append(" LIKE ");
-          sb.append("?");
+        //LIKE clause for clusterMetric UUIDs
+        // UUID (NOT) LIKE ? OR(AND) UUID LIKE ?
+        if (CollectionUtils.isNotEmpty(uuidsMetric)) {
 
-          if (i < metricsLike.size() - 1) {
+          for (int i = 0; i < uuidsMetric.size(); i++) {
+            sb.append("UUID");
             if (metricNamesNotCondition) {
-              sb.append(" AND ");
-            } else {
-              sb.append(" OR ");
+              sb.append(" NOT");
+            }
+            sb.append(" LIKE ");
+            sb.append("?");
+
+            if (i < uuidsMetric.size() - 1) {
+              if (metricNamesNotCondition) {
+                sb.append(" AND ");
+              } else {
+                sb.append(" OR ");
+              }
+            // ) for OR
+            } else if ((CollectionUtils.isNotEmpty(uuidsFull) || CollectionUtils.isNotEmpty(uuidsHost)) && !metricNamesNotCondition && uuidsMetric.size() > 1) {
+              sb.append(")");
             }
           }
+          appendConjunction = true;
         }
-        appendConjunction = true;
-      }
 
-      // Finish with a ')'
-      if (appendConjunction) {
-        sb.append(")");
-      }
+        //Put an AND if both types are present
+        if ((CollectionUtils.isNotEmpty(uuidsMetric) || CollectionUtils.isNotEmpty(uuidsFull))
+          && CollectionUtils.isNotEmpty(uuidsHost)) {
+          sb.append(" AND ");
+        }
+        // ( for OR
+        if ((CollectionUtils.isNotEmpty(uuidsFull) || CollectionUtils.isNotEmpty(uuidsMetric)) && !hostNamesNotCondition && uuidsHost.size() > 1) {
+          sb.append("(");
+        }
 
-      metricNames.clear();
-      if (CollectionUtils.isNotEmpty(metricsIn)) {
-        metricNames.addAll(metricsIn);
-      }
-      if (CollectionUtils.isNotEmpty(metricsLike)) {
-        metricNames.addAll(metricsLike);
-      }
-    }
-    return appendConjunction;
-  }
+        //LIKE clause for HOST UUIDs
+        // UUID (NOT) LIKE ? OR(AND) UUID LIKE ?
+        if (CollectionUtils.isNotEmpty(uuidsHost)) {
 
-  protected boolean appendHostnameClause(StringBuilder sb, boolean appendConjunction) {
-    boolean hostnameContainsRegex = false;
-    if (hostnames != null) {
-      for (String hostname : hostnames) {
-        if (hostname.contains("%")) {
-          hostnameContainsRegex = true;
-          break;
+          for (int i = 0; i < uuidsHost.size(); i++) {
+            sb.append("UUID");
+            if (hostNamesNotCondition) {
+              sb.append(" NOT");
+            }
+            sb.append(" LIKE ");
+            sb.append("?");
+
+            if (i < uuidsHost.size() - 1) {
+              if (hostNamesNotCondition) {
+                sb.append(" AND ");
+              } else {
+                sb.append(" OR ");
+              }
+            // ) for OR
+            } else if ((CollectionUtils.isNotEmpty(uuidsFull) || CollectionUtils.isNotEmpty(uuidsMetric)) && !hostNamesNotCondition && uuidsHost.size() > 1) {
+              sb.append(")");
+            }
+          }
+          appendConjunction = true;
         }
-      }
-    }
 
-    StringBuilder hostnamesCondition = new StringBuilder();
-    if (hostnameContainsRegex) {
-      hostnamesCondition.append(" (");
-      for (String hostname : getHostnames()) {
-        if (hostnamesCondition.length() > 2) {
-          hostnamesCondition.append(" OR ");
+        // Finish with a ')'
+        if (appendConjunction) {
+          sb.append(")");
         }
-        hostnamesCondition.append("HOSTNAME LIKE ?");
-      }
-      hostnamesCondition.append(")");
-
-      appendConjunction = append(sb, appendConjunction, getHostnames(), hostnamesCondition.toString());
-    } else if (hostnames != null && getHostnames().size() > 1) {
-      for (String hostname : getHostnames()) {
-        if (hostnamesCondition.length() > 0) {
-          hostnamesCondition.append(" ,");
-        } else {
-          hostnamesCondition.append(" HOSTNAME IN (");
+
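+        // Rebuild the bind list as LIKE patterns: metric-only uuids match any host
+        // suffix ("uuid%"), host-only uuids match any metric prefix ("%uuid").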
+        uuids = new ArrayList<>();
+        if (CollectionUtils.isNotEmpty(uuidsFull)) {
+          uuids.addAll(uuidsFull);
+        }
+        for (byte[] uuid: uuidsMetric) {
+          uuids.add(new String(uuid).concat("%").getBytes());
+        }
+        for (byte[] uuid: uuidsHost) {
+          uuids.add("%".concat(new String(uuid)).getBytes());
         }
-        hostnamesCondition.append('?');
       }
-      hostnamesCondition.append(')');
-      appendConjunction = append(sb, appendConjunction, getHostnames(), hostnamesCondition.toString());
-
-    } else {
-      appendConjunction = append(sb, appendConjunction, getHostnames(), " HOSTNAME = ?");
     }
+
     return appendConjunction;
   }
 
   @Override
   public String toString() {
     return "Condition{" +
-      "metricNames=" + metricNames +
-      ", hostnames='" + hostnames + '\'' +
+      "uuids=" + uuids +
       ", appId='" + appId + '\'' +
       ", instanceId='" + instanceId + '\'' +
       ", startTime=" + startTime +
@@ -424,6 +404,16 @@ public class DefaultCondition implements Condition {
   }
 
   @Override
+  public void setHostnamesNotCondition(boolean hostNamesNotCondition) {
+    this.hostNamesNotCondition = hostNamesNotCondition;
+  }
+
+  @Override
+  public void setUuidNotCondition(boolean uuidNotCondition) {
+    this.uuidNotCondition = uuidNotCondition;
+  }
+
+  @Override
   public List<byte[]> getUuids() {
     return uuids;
   }
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/EmptyCondition.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/EmptyCondition.java
index b667df3..6d43179 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/EmptyCondition.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/EmptyCondition.java
@@ -17,6 +17,7 @@
  */
 package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query;
 
+import org.apache.commons.lang.NotImplementedException;
 import org.apache.hadoop.metrics2.sink.timeline.Precision;
 
 import java.util.List;
@@ -155,4 +156,14 @@ public class EmptyCondition implements Condition {
   public void setMetricNamesNotCondition(boolean metricNamesNotCondition) {
     this.metricNamesNotCondition = metricNamesNotCondition;
   }
+
+  @Override
+  public void setHostnamesNotCondition(boolean hostNamesNotCondition) {
+    throw new NotImplementedException("Not implemented");
+  }
+
+  @Override
+  public void setUuidNotCondition(boolean uuidNotCondition) {
+    throw new NotImplementedException("Not implemented");
+  }
 }
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/PhoenixTransactSQL.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/PhoenixTransactSQL.java
index d94d14c..2478fb1 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/PhoenixTransactSQL.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/PhoenixTransactSQL.java
@@ -22,6 +22,7 @@ import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.metrics2.sink.timeline.Precision;
 import org.apache.hadoop.metrics2.sink.timeline.PrecisionLimitExceededException;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.PhoenixHBaseAccessor;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.discovery.TimelineMetricMetadataManager;
 
 import java.sql.Connection;
 import java.sql.PreparedStatement;
@@ -194,7 +195,6 @@ public class PhoenixTransactSQL {
 
   public static final String UPSERT_CLUSTER_AGGREGATE_TIME_SQL = "UPSERT INTO" +
     " %s (UUID, SERVER_TIME, " +
-    "UNITS, " +
     "METRIC_SUM, " +
     "METRIC_COUNT, " +
     "METRIC_MAX, " +
@@ -204,7 +204,6 @@ public class PhoenixTransactSQL {
   public static final String UPSERT_AGGREGATE_RECORD_SQL = "UPSERT INTO " +
     "%s (UUID, " +
     "SERVER_TIME, " +
-    "UNITS, " +
     "METRIC_SUM, " +
     "METRIC_MAX, " +
     "METRIC_MIN," +
@@ -775,10 +774,16 @@ public class PhoenixTransactSQL {
   private static int addUuids(Condition condition, int pos, PreparedStatement stmt) throws SQLException {
     if (condition.getUuids() != null) {
       for (int pos2 = 1 ; pos2 <= condition.getUuids().size(); pos2++,pos++) {
+        byte[] uuid = condition.getUuids().get(pos2 - 1);
         if (LOG.isDebugEnabled()) {
-          LOG.debug("Setting pos: " + pos + ", value = " + condition.getUuids().get(pos2 - 1));
+          LOG.debug("Setting pos: " + pos + ", value = " + new String(uuid));
+        }
+
+        if (uuid.length != TimelineMetricMetadataManager.HOSTNAME_UUID_LENGTH + TimelineMetricMetadataManager.TIMELINE_METRIC_UUID_LENGTH) {
+          stmt.setString(pos, new String(uuid));
+        } else {
+          stmt.setBytes(pos, uuid);
         }
-        stmt.setBytes(pos, condition.getUuids().get(pos2 - 1));
       }
     }
     return pos;
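
For context on the length check above: a complete row UUID is bound as raw bytes so it can be matched with UUID IN (?), while anything shorter is a '%'-bearing pattern that must be bound as a String so the generated UUID LIKE ? predicate can evaluate it. A minimal standalone sketch of the idea (not the Ambari method itself; the 20-byte layout follows the tests further down, i.e. a 16-byte metric UUID prefix plus a 4-byte hostname UUID suffix):

    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import java.util.List;

    final class UuidBindingSketch {
      private static final int FULL_UUID_LENGTH = 20; // metric (16) + hostname (4)

      // Complete UUIDs -> setBytes for the IN (?) clause; partial patterns
      // (containing '%') -> setString for the UUID LIKE ? clause.
      static int bindUuids(List<byte[]> uuids, int pos, PreparedStatement stmt)
          throws SQLException {
        for (byte[] uuid : uuids) {
          if (uuid.length == FULL_UUID_LENGTH) {
            stmt.setBytes(pos++, uuid);
          } else {
            stmt.setString(pos++, new String(uuid));
          }
        }
        return pos;
      }
    }
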
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/SplitByMetricNamesCondition.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/SplitByMetricNamesCondition.java
index 45ea74c..554d2e8 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/SplitByMetricNamesCondition.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/SplitByMetricNamesCondition.java
@@ -176,4 +176,14 @@ public class SplitByMetricNamesCondition implements Condition {
   public void setMetricNamesNotCondition(boolean metricNamesNotCondition) {
     this.metricNamesNotCondition = metricNamesNotCondition;
   }
+
+  @Override
+  public void setHostnamesNotCondition(boolean hostNamesNotCondition) {
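+    // Intentional no-op: hostname negation is not supported by this condition.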
+
+  }
+
+  @Override
+  public void setUuidNotCondition(boolean uuidNotCondition) {
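+    // Intentional no-op: UUID negation is not supported by this condition.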
+
+  }
 }
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TestPhoenixTransactSQL.java b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TestPhoenixTransactSQL.java
index cb3f3a7..fe801ad 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TestPhoenixTransactSQL.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TestPhoenixTransactSQL.java
@@ -567,7 +567,7 @@ public class TestPhoenixTransactSQL {
     String conditionClause = condition.getConditionClause().toString();
     String expectedClause = " UUID IN (" +
       "SELECT UUID FROM METRIC_RECORD WHERE " +
-          "(UUID IN (?, ?)) AND " +
+          "(UUID LIKE ? OR UUID LIKE ?) AND " +
           "SERVER_TIME >= ? AND SERVER_TIME < ? " +
           "GROUP BY UUID ORDER BY MAX(METRIC_MAX) DESC LIMIT 2) AND SERVER_TIME >= ? AND SERVER_TIME < ?";
 
@@ -585,7 +585,7 @@ public class TestPhoenixTransactSQL {
     String conditionClause = condition.getConditionClause().toString();
     String expectedClause = " UUID IN (" +
       "SELECT UUID FROM METRIC_RECORD WHERE " +
-      "(UUID IN (?, ?, ?)) AND " +
+      "(UUID LIKE ? OR UUID LIKE ? OR UUID LIKE ?) AND " +
       "SERVER_TIME >= ? AND SERVER_TIME < ? " +
       "GROUP BY UUID ORDER BY MAX(METRIC_MAX) DESC LIMIT 2) AND SERVER_TIME >= ? AND SERVER_TIME < ?";
 
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/DefaultConditionTest.java b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/DefaultConditionTest.java
index e4e9225..71226e8 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/DefaultConditionTest.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/DefaultConditionTest.java
@@ -29,88 +29,142 @@ public class DefaultConditionTest {
 
   @Test
   public void testMetricNameWhereCondition() {
-    List<String> metricNames = new ArrayList<>();
-
-    //Only IN clause.
-
-    metricNames.add("M1");
-    DefaultCondition condition = new DefaultCondition(metricNames,null,null,null,null,null,null,null,true);
+    //EMPTY
+    List<byte[]> uuids = new ArrayList<>();
+    DefaultCondition condition = new DefaultCondition(uuids,null,null,null,null,null,null,null,null,true);
     StringBuilder sb = new StringBuilder();
-    condition.appendMetricNameClause(sb);
-    Assert.assertEquals(sb.toString(), "(METRIC_NAME IN (?))");
-    Assert.assertTrue(CollectionUtils.isEqualCollection(metricNames, condition.getMetricNames()));
+    condition.appendUuidClause(sb);
+    Assert.assertEquals(sb.toString(), "");
+    Assert.assertTrue(CollectionUtils.isEqualCollection(uuids, condition.getUuids()));
 
-    metricNames.add("m2");
-    condition = new DefaultCondition(metricNames,null,null,null,null,null,null,null,true);
+    //Metric uuid
+    uuids.add(new byte[16]);
+    condition = new DefaultCondition(uuids,null,null,null,null,null,null,null,null,true);
     sb = new StringBuilder();
-    condition.appendMetricNameClause(sb);
-    Assert.assertEquals(sb.toString(), "(METRIC_NAME IN (?, ?))");
-    Assert.assertTrue(CollectionUtils.isEqualCollection(metricNames, condition.getMetricNames()));
-
-    // Only NOT IN clause
-    condition = new DefaultCondition(metricNames,null,null,null,null,null,null,null,true);
-    condition.setMetricNamesNotCondition(true);
+    condition.appendUuidClause(sb);
+    Assert.assertEquals(sb.toString(), "(UUID LIKE ?)");
+    Assert.assertEquals(uuids.size(), condition.getUuids().size());
+    Assert.assertTrue(new String(condition.getUuids().get(0)).endsWith("%"));
+
+    //metric uuid + Host uuid
+    uuids.add(new byte[4]);
+    condition = new DefaultCondition(uuids,null,null,null,null,null,null,null,null,true);
     sb = new StringBuilder();
-    condition.appendMetricNameClause(sb);
-    Assert.assertEquals(sb.toString(), "(METRIC_NAME NOT IN (?, ?))");
-    Assert.assertTrue(CollectionUtils.isEqualCollection(metricNames, condition.getMetricNames()));
-
-    metricNames.clear();
-
-    //Only LIKE clause
-    metricNames.add("disk%");
-    condition = new DefaultCondition(metricNames,null,null,null,null,null,null,null,true);
+    condition.appendUuidClause(sb);
+    Assert.assertEquals(sb.toString(), "(UUID LIKE ? AND UUID LIKE ?)");
+    Assert.assertEquals(uuids.size(), condition.getUuids().size());
+    Assert.assertTrue(new String(condition.getUuids().get(1)).startsWith("%"));
+
+    //metric + host + full
+    uuids.add(new byte[20]);
+    uuids.add(new byte[20]);
+    condition = new DefaultCondition(uuids,null,null,null,null,null,null,null,null,true);
     sb = new StringBuilder();
-    condition.appendMetricNameClause(sb);
-    Assert.assertEquals(sb.toString(), "(METRIC_NAME LIKE ?)");
-    Assert.assertTrue(CollectionUtils.isEqualCollection(metricNames, condition.getMetricNames()));
+    condition.appendUuidClause(sb);
+    Assert.assertEquals(sb.toString(), "(UUID IN (?, ?) AND UUID LIKE ? AND UUID LIKE ?)");
+    Assert.assertEquals(uuids.size(), condition.getUuids().size());
 
-    metricNames.add("cpu%");
-    condition = new DefaultCondition(metricNames,null,null,null,null,null,null,null,true);
+    //Only IN clause.
+    uuids.clear();
+    uuids.add(new byte[20]);
+    condition = new DefaultCondition(uuids,null,null,null,null,null,null,null,null,true);
+    sb = new StringBuilder();
+    condition.appendUuidClause(sb);
+    Assert.assertEquals(sb.toString(), "(UUID IN (?))");
+    Assert.assertEquals(uuids.size(), condition.getUuids().size());
+
+    //metric NOT LIKE
+    uuids.clear();
+    uuids.add(new byte[16]);
+    condition = new DefaultCondition(uuids,null,null,null,null,null,null,null,null,true);
     sb = new StringBuilder();
-    condition.appendMetricNameClause(sb);
-    Assert.assertEquals(sb.toString(), "(METRIC_NAME LIKE ? OR METRIC_NAME LIKE ?)");
-    Assert.assertTrue(CollectionUtils.isEqualCollection(metricNames, condition.getMetricNames()));
-
-    //Only NOT LIKE clause
-    condition = new DefaultCondition(metricNames,null,null,null,null,null,null,null,true);
     condition.setMetricNamesNotCondition(true);
+    condition.appendUuidClause(sb);
+    Assert.assertEquals(sb.toString(), "(UUID NOT LIKE ?)");
+    Assert.assertEquals(uuids.size(), condition.getUuids().size());
+
+    //metric NOT LIKE host LIKE
+    uuids.clear();
+    uuids.add(new byte[16]);
+    uuids.add(new byte[4]);
+    condition = new DefaultCondition(uuids,null,null,null,null,null,null,null,null,true);
     sb = new StringBuilder();
-    condition.appendMetricNameClause(sb);
-    Assert.assertEquals(sb.toString(), "(METRIC_NAME NOT LIKE ? AND METRIC_NAME NOT LIKE ?)");
-    Assert.assertTrue(CollectionUtils.isEqualCollection(metricNames, condition.getMetricNames()));
-
-    metricNames.clear();
-
-    // IN followed by LIKE clause
-    metricNames.add("M1");
-    metricNames.add("disk%");
-    metricNames.add("M2");
-    condition = new DefaultCondition(metricNames,null,null,null,null,null,null,null,true);
+    condition.setMetricNamesNotCondition(true);
+    condition.appendUuidClause(sb);
+    Assert.assertEquals(sb.toString(), "(UUID NOT LIKE ? AND UUID LIKE ?)");
+    Assert.assertEquals(uuids.size(), condition.getUuids().size());
+    Assert.assertTrue(new String(condition.getUuids().get(0)).endsWith("%"));
+    Assert.assertTrue(new String(condition.getUuids().get(1)).startsWith("%"));
+
+    //metric LIKE host NOT LIKE
+    uuids.clear();
+    uuids.add(new byte[16]);
+    uuids.add(new byte[4]);
+    condition = new DefaultCondition(uuids,null,null,null,null,null,null,null,null,true);
     sb = new StringBuilder();
-    condition.appendMetricNameClause(sb);
-    Assert.assertEquals(sb.toString(), "(METRIC_NAME IN (?, ?) OR METRIC_NAME LIKE ?)");
-    Assert.assertEquals(metricNames.get(2), "disk%");
-
-    metricNames.clear();
-    //NOT IN followed by NOT LIKE clause
-    metricNames.add("disk%");
-    metricNames.add("metric1");
-    metricNames.add("cpu%");
-    condition = new DefaultCondition(metricNames,null,null,null,null,null,null,null,true);
+    condition.setHostnamesNotCondition(true);
+    condition.appendUuidClause(sb);
+    Assert.assertEquals(sb.toString(), "(UUID LIKE ? AND UUID NOT LIKE ?)");
+    Assert.assertEquals(uuids.size(), condition.getUuids().size());
+    Assert.assertTrue(new String(condition.getUuids().get(0)).endsWith("%"));
+    Assert.assertTrue(new String(condition.getUuids().get(1)).startsWith("%"));
+
+    //metric LIKE or LIKE host LIKE
+    uuids.clear();
+    uuids.add(new byte[4]);
+    uuids.add(new byte[16]);
+    uuids.add(new byte[16]);
+    condition = new DefaultCondition(uuids,null,null,null,null,null,null,null,null,true);
     sb = new StringBuilder();
-    condition.setMetricNamesNotCondition(true);
-    condition.appendMetricNameClause(sb);
-    Assert.assertEquals(sb.toString(), "(METRIC_NAME NOT IN (?) AND METRIC_NAME NOT LIKE ? AND METRIC_NAME NOT LIKE ?)");
-    Assert.assertEquals(metricNames.get(0), "metric1");
-
-    //Empty
-    metricNames.clear();
-    condition = new DefaultCondition(metricNames,null,null,null,null,null,null,null,true);
+    condition.appendUuidClause(sb);
+    Assert.assertEquals(sb.toString(), "((UUID LIKE ? OR UUID LIKE ?) AND UUID LIKE ?)");
+    Assert.assertEquals(uuids.size(), condition.getUuids().size());
+    Assert.assertTrue(new String(condition.getUuids().get(0)).endsWith("%"));
+    Assert.assertTrue(new String(condition.getUuids().get(1)).endsWith("%"));
+    Assert.assertTrue(new String(condition.getUuids().get(2)).startsWith("%"));
+
+    //UUID in metric LIKE or LIKE host LIKE
+    uuids.clear();
+    uuids.add(new byte[16]);
+    uuids.add(new byte[16]);
+    uuids.add(new byte[20]);
+    uuids.add(new byte[4]);
+    condition = new DefaultCondition(uuids,null,null,null,null,null,null,null,null,true);
     sb = new StringBuilder();
-    condition.appendMetricNameClause(sb);
-    Assert.assertEquals(sb.toString(), "");
-
+    condition.appendUuidClause(sb);
+    Assert.assertEquals(sb.toString(), "(UUID IN (?) AND (UUID LIKE ? OR UUID LIKE ?) AND UUID LIKE ?)");
+    Assert.assertEquals(uuids.size(), condition.getUuids().size());
+    Assert.assertTrue(new String(condition.getUuids().get(1)).endsWith("%"));
+    Assert.assertTrue(new String(condition.getUuids().get(2)).endsWith("%"));
+    Assert.assertTrue(new String(condition.getUuids().get(3)).startsWith("%"));
+
+    //metric LIKE host LIKE or LIKE
+    uuids.clear();
+    uuids.add(new byte[16]);
+    uuids.add(new byte[4]);
+    uuids.add(new byte[4]);
+    condition = new DefaultCondition(uuids,null,null,null,null,null,null,null,null,true);
+    sb = new StringBuilder();
+    condition.appendUuidClause(sb);
+    Assert.assertEquals(sb.toString(), "(UUID LIKE ? AND (UUID LIKE ? OR UUID LIKE ?))");
+    Assert.assertEquals(uuids.size(), condition.getUuids().size());
+    Assert.assertTrue(new String(condition.getUuids().get(0)).endsWith("%"));
+    Assert.assertTrue(new String(condition.getUuids().get(1)).startsWith("%"));
+    Assert.assertTrue(new String(condition.getUuids().get(2)).startsWith("%"));
+
+    //UUID NOT IN metric LIKE host LIKE
+    uuids.clear();
+    uuids.add(new byte[20]);
+    uuids.add(new byte[16]);
+    uuids.add(new byte[4]);
+    condition = new DefaultCondition(uuids,null,null,null,null,null,null,null,null,true);
+    sb = new StringBuilder();
+    condition.setUuidNotCondition(true);
+    condition.appendUuidClause(sb);
+    Assert.assertEquals(sb.toString(), "(UUID NOT IN (?) AND UUID LIKE ? AND UUID LIKE ?)");
+    Assert.assertEquals(uuids.size(), condition.getUuids().size());
+    Assert.assertTrue(new String(condition.getUuids().get(1)).endsWith("%"));
+    Assert.assertTrue(new String(condition.getUuids().get(2)).startsWith("%"));
   }
 }
 

[ambari] 15/39: Fixed compile errors from Merge trunk into branch-3.0-ams

Posted by av...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

avijayan pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/ambari.git

commit 73aee5c3d4100ad7379af265cc6351666932d2ae
Author: Siddharth Wagle <sw...@hortonworks.com>
AuthorDate: Tue Sep 26 15:30:09 2017 -0700

    Fixed compile errors from Merge trunk into branch-3.0-ams
---
 .../ambari/server/controller/metrics/timeline/MetricsRequestHelper.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/ambari-server/src/main/java/org/apache/ambari/server/controller/metrics/timeline/MetricsRequestHelper.java b/ambari-server/src/main/java/org/apache/ambari/server/controller/metrics/timeline/MetricsRequestHelper.java
index d7fbe31..ce0fe6d 100644
--- a/ambari-server/src/main/java/org/apache/ambari/server/controller/metrics/timeline/MetricsRequestHelper.java
+++ b/ambari-server/src/main/java/org/apache/ambari/server/controller/metrics/timeline/MetricsRequestHelper.java
@@ -87,7 +87,7 @@ public class MetricsRequestHelper {
           uriBuilder.setParameter("precision", higherPrecision);
           String newSpec = uriBuilder.toString();
           connection = streamProvider.processURL(newSpec, HttpMethod.GET, (String) null,
-            Collections.emptyMap());
+            Collections.<String, List<String>>emptyMap());
           if (!checkConnectionForPrecisionException(connection)) {
             throw new IOException("Encountered Precision exception : Higher precision request also failed.");
           }

[ambari] 14/39: AMBARI-21106 : ML-Prototype: Detect timeseries anomaly for a metric. (Refine PIT & Trend subsystems, Integrate with AMS, Ambari Alerts.)

Posted by av...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

avijayan pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/ambari.git

commit 2fdf774dd655ef950e52dcc871b080fc00e65555
Author: Aravindan Vijayan <av...@hortonworks.com>
AuthorDate: Tue Sep 26 14:38:40 2017 -0700

    AMBARI-21106 : ML-Prototype: Detect timeseries anomaly for a metric. (Refine PIT & Trend subsystems, Integrate with AMS, Ambari Alerts.)
---
 .../prototype/AmbariServerInterface.java           |   1 -
 .../prototype/MetricSparkConsumer.java             | 113 ++++++++++---
 .../prototype/MetricsCollectorInterface.java       |  10 +-
 .../prototype/PointInTimeADSystem.java             |  18 +-
 .../alertservice/prototype/TrendADSystem.java      |  26 +--
 .../prototype/methods/ema/EmaModel.java            |  31 ++--
 .../prototype/methods/ema/EmaTechnique.java        |  21 ++-
 .../prototype/methods/hsdev/HsdevTechnique.java    |  26 +--
 .../src/main/resources/R-scripts/tukeys.r          |  17 +-
 .../src/main/resources/input-config.properties     |  24 +++
 .../alertservice/prototype/TestEmaTechnique.java   |  22 ++-
 .../metrics/alertservice/prototype/TestTukeys.java |   1 -
 .../ambari-metrics-grafana/src/main/scripted.js    | 118 +++++++++++++
 .../metrics/TestMetricSeriesGenerator.java         |  87 ++++++++++
 .../timeline/HBaseTimelineMetricsService.java      |  18 +-
 .../metrics/timeline/PhoenixHBaseAccessor.java     | 122 +++++++++++++-
 .../timeline/TimelineMetricConfiguration.java      |  11 +-
 .../metrics/timeline/TimelineMetricStore.java      |   2 +
 .../metrics/timeline/query/PhoenixTransactSQL.java |  94 +++++++++++
 .../webapp/MetricAnomalyDetectorTestService.java   |  87 ++++++++++
 .../webapp/TimelineWebServices.java                |  36 +++-
 .../metrics/timeline/TestTimelineMetricStore.java  |   5 +
 .../AMBARI_METRICS/0.1.0/alerts.json               |  70 ++++++++
 .../alerts/alert_point_in_time_metric_anomalies.py | 185 +++++++++++++++++++++
 .../package/alerts/alert_trend_metric_anomalies.py | 185 +++++++++++++++++++++
 .../package/alerts/alert_metrics_deviation.py      |   4 +-
 .../metrics/timeline/MetricsPaddingMethodTest.java |   7 +
 27 files changed, 1234 insertions(+), 107 deletions(-)

diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/AmbariServerInterface.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/AmbariServerInterface.java
index 0c1c6fc..b98f04c 100644
--- a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/AmbariServerInterface.java
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/AmbariServerInterface.java
@@ -76,7 +76,6 @@ public class AmbariServerInterface implements Serializable{
       JSONArray array = jsonObject.getJSONArray("items");
       for(int i = 0 ; i < array.length() ; i++){
         JSONObject alertDefn = array.getJSONObject(i).getJSONObject("AlertDefinition");
-        LOG.info("alertDefn : " + alertDefn.get("name"));
         if (alertDefn.get("name") != null && alertDefn.get("name").equals("point_in_time_metrics_anomalies")) {
           JSONObject sourceNode = alertDefn.getJSONObject("source");
           JSONArray params = sourceNode.getJSONArray("parameters");
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/MetricSparkConsumer.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/MetricSparkConsumer.java
index 7735d6c..61b3dee 100644
--- a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/MetricSparkConsumer.java
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/MetricSparkConsumer.java
@@ -37,6 +37,12 @@ import org.apache.spark.streaming.kafka.KafkaUtils;
 import scala.Tuple2;
 
 import java.util.*;
+import java.io.FileInputStream;
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.regex.Matcher;
+import java.util.regex.Pattern;
 
 public class MetricSparkConsumer {
 
@@ -47,38 +53,75 @@ public class MetricSparkConsumer {
   private static long pitStartTime = System.currentTimeMillis();
   private static long ksStartTime = pitStartTime;
   private static long hdevStartTime = ksStartTime;
+  private static Set<Pattern> includeMetricPatterns = new HashSet<>();
+  private static Set<String> includedHosts = new HashSet<>();
+  private static Set<TrendMetric> trendMetrics = new HashSet<>();
 
   public MetricSparkConsumer() {
   }
 
+  public static Properties readProperties(String propertiesFile) {
+    try {
+      Properties properties = new Properties();
+      InputStream inputStream = ClassLoader.getSystemResourceAsStream(propertiesFile);
+      if (inputStream == null) {
+        inputStream = new FileInputStream(propertiesFile);
+      }
+      properties.load(inputStream);
+      return properties;
+    } catch (IOException ioEx) {
+      LOG.error("Error reading properties file for jmeter");
+      return null;
+    }
+  }
+
   public static void main(String[] args) throws InterruptedException {
 
-    if (args.length < 5) {
-      System.err.println("Usage: MetricSparkConsumer <appid1,appid2> <collector_host> <port> <protocol> <zkQuorum>");
+    if (args.length < 1) {
+      System.err.println("Usage: MetricSparkConsumer <input-config-file>");
       System.exit(1);
     }
 
-    List<String> appIds = Arrays.asList(args[0].split(","));
-    String collectorHost = args[1];
-    String collectorPort = args[2];
-    String collectorProtocol = args[3];
-    String zkQuorum = args[4];
+    Properties properties = readProperties(args[0]);
+
+    List<String> appIds = Arrays.asList(properties.getProperty("appIds").split(","));
+
+    String collectorHost = properties.getProperty("collectorHost");
+    String collectorPort = properties.getProperty("collectorPort");
+    String collectorProtocol = properties.getProperty("collectorProtocol");
 
-    double emaW = StringUtils.isNotEmpty(args[5]) ? Double.parseDouble(args[5]) : 0.5;
-    double emaN = StringUtils.isNotEmpty(args[8]) ? Double.parseDouble(args[6]) : 3;
-    double tukeysN = StringUtils.isNotEmpty(args[7]) ? Double.parseDouble(args[7]) : 3;
+    String zkQuorum = properties.getProperty("zkQuorum");
 
-    long pitTestInterval = StringUtils.isNotEmpty(args[8]) ? Long.parseLong(args[8]) : 5 * 60 * 1000;
-    long pitTrainInterval = StringUtils.isNotEmpty(args[9]) ? Long.parseLong(args[9]) : 15 * 60 * 1000;
+    double emaW = Double.parseDouble(properties.getProperty("emaW"));
+    double emaN = Double.parseDouble(properties.getProperty("emaN"));
+    int emaThreshold = Integer.parseInt(properties.getProperty("emaThreshold"));
+    double tukeysN = Double.parseDouble(properties.getProperty("tukeysN"));
 
-    String fileName = args[10];
-    long ksTestInterval = StringUtils.isNotEmpty(args[11]) ? Long.parseLong(args[11]) : 10 * 60 * 1000;
-    long ksTrainInterval = StringUtils.isNotEmpty(args[12]) ? Long.parseLong(args[12]) : 10 * 60 * 1000;
-    int hsdevNhp = StringUtils.isNotEmpty(args[13]) ? Integer.parseInt(args[13]) : 3;
-    long hsdevInterval = StringUtils.isNotEmpty(args[14]) ? Long.parseLong(args[14]) : 30 * 60 * 1000;
+    long pitTestInterval = Long.parseLong(properties.getProperty("pointInTimeTestInterval"));
+    long pitTrainInterval = Long.parseLong(properties.getProperty("pointInTimeTrainInterval"));
 
-    String ambariServerHost = args[15];
-    String clusterName = args[16];
+    long ksTestInterval = Long.parseLong(properties.getProperty("ksTestInterval"));
+    long ksTrainInterval = Long.parseLong(properties.getProperty("ksTrainInterval"));
+    int hsdevNhp = Integer.parseInt(properties.getProperty("hsdevNhp"));
+    long hsdevInterval = Long.parseLong(properties.getProperty("hsdevInterval"));
+
+    String ambariServerHost = properties.getProperty("ambariServerHost");
+    String clusterName = properties.getProperty("clusterName");
+
+    String includeMetricPatternStrings = properties.getProperty("includeMetricPatterns");
+    if (includeMetricPatternStrings != null && !includeMetricPatternStrings.isEmpty()) {
+      String[] patterns = includeMetricPatternStrings.split(",");
+      for (String p : patterns) {
+        LOG.info("Included Pattern : " + p);
+        includeMetricPatterns.add(Pattern.compile(p));
+      }
+    }
+
+    String includedHostList = properties.getProperty("hosts");
+    if (includedHostList != null && !includedHostList.isEmpty()) {
+      String[] hosts = includedHostList.split(",");
+      includedHosts.addAll(Arrays.asList(hosts));
+    }
 
     MetricsCollectorInterface metricsCollectorInterface = new MetricsCollectorInterface(collectorHost, collectorProtocol, collectorPort);
 
@@ -86,7 +129,7 @@ public class MetricSparkConsumer {
 
     JavaStreamingContext jssc = new JavaStreamingContext(sparkConf, new Duration(10000));
 
-    EmaTechnique emaTechnique = new EmaTechnique(emaW, emaN);
+    EmaTechnique emaTechnique = new EmaTechnique(emaW, emaN, emaThreshold);
     PointInTimeADSystem pointInTimeADSystem = new PointInTimeADSystem(metricsCollectorInterface,
       tukeysN,
       pitTestInterval,
@@ -97,13 +140,14 @@ public class MetricSparkConsumer {
     TrendADSystem trendADSystem = new TrendADSystem(metricsCollectorInterface,
       ksTestInterval,
       ksTrainInterval,
-      hsdevNhp,
-      fileName);
+      hsdevNhp);
 
     Broadcast<EmaTechnique> emaTechniqueBroadcast = jssc.sparkContext().broadcast(emaTechnique);
     Broadcast<PointInTimeADSystem> pointInTimeADSystemBroadcast = jssc.sparkContext().broadcast(pointInTimeADSystem);
     Broadcast<TrendADSystem> trendADSystemBroadcast = jssc.sparkContext().broadcast(trendADSystem);
     Broadcast<MetricsCollectorInterface> metricsCollectorInterfaceBroadcast = jssc.sparkContext().broadcast(metricsCollectorInterface);
+    Broadcast<Set<Pattern>> includePatternBroadcast = jssc.sparkContext().broadcast(includeMetricPatterns);
+    Broadcast<Set<String>> includedHostBroadcast = jssc.sparkContext().broadcast(includedHosts);
 
     JavaPairReceiverInputDStream<String, String> messages =
       KafkaUtils.createStream(jssc, zkQuorum, groupId, Collections.singletonMap(topicName, numThreads));
@@ -150,7 +194,7 @@ public class MetricSparkConsumer {
 
           if (currentTime > ksStartTime + ksTestInterval) {
             LOG.info("Running KS Test....");
-            trendADSystemBroadcast.getValue().runKSTest(currentTime);
+            trendADSystemBroadcast.getValue().runKSTest(currentTime, trendMetrics);
             ksStartTime = ksStartTime + ksTestInterval;
           }
 
@@ -162,8 +206,27 @@ public class MetricSparkConsumer {
 
           TimelineMetrics metrics = tuple2._2();
           for (TimelineMetric timelineMetric : metrics.getMetrics()) {
-            List<MetricAnomaly> anomalies = ema.test(timelineMetric);
-            metricsCollectorInterfaceBroadcast.getValue().publish(anomalies);
+
+            boolean includeHost = includedHostBroadcast.getValue().contains(timelineMetric.getHostName());
+            boolean includeMetric = false;
+            if (includeHost) {
+              if (includePatternBroadcast.getValue().isEmpty()) {
+                includeMetric = true;
+              }
+              for (Pattern p : includePatternBroadcast.getValue()) {
+                Matcher m = p.matcher(timelineMetric.getMetricName());
+                if (m.find()) {
+                  includeMetric = true;
+                }
+              }
+            }
+
+            if (includeMetric) {
+              trendMetrics.add(new TrendMetric(timelineMetric.getMetricName(), timelineMetric.getAppId(),
+                timelineMetric.getHostName()));
+              List<MetricAnomaly> anomalies = ema.test(timelineMetric);
+              metricsCollectorInterfaceBroadcast.getValue().publish(anomalies);
+            }
           }
         });
     });
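
The per-record filtering added above reduces to a simple predicate; a sketch of the equivalent logic (illustrative only, with the broadcast sets unwrapped):

    import java.util.Set;
    import java.util.regex.Pattern;
    import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;

    final class IncludeFilterSketch {
      // A metric is processed only when its host is whitelisted and either no
      // include patterns are configured (include everything for that host) or
      // at least one pattern matches the metric name.
      static boolean include(TimelineMetric m, Set<String> hosts, Set<Pattern> patterns) {
        if (!hosts.contains(m.getHostName())) {
          return false;
        }
        if (patterns.isEmpty()) {
          return true;
        }
        return patterns.stream().anyMatch(p -> p.matcher(m.getMetricName()).find());
      }
    }
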
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/MetricsCollectorInterface.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/MetricsCollectorInterface.java
index 7b3f63d..dab4a0a 100644
--- a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/MetricsCollectorInterface.java
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/MetricsCollectorInterface.java
@@ -96,7 +96,7 @@ public class MetricsCollectorInterface implements Serializable {
         emitMetrics(timelineMetrics);
       }
     } else {
-      LOG.info("No anomalies to send.");
+      LOG.debug("No anomalies to send.");
     }
   }
 
@@ -130,7 +130,7 @@ public class MetricsCollectorInterface implements Serializable {
   public boolean emitMetrics(TimelineMetrics metrics) {
     String connectUrl = constructTimelineMetricUri();
     String jsonData = null;
-    LOG.info("EmitMetrics connectUrl = " + connectUrl);
+    LOG.debug("EmitMetrics connectUrl = " + connectUrl);
     try {
       jsonData = mapper.writeValueAsString(metrics);
       LOG.info(jsonData);
@@ -202,7 +202,7 @@ public class MetricsCollectorInterface implements Serializable {
 
     String url = constructTimelineMetricUri() + "?metricNames=" + metricName + "&appId=" + appId +
       "&hostname=" + hostname + "&startTime=" + startime + "&endTime=" + endtime;
-    LOG.info("Fetch metrics URL : " + url);
+    LOG.debug("Fetch metrics URL : " + url);
 
     URL obj = null;
     BufferedReader in = null;
@@ -213,8 +213,8 @@ public class MetricsCollectorInterface implements Serializable {
       HttpURLConnection con = (HttpURLConnection) obj.openConnection();
       con.setRequestMethod("GET");
       int responseCode = con.getResponseCode();
-      LOG.info("Sending 'GET' request to URL : " + url);
-      LOG.info("Response Code : " + responseCode);
+      LOG.debug("Sending 'GET' request to URL : " + url);
+      LOG.debug("Response Code : " + responseCode);
 
       in = new BufferedReader(
         new InputStreamReader(con.getInputStream()));
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/PointInTimeADSystem.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/PointInTimeADSystem.java
index b4a8593..b3e7bd3 100644
--- a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/PointInTimeADSystem.java
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/PointInTimeADSystem.java
@@ -49,7 +49,7 @@ public class PointInTimeADSystem implements Serializable {
   private AmbariServerInterface ambariServerInterface;
   private int sensitivity = 50;
   private int minSensitivity = 0;
-  private int maxSensitivity = 10;
+  private int maxSensitivity = 100;
 
   public PointInTimeADSystem(MetricsCollectorInterface metricsCollectorInterface, double defaultTukeysN,
                              long testIntervalMillis, long trainIntervalMillis, String ambariServerHost, String clusterName) {
@@ -73,13 +73,13 @@ public class PointInTimeADSystem implements Serializable {
       if (requiredSensivity > sensitivity) {
         int targetSensitivity = Math.min(maxSensitivity, requiredSensivity);
         while (sensitivity < targetSensitivity) {
-          defaultTukeysN = defaultTukeysN + defaultTukeysN * 0.1;
+          defaultTukeysN = defaultTukeysN + defaultTukeysN * 0.05;
           sensitivity++;
         }
       } else {
         int targetSensitivity = Math.max(minSensitivity, requiredSensivity);
         while (sensitivity > targetSensitivity) {
-          defaultTukeysN = defaultTukeysN - defaultTukeysN * 0.1;
+          defaultTukeysN = defaultTukeysN - defaultTukeysN * 0.05;
           sensitivity--;
         }
       }
@@ -201,10 +201,10 @@ public class PointInTimeADSystem implements Serializable {
 
       if (recall < 0.5) {
         LOG.info("Increasing EMA sensitivity by 10%");
-        emaModel.updateModel(true, 10);
+        emaModel.updateModel(true, 5);
       } else if (precision < 0.5) {
         LOG.info("Decreasing EMA sensitivity by 10%");
-        emaModel.updateModel(false, 10);
+        emaModel.updateModel(false, 5);
       }
 
     }
@@ -233,7 +233,7 @@ public class PointInTimeADSystem implements Serializable {
       double[] anomalyScore = result.resultset.get(2);
       for (int i = 0; i < ts.length; i++) {
         TimelineMetric timelineMetric = new TimelineMetric();
-        timelineMetric.setMetricName(metricName + "_" + appId + "_" + hostname);
+        timelineMetric.setMetricName(metricName + ":" + appId + ":" + hostname);
         timelineMetric.setHostName(MetricsCollectorInterface.getDefaultLocalHostName());
         timelineMetric.setAppId(MetricsCollectorInterface.serviceName + "-tukeys");
         timelineMetric.setInstanceId(null);
@@ -243,7 +243,11 @@ public class PointInTimeADSystem implements Serializable {
 
         HashMap<String, String> metadata = new HashMap<>();
         metadata.put("method", "tukeys");
-        metadata.put("anomaly-score", String.valueOf(anomalyScore[i]));
+        if (Double.isInfinite(anomalyScore[i])) {
+          LOG.info("Got anomalyScore = infinity for " + metricName + ":" + appId + ":" + hostname);
+        } else {
+          metadata.put("anomaly-score", String.valueOf(anomalyScore[i]));
+        }
         timelineMetric.setMetadata(metadata);
 
         timelineMetric.setMetricValues(metricValues);
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/TrendADSystem.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/TrendADSystem.java
index 1534b55..df36a4a 100644
--- a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/TrendADSystem.java
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/TrendADSystem.java
@@ -31,11 +31,11 @@ import java.io.FileReader;
 import java.io.IOException;
 import java.io.Serializable;
 import java.util.ArrayList;
-import java.util.Collections;
 import java.util.Date;
 import java.util.HashMap;
 import java.util.List;
 import java.util.Map;
+import java.util.Set;
 import java.util.TreeMap;
 
 public class TrendADSystem implements Serializable {
@@ -57,8 +57,7 @@ public class TrendADSystem implements Serializable {
   public TrendADSystem(MetricsCollectorInterface metricsCollectorInterface,
                        long ksTestIntervalMillis,
                        long ksTrainIntervalMillis,
-                       int hsdevNumHistoricalPeriods,
-                       String inputFileName) {
+                       int hsdevNumHistoricalPeriods) {
 
     this.metricsCollectorInterface = metricsCollectorInterface;
     this.ksTestIntervalMillis = ksTestIntervalMillis;
@@ -69,11 +68,9 @@ public class TrendADSystem implements Serializable {
     this.hsdevTechnique = new HsdevTechnique();
 
     trendMetrics = new ArrayList<>();
-    this.inputFile = inputFileName;
-    readInputFile(inputFileName);
   }
 
-  public void runKSTest(long currentEndTime) {
+  public void runKSTest(long currentEndTime, Set<TrendMetric> trendMetrics) {
     readInputFile(inputFile);
 
     long ksTestIntervalStartTime = currentEndTime - ksTestIntervalMillis;
@@ -85,7 +82,7 @@ public class TrendADSystem implements Serializable {
       String metricName = metric.metricName;
       String appId = metric.appId;
       String hostname = metric.hostname;
-      String key = metricName + "_" + appId + "_" + hostname;
+      String key = metricName + ":" + appId + ":" + hostname;
 
       TimelineMetrics ksData = metricsCollectorInterface.fetchMetrics(metricName, appId, hostname, ksTestIntervalStartTime - ksTrainIntervalMillis,
         currentEndTime);
@@ -112,6 +109,7 @@ public class TrendADSystem implements Serializable {
         }
       }
 
+      LOG.info("Train Data size : " + trainDataList.size() + ", Test Data Size : " + testDataList.size());
       if (trainDataList.isEmpty() || testDataList.isEmpty() || trainDataList.size() < testDataList.size()) {
         LOG.info("Not enough train/test data to perform KS analysis.");
         continue;
@@ -184,6 +182,7 @@ public class TrendADSystem implements Serializable {
     return timelineMetric;
 
   }
+
   public void runHsdevMethod() {
 
     List<TimelineMetric> hsdevMetricAnomalies = new ArrayList<>();
@@ -315,17 +314,4 @@ public class TrendADSystem implements Serializable {
       this.hostname = hostname;
     }
   }
-
-  /*
-          boolean isPresent = false;
-        for (TrendMetric trendMetric : trendMetrics) {
-          if (trendMetric.metricName.equalsIgnoreCase(splits[0])) {
-            isPresent = true;
-          }
-        }
-        if (!isPresent) {
-          LOG.info("Adding a new metric to track in Trend AD system : " + splits[0]);
-          trendMetrics.add(new TrendMetric(splits[0], splits[1], splits[2]));
-        }
-   */
 }
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/ema/EmaModel.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/ema/EmaModel.java
index 5e1f76b..a31410d 100644
--- a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/ema/EmaModel.java
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/ema/EmaModel.java
@@ -23,6 +23,8 @@ import org.apache.commons.logging.LogFactory;
 import javax.xml.bind.annotation.XmlRootElement;
 import java.io.Serializable;
 
+import static org.apache.ambari.metrics.alertservice.prototype.methods.ema.EmaTechnique.suppressAnomaliesTheshold;
+
 @XmlRootElement
 public class EmaModel implements Serializable {
 
@@ -35,7 +37,6 @@ public class EmaModel implements Serializable {
   private double timessdev;
 
   private int ctr = 0;
-  private static final int suppressAnomaliesTheshold = 30;
 
   private static final Log LOG = LogFactory.getLog(EmaModel.class);
 
@@ -64,30 +65,36 @@ public class EmaModel implements Serializable {
   public double testAndUpdate(double metricValue) {
 
     double anomalyScore = 0.0;
+    LOG.info("Before Update ->" + metricName + ":" + appId + ":" + hostname + " - " + "ema = " + ema + ", ems = " + ems + ", timessdev = " + timessdev);
+    update(metricValue);
     if (ctr > suppressAnomaliesTheshold) {
       anomalyScore = test(metricValue);
-    }
-    if (Math.abs(anomalyScore) < 2 * timessdev) {
-      update(metricValue);
+      if (anomalyScore > 0.0) {
+        LOG.info("Anomaly ->" + metricName + ":" + appId + ":" + hostname + " - " + "ema = " + ema + ", ems = " + ems +
+          ", timessdev = " + timessdev + ", metricValue = " + metricValue);
+      } else {
+        LOG.info("Not an Anomaly ->" + metricName + ":" + appId + ":" + hostname + " - " + "ema = " + ema + ", ems = " + ems +
+          ", timessdev = " + timessdev + ", metricValue = " + metricValue);
+      }
     } else {
-      LOG.info("Not updating model for this value");
+      ctr++;
+      if (ctr > suppressAnomaliesTheshold) {
+        LOG.info("Ema Model for " + metricName + ":" + appId + ":" + hostname + " is ready for testing data.");
+      }
     }
-    ctr++;
-    LOG.info("Counter : " + ctr);
-    LOG.info("Anomaly Score for " + metricValue + " : " + anomalyScore);
     return anomalyScore;
   }
 
   public void update(double metricValue) {
     ema = weight * ema + (1 - weight) * metricValue;
     ems = Math.sqrt(weight * Math.pow(ems, 2.0) + (1 - weight) * Math.pow(metricValue - ema, 2.0));
-    LOG.info("In update : ema = " + ema + ", ems = " + ems);
+    LOG.debug("In update : ema = " + ema + ", ems = " + ems);
   }
 
   public double test(double metricValue) {
-    LOG.info("In test : ema = " + ema + ", ems = " + ems);
+    LOG.debug("In test : ema = " + ema + ", ems = " + ems);
     double diff = Math.abs(ema - metricValue) - (timessdev * ems);
-    LOG.info("diff = " + diff);
+    LOG.debug("diff = " + diff);
     if (diff > 0) {
       return Math.abs((metricValue - ema) / ems); //Z score
     } else {
@@ -102,7 +109,7 @@ public class EmaModel implements Serializable {
       delta = delta * -1;
     }
     this.timessdev = timessdev + delta * timessdev;
-    this.weight = Math.min(1.0, weight + delta * weight);
+    //this.weight = Math.min(1.0, weight + delta * weight);
     LOG.info("New model parameters " + metricName + " : timessdev = " + timessdev + ", weight = " + weight);
   }
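
For readers following the model math: the class keeps an exponentially weighted mean (ema) and deviation (ems), and scores a point as |value - ema| / ems once it falls outside the timessdev * ems band. A compact standalone illustration (the real model also warms up for suppressAnomaliesTheshold samples and updates before testing; both are simplified here):

    public final class EmaSketch {
      public static void main(String[] args) {
        double w = 0.8, timessdev = 3.0;
        double ema = 0.0, ems = 0.0;
        double[] series = {10, 11, 10, 12, 11, 10, 45}; // last point is an outlier
        for (double v : series) {
          // test: positive diff means the point is outside the deviation band
          double diff = Math.abs(ema - v) - timessdev * ems;
          double score = (ems > 0 && diff > 0) ? Math.abs((v - ema) / ems) : 0.0;
          // update: exponentially weighted mean and deviation
          ema = w * ema + (1 - w) * v;
          ems = Math.sqrt(w * ems * ems + (1 - w) * Math.pow(v - ema, 2.0));
          System.out.printf("value=%.1f score=%.2f ema=%.2f ems=%.2f%n", v, score, ema, ems);
        }
      }
    }
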
 
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/ema/EmaTechnique.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/ema/EmaTechnique.java
index c005e6f..52c6cf3 100644
--- a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/ema/EmaTechnique.java
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/ema/EmaTechnique.java
@@ -49,6 +49,15 @@ public class EmaTechnique extends AnomalyDetectionTechnique implements Serializa
   private double startingWeight = 0.5;
   private double startTimesSdev = 3.0;
   private String methodType = "ema";
+  public static int suppressAnomaliesTheshold = 100;
+
+  public EmaTechnique(double startingWeight, double startTimesSdev, int suppressAnomaliesTheshold) {
+    trackedEmas = new HashMap<>();
+    this.startingWeight = startingWeight;
+    this.startTimesSdev = startTimesSdev;
+    EmaTechnique.suppressAnomaliesTheshold = suppressAnomaliesTheshold;
+    LOG.info("New EmaTechnique......");
+  }
 
   public EmaTechnique(double startingWeight, double startTimesSdev) {
     trackedEmas = new HashMap<>();
@@ -61,16 +70,16 @@ public class EmaTechnique extends AnomalyDetectionTechnique implements Serializa
     String metricName = metric.getMetricName();
     String appId = metric.getAppId();
     String hostname = metric.getHostName();
-    String key = metricName + "_" + appId + "_" + hostname;
+    String key = metricName + ":" + appId + ":" + hostname;
 
     EmaModel emaModel = trackedEmas.get(key);
     if (emaModel == null) {
-      LOG.info("EmaModel not present for " + key);
-      LOG.info("Number of tracked Emas : " + trackedEmas.size());
+      LOG.debug("EmaModel not present for " + key);
+      LOG.debug("Number of tracked Emas : " + trackedEmas.size());
       emaModel  = new EmaModel(metricName, hostname, appId, startingWeight, startTimesSdev);
       trackedEmas.put(key, emaModel);
     } else {
-      LOG.info("EmaModel already present for " + key);
+      LOG.debug("EmaModel already present for " + key);
     }
 
     List<MetricAnomaly> anomalies = new ArrayList<>();
@@ -79,11 +88,11 @@ public class EmaTechnique extends AnomalyDetectionTechnique implements Serializa
       double metricValue = metric.getMetricValues().get(timestamp);
       double anomalyScore = emaModel.testAndUpdate(metricValue);
       if (anomalyScore > 0.0) {
-        LOG.info("Found anomaly for : " + key);
+        LOG.info("Found anomaly for : " + key + ", anomalyScore = " + anomalyScore);
         MetricAnomaly metricAnomaly = new MetricAnomaly(key, timestamp, metricValue, methodType, anomalyScore);
         anomalies.add(metricAnomaly);
       } else {
-        LOG.info("Discarding non-anomaly for : " + key);
+        LOG.debug("Discarding non-anomaly for : " + key);
       }
     }
     return anomalies;
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/hsdev/HsdevTechnique.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/hsdev/HsdevTechnique.java
index 50bf9f2..04f4a73 100644
--- a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/hsdev/HsdevTechnique.java
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/hsdev/HsdevTechnique.java
@@ -58,19 +58,23 @@ public class HsdevTechnique implements Serializable {
     double historicMedian = median(trainData.values);
     double currentMedian = median(testData.values);
 
-    double diff = Math.abs(currentMedian - historicMedian);
-    LOG.info("Found anomaly for metric : " + key + " in the period ending " + new Date((long)testData.ts[testLength - 1]));
-    LOG.info("Current median = " + currentMedian + ", Historic Median = " + historicMedian + ", HistoricSd = " + historicSd);
 
-    if (diff > n * historicSd) {
-      double zScore = diff / historicSd;
-      LOG.info("Z Score of current series : " + zScore);
-      return new MetricAnomaly(key,
-        (long) testData.ts[testLength - 1],
-        testData.values[testLength - 1],
-        methodType,
-        zScore);
+    if (historicSd > 0) {
+      double diff = Math.abs(currentMedian - historicMedian);
+      LOG.info("Found anomaly for metric : " + key + " in the period ending " + new Date((long)testData.ts[testLength - 1]));
+      LOG.info("Current median = " + currentMedian + ", Historic Median = " + historicMedian + ", HistoricSd = " + historicSd);
+
+      if (diff > n * historicSd) {
+        double zScore = diff / historicSd;
+        LOG.info("Z Score of current series : " + zScore);
+        return new MetricAnomaly(key,
+          (long) testData.ts[testLength - 1],
+          testData.values[testLength - 1],
+          methodType,
+          zScore);
+      }
     }
+
     return null;
   }
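
The guarded check above boils down to a median-shift z-score; as a one-method sketch (illustrative, not the Ambari class):

    // Compare the test window's median against the training window's median,
    // in units of the training standard deviation; return 0 when there is no
    // anomaly or when the training sd is degenerate.
    static double hsdevScore(double trainMedian, double trainSd, double testMedian, double n) {
      if (trainSd <= 0) {
        return 0.0;
      }
      double diff = Math.abs(testMedian - trainMedian);
      return diff > n * trainSd ? diff / trainSd : 0.0;
    }
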
 
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/tukeys.r b/ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/tukeys.r
index f33b6ec..0312226 100644
--- a/ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/tukeys.r
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/resources/R-scripts/tukeys.r
@@ -26,20 +26,23 @@ ams_tukeys <- function(train_data, test_data, n) {
   anomalies <- data.frame()
   quantiles <- quantile(train_data[,2])
   iqr <- quantiles[4] - quantiles[2]
+  niqr <- 0
 
   for ( i in 1:length(test_data[,1])) {
     x <- test_data[i,2]
     lb <- quantiles[2] - n*iqr
     ub <- quantiles[4] + n*iqr
     if ( (x < lb)  || (x > ub) ) {
-      if (x < lb) {
-        niqr <- (quantiles[2] - x) / iqr
-      } else {
-        niqr <- (x - quantiles[4]) / iqr
+      if (iqr != 0) {
+        if (x < lb) {
+          niqr <- (quantiles[2] - x) / iqr
+        } else {
+          niqr <- (x - quantiles[4]) / iqr
+        }
+      }
+      anomaly <- c(test_data[i,1], x, niqr)
+      anomalies <- rbind(anomalies, anomaly)
       }
-      anomaly <- c(test_data[i,1], x, niqr)
-      anomalies <- rbind(anomalies, anomaly)
-    }
   }
   if(length(anomalies) > 0) {
     names(anomalies) <- c("TS", "Value", "niqr")
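
In plain terms, the script flags points outside Tukey's fences [Q1 - n*IQR, Q3 + n*IQR] and reports the distance in IQR units. A Java restatement of the same check (a sketch; quartiles assumed precomputed, and R's default quantile interpolation may differ slightly):

    // Mirrors tukeys.r: q1/q3 are the first and third quartiles of the
    // training window, n widens the fences, x is the test value.
    static double tukeysNiqr(double q1, double q3, double n, double x) {
      double iqr = q3 - q1;
      double lb = q1 - n * iqr;
      double ub = q3 + n * iqr;
      if (x >= lb && x <= ub) {
        return -1.0; // inside the fences: not an anomaly
      }
      if (iqr == 0) {
        return 0.0;  // degenerate window, mirroring the script's guard
      }
      return (x < lb ? (q1 - x) : (x - q3)) / iqr;
    }
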
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/resources/input-config.properties b/ambari-metrics/ambari-metrics-alertservice/src/main/resources/input-config.properties
new file mode 100644
index 0000000..88304c7
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/resources/input-config.properties
@@ -0,0 +1,24 @@
+appIds=HOST
+
+collectorHost=localhost
+collectorPort=6188
+collectorProtocol=http
+
+zkQuorum=localhost:2181
+
+ambariServerHost=localhost
+clusterName=c1
+
+emaW=0.8
+emaN=3
+tukeysN=3
+pointInTimeTestInterval=300000
+pointInTimeTrainInterval=900000
+
+ksTestInterval=600000
+ksTrainInterval=600000
+hsdevNhp=3
+hsdevInterval=1800000
+
+skipMetricPatterns=sdisk*,cpu_sintr*,proc*,disk*,boottime
+hosts=avijayan-ad-1.openstacklocal
\ No newline at end of file
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/test/java/org/apache/ambari/metrics/alertservice/prototype/TestEmaTechnique.java b/ambari-metrics/ambari-metrics-alertservice/src/test/java/org/apache/ambari/metrics/alertservice/prototype/TestEmaTechnique.java
index 539ca40..d1e2b41 100644
--- a/ambari-metrics/ambari-metrics-alertservice/src/test/java/org/apache/ambari/metrics/alertservice/prototype/TestEmaTechnique.java
+++ b/ambari-metrics/ambari-metrics-alertservice/src/test/java/org/apache/ambari/metrics/alertservice/prototype/TestEmaTechnique.java
@@ -21,21 +21,41 @@ import org.apache.ambari.metrics.alertservice.prototype.methods.MetricAnomaly;
 import org.apache.ambari.metrics.alertservice.prototype.methods.ema.EmaTechnique;
 import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
 import org.junit.Assert;
+import org.junit.Assume;
 import org.junit.Before;
+import org.junit.BeforeClass;
 import org.junit.Test;
 
+import java.io.File;
+import java.net.URISyntaxException;
+import java.net.URL;
 import java.util.List;
 import java.util.TreeMap;
 
+import static org.apache.ambari.metrics.alertservice.prototype.TestRFunctionInvoker.getTS;
+
 public class TestEmaTechnique {
 
+  private static double[] ts;
+  private static String fullFilePath;
+
+  @BeforeClass
+  public static void init() throws URISyntaxException {
+
+    Assume.assumeTrue(System.getenv("R_HOME") != null);
+    ts = getTS(1000);
+    URL url = ClassLoader.getSystemResource("R-scripts");
+    fullFilePath = new File(url.toURI()).getAbsolutePath();
+    RFunctionInvoker.setScriptsDir(fullFilePath);
+  }
+
   @Test
   public void testEmaInitialization() {
 
     EmaTechnique ema = new EmaTechnique(0.5, 3);
     Assert.assertTrue(ema.getTrackedEmas().isEmpty());
     Assert.assertTrue(ema.getStartingWeight() == 0.5);
     Assert.assertTrue(ema.getStartTimesSdev() == 3);
   }
 
   @Test
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/test/java/org/apache/ambari/metrics/alertservice/prototype/TestTukeys.java b/ambari-metrics/ambari-metrics-alertservice/src/test/java/org/apache/ambari/metrics/alertservice/prototype/TestTukeys.java
index bb409cf..ef0125f 100644
--- a/ambari-metrics/ambari-metrics-alertservice/src/test/java/org/apache/ambari/metrics/alertservice/prototype/TestTukeys.java
+++ b/ambari-metrics/ambari-metrics-alertservice/src/test/java/org/apache/ambari/metrics/alertservice/prototype/TestTukeys.java
@@ -21,7 +21,6 @@ import org.apache.ambari.metrics.alertservice.prototype.methods.MetricAnomaly;
 import org.apache.ambari.metrics.alertservice.prototype.methods.ema.EmaTechnique;
 import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
 import org.apache.hadoop.metrics2.sink.timeline.TimelineMetrics;
-import org.junit.Assert;
 import org.junit.Assume;
 import org.junit.BeforeClass;
 import org.junit.Test;
diff --git a/ambari-metrics/ambari-metrics-grafana/src/main/scripted.js b/ambari-metrics/ambari-metrics-grafana/src/main/scripted.js
new file mode 100644
index 0000000..298535f
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-grafana/src/main/scripted.js
@@ -0,0 +1,118 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+/* global _ */
+
+/*
+ * Complex scripted dashboard
+ * This script generates a dashboard object that Grafana can load. It also takes a number of user
+ * supplied URL parameters (in the ARGS variable)
+ *
+ * Return a dashboard object, or a function
+ *
+ * For async scripts, return a function, this function must take a single callback function as argument,
+ * call this callback function with the dashboard object (look at scripted_async.js for an example)
+ */
+
+'use strict';
+
+// accessible variables in this scope
+var window, document, ARGS, $, jQuery, moment, kbn;
+
+// Setup some variables
+var dashboard;
+
+// All url parameters are available via the ARGS object
+var ARGS;
+
+// Initialize a skeleton with nothing but a rows array and service object
+dashboard = {
+    rows : [],
+};
+
+// Set a title
+dashboard.title = 'Scripted dash';
+
+// Set default time
+// time can be overriden in the url using from/to parameters, but this is
+// handled automatically in grafana core during dashboard initialization
+
+
+var obj = JSON.parse(ARGS.anomalies);
+var metrics = obj.metrics;
+var rows = metrics.length;
+
+dashboard.time = {
+    from: "now-1h",
+    to: "now"
+};
+
+var metricSet = new Set();
+
+for (var i = 0; i < rows; i++) {
+
+    var key = metrics[i].metricname;
+    if (metricSet.has(key)) {
+        continue;
+    }
+    metricSet.add(key);
+    var metricKeyElements = key.split(":");
+    var metricName = metricKeyElements[0];
+    var appId = metricKeyElements[1];
+    var hostname = metricKeyElements[2];
+
+    dashboard.rows.push({
+        title: 'Chart',
+        height: '300px',
+        panels: [
+            {
+                title: metricName,
+                type: 'graph',
+                span: 12,
+                fill: 1,
+                linewidth: 2,
+                targets: [
+                    {
+                        "aggregator": "none",
+                        "alias": metricName,
+                        "app": appId,
+                        "errors": {},
+                        "metric": metricName,
+                        "precision": "default",
+                        "refId": "A",
+                        "hosts": hostname
+                    }
+                ],
+                seriesOverrides: [
+                    {
+                        alias: '/random/',
+                        yaxis: 2,
+                        fill: 0,
+                        linewidth: 5
+                    }
+                ],
+                tooltip: {
+                    shared: true
+                }
+            }
+        ]
+    });
+}
+
+
+return dashboard;
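
The script expects the dashboard URL to carry an anomalies argument whose JSON payload lists metric keys in the metricname:appid:hostname form produced by the detection code, for example (hypothetical values): {"metrics":[{"metricname":"mem_free:HOST:c6401.ambari.apache.org"}]}. One row with a single graph panel is generated per unique key.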
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/TestMetricSeriesGenerator.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/TestMetricSeriesGenerator.java
new file mode 100644
index 0000000..2420ef3
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/TestMetricSeriesGenerator.java
@@ -0,0 +1,87 @@
+package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics;
+
+import org.apache.ambari.metrics.alertservice.prototype.TestSeriesInputRequest;
+import org.apache.ambari.metrics.alertservice.seriesgenerator.AbstractMetricSeries;
+import org.apache.ambari.metrics.alertservice.seriesgenerator.MetricSeriesGeneratorFactory;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
+import org.apache.hadoop.metrics2.sink.timeline.TimelineMetrics;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricStore;
+
+import java.io.IOException;
+import java.net.InetAddress;
+import java.net.UnknownHostException;
+import java.sql.SQLException;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.TreeMap;
+
+public class TestMetricSeriesGenerator implements Runnable {
+
+  private Map<TestSeriesInputRequest, AbstractMetricSeries> configuredSeries = new HashMap<>();
+  private static final Log LOG = LogFactory.getLog(TestMetricSeriesGenerator.class);
+  private TimelineMetricStore metricStore;
+  private String hostname;
+
+  public TestMetricSeriesGenerator(TimelineMetricStore metricStore) {
+    this.metricStore = metricStore;
+    try {
+      this.hostname = InetAddress.getLocalHost().getHostName();
+    } catch (UnknownHostException e) {
+      LOG.error("Unable to resolve local hostname for the test series generator.", e);
+    }
+  }
+
+  public void addSeries(TestSeriesInputRequest inputRequest) {
+    if (!configuredSeries.containsKey(inputRequest)) {
+      AbstractMetricSeries metricSeries = MetricSeriesGeneratorFactory.generateSeries(inputRequest.getSeriesType(), inputRequest.getConfigs());
+      configuredSeries.put(inputRequest, metricSeries);
+      LOG.info("Added series " + inputRequest.getSeriesName());
+    }
+  }
+
+  public void removeSeries(String seriesName) {
+    boolean isPresent = false;
+    TestSeriesInputRequest tbd = null;
+    for (TestSeriesInputRequest inputRequest : configuredSeries.keySet()) {
+      if (inputRequest.getSeriesName().equals(seriesName)) {
+        isPresent = true;
+        tbd = inputRequest;
+      }
+    }
+    if (isPresent) {
+      LOG.info("Removing series " + seriesName);
+      configuredSeries.remove(tbd);
+    } else {
+      LOG.info("Series not found : " + seriesName);
+    }
+  }
+
+  @Override
+  public void run() {
+    long currentTime = System.currentTimeMillis();
+    TimelineMetrics timelineMetrics = new TimelineMetrics();
+
+    for (TestSeriesInputRequest input : configuredSeries.keySet()) {
+      AbstractMetricSeries metricSeries = configuredSeries.get(input);
+      TimelineMetric timelineMetric = new TimelineMetric();
+      timelineMetric.setMetricName(input.getSeriesName());
+      timelineMetric.setAppId("anomaly-engine-test-metric");
+      timelineMetric.setInstanceId(null);
+      timelineMetric.setStartTime(currentTime);
+      timelineMetric.setHostName(hostname);
+      TreeMap<Long, Double> metricValues = new TreeMap<>();
+      metricValues.put(currentTime, metricSeries.nextValue());
+      timelineMetric.setMetricValues(metricValues);
+      timelineMetrics.addOrMergeTimelineMetric(timelineMetric);
+      LOG.info("Emitting metric with appId = " + timelineMetric.getAppId());
+    }
+    try {
+      LOG.info("Publishing test metrics for " + timelineMetrics.getMetrics().size() + " series.");
+      metricStore.putMetrics(timelineMetrics);
+    } catch (Exception e) {
+      LOG.error(e);
+    }
+  }
+}
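
TestMetricSeriesGenerator only implements Runnable and does not schedule itself; the executor imports added to TimelineWebServices below suggest it is driven externally. A minimal sketch of such wiring, assuming a TimelineMetricStore and a TestSeriesInputRequest built elsewhere (the 30-second period is illustrative, not part of this commit):

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    // Hypothetical driver for the test series generator.
    public static void startTestSeries(TimelineMetricStore metricStore,
                                       TestSeriesInputRequest inputRequest) {
      TestMetricSeriesGenerator generator = new TestMetricSeriesGenerator(metricStore);
      generator.addSeries(inputRequest); // e.g. built from the test REST service input
      ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
      // Every run emits one new value per configured series via metricStore.putMetrics().
      scheduler.scheduleAtFixedRate(generator, 0, 30, TimeUnit.SECONDS);
    }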
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/HBaseTimelineMetricsService.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/HBaseTimelineMetricsService.java
index 95682f9..4450d65 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/HBaseTimelineMetricsService.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/HBaseTimelineMetricsService.java
@@ -157,6 +157,10 @@ public class HBaseTimelineMetricsService extends AbstractService implements Time
               "start cache node", e);
         }
       }
+//      String kafkaServers = configuration.getKafkaServers();
+//      if (kafkaServers != null) {
+//        metricKafkaProducer = new MetricKafkaProducer(kafkaServers);
+//      }
 
       defaultTopNHostsLimit = Integer.parseInt(metricsConf.get(DEFAULT_TOPN_HOSTS_LIMIT, "20"));
       if (Boolean.parseBoolean(metricsConf.get(USE_GROUPBY_AGGREGATOR_QUERIES, "true"))) {
@@ -235,6 +239,11 @@ public class HBaseTimelineMetricsService extends AbstractService implements Time
   }
 
   @Override
+  public TimelineMetrics getAnomalyMetrics(String method, long startTime, long endTime, Integer limit) throws SQLException {
+    return hBaseAccessor.getAnomalyMetricRecords(method, startTime, endTime, limit);
+  }
+
+  @Override
   public TimelineMetrics getTimelineMetrics(List<String> metricNames,
       List<String> hostnames, String applicationId, String instanceId,
       Long startTime, Long endTime, Precision precision, Integer limit,
@@ -403,10 +412,17 @@ public class HBaseTimelineMetricsService extends AbstractService implements Time
       cache.putMetrics(metrics.getMetrics(), metricMetadataManager);
     }
 
+//    try {
+//      metricKafkaProducer.sendMetrics(metrics);
+////      if (metrics.getMetrics().size() != 0 && metrics.getMetrics().get(0).getAppId().equals("anomaly-engine-test-metric")) {
+////      }
+//    } catch (Exception e) {
+//      LOG.error(e);
+//    }
+
     return response;
   }
 
-
   @Override
   public TimelinePutResponse putContainerMetrics(List<ContainerMetric> metrics)
       throws SQLException, IOException {
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/PhoenixHBaseAccessor.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/PhoenixHBaseAccessor.java
index da14fd1..f470c58 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/PhoenixHBaseAccessor.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/PhoenixHBaseAccessor.java
@@ -18,7 +18,6 @@
 package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline;
 
 import static java.util.concurrent.TimeUnit.SECONDS;
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.AGGREGATE_TABLE_SPLIT_POINTS;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.AGGREGATORS_SKIP_BLOCK_CACHE;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.CLUSTER_DAILY_TABLE_TTL;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.CLUSTER_HOUR_TABLE_TTL;
@@ -35,7 +34,6 @@ import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.ti
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.HOST_HOUR_TABLE_TTL;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.HOST_MINUTE_TABLE_TTL;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.OUT_OFF_BAND_DATA_TIME_ALLOWANCE;
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.PRECISION_TABLE_SPLIT_POINTS;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.PRECISION_TABLE_TTL;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.TIMELINE_METRICS_AGGREGATE_TABLES_DURABILITY;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.TIMELINE_METRICS_AGGREGATE_TABLE_HBASE_BLOCKING_STORE_FILES;
@@ -50,15 +48,18 @@ import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.ti
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.TIMELINE_METRICS_PRECISION_TABLE_HBASE_BLOCKING_STORE_FILES;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.TIMELINE_METRIC_AGGREGATOR_SINK_CLASS;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.ALTER_METRICS_METADATA_TABLE;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.ANOMALY_METRICS_TABLE_NAME;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.CONTAINER_METRICS_TABLE_NAME;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.CREATE_CONTAINER_METRICS_TABLE_SQL;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.CREATE_HOSTED_APPS_METADATA_TABLE_SQL;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.CREATE_INSTANCE_HOST_TABLE_SQL;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.CREATE_METRICS_AGGREGATE_TABLE_SQL;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.CREATE_ANOMALY_METRICS_TABLE_SQL;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.CREATE_METRICS_CLUSTER_AGGREGATE_GROUPED_TABLE_SQL;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.CREATE_METRICS_CLUSTER_AGGREGATE_TABLE_SQL;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.CREATE_METRICS_METADATA_TABLE_SQL;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.CREATE_METRICS_TABLE_SQL;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.CREATE_TREND_ANOMALY_METRICS_TABLE_SQL;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.DEFAULT_ENCODING;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.DEFAULT_TABLE_COMPRESSION;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.GET_HOSTED_APPS_METADATA_SQL;
@@ -73,7 +74,9 @@ import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.ti
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.METRICS_CLUSTER_AGGREGATE_TABLE_NAME;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.METRICS_RECORD_TABLE_NAME;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.PHOENIX_TABLES;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.TREND_ANOMALY_METRICS_TABLE_NAME;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.UPSERT_AGGREGATE_RECORD_SQL;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.UPSERT_ANOMALY_METRICS_SQL;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.UPSERT_CLUSTER_AGGREGATE_SQL;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.UPSERT_CLUSTER_AGGREGATE_TIME_SQL;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.UPSERT_CONTAINER_METRICS_SQL;
@@ -81,6 +84,7 @@ import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.ti
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.UPSERT_INSTANCE_HOST_METADATA_SQL;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.UPSERT_METADATA_SQL;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.UPSERT_METRICS_SQL;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.UPSERT_TREND_ANOMALY_METRICS_SQL;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.source.InternalSourceProvider.SOURCE_NAME.RAW_METRICS;
 
 import java.io.IOException;
@@ -309,11 +313,63 @@ public class PhoenixHBaseAccessor {
     commitMetrics(Collections.singletonList(timelineMetrics));
   }
 
+  private void commitAnomalyMetric(Connection conn, TimelineMetric metric) {
+    PreparedStatement metricRecordStmt = null;
+    try {
+
+      Map<String, String> metricMetadata = metric.getMetadata();
+
+      byte[] uuid = metadataManagerInstance.getUuid(metric);
+      if (uuid == null) {
+        LOG.error("Error computing UUID for metric. Cannot write metrics : " + metric.toString());
+        return;
+      }
+
+      if (metric.getAppId().equals("anomaly-engine-ks") || metric.getAppId().equals("anomaly-engine-hsdev")) {
+        metricRecordStmt = conn.prepareStatement(String.format(UPSERT_TREND_ANOMALY_METRICS_SQL,
+          TREND_ANOMALY_METRICS_TABLE_NAME));
+
+        metricRecordStmt.setBytes(1, uuid);
+        metricRecordStmt.setLong(2, metric.getStartTime());
+        metricRecordStmt.setLong(3, Long.parseLong(metricMetadata.get("test-start-time")));
+        metricRecordStmt.setLong(4, Long.parseLong(metricMetadata.get("train-start-time")));
+        metricRecordStmt.setLong(5, Long.parseLong(metricMetadata.get("train-end-time")));
+        String json = TimelineUtils.dumpTimelineRecordtoJSON(metric.getMetricValues());
+        metricRecordStmt.setString(6, json);
+        metricRecordStmt.setString(7, metric.getMetadata().get("method"));
+        double anomalyScore = metric.getMetadata().containsKey("anomaly-score") ? Double.parseDouble(metric.getMetadata().get("anomaly-score"))  : 0.0;
+        metricRecordStmt.setDouble(8, anomalyScore);
+
+      } else {
+        metricRecordStmt = conn.prepareStatement(String.format(
+          UPSERT_ANOMALY_METRICS_SQL, ANOMALY_METRICS_TABLE_NAME));
+
+        metricRecordStmt.setBytes(1, uuid);
+        metricRecordStmt.setLong(2, metric.getStartTime());
+        String json = TimelineUtils.dumpTimelineRecordtoJSON(metric.getMetricValues());
+        metricRecordStmt.setString(3, json);
+        metricRecordStmt.setString(4, metric.getMetadata().get("method"));
+        double anomalyScore = metric.getMetadata().containsKey("anomaly-score") ? Double.parseDouble(metric.getMetadata().get("anomaly-score"))  : 0.0;
+        metricRecordStmt.setDouble(5, anomalyScore);
+      }
+
+      try {
+        metricRecordStmt.executeUpdate();
+      } catch (SQLException sql) {
+        LOG.error("Failed on insert records to store.", sql);
+      }
+
+    } catch (Exception e) {
+      LOG.error("Failed on insert records to anomaly table.", e);
+    }
+
+  }
+
   public void commitMetrics(Collection<TimelineMetrics> timelineMetricsCollection) {
     LOG.debug("Committing metrics to store");
     Connection conn = null;
     PreparedStatement metricRecordStmt = null;
-    long currentTime = System.currentTimeMillis();
 
     try {
       conn = getConnection();
@@ -321,6 +377,10 @@ public class PhoenixHBaseAccessor {
               UPSERT_METRICS_SQL, METRICS_RECORD_TABLE_NAME));
       for (TimelineMetrics timelineMetrics : timelineMetricsCollection) {
         for (TimelineMetric metric : timelineMetrics.getMetrics()) {
+          if (metric.getAppId().startsWith("anomaly-engine") && !metric.getAppId().equals("anomaly-engine-test-metric")) {
+            commitAnomalyMetric(conn, metric);
+          }
+
           metricRecordStmt.clearParameters();
 
           if (LOG.isTraceEnabled()) {
@@ -469,6 +529,20 @@ public class PhoenixHBaseAccessor {
       stmt.executeUpdate( String.format(CREATE_CONTAINER_METRICS_TABLE_SQL,
         encoding, tableTTL.get(CONTAINER_METRICS_TABLE_NAME), compression));
 
+      //Anomaly Metrics
+      stmt.executeUpdate(String.format(CREATE_ANOMALY_METRICS_TABLE_SQL,
+        ANOMALY_METRICS_TABLE_NAME,
+        encoding,
+        tableTTL.get(METRICS_AGGREGATE_HOURLY_TABLE_NAME),
+        compression));
+
+      //Trend Anomaly Metrics
+      stmt.executeUpdate(String.format(CREATE_TREND_ANOMALY_METRICS_TABLE_SQL,
+        TREND_ANOMALY_METRICS_TABLE_NAME,
+        encoding,
+        tableTTL.get(METRICS_AGGREGATE_HOURLY_TABLE_NAME),
+        compression));
+
       // Host level
       String precisionSql = String.format(CREATE_METRICS_TABLE_SQL,
         encoding, tableTTL.get(METRICS_RECORD_TABLE_NAME), compression);
@@ -842,6 +916,48 @@ public class PhoenixHBaseAccessor {
     insertMetricRecords(metrics, false);
   }
 
+  public TimelineMetrics getAnomalyMetricRecords(String method, long startTime, long endTime, Integer limit) throws SQLException {
+    Connection conn = getConnection();
+    PreparedStatement stmt = null;
+    ResultSet rs = null;
+    TimelineMetrics metrics = new TimelineMetrics();
+    try {
+      stmt = PhoenixTransactSQL.prepareAnomalyMetricsGetSqlStatement(conn, method, startTime, endTime, limit);
+      rs = stmt.executeQuery();
+      while (rs.next()) {
+
+        byte[] uuid = rs.getBytes("UUID");
+        TimelineMetric metric = metadataManagerInstance.getMetricFromUuid(uuid);
+        if (metric == null) {
+          LOG.warn("No metric metadata found for UUID, skipping anomaly record.");
+          continue;
+        }
+
+        if (method.equals("ks") || method.equals("hsdev")) {
+          metric.setStartTime(rs.getLong("TEST_END_TIME"));
+        } else {
+          metric.setStartTime(rs.getLong("SERVER_TIME"));
+        }
+        metric.setInstanceId(null);
+
+        HashMap<String, String> metadata = new HashMap<>();
+        metadata.put("method", rs.getString("METHOD"));
+        metadata.put("anomaly-score", String.valueOf(rs.getDouble("ANOMALY_SCORE")));
+        if (method.equals("ks") || method.equals("hsdev")) {
+          metadata.put("test-start-time", String.valueOf(rs.getLong("TEST_START_TIME")));
+          metadata.put("train-start-time", String.valueOf(rs.getLong("TRAIN_START_TIME")));
+          metadata.put("train-end-time", String.valueOf(rs.getLong("TRAIN_END_TIME")));
+        }
+        metric.setMetadata(metadata);
+
+        TreeMap<Long, Double> sortedByTimeMetrics = readMetricFromJSON(rs.getString("METRICS"));
+        metric.setMetricValues(sortedByTimeMetrics);
+
+        metrics.getMetrics().add(metric);
+      }
+    } catch (Exception ex) {
+      LOG.error(ex);
+    } finally {
+      if (rs != null) {
+        try { rs.close(); } catch (SQLException e) { /* ignore */ }
+      }
+      if (stmt != null) {
+        try { stmt.close(); } catch (SQLException e) { /* ignore */ }
+      }
+      if (conn != null) {
+        try { conn.close(); } catch (SQLException e) { /* ignore */ }
+      }
+    }
+    return metrics;
+  }
+
   @SuppressWarnings("unchecked")
   public TimelineMetrics getMetricRecords(
     final Condition condition, Multimap<String, List<Function>> metricFunctions)
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricConfiguration.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricConfiguration.java
index 258e9c6..85dad1f 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricConfiguration.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricConfiguration.java
@@ -322,16 +322,14 @@ public class TimelineMetricConfiguration {
   public static final String TIMELINE_METRICS_PRECISION_TABLE_HBASE_BLOCKING_STORE_FILES =
     "timeline.metrics.precision.table.hbase.hstore.blockingStoreFiles";
 
-<<<<<<< HEAD
   public static final String TIMELINE_METRICS_SUPPORT_MULTIPLE_CLUSTERS =
     "timeline.metrics.support.multiple.clusters";
 
   public static final String TIMELINE_METRICS_EVENT_METRIC_PATTERNS =
     "timeline.metrics.downsampler.event.metric.patterns";
-=======
+
   public static final String TIMELINE_METRICS_UUID_GEN_STRATEGY =
     "timeline.metrics.uuid.gen.strategy";
->>>>>>> AMBARI-21214 : Use a uuid vs long row key for metrics in AMS schema. (avijayan)
 
   public static final String HOST_APP_ID = "HOST";
 
@@ -534,6 +532,13 @@ public class TimelineMetricConfiguration {
     return defaultRpcAddress;
   }
 
+  public String getKafkaServers() {
+    if (metricsConf != null) {
+      return metricsConf.get("timeline.metrics.kafka.servers", null);
+    }
+    return null;
+  }
+
   public boolean isDistributedCollectorModeDisabled() {
     try {
       if (getMetricsConf() != null) {
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricStore.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricStore.java
index dab4494..cdeefdc 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricStore.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricStore.java
@@ -107,4 +107,6 @@ public interface TimelineMetricStore {
      * @return [ hostname ]
      */
   List<String> getLiveInstances();
+
+  TimelineMetrics getAnomalyMetrics(String method, long startTime, long endTime, Integer limit) throws SQLException;
 }
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/PhoenixTransactSQL.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/PhoenixTransactSQL.java
index 2478fb1..75a9d28 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/PhoenixTransactSQL.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/PhoenixTransactSQL.java
@@ -27,6 +27,7 @@ import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.
 import java.sql.Connection;
 import java.sql.PreparedStatement;
 import java.sql.SQLException;
+import java.sql.Statement;
 import java.util.List;
 import java.util.concurrent.TimeUnit;
 
@@ -37,6 +38,27 @@ public class PhoenixTransactSQL {
 
   public static final Log LOG = LogFactory.getLog(PhoenixTransactSQL.class);
 
+  public static final String CREATE_ANOMALY_METRICS_TABLE_SQL =
+    "CREATE TABLE IF NOT EXISTS %s " +
+      "(UUID BINARY(20) NOT NULL, " +
+      "SERVER_TIME UNSIGNED_LONG NOT NULL, " +
+      "METRICS VARCHAR, " +
+      "METHOD VARCHAR, " +
+      "ANOMALY_SCORE DOUBLE CONSTRAINT pk " +
+      "PRIMARY KEY (UUID, SERVER_TIME)) DATA_BLOCK_ENCODING='%s', IMMUTABLE_ROWS=true, TTL=%s, COMPRESSION='%s'";
+
+  public static final String CREATE_TREND_ANOMALY_METRICS_TABLE_SQL =
+    "CREATE TABLE IF NOT EXISTS %s " +
+      "(UUID BINARY(20) NOT NULL, " +
+      "TEST_START_TIME UNSIGNED_LONG NOT NULL, " +
+      "TEST_END_TIME UNSIGNED_LONG NOT NULL, " +
+      "TRAIN_START_TIME UNSIGNED_LONG, " +
+      "TRAIN_END_TIME UNSIGNED_LONG, " +
+      "METRICS VARCHAR, " +
+      "METHOD VARCHAR, " +
+      "ANOMALY_SCORE DOUBLE CONSTRAINT pk " +
+      "PRIMARY KEY (UUID, TEST_START_TIME, TEST_END_TIME)) DATA_BLOCK_ENCODING='%s', IMMUTABLE_ROWS=true, TTL=%s, COMPRESSION='%s'";
+
   /**
    * Create table to store individual metric records.
    */
@@ -146,6 +168,25 @@ public class PhoenixTransactSQL {
    */
   public static final String ALTER_SQL = "ALTER TABLE %s SET TTL=%s";
 
+  public static final String UPSERT_ANOMALY_METRICS_SQL = "UPSERT INTO %s " +
+    "(UUID, " +
+    "SERVER_TIME, " +
+    "METRICS, " +
+    "METHOD, " +
+    "ANOMALY_SCORE) VALUES " +
+    "(?, ?, ?, ?, ?)";
+
+  public static final String UPSERT_TREND_ANOMALY_METRICS_SQL = "UPSERT INTO %s " +
+    "(UUID, " +
+    "TEST_START_TIME, " +
+    "TEST_END_TIME, " +
+    "TRAIN_START_TIME, " +
+    "TRAIN_END_TIME, " +
+    "METRICS, " +
+    "METHOD, " +
+    "ANOMALY_SCORE) VALUES " +
+    "(?, ?, ?, ?, ?, ?, ?, ?)";
+
   /**
    * Insert into metric records table.
    */
@@ -221,6 +262,22 @@ public class PhoenixTransactSQL {
   public static final String UPSERT_INSTANCE_HOST_METADATA_SQL =
     "UPSERT INTO INSTANCE_HOST_METADATA (INSTANCE_ID, HOSTNAME) VALUES (?, ?)";
 
+  public static final String GET_ANOMALY_METRIC_SQL = "SELECT UUID, SERVER_TIME, " +
+    "METRICS, " +
+    "METHOD, " +
+    "ANOMALY_SCORE " +
+    "FROM %s " +
+    "WHERE METHOD = ? AND SERVER_TIME > ? AND SERVER_TIME <= ? ORDER BY ANOMALY_SCORE DESC";
+
+  public static final String GET_TREND_ANOMALY_METRIC_SQL = "SELECT UUID, " +
+    "TEST_START_TIME, TEST_END_TIME, " +
+    "TRAIN_START_TIME, TRAIN_END_TIME, " +
+    "METRICS, " +
+    "METHOD, " +
+    "ANOMALY_SCORE " +
+    "FROM %s " +
+    "WHERE METHOD = ? AND TEST_END_TIME > ? AND TEST_END_TIME <= ? ORDER BY ANOMALY_SCORE DESC";
+
   /**
    * Retrieve a set of rows from metrics records table.
    */
@@ -345,6 +402,9 @@ public class PhoenixTransactSQL {
     "MAX(METRIC_MAX), MIN(METRIC_MIN) FROM %s WHERE METRIC_NAME LIKE %s AND SERVER_TIME > %s AND " +
     "SERVER_TIME <= %s GROUP BY METRIC_NAME, APP_ID, INSTANCE_ID, UNITS";
 
+  public static final String ANOMALY_METRICS_TABLE_NAME = "METRIC_ANOMALIES";
+  public static final String TREND_ANOMALY_METRICS_TABLE_NAME = "TREND_METRIC_ANOMALIES";
+
   public static final String METRICS_RECORD_TABLE_NAME = "METRIC_RECORD";
 
   public static final String CONTAINER_METRICS_TABLE_NAME = "CONTAINER_METRICS";
@@ -407,6 +467,40 @@ public class PhoenixTransactSQL {
     PhoenixTransactSQL.sortMergeJoinEnabled = sortMergeJoinEnabled;
   }
 
+  public static PreparedStatement prepareAnomalyMetricsGetSqlStatement(Connection connection, String method,
+                                                                       long startTime, long endTime, Integer limit) throws SQLException {
+    StringBuilder sb = new StringBuilder();
+    if (method.equals("ema") || method.equals("tukeys")) {
+      sb.append(String.format(GET_ANOMALY_METRIC_SQL, ANOMALY_METRICS_TABLE_NAME));
+    } else {
+      sb.append(String.format(GET_TREND_ANOMALY_METRIC_SQL, TREND_ANOMALY_METRICS_TABLE_NAME));
+    }
+    if (limit != null) {
+      sb.append(" LIMIT " + limit);
+    }
+    PreparedStatement stmt = null;
+    try {
+      stmt = connection.prepareStatement(sb.toString());
+      int pos = 1;
+
+      stmt.setString(pos++, method);
+      stmt.setLong(pos++, startTime);
+      stmt.setLong(pos, endTime);
+      if (limit != null) {
+        stmt.setFetchSize(limit);
+      }
+
+    } catch (SQLException e) {
+      if (stmt != null) {
+        stmt.close();
+      }
+      throw e;
+    }
+
+    return stmt;
+  }
+
   public static PreparedStatement prepareGetMetricsSqlStmt(Connection connection,
                                                            Condition condition) throws SQLException {
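
To make the new statement builder concrete, here is a sketch of fetching recent "ema" anomalies; the connection and time bounds are assumed to come from elsewhere:

    // Hypothetical caller of the new helper.
    PreparedStatement stmt = PhoenixTransactSQL.prepareAnomalyMetricsGetSqlStatement(
        conn, "ema", startTime, endTime, 5);
    // For method "ema" with limit 5 the generated SQL is:
    //   SELECT UUID, SERVER_TIME, METRICS, METHOD, ANOMALY_SCORE FROM METRIC_ANOMALIES
    //   WHERE METHOD = ? AND SERVER_TIME > ? AND SERVER_TIME <= ?
    //   ORDER BY ANOMALY_SCORE DESC LIMIT 5
    ResultSet rs = stmt.executeQuery();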
 
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/MetricAnomalyDetectorTestService.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/MetricAnomalyDetectorTestService.java
new file mode 100644
index 0000000..6f7b14a
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/MetricAnomalyDetectorTestService.java
@@ -0,0 +1,87 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.server.applicationhistoryservice.webapp;
+
+import com.google.inject.Inject;
+import com.google.inject.Singleton;
+import org.apache.ambari.metrics.alertservice.prototype.MetricAnomalyDetectorTestInput;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.metrics2.sink.timeline.TimelineMetrics;
+import org.apache.hadoop.yarn.api.records.timeline.TimelinePutResponse;
+
+import javax.servlet.http.HttpServletRequest;
+import javax.servlet.http.HttpServletResponse;
+import javax.ws.rs.Consumes;
+import javax.ws.rs.GET;
+import javax.ws.rs.POST;
+import javax.ws.rs.Path;
+import javax.ws.rs.Produces;
+import javax.ws.rs.QueryParam;
+import javax.ws.rs.WebApplicationException;
+import javax.ws.rs.core.Context;
+import javax.ws.rs.core.MediaType;
+import javax.ws.rs.core.Response;
+
+@Singleton
+@Path("/ws/v1/metrictestservice")
+public class MetricAnomalyDetectorTestService {
+
+  private static final Log LOG = LogFactory.getLog(MetricAnomalyDetectorTestService.class);
+
+  @Inject
+  public MetricAnomalyDetectorTestService() {
+  }
+
+  private void init(HttpServletResponse response) {
+    response.setContentType(null);
+  }
+
+  @Path("/anomaly")
+  @POST
+  @Consumes({ MediaType.APPLICATION_JSON /* , MediaType.APPLICATION_XML */})
+  public TimelinePutResponse postAnomalyDetectionRequest(
+    @Context HttpServletRequest req,
+    @Context HttpServletResponse res,
+    MetricAnomalyDetectorTestInput input) {
+
+    init(res);
+    if (input == null) {
+      return new TimelinePutResponse();
+    }
+
+    try {
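+      // Prototype stub: the request is accepted but anomaly detection is not implemented yet.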
+      return null;
+    } catch (Exception e) {
+      throw new WebApplicationException(e, Response.Status.INTERNAL_SERVER_ERROR);
+    }
+  }
+
+  @GET
+  @Path("/dataseries")
+  @Produces({MediaType.APPLICATION_JSON})
+  public TimelineMetrics getTestDataSeries(
+    @Context HttpServletRequest req,
+    @Context HttpServletResponse res,
+    @QueryParam("type") String seriesType,
+    @QueryParam("configs") String config
+  ) {
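+    // Prototype stub: test data series generation is not implemented yet.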
+    return null;
+  }
+}
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/TimelineWebServices.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/TimelineWebServices.java
index 472a787..20aba23 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/TimelineWebServices.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/TimelineWebServices.java
@@ -24,7 +24,6 @@ import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.classification.InterfaceAudience.Public;
 import org.apache.hadoop.classification.InterfaceStability.Unstable;
-import org.apache.hadoop.metrics2.annotation.Metric;
 import org.apache.hadoop.metrics2.sink.timeline.AggregationResult;
 import org.apache.hadoop.metrics2.sink.timeline.ContainerMetric;
 import org.apache.hadoop.metrics2.sink.timeline.PrecisionLimitExceededException;
@@ -37,6 +36,7 @@ import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
 import org.apache.hadoop.metrics2.sink.timeline.TimelineMetrics;
 import org.apache.hadoop.yarn.api.records.timeline.TimelinePutResponse;
 import org.apache.hadoop.metrics2.sink.timeline.Precision;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.TestMetricSeriesGenerator;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricStore;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.discovery.TimelineMetricMetadataKey;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.timeline.EntityIdentifier;
@@ -50,6 +50,7 @@ import org.apache.hadoop.yarn.webapp.BadRequestException;
 import javax.servlet.http.HttpServletRequest;
 import javax.servlet.http.HttpServletResponse;
 import javax.ws.rs.Consumes;
+import javax.ws.rs.DELETE;
 import javax.ws.rs.GET;
 import javax.ws.rs.POST;
 import javax.ws.rs.Path;
@@ -75,6 +76,10 @@ import java.util.Map;
 import java.util.Set;
 import java.util.SortedSet;
 import java.util.TreeSet;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.TimeUnit;
 
 import static org.apache.hadoop.yarn.util.StringHelper.CSV_JOINER;
 
@@ -389,7 +394,7 @@ public class TimelineWebServices {
       }
 
       return timelineMetricStore.getTimelineMetrics(
-        parseListStr(metricNames, ","), parseListStr(hostname, ","), appId, instanceId,
+        parseListStr(metricNames, ","), parseListStr(hostname, ","), appId, parseStr(instanceId),
         parseLongStr(startTime), parseLongStr(endTime),
         Precision.getPrecision(precision), parseIntStr(limit),
         parseBoolean(grouped), parseTopNConfig(topN, topNFunction, isBottomN),
@@ -412,6 +417,25 @@ public class TimelineWebServices {
   }
 
   @GET
+  @Path("/metrics/anomalies")
+  @Produces({ MediaType.APPLICATION_JSON })
+  public TimelineMetrics getAnomalyMetrics(
+    @Context HttpServletRequest req,
+    @Context HttpServletResponse res,
+    @QueryParam("method") String method,
+    @QueryParam("startTime") String startTime,
+    @QueryParam("endTime") String endTime,
+    @QueryParam("limit") String limit
+    ) {
+    init(res);
+
+    try {
+      return timelineMetricStore.getAnomalyMetrics(method, parseLongStr(startTime), parseLongStr(endTime), parseIntStr(limit));
+    } catch (Exception e) {
+      throw new WebApplicationException(e, Response.Status.INTERNAL_SERVER_ERROR);
+    }
+  }
+
+  @GET
   @Path("/metrics/metadata")
   @Produces({ MediaType.APPLICATION_JSON })
   public Map<String, List<TimelineMetricMetadata>> getTimelineMetricMetadata(
@@ -660,6 +684,12 @@ public class TimelineWebServices {
   }
 
   private static String parseStr(String str) {
-    return str == null ? null : str.trim();
+    String trimmed = (str == null) ? null : str.trim();
+    // Treat an empty string or the literal "undefined" as an absent value.
+    if (trimmed != null && (trimmed.isEmpty() || trimmed.equalsIgnoreCase("undefined"))) {
+      trimmed = null;
+    }
+    return trimmed;
   }
 }
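
A sketch of how a client might call the new anomalies endpoint; the collector host, the port (6188 is the usual AMS collector default) and the time window are assumptions:

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    // Hypothetical client for GET /ws/v1/timeline/metrics/anomalies.
    public static void printRecentEmaAnomalies() throws IOException {
      long endTime = System.currentTimeMillis();
      long startTime = endTime - 10 * 60 * 1000; // last 10 minutes
      URL url = new URL("http://collector.example.com:6188/ws/v1/timeline/metrics/anomalies"
          + "?method=ema&startTime=" + startTime + "&endTime=" + endTime + "&limit=5");
      HttpURLConnection conn = (HttpURLConnection) url.openConnection();
      conn.setRequestMethod("GET");
      try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
        String line;
        while ((line = in.readLine()) != null) {
          System.out.println(line); // JSON-serialized TimelineMetrics
        }
      }
    }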
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TestTimelineMetricStore.java b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TestTimelineMetricStore.java
index 07e0daa..7c879e1 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TestTimelineMetricStore.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TestTimelineMetricStore.java
@@ -119,4 +119,9 @@ public class TestTimelineMetricStore implements TimelineMetricStore {
     return null;
   }
 
+  @Override
+  public TimelineMetrics getAnomalyMetrics(String method, long startTime, long endTime, Integer limit) {
+    return null;
+  }
+
 }
diff --git a/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/alerts.json b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/alerts.json
index e41adb5..acecb62 100644
--- a/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/alerts.json
+++ b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/alerts.json
@@ -142,6 +142,76 @@
             "value": "{0} * 100"
           }
         }
+      },
+      {
+        "name": "point_in_time_metrics_anomalies",
+        "label": "Point in Time metric anomalies",
+        "description": "This service-level alert if there are metric anomalies in the last 10 mins or configured interval.",
+        "interval": 10,
+        "scope": "ANY",
+        "enabled": true,
+        "source": {
+          "type": "SCRIPT",
+          "path": "AMBARI_METRICS/0.1.0/package/alerts/alert_point_in_time_metric_anomalies.py",
+          "parameters": [
+            {
+              "name": "num_anomalies",
+              "display_name": "Value of N in Top 'N' anomalies to be reported.",
+              "value": 5,
+              "type": "NUMERIC",
+              "description": "Report only this amount of anomalies."
+            },
+            {
+              "name": "interval",
+              "display_name": "Query Time interval in minutes",
+              "value": 10,
+              "type": "NUMERIC",
+              "description": "Query Time interval in minutes."
+            },
+            {
+              "name": "sensitivity",
+              "display_name": "Alert Sensitivity",
+              "value": 50,
+              "type": "NUMERIC",
+              "description": "Sensitivity of the alert. Scale of 1 - 100. Default = 50."
+            }
+          ]
+        }
+      },
+      {
+        "name": "trend_metrics_anomalies",
+        "label": "Trend metric anomalies",
+        "description": "This service-level alert if there are metric anomalies in the last 10 mins or configured interval.",
+        "interval": 10,
+        "scope": "ANY",
+        "enabled": true,
+        "source": {
+          "type": "SCRIPT",
+          "path": "AMBARI_METRICS/0.1.0/package/alerts/alert_trend_metric_anomalies.py",
+          "parameters": [
+            {
+              "name": "num_anomalies",
+              "display_name": "Value of N in Top 'N' anomalies to be reported.",
+              "value": 5,
+              "type": "NUMERIC",
+              "description": "Report only this amount of anomalies."
+            },
+            {
+              "name": "interval",
+              "display_name": "Query Time interval in minutes",
+              "value": 10,
+              "type": "NUMERIC",
+              "description": "Query Time interval in minutes."
+            },
+            {
+              "name": "sensitivity",
+              "display_name": "Alert Sensitivity",
+              "value": 50,
+              "type": "NUMERIC",
+              "description": "Sensitivity of the alert. Scale of 1 - 100. Default = 50."
+            }
+          ]
+        }
       }
     ],
     "METRICS_MONITOR": [
diff --git a/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/alerts/alert_point_in_time_metric_anomalies.py b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/alerts/alert_point_in_time_metric_anomalies.py
new file mode 100644
index 0000000..154ce1c
--- /dev/null
+++ b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/alerts/alert_point_in_time_metric_anomalies.py
@@ -0,0 +1,185 @@
+#!/usr/bin/env python
+
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+"""
+
+import json
+import urllib
+import time
+import os
+import ambari_commons.network as network
+import logging
+
+from ambari_agent.AmbariConfig import AmbariConfig
+
+RESULT_STATE_OK = 'OK'
+RESULT_STATE_CRITICAL = 'CRITICAL'
+RESULT_STATE_WARNING = 'WARNING'
+RESULT_STATE_UNKNOWN = 'UNKNOWN'
+RESULT_STATE_SKIPPED = 'SKIPPED'
+
+AMS_HTTP_POLICY = '{{ams-site/timeline.metrics.service.http.policy}}'
+METRICS_COLLECTOR_WEBAPP_ADDRESS_KEY = '{{ams-site/timeline.metrics.service.webapp.address}}'
+METRICS_COLLECTOR_VIP_HOST_KEY = '{{cluster-env/metrics_collector_vip_host}}'
+METRICS_COLLECTOR_VIP_PORT_KEY = '{{cluster-env/metrics_collector_vip_port}}'
+
+INTERVAL_PARAM_KEY = 'interval'
+INTERVAL_PARAM_DEFAULT = 10
+
+NUM_ANOMALIES_KEY = 'num_anomalies'
+NUM_ANOMALIES_DEFAULT = 5
+
+SENSITIVITY_KEY = 'sensitivity'
+SENSITIVITY_DEFAULT = 50
+
+AMS_METRICS_GET_URL = "/ws/v1/timeline/metrics/anomalies?%s"
+
+logger = logging.getLogger()
+
+def get_tokens():
+  """
+  Returns a tuple of tokens in the format {{site/property}} that will be used
+  to build the dictionary passed into execute
+  """
+  return (METRICS_COLLECTOR_VIP_HOST_KEY, METRICS_COLLECTOR_VIP_PORT_KEY,
+          METRICS_COLLECTOR_WEBAPP_ADDRESS_KEY, AMS_HTTP_POLICY)
+
+
+def execute(configurations={}, parameters={}, host_name=None):
+  """
+  Returns a tuple containing the result code and a pre-formatted result label
+
+  Keyword arguments:
+  configurations (dictionary): a mapping of configuration key to value
+  parameters (dictionary): a mapping of script parameter key to value
+  host_name (string): the name of this host where the alert is running
+  """
+
+  """
+  Get ready with AMS GET url.
+  Query AMS for point in time anomalies in the last 30mins. 
+  Generate a message with anomalies.
+  """
+  if configurations is None:
+    return (RESULT_STATE_UNKNOWN, ['There were no configurations supplied to the script.'])
+
+  collector_host = host_name
+  current_time = int(time.time()) * 1000
+
+  interval = INTERVAL_PARAM_DEFAULT
+  if INTERVAL_PARAM_KEY in parameters:
+    interval = _coerce_to_integer(parameters[INTERVAL_PARAM_KEY])
+
+  num_anomalies = NUM_ANOMALIES_DEFAULT
+  if NUM_ANOMALIES_KEY in parameters:
+    num_anomalies = _coerce_to_integer(parameters[NUM_ANOMALIES_KEY])
+
+  sensitivity = SENSITIVITY_DEFAULT
+  if SENSITIVITY_KEY in parameters:
+    sensitivity = _coerce_to_integer(parameters[SENSITIVITY_KEY])
+
+  if METRICS_COLLECTOR_VIP_HOST_KEY in configurations and METRICS_COLLECTOR_VIP_PORT_KEY in configurations:
+    collector_host = configurations[METRICS_COLLECTOR_VIP_HOST_KEY]
+    collector_port = int(configurations[METRICS_COLLECTOR_VIP_PORT_KEY])
+  else:
+    # ams-site/timeline.metrics.service.webapp.address is required
+    if not METRICS_COLLECTOR_WEBAPP_ADDRESS_KEY in configurations:
+      return (RESULT_STATE_UNKNOWN,
+              ['{0} is a required parameter for the script'.format(METRICS_COLLECTOR_WEBAPP_ADDRESS_KEY)])
+    else:
+      collector_webapp_address = configurations[METRICS_COLLECTOR_WEBAPP_ADDRESS_KEY].split(":")
+      if valid_collector_webapp_address(collector_webapp_address):
+        collector_port = int(collector_webapp_address[1])
+      else:
+        return (RESULT_STATE_UNKNOWN, ['{0} value should be set as "fqdn_hostname:port", but set to {1}'.format(
+          METRICS_COLLECTOR_WEBAPP_ADDRESS_KEY, configurations[METRICS_COLLECTOR_WEBAPP_ADDRESS_KEY])])
+
+  get_ema_anomalies_parameters = {
+    "method": "ema",
+    "startTime": current_time - interval * 60 * 1000,
+    "endTime": current_time,
+    "limit": num_anomalies + 1
+  }
+
+  encoded_get_metrics_parameters = urllib.urlencode(get_ema_anomalies_parameters)
+
+  ams_collector_conf_dir = "/etc/ambari-metrics-collector/conf"
+  metric_truststore_ca_certs = 'ca.pem'
+  ca_certs = os.path.join(ams_collector_conf_dir,
+                          metric_truststore_ca_certs)
+  metric_collector_https_enabled = str(configurations[AMS_HTTP_POLICY]) == "HTTPS_ONLY"
+
+  try:
+    conn = network.get_http_connection(
+      collector_host,
+      int(collector_port),
+      metric_collector_https_enabled,
+      ca_certs,
+      ssl_version=AmbariConfig.get_resolved_config().get_force_https_protocol_value()
+    )
+    conn.request("GET", AMS_METRICS_GET_URL % encoded_get_metrics_parameters)
+    response = conn.getresponse()
+    data = response.read()
+    logger.info("Data read from metric anomaly endpoint")
+    logger.info(data)
+    conn.close()
+  except Exception:
+    return (RESULT_STATE_UNKNOWN, ["Unable to retrieve anomaly metrics from the Ambari Metrics service."])
+
+  if response.status != 200:
+    return (RESULT_STATE_UNKNOWN, ["Unable to retrieve anomaly metrics from the Ambari Metrics service."])
+
+  data_json = json.loads(data)
+  length = len(data_json["metrics"])
+  logger.info("Number of anomalies returned : {0}".format(length))
+
+  if length == 0:
+    alert_state = RESULT_STATE_OK
+    alert_label = 'No point in time anomalies in the last {0} minutes.'.format(interval)
+    logger.info(alert_label)
+  elif length <= num_anomalies:
+    # Up to num_anomalies anomalies found: warn and link to a scripted Grafana dashboard.
+    alert_state = RESULT_STATE_WARNING
+    alert_label = "http://avijayan-ad-1.openstacklocal:3000/dashboard/script/scripted.js?anomalies=" + data
+  else:
+    # The query asked for num_anomalies + 1 results, so more anomalies exist: critical.
+    alert_state = RESULT_STATE_CRITICAL
+    alert_label = "http://avijayan-ad-1.openstacklocal:3000/dashboard/script/scripted.js?anomalies=" + data
+
+  return (alert_state, [alert_label])
+
+
+def valid_collector_webapp_address(webapp_address):
+  if len(webapp_address) == 2 \
+    and webapp_address[0] != '127.0.0.1' \
+    and webapp_address[1].isdigit():
+    return True
+
+  return False
+
+
+def _coerce_to_integer(value):
+  """
+  Attempts to correctly coerce a value to an integer. For the case of an integer or a float,
+  this will essentially either NOOP or return a truncated value. If the parameter is a string,
+  it will first be parsed as an integer and, failing that, as a float.
+  :param value: the value to coerce
+  :return: the coerced value as an integer
+  """
+  try:
+    return int(value)
+  except ValueError:
+    return int(float(value))
diff --git a/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/alerts/alert_trend_metric_anomalies.py b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/alerts/alert_trend_metric_anomalies.py
new file mode 100644
index 0000000..8813d8e
--- /dev/null
+++ b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/alerts/alert_trend_metric_anomalies.py
@@ -0,0 +1,185 @@
+#!/usr/bin/env python
+
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+"""
+
+import json
+import urllib
+import time
+import os
+import ambari_commons.network as network
+import logging
+
+from ambari_agent.AmbariConfig import AmbariConfig
+
+RESULT_STATE_OK = 'OK'
+RESULT_STATE_CRITICAL = 'CRITICAL'
+RESULT_STATE_WARNING = 'WARNING'
+RESULT_STATE_UNKNOWN = 'UNKNOWN'
+RESULT_STATE_SKIPPED = 'SKIPPED'
+
+AMS_HTTP_POLICY = '{{ams-site/timeline.metrics.service.http.policy}}'
+METRICS_COLLECTOR_WEBAPP_ADDRESS_KEY = '{{ams-site/timeline.metrics.service.webapp.address}}'
+METRICS_COLLECTOR_VIP_HOST_KEY = '{{cluster-env/metrics_collector_vip_host}}'
+METRICS_COLLECTOR_VIP_PORT_KEY = '{{cluster-env/metrics_collector_vip_port}}'
+
+INTERVAL_PARAM_KEY = 'interval'
+INTERVAL_PARAM_DEFAULT = 10
+
+NUM_ANOMALIES_KEY = 'num_anomalies'
+NUM_ANOMALIES_DEFAULT = 5
+
+SENSITIVITY_KEY = 'sensitivity'
+SENSITIVITY_DEFAULT = 50
+
+AMS_METRICS_GET_URL = "/ws/v1/timeline/metrics/anomalies?%s"
+
+logger = logging.getLogger()
+
+def get_tokens():
+  """
+  Returns a tuple of tokens in the format {{site/property}} that will be used
+  to build the dictionary passed into execute
+  """
+  return (METRICS_COLLECTOR_VIP_HOST_KEY, METRICS_COLLECTOR_VIP_PORT_KEY,
+          METRICS_COLLECTOR_WEBAPP_ADDRESS_KEY, AMS_HTTP_POLICY)
+
+
+def execute(configurations={}, parameters={}, host_name=None):
+  """
+  Returns a tuple containing the result code and a pre-formatted result label
+
+  Keyword arguments:
+  configurations (dictionary): a mapping of configuration key to value
+  parameters (dictionary): a mapping of script parameter key to value
+  host_name (string): the name of this host where the alert is running
+  """
+
+  """
+  Get ready with AMS GET url.
+  Query AMS for point in time anomalies in the last 30mins. 
+  Generate a message with anomalies.
+  """
+  if configurations is None:
+    return (RESULT_STATE_UNKNOWN, ['There were no configurations supplied to the script.'])
+
+  collector_host = host_name
+  current_time = int(time.time()) * 1000
+
+  interval = INTERVAL_PARAM_DEFAULT
+  if INTERVAL_PARAM_KEY in parameters:
+    interval = _coerce_to_integer(parameters[INTERVAL_PARAM_KEY])
+
+  num_anomalies = NUM_ANOMALIES_DEFAULT
+  if NUM_ANOMALIES_KEY in parameters:
+    num_anomalies = _coerce_to_integer(parameters[NUM_ANOMALIES_KEY])
+
+  sensitivity = SENSITIVITY_DEFAULT
+  if SENSITIVITY_KEY in parameters:
+    sensitivity = _coerce_to_integer(parameters[SENSITIVITY_KEY])
+
+  if METRICS_COLLECTOR_VIP_HOST_KEY in configurations and METRICS_COLLECTOR_VIP_PORT_KEY in configurations:
+    collector_host = configurations[METRICS_COLLECTOR_VIP_HOST_KEY]
+    collector_port = int(configurations[METRICS_COLLECTOR_VIP_PORT_KEY])
+  else:
+    # ams-site/timeline.metrics.service.webapp.address is required
+    if not METRICS_COLLECTOR_WEBAPP_ADDRESS_KEY in configurations:
+      return (RESULT_STATE_UNKNOWN,
+              ['{0} is a required parameter for the script'.format(METRICS_COLLECTOR_WEBAPP_ADDRESS_KEY)])
+    else:
+      collector_webapp_address = configurations[METRICS_COLLECTOR_WEBAPP_ADDRESS_KEY].split(":")
+      if valid_collector_webapp_address(collector_webapp_address):
+        collector_port = int(collector_webapp_address[1])
+      else:
+        return (RESULT_STATE_UNKNOWN, ['{0} value should be set as "fqdn_hostname:port", but set to {1}'.format(
+          METRICS_COLLECTOR_WEBAPP_ADDRESS_KEY, configurations[METRICS_COLLECTOR_WEBAPP_ADDRESS_KEY])])
+
+  get_ema_anomalies_parameters = {
+    "method": "ks",
+    "startTime": current_time - interval * 60 * 1000,
+    "endTime": current_time,
+    "limit": num_anomalies + 1
+  }
+
+  encoded_get_metrics_parameters = urllib.urlencode(get_ema_anomalies_parameters)
+
+  ams_collector_conf_dir = "/etc/ambari-metrics-collector/conf"
+  metric_truststore_ca_certs = 'ca.pem'
+  ca_certs = os.path.join(ams_collector_conf_dir,
+                          metric_truststore_ca_certs)
+  metric_collector_https_enabled = str(configurations[AMS_HTTP_POLICY]) == "HTTPS_ONLY"
+
+  try:
+    conn = network.get_http_connection(
+      collector_host,
+      int(collector_port),
+      metric_collector_https_enabled,
+      ca_certs,
+      ssl_version=AmbariConfig.get_resolved_config().get_force_https_protocol_value()
+    )
+    conn.request("GET", AMS_METRICS_GET_URL % encoded_get_metrics_parameters)
+    response = conn.getresponse()
+    data = response.read()
+    logger.info("Data read from metric anomaly endpoint")
+    logger.info(data)
+    conn.close()
+  except Exception:
+    return (RESULT_STATE_UNKNOWN, ["Unable to retrieve anomaly metrics from the Ambari Metrics service."])
+
+  if response.status != 200:
+    return (RESULT_STATE_UNKNOWN, ["Unable to retrieve anomaly metrics from the Ambari Metrics service."])
+
+  data_json = json.loads(data)
+  length = len(data_json["metrics"])
+  logger.info("Number of anomalies returned : {0}".format(length))
+
+  if length == 0:
+    alert_state = RESULT_STATE_OK
+    alert_label = 'No trend anomalies in the last {0} minutes.'.format(interval)
+    logger.info(alert_label)
+  elif length <= num_anomalies:
+    # Up to num_anomalies anomalies found: warn and link to a scripted Grafana dashboard.
+    alert_state = RESULT_STATE_WARNING
+    alert_label = "http://avijayan-ad-1.openstacklocal:3000/dashboard/script/scripted.js?anomalies=" + data
+  else:
+    # The query asked for num_anomalies + 1 results, so more anomalies exist: critical.
+    alert_state = RESULT_STATE_CRITICAL
+    alert_label = "http://avijayan-ad-1.openstacklocal:3000/dashboard/script/scripted.js?anomalies=" + data
+
+  return (alert_state, [alert_label])
+
+
+def valid_collector_webapp_address(webapp_address):
+  if len(webapp_address) == 2 \
+    and webapp_address[0] != '127.0.0.1' \
+    and webapp_address[1].isdigit():
+    return True
+
+  return False
+
+
+def _coerce_to_integer(value):
+  """
+  Attempts to correctly coerce a value to an integer. For the case of an integer or a float,
+  this will essentially either NOOP or return a truncated value. If the parameter is a string,
+  it will first be parsed as an integer and, failing that, as a float.
+  :param value: the value to coerce
+  :return: the coerced value as an integer
+  """
+  try:
+    return int(value)
+  except ValueError:
+    return int(float(value))
diff --git a/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/alerts/alert_metrics_deviation.py b/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/alerts/alert_metrics_deviation.py
index 7f64c80..3f8eb2e 100644
--- a/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/alerts/alert_metrics_deviation.py
+++ b/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/alerts/alert_metrics_deviation.py
@@ -331,10 +331,12 @@ def execute(configurations={}, parameters={}, host_name=None):
     response = conn.getresponse()
     data = response.read()
     conn.close()
-  except Exception:
+  except Exception, e:
+    logger.info(str(e))
     return (RESULT_STATE_UNKNOWN, ["Unable to retrieve metrics from the Ambari Metrics service."])
 
   if response.status != 200:
+    logger.info(str(data))
     return (RESULT_STATE_UNKNOWN, ["Unable to retrieve metrics from the Ambari Metrics service."])
 
   data_json = json.loads(data)
diff --git a/ambari-server/src/test/java/org/apache/ambari/server/controller/metrics/timeline/MetricsPaddingMethodTest.java b/ambari-server/src/test/java/org/apache/ambari/server/controller/metrics/timeline/MetricsPaddingMethodTest.java
index b57f7e9..e66e5b8 100644
--- a/ambari-server/src/test/java/org/apache/ambari/server/controller/metrics/timeline/MetricsPaddingMethodTest.java
+++ b/ambari-server/src/test/java/org/apache/ambari/server/controller/metrics/timeline/MetricsPaddingMethodTest.java
@@ -39,6 +39,7 @@ public class MetricsPaddingMethodTest {
     timelineMetric.setMetricName("m1");
     timelineMetric.setHostName("h1");
     timelineMetric.setAppId("a1");
+    timelineMetric.setStartTime(now);
     TreeMap<Long, Double> inputValues = new TreeMap<>();
     inputValues.put(now - 1000, 1.0d);
     inputValues.put(now - 2000, 2.0d);
@@ -66,6 +67,7 @@ public class MetricsPaddingMethodTest {
     timelineMetric.setMetricName("m1");
     timelineMetric.setHostName("h1");
     timelineMetric.setAppId("a1");
+    timelineMetric.setStartTime(now);
     TreeMap<Long, Double> inputValues = new TreeMap<>();
     inputValues.put(now - 1000, 1.0d);
     inputValues.put(now - 2000, 2.0d);
@@ -93,6 +95,7 @@ public class MetricsPaddingMethodTest {
     timelineMetric.setMetricName("m1");
     timelineMetric.setHostName("h1");
     timelineMetric.setAppId("a1");
+    timelineMetric.setStartTime(now);
     TreeMap<Long, Double> inputValues = new TreeMap<>();
     inputValues.put(now, 0.0d);
     inputValues.put(now - 1000, 1.0d);
@@ -120,6 +123,7 @@ public class MetricsPaddingMethodTest {
     timelineMetric.setMetricName("m1");
     timelineMetric.setHostName("h1");
     timelineMetric.setAppId("a1");
+    timelineMetric.setStartTime(now);
     TreeMap<Long, Double> inputValues = new TreeMap<>();
     inputValues.put(now - 1000, 1.0d);
     timelineMetric.setMetricValues(inputValues);
@@ -145,6 +149,7 @@ public class MetricsPaddingMethodTest {
     timelineMetric.setMetricName("m1");
     timelineMetric.setHostName("h1");
     timelineMetric.setAppId("a1");
+    timelineMetric.setStartTime(now);
     TreeMap<Long, Double> inputValues = new TreeMap<>();
     inputValues.put(now - 1000, 1.0d);
     timelineMetric.setMetricValues(inputValues);
@@ -168,6 +173,7 @@ public class MetricsPaddingMethodTest {
     timelineMetric.setMetricName("m1");
     timelineMetric.setHostName("h1");
     timelineMetric.setAppId("a1");
+    timelineMetric.setStartTime(now);
     TreeMap<Long, Double> inputValues = new TreeMap<>();
 
     long seconds = 1000;
@@ -228,6 +234,7 @@ public class MetricsPaddingMethodTest {
     timelineMetric.setMetricName("m1");
     timelineMetric.setHostName("h1");
     timelineMetric.setAppId("a1");
+    timelineMetric.setStartTime(now);
     TreeMap<Long, Double> inputValues = new TreeMap<>();
     inputValues.put(now - 100, 1.0d);
     inputValues.put(now - 200, 2.0d);


[ambari] 09/39: AMBARI-21244 Add https support to local metrics aggregator application (dsen)

Posted by av...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

avijayan pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/ambari.git

commit 1e43864585259e1a3b2d4835256200d2d3448c1f
Author: Dmytro Sen <ds...@apache.org>
AuthorDate: Wed Jun 21 15:49:27 2017 +0300

    AMBARI-21244 Add https support to local metrics aggregator application (dsen)
---
 .../logfeeder/metrics/LogFeederAMSClient.java      |   5 +
 .../sink/timeline/AbstractTimelineMetricsSink.java |  13 +-
 .../AbstractTimelineMetricSinkTest.java            |   5 +
 .../availability/MetricCollectorHATest.java        |   5 +
 .../timeline/cache/HandleConnectExceptionTest.java |   5 +
 .../src/main/conf/flume-metrics2.properties.j2     |   8 +-
 .../sink/flume/FlumeTimelineMetricsSink.java       |  13 +-
 .../sink/timeline/HadoopTimelineMetricsSink.java   |  13 +-
 .../timeline/HadoopTimelineMetricsSinkTest.java    |   4 +
 .../ambari-metrics-host-aggregator/pom.xml         |  10 ++
 .../host/aggregator/AggregatorApplication.java     |  51 +++++++-
 .../sink/timeline/AbstractMetricPublisher.java     |  12 +-
 .../sink/timeline/AggregatedMetricsPublisher.java  |   5 +
 .../sink/timeline/RawMetricsPublisher.java         |   5 +
 .../src/main/python/core/aggregator.py             |   6 +-
 .../src/main/python/core/config_reader.py          |   9 +-
 .../src/main/python/core/emitter.py                |  42 +++---
 .../src/main/python/core/host_info.py              |   1 -
 .../src/main/python/core/stop_handler.py           |   4 +-
 .../sink/kafka/KafkaTimelineMetricsReporter.java   |  10 +-
 .../sink/storm/StormTimelineMetricsReporter.java   |  16 ++-
 .../sink/storm/StormTimelineMetricsSink.java       |  13 +-
 .../sink/storm/StormTimelineMetricsReporter.java   |  13 +-
 .../sink/storm/StormTimelineMetricsSink.java       |  13 +-
 .../metrics/system/impl/AmbariMetricSinkImpl.java  |   5 +
 .../ACCUMULO/1.6.1.2.2.0/package/scripts/params.py |   6 +
 .../hadoop-metrics2-accumulo.properties.j2         |   3 +
 .../0.1.0/configuration/ams-site.xml               |   4 +
 .../AMBARI_METRICS/0.1.0/package/scripts/ams.py    |  42 ++++--
 .../AMBARI_METRICS/0.1.0/package/scripts/params.py |   8 +-
 .../templates/hadoop-metrics2-hbase.properties.j2  | 100 ++++++---------
 .../0.1.0/package/templates/metric_monitor.ini.j2  |   3 +
 .../FLUME/1.4.0.2.0/package/scripts/params.py      |   6 +
 .../package/templates/flume-metrics2.properties.j2 |   3 +
 .../0.96.0.2.0/package/scripts/params_linux.py     |   6 +
 ...oop-metrics2-hbase.properties-GANGLIA-MASTER.j2 |   3 +
 .../hadoop-metrics2-hbase.properties-GANGLIA-RS.j2 |   3 +
 .../0.12.0.2.0/package/scripts/params_linux.py     |   6 +
 .../hadoop-metrics2-hivemetastore.properties.j2    |   4 +-
 .../hadoop-metrics2-hiveserver2.properties.j2      |   3 +
 .../templates/hadoop-metrics2-llapdaemon.j2        |   3 +
 .../templates/hadoop-metrics2-llaptaskscheduler.j2 |   3 +
 .../KAFKA/0.8.1/configuration/kafka-broker.xml     |   5 +
 .../KAFKA/0.8.1/package/scripts/params.py          |   6 +
 .../STORM/0.9.1/package/scripts/params_linux.py    |   6 +
 .../STORM/0.9.1/package/templates/config.yaml.j2   |   7 +-
 .../package/templates/storm-metrics2.properties.j2 |   3 +
 .../stack-hooks/before-START/scripts/params.py     |   6 +
 .../templates/hadoop-metrics2.properties.j2        |   3 +
 .../STORM/package/templates/config.yaml.j2         |  48 ++++---
 .../configuration/hadoop-metrics2.properties.xml   |   3 +
 .../configuration/hadoop-metrics2.properties.xml   |   5 +
 .../system/impl/TestAmbariMetricsSinkImpl.java     |   5 +
 .../2.0.6/AMBARI_METRICS/test_metrics_monitor.py   | 142 +++++++++++++++++++++
 .../stacks/2.0.6/configs/default_ams_embedded.json |   1 +
 .../HDF/2.0/hooks/before-START/scripts/params.py   |   6 +
 .../templates/hadoop-metrics2.properties.j2        |  10 +-
 .../ODPi/2.0/hooks/before-START/scripts/params.py  |  19 +++
 .../services/HIVE/package/scripts/params_linux.py  |   9 ++
 .../hadoop-metrics2-hivemetastore.properties.j2    |  10 +-
 .../hadoop-metrics2-hiveserver2.properties.j2      |  10 +-
 .../templates/hadoop-metrics2-llapdaemon.j2        |  11 +-
 .../templates/hadoop-metrics2-llaptaskscheduler.j2 |   9 +-
 63 files changed, 656 insertions(+), 160 deletions(-)

diff --git a/ambari-logsearch/ambari-logsearch-logfeeder/src/main/java/org/apache/ambari/logfeeder/metrics/LogFeederAMSClient.java b/ambari-logsearch/ambari-logsearch-logfeeder/src/main/java/org/apache/ambari/logfeeder/metrics/LogFeederAMSClient.java
index ba986c7..0ccdff3 100644
--- a/ambari-logsearch/ambari-logsearch-logfeeder/src/main/java/org/apache/ambari/logfeeder/metrics/LogFeederAMSClient.java
+++ b/ambari-logsearch/ambari-logsearch-logfeeder/src/main/java/org/apache/ambari/logfeeder/metrics/LogFeederAMSClient.java
@@ -100,6 +100,11 @@ public class LogFeederAMSClient extends AbstractTimelineMetricsSink {
   }
 
   @Override
+  protected String getHostInMemoryAggregationProtocol() {
+    return "http";
+  }
+
+  @Override
   protected boolean emitMetrics(TimelineMetrics metrics) {
     return super.emitMetrics(metrics);
   }
diff --git a/ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/AbstractTimelineMetricsSink.java b/ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/AbstractTimelineMetricsSink.java
index 337f640..3c06032 100644
--- a/ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/AbstractTimelineMetricsSink.java
+++ b/ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/AbstractTimelineMetricsSink.java
@@ -81,6 +81,7 @@ public abstract class AbstractTimelineMetricsSink {
   public static final String SSL_KEYSTORE_PASSWORD_PROPERTY = "truststore.password";
   public static final String HOST_IN_MEMORY_AGGREGATION_ENABLED_PROPERTY = "host_in_memory_aggregation";
   public static final String HOST_IN_MEMORY_AGGREGATION_PORT_PROPERTY = "host_in_memory_aggregation_port";
+  public static final String HOST_IN_MEMORY_AGGREGATION_PROTOCOL_PROPERTY = "host_in_memory_aggregation_protocol";
   public static final String COLLECTOR_LIVE_NODES_PATH = "/ws/v1/timeline/metrics/livenodes";
   public static final String INSTANCE_ID_PROPERTY = "instanceId";
   public static final String SET_INSTANCE_ID_PROPERTY = "set.instanceId";
@@ -293,7 +294,11 @@ public abstract class AbstractTimelineMetricsSink {
     boolean validCollectorHost = true;
 
     if (isHostInMemoryAggregationEnabled()) {
-      connectUrl = constructTimelineMetricUri("http", "localhost", String.valueOf(getHostInMemoryAggregationPort()));
+      String hostname = "localhost";
+      if (getHostInMemoryAggregationProtocol().equalsIgnoreCase("https")) {
+        hostname = getHostname();
+      }
+      connectUrl = constructTimelineMetricUri(getHostInMemoryAggregationProtocol(), hostname, String.valueOf(getHostInMemoryAggregationPort()));
     } else {
       String collectorHost  = getCurrentCollectorHost();
       if (collectorHost == null) {
@@ -647,4 +652,10 @@ public abstract class AbstractTimelineMetricsSink {
    * @return
    */
   abstract protected int getHostInMemoryAggregationPort();
+
+  /**
+   * In memory aggregation protocol
+   * @return
+   */
+  abstract protected String getHostInMemoryAggregationProtocol();
 }
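
The routing change above can be read as: with host in-memory aggregation
enabled, a sink posts to the local aggregator, addressing it as "localhost"
over plain http but by real hostname when the aggregator runs https (so the
certificate's hostname can match). A rough Python rendering of that decision
(function and parameter names are illustrative, not part of the patch):

    def choose_connect_url(in_memory_enabled, aggr_protocol, aggr_port,
                           hostname, collector_protocol, collector_host,
                           collector_port, path="/ws/v1/timeline/metrics"):
        # Mirrors the branch added to AbstractTimelineMetricsSink above.
        if in_memory_enabled:
            # https needs a certificate-matching name; http can stay local.
            host = hostname if aggr_protocol.lower() == "https" else "localhost"
            return "%s://%s:%s%s" % (aggr_protocol, host, aggr_port, path)
        return "%s://%s:%s%s" % (collector_protocol, collector_host,
                                 collector_port, path)
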
diff --git a/ambari-metrics/ambari-metrics-common/src/test/java/org/apache/hadoop/metrics2/sink/timeline/availability/AbstractTimelineMetricSinkTest.java b/ambari-metrics/ambari-metrics-common/src/test/java/org/apache/hadoop/metrics2/sink/timeline/availability/AbstractTimelineMetricSinkTest.java
index ce2cf79..396d08d 100644
--- a/ambari-metrics/ambari-metrics-common/src/test/java/org/apache/hadoop/metrics2/sink/timeline/availability/AbstractTimelineMetricSinkTest.java
+++ b/ambari-metrics/ambari-metrics-common/src/test/java/org/apache/hadoop/metrics2/sink/timeline/availability/AbstractTimelineMetricSinkTest.java
@@ -100,6 +100,11 @@ public class AbstractTimelineMetricSinkTest {
     }
 
     @Override
+    protected String getHostInMemoryAggregationProtocol() {
+      return "http";
+    }
+
+    @Override
     public boolean emitMetrics(TimelineMetrics metrics) {
       super.init();
       return super.emitMetrics(metrics);
diff --git a/ambari-metrics/ambari-metrics-common/src/test/java/org/apache/hadoop/metrics2/sink/timeline/availability/MetricCollectorHATest.java b/ambari-metrics/ambari-metrics-common/src/test/java/org/apache/hadoop/metrics2/sink/timeline/availability/MetricCollectorHATest.java
index f0174d5..0abc5fc 100644
--- a/ambari-metrics/ambari-metrics-common/src/test/java/org/apache/hadoop/metrics2/sink/timeline/availability/MetricCollectorHATest.java
+++ b/ambari-metrics/ambari-metrics-common/src/test/java/org/apache/hadoop/metrics2/sink/timeline/availability/MetricCollectorHATest.java
@@ -202,5 +202,10 @@ public class MetricCollectorHATest {
     protected int getHostInMemoryAggregationPort() {
       return 61888;
     }
+
+    @Override
+    protected String getHostInMemoryAggregationProtocol() {
+      return "http";
+    }
   }
 }
diff --git a/ambari-metrics/ambari-metrics-common/src/test/java/org/apache/hadoop/metrics2/sink/timeline/cache/HandleConnectExceptionTest.java b/ambari-metrics/ambari-metrics-common/src/test/java/org/apache/hadoop/metrics2/sink/timeline/cache/HandleConnectExceptionTest.java
index 3be2162..77aba6b 100644
--- a/ambari-metrics/ambari-metrics-common/src/test/java/org/apache/hadoop/metrics2/sink/timeline/cache/HandleConnectExceptionTest.java
+++ b/ambari-metrics/ambari-metrics-common/src/test/java/org/apache/hadoop/metrics2/sink/timeline/cache/HandleConnectExceptionTest.java
@@ -146,6 +146,11 @@ public class HandleConnectExceptionTest {
     }
 
     @Override
+    protected String getHostInMemoryAggregationProtocol() {
+      return "http";
+    }
+
+    @Override
     public boolean emitMetrics(TimelineMetrics metrics) {
       super.init();
       return super.emitMetrics(metrics);
diff --git a/ambari-metrics/ambari-metrics-flume-sink/src/main/conf/flume-metrics2.properties.j2 b/ambari-metrics/ambari-metrics-flume-sink/src/main/conf/flume-metrics2.properties.j2
index f9b303e..58c5f09 100644
--- a/ambari-metrics/ambari-metrics-flume-sink/src/main/conf/flume-metrics2.properties.j2
+++ b/ambari-metrics/ambari-metrics-flume-sink/src/main/conf/flume-metrics2.properties.j2
@@ -19,7 +19,13 @@
 collector=http://localhost:6188
 collectionFrequency=60000
 maxRowCacheSize=10000
-sendInterval=59000
+sendInterval={{metrics_report_interval}}000
+clusterReporterAppId=nimbus
+host_in_memory_aggregation = {{host_in_memory_aggregation}}
+host_in_memory_aggregation_port = {{host_in_memory_aggregation_port}}
+{% if is_aggregation_https_enabled %}
+host_in_memory_aggregation_protocol = {{host_in_memory_aggregation_protocol}}
+{% endif %}
 
 # Metric names having type COUNTER
 counters=EventTakeSuccessCount,EventPutSuccessCount,EventTakeAttemptCount,EventPutAttemptCount
diff --git a/ambari-metrics/ambari-metrics-flume-sink/src/main/java/org/apache/hadoop/metrics2/sink/flume/FlumeTimelineMetricsSink.java b/ambari-metrics/ambari-metrics-flume-sink/src/main/java/org/apache/hadoop/metrics2/sink/flume/FlumeTimelineMetricsSink.java
index 6277907..720c371 100644
--- a/ambari-metrics/ambari-metrics-flume-sink/src/main/java/org/apache/hadoop/metrics2/sink/flume/FlumeTimelineMetricsSink.java
+++ b/ambari-metrics/ambari-metrics-flume-sink/src/main/java/org/apache/hadoop/metrics2/sink/flume/FlumeTimelineMetricsSink.java
@@ -65,6 +65,7 @@ public class FlumeTimelineMetricsSink extends AbstractTimelineMetricsSink implem
   private String instanceId;
   private boolean hostInMemoryAggregationEnabled;
   private int hostInMemoryAggregationPort;
+  private String hostInMemoryAggregationProtocol;
 
 
   @Override
@@ -114,12 +115,13 @@ public class FlumeTimelineMetricsSink extends AbstractTimelineMetricsSink implem
     setInstanceId = Boolean.valueOf(configuration.getProperty(SET_INSTANCE_ID_PROPERTY, "false"));
     instanceId = configuration.getProperty(INSTANCE_ID_PROPERTY, "");
 
-    hostInMemoryAggregationEnabled = Boolean.getBoolean(configuration.getProperty(HOST_IN_MEMORY_AGGREGATION_ENABLED_PROPERTY));
-    hostInMemoryAggregationPort = Integer.valueOf(configuration.getProperty(HOST_IN_MEMORY_AGGREGATION_PORT_PROPERTY));
+    hostInMemoryAggregationEnabled = Boolean.getBoolean(configuration.getProperty(HOST_IN_MEMORY_AGGREGATION_ENABLED_PROPERTY, "false"));
+    hostInMemoryAggregationPort = Integer.valueOf(configuration.getProperty(HOST_IN_MEMORY_AGGREGATION_PORT_PROPERTY, "61888"));
+    hostInMemoryAggregationProtocol = configuration.getProperty(HOST_IN_MEMORY_AGGREGATION_PROTOCOL_PROPERTY, "http");
     // Initialize the collector write strategy
     super.init();
 
-    if (protocol.contains("https")) {
+    if (protocol.contains("https") || hostInMemoryAggregationProtocol.contains("https")) {
       String trustStorePath = configuration.getProperty(SSL_KEYSTORE_PATH_PROPERTY).trim();
       String trustStoreType = configuration.getProperty(SSL_KEYSTORE_TYPE_PROPERTY).trim();
       String trustStorePwd = configuration.getProperty(SSL_KEYSTORE_PASSWORD_PROPERTY).trim();
@@ -178,6 +180,11 @@ public class FlumeTimelineMetricsSink extends AbstractTimelineMetricsSink implem
     return hostInMemoryAggregationPort;
   }
 
+  @Override
+  protected String getHostInMemoryAggregationProtocol() {
+    return hostInMemoryAggregationProtocol;
+  }
+
   public void setPollFrequency(long pollFrequency) {
     this.pollFrequency = pollFrequency;
   }
diff --git a/ambari-metrics/ambari-metrics-hadoop-sink/src/main/java/org/apache/hadoop/metrics2/sink/timeline/HadoopTimelineMetricsSink.java b/ambari-metrics/ambari-metrics-hadoop-sink/src/main/java/org/apache/hadoop/metrics2/sink/timeline/HadoopTimelineMetricsSink.java
index bbc9617..f0eefc2 100644
--- a/ambari-metrics/ambari-metrics-hadoop-sink/src/main/java/org/apache/hadoop/metrics2/sink/timeline/HadoopTimelineMetricsSink.java
+++ b/ambari-metrics/ambari-metrics-hadoop-sink/src/main/java/org/apache/hadoop/metrics2/sink/timeline/HadoopTimelineMetricsSink.java
@@ -77,6 +77,7 @@ public class HadoopTimelineMetricsSink extends AbstractTimelineMetricsSink imple
   });
   private int hostInMemoryAggregationPort;
   private boolean hostInMemoryAggregationEnabled;
+  private String hostInMemoryAggregationProtocol;
 
   @Override
   public void init(SubsetConfiguration conf) {
@@ -109,12 +110,13 @@ public class HadoopTimelineMetricsSink extends AbstractTimelineMetricsSink imple
     protocol = conf.getString(COLLECTOR_PROTOCOL, "http");
     collectorHosts = parseHostsStringArrayIntoCollection(conf.getStringArray(COLLECTOR_HOSTS_PROPERTY));
     port = conf.getString(COLLECTOR_PORT, "6188");
-    hostInMemoryAggregationEnabled = conf.getBoolean(HOST_IN_MEMORY_AGGREGATION_ENABLED_PROPERTY);
-    hostInMemoryAggregationPort = conf.getInt(HOST_IN_MEMORY_AGGREGATION_PORT_PROPERTY);
+    hostInMemoryAggregationEnabled = conf.getBoolean(HOST_IN_MEMORY_AGGREGATION_ENABLED_PROPERTY, false);
+    hostInMemoryAggregationPort = conf.getInt(HOST_IN_MEMORY_AGGREGATION_PORT_PROPERTY, 61888);
+    hostInMemoryAggregationProtocol = conf.getString(HOST_IN_MEMORY_AGGREGATION_PROTOCOL_PROPERTY, "http");
     if (collectorHosts.isEmpty()) {
       LOG.error("No Metric collector configured.");
     } else {
-      if (protocol.contains("https")) {
+      if (protocol.contains("https") || hostInMemoryAggregationProtocol.contains("https")) {
         String trustStorePath = conf.getString(SSL_KEYSTORE_PATH_PROPERTY).trim();
         String trustStoreType = conf.getString(SSL_KEYSTORE_TYPE_PROPERTY).trim();
         String trustStorePwd = conf.getString(SSL_KEYSTORE_PASSWORD_PROPERTY).trim();
@@ -262,6 +264,11 @@ public class HadoopTimelineMetricsSink extends AbstractTimelineMetricsSink imple
   }
 
   @Override
+  protected String getHostInMemoryAggregationProtocol() {
+    return hostInMemoryAggregationProtocol;
+  }
+
+  @Override
   public void putMetrics(MetricsRecord record) {
     try {
       String recordName = record.name();
diff --git a/ambari-metrics/ambari-metrics-hadoop-sink/src/test/java/org/apache/hadoop/metrics2/sink/timeline/HadoopTimelineMetricsSinkTest.java b/ambari-metrics/ambari-metrics-hadoop-sink/src/test/java/org/apache/hadoop/metrics2/sink/timeline/HadoopTimelineMetricsSinkTest.java
index 6bb6454..a92b436 100644
--- a/ambari-metrics/ambari-metrics-hadoop-sink/src/test/java/org/apache/hadoop/metrics2/sink/timeline/HadoopTimelineMetricsSinkTest.java
+++ b/ambari-metrics/ambari-metrics-hadoop-sink/src/test/java/org/apache/hadoop/metrics2/sink/timeline/HadoopTimelineMetricsSinkTest.java
@@ -60,6 +60,7 @@ import static org.apache.hadoop.metrics2.sink.timeline.AbstractTimelineMetricsSi
 import static org.apache.hadoop.metrics2.sink.timeline.AbstractTimelineMetricsSink.MAX_METRIC_ROW_CACHE_SIZE;
 import static org.apache.hadoop.metrics2.sink.timeline.AbstractTimelineMetricsSink.METRICS_SEND_INTERVAL;
 import static org.apache.hadoop.metrics2.sink.timeline.AbstractTimelineMetricsSink.SET_INSTANCE_ID_PROPERTY;
+import static org.apache.hadoop.metrics2.sink.timeline.AbstractTimelineMetricsSink.HOST_IN_MEMORY_AGGREGATION_PROTOCOL_PROPERTY;
 import static org.easymock.EasyMock.anyInt;
 import static org.easymock.EasyMock.anyObject;
 import static org.easymock.EasyMock.anyString;
@@ -116,6 +117,7 @@ public class HadoopTimelineMetricsSinkTest {
     expect(conf.getInt(eq(METRICS_SEND_INTERVAL), anyInt())).andReturn(1000).anyTimes();
     expect(conf.getBoolean(eq(SET_INSTANCE_ID_PROPERTY), eq(false))).andReturn(true).anyTimes();
     expect(conf.getString(eq(INSTANCE_ID_PROPERTY), anyString())).andReturn("instanceId").anyTimes();
+    expect(conf.getString(eq(HOST_IN_MEMORY_AGGREGATION_PROTOCOL_PROPERTY), anyString())).andReturn("http").anyTimes();
 
     conf.setListDelimiterHandler(new DefaultListDelimiterHandler(eq(',')));
     expectLastCall().anyTimes();
@@ -188,6 +190,7 @@ public class HadoopTimelineMetricsSinkTest {
     expect(conf.getString(eq("serviceName-prefix"), eq(""))).andReturn("").anyTimes();
     expect(conf.getString(eq(COLLECTOR_PROTOCOL), eq("http"))).andReturn("http").anyTimes();
     expect(conf.getString(eq(COLLECTOR_PORT), eq("6188"))).andReturn("6188").anyTimes();
+    expect(conf.getString(eq(HOST_IN_MEMORY_AGGREGATION_PROTOCOL_PROPERTY), anyString())).andReturn("http").anyTimes();
 
     expect(conf.getInt(eq(MAX_METRIC_ROW_CACHE_SIZE), anyInt())).andReturn(10).anyTimes();
     // Return eviction time smaller than time diff for first 3 entries
@@ -326,6 +329,7 @@ public class HadoopTimelineMetricsSinkTest {
 
     expect(conf.getInt(eq(MAX_METRIC_ROW_CACHE_SIZE), anyInt())).andReturn(10).anyTimes();
     expect(conf.getInt(eq(METRICS_SEND_INTERVAL), anyInt())).andReturn(10).anyTimes();
+    expect(conf.getString(eq(HOST_IN_MEMORY_AGGREGATION_PROTOCOL_PROPERTY), anyString())).andReturn("http").anyTimes();
 
     conf.setListDelimiterHandler(new DefaultListDelimiterHandler(eq(',')));
     expectLastCall().anyTimes();
diff --git a/ambari-metrics/ambari-metrics-host-aggregator/pom.xml b/ambari-metrics/ambari-metrics-host-aggregator/pom.xml
index 24432dd..d126be5 100644
--- a/ambari-metrics/ambari-metrics-host-aggregator/pom.xml
+++ b/ambari-metrics/ambari-metrics-host-aggregator/pom.xml
@@ -101,6 +101,16 @@
             <version>4.2</version>
             <scope>test</scope>
         </dependency>
+        <dependency>
+            <groupId>org.eclipse.jetty</groupId>
+            <artifactId>jetty-server</artifactId>
+            <version>9.2.11.v20150529</version>
+        </dependency>
+        <dependency>
+            <groupId>org.eclipse.jetty</groupId>
+            <artifactId>jetty-webapp</artifactId>
+            <version>9.2.11.v20150529</version>
+        </dependency>
     </dependencies>
 
     <build>
diff --git a/ambari-metrics/ambari-metrics-host-aggregator/src/main/java/org/apache/hadoop/metrics2/host/aggregator/AggregatorApplication.java b/ambari-metrics/ambari-metrics-host-aggregator/src/main/java/org/apache/hadoop/metrics2/host/aggregator/AggregatorApplication.java
index 1e5cc82..f8ed95f 100644
--- a/ambari-metrics/ambari-metrics-host-aggregator/src/main/java/org/apache/hadoop/metrics2/host/aggregator/AggregatorApplication.java
+++ b/ambari-metrics/ambari-metrics-host-aggregator/src/main/java/org/apache/hadoop/metrics2/host/aggregator/AggregatorApplication.java
@@ -22,20 +22,23 @@ import com.sun.jersey.api.core.PackagesResourceConfig;
 import com.sun.jersey.api.core.ResourceConfig;
 import com.sun.net.httpserver.HttpServer;
 
+import javax.net.ssl.SSLContext;
 import javax.ws.rs.core.UriBuilder;
-import java.io.IOException;
 import java.net.InetAddress;
 import java.net.URI;
 import java.net.URL;
 import java.net.UnknownHostException;
 import java.util.HashMap;
 
+import com.sun.net.httpserver.HttpsConfigurator;
+import com.sun.net.httpserver.HttpsServer;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.metrics2.sink.timeline.AbstractMetricPublisher;
 import org.apache.hadoop.metrics2.sink.timeline.AggregatedMetricsPublisher;
 import org.apache.hadoop.metrics2.sink.timeline.RawMetricsPublisher;
+import org.eclipse.jetty.util.ssl.SslContextFactory;
 
 /**
  * WEB application with 2 publisher threads that processes received metrics and submits results to the collector
@@ -45,10 +48,12 @@ public class AggregatorApplication
     private static final int STOP_SECONDS_DELAY = 0;
     private static final int JOIN_SECONDS_TIMEOUT = 5;
     private static final String METRICS_SITE_CONFIGURATION_FILE = "ams-site.xml";
+    private static final String METRICS_SSL_SERVER_CONFIGURATION_FILE = "ssl-server.xml";
     private Log LOG;
     private final int webApplicationPort;
     private final int rawPublishingInterval;
     private final int aggregationInterval;
+    private final String webServerProtocol;
     private Configuration configuration;
     private Thread aggregatePublisherThread;
     private Thread rawPublisherThread;
@@ -65,10 +70,11 @@ public class AggregatorApplication
         this.aggregationInterval = configuration.getInt("timeline.metrics.host.aggregator.minute.interval", 300);
         this.rawPublishingInterval = configuration.getInt("timeline.metrics.sink.report.interval", 60);
         this.webApplicationPort = configuration.getInt("timeline.metrics.host.inmemory.aggregation.port", 61888);
+        this.webServerProtocol = configuration.get("timeline.metrics.host.inmemory.aggregation.http.policy", "HTTP_ONLY").equalsIgnoreCase("HTTP_ONLY") ? "http" : "https";
         this.timelineMetricsHolder = TimelineMetricsHolder.getInstance(rawPublishingInterval, aggregationInterval);
         try {
             this.httpServer = createHttpServer();
-        } catch (IOException e) {
+        } catch (Exception e) {
             LOG.error("Exception while starting HTTP server. Exiting", e);
             System.exit(1);
         }
@@ -88,13 +94,20 @@ public class AggregatorApplication
 
         URL amsResUrl = classLoader.getResource(METRICS_SITE_CONFIGURATION_FILE);
         LOG.info("Found metric service configuration: " + amsResUrl);
+        URL sslConfUrl = classLoader.getResource(METRICS_SSL_SERVER_CONFIGURATION_FILE);
+        LOG.info("Found metric service configuration: " + sslConfUrl);
         if (amsResUrl == null) {
-            throw new IllegalStateException("Unable to initialize the metrics " +
-                    "subsystem. No ams-site present in the classpath.");
+            throw new IllegalStateException(String.format("Unable to initialize the metrics " +
+                    "subsystem. No %s present in the classpath.", METRICS_SITE_CONFIGURATION_FILE));
+        }
+        if (sslConfUrl == null) {
+            throw new IllegalStateException(String.format("Unable to initialize the metrics " +
+                    "subsystem. No %s present in the classpath.", METRICS_SSL_SERVER_CONFIGURATION_FILE));
         }
 
         try {
             configuration.addResource(amsResUrl.toURI().toURL());
+            configuration.addResource(sslConfUrl.toURI().toURL());
         } catch (Exception e) {
             LOG.error("Couldn't init configuration. ", e);
             System.exit(1);
@@ -112,17 +125,41 @@ public class AggregatorApplication
     }
 
     protected URI getURI() {
-        URI uri = UriBuilder.fromUri("http://" + getHostName() + "/").port(this.webApplicationPort).build();
+        URI uri = UriBuilder.fromUri("/").scheme(this.webServerProtocol).host(getHostName()).port(this.webApplicationPort).build();
         LOG.info(String.format("Web server at %s", uri));
         return uri;
     }
 
-    protected HttpServer createHttpServer() throws IOException {
+    protected HttpServer createHttpServer() throws Exception {
         ResourceConfig resourceConfig = new PackagesResourceConfig("org.apache.hadoop.metrics2.host.aggregator");
         HashMap<String, Object> params = new HashMap();
         params.put("com.sun.jersey.api.json.POJOMappingFeature", "true");
         resourceConfig.setPropertiesAndFeatures(params);
-        return HttpServerFactory.create(getURI(), resourceConfig);
+        HttpServer server = HttpServerFactory.create(getURI(), resourceConfig);
+
+        if (webServerProtocol.equalsIgnoreCase("https")) {
+            HttpsServer httpsServer = (HttpsServer) server;
+            SslContextFactory sslContextFactory = new SslContextFactory();
+            String keyStorePath = configuration.get("ssl.server.keystore.location");
+            String keyStorePassword = configuration.get("ssl.server.keystore.password");
+            String keyManagerPassword = configuration.get("ssl.server.keystore.keypassword");
+            String trustStorePath = configuration.get("ssl.server.truststore.location");
+            String trustStorePassword = configuration.get("ssl.server.truststore.password");
+
+            sslContextFactory.setKeyStorePath(keyStorePath);
+            sslContextFactory.setKeyStorePassword(keyStorePassword);
+            sslContextFactory.setKeyManagerPassword(keyManagerPassword);
+            sslContextFactory.setTrustStorePath(trustStorePath);
+            sslContextFactory.setTrustStorePassword(trustStorePassword);
+
+            sslContextFactory.start();
+            SSLContext sslContext = sslContextFactory.getSslContext();
+            sslContextFactory.stop();
+            HttpsConfigurator httpsConfigurator = new HttpsConfigurator(sslContext);
+            httpsServer.setHttpsConfigurator(httpsConfigurator);
+            server = httpsServer;
+        }
+        return server;
     }
 
     private void startWebServer() {
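
On the server side, the aggregator derives its scheme from
timeline.metrics.host.inmemory.aggregation.http.policy: HTTP_ONLY (the
default) keeps plain http, anything else is treated as https, with keystore
and truststore material read from ssl-server.xml. The mapping itself is just:

    def web_server_protocol(policy):
        # HTTP_ONLY -> http, e.g. HTTPS_ONLY -> https, matching the
        # case-insensitive ternary in the AggregatorApplication constructor.
        return "http" if policy.upper() == "HTTP_ONLY" else "https"

    assert web_server_protocol("HTTP_ONLY") == "http"
    assert web_server_protocol("HTTPS_ONLY") == "https"
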
diff --git a/ambari-metrics/ambari-metrics-host-aggregator/src/main/java/org/apache/hadoop/metrics2/sink/timeline/AbstractMetricPublisher.java b/ambari-metrics/ambari-metrics-host-aggregator/src/main/java/org/apache/hadoop/metrics2/sink/timeline/AbstractMetricPublisher.java
index 5af115f..7ce0815 100644
--- a/ambari-metrics/ambari-metrics-host-aggregator/src/main/java/org/apache/hadoop/metrics2/sink/timeline/AbstractMetricPublisher.java
+++ b/ambari-metrics/ambari-metrics-host-aggregator/src/main/java/org/apache/hadoop/metrics2/sink/timeline/AbstractMetricPublisher.java
@@ -30,9 +30,9 @@ import java.util.Map;
  */
 public abstract class AbstractMetricPublisher extends AbstractTimelineMetricsSink implements Runnable {
 
-    private static final String AMS_SITE_SSL_KEYSTORE_PATH_PROPERTY = "ssl.server.truststore.location";
-    private static final String AMS_SITE_SSL_KEYSTORE_TYPE_PROPERTY = "ssl.server.truststore.password";
-    private static final String AMS_SITE_SSL_KEYSTORE_PASSWORD_PROPERTY = "ssl.server.truststore.type";
+    private static final String AMS_SITE_SSL_TRUSTSTORE_PATH_PROPERTY = "ssl.server.truststore.location";
+    private static final String AMS_SITE_SSL_TRUSTSTORE_TYPE_PROPERTY = "ssl.server.truststore.type";
+    private static final String AMS_SITE_SSL_TRUSTSTORE_PASSWORD_PROPERTY = "ssl.server.truststore.password";
     private static final String AMS_SITE_HTTP_POLICY_PROPERTY = "timeline.metrics.service.http.policy";
     private static final String AMS_SITE_COLLECTOR_WEBAPP_ADDRESS_PROPERTY = "timeline.metrics.service.webapp.address";
     private static final String PUBLISHER_COLLECTOR_HOSTS_PROPERTY = "timeline.metrics.collector.hosts";
@@ -68,9 +68,9 @@ public abstract class AbstractMetricPublisher extends AbstractTimelineMetricsSin
             LOG.error("No Metric collector configured.");
         } else {
             if (collectorProtocol.contains("https")) {
-                String trustStorePath = configuration.get(AMS_SITE_SSL_KEYSTORE_PATH_PROPERTY).trim();
-                String trustStoreType = configuration.get(AMS_SITE_SSL_KEYSTORE_TYPE_PROPERTY).trim();
-                String trustStorePwd = configuration.get(AMS_SITE_SSL_KEYSTORE_PASSWORD_PROPERTY).trim();
+                String trustStorePath = configuration.get(AMS_SITE_SSL_TRUSTSTORE_PATH_PROPERTY).trim();
+                String trustStoreType = configuration.get(AMS_SITE_SSL_TRUSTSTORE_TYPE_PROPERTY).trim();
+                String trustStorePwd = configuration.get(AMS_SITE_SSL_TRUSTSTORE_PASSWORD_PROPERTY).trim();
                 loadTruststore(trustStorePath, trustStoreType, trustStorePwd);
             }
         }
diff --git a/ambari-metrics/ambari-metrics-host-aggregator/src/main/java/org/apache/hadoop/metrics2/sink/timeline/AggregatedMetricsPublisher.java b/ambari-metrics/ambari-metrics-host-aggregator/src/main/java/org/apache/hadoop/metrics2/sink/timeline/AggregatedMetricsPublisher.java
index c8dffab..fa0c8fb 100644
--- a/ambari-metrics/ambari-metrics-host-aggregator/src/main/java/org/apache/hadoop/metrics2/sink/timeline/AggregatedMetricsPublisher.java
+++ b/ambari-metrics/ambari-metrics-host-aggregator/src/main/java/org/apache/hadoop/metrics2/sink/timeline/AggregatedMetricsPublisher.java
@@ -100,4 +100,9 @@ public class AggregatedMetricsPublisher extends AbstractMetricPublisher {
     protected String getPostUrl() {
         return BASE_POST_URL + AGGREGATED_POST_PREFIX;
     }
+
+    @Override
+    protected String getHostInMemoryAggregationProtocol() {
+        return "http";
+    }
 }
diff --git a/ambari-metrics/ambari-metrics-host-aggregator/src/main/java/org/apache/hadoop/metrics2/sink/timeline/RawMetricsPublisher.java b/ambari-metrics/ambari-metrics-host-aggregator/src/main/java/org/apache/hadoop/metrics2/sink/timeline/RawMetricsPublisher.java
index 89addb7..2469449 100644
--- a/ambari-metrics/ambari-metrics-host-aggregator/src/main/java/org/apache/hadoop/metrics2/sink/timeline/RawMetricsPublisher.java
+++ b/ambari-metrics/ambari-metrics-host-aggregator/src/main/java/org/apache/hadoop/metrics2/sink/timeline/RawMetricsPublisher.java
@@ -62,4 +62,9 @@ public class RawMetricsPublisher extends AbstractMetricPublisher {
     protected String getPostUrl() {
         return BASE_POST_URL;
     }
+
+    @Override
+    protected String getHostInMemoryAggregationProtocol() {
+        return "http";
+    }
 }
diff --git a/ambari-metrics/ambari-metrics-host-monitoring/src/main/python/core/aggregator.py b/ambari-metrics/ambari-metrics-host-monitoring/src/main/python/core/aggregator.py
index ba05e9b..59cdd27 100644
--- a/ambari-metrics/ambari-metrics-host-monitoring/src/main/python/core/aggregator.py
+++ b/ambari-metrics/ambari-metrics-host-monitoring/src/main/python/core/aggregator.py
@@ -59,7 +59,7 @@ class Aggregator(threading.Thread):
   def stop(self):
     self.stopped = True
     if self._aggregator_process :
-      logger.info('Stopping Aggregator thread.')
+      logger.info('Shutting down Aggregator thread.')
       self._aggregator_process.terminate()
       self._aggregator_process = None
 
@@ -71,7 +71,7 @@ class AggregatorWatchdog(threading.Thread):
     threading.Thread.__init__(self)
     self._config = config
     self._stop_handler = stop_handler
-    self.URL = 'http://localhost:' + self._config.get_inmemory_aggregation_port() + self.AMS_AGGREGATOR_METRICS_CHECK_URL
+    self.URL = self._config.get_inmemory_aggregation_protocol() + '://localhost:' + self._config.get_inmemory_aggregation_port() + self.AMS_AGGREGATOR_METRICS_CHECK_URL
     self._is_ok = threading.Event()
     self.set_is_ok(True)
     self.stopped = False
@@ -106,7 +106,7 @@ class AggregatorWatchdog(threading.Thread):
 
 
   def stop(self):
-    logger.info('Stopping watcher thread.')
+    logger.info('Shutting down watcher thread.')
     self.stopped = True
 
 
diff --git a/ambari-metrics/ambari-metrics-host-monitoring/src/main/python/core/config_reader.py b/ambari-metrics/ambari-metrics-host-monitoring/src/main/python/core/config_reader.py
index 017ad24..7cc9fb8 100644
--- a/ambari-metrics/ambari-metrics-host-monitoring/src/main/python/core/config_reader.py
+++ b/ambari-metrics/ambari-metrics-host-monitoring/src/main/python/core/config_reader.py
@@ -258,17 +258,20 @@ class Configuration:
   def get_max_queue_size(self):
     return int(self.get("collector", "max_queue_size", 5000))
 
-  def is_server_https_enabled(self):
+  def is_collector_https_enabled(self):
     return "true" == str(self.get("collector", "https_enabled")).lower()
 
   def get_java_home(self):
     return self.get("aggregation", "java_home")
 
   def is_inmemory_aggregation_enabled(self):
-    return "true" == str(self.get("aggregation", "host_in_memory_aggregation")).lower()
+    return "true" == str(self.get("aggregation", "host_in_memory_aggregation", "false")).lower()
 
   def get_inmemory_aggregation_port(self):
-    return self.get("aggregation", "host_in_memory_aggregation_port")
+    return self.get("aggregation", "host_in_memory_aggregation_port", "61888")
+
+  def get_inmemory_aggregation_protocol(self):
+    return self.get("aggregation", "host_in_memory_aggregation_protocol", "http")
 
   def get_aggregator_jvm_agrs(self):
     hosts = self.get("aggregation", "jvm_arguments", "-Xmx256m -Xms128m -XX:PermSize=68m")
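
The new defaults make the monitor tolerant of configs written before this
change: a metric_monitor.ini without the aggregation keys now falls back to
disabled aggregation on port 61888 over http instead of raising. A sketch of
the lookup pattern, assuming a ConfigParser-style backing store (the real
Configuration.get wrapper is not shown in this hunk):

    import ConfigParser  # Python 2, like the monitor scripts

    def get(parser, section, key, default=None):
        try:
            return parser.get(section, key)
        except (ConfigParser.NoSectionError, ConfigParser.NoOptionError):
            return default

    # e.g. get(parser, "aggregation", "host_in_memory_aggregation_protocol", "http")
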
diff --git a/ambari-metrics/ambari-metrics-host-monitoring/src/main/python/core/emitter.py b/ambari-metrics/ambari-metrics-host-monitoring/src/main/python/core/emitter.py
index f19434d..df79d69 100644
--- a/ambari-metrics/ambari-metrics-host-monitoring/src/main/python/core/emitter.py
+++ b/ambari-metrics/ambari-metrics-host-monitoring/src/main/python/core/emitter.py
@@ -54,17 +54,19 @@ class Emitter(threading.Thread):
     self.application_metric_map = application_metric_map
     self.collector_port = config.get_server_port()
     self.all_metrics_collector_hosts = config.get_metrics_collector_hosts_as_list()
-    self.is_server_https_enabled = config.is_server_https_enabled()
+    self.is_collector_https_enabled = config.is_collector_https_enabled()
+    self.collector_protocol = "https" if self.is_collector_https_enabled else "http"
     self.set_instanceid = config.is_set_instanceid()
     self.instanceid = config.get_instanceid()
     self.is_inmemory_aggregation_enabled = config.is_inmemory_aggregation_enabled()
 
     if self.is_inmemory_aggregation_enabled:
-      self.collector_port = config.get_inmemory_aggregation_port()
-      self.all_metrics_collector_hosts = ['localhost']
-      self.is_server_https_enabled = False
+      self.inmemory_aggregation_port = config.get_inmemory_aggregation_port()
+      self.inmemory_aggregation_protocol = config.get_inmemory_aggregation_protocol()
+      if self.inmemory_aggregation_protocol == "https":
+        self.ca_certs = config.get_ca_certs()
 
-    if self.is_server_https_enabled:
+    if self.is_collector_https_enabled:
       self.ca_certs = config.get_ca_certs()
 
     # TimedRoundRobinSet
@@ -101,22 +103,26 @@ class Emitter(threading.Thread):
 
   def push_metrics(self, data):
     success = False
-    while self.active_collector_hosts.get_actual_size() > 0:
+    if self.is_inmemory_aggregation_enabled:
+      success = self.try_with_collector(self.inmemory_aggregation_protocol, "localhost", self.inmemory_aggregation_port, data)
+      if not success:
+        logger.warning("Failed to submit metrics to local aggregator. Trying to post them to collector...")
+    while not success and self.active_collector_hosts.get_actual_size() > 0:
       collector_host = self.get_collector_host_shard()
-      success = self.try_with_collector_host(collector_host, data)
-      if success:
-        break
+      success = self.try_with_collector(self.collector_protocol, collector_host, self.collector_port, data)
     pass
 
     if not success:
       logger.info('No valid collectors found...')
       for collector_host in self.active_collector_hosts:
-        success = self.try_with_collector_host(collector_host, data)
+        success = self.try_with_collector(self.collector_protocol, collector_host, self.collector_port, data)
+        if success:
+          break
       pass
 
-  def try_with_collector_host(self, collector_host, data):
+  def try_with_collector(self, collector_protocol, collector_host, collector_port, data):
     headers = {"Content-Type" : "application/json", "Accept" : "*/*"}
-    connection = self.get_connection(collector_host)
+    connection = self.get_connection(collector_protocol, collector_host, collector_port)
     logger.debug("message to send: %s" % data)
 
     try:
@@ -169,16 +175,16 @@ class Emitter(threading.Thread):
       logger.warn("Metric collector host {0} was blacklisted.".format(collector_host))
       return False
 
-  def get_connection(self, collector_host):
+  def get_connection(self, protocol, host, port):
     timeout = int(self.send_interval - 10)
-    if self.is_server_https_enabled:
-      connection = CachedHTTPSConnection(collector_host,
-                                         self.collector_port,
+    if protocol == "https":
+      connection = CachedHTTPSConnection(host,
+                                         port,
                                          timeout=timeout,
                                          ca_certs=self.ca_certs)
     else:
-      connection = CachedHTTPConnection(collector_host,
-                                        self.collector_port,
+      connection = CachedHTTPConnection(host,
+                                        port,
                                         timeout=timeout)
     return connection
 
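
After this rework, delivery order in push_metrics is: local aggregator first
(when enabled), then the sharded active collectors, then one last sweep over
every known collector. In outline (simplified; the real loop relies on failed
hosts being blacklisted out of the active set):

    def push_metrics(emitter, data):
        if emitter.is_inmemory_aggregation_enabled:
            if emitter.try_with_collector(emitter.inmemory_aggregation_protocol,
                                          "localhost",
                                          emitter.inmemory_aggregation_port, data):
                return True
        while emitter.active_collector_hosts.get_actual_size() > 0:
            host = emitter.get_collector_host_shard()
            if emitter.try_with_collector(emitter.collector_protocol, host,
                                          emitter.collector_port, data):
                return True
        return any(emitter.try_with_collector(emitter.collector_protocol, host,
                                              emitter.collector_port, data)
                   for host in emitter.active_collector_hosts)
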
diff --git a/ambari-metrics/ambari-metrics-host-monitoring/src/main/python/core/host_info.py b/ambari-metrics/ambari-metrics-host-monitoring/src/main/python/core/host_info.py
index 035c833..6198c53 100644
--- a/ambari-metrics/ambari-metrics-host-monitoring/src/main/python/core/host_info.py
+++ b/ambari-metrics/ambari-metrics-host-monitoring/src/main/python/core/host_info.py
@@ -265,7 +265,6 @@ class HostInfo():
 
     skip_disk_patterns = self.__config.get_disk_metrics_skip_pattern()
     logger.debug('skip_disk_patterns: %s' % skip_disk_patterns)
-    print skip_disk_patterns
     if not skip_disk_patterns or skip_disk_patterns == 'None':
       io_counters = psutil.disk_io_counters()
       print io_counters
diff --git a/ambari-metrics/ambari-metrics-host-monitoring/src/main/python/core/stop_handler.py b/ambari-metrics/ambari-metrics-host-monitoring/src/main/python/core/stop_handler.py
index 7a9fbec..330e018 100644
--- a/ambari-metrics/ambari-metrics-host-monitoring/src/main/python/core/stop_handler.py
+++ b/ambari-metrics/ambari-metrics-host-monitoring/src/main/python/core/stop_handler.py
@@ -78,7 +78,7 @@ class StopHandlerWindows(StopHandler):
       raise FatalException(-1, "Error waiting for stop event: " + str(result))
     if (win32event.WAIT_TIMEOUT == result):
       return -1
-      logger.info("Stop event received")
+      logger.debug("Stop event received")
     return result # 0 -> stop
 
 
@@ -119,7 +119,7 @@ class StopHandlerLinux(StopHandler):
     # Stop process when stop event received
     self.stop_event.wait(timeout)
     if self.stop_event.isSet():
-      logger.info("Stop event received")
+      logger.debug("Stop event received")
       return 0
     # Timeout
     return -1
diff --git a/ambari-metrics/ambari-metrics-kafka-sink/src/main/java/org/apache/hadoop/metrics2/sink/kafka/KafkaTimelineMetricsReporter.java b/ambari-metrics/ambari-metrics-kafka-sink/src/main/java/org/apache/hadoop/metrics2/sink/kafka/KafkaTimelineMetricsReporter.java
index e126016..f07d508 100644
--- a/ambari-metrics/ambari-metrics-kafka-sink/src/main/java/org/apache/hadoop/metrics2/sink/kafka/KafkaTimelineMetricsReporter.java
+++ b/ambari-metrics/ambari-metrics-kafka-sink/src/main/java/org/apache/hadoop/metrics2/sink/kafka/KafkaTimelineMetricsReporter.java
@@ -74,6 +74,7 @@ public class KafkaTimelineMetricsReporter extends AbstractTimelineMetricsSink
   private static final String TIMELINE_METRICS_KAFKA_SET_INSTANCE_ID_PROPERTY = TIMELINE_METRICS_KAFKA_PREFIX + SET_INSTANCE_ID_PROPERTY;
   private static final String TIMELINE_METRICS_KAFKA_HOST_IN_MEMORY_AGGREGATION_ENABLED_PROPERTY = TIMELINE_METRICS_KAFKA_PREFIX + HOST_IN_MEMORY_AGGREGATION_ENABLED_PROPERTY;
   private static final String TIMELINE_METRICS_KAFKA_HOST_IN_MEMORY_AGGREGATION_PORT_PROPERTY = TIMELINE_METRICS_KAFKA_PREFIX + HOST_IN_MEMORY_AGGREGATION_PORT_PROPERTY;
+  private static final String TIMELINE_METRICS_KAFKA_HOST_IN_MEMORY_AGGREGATION_PROTOCOL_PROPERTY = TIMELINE_METRICS_KAFKA_PREFIX + HOST_IN_MEMORY_AGGREGATION_PROTOCOL_PROPERTY;
   private static final String TIMELINE_DEFAULT_HOST = "localhost";
   private static final String TIMELINE_DEFAULT_PORT = "6188";
   private static final String TIMELINE_DEFAULT_PROTOCOL = "http";
@@ -100,6 +101,7 @@ public class KafkaTimelineMetricsReporter extends AbstractTimelineMetricsSink
   private Set<String> excludedMetrics = new HashSet<>();
   private boolean hostInMemoryAggregationEnabled;
   private int hostInMemoryAggregationPort;
+  private String hostInMemoryAggregationProtocol;
 
   @Override
   protected String getCollectorUri(String host) {
@@ -147,6 +149,11 @@ public class KafkaTimelineMetricsReporter extends AbstractTimelineMetricsSink
     return hostInMemoryAggregationPort;
   }
 
+  @Override
+  protected String getHostInMemoryAggregationProtocol() {
+    return hostInMemoryAggregationProtocol;
+  }
+
   public void setMetricsCache(TimelineMetricsCache metricsCache) {
     this.metricsCache = metricsCache;
   }
@@ -186,9 +193,10 @@ public class KafkaTimelineMetricsReporter extends AbstractTimelineMetricsSink
 
         hostInMemoryAggregationEnabled = props.getBoolean(TIMELINE_METRICS_KAFKA_HOST_IN_MEMORY_AGGREGATION_ENABLED_PROPERTY, false);
         hostInMemoryAggregationPort = props.getInt(TIMELINE_METRICS_KAFKA_HOST_IN_MEMORY_AGGREGATION_PORT_PROPERTY, 61888);
+        hostInMemoryAggregationProtocol = props.getString(TIMELINE_METRICS_KAFKA_HOST_IN_MEMORY_AGGREGATION_PROTOCOL_PROPERTY, "http");
         setMetricsCache(new TimelineMetricsCache(maxRowCacheSize, metricsSendInterval));
 
-        if (metricCollectorProtocol.contains("https")) {
+        if (metricCollectorProtocol.contains("https") || hostInMemoryAggregationProtocol.contains("https")) {
           String trustStorePath = props.getString(TIMELINE_METRICS_SSL_KEYSTORE_PATH_PROPERTY).trim();
           String trustStoreType = props.getString(TIMELINE_METRICS_SSL_KEYSTORE_TYPE_PROPERTY).trim();
           String trustStorePwd = props.getString(TIMELINE_METRICS_SSL_KEYSTORE_PASSWORD_PROPERTY).trim();
diff --git a/ambari-metrics/ambari-metrics-storm-sink-legacy/src/main/java/org/apache/hadoop/metrics2/sink/storm/StormTimelineMetricsReporter.java b/ambari-metrics/ambari-metrics-storm-sink-legacy/src/main/java/org/apache/hadoop/metrics2/sink/storm/StormTimelineMetricsReporter.java
index d408e1a..842fad8 100644
--- a/ambari-metrics/ambari-metrics-storm-sink-legacy/src/main/java/org/apache/hadoop/metrics2/sink/storm/StormTimelineMetricsReporter.java
+++ b/ambari-metrics/ambari-metrics-storm-sink-legacy/src/main/java/org/apache/hadoop/metrics2/sink/storm/StormTimelineMetricsReporter.java
@@ -57,6 +57,7 @@ public class StormTimelineMetricsReporter extends AbstractTimelineMetricsSink
   private int timeoutSeconds;
   private boolean hostInMemoryAggregationEnabled;
   private int hostInMemoryAggregationPort;
+  private String hostInMemoryAggregationProtocol;
 
   public StormTimelineMetricsReporter() {
 
@@ -108,6 +109,11 @@ public class StormTimelineMetricsReporter extends AbstractTimelineMetricsSink
   }
 
   @Override
+  protected String getHostInMemoryAggregationProtocol() {
+    return hostInMemoryAggregationProtocol;
+  }
+
+  @Override
   public void prepare(Map conf) {
     LOG.info("Preparing Storm Metrics Reporter");
     try {
@@ -144,11 +150,15 @@ public class StormTimelineMetricsReporter extends AbstractTimelineMetricsSink
         setInstanceId = Boolean.getBoolean(cf.get(SET_INSTANCE_ID_PROPERTY).toString());
         instanceId = cf.get(INSTANCE_ID_PROPERTY).toString();
       }
-      hostInMemoryAggregationEnabled = Boolean.valueOf(cf.get(HOST_IN_MEMORY_AGGREGATION_ENABLED_PROPERTY).toString());
-      hostInMemoryAggregationPort = Integer.valueOf(cf.get(HOST_IN_MEMORY_AGGREGATION_PORT_PROPERTY).toString());
+      hostInMemoryAggregationEnabled = Boolean.valueOf(cf.get(HOST_IN_MEMORY_AGGREGATION_ENABLED_PROPERTY) != null ?
+        cf.get(HOST_IN_MEMORY_AGGREGATION_ENABLED_PROPERTY).toString() : "false");
+      hostInMemoryAggregationPort = Integer.valueOf(cf.get(HOST_IN_MEMORY_AGGREGATION_PORT_PROPERTY) != null ?
+        cf.get(HOST_IN_MEMORY_AGGREGATION_PORT_PROPERTY).toString() : "61888");
+      hostInMemoryAggregationProtocol = cf.get(HOST_IN_MEMORY_AGGREGATION_PROTOCOL_PROPERTY) != null ?
+        cf.get(HOST_IN_MEMORY_AGGREGATION_PROTOCOL_PROPERTY).toString() : "http";
 
       collectorUri = constructTimelineMetricUri(protocol, findPreferredCollectHost(), port);
-      if (protocol.contains("https")) {
+      if (protocol.contains("https") || hostInMemoryAggregationProtocol.contains("https")) {
         String trustStorePath = cf.get(SSL_KEYSTORE_PATH_PROPERTY).toString().trim();
         String trustStoreType = cf.get(SSL_KEYSTORE_TYPE_PROPERTY).toString().trim();
         String trustStorePwd = cf.get(SSL_KEYSTORE_PASSWORD_PROPERTY).toString().trim();
diff --git a/ambari-metrics/ambari-metrics-storm-sink-legacy/src/main/java/org/apache/hadoop/metrics2/sink/storm/StormTimelineMetricsSink.java b/ambari-metrics/ambari-metrics-storm-sink-legacy/src/main/java/org/apache/hadoop/metrics2/sink/storm/StormTimelineMetricsSink.java
index ff72f24..e3494fd 100644
--- a/ambari-metrics/ambari-metrics-storm-sink-legacy/src/main/java/org/apache/hadoop/metrics2/sink/storm/StormTimelineMetricsSink.java
+++ b/ambari-metrics/ambari-metrics-storm-sink-legacy/src/main/java/org/apache/hadoop/metrics2/sink/storm/StormTimelineMetricsSink.java
@@ -63,6 +63,7 @@ public class StormTimelineMetricsSink extends AbstractTimelineMetricsSink implem
   private String instanceId;
   private boolean hostInMemoryAggregationEnabled;
   private int hostInMemoryAggregationPort;
+  private String hostInMemoryAggregationProtocol;
 
   @Override
   protected String getCollectorUri(String host) {
@@ -110,6 +111,11 @@ public class StormTimelineMetricsSink extends AbstractTimelineMetricsSink implem
   }
 
   @Override
+  protected String getHostInMemoryAggregationProtocol() {
+    return hostInMemoryAggregationProtocol;
+  }
+
+  @Override
   public void prepare(Map map, Object o, TopologyContext topologyContext, IErrorReporter iErrorReporter) {
     LOG.info("Preparing Storm Metrics Sink");
     try {
@@ -138,12 +144,13 @@ public class StormTimelineMetricsSink extends AbstractTimelineMetricsSink implem
 
     instanceId = configuration.getProperty(INSTANCE_ID_PROPERTY, null);
     setInstanceId = Boolean.valueOf(configuration.getProperty(SET_INSTANCE_ID_PROPERTY, "false"));
-    hostInMemoryAggregationEnabled = Boolean.valueOf(configuration.getProperty(HOST_IN_MEMORY_AGGREGATION_ENABLED_PROPERTY));
-    hostInMemoryAggregationPort = Integer.valueOf(configuration.getProperty(HOST_IN_MEMORY_AGGREGATION_PORT_PROPERTY));
+    hostInMemoryAggregationEnabled = Boolean.valueOf(configuration.getProperty(HOST_IN_MEMORY_AGGREGATION_ENABLED_PROPERTY, "false"));
+    hostInMemoryAggregationPort = Integer.valueOf(configuration.getProperty(HOST_IN_MEMORY_AGGREGATION_PORT_PROPERTY, "61888"));
+    hostInMemoryAggregationProtocol = configuration.getProperty(HOST_IN_MEMORY_AGGREGATION_PROTOCOL_PROPERTY, "http");
     // Initialize the collector write strategy
     super.init();
 
-    if (protocol.contains("https")) {
+    if (protocol.contains("https") || hostInMemoryAggregationProtocol.contains("https")) {
       String trustStorePath = configuration.getProperty(SSL_KEYSTORE_PATH_PROPERTY).trim();
       String trustStoreType = configuration.getProperty(SSL_KEYSTORE_TYPE_PROPERTY).trim();
       String trustStorePwd = configuration.getProperty(SSL_KEYSTORE_PASSWORD_PROPERTY).trim();
diff --git a/ambari-metrics/ambari-metrics-storm-sink/src/main/java/org/apache/hadoop/metrics2/sink/storm/StormTimelineMetricsReporter.java b/ambari-metrics/ambari-metrics-storm-sink/src/main/java/org/apache/hadoop/metrics2/sink/storm/StormTimelineMetricsReporter.java
index 5b75065..4fcf2fb 100644
--- a/ambari-metrics/ambari-metrics-storm-sink/src/main/java/org/apache/hadoop/metrics2/sink/storm/StormTimelineMetricsReporter.java
+++ b/ambari-metrics/ambari-metrics-storm-sink/src/main/java/org/apache/hadoop/metrics2/sink/storm/StormTimelineMetricsReporter.java
@@ -52,6 +52,7 @@ public class StormTimelineMetricsReporter extends AbstractTimelineMetricsSink
   private int timeoutSeconds;
   private boolean hostInMemoryAggregationEnabled;
   private int hostInMemoryAggregationPort;
+  private String hostInMemoryAggregationProtocol;
 
   public StormTimelineMetricsReporter() {
 
@@ -103,6 +104,11 @@ public class StormTimelineMetricsReporter extends AbstractTimelineMetricsSink
   }
 
   @Override
+  protected String getHostInMemoryAggregationProtocol() {
+    return hostInMemoryAggregationProtocol;
+  }
+
+  @Override
   public void prepare(Object registrationArgument) {
     LOG.info("Preparing Storm Metrics Reporter");
     try {
@@ -132,10 +138,11 @@ public class StormTimelineMetricsReporter extends AbstractTimelineMetricsSink
       setInstanceId = Boolean.valueOf(configuration.getProperty(SET_INSTANCE_ID_PROPERTY));
       instanceId = configuration.getProperty(INSTANCE_ID_PROPERTY);
 
-      hostInMemoryAggregationEnabled = Boolean.valueOf(configuration.getProperty(HOST_IN_MEMORY_AGGREGATION_ENABLED_PROPERTY));
-      hostInMemoryAggregationPort = Integer.valueOf(configuration.getProperty(HOST_IN_MEMORY_AGGREGATION_PORT_PROPERTY));
+      hostInMemoryAggregationEnabled = Boolean.valueOf(configuration.getProperty(HOST_IN_MEMORY_AGGREGATION_ENABLED_PROPERTY, "false"));
+      hostInMemoryAggregationPort = Integer.valueOf(configuration.getProperty(HOST_IN_MEMORY_AGGREGATION_PORT_PROPERTY, "61888"));
+      hostInMemoryAggregationProtocol = configuration.getProperty(HOST_IN_MEMORY_AGGREGATION_PROTOCOL_PROPERTY, "http");
 
-      if (protocol.contains("https")) {
+      if (protocol.contains("https") || hostInMemoryAggregationProtocol.contains("https")) {
         String trustStorePath = configuration.getProperty(SSL_KEYSTORE_PATH_PROPERTY).trim();
         String trustStoreType = configuration.getProperty(SSL_KEYSTORE_TYPE_PROPERTY).trim();
         String trustStorePwd = configuration.getProperty(SSL_KEYSTORE_PASSWORD_PROPERTY).trim();
diff --git a/ambari-metrics/ambari-metrics-storm-sink/src/main/java/org/apache/hadoop/metrics2/sink/storm/StormTimelineMetricsSink.java b/ambari-metrics/ambari-metrics-storm-sink/src/main/java/org/apache/hadoop/metrics2/sink/storm/StormTimelineMetricsSink.java
index 4d5a229..dc92f80 100644
--- a/ambari-metrics/ambari-metrics-storm-sink/src/main/java/org/apache/hadoop/metrics2/sink/storm/StormTimelineMetricsSink.java
+++ b/ambari-metrics/ambari-metrics-storm-sink/src/main/java/org/apache/hadoop/metrics2/sink/storm/StormTimelineMetricsSink.java
@@ -72,6 +72,7 @@ public class StormTimelineMetricsSink extends AbstractTimelineMetricsSink implem
   private boolean setInstanceId;
   private boolean hostInMemoryAggregationEnabled;
   private int hostInMemoryAggregationPort;
+  private String hostInMemoryAggregationProtocol;
 
   @Override
   protected String getCollectorUri(String host) {
@@ -119,6 +120,11 @@ public class StormTimelineMetricsSink extends AbstractTimelineMetricsSink implem
   }
 
   @Override
+  protected String getHostInMemoryAggregationProtocol() {
+    return hostInMemoryAggregationProtocol;
+  }
+
+  @Override
   public void prepare(Map map, Object o, TopologyContext topologyContext, IErrorReporter iErrorReporter) {
     LOG.info("Preparing Storm Metrics Sink");
     try {
@@ -150,13 +156,14 @@ public class StormTimelineMetricsSink extends AbstractTimelineMetricsSink implem
     instanceId = configuration.getProperty(INSTANCE_ID_PROPERTY, null);
     setInstanceId = Boolean.valueOf(configuration.getProperty(SET_INSTANCE_ID_PROPERTY, "false"));
 
-    hostInMemoryAggregationEnabled = Boolean.valueOf(configuration.getProperty(HOST_IN_MEMORY_AGGREGATION_ENABLED_PROPERTY));
-    hostInMemoryAggregationPort = Integer.valueOf(configuration.getProperty(HOST_IN_MEMORY_AGGREGATION_PORT_PROPERTY));
+    hostInMemoryAggregationEnabled = Boolean.valueOf(configuration.getProperty(HOST_IN_MEMORY_AGGREGATION_ENABLED_PROPERTY, "false"));
+    hostInMemoryAggregationPort = Integer.valueOf(configuration.getProperty(HOST_IN_MEMORY_AGGREGATION_PORT_PROPERTY, "61888"));
+    hostInMemoryAggregationProtocol = configuration.getProperty(HOST_IN_MEMORY_AGGREGATION_PROTOCOL_PROPERTY, "http");
 
     // Initialize the collector write strategy
     super.init();
 
-    if (protocol.contains("https")) {
+    if (protocol.contains("https") || hostInMemoryAggregationProtocol.contains("https")) {
       String trustStorePath = configuration.getProperty(SSL_KEYSTORE_PATH_PROPERTY).trim();
       String trustStoreType = configuration.getProperty(SSL_KEYSTORE_TYPE_PROPERTY).trim();
       String trustStorePwd = configuration.getProperty(SSL_KEYSTORE_PASSWORD_PROPERTY).trim();
diff --git a/ambari-server/src/main/java/org/apache/ambari/server/metrics/system/impl/AmbariMetricSinkImpl.java b/ambari-server/src/main/java/org/apache/ambari/server/metrics/system/impl/AmbariMetricSinkImpl.java
index 6cd7059..a0346f6 100644
--- a/ambari-server/src/main/java/org/apache/ambari/server/metrics/system/impl/AmbariMetricSinkImpl.java
+++ b/ambari-server/src/main/java/org/apache/ambari/server/metrics/system/impl/AmbariMetricSinkImpl.java
@@ -310,6 +310,11 @@ public class AmbariMetricSinkImpl extends AbstractTimelineMetricsSink implements
     return 0;
   }
 
+  @Override
+  protected String getHostInMemoryAggregationProtocol() {
+    return "http";
+  }
+
   private List<TimelineMetric> getFilteredMetricList(List<SingleMetric> metrics) {
     final List<TimelineMetric> metricList = new ArrayList<>();
     for (SingleMetric metric : metrics) {
diff --git a/ambari-server/src/main/resources/common-services/ACCUMULO/1.6.1.2.2.0/package/scripts/params.py b/ambari-server/src/main/resources/common-services/ACCUMULO/1.6.1.2.2.0/package/scripts/params.py
index 38396f3..256be1f 100644
--- a/ambari-server/src/main/resources/common-services/ACCUMULO/1.6.1.2.2.0/package/scripts/params.py
+++ b/ambari-server/src/main/resources/common-services/ACCUMULO/1.6.1.2.2.0/package/scripts/params.py
@@ -155,6 +155,12 @@ metrics_report_interval = default("/configurations/ams-site/timeline.metrics.sin
 metrics_collection_period = default("/configurations/ams-site/timeline.metrics.sink.collection.period", 10)
 host_in_memory_aggregation = default("/configurations/ams-site/timeline.metrics.host.inmemory.aggregation", True)
 host_in_memory_aggregation_port = default("/configurations/ams-site/timeline.metrics.host.inmemory.aggregation.port", 61888)
+is_aggregation_https_enabled = False
+if default("/configurations/ams-site/timeline.metrics.host.inmemory.aggregation.http.policy", "HTTP_ONLY") == "HTTPS_ONLY":
+  host_in_memory_aggregation_protocol = 'https'
+  is_aggregation_https_enabled = True
+else:
+  host_in_memory_aggregation_protocol = 'http'
 
 # if accumulo is selected accumulo_tserver_hosts should not be empty, but still default just in case
 if 'slave_hosts' in config['clusterHostInfo']:
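
The HTTP_ONLY/HTTPS_ONLY-to-protocol mapping above is repeated verbatim in each
service's params.py touched by this patch. A minimal sketch of a shared helper,
assuming Ambari's resource_management default() accessor; the helper itself is
hypothetical and not part of this change:

    # Hypothetical helper, not part of this patch: one place for the
    # policy-to-protocol mapping repeated across the params.py files below.
    from resource_management.libraries.functions.default import default

    def aggregation_protocol():
      """Return (protocol, https_enabled) for the ams-site
      timeline.metrics.host.inmemory.aggregation.http.policy value."""
      policy = default("/configurations/ams-site/"
                       "timeline.metrics.host.inmemory.aggregation.http.policy",
                       "HTTP_ONLY")
      if policy == "HTTPS_ONLY":
        return 'https', True
      return 'http', False

    host_in_memory_aggregation_protocol, is_aggregation_https_enabled = \
        aggregation_protocol()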
diff --git a/ambari-server/src/main/resources/common-services/ACCUMULO/1.6.1.2.2.0/package/templates/hadoop-metrics2-accumulo.properties.j2 b/ambari-server/src/main/resources/common-services/ACCUMULO/1.6.1.2.2.0/package/templates/hadoop-metrics2-accumulo.properties.j2
index e59ba11..282f904 100644
--- a/ambari-server/src/main/resources/common-services/ACCUMULO/1.6.1.2.2.0/package/templates/hadoop-metrics2-accumulo.properties.j2
+++ b/ambari-server/src/main/resources/common-services/ACCUMULO/1.6.1.2.2.0/package/templates/hadoop-metrics2-accumulo.properties.j2
@@ -18,6 +18,9 @@
 
 *.host_in_memory_aggregation = {{host_in_memory_aggregation}}
 *.host_in_memory_aggregation_port = {{host_in_memory_aggregation_port}}
+{% if is_aggregation_https_enabled %}
+*.host_in_memory_aggregation_protocol = {{host_in_memory_aggregation_protocol}}
+{% endif %}
 
 {% if has_metric_collector %}
 
diff --git a/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/configuration/ams-site.xml b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/configuration/ams-site.xml
index 7dfb213..49dfd95 100644
--- a/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/configuration/ams-site.xml
+++ b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/configuration/ams-site.xml
@@ -850,6 +850,10 @@
     </value-attributes>
   </property>
   <property>
+    <name>timeline.metrics.host.inmemory.aggregation.http.policy</name>
+    <value>HTTP_ONLY</value>
+  </property>
+  <property>
     <name>timeline.metrics.downsampler.event.metric.patterns</name>
     <value></value>
     <description>
diff --git a/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/ams.py b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/ams.py
index c2b673c..9b15fae 100644
--- a/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/ams.py
+++ b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/ams.py
@@ -163,19 +163,29 @@ def ams(name=None):
               create_parents = True
     )
 
-    if params.host_in_memory_aggregation and params.log4j_props is not None:
-      File(os.path.join(params.ams_monitor_conf_dir, "log4j.properties"),
-           owner=params.ams_user,
-           content=params.log4j_props
-           )
+    if params.host_in_memory_aggregation:
+      if params.log4j_props is not None:
+        File(os.path.join(params.ams_monitor_conf_dir, "log4j.properties"),
+             owner=params.ams_user,
+             content=params.log4j_props
+             )
+        pass
 
-    XmlConfig("ams-site.xml",
+      XmlConfig("ams-site.xml",
               conf_dir=params.ams_monitor_conf_dir,
               configurations=params.config['configurations']['ams-site'],
               configuration_attributes=params.config['configurationAttributes']['ams-site'],
               owner=params.ams_user,
               group=params.user_group
               )
+      XmlConfig("ssl-server.xml",
+              conf_dir=params.ams_monitor_conf_dir,
+              configurations=params.config['configurations']['ams-ssl-server'],
+              configuration_attributes=params.config['configurationAttributes']['ams-ssl-server'],
+              owner=params.ams_user,
+              group=params.user_group
+              )
+      pass
 
     TemplateConfig(
       os.path.join(params.ams_monitor_conf_dir, "metric_monitor.ini"),
@@ -393,13 +411,21 @@ def ams(name=None, action=None):
            content=InlineTemplate(params.log4j_props)
            )
 
-    XmlConfig("ams-site.xml",
+      XmlConfig("ams-site.xml",
               conf_dir=params.ams_monitor_conf_dir,
               configurations=params.config['configurations']['ams-site'],
               configuration_attributes=params.config['configurationAttributes']['ams-site'],
               owner=params.ams_user,
               group=params.user_group
               )
+      XmlConfig("ssl-server.xml",
+              conf_dir=params.ams_monitor_conf_dir,
+              configurations=params.config['configurations']['ams-ssl-server'],
+              configuration_attributes=params.config['configurationAttributes']['ams-ssl-server'],
+              owner=params.ams_user,
+              group=params.user_group
+              )
+      pass
 
     Execute(format("{sudo} chown -R {ams_user}:{user_group} {ams_monitor_log_dir}")
             )
@@ -440,7 +466,7 @@ def ams(name=None, action=None):
          content=InlineTemplate(params.ams_env_sh_template)
     )
 
-    if params.metric_collector_https_enabled:
+    if params.metric_collector_https_enabled or params.is_aggregation_https_enabled:
       export_ca_certs(params.ams_monitor_conf_dir)
 
     pass
diff --git a/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/params.py b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/params.py
index cf1cce4..6389696 100644
--- a/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/params.py
+++ b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/params.py
@@ -234,8 +234,14 @@ metrics_collector_heapsize = check_append_heap_property(str(metrics_collector_he
 master_heapsize = check_append_heap_property(str(master_heapsize), "m")
 regionserver_heapsize = check_append_heap_property(str(regionserver_heapsize), "m")
 
-host_in_memory_aggregation = default("/configurations/ams-site/timeline.metrics.host.inmemory.aggregation", True)
+host_in_memory_aggregation = default("/configurations/ams-site/timeline.metrics.host.inmemory.aggregation", False)
 host_in_memory_aggregation_port = default("/configurations/ams-site/timeline.metrics.host.inmemory.aggregation.port", 61888)
+is_aggregation_https_enabled = False
+if default("/configurations/ams-site/timeline.metrics.host.inmemory.aggregation.http.policy", "HTTP_ONLY") == "HTTPS_ONLY":
+  host_in_memory_aggregation_protocol = 'https'
+  is_aggregation_https_enabled = True
+else:
+  host_in_memory_aggregation_protocol = 'http'
 host_in_memory_aggregation_jvm_arguments = default("/configurations/ams-env/timeline.metrics.host.inmemory.aggregation.jvm.arguments",
                                                    "-Xmx256m -Xms128m -XX:PermSize=68m")
 
diff --git a/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/templates/hadoop-metrics2-hbase.properties.j2 b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/templates/hadoop-metrics2-hbase.properties.j2
index 600436c..4a6cd29 100644
--- a/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/templates/hadoop-metrics2-hbase.properties.j2
+++ b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/templates/hadoop-metrics2-hbase.properties.j2
@@ -16,67 +16,47 @@
 # limitations under the License.
 #}
 
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements. See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License. You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
+hbase.extendedperiod = 3600
 
-# syntax: [prefix].[source|sink|jmx].[instance].[options]
-# See package.html for org.apache.hadoop.metrics2 for details
+hbase.class=org.apache.hadoop.metrics2.sink.timeline.HadoopTimelineMetricsSink
+hbase.period=30
+hbase.collector.hosts={{ams_collector_hosts}}
+hbase.port={{metric_collector_port}}
+hbase.protocol={{metric_collector_protocol}}
 
-# HBase-specific configuration to reset long-running stats (e.g. compactions)
-# If this variable is left out, then the default is no expiration.
+jvm.class=org.apache.hadoop.metrics2.sink.timeline.HadoopTimelineMetricsSink
+jvm.period=30
+jvm.collector.hosts={{ams_collector_hosts}}
+jvm.port={{metric_collector_port}}
+jvm.protocol={{metric_collector_protocol}}
 
-# Disable metrics since AMS 2.X doesn't work with Hadoop 3.x sink
+rpc.class=org.apache.hadoop.metrics2.sink.timeline.HadoopTimelineMetricsSink
+rpc.period=30
+rpc.collector.hosts={{ams_collector_hosts}}
+rpc.port={{metric_collector_port}}
+rpc.protocol={{metric_collector_protocol}}
 
-#hbase.extendedperiod = 3600
-#
-#hbase.class=org.apache.hadoop.metrics2.sink.timeline.HadoopTimelineMetricsSink
-#hbase.period=30
-#hbase.collector.hosts={{ams_collector_hosts}}
-#hbase.port={{metric_collector_port}}
-#hbase.protocol={{metric_collector_protocol}}
-#
-#jvm.class=org.apache.hadoop.metrics2.sink.timeline.HadoopTimelineMetricsSink
-#jvm.period=30
-#jvm.collector.hosts={{ams_collector_hosts}}
-#jvm.port={{metric_collector_port}}
-#jvm.protocol={{metric_collector_protocol}}
-#
-#rpc.class=org.apache.hadoop.metrics2.sink.timeline.HadoopTimelineMetricsSink
-#rpc.period=30
-#rpc.collector.hosts={{ams_collector_hosts}}
-#rpc.port={{metric_collector_port}}
-#rpc.protocol={{metric_collector_protocol}}
-#
-#*.timeline.plugin.urls=file:///usr/lib/ambari-metrics-hadoop-sink/ambari-metrics-hadoop-sink.jar
-#*.sink.timeline.slave.host.name={{hostname}}
-#*.host_in_memory_aggregation = {{host_in_memory_aggregation}}
-#*.host_in_memory_aggregation_port = {{host_in_memory_aggregation_port}}
-#
-#hbase.sink.timeline.class=org.apache.hadoop.metrics2.sink.timeline.HadoopTimelineMetricsSink
-#hbase.sink.timeline.period={{metrics_collection_period}}
-#hbase.sink.timeline.sendInterval={{metrics_report_interval}}000
-#hbase.sink.timeline.collector.hosts={{ams_collector_hosts}}
-#hbase.sink.timeline.port={{metric_collector_port}}
-#hbase.sink.timeline.protocol={{metric_collector_protocol}}
-#hbase.sink.timeline.serviceName-prefix=ams
-#
-## HTTPS properties
-#hbase.sink.timeline.truststore.path = {{metric_truststore_path}}
-#hbase.sink.timeline.truststore.type = {{metric_truststore_type}}
-#hbase.sink.timeline.truststore.password = {{metric_truststore_password}}
-#
-## Switch off metrics generation on a per region basis
-#*.source.filter.class=org.apache.hadoop.metrics2.filter.RegexFilter
-#hbase.*.source.filter.exclude=.*(Regions|Users|Tables).*
+*.timeline.plugin.urls=file:///usr/lib/ambari-metrics-hadoop-sink/ambari-metrics-hadoop-sink.jar
+*.sink.timeline.slave.host.name={{hostname}}
+*.host_in_memory_aggregation = {{host_in_memory_aggregation}}
+*.host_in_memory_aggregation_port = {{host_in_memory_aggregation_port}}
+{% if is_aggregation_https_enabled %}
+*.host_in_memory_aggregation_protocol = {{host_in_memory_aggregation_protocol}}
+{% endif %}
+
+hbase.sink.timeline.class=org.apache.hadoop.metrics2.sink.timeline.HadoopTimelineMetricsSink
+hbase.sink.timeline.period={{metrics_collection_period}}
+hbase.sink.timeline.sendInterval={{metrics_report_interval}}000
+hbase.sink.timeline.collector.hosts={{ams_collector_hosts}}
+hbase.sink.timeline.port={{metric_collector_port}}
+hbase.sink.timeline.protocol={{metric_collector_protocol}}
+hbase.sink.timeline.serviceName-prefix=ams
+
+# HTTPS properties
+hbase.sink.timeline.truststore.path = {{metric_truststore_path}}
+hbase.sink.timeline.truststore.type = {{metric_truststore_type}}
+hbase.sink.timeline.truststore.password = {{metric_truststore_password}}
+
+# Switch off metrics generation on a per region basis
+*.source.filter.class=org.apache.hadoop.metrics2.filter.RegexFilter
+hbase.*.source.filter.exclude=.*(Regions|Users|Tables).*
\ No newline at end of file
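
The re-enabled source filter above suppresses per-region, per-user and
per-table metric series. A small standalone check of the exclude pattern,
with illustrative metric names rather than real HBase ones:

    # Illustrative check of the hbase.*.source.filter.exclude pattern above.
    import re

    exclude = re.compile(r'.*(Regions|Users|Tables).*')
    for name in ('Master.Regions.numRegions',
                 'regionserver.Tables.scanCount',
                 'jvm.JvmMetrics.MemHeapUsedM'):
        print(name, bool(exclude.match(name)))
    # Master.Regions.numRegions True
    # regionserver.Tables.scanCount True
    # jvm.JvmMetrics.MemHeapUsedM False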
diff --git a/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/templates/metric_monitor.ini.j2 b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/templates/metric_monitor.ini.j2
index 245ba3b..6256eaa 100644
--- a/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/templates/metric_monitor.ini.j2
+++ b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/templates/metric_monitor.ini.j2
@@ -44,6 +44,9 @@ https_enabled = {{metric_collector_https_enabled}}
 [aggregation]
 host_in_memory_aggregation = {{host_in_memory_aggregation}}
 host_in_memory_aggregation_port = {{host_in_memory_aggregation_port}}
+{% if is_aggregation_https_enabled %}
+host_in_memory_aggregation_protocol = {{host_in_memory_aggregation_protocol}}
+{% endif %}
 java_home = {{java64_home}}
 jvm_arguments = {{host_in_memory_aggregation_jvm_arguments}}
 ams_monitor_log_dir = {{ams_monitor_log_dir}}
\ No newline at end of file
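
The host_in_memory_aggregation_protocol key is only emitted when the HTTPS
policy is on, so a reader of the rendered ini needs an http fallback. A sketch
of such a consumer, assuming the monitor's Python 2 runtime and the stdlib
ConfigParser; this is not the monitor's actual code:

    # Sketch: read the optional protocol key from the rendered ini,
    # defaulting to plain http when the {% if %} guard omitted it.
    import ConfigParser  # Python 2 stdlib; the monitor's own parser may differ

    config = ConfigParser.RawConfigParser()
    config.read('/etc/ambari-metrics-monitor/conf/metric_monitor.ini')

    if config.has_option('aggregation', 'host_in_memory_aggregation_protocol'):
      protocol = config.get('aggregation', 'host_in_memory_aggregation_protocol')
    else:
      protocol = 'http'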
diff --git a/ambari-server/src/main/resources/common-services/FLUME/1.4.0.2.0/package/scripts/params.py b/ambari-server/src/main/resources/common-services/FLUME/1.4.0.2.0/package/scripts/params.py
index c10508c..90f83b9 100644
--- a/ambari-server/src/main/resources/common-services/FLUME/1.4.0.2.0/package/scripts/params.py
+++ b/ambari-server/src/main/resources/common-services/FLUME/1.4.0.2.0/package/scripts/params.py
@@ -126,6 +126,12 @@ metrics_collection_period = default("/configurations/ams-site/timeline.metrics.s
 
 host_in_memory_aggregation = default("/configurations/ams-site/timeline.metrics.host.inmemory.aggregation", True)
 host_in_memory_aggregation_port = default("/configurations/ams-site/timeline.metrics.host.inmemory.aggregation.port", 61888)
+is_aggregation_https_enabled = False
+if default("/configurations/ams-site/timeline.metrics.host.inmemory.aggregation.http.policy", "HTTP_ONLY") == "HTTPS_ONLY":
+  host_in_memory_aggregation_protocol = 'https'
+  is_aggregation_https_enabled = True
+else:
+  host_in_memory_aggregation_protocol = 'http'
 
 # Cluster Zookeeper quorum
 zookeeper_quorum = None
diff --git a/ambari-server/src/main/resources/common-services/FLUME/1.4.0.2.0/package/templates/flume-metrics2.properties.j2 b/ambari-server/src/main/resources/common-services/FLUME/1.4.0.2.0/package/templates/flume-metrics2.properties.j2
index c476019..c9a320f 100644
--- a/ambari-server/src/main/resources/common-services/FLUME/1.4.0.2.0/package/templates/flume-metrics2.properties.j2
+++ b/ambari-server/src/main/resources/common-services/FLUME/1.4.0.2.0/package/templates/flume-metrics2.properties.j2
@@ -25,6 +25,9 @@ maxRowCacheSize=10000
 sendInterval={{metrics_report_interval}}000
 host_in_memory_aggregation = {{host_in_memory_aggregation}}
 host_in_memory_aggregation_port = {{host_in_memory_aggregation_port}}
+{% if is_aggregation_https_enabled %}
+host_in_memory_aggregation_protocol = {{host_in_memory_aggregation_protocol}}
+{% endif %}
 
 # HTTPS properties
 truststore.path = {{metric_truststore_path}}
diff --git a/ambari-server/src/main/resources/common-services/HBASE/0.96.0.2.0/package/scripts/params_linux.py b/ambari-server/src/main/resources/common-services/HBASE/0.96.0.2.0/package/scripts/params_linux.py
index 59fe778..f60cb5b 100644
--- a/ambari-server/src/main/resources/common-services/HBASE/0.96.0.2.0/package/scripts/params_linux.py
+++ b/ambari-server/src/main/resources/common-services/HBASE/0.96.0.2.0/package/scripts/params_linux.py
@@ -193,6 +193,12 @@ metrics_collection_period = default("/configurations/ams-site/timeline.metrics.s
 
 host_in_memory_aggregation = default("/configurations/ams-site/timeline.metrics.host.inmemory.aggregation", True)
 host_in_memory_aggregation_port = default("/configurations/ams-site/timeline.metrics.host.inmemory.aggregation.port", 61888)
+is_aggregation_https_enabled = False
+if default("/configurations/ams-site/timeline.metrics.host.inmemory.aggregation.http.policy", "HTTP_ONLY") == "HTTPS_ONLY":
+  host_in_memory_aggregation_protocol = 'https'
+  is_aggregation_https_enabled = True
+else:
+  host_in_memory_aggregation_protocol = 'http'
 
 # if hbase is selected the hbase_rs_hosts, should not be empty, but still default just in case
 if 'slave_hosts' in config['clusterHostInfo']:
diff --git a/ambari-server/src/main/resources/common-services/HBASE/0.96.0.2.0/package/templates/hadoop-metrics2-hbase.properties-GANGLIA-MASTER.j2 b/ambari-server/src/main/resources/common-services/HBASE/0.96.0.2.0/package/templates/hadoop-metrics2-hbase.properties-GANGLIA-MASTER.j2
index 7368ffe..66796b4 100644
--- a/ambari-server/src/main/resources/common-services/HBASE/0.96.0.2.0/package/templates/hadoop-metrics2-hbase.properties-GANGLIA-MASTER.j2
+++ b/ambari-server/src/main/resources/common-services/HBASE/0.96.0.2.0/package/templates/hadoop-metrics2-hbase.properties-GANGLIA-MASTER.j2
@@ -78,6 +78,9 @@ hbase.sink.timeline.protocol={{metric_collector_protocol}}
 hbase.sink.timeline.port={{metric_collector_port}}
 hbase.sink.timeline.host_in_memory_aggregation = {{host_in_memory_aggregation}}
 hbase.sink.timeline.host_in_memory_aggregation_port = {{host_in_memory_aggregation_port}}
+{% if is_aggregation_https_enabled %}
+hbase.sink.timeline.host_in_memory_aggregation_protocol = {{host_in_memory_aggregation_protocol}}
+{% endif %}
 
 # HTTPS properties
 hbase.sink.timeline.truststore.path = {{metric_truststore_path}}
diff --git a/ambari-server/src/main/resources/common-services/HBASE/0.96.0.2.0/package/templates/hadoop-metrics2-hbase.properties-GANGLIA-RS.j2 b/ambari-server/src/main/resources/common-services/HBASE/0.96.0.2.0/package/templates/hadoop-metrics2-hbase.properties-GANGLIA-RS.j2
index f245365..4ed68ba 100644
--- a/ambari-server/src/main/resources/common-services/HBASE/0.96.0.2.0/package/templates/hadoop-metrics2-hbase.properties-GANGLIA-RS.j2
+++ b/ambari-server/src/main/resources/common-services/HBASE/0.96.0.2.0/package/templates/hadoop-metrics2-hbase.properties-GANGLIA-RS.j2
@@ -76,6 +76,9 @@ hbase.sink.timeline.protocol={{metric_collector_protocol}}
 hbase.sink.timeline.port={{metric_collector_port}}
 hbase.sink.timeline.host_in_memory_aggregation = {{host_in_memory_aggregation}}
 hbase.sink.timeline.host_in_memory_aggregation_port = {{host_in_memory_aggregation_port}}
+{% if is_aggregation_https_enabled %}
+hbase.sink.timeline.host_in_memory_aggregation_protocol = {{host_in_memory_aggregation_protocol}}
+{% endif %}
 
 # HTTPS properties
 hbase.sink.timeline.truststore.path = {{metric_truststore_path}}
diff --git a/ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/scripts/params_linux.py b/ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/scripts/params_linux.py
index 92038e1..b13fa06 100644
--- a/ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/scripts/params_linux.py
+++ b/ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/scripts/params_linux.py
@@ -579,6 +579,12 @@ metrics_collection_period = default("/configurations/ams-site/timeline.metrics.s
 
 host_in_memory_aggregation = default("/configurations/ams-site/timeline.metrics.host.inmemory.aggregation", True)
 host_in_memory_aggregation_port = default("/configurations/ams-site/timeline.metrics.host.inmemory.aggregation.port", 61888)
+is_aggregation_https_enabled = False
+if default("/configurations/ams-site/timeline.metrics.host.inmemory.aggregation.http.policy", "HTTP_ONLY") == "HTTPS_ONLY":
+  host_in_memory_aggregation_protocol = 'https'
+  is_aggregation_https_enabled = True
+else:
+  host_in_memory_aggregation_protocol = 'http'
 ########################################################
 ############# Atlas related params #####################
 ########################################################
diff --git a/ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/templates/hadoop-metrics2-hivemetastore.properties.j2 b/ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/templates/hadoop-metrics2-hivemetastore.properties.j2
index 3093e56..d4573c3 100644
--- a/ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/templates/hadoop-metrics2-hivemetastore.properties.j2
+++ b/ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/templates/hadoop-metrics2-hivemetastore.properties.j2
@@ -53,6 +53,8 @@
   hivemetastore.sink.timeline.protocol={{metric_collector_protocol}}
   hivemetastore.sink.timeline.host_in_memory_aggregation = {{host_in_memory_aggregation}}
   hivemetastore.sink.timeline.host_in_memory_aggregation_port = {{host_in_memory_aggregation_port}}
-
+  {% if is_aggregation_https_enabled %}
+    hivemetastore.sink.timeline.host_in_memory_aggregation_protocol = {{host_in_memory_aggregation_protocol}}
+  {% endif %}
 
 {% endif %}
diff --git a/ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/templates/hadoop-metrics2-hiveserver2.properties.j2 b/ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/templates/hadoop-metrics2-hiveserver2.properties.j2
index 59a7c1b..c67d002 100644
--- a/ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/templates/hadoop-metrics2-hiveserver2.properties.j2
+++ b/ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/templates/hadoop-metrics2-hiveserver2.properties.j2
@@ -53,5 +53,8 @@
   hiveserver2.sink.timeline.protocol={{metric_collector_protocol}}
   hiveserver2.sink.timeline.host_in_memory_aggregation = {{host_in_memory_aggregation}}
   hiveserver2.sink.timeline.host_in_memory_aggregation_port = {{host_in_memory_aggregation_port}}
+  {% if is_aggregation_https_enabled %}
+    hiveserver2.sink.timeline.host_in_memory_aggregation_protocol = {{host_in_memory_aggregation_protocol}}
+  {% endif %}
 
 {% endif %}
diff --git a/ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/templates/hadoop-metrics2-llapdaemon.j2 b/ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/templates/hadoop-metrics2-llapdaemon.j2
index 69f6071..cd23e8a 100644
--- a/ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/templates/hadoop-metrics2-llapdaemon.j2
+++ b/ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/templates/hadoop-metrics2-llapdaemon.j2
@@ -52,5 +52,8 @@
   llapdaemon.sink.timeline.protocol={{metric_collector_protocol}}
   llapdaemon.sink.timeline.host_in_memory_aggregation = {{host_in_memory_aggregation}}
   llapdaemon.sink.timeline.host_in_memory_aggregation_port = {{host_in_memory_aggregation_port}}
+  {% if is_aggregation_https_enabled %}
+    llapdaemon.sink.timeline.host_in_memory_aggregation_protocol = {{host_in_memory_aggregation_protocol}}
+  {% endif %}
 
 {% endif %}
diff --git a/ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/templates/hadoop-metrics2-llaptaskscheduler.j2 b/ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/templates/hadoop-metrics2-llaptaskscheduler.j2
index c08a498..674d3cc 100644
--- a/ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/templates/hadoop-metrics2-llaptaskscheduler.j2
+++ b/ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/templates/hadoop-metrics2-llaptaskscheduler.j2
@@ -52,5 +52,8 @@
   llaptaskscheduler.sink.timeline.protocol={{metric_collector_protocol}}
   llaptaskscheduler.sink.timeline.host_in_memory_aggregation = {{host_in_memory_aggregation}}
   llaptaskscheduler.sink.timeline.host_in_memory_aggregation_port = {{host_in_memory_aggregation_port}}
+  {% if is_aggregation_https_enabled %}
+    llaptaskscheduler.sink.timeline.host_in_memory_aggregation_protocol = {{host_in_memory_aggregation_protocol}}
+  {% endif %}
 
 {% endif %}
diff --git a/ambari-server/src/main/resources/common-services/KAFKA/0.8.1/configuration/kafka-broker.xml b/ambari-server/src/main/resources/common-services/KAFKA/0.8.1/configuration/kafka-broker.xml
index 61136c8..9a7b667 100644
--- a/ambari-server/src/main/resources/common-services/KAFKA/0.8.1/configuration/kafka-broker.xml
+++ b/ambari-server/src/main/resources/common-services/KAFKA/0.8.1/configuration/kafka-broker.xml
@@ -424,4 +424,9 @@
     <value>{{host_in_memory_aggregation_port}}</value>
     <on-ambari-upgrade add="true"/>
   </property>
+  <property>
+    <name>kafka.timeline.metrics.host_in_memory_aggregation_protocol</name>
+    <value>{{host_in_memory_aggregation_protocol}}</value>
+    <on-ambari-upgrade add="true"/>
+  </property>
 </configuration>
diff --git a/ambari-server/src/main/resources/common-services/KAFKA/0.8.1/package/scripts/params.py b/ambari-server/src/main/resources/common-services/KAFKA/0.8.1/package/scripts/params.py
index 4c2647e..4f79d24 100644
--- a/ambari-server/src/main/resources/common-services/KAFKA/0.8.1/package/scripts/params.py
+++ b/ambari-server/src/main/resources/common-services/KAFKA/0.8.1/package/scripts/params.py
@@ -154,6 +154,12 @@ if has_metric_collector:
 
   host_in_memory_aggregation = str(default("/configurations/ams-site/timeline.metrics.host.inmemory.aggregation", True)).lower()
   host_in_memory_aggregation_port = default("/configurations/ams-site/timeline.metrics.host.inmemory.aggregation.port", 61888)
+  is_aggregation_https_enabled = False
+  if default("/configurations/ams-site/timeline.metrics.host.inmemory.aggregation.http.policy", "HTTP_ONLY") == "HTTPS_ONLY":
+    host_in_memory_aggregation_protocol = 'https'
+    is_aggregation_https_enabled = True
+  else:
+    host_in_memory_aggregation_protocol = 'http'
   pass
 
 # Security-related params
diff --git a/ambari-server/src/main/resources/common-services/STORM/0.9.1/package/scripts/params_linux.py b/ambari-server/src/main/resources/common-services/STORM/0.9.1/package/scripts/params_linux.py
index fb624b8..9b7f27af 100644
--- a/ambari-server/src/main/resources/common-services/STORM/0.9.1/package/scripts/params_linux.py
+++ b/ambari-server/src/main/resources/common-services/STORM/0.9.1/package/scripts/params_linux.py
@@ -210,6 +210,12 @@ metric_collector_sink_jar = "/usr/lib/storm/lib/ambari-metrics-storm-sink-with-c
 metric_collector_legacy_sink_jar = "/usr/lib/storm/lib/ambari-metrics-storm-sink-legacy-with-common-*.jar"
 host_in_memory_aggregation = default("/configurations/ams-site/timeline.metrics.host.inmemory.aggregation", True)
 host_in_memory_aggregation_port = default("/configurations/ams-site/timeline.metrics.host.inmemory.aggregation.port", 61888)
+is_aggregation_https_enabled = False
+if default("/configurations/ams-site/timeline.metrics.host.inmemory.aggregation.http.policy", "HTTP_ONLY") == "HTTPS_ONLY":
+  host_in_memory_aggregation_protocol = 'https'
+  is_aggregation_https_enabled = True
+else:
+  host_in_memory_aggregation_protocol = 'http'
 
 
 # Cluster Zookeeper quorum
diff --git a/ambari-server/src/main/resources/common-services/STORM/0.9.1/package/templates/config.yaml.j2 b/ambari-server/src/main/resources/common-services/STORM/0.9.1/package/templates/config.yaml.j2
index b2dd3c8..7560822 100644
--- a/ambari-server/src/main/resources/common-services/STORM/0.9.1/package/templates/config.yaml.j2
+++ b/ambari-server/src/main/resources/common-services/STORM/0.9.1/package/templates/config.yaml.j2
@@ -61,8 +61,11 @@ metrics_collector:
   protocol: "{{metric_collector_protocol}}"
   port: "{{metric_collector_port}}"
   appId: "{{metric_collector_app_id}}"
-  host_in_memory_aggregation = {{host_in_memory_aggregation}}
-  host_in_memory_aggregation_port = {{host_in_memory_aggregation_port}}
+  host_in_memory_aggregation: {{host_in_memory_aggregation}}
+  host_in_memory_aggregation_port: {{host_in_memory_aggregation_port}}
+  {% if is_aggregation_https_enabled %}
+  host_in_memory_aggregation_protocol: {{host_in_memory_aggregation_protocol}}
+  {% endif %}
 
   # HTTPS settings
   truststore.path : "{{metric_truststore_path}}"
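
The key = value lines replaced above were never parsed as mapping entries by
YAML; a quick PyYAML check with an illustrative document, not the template
itself, shows why the colon form is required:

    # Illustrative only: the old '=' form collapses into a single string
    # scalar, while the ':' form yields the nested mapping the sink expects.
    import yaml

    broken = "metrics_collector:\n  host_in_memory_aggregation = true\n"
    fixed  = "metrics_collector:\n  host_in_memory_aggregation: true\n"

    print(yaml.safe_load(broken))
    # {'metrics_collector': 'host_in_memory_aggregation = true'}
    print(yaml.safe_load(fixed))
    # {'metrics_collector': {'host_in_memory_aggregation': True}}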
diff --git a/ambari-server/src/main/resources/common-services/STORM/0.9.1/package/templates/storm-metrics2.properties.j2 b/ambari-server/src/main/resources/common-services/STORM/0.9.1/package/templates/storm-metrics2.properties.j2
index e7db91e..79d0b89 100644
--- a/ambari-server/src/main/resources/common-services/STORM/0.9.1/package/templates/storm-metrics2.properties.j2
+++ b/ambari-server/src/main/resources/common-services/STORM/0.9.1/package/templates/storm-metrics2.properties.j2
@@ -25,6 +25,9 @@ sendInterval={{metrics_report_interval}}000
 clusterReporterAppId=nimbus
 host_in_memory_aggregation = {{host_in_memory_aggregation}}
 host_in_memory_aggregation_port = {{host_in_memory_aggregation_port}}
+{% if is_aggregation_https_enabled %}
+host_in_memory_aggregation_protocol = {{host_in_memory_aggregation_protocol}}
+{% endif %}
 
 # HTTPS properties
 truststore.path = {{metric_truststore_path}}
diff --git a/ambari-server/src/main/resources/stack-hooks/before-START/scripts/params.py b/ambari-server/src/main/resources/stack-hooks/before-START/scripts/params.py
index da8ef5e..04a5604 100644
--- a/ambari-server/src/main/resources/stack-hooks/before-START/scripts/params.py
+++ b/ambari-server/src/main/resources/stack-hooks/before-START/scripts/params.py
@@ -169,6 +169,12 @@ metrics_collection_period = default("/configurations/ams-site/timeline.metrics.s
 
 host_in_memory_aggregation = default("/configurations/ams-site/timeline.metrics.host.inmemory.aggregation", True)
 host_in_memory_aggregation_port = default("/configurations/ams-site/timeline.metrics.host.inmemory.aggregation.port", 61888)
+is_aggregation_https_enabled = False
+if default("/configurations/ams-site/timeline.metrics.host.inmemory.aggregation.http.policy", "HTTP_ONLY") == "HTTPS_ONLY":
+  host_in_memory_aggregation_protocol = 'https'
+  is_aggregation_https_enabled = True
+else:
+  host_in_memory_aggregation_protocol = 'http'
 
 # Cluster Zookeeper quorum
 zookeeper_quorum = None
diff --git a/ambari-server/src/main/resources/stack-hooks/before-START/templates/hadoop-metrics2.properties.j2 b/ambari-server/src/main/resources/stack-hooks/before-START/templates/hadoop-metrics2.properties.j2
index 2cd9aa8..281ac27 100644
--- a/ambari-server/src/main/resources/stack-hooks/before-START/templates/hadoop-metrics2.properties.j2
+++ b/ambari-server/src/main/resources/stack-hooks/before-START/templates/hadoop-metrics2.properties.j2
@@ -77,6 +77,9 @@ resourcemanager.sink.ganglia.tagsForPrefix.yarn=Queue
 *.sink.timeline.port={{metric_collector_port}}
 *.sink.timeline.host_in_memory_aggregation = {{host_in_memory_aggregation}}
 *.sink.timeline.host_in_memory_aggregation_port = {{host_in_memory_aggregation_port}}
+{% if is_aggregation_https_enabled %}
+*.sink.timeline.host_in_memory_aggregation_protocol = {{host_in_memory_aggregation_protocol}}
+{% endif %}
 
 # HTTPS properties
 *.sink.timeline.truststore.path = {{metric_truststore_path}}
diff --git a/ambari-server/src/main/resources/stacks/HDP/2.1.GlusterFS/services/STORM/package/templates/config.yaml.j2 b/ambari-server/src/main/resources/stacks/HDP/2.1.GlusterFS/services/STORM/package/templates/config.yaml.j2
index 445df31..7560822 100644
--- a/ambari-server/src/main/resources/stacks/HDP/2.1.GlusterFS/services/STORM/package/templates/config.yaml.j2
+++ b/ambari-server/src/main/resources/stacks/HDP/2.1.GlusterFS/services/STORM/package/templates/config.yaml.j2
@@ -1,21 +1,3 @@
-{#
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-#
-#   http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-#}
-
 # Licensed to the Apache Software Foundation (ASF) under one or more
 # contributor license agreements.  See the NOTICE file distributed with
 # this work for additional information regarding copyright ownership.
@@ -62,4 +44,32 @@ ganglia:
   # an <IP>:<HOSTNAME> pair to spoof
   # this allows us to simulate storm cluster metrics coming from a specific host
   #spoof: "192.168.1.1:storm"
-{% endif %}
\ No newline at end of file
+{% endif %}
+
+{% if has_metric_collector and stack_supports_storm_ams %}
+enableGanglia: False
+
+ganglia:
+  reportInterval: {{metric_collector_report_interval}}
+
+enableMetricsSink: True
+
+metrics_collector:
+
+  reportInterval: {{metric_collector_report_interval}}
+  collector.hosts: "{{ams_collector_hosts}}"
+  protocol: "{{metric_collector_protocol}}"
+  port: "{{metric_collector_port}}"
+  appId: "{{metric_collector_app_id}}"
+  host_in_memory_aggregation: {{host_in_memory_aggregation}}
+  host_in_memory_aggregation_port: {{host_in_memory_aggregation_port}}
+  {% if is_aggregation_https_enabled %}
+  host_in_memory_aggregation_protocol: {{host_in_memory_aggregation_protocol}}
+  {% endif %}
+
+  # HTTPS settings
+  truststore.path : "{{metric_truststore_path}}"
+  truststore.type : "{{metric_truststore_type}}"
+  truststore.password : "{{metric_truststore_password}}"
+
+{% endif %}
diff --git a/ambari-server/src/main/resources/stacks/HDP/2.6/services/HDFS/configuration/hadoop-metrics2.properties.xml b/ambari-server/src/main/resources/stacks/HDP/2.6/services/HDFS/configuration/hadoop-metrics2.properties.xml
index 84ea231..02be755 100644
--- a/ambari-server/src/main/resources/stacks/HDP/2.6/services/HDFS/configuration/hadoop-metrics2.properties.xml
+++ b/ambari-server/src/main/resources/stacks/HDP/2.6/services/HDFS/configuration/hadoop-metrics2.properties.xml
@@ -88,6 +88,9 @@ resourcemanager.sink.ganglia.tagsForPrefix.yarn=Queue
 *.sink.timeline.port={{metric_collector_port}}
 *.sink.timeline.host_in_memory_aggregation = {{host_in_memory_aggregation}}
 *.sink.timeline.host_in_memory_aggregation_port = {{host_in_memory_aggregation_port}}
+{% if is_aggregation_https_enabled %}
+*.sink.timeline.host_in_memory_aggregation_protocol = {{host_in_memory_aggregation_protocol}}
+{% endif %}
 
 # HTTPS properties
 *.sink.timeline.truststore.path = {{metric_truststore_path}}
diff --git a/ambari-server/src/main/resources/stacks/PERF/1.0/services/FAKEHDFS/configuration/hadoop-metrics2.properties.xml b/ambari-server/src/main/resources/stacks/PERF/1.0/services/FAKEHDFS/configuration/hadoop-metrics2.properties.xml
index 4aadb83..02be755 100644
--- a/ambari-server/src/main/resources/stacks/PERF/1.0/services/FAKEHDFS/configuration/hadoop-metrics2.properties.xml
+++ b/ambari-server/src/main/resources/stacks/PERF/1.0/services/FAKEHDFS/configuration/hadoop-metrics2.properties.xml
@@ -86,6 +86,11 @@ resourcemanager.sink.ganglia.tagsForPrefix.yarn=Queue
 *.sink.timeline.zookeeper.quorum={{zookeeper_quorum}}
 *.sink.timeline.protocol={{metric_collector_protocol}}
 *.sink.timeline.port={{metric_collector_port}}
+*.sink.timeline.host_in_memory_aggregation = {{host_in_memory_aggregation}}
+*.sink.timeline.host_in_memory_aggregation_port = {{host_in_memory_aggregation_port}}
+{% if is_aggregation_https_enabled %}
+*.sink.timeline.host_in_memory_aggregation_protocol = {{host_in_memory_aggregation_protocol}}
+{% endif %}
 
 # HTTPS properties
 *.sink.timeline.truststore.path = {{metric_truststore_path}}
diff --git a/ambari-server/src/test/java/org/apache/ambari/server/metric/system/impl/TestAmbariMetricsSinkImpl.java b/ambari-server/src/test/java/org/apache/ambari/server/metric/system/impl/TestAmbariMetricsSinkImpl.java
index 969070d..10e5ef8 100644
--- a/ambari-server/src/test/java/org/apache/ambari/server/metric/system/impl/TestAmbariMetricsSinkImpl.java
+++ b/ambari-server/src/test/java/org/apache/ambari/server/metric/system/impl/TestAmbariMetricsSinkImpl.java
@@ -84,6 +84,11 @@ public class TestAmbariMetricsSinkImpl extends AbstractTimelineMetricsSink imple
   }
 
   @Override
+  protected String getHostInMemoryAggregationProtocol() {
+    return "http";
+  }
+
+  @Override
   public void init(MetricsConfiguration configuration) {
 
   }
diff --git a/ambari-server/src/test/python/stacks/2.0.6/AMBARI_METRICS/test_metrics_monitor.py b/ambari-server/src/test/python/stacks/2.0.6/AMBARI_METRICS/test_metrics_monitor.py
new file mode 100644
index 0000000..945d87a
--- /dev/null
+++ b/ambari-server/src/test/python/stacks/2.0.6/AMBARI_METRICS/test_metrics_monitor.py
@@ -0,0 +1,142 @@
+#!/usr/bin/env python
+
+'''
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+'''
+
+from mock.mock import MagicMock, patch
+from stacks.utils.RMFTestCase import *
+
+@patch("tempfile.mkdtemp", new = MagicMock(return_value='/some_tmp_dir'))
+@patch("os.path.exists", new = MagicMock(return_value=True))
+@patch("platform.linux_distribution", new = MagicMock(return_value="Linux"))
+class TestMetricsMonitor(RMFTestCase):
+  COMMON_SERVICES_PACKAGE_DIR = "AMBARI_METRICS/0.1.0/package"
+  STACK_VERSION = "2.0.6"
+  DEFAULT_IMMUTABLE_PATHS = ['/apps/hive/warehouse', '/apps/falcon', '/mr-history/done', '/app-logs', '/tmp']
+
+  def test_start_default_with_aggregator_https(self):
+    self.executeScript(self.COMMON_SERVICES_PACKAGE_DIR + "/scripts/metrics_monitor.py",
+                       classname = "AmsMonitor",
+                       command = "start",
+                       config_file="default.json",
+                       stack_version = self.STACK_VERSION,
+                       target = RMFTestCase.TARGET_COMMON_SERVICES
+                       )
+    self.maxDiff=None
+    self.assert_ams(inmemory_aggregation = False)
+    self.assertResourceCalled('Execute', 'ambari-sudo.sh /usr/jdk64/jdk1.7.0_45/bin/keytool -importkeystore -srckeystore /etc/security/clientKeys/all.jks -destkeystore /some_tmp_dir/truststore.p12 -srcalias c6402.ambari.apache.org -deststoretype PKCS12 -srcstorepass bigdata -deststorepass bigdata',
+                              )
+    self.assertResourceCalled('Execute', 'ambari-sudo.sh openssl pkcs12 -in /some_tmp_dir/truststore.p12 -out /etc/ambari-metrics-monitor/conf/ca.pem -cacerts -nokeys -passin pass:bigdata',
+                              )
+    self.assertResourceCalled('Execute', ('chown', u'ams:hadoop', '/etc/ambari-metrics-monitor/conf/ca.pem'),
+                              sudo=True
+                              )
+    self.assertResourceCalled('Execute', ('chmod', '644', '/etc/ambari-metrics-monitor/conf/ca.pem'),
+                              sudo=True)
+    self.assertResourceCalled('Execute', 'ambari-sudo.sh rm -rf /some_tmp_dir',
+                              )
+    self.assertResourceCalled('Execute', '/usr/sbin/ambari-metrics-monitor --config /etc/ambari-metrics-monitor/conf start',
+                              user = 'ams'
+                              )
+    self.assertNoMoreResources()
+
+  def test_start_inmemory_aggregator(self):
+    self.executeScript(self.COMMON_SERVICES_PACKAGE_DIR + "/scripts/metrics_monitor.py",
+                       classname = "AmsMonitor",
+                       command = "start",
+                       config_file="default_ams_embedded.json",
+                       stack_version = self.STACK_VERSION,
+                       target = RMFTestCase.TARGET_COMMON_SERVICES
+                       )
+    self.maxDiff=None
+    self.assert_ams(inmemory_aggregation = True)
+
+    self.assertResourceCalled('Execute', '/usr/sbin/ambari-metrics-monitor --config /etc/ambari-metrics-monitor/conf start',
+                              user = 'ams'
+                              )
+    self.assertNoMoreResources()
+
+  def assert_ams(self, inmemory_aggregation=False):
+    self.assertResourceCalled('Directory', '/etc/ambari-metrics-monitor/conf',
+                              owner = 'ams',
+                              group = 'hadoop',
+                              create_parents = True
+                              )
+
+    self.assertResourceCalled('Directory', '/var/log/ambari-metrics-monitor',
+                              owner = 'ams',
+                              group = 'hadoop',
+                              mode = 0755,
+                              create_parents = True
+                              )
+
+    if inmemory_aggregation:
+      self.assertResourceCalled('File', '/etc/ambari-metrics-monitor/conf/log4j.properties',
+                                owner = 'ams',
+                                group = 'hadoop',
+                                content = InlineTemplate(self.getConfig()['configurations']['ams-log4j']['content']),
+                                mode=0644,
+                                )
+      self.assertResourceCalled('XmlConfig', 'ams-site.xml',
+                                owner = 'ams',
+                                group = 'hadoop',
+                                conf_dir = '/etc/ambari-metrics-monitor/conf',
+                                configurations = self.getConfig()['configurations']['ams-site'],
+                                configuration_attributes = self.getConfig()['configuration_attributes']['ams-site']
+                                )
+
+      self.assertResourceCalled('XmlConfig', 'ssl-server.xml',
+                              owner = 'ams',
+                              group = 'hadoop',
+                              conf_dir = '/etc/ambari-metrics-monitor/conf',
+                              configurations = self.getConfig()['configurations']['ams-ssl-server'],
+                              configuration_attributes = self.getConfig()['configuration_attributes']['ams-ssl-server']
+                              )
+      pass
+
+    self.assertResourceCalled('Execute', 'ambari-sudo.sh chown -R ams:hadoop /var/log/ambari-metrics-monitor')
+    self.assertResourceCalled('Directory', '/var/run/ambari-metrics-monitor',
+                              owner = 'ams',
+                              group = 'hadoop',
+                              mode = 0755,
+                              cd_access = 'a',
+                              create_parents = True
+                              )
+    self.assertResourceCalled('Directory', '/usr/lib/python2.6/site-packages/resource_monitoring/psutil/build',
+                              owner = 'ams',
+                              group = 'hadoop',
+                              cd_access = 'a',
+                              create_parents = True
+                              )
+
+    self.assertResourceCalled('Execute', 'ambari-sudo.sh chown -R ams:hadoop /usr/lib/python2.6/site-packages/resource_monitoring')
+    self.assertResourceCalled('TemplateConfig', '/etc/ambari-metrics-monitor/conf/metric_monitor.ini',
+                              owner = 'ams',
+                              group = 'hadoop',
+                              template_tag = None,
+                              )
+    self.assertResourceCalled('TemplateConfig', '/etc/ambari-metrics-monitor/conf/metric_groups.conf',
+                              owner = 'ams',
+                              group = 'hadoop',
+                              template_tag = None,
+                              )
+    self.assertResourceCalled('File', '/etc/ambari-metrics-monitor/conf/ams-env.sh',
+                              owner = 'ams',
+                              content = InlineTemplate(self.getConfig()['configurations']['ams-env']['content'])
+                              )
+
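
The class runs under ambari-server's RMFTestCase-based Python test harness; a
hypothetical standalone invocation via stdlib unittest discovery is sketched
below (the real entry point is the project's test target, and the path is
illustrative):

    # Hypothetical runner for the new monitor tests, not part of this patch.
    import unittest

    suite = unittest.TestLoader().discover(
        start_dir='ambari-server/src/test/python/stacks/2.0.6/AMBARI_METRICS',
        pattern='test_metrics_monitor.py')
    unittest.TextTestRunner(verbosity=2).run(suite)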
diff --git a/ambari-server/src/test/python/stacks/2.0.6/configs/default_ams_embedded.json b/ambari-server/src/test/python/stacks/2.0.6/configs/default_ams_embedded.json
index 9098bc1..cf6a7df 100644
--- a/ambari-server/src/test/python/stacks/2.0.6/configs/default_ams_embedded.json
+++ b/ambari-server/src/test/python/stacks/2.0.6/configs/default_ams_embedded.json
@@ -956,6 +956,7 @@
             "content": "\n"
         },
         "ams-site": {
+            "timeline.metrics.host.inmemory.aggregation": "true",
             "timeline.metrics.host.aggregator.minute.ttl": "604800",
             "timeline.metrics.cluster.aggregator.daily.checkpointCutOffMultiplier": "1",
             "timeline.metrics.cluster.aggregator.daily.ttl": "63072000",
diff --git a/contrib/management-packs/hdf-ambari-mpack/src/main/resources/stacks/HDF/2.0/hooks/before-START/scripts/params.py b/contrib/management-packs/hdf-ambari-mpack/src/main/resources/stacks/HDF/2.0/hooks/before-START/scripts/params.py
index 8cc876f..223ae56 100644
--- a/contrib/management-packs/hdf-ambari-mpack/src/main/resources/stacks/HDF/2.0/hooks/before-START/scripts/params.py
+++ b/contrib/management-packs/hdf-ambari-mpack/src/main/resources/stacks/HDF/2.0/hooks/before-START/scripts/params.py
@@ -137,6 +137,12 @@ metrics_collection_period = default("/configurations/ams-site/timeline.metrics.s
 
 host_in_memory_aggregation = default("/configurations/ams-site/timeline.metrics.host.inmemory.aggregation", True)
 host_in_memory_aggregation_port = default("/configurations/ams-site/timeline.metrics.host.inmemory.aggregation.port", 61888)
+is_aggregation_https_enabled = False
+if default("/configurations/ams-site/timeline.metrics.host.inmemory.aggregation.http.policy", "HTTP_ONLY") == "HTTPS_ONLY":
+  host_in_memory_aggregation_protocol = 'https'
+  is_aggregation_https_enabled = True
+else:
+  host_in_memory_aggregation_protocol = 'http'
 #hadoop params
 
 if has_namenode or dfs_type == 'HCFS':
diff --git a/contrib/management-packs/hdf-ambari-mpack/src/main/resources/stacks/HDF/2.0/hooks/before-START/templates/hadoop-metrics2.properties.j2 b/contrib/management-packs/hdf-ambari-mpack/src/main/resources/stacks/HDF/2.0/hooks/before-START/templates/hadoop-metrics2.properties.j2
index fcd9b23..57b4959 100644
--- a/contrib/management-packs/hdf-ambari-mpack/src/main/resources/stacks/HDF/2.0/hooks/before-START/templates/hadoop-metrics2.properties.j2
+++ b/contrib/management-packs/hdf-ambari-mpack/src/main/resources/stacks/HDF/2.0/hooks/before-START/templates/hadoop-metrics2.properties.j2
@@ -71,7 +71,15 @@ resourcemanager.sink.ganglia.tagsForPrefix.yarn=Queue
 *.sink.timeline.class=org.apache.hadoop.metrics2.sink.timeline.HadoopTimelineMetricsSink
 *.sink.timeline.period={{metrics_collection_period}}
 *.sink.timeline.sendInterval={{metrics_report_interval}}000
-*.sink.timeline.slave.host.name = {{hostname}}
+*.sink.timeline.slave.host.name={{hostname}}
+*.sink.timeline.zookeeper.quorum={{zookeeper_quorum}}
+*.sink.timeline.protocol={{metric_collector_protocol}}
+*.sink.timeline.port={{metric_collector_port}}
+*.sink.timeline.host_in_memory_aggregation = {{host_in_memory_aggregation}}
+*.sink.timeline.host_in_memory_aggregation_port = {{host_in_memory_aggregation_port}}
+{% if is_aggregation_https_enabled %}
+*.sink.timeline.host_in_memory_aggregation_protocol = {{host_in_memory_aggregation_protocol}}
+{% endif %}
 
 # HTTPS properties
 *.sink.timeline.truststore.path = {{metric_truststore_path}}
diff --git a/contrib/management-packs/odpi-ambari-mpack/src/main/resources/stacks/ODPi/2.0/hooks/before-START/scripts/params.py b/contrib/management-packs/odpi-ambari-mpack/src/main/resources/stacks/ODPi/2.0/hooks/before-START/scripts/params.py
index fc2c61f..9425d61 100755
--- a/contrib/management-packs/odpi-ambari-mpack/src/main/resources/stacks/ODPi/2.0/hooks/before-START/scripts/params.py
+++ b/contrib/management-packs/odpi-ambari-mpack/src/main/resources/stacks/ODPi/2.0/hooks/before-START/scripts/params.py
@@ -134,6 +134,25 @@ if has_metric_collector:
   pass
 metrics_report_interval = default("/configurations/ams-site/timeline.metrics.sink.report.interval", 60)
 metrics_collection_period = default("/configurations/ams-site/timeline.metrics.sink.collection.period", 10)
+host_in_memory_aggregation = default("/configurations/ams-site/timeline.metrics.host.inmemory.aggregation", True)
+host_in_memory_aggregation_port = default("/configurations/ams-site/timeline.metrics.host.inmemory.aggregation.port", 61888)
+is_aggregation_https_enabled = False
+if default("/configurations/ams-site/timeline.metrics.host.inmemory.aggregation.http.policy", "HTTP_ONLY") == "HTTPS_ONLY":
+  host_in_memory_aggregation_protocol = 'https'
+  is_aggregation_https_enabled = True
+else:
+  host_in_memory_aggregation_protocol = 'http'
+
+# Cluster Zookeeper quorum
+zookeeper_quorum = None
+if has_zk_host:
+  if 'zoo.cfg' in config['configurations'] and 'clientPort' in config['configurations']['zoo.cfg']:
+    zookeeper_clientPort = config['configurations']['zoo.cfg']['clientPort']
+  else:
+    zookeeper_clientPort = '2181'
+  zookeeper_quorum = (':' + zookeeper_clientPort + ',').join(config['clusterHostInfo']['zookeeper_hosts'])
+  # last port config
+  zookeeper_quorum += ':' + zookeeper_clientPort
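
The quorum string above joins every host with the client port and appends the
port once more for the last host; a tiny worked example with illustrative
host names:

    # Worked example of the quorum-string construction above.
    zookeeper_clientPort = '2181'
    zookeeper_hosts = ['c6401.ambari.apache.org', 'c6402.ambari.apache.org']
    zookeeper_quorum = (':' + zookeeper_clientPort + ',').join(zookeeper_hosts)
    zookeeper_quorum += ':' + zookeeper_clientPort
    print(zookeeper_quorum)
    # c6401.ambari.apache.org:2181,c6402.ambari.apache.org:2181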
 
 #hadoop params
 
diff --git a/contrib/management-packs/odpi-ambari-mpack/src/main/resources/stacks/ODPi/2.0/services/HIVE/package/scripts/params_linux.py b/contrib/management-packs/odpi-ambari-mpack/src/main/resources/stacks/ODPi/2.0/services/HIVE/package/scripts/params_linux.py
index 1e4487d..56d5c4d 100755
--- a/contrib/management-packs/odpi-ambari-mpack/src/main/resources/stacks/ODPi/2.0/services/HIVE/package/scripts/params_linux.py
+++ b/contrib/management-packs/odpi-ambari-mpack/src/main/resources/stacks/ODPi/2.0/services/HIVE/package/scripts/params_linux.py
@@ -509,6 +509,15 @@ if has_metric_collector:
 metrics_report_interval = default("/configurations/ams-site/timeline.metrics.sink.report.interval", 60)
 metrics_collection_period = default("/configurations/ams-site/timeline.metrics.sink.collection.period", 10)
 
+host_in_memory_aggregation = default("/configurations/ams-site/timeline.metrics.host.inmemory.aggregation", True)
+host_in_memory_aggregation_port = default("/configurations/ams-site/timeline.metrics.host.inmemory.aggregation.port", 61888)
+is_aggregation_https_enabled = False
+if default("/configurations/ams-site/timeline.metrics.host.inmemory.aggregation.http.policy", "HTTP_ONLY") == "HTTPS_ONLY":
+  host_in_memory_aggregation_protocol = 'https'
+  is_aggregation_https_enabled = True
+else:
+  host_in_memory_aggregation_protocol = 'http'
+
 ########################################################
 ############# Atlas related params #####################
 ########################################################
diff --git a/contrib/management-packs/odpi-ambari-mpack/src/main/resources/stacks/ODPi/2.0/services/HIVE/package/templates/hadoop-metrics2-hivemetastore.properties.j2 b/contrib/management-packs/odpi-ambari-mpack/src/main/resources/stacks/ODPi/2.0/services/HIVE/package/templates/hadoop-metrics2-hivemetastore.properties.j2
index e4d88bc..d4573c3 100755
--- a/contrib/management-packs/odpi-ambari-mpack/src/main/resources/stacks/ODPi/2.0/services/HIVE/package/templates/hadoop-metrics2-hivemetastore.properties.j2
+++ b/contrib/management-packs/odpi-ambari-mpack/src/main/resources/stacks/ODPi/2.0/services/HIVE/package/templates/hadoop-metrics2-hivemetastore.properties.j2
@@ -48,7 +48,13 @@
   *.sink.timeline.truststore.type = {{metric_truststore_type}}
   *.sink.timeline.truststore.password = {{metric_truststore_password}}
 
-  hivemetastore.sink.timeline.collector={{metric_collector_protocol}}://{{metric_collector_host}}:{{metric_collector_port}}
-
+  hivemetastore.sink.timeline.collector.hosts={{ams_collector_hosts}}
+  hivemetastore.sink.timeline.port={{metric_collector_port}}
+  hivemetastore.sink.timeline.protocol={{metric_collector_protocol}}
+  hivemetastore.sink.timeline.host_in_memory_aggregation = {{host_in_memory_aggregation}}
+  hivemetastore.sink.timeline.host_in_memory_aggregation_port = {{host_in_memory_aggregation_port}}
+  {% if is_aggregation_https_enabled %}
+    hivemetastore.sink.timeline.host_in_memory_aggregation_protocol = {{host_in_memory_aggregation_protocol}}
+  {% endif %}
 
 {% endif %}
diff --git a/contrib/management-packs/odpi-ambari-mpack/src/main/resources/stacks/ODPi/2.0/services/HIVE/package/templates/hadoop-metrics2-hiveserver2.properties.j2 b/contrib/management-packs/odpi-ambari-mpack/src/main/resources/stacks/ODPi/2.0/services/HIVE/package/templates/hadoop-metrics2-hiveserver2.properties.j2
index b5c4891..c67d002 100755
--- a/contrib/management-packs/odpi-ambari-mpack/src/main/resources/stacks/ODPi/2.0/services/HIVE/package/templates/hadoop-metrics2-hiveserver2.properties.j2
+++ b/contrib/management-packs/odpi-ambari-mpack/src/main/resources/stacks/ODPi/2.0/services/HIVE/package/templates/hadoop-metrics2-hiveserver2.properties.j2
@@ -48,7 +48,13 @@
   *.sink.timeline.truststore.type = {{metric_truststore_type}}
   *.sink.timeline.truststore.password = {{metric_truststore_password}}
 
-  hiveserver2.sink.timeline.collector={{metric_collector_protocol}}://{{metric_collector_host}}:{{metric_collector_port}}
-
+  hiveserver2.sink.timeline.collector.hosts={{ams_collector_hosts}}
+  hiveserver2.sink.timeline.port={{metric_collector_port}}
+  hiveserver2.sink.timeline.protocol={{metric_collector_protocol}}
+  hiveserver2.sink.timeline.host_in_memory_aggregation = {{host_in_memory_aggregation}}
+  hiveserver2.sink.timeline.host_in_memory_aggregation_port = {{host_in_memory_aggregation_port}}
+  {% if is_aggregation_https_enabled %}
+    hiveserver2.sink.timeline.host_in_memory_aggregation_protocol = {{host_in_memory_aggregation_protocol}}
+  {% endif %}
 
 {% endif %}
diff --git a/contrib/management-packs/odpi-ambari-mpack/src/main/resources/stacks/ODPi/2.0/services/HIVE/package/templates/hadoop-metrics2-llapdaemon.j2 b/contrib/management-packs/odpi-ambari-mpack/src/main/resources/stacks/ODPi/2.0/services/HIVE/package/templates/hadoop-metrics2-llapdaemon.j2
index 1d75ccf..cd23e8a 100755
--- a/contrib/management-packs/odpi-ambari-mpack/src/main/resources/stacks/ODPi/2.0/services/HIVE/package/templates/hadoop-metrics2-llapdaemon.j2
+++ b/contrib/management-packs/odpi-ambari-mpack/src/main/resources/stacks/ODPi/2.0/services/HIVE/package/templates/hadoop-metrics2-llapdaemon.j2
@@ -47,6 +47,13 @@
   *.sink.timeline.truststore.type = {{metric_truststore_type}}
   *.sink.timeline.truststore.password = {{metric_truststore_password}}
 
-  llapdaemon.sink.timeline.collector={{metric_collector_protocol}}://{{metric_collector_host}}:{{metric_collector_port}}
+  llapdaemon.sink.timeline.collector.hosts={{ams_collector_hosts}}
+  llapdaemon.sink.timeline.port={{metric_collector_port}}
+  llapdaemon.sink.timeline.protocol={{metric_collector_protocol}}
+  llapdaemon.sink.timeline.host_in_memory_aggregation = {{host_in_memory_aggregation}}
+  llapdaemon.sink.timeline.host_in_memory_aggregation_port = {{host_in_memory_aggregation_port}}
+  {% if is_aggregation_https_enabled %}
+    llapdaemon.sink.timeline.host_in_memory_aggregation_protocol = {{host_in_memory_aggregation_protocol}}
+  {% endif %}
 
-{% endif %}
\ No newline at end of file
+{% endif %}
diff --git a/contrib/management-packs/odpi-ambari-mpack/src/main/resources/stacks/ODPi/2.0/services/HIVE/package/templates/hadoop-metrics2-llaptaskscheduler.j2 b/contrib/management-packs/odpi-ambari-mpack/src/main/resources/stacks/ODPi/2.0/services/HIVE/package/templates/hadoop-metrics2-llaptaskscheduler.j2
index 5ab787c..9469443 100755
--- a/contrib/management-packs/odpi-ambari-mpack/src/main/resources/stacks/ODPi/2.0/services/HIVE/package/templates/hadoop-metrics2-llaptaskscheduler.j2
+++ b/contrib/management-packs/odpi-ambari-mpack/src/main/resources/stacks/ODPi/2.0/services/HIVE/package/templates/hadoop-metrics2-llaptaskscheduler.j2
@@ -47,6 +47,13 @@
   *.sink.timeline.truststore.type = {{metric_truststore_type}}
   *.sink.timeline.truststore.password = {{metric_truststore_password}}
 
-  llaptaskscheduler.sink.timeline.collector={{metric_collector_protocol}}://{{metric_collector_host}}:{{metric_collector_port}}
+  llaptaskscheduler.sink.timeline.collector.hosts={{ams_collector_hosts}}
+  llaptaskscheduler.sink.timeline.port={{metric_collector_port}}
+  llaptaskscheduler.sink.timeline.protocol={{metric_collector_protocol}}
+  llaptaskscheduler.sink.timeline.host_in_memory_aggregation = {{host_in_memory_aggregation}}
+  llaptaskscheduler.sink.timeline.host_in_memory_aggregation_port = {{host_in_memory_aggregation_port}}
+  {% if is_aggregation_https_enabled %}
+    llaptaskscheduler.sink.timeline.host_in_memory_aggregation_protocol = {{host_in_memory_aggregation_protocol}}
+  {% endif %}
 
 {% endif %}
\ No newline at end of file


[ambari] 26/39: AMBARI-22359 : Fix Serialization issues in Metric Definition Service (avijayan).


avijayan pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/ambari.git

commit afa154ba10a1e02956efe31238cc96a7cd032c8e
Author: Aravindan Vijayan <av...@hortonworks.com>
AuthorDate: Thu Nov 2 14:41:38 2017 -0700

    AMBARI-22359 : Fix Serialization issues in Metric Definition Service (avijayan).
---
 .../src/main/resources/config.yml                  |  4 +-
 .../adservice/app/AnomalyDetectionAppConfig.scala  | 10 ++---
 .../ambari/metrics/adservice/common/Season.scala   |  4 +-
 .../MetricCollectorConfiguration.scala             | 10 -----
 ... => MetricDefinitionServiceConfiguration.scala} |  2 +-
 .../adservice/db/PhoenixAnomalyStoreAccessor.scala |  8 ++--
 .../adservice/metadata/ADMetadataProvider.scala    | 12 ++---
 .../adservice/metadata/MetricDefinition.scala      | 52 ++++++++++++++++++----
 ...Service.scala => MetricDefinitionService.scala} |  2 +-
 ...mpl.scala => MetricDefinitionServiceImpl.scala} | 34 ++++++++++++--
 .../metadata/MetricSourceDefinition.scala          | 38 +---------------
 .../app/AnomalyDetectionAppConfigTest.scala        |  2 +-
 .../metrics/adservice/common/SeasonTest.scala      |  4 +-
 ...est.scala => MetricDefinitionServiceTest.scala} | 34 +++++++-------
 .../metadata/MetricSourceDefinitionTest.scala      | 11 +++--
 15 files changed, 126 insertions(+), 101 deletions(-)

diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/resources/config.yml b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/resources/config.yml
index 6953745..920c50c 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/resources/config.yml
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/resources/config.yml
@@ -20,8 +20,8 @@ server:
 logging:
   type: external
 
-metricManagerService:
-  inputDefinitionDirectory: /etc/adservice/conf/input-definitions-directory
+metricDefinitionService:
+  inputDefinitionDirectory: /etc/ambari-metrics-anomaly-detection/conf
 
 metricsCollector:
   hostPortList: host1:6188,host2:6188
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionAppConfig.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionAppConfig.scala
index be8d027..c1ef0d1 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionAppConfig.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionAppConfig.scala
@@ -20,7 +20,7 @@ package org.apache.ambari.metrics.adservice.app
 
 import javax.validation.Valid
 
-import org.apache.ambari.metrics.adservice.configuration.{AdServiceConfiguration, HBaseConfiguration, MetricCollectorConfiguration, MetricManagerServiceConfiguration}
+import org.apache.ambari.metrics.adservice.configuration.{AdServiceConfiguration, HBaseConfiguration, MetricCollectorConfiguration, MetricDefinitionServiceConfiguration}
 
 import com.fasterxml.jackson.annotation.JsonProperty
 
@@ -35,7 +35,7 @@ class AnomalyDetectionAppConfig extends Configuration {
    Metric Definition Service configuration
     */
   @Valid
-  private val metricManagerServiceConfiguration = new MetricManagerServiceConfiguration
+  private val metricDefinitionServiceConfiguration = new MetricDefinitionServiceConfiguration
 
   @Valid
   private val metricCollectorConfiguration = new MetricCollectorConfiguration
@@ -53,9 +53,9 @@ class AnomalyDetectionAppConfig extends Configuration {
     HBaseConfiguration.getHBaseConf
   }
 
-  @JsonProperty("metricManagerService")
-  def getMetricManagerServiceConfiguration: MetricManagerServiceConfiguration = {
-    metricManagerServiceConfiguration
+  @JsonProperty("metricDefinitionService")
+  def getMetricDefinitionServiceConfiguration: MetricDefinitionServiceConfiguration = {
+    metricDefinitionServiceConfiguration
   }
 
   @JsonProperty("adQueryService")
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/common/Season.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/common/Season.scala
index aba2587..f875e3b 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/common/Season.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/common/Season.scala
@@ -112,11 +112,11 @@ object Season {
     validSeasons.toList
   }
 
-  def serialize(season: Season) : String = {
+  def toJson(season: Season) : String = {
     mapper.writeValueAsString(season)
   }
 
-  def deserialize(seasonString: String) : Season = {
+  def fromJson(seasonString: String) : Season = {
     mapper.readValue[Season](seasonString)
   }
 }
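
For reference, the rename is behavior-preserving: toJson/fromJson still delegate to the same Jackson mapper. A minimal round-trip sketch (mirroring SeasonTest further down; assumes the Season/Range API from this patch series):

    import java.util.Calendar
    import org.apache.ambari.metrics.adservice.common.{Range, Season}

    // A weekday, business-hours season serialized to JSON and restored.
    val season: Season = Season(Range(Calendar.MONDAY, Calendar.FRIDAY), Range(9, 17))
    val json: String = Season.toJson(season)      // formerly Season.serialize
    val restored: Season = Season.fromJson(json)  // formerly Season.deserialize
    assert(season == restored)
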
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/configuration/MetricCollectorConfiguration.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/configuration/MetricCollectorConfiguration.scala
index 50a0b72..9418897 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/configuration/MetricCollectorConfiguration.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/configuration/MetricCollectorConfiguration.scala
@@ -39,14 +39,4 @@ class MetricCollectorConfiguration {
   @JsonProperty
   def getMetadataEndpoint: String = metadataEndpoint
 
-  @JsonProperty
-  def setHostPortList(hostPortList: String): Unit = {
-    this.hostPortList = hostPortList
-  }
-
-  @JsonProperty
-  def setMetadataEndpoint(metadataEndpoint: String): Unit = {
-    this.metadataEndpoint = metadataEndpoint
-  }
-
 }
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/configuration/MetricManagerServiceConfiguration.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/configuration/MetricDefinitionServiceConfiguration.scala
similarity index 96%
rename from ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/configuration/MetricManagerServiceConfiguration.scala
rename to ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/configuration/MetricDefinitionServiceConfiguration.scala
index e5960d5..b560713 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/configuration/MetricManagerServiceConfiguration.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/configuration/MetricDefinitionServiceConfiguration.scala
@@ -24,7 +24,7 @@ import com.fasterxml.jackson.annotation.JsonProperty
 /**
   * Class to capture the Metric Definition Service configuration.
   */
-class MetricManagerServiceConfiguration {
+class MetricDefinitionServiceConfiguration {
 
   @NotNull
   private val inputDefinitionDirectory: String = ""
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/PhoenixAnomalyStoreAccessor.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/PhoenixAnomalyStoreAccessor.scala
index 1191e90..36aea21 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/PhoenixAnomalyStoreAccessor.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/PhoenixAnomalyStoreAccessor.scala
@@ -94,11 +94,11 @@ object PhoenixAnomalyStoreAccessor  {
           val timestamp: Long = rs.getLong("ANOMALY_TIMESTAMP")
           val metricValue: Double = rs.getDouble("METRIC_VALUE")
           val methodType: AnomalyDetectionMethod = AnomalyDetectionMethod.withName(rs.getString("METHOD_NAME"))
-          val season: Season = Season.deserialize(rs.getString("SEASONAL_INFO"))
+          val season: Season = Season.fromJson(rs.getString("SEASONAL_INFO"))
           val anomalyScore: Double = rs.getDouble("ANOMALY_SCORE")
           val modelSnapshot: String = rs.getString("MODEL_PARAMETERS")
 
-          val metricKey: MetricKey = null //MetricManager.getMetricKeyFromUuid(uuid)
+          val metricKey: MetricKey = null //MetricManager.getMetricKeyFromUuid(uuid) //TODO
           val anomalyInstance: SingleMetricAnomalyInstance = new PointInTimeAnomalyInstance(metricKey, timestamp,
             metricValue, methodType, anomalyScore, season, modelSnapshot)
           anomalies.+=(anomalyInstance)
@@ -111,11 +111,11 @@ object PhoenixAnomalyStoreAccessor  {
           val referenceStart: Long = rs.getLong("TEST_PERIOD_START")
           val referenceEnd: Long = rs.getLong("TEST_PERIOD_END")
           val methodType: AnomalyDetectionMethod = AnomalyDetectionMethod.withName(rs.getString("METHOD_NAME"))
-          val season: Season = Season.deserialize(rs.getString("SEASONAL_INFO"))
+          val season: Season = Season.fromJson(rs.getString("SEASONAL_INFO"))
           val anomalyScore: Double = rs.getDouble("ANOMALY_SCORE")
           val modelSnapshot: String = rs.getString("MODEL_PARAMETERS")
 
-          val metricKey: MetricKey = null //MetricManager.getMetricKeyFromUuid(uuid)
+          val metricKey: MetricKey = null //MetricManager.getMetricKeyFromUuid(uuid) //TODO
           val anomalyInstance: SingleMetricAnomalyInstance = TrendAnomalyInstance(metricKey,
             TimeRange(anomalyStart, anomalyEnd),
             TimeRange(referenceStart, referenceEnd),
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/ADMetadataProvider.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/ADMetadataProvider.scala
index 801c5f5..3bcf4b0 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/ADMetadataProvider.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/ADMetadataProvider.scala
@@ -56,11 +56,13 @@ class ADMetadataProvider extends MetricMetadataProvider {
     val metricKeySet: scala.collection.mutable.Set[MetricKey] = scala.collection.mutable.Set.empty[MetricKey]
 
     for (metricDef <- metricSourceDefinition.metricDefinitions) {
-      for (hostPort <- metricCollectorHostPorts) {
-        val metricKeys: Set[MetricKey] = getKeysFromMetricsCollector(hostPort + metricMetadataPath, metricDef)
-        if (metricKeys != null) {
-          keysMap += (metricDef -> metricKeys)
-          metricKeySet.++(metricKeys)
+      if (metricDef.isValid) { //Skip requesting metric keys for invalid definitions.
+        for (hostPort <- metricCollectorHostPorts) {
+          val metricKeys: Set[MetricKey] = getKeysFromMetricsCollector(hostPort + metricMetadataPath, metricDef)
+          if (metricKeys != null) {
+            keysMap += (metricDef -> metricKeys)
+            metricKeySet ++= metricKeys // ++= mutates the accumulator; .++ alone would discard its result
+          }
         }
       }
     }
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricDefinition.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricDefinition.scala
index 0a5e51f..036867b 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricDefinition.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricDefinition.scala
@@ -18,9 +18,12 @@
 
 package org.apache.ambari.metrics.adservice.metadata
 
+import org.apache.commons.lang3.StringUtils
 /*
    {
        "metric-name": "mem_free",
+       "appId" : "HOST",
+       "hosts" : ["h1","h2"],
        "metric-description" : "Free memory on a Host.",
        "troubleshooting-info" : "Sudden drop / hike in free memory on a host.",
        "static-threshold" : 10,
@@ -28,12 +31,33 @@ package org.apache.ambari.metrics.adservice.metadata
 }
  */
 
-case class MetricDefinition (metricName: String,
-                             appId: String,
-                             hosts: List[String],
-                             metricDescription: String,
-                             troubleshootingInfo: String,
-                             staticThreshold: Double)  {
+@SerialVersionUID(1002L)
+class MetricDefinition extends Serializable {
+
+  var metricName: String = _
+  var appId: String = _
+  var hosts: List[String] = List.empty[String]
+  var metricDescription: String = ""
+  var troubleshootingInfo: String = ""
+  var staticThreshold: Double = _
+
+  // A Metric definition is valid if we can resolve a metricName and an appId (defined or inherited) at runtime.
+  private var valid : Boolean = true
+
+  def this(metricName: String,
+           appId: String,
+           hosts: List[String],
+           metricDescription: String,
+           troubleshootingInfo: String,
+           staticThreshold: Double) = {
+    this() // Scala requires an auxiliary constructor to invoke another constructor first
+    this.metricName = metricName
+    this.appId = appId
+    this.hosts = hosts
+    this.metricDescription = metricDescription
+    this.troubleshootingInfo = troubleshootingInfo
+    this.staticThreshold = staticThreshold
+  }
 
   @Override
   override def equals(obj: scala.Any): Boolean = {
@@ -46,10 +70,20 @@ case class MetricDefinition (metricName: String,
     if (!(metricName == that.metricName))
       return false
 
-    if (!(appId == that.appId))
-      return false
+    if (StringUtils.isNotEmpty(appId)) {
+      appId == that.appId
+    }
+    else {
+      StringUtils.isEmpty(that.appId)
+    }
+  }
+
+  def isValid: Boolean = {
+    valid
+  }
 
-    true
+  def makeInvalid() : Unit = {
+    valid = false
   }
 }
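
A note on the rewritten equals: it now returns the appId comparison directly instead of falling through to true, so two definitions match only when their metricNames are equal and their appIds are both set and equal (or both empty). A small sketch with hypothetical values, using the auxiliary constructor above:

    import org.apache.ambari.metrics.adservice.metadata.MetricDefinition

    // hosts, descriptions and thresholds are deliberately ignored by equals.
    val a = new MetricDefinition("mem_free", "HOST", List("h1"), "", "", 10)
    val b = new MetricDefinition("mem_free", "HOST", List("h2"), "", "", 20)
    val c = new MetricDefinition("mem_free", null, List("h1"), "", "", 10)

    assert(a == b)  // same metricName and appId
    assert(a != c)  // an unset appId matches only another unset appId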
 
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricManagerService.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricDefinitionService.scala
similarity index 98%
rename from ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricManagerService.scala
rename to ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricDefinitionService.scala
index 12bd7e4..635dc60 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricManagerService.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricDefinitionService.scala
@@ -17,7 +17,7 @@
 
 package org.apache.ambari.metrics.adservice.metadata
 
-trait MetricManagerService {
+trait MetricDefinitionService {
 
   /**
     * Given a 'UUID', return the metric key associated with it.
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricManagerServiceImpl.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricDefinitionServiceImpl.scala
similarity index 83%
rename from ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricManagerServiceImpl.scala
rename to ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricDefinitionServiceImpl.scala
index ce02775..ffa9944 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricManagerServiceImpl.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricDefinitionServiceImpl.scala
@@ -23,7 +23,7 @@ import org.apache.ambari.metrics.adservice.db.AdMetadataStoreAccessor
 import com.google.inject.{Inject, Singleton}
 
 @Singleton
-class MetricManagerServiceImpl extends MetricManagerService {
+class MetricDefinitionServiceImpl extends MetricDefinitionService {
 
   @Inject
   var adMetadataStoreAccessor: AdMetadataStoreAccessor = _
@@ -66,15 +66,21 @@ class MetricManagerServiceImpl extends MetricManagerService {
 
     //Load definitions from metadata store
     val definitionsFromStore: List[MetricSourceDefinition] = adMetadataStoreAccessor.getSavedInputDefinitions
+    for (definition <- definitionsFromStore) {
+      validateAndSanitizeMetricSourceDefinition(definition)
+    }
 
     //Load definitions from configs
     val definitionsFromConfig: List[MetricSourceDefinition] = getInputDefinitionsFromConfig
+    for (definition <- definitionsFromConfig) {
+      validateAndSanitizeMetricSourceDefinition(definition)
+    }
 
     //Union the 2 sources, with DB taking precedence.
     //Save new definition list to DB.
     metricSourceDefinitionMap = metricSourceDefinitionMap.++(combineDefinitionSources(definitionsFromConfig, definitionsFromStore))
 
-      //Reach out to AMS Metadata and get Metric Keys. Pass in List<CD> and get back Map<MD,Set<MK>>
+    //Reach out to AMS Metadata and get Metric Keys. Pass in List<CD> and get back (Map<MD,Set<MK>>, Set<MK>)
     for (definition <- metricSourceDefinitionMap.values) {
       val (definitionKeyMap: Map[MetricDefinition, Set[MetricKey]], keys: Set[MetricKey])= metricMetadataProvider.getMetricKeysForDefinitions(definition)
       metricDefinitionMetricKeyMap = metricDefinitionMetricKeyMap.++(definitionKeyMap)
@@ -173,11 +179,33 @@ class MetricManagerServiceImpl extends MetricManagerService {
   }
 
   def getInputDefinitionsFromConfig: List[MetricSourceDefinition] = {
-    val configDirectory = configuration.getMetricManagerServiceConfiguration.getInputDefinitionDirectory
+    val configDirectory = configuration.getMetricDefinitionServiceConfiguration.getInputDefinitionDirectory
     InputMetricDefinitionParser.parseInputDefinitionsFromDirectory(configDirectory)
   }
 
   def setAdMetadataStoreAccessor (adMetadataStoreAccessor: AdMetadataStoreAccessor) : Unit = {
     this.adMetadataStoreAccessor = adMetadataStoreAccessor
   }
+
+  def validateAndSanitizeMetricSourceDefinition(metricSourceDefinition: MetricSourceDefinition): Unit = {
+    val sourceLevelAppId: String = metricSourceDefinition.appId
+    val sourceLevelHostList: List[String] = metricSourceDefinition.hosts
+
+    for (metricDef <- metricSourceDefinition.metricDefinitions.toList) {
+      if (metricDef.appId == null) {
+        if (sourceLevelAppId == null || sourceLevelAppId.isEmpty) {
+          metricDef.makeInvalid()
+        } else {
+          metricDef.appId = sourceLevelAppId
+        }
+      }
+
+      if (metricDef.isValid && metricDef.hosts.isEmpty) {
+        if (sourceLevelHostList != null && sourceLevelHostList.nonEmpty) {
+          metricDef.hosts = sourceLevelHostList
+        }
+      }
+    }
+  }
+
 }
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricSourceDefinition.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricSourceDefinition.scala
index 60198e0..47b1499 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricSourceDefinition.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricSourceDefinition.scala
@@ -22,10 +22,6 @@ import javax.xml.bind.annotation.XmlRootElement
 import org.apache.ambari.metrics.adservice.metadata.MetricSourceDefinitionType.MetricSourceDefinitionType
 import org.apache.ambari.metrics.adservice.model.AnomalyType.AnomalyType
 
-import com.fasterxml.jackson.databind.ObjectMapper
-import com.fasterxml.jackson.module.scala.DefaultScalaModule
-import com.fasterxml.jackson.module.scala.experimental.ScalaObjectMapper
-
 /*
 {
  "definition-name": "host-memory",
@@ -45,27 +41,10 @@ import com.fasterxml.jackson.module.scala.experimental.ScalaObjectMapper
 }
 */
 
-/*
-
-On Startup
-Read input definitions directory, parse the JSONs
-Create / Update the metric definitions in DB
-Convert metric definitions to Map<MetricKey, MetricDefinition>
-
-What to do want to have in memory?
-Map of Metric Key -> List<Component Definitions>
-
-What do we use metric definitions for?
-Anomaly GET - Associate definition information as well.
-Definition CRUD - Get definition given definition name
-Get set of metrics that are being tracked
-Return definition information for a metric key
-Given a metric definition name, return set of metrics.
-
-*/
 
+@SerialVersionUID(10001L)
 @XmlRootElement
-class MetricSourceDefinition {
+class MetricSourceDefinition extends Serializable {
 
   var definitionName: String = _
   var appId: String = _
@@ -103,17 +82,4 @@ class MetricSourceDefinition {
     val that = obj.asInstanceOf[MetricSourceDefinition]
     definitionName.equals(that.definitionName)
   }
-}
-
-object MetricSourceDefinition {
-  val mapper = new ObjectMapper() with ScalaObjectMapper
-  mapper.registerModule(DefaultScalaModule)
-
-  def serialize(definition: MetricSourceDefinition) : String = {
-    mapper.writeValueAsString(definition)
-  }
-
-  def deserialize(definitionString: String) : MetricSourceDefinition = {
-    mapper.readValue[MetricSourceDefinition](definitionString)
-  }
 }
\ No newline at end of file
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionAppConfigTest.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionAppConfigTest.scala
index 8e3a949..104ccea 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionAppConfigTest.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionAppConfigTest.scala
@@ -44,7 +44,7 @@ class AnomalyDetectionAppConfigTest extends FunSuite {
 
     assert(config.isInstanceOf[AnomalyDetectionAppConfig])
 
-    assert(config.getMetricManagerServiceConfiguration.getInputDefinitionDirectory == "/etc/adservice/conf/input-definitions-directory")
+    assert(config.getMetricDefinitionServiceConfiguration.getInputDefinitionDirectory == "/etc/ambari-metrics-anomaly-detection/conf")
 
     assert(config.getMetricCollectorConfiguration.getHostPortList == "host1:6188,host2:6188")
 
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/common/SeasonTest.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/common/SeasonTest.scala
index 4d542e8..a823c73 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/common/SeasonTest.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/common/SeasonTest.scala
@@ -78,9 +78,9 @@ class SeasonTest extends FunSuite {
   test("testSerialize") {
     val season1 : Season = Season(Range(Calendar.MONDAY,Calendar.FRIDAY), Range(9,17))
 
-    val seasonString = Season.serialize(season1)
+    val seasonString = Season.toJson(season1)
 
-    val season2 : Season = Season.deserialize(seasonString)
+    val season2 : Season = Season.fromJson(seasonString)
     assert(season1 == season2)
 
     val season3 : Season = Season(Range(Calendar.MONDAY,Calendar.THURSDAY), Range(9,17))
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/metadata/MetricManagerServiceTest.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/metadata/MetricDefinitionServiceTest.scala
similarity index 73%
rename from ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/metadata/MetricManagerServiceTest.scala
rename to ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/metadata/MetricDefinitionServiceTest.scala
index 8e19a0f..d3454f2 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/metadata/MetricManagerServiceTest.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/metadata/MetricDefinitionServiceTest.scala
@@ -24,7 +24,7 @@ import org.easymock.EasyMock.{anyObject, expect, expectLastCall, replay}
 import org.scalatest.FunSuite
 import org.scalatest.easymock.EasyMockSugar
 
-class MetricManagerServiceTest extends FunSuite {
+class MetricDefinitionServiceTest extends FunSuite {
 
   test("testAddDefinition") {
 
@@ -42,14 +42,14 @@ class MetricManagerServiceTest extends FunSuite {
     expect(adMetadataStoreAccessor.saveInputDefinition(newDef)).andReturn(true).once()
     replay(adMetadataStoreAccessor)
 
-    val metricManagerService: MetricManagerServiceImpl = new MetricManagerServiceImpl(new AnomalyDetectionAppConfig, adMetadataStoreAccessor)
+    val metricDefinitionService: MetricDefinitionServiceImpl = new MetricDefinitionServiceImpl(new AnomalyDetectionAppConfig, adMetadataStoreAccessor)
 
-    metricManagerService.setAdMetadataStoreAccessor(adMetadataStoreAccessor)
+    metricDefinitionService.setAdMetadataStoreAccessor(adMetadataStoreAccessor)
 
-    metricManagerService.addDefinition(newDef)
+    metricDefinitionService.addDefinition(newDef)
 
-    assert(metricManagerService.metricSourceDefinitionMap.size == 4)
-    assert(metricManagerService.metricSourceDefinitionMap.get("testDefinition") != null)
+    assert(metricDefinitionService.metricSourceDefinitionMap.size == 4)
+    assert(metricDefinitionService.metricSourceDefinitionMap.get("testDefinition") != null)
   }
 
   test("testGetDefinitionByName") {
@@ -64,11 +64,11 @@ class MetricManagerServiceTest extends FunSuite {
     expect(adMetadataStoreAccessor.getSavedInputDefinitions).andReturn(definitions.toList).once()
     replay(adMetadataStoreAccessor)
 
-    val metricManagerService: MetricManagerServiceImpl = new MetricManagerServiceImpl(new AnomalyDetectionAppConfig, adMetadataStoreAccessor)
+    val metricDefinitionService: MetricDefinitionServiceImpl = new MetricDefinitionServiceImpl(new AnomalyDetectionAppConfig, adMetadataStoreAccessor)
 
-    metricManagerService.setAdMetadataStoreAccessor(adMetadataStoreAccessor)
+    metricDefinitionService.setAdMetadataStoreAccessor(adMetadataStoreAccessor)
     for (i <- 1 to 3) {
-      val definition: MetricSourceDefinition = metricManagerService.getDefinitionByName("TestDefinition" + i)
+      val definition: MetricSourceDefinition = metricDefinitionService.getDefinitionByName("TestDefinition" + i)
       assert(definition != null)
     }
   }
@@ -90,10 +90,10 @@ class MetricManagerServiceTest extends FunSuite {
     expect(adMetadataStoreAccessor.getSavedInputDefinitions).andReturn(definitions.toList).once()
     replay(adMetadataStoreAccessor)
 
-    val metricManagerService: MetricManagerServiceImpl = new MetricManagerServiceImpl(new AnomalyDetectionAppConfig, adMetadataStoreAccessor)
+    val metricDefinitionService: MetricDefinitionServiceImpl = new MetricDefinitionServiceImpl(new AnomalyDetectionAppConfig, adMetadataStoreAccessor)
 
-    metricManagerService.setAdMetadataStoreAccessor(adMetadataStoreAccessor)
-    val definitionsByAppId: List[MetricSourceDefinition] = metricManagerService.getDefinitionByAppId("testAppId")
+    metricDefinitionService.setAdMetadataStoreAccessor(adMetadataStoreAccessor)
+    val definitionsByAppId: List[MetricSourceDefinition] = metricDefinitionService.getDefinitionByAppId("testAppId")
     assert(definitionsByAppId.size == 2)
   }
 
@@ -115,15 +115,15 @@ class MetricManagerServiceTest extends FunSuite {
     expect(adMetadataStoreAccessor.removeInputDefinition(anyObject[String])).andReturn(true).times(2)
     replay(adMetadataStoreAccessor)
 
-    val metricManagerService: MetricManagerServiceImpl = new MetricManagerServiceImpl(new AnomalyDetectionAppConfig, adMetadataStoreAccessor)
+    val metricDefinitionService: MetricDefinitionServiceImpl = new MetricDefinitionServiceImpl(new AnomalyDetectionAppConfig, adMetadataStoreAccessor)
 
-    metricManagerService.setAdMetadataStoreAccessor(adMetadataStoreAccessor)
+    metricDefinitionService.setAdMetadataStoreAccessor(adMetadataStoreAccessor)
 
-    var success: Boolean = metricManagerService.deleteDefinitionByName("TestDefinition1")
+    var success: Boolean = metricDefinitionService.deleteDefinitionByName("TestDefinition1")
     assert(success)
-    success = metricManagerService.deleteDefinitionByName("TestDefinition2")
+    success = metricDefinitionService.deleteDefinitionByName("TestDefinition2")
     assert(!success)
-    success = metricManagerService.deleteDefinitionByName("TestDefinition3")
+    success = metricDefinitionService.deleteDefinitionByName("TestDefinition3")
     assert(success)
   }
 
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/metadata/MetricSourceDefinitionTest.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/metadata/MetricSourceDefinitionTest.scala
index c4d639c..0149673 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/metadata/MetricSourceDefinitionTest.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/metadata/MetricSourceDefinitionTest.scala
@@ -17,6 +17,7 @@
 
 package org.apache.ambari.metrics.adservice.metadata
 
+import org.apache.commons.lang.SerializationUtils
 import org.scalatest.FunSuite
 
 class MetricSourceDefinitionTest extends FunSuite {
@@ -46,6 +47,10 @@ class MetricSourceDefinitionTest extends FunSuite {
     val msd1 : MetricSourceDefinition = new MetricSourceDefinition("testDefinition", "testAppId", MetricSourceDefinitionType.API)
     val msd2 : MetricSourceDefinition = new MetricSourceDefinition("testDefinition", "testAppId2", MetricSourceDefinitionType.API)
     assert(msd1 == msd2)
+
+    val msd3 : MetricSourceDefinition = new MetricSourceDefinition("testDefinition1", "testAppId", MetricSourceDefinitionType.API)
+    val msd4 : MetricSourceDefinition = new MetricSourceDefinition("testDefinition2", "testAppId2", MetricSourceDefinitionType.API)
+    assert(msd3 != msd4)
   }
 
   test("testRemoveMetricDefinition") {
@@ -61,10 +66,10 @@ class MetricSourceDefinitionTest extends FunSuite {
 
   test("serializeDeserialize") {
     val msd : MetricSourceDefinition = new MetricSourceDefinition("testDefinition", "testAppId", MetricSourceDefinitionType.API)
-    val msdString: String = MetricSourceDefinition.serialize(msd)
-    assert(msdString.nonEmpty)
+    val msdByteArray: Array[Byte] = SerializationUtils.serialize(msd)
+    assert(msdByteArray.nonEmpty)
 
-    val msd2: MetricSourceDefinition = MetricSourceDefinition.deserialize(msdString)
+    val msd2: MetricSourceDefinition = SerializationUtils.deserialize(msdByteArray).asInstanceOf[MetricSourceDefinition]
     assert(msd2 != null)
     assert(msd == msd2)
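
With the Jackson helpers removed from MetricSourceDefinition, persistence now relies on plain Java serialization; a compact sketch of the round trip (assuming commons-lang on the classpath, per the test's new import). The final assert holds because equals compares definitionName only:

    import org.apache.ambari.metrics.adservice.metadata.{MetricSourceDefinition, MetricSourceDefinitionType}
    import org.apache.commons.lang.SerializationUtils

    val original = new MetricSourceDefinition("testDefinition", "testAppId", MetricSourceDefinitionType.API)
    val bytes: Array[Byte] = SerializationUtils.serialize(original)
    val copy = SerializationUtils.deserialize(bytes).asInstanceOf[MetricSourceDefinition]
    assert(original == copy)  // definitionName-based equality survives the round trip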
 


[ambari] 25/39: AMBARI-22348 : Metric Definition Service V1 Implementation. (avijayan)


avijayan pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/ambari.git

commit d580f272c39ee83d4212e0ae5080342ea3753372
Author: Aravindan Vijayan <av...@hortonworks.com>
AuthorDate: Wed Nov 1 15:38:20 2017 -0700

    AMBARI-22348 : Metric Definition Service V1 Implementation. (avijayan)
---
 .../pom.xml                                        |   6 +
 .../adservice/prototype/common/StatisticUtils.java |   3 -
 .../prototype/core/MetricSparkConsumer.java        |   7 +-
 .../prototype/core/PointInTimeADSystem.java        |   2 +-
 .../adservice/prototype/core/TrendADSystem.java    |   2 +-
 .../adservice/prototype/methods/MetricAnomaly.java |   2 -
 .../prototype/methods/hsdev/HsdevTechnique.java    |   5 +-
 .../prototype/methods/kstest/KSTechnique.java      |   2 +-
 .../testing/utilities/MetricAnomalyTester.java     |  17 --
 .../src/main/resources/config.yml                  |  27 +++
 .../adservice/app/AnomalyDetectionApp.scala        |   1 +
 .../adservice/app/AnomalyDetectionAppConfig.scala  |  53 +++++-
 .../adservice/app/AnomalyDetectionAppModule.scala  |   1 +
 .../adservice/common/ADServiceConfiguration.scala  |  74 ---------
 .../ambari/metrics/adservice/common/Range.scala    |  44 +++++
 .../ambari/metrics/adservice/common/Season.scala   | 122 ++++++++++++++
 .../metrics/adservice/common/SeasonType.scala      |  24 +++
 .../metrics/adservice/common/TimeRange.scala       |  39 +++++
 .../configuration/AdServiceConfiguration.scala     |  40 +++++
 .../configuration/HBaseConfiguration.scala         |  54 ++++++
 .../MetricCollectorConfiguration.scala}            |  42 +++--
 .../MetricManagerServiceConfiguration.scala        |  34 ++++
 .../adservice/db/AdMetadataStoreAccessor.scala     |  53 ++++++
 .../adservice/db/AdMetadataStoreConstants.scala    |  39 +++++
 .../adservice/db/PhoenixAnomalyStoreAccessor.scala | 110 ++++++++++++-
 .../{common => db}/PhoenixQueryConstants.scala     |  23 ++-
 .../adservice/metadata/ADMetadataProvider.scala    | 112 +++++++++++++
 .../metadata/InputMetricDefinitionParser.scala     |  52 ++++++
 .../adservice/metadata/MetricDefinition.scala      |  69 ++++++++
 .../metrics/adservice/metadata/MetricKey.scala}    |  42 +++--
 .../adservice/metadata/MetricManagerService.scala  |  64 +++++++
 .../metadata/MetricManagerServiceImpl.scala        | 183 +++++++++++++++++++++
 .../metadata/MetricMetadataProvider.scala          |  31 ++++
 .../metadata/MetricSourceDefinition.scala          | 119 ++++++++++++++
 .../metadata/MetricSourceDefinitionType.scala      |  26 +++
 .../adservice/model/AnomalyDetectionMethod.scala   |  23 +++
 .../AnomalyType.scala}                             |  12 +-
 .../SingleMetricAnomalyInstance.scala}             |  15 +-
 .../adservice/resource/AnomalyResource.scala       |   4 +-
 .../resource/MetricDefinitionResource.scala        |  28 ++++
 .../metrics/adservice/resource/RootResource.scala  |   4 +-
 .../adservice/resource/SubsystemResource.scala     |  26 +++
 .../metrics/adservice/service/ADQueryService.scala |  12 ++
 .../adservice/service/ADQueryServiceImpl.scala     |  15 ++
 .../spark/prototype/MetricAnomalyDetector.scala    |  16 --
 .../spark/prototype/SparkPhoenixReader.scala       |   5 -
 .../pointintime/PointInTimeAnomalyInstance.scala   |  48 ++++++
 .../subsystem/trend/TrendAnomalyInstance.scala     |  29 ++++
 .../metrics/adservice/prototype/TestTukeys.java    |   2 +-
 .../app/AnomalyDetectionAppConfigTest.scala        |  54 ++++++
 .../adservice/app/DefaultADResourceSpecTest.scala  |   2 +-
 .../metrics/adservice/common/RangeTest.scala       |  37 +++++
 .../metrics/adservice/common/SeasonTest.scala      |  91 ++++++++++
 .../metadata/AMSMetadataProviderTest.scala         |  43 +++++
 .../metadata/MetricManagerServiceTest.scala        | 130 +++++++++++++++
 .../metadata/MetricSourceDefinitionTest.scala      |  72 ++++++++
 .../metrics2/sink/timeline/TimelineMetricKey.java  |  59 +++++++
 .../timeline/HBaseTimelineMetricsService.java      | 114 +++++++++----
 .../metrics/timeline/TimelineMetricStore.java      |  10 +-
 .../discovery/TimelineMetricMetadataManager.java   |  10 ++
 .../webapp/TimelineWebServices.java                |  44 ++++-
 .../metrics/timeline/TestTimelineMetricStore.java  |  11 +-
 62 files changed, 2208 insertions(+), 232 deletions(-)

diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/pom.xml b/ambari-metrics/ambari-metrics-anomaly-detection-service/pom.xml
index e96e957..44bdc1f 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/pom.xml
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/pom.xml
@@ -446,5 +446,11 @@
       <artifactId>metrics-core</artifactId>
       <version>3.2.5</version>
     </dependency>
+    <dependency>
+      <groupId>org.easymock</groupId>
+      <artifactId>easymock</artifactId>
+      <version>2.5</version>
+      <scope>test</scope>
+    </dependency>
   </dependencies>
 </project>
\ No newline at end of file
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/common/StatisticUtils.java b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/common/StatisticUtils.java
index 7f0aed3..0a22e50 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/common/StatisticUtils.java
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/common/StatisticUtils.java
@@ -18,10 +18,7 @@
 package org.apache.ambari.metrics.adservice.prototype.common;
 
 
-import java.util.ArrayList;
 import java.util.Arrays;
-import java.util.Collection;
-import java.util.Collections;
 
 public class StatisticUtils {
 
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/core/MetricSparkConsumer.java b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/core/MetricSparkConsumer.java
index e8257e5..addeda7 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/core/MetricSparkConsumer.java
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/core/MetricSparkConsumer.java
@@ -35,10 +35,15 @@ import org.apache.spark.streaming.api.java.JavaStreamingContext;
 import org.apache.spark.streaming.kafka.KafkaUtils;
 import scala.Tuple2;
 
-import java.util.*;
 import java.io.FileInputStream;
 import java.io.IOException;
 import java.io.InputStream;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Properties;
+import java.util.Set;
 import java.util.regex.Matcher;
 import java.util.regex.Pattern;
 
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/core/PointInTimeADSystem.java b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/core/PointInTimeADSystem.java
index 0a2271a..f379605 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/core/PointInTimeADSystem.java
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/core/PointInTimeADSystem.java
@@ -17,8 +17,8 @@
  */
 package org.apache.ambari.metrics.adservice.prototype.core;
 
-import org.apache.ambari.metrics.adservice.prototype.common.ResultSet;
 import org.apache.ambari.metrics.adservice.prototype.common.DataSeries;
+import org.apache.ambari.metrics.adservice.prototype.common.ResultSet;
 import org.apache.ambari.metrics.adservice.prototype.methods.ema.EmaModel;
 import org.apache.ambari.metrics.adservice.prototype.methods.ema.EmaTechnique;
 import org.apache.commons.logging.Log;
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/core/TrendADSystem.java b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/core/TrendADSystem.java
index f5ec83a..80212b3 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/core/TrendADSystem.java
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/core/TrendADSystem.java
@@ -17,9 +17,9 @@
  */
 package org.apache.ambari.metrics.adservice.prototype.core;
 
+import org.apache.ambari.metrics.adservice.prototype.common.DataSeries;
 import org.apache.ambari.metrics.adservice.prototype.methods.MetricAnomaly;
 import org.apache.ambari.metrics.adservice.prototype.methods.hsdev.HsdevTechnique;
-import org.apache.ambari.metrics.adservice.prototype.common.DataSeries;
 import org.apache.ambari.metrics.adservice.prototype.methods.kstest.KSTechnique;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/methods/MetricAnomaly.java b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/methods/MetricAnomaly.java
index 251603b..60ff11c 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/methods/MetricAnomaly.java
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/methods/MetricAnomaly.java
@@ -18,8 +18,6 @@
 package org.apache.ambari.metrics.adservice.prototype.methods;
 
 import java.io.Serializable;
-import java.util.HashMap;
-import java.util.Map;
 
 public class MetricAnomaly implements Serializable{
 
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/methods/hsdev/HsdevTechnique.java b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/methods/hsdev/HsdevTechnique.java
index 6facc99..855cc70 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/methods/hsdev/HsdevTechnique.java
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/methods/hsdev/HsdevTechnique.java
@@ -21,14 +21,15 @@ import org.apache.ambari.metrics.adservice.prototype.common.DataSeries;
 import org.apache.ambari.metrics.adservice.prototype.methods.MetricAnomaly;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
-import static org.apache.ambari.metrics.adservice.prototype.common.StatisticUtils.median;
-import static org.apache.ambari.metrics.adservice.prototype.common.StatisticUtils.sdev;
 
 import java.io.Serializable;
 import java.util.Date;
 import java.util.HashMap;
 import java.util.Map;
 
+import static org.apache.ambari.metrics.adservice.prototype.common.StatisticUtils.median;
+import static org.apache.ambari.metrics.adservice.prototype.common.StatisticUtils.sdev;
+
 public class HsdevTechnique implements Serializable {
 
   private Map<String, Double> hsdevMap;
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/methods/kstest/KSTechnique.java b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/methods/kstest/KSTechnique.java
index 4727c6f..0dc679e 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/methods/kstest/KSTechnique.java
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/methods/kstest/KSTechnique.java
@@ -20,8 +20,8 @@ package org.apache.ambari.metrics.adservice.prototype.methods.kstest;
 
 import org.apache.ambari.metrics.adservice.prototype.common.DataSeries;
 import org.apache.ambari.metrics.adservice.prototype.common.ResultSet;
-import org.apache.ambari.metrics.adservice.prototype.methods.MetricAnomaly;
 import org.apache.ambari.metrics.adservice.prototype.core.RFunctionInvoker;
+import org.apache.ambari.metrics.adservice.prototype.methods.MetricAnomaly;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/testing/utilities/MetricAnomalyTester.java b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/testing/utilities/MetricAnomalyTester.java
index d079e66..10b3a71 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/testing/utilities/MetricAnomalyTester.java
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/testing/utilities/MetricAnomalyTester.java
@@ -18,23 +18,6 @@
 
 package org.apache.ambari.metrics.adservice.prototype.testing.utilities;
 
-import org.apache.ambari.metrics.adservice.prototype.common.DataSeries;
-import org.apache.ambari.metrics.adservice.prototype.common.ResultSet;
-import org.apache.ambari.metrics.adservice.prototype.core.MetricsCollectorInterface;
-import org.apache.ambari.metrics.adservice.prototype.core.RFunctionInvoker;
-import org.apache.commons.collections.CollectionUtils;
-import org.apache.commons.lang.StringUtils;
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
-import org.apache.hadoop.metrics2.sink.timeline.TimelineMetrics;
-
-import java.net.InetAddress;
-import java.net.UnknownHostException;
-import java.util.HashMap;
-import java.util.Map;
-import java.util.TreeMap;
-
 /**
  * Class which was originally used to send test series from AMS to Spark through Kafka.
  */
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/resources/config.yml b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/resources/config.yml
index bd88d57..6953745 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/resources/config.yml
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/resources/config.yml
@@ -1,3 +1,15 @@
+#Licensed under the Apache License, Version 2.0 (the "License");
+#you may not use this file except in compliance with the License.
+#You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+#Unless required by applicable law or agreed to in writing, software
+#distributed under the License is distributed on an "AS IS" BASIS,
+#WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#See the License for the specific language governing permissions and
+#limitations under the License.
+
 server:
   applicationConnectors:
    - type: http
@@ -7,3 +19,18 @@ server:
 
 logging:
   type: external
+
+metricManagerService:
+  inputDefinitionDirectory: /etc/adservice/conf/input-definitions-directory
+
+metricsCollector:
+  hostPortList: host1:6188,host2:6188
+  metadataEndpoint: /v1/timeline/metrics/metadata/keys
+
+adQueryService:
+  anomalyDataTtl: 604800
+
+#subsystemService:
+#  spark:
+#  pointInTime:
+#  trend:
\ No newline at end of file
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionApp.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionApp.scala
index b7f217e..8b3a829 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionApp.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionApp.scala
@@ -28,6 +28,7 @@ import com.fasterxml.jackson.databind.{ObjectMapper, SerializationFeature}
 import com.fasterxml.jackson.datatype.joda.JodaModule
 import com.fasterxml.jackson.jaxrs.json.JacksonJaxbJsonProvider
 import com.fasterxml.jackson.module.scala.DefaultScalaModule
+
 import io.dropwizard.Application
 import io.dropwizard.setup.Environment
 
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionAppConfig.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionAppConfig.scala
index 9e6cc6d..be8d027 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionAppConfig.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionAppConfig.scala
@@ -1,7 +1,3 @@
-package org.apache.ambari.metrics.adservice.app
-
-import io.dropwizard.Configuration
-
 /**
   * Licensed to the Apache Software Foundation (ASF) under one
   * or more contributor license agreements.  See the NOTICE file
@@ -19,6 +15,55 @@ import io.dropwizard.Configuration
   * See the License for the specific language governing permissions and
   * limitations under the License.
   */
+
+package org.apache.ambari.metrics.adservice.app
+
+import javax.validation.Valid
+
+import org.apache.ambari.metrics.adservice.configuration.{AdServiceConfiguration, HBaseConfiguration, MetricCollectorConfiguration, MetricManagerServiceConfiguration}
+
+import com.fasterxml.jackson.annotation.JsonProperty
+
+import io.dropwizard.Configuration
+
+/**
+  * Top Level AD System Manager config items.
+  */
 class AnomalyDetectionAppConfig extends Configuration {
 
+  /*
+   Metric Definition Service configuration
+    */
+  @Valid
+  private val metricManagerServiceConfiguration = new MetricManagerServiceConfiguration
+
+  @Valid
+  private val metricCollectorConfiguration = new MetricCollectorConfiguration
+
+  /*
+   Anomaly Service configuration
+    */
+  @Valid
+  private val adServiceConfiguration = new AdServiceConfiguration
+
+  /*
+   HBase Conf
+    */
+  def getHBaseConf : org.apache.hadoop.conf.Configuration = {
+    HBaseConfiguration.getHBaseConf
+  }
+
+  @JsonProperty("metricManagerService")
+  def getMetricManagerServiceConfiguration: MetricManagerServiceConfiguration = {
+    metricManagerServiceConfiguration
+  }
+
+  @JsonProperty("adQueryService")
+  def getAdServiceConfiguration: AdServiceConfiguration = {
+    adServiceConfiguration
+  }
+
+  @JsonProperty("metricsCollector")
+  def getMetricCollectorConfiguration: MetricCollectorConfiguration = metricCollectorConfiguration
+
 }
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionAppModule.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionAppModule.scala
index 338c97b..7425a7e 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionAppModule.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionAppModule.scala
@@ -23,6 +23,7 @@ import org.apache.ambari.metrics.adservice.service.{ADQueryService, ADQueryServi
 import com.codahale.metrics.health.HealthCheck
 import com.google.inject.AbstractModule
 import com.google.inject.multibindings.Multibinder
+
 import io.dropwizard.setup.Environment
 
 class AnomalyDetectionAppModule(config: AnomalyDetectionAppConfig, env: Environment) extends AbstractModule {
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/common/ADServiceConfiguration.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/common/ADServiceConfiguration.scala
deleted file mode 100644
index 248c74e..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/common/ADServiceConfiguration.scala
+++ /dev/null
@@ -1,74 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.ambari.metrics.adservice.common
-
-import java.net.{MalformedURLException, URISyntaxException}
-
-import org.apache.hadoop.conf.Configuration
-
-object ADServiceConfiguration {
-
-  private val AMS_AD_SITE_CONFIGURATION_FILE = "ams-ad-site.xml"
-  private val HBASE_SITE_CONFIGURATION_FILE = "hbase-site.xml"
-
-  val ANOMALY_METRICS_TTL = "timeline.metrics.anomaly.data.ttl"
-
-  private var hbaseConf: org.apache.hadoop.conf.Configuration = _
-  private var adConf: org.apache.hadoop.conf.Configuration = _
-
-  def initConfigs(): Unit = {
-
-    var classLoader: ClassLoader = Thread.currentThread.getContextClassLoader
-    if (classLoader == null) classLoader = getClass.getClassLoader
-
-    try {
-      val hbaseResUrl = classLoader.getResource(HBASE_SITE_CONFIGURATION_FILE)
-      if (hbaseResUrl == null) throw new IllegalStateException("Unable to initialize the AD subsystem. No hbase-site present in the classpath.")
-
-      hbaseConf = new Configuration(true)
-      hbaseConf.addResource(hbaseResUrl.toURI.toURL)
-
-      val adSystemConfigUrl = classLoader.getResource(AMS_AD_SITE_CONFIGURATION_FILE)
-      if (adSystemConfigUrl == null) throw new IllegalStateException("Unable to initialize the AD subsystem. No ams-ad-site present in the classpath")
-
-      adConf = new Configuration(true)
-      adConf.addResource(adSystemConfigUrl.toURI.toURL)
-
-    } catch {
-      case me : MalformedURLException => println("MalformedURLException")
-      case ue : URISyntaxException => println("URISyntaxException")
-    }
-  }
-
-  def getHBaseConf: org.apache.hadoop.conf.Configuration = {
-    hbaseConf
-  }
-
-  def getAdConf: org.apache.hadoop.conf.Configuration = {
-    adConf
-  }
-
-  def getAnomalyDataTtl: Int = {
-    if (adConf != null) return adConf.get(ANOMALY_METRICS_TTL, "604800").toInt
-    604800
-  }
-
-  /**
-    * ttl
-    *
-    */
-}
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/common/Range.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/common/Range.scala
new file mode 100644
index 0000000..003c18f
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/common/Range.scala
@@ -0,0 +1,44 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.ambari.metrics.adservice.common
+
+/**
+  * Class to capture a Range in a Season.
+  * For example Monday - Wednesday is a 'Range' in a DAY Season.
+  * @param lower lower end
+  * @param higher higher end
+  */
+case class Range (lower: Int, higher: Int) {
+
+  def withinRange(value: Int) : Boolean = {
+    if (lower <= higher) {
+      (value >= lower) && (value <= higher)
+    } else {
+      //Wrapped range (e.g. Friday - Monday): the value is within range if it
+      //falls on either side of the wrap point.
+      (value >= lower) || (value <= higher)
+    }
+  }
+
+  @Override
+  override def equals(obj: scala.Any): Boolean = {
+    if (obj == null || (getClass ne obj.getClass)) {
+      return false
+    }
+    val that : Range = obj.asInstanceOf[Range]
+    (lower == that.lower) && (higher == that.higher)
+  }
+}
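
A short sketch of the intended withinRange semantics above, using
Calendar.DAY_OF_WEEK ordinals (1 = Sunday .. 7 = Saturday):

    //Monday(2) - Wednesday(4): a plain interval.
    assert(Range(2, 4).withinRange(3))
    assert(!Range(2, 4).withinRange(6))

    //Friday(6) - Monday(2): wraps past Saturday(7) into the next week.
    assert(Range(6, 2).withinRange(7))
    assert(Range(6, 2).withinRange(1))
    assert(!Range(6, 2).withinRange(4))
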
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/common/Season.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/common/Season.scala
new file mode 100644
index 0000000..aba2587
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/common/Season.scala
@@ -0,0 +1,122 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.ambari.metrics.adservice.common
+
+import java.time.DayOfWeek
+import java.util.Calendar
+
+import javax.xml.bind.annotation.XmlRootElement
+
+import org.apache.ambari.metrics.adservice.common.SeasonType.SeasonType
+
+import com.fasterxml.jackson.databind.ObjectMapper
+import com.fasterxml.jackson.module.scala.DefaultScalaModule
+import com.fasterxml.jackson.module.scala.experimental.ScalaObjectMapper
+
+/**
+  * Class to capture a 'Season' for a metric anomaly.
+  * A Season is a combination of DAY Range and HOUR Range.
+  * @param DAY Day Range
+  * @param HOUR Hour Range
+  */
+@XmlRootElement
+case class Season(var DAY: Range, var HOUR: Range) {
+
+  def belongsTo(timestamp : Long) : Boolean = {
+    val c = Calendar.getInstance
+    c.setTimeInMillis(timestamp)
+    val dayOfWeek = c.get(Calendar.DAY_OF_WEEK)
+    val hourOfDay = c.get(Calendar.HOUR_OF_DAY)
+
+    if (DAY.lower != -1 && !DAY.withinRange(dayOfWeek))
+      return false
+    if (HOUR.lower != -1 && !HOUR.withinRange(hourOfDay))
+      return false
+    true
+  }
+
+  @Override
+  override def equals(obj: scala.Any): Boolean = {
+
+    if (obj == null || (getClass ne obj.getClass)) {
+      return false
+    }
+
+    val that : Season = obj.asInstanceOf[Season]
+    DAY.equals(that.DAY) && HOUR.equals(that.HOUR)
+  }
+
+  @Override
+  override def toString: String = {
+
+    var prettyPrintString = ""
+
+    if (DAY != null && DAY.lower != -1) {
+      //Calendar DAY_OF_WEEK runs 1 (Sunday) - 7 (Saturday); shift to the
+      //1 (Monday) - 7 (Sunday) numbering expected by java.time.DayOfWeek.
+      var dLower: Int = DAY.lower - 1
+      if (dLower == 0) {
+        dLower = 7
+      }
+
+      var dHigher: Int = DAY.higher - 1
+      if (dHigher == 0) {
+        dHigher = 7
+      }
+
+      prettyPrintString = prettyPrintString.concat("DAY : [" + DayOfWeek.of(dLower) + "," + DayOfWeek.of(dHigher) + "]")
+    }
+
+    if (HOUR != null && HOUR.lower != -1) {
+      prettyPrintString = prettyPrintString.concat(" HOUR : [" + HOUR.lower + "," + HOUR.higher + "]")
+    }
+    prettyPrintString
+  }
+}
+
+object Season {
+
+  //The two-argument apply(DAY, HOUR) is already synthesized for the case class.
+
+  def apply(range: Range, seasonType: SeasonType): Season = {
+    if (seasonType.equals(SeasonType.DAY)) {
+      new Season(range, Range(-1,-1))
+    } else {
+      new Season(Range(-1,-1), range)
+    }
+  }
+
+  val mapper = new ObjectMapper() with ScalaObjectMapper
+  mapper.registerModule(DefaultScalaModule)
+
+  def getSeasons(timestamp: Long, seasons : List[Season]) : List[Season] = {
+    val validSeasons : scala.collection.mutable.MutableList[Season] = scala.collection.mutable.MutableList.empty[Season]
+    for ( season <- seasons ) {
+      if (season.belongsTo(timestamp)) {
+        validSeasons += season
+      }
+    }
+    validSeasons.toList
+  }
+
+  def serialize(season: Season) : String = {
+    mapper.writeValueAsString(season)
+  }
+
+  def deserialize(seasonString: String) : Season = {
+    mapper.readValue[Season](seasonString)
+  }
+}
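
A small usage sketch for the companion helpers above: build a business-hours
Season, test membership for the current time, and round-trip it through JSON.

    //9:00 - 17:00, any day of the week.
    val businessHours: Season = Season(Range(9, 17), SeasonType.HOUR)

    val active: List[Season] = Season.getSeasons(System.currentTimeMillis(), List(businessHours))

    val json: String = Season.serialize(businessHours)
    assert(Season.deserialize(json).equals(businessHours))
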
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/common/SeasonType.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/common/SeasonType.scala
new file mode 100644
index 0000000..067972c
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/common/SeasonType.scala
@@ -0,0 +1,24 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.ambari.metrics.adservice.common
+
+object SeasonType extends Enumeration{
+
+  type SeasonType = Value
+  val DAY,HOUR = Value
+}
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/common/TimeRange.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/common/TimeRange.scala
new file mode 100644
index 0000000..50df658
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/common/TimeRange.scala
@@ -0,0 +1,39 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.ambari.metrics.adservice.common
+
+import java.util.Date
+
+/**
+  * A special form of a 'Range' class to denote Time range.
+  */
+case class TimeRange (startTime: Long, endTime: Long) {
+  @Override
+  override def toString: String = {
+    "StartTime=" + new Date(startTime) + ", EndTime=" + new Date(endTime)
+  }
+
+  @Override
+  override def equals(obj: scala.Any): Boolean = {
+    if (obj == null) {
+      return false
+    }
+    val that : TimeRange = obj.asInstanceOf[TimeRange]
+    (startTime == that.startTime) && (endTime == that.endTime)
+  }
+}
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/configuration/AdServiceConfiguration.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/configuration/AdServiceConfiguration.scala
new file mode 100644
index 0000000..11e9f28
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/configuration/AdServiceConfiguration.scala
@@ -0,0 +1,40 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.ambari.metrics.adservice.configuration
+
+import javax.validation.constraints.NotNull
+
+import com.fasterxml.jackson.annotation.JsonProperty
+
+/**
+  * Class to get Anomaly Service specific configuration.
+  */
+class AdServiceConfiguration {
+
+  @NotNull
+  var anomalyDataTtl: Long = _
+
+  @JsonProperty
+  def getAnomalyDataTtl: Long = anomalyDataTtl
+
+  @JsonProperty
+  def setAnomalyDataTtl(anomalyDataTtl: Long): Unit = {
+    this.anomalyDataTtl = anomalyDataTtl
+  }
+
+}
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/configuration/HBaseConfiguration.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/configuration/HBaseConfiguration.scala
new file mode 100644
index 0000000..a7bbc66
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/configuration/HBaseConfiguration.scala
@@ -0,0 +1,54 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.metrics.adservice.configuration
+
+import java.net.{MalformedURLException, URISyntaxException}
+
+import org.apache.hadoop.conf.Configuration
+
+object HBaseConfiguration {
+
+  val HBASE_SITE_CONFIGURATION_FILE: String = "hbase-site.xml"
+  val hbaseConf: org.apache.hadoop.conf.Configuration = new Configuration(true)
+  var isInitialized: Boolean = false
+
+  def initConfigs(): Unit = {
+    if (!isInitialized) {
+      var classLoader: ClassLoader = Thread.currentThread.getContextClassLoader
+      if (classLoader == null) classLoader = getClass.getClassLoader
+
+      try {
+        val hbaseResUrl = classLoader.getResource(HBASE_SITE_CONFIGURATION_FILE)
+        if (hbaseResUrl == null) throw new IllegalStateException("Unable to initialize the AD subsystem. No hbase-site present in the classpath.")
+
+        hbaseConf.addResource(hbaseResUrl.toURI.toURL)
+        isInitialized = true
+
+      } catch {
+        case me : MalformedURLException => println("MalformedURLException while loading hbase-site.xml : " + me.getMessage)
+        case ue : URISyntaxException => println("URISyntaxException while loading hbase-site.xml : " + ue.getMessage)
+      }
+    }
+  }
+
+  def getHBaseConf: org.apache.hadoop.conf.Configuration = {
+    if (!isInitialized) {
+      initConfigs()
+    }
+    hbaseConf
+  }
+}
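
Callers get lazy one-time initialization: the first getHBaseConf call loads
hbase-site.xml from the classpath, and later calls reuse the same instance.

    val conf1 = HBaseConfiguration.getHBaseConf //triggers initConfigs() on first use
    val conf2 = HBaseConfiguration.getHBaseConf //returns the already-initialized instance
    assert(conf1 eq conf2)
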
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/common/ADManagerConfigurationTest.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/configuration/MetricCollectorConfiguration.scala
similarity index 53%
copy from ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/common/ADManagerConfigurationTest.scala
copy to ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/configuration/MetricCollectorConfiguration.scala
index 40b9d6a..50a0b72 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/common/ADManagerConfigurationTest.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/configuration/MetricCollectorConfiguration.scala
@@ -15,26 +15,38 @@
   * See the License for the specific language governing permissions and
   * limitations under the License.
   */
-package org.apache.ambari.metrics.adservice.common
 
-import org.scalatest.FlatSpec
+package org.apache.ambari.metrics.adservice.configuration
 
-import scala.collection.mutable
+import javax.validation.constraints.NotNull
 
-class ADServiceConfigurationTest extends FlatSpec {
+import com.fasterxml.jackson.annotation.JsonProperty
 
-  "A Stack" should "pop values in last-in-first-out order" in {
-    val stack = new mutable.Stack[Int]
-    stack.push(1)
-    stack.push(2)
-    assert(stack.pop() === 2)
-    assert(stack.pop() === 1)
+/**
+  * Class to capture the Metrics Collector related configuration.
+  */
+class MetricCollectorConfiguration {
+
+  @NotNull
+  private var hostPortList: String = _
+
+  @NotNull
+  private var metadataEndpoint: String = _
+
+  @JsonProperty
+  def getHostPortList: String = hostPortList
+
+  @JsonProperty
+  def getMetadataEndpoint: String = metadataEndpoint
+
+  @JsonProperty
+  def setHostPortList(hostPortList: String): Unit = {
+    this.hostPortList = hostPortList
   }
 
-  it should "throw NoSuchElementException if an empty stack is popped" in {
-    val emptyStack = new mutable.Stack[String]
-    assertThrows[NoSuchElementException] {
-      emptyStack.pop()
-    }
+  @JsonProperty
+  def setMetadataEndpoint(metadataEndpoint: String): Unit = {
+    this.metadataEndpoint = metadataEndpoint
   }
+
 }
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/configuration/MetricManagerServiceConfiguration.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/configuration/MetricManagerServiceConfiguration.scala
new file mode 100644
index 0000000..e5960d5
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/configuration/MetricManagerServiceConfiguration.scala
@@ -0,0 +1,34 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.ambari.metrics.adservice.configuration
+
+import javax.validation.constraints.NotNull
+
+import com.fasterxml.jackson.annotation.JsonProperty
+
+/**
+  * Class to capture the Metric Definition Service configuration.
+  */
+class MetricManagerServiceConfiguration {
+
+  @NotNull
+  private val inputDefinitionDirectory: String = ""
+
+  @JsonProperty
+  def getInputDefinitionDirectory: String = inputDefinitionDirectory
+}
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/AdMetadataStoreAccessor.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/AdMetadataStoreAccessor.scala
new file mode 100644
index 0000000..bcdb416
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/AdMetadataStoreAccessor.scala
@@ -0,0 +1,53 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.ambari.metrics.adservice.db
+
+import org.apache.ambari.metrics.adservice.metadata.MetricSourceDefinition
+
+/**
+  * Trait used to talk to the AD Metadata Store.
+  */
+trait AdMetadataStoreAccessor {
+
+  /**
+    * Return all saved component definitions from DB.
+    * @return
+    */
+  def getSavedInputDefinitions: List[MetricSourceDefinition]
+
+  /**
+    * Save a set of component definitions
+    * @param metricSourceDefinitions Set of component definitions
+    * @return Success / Failure
+    */
+  def saveInputDefinitions(metricSourceDefinitions: List[MetricSourceDefinition]) : Boolean
+
+  /**
+    * Save a component definition
+    * @param metricSourceDefinition component definition
+    * @return Success / Failure
+    */
+  def saveInputDefinition(metricSourceDefinition: MetricSourceDefinition) : Boolean
+
+  /**
+    * Delete a component definition
+    * @param definitionName component definition
+    * @return
+    */
+  def removeInputDefinition(definitionName: String) : Boolean
+}
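
For illustration, a minimal in-memory implementation of the trait (a
hypothetical test double; the real accessor is expected to be backed by the AD
metadata store):

    class InMemoryAdMetadataStoreAccessor extends AdMetadataStoreAccessor {
      private val definitions = scala.collection.mutable.Map.empty[String, MetricSourceDefinition]

      override def getSavedInputDefinitions: List[MetricSourceDefinition] = definitions.values.toList

      override def saveInputDefinitions(metricSourceDefinitions: List[MetricSourceDefinition]): Boolean =
        metricSourceDefinitions.forall(saveInputDefinition)

      override def saveInputDefinition(metricSourceDefinition: MetricSourceDefinition): Boolean = {
        definitions(metricSourceDefinition.definitionName) = metricSourceDefinition
        true
      }

      override def removeInputDefinition(definitionName: String): Boolean =
        definitions.remove(definitionName).isDefined
    }
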
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/AdMetadataStoreConstants.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/AdMetadataStoreConstants.scala
new file mode 100644
index 0000000..3d273a3
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/AdMetadataStoreConstants.scala
@@ -0,0 +1,39 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.ambari.metrics.adservice.db
+
+object AdMetadataStoreConstants {
+
+  //////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+  /* Table Name constants */
+  //////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+
+  val METRIC_PROFILE_TABLE_NAME = "METRIC_DEFINITION"
+
+  //////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+  /* CREATE statement constants */
+  //////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+
+  val CREATE_METRIC_DEFINITION_TABLE: String = "CREATE TABLE IF NOT EXISTS %s (" +
+    "DEFINITION_NAME VARCHAR, " +
+    "DEFINITION_JSON VARCHAR, " +
+    "DEFINITION_SOURCE NUMBER, " +
+    "CREATED_TIME TIMESTAMP, " +
+    "UPDATED_TIME TIMESTAMP " +
+    "CONSTRAINT pk PRIMARY KEY (DEFINITION_NAME))"
+}
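
The %s placeholder is filled with the table name at schema-creation time.
Assuming the accessor expands it with String.format, as the Phoenix constants
elsewhere in this patch are used:

    val createSql: String = String.format(
      AdMetadataStoreConstants.CREATE_METRIC_DEFINITION_TABLE,
      AdMetadataStoreConstants.METRIC_PROFILE_TABLE_NAME)
    //=> "CREATE TABLE IF NOT EXISTS METRIC_DEFINITION (DEFINITION_NAME VARCHAR, ..."
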
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/PhoenixAnomalyStoreAccessor.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/PhoenixAnomalyStoreAccessor.scala
index 6f33e56..1191e90 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/PhoenixAnomalyStoreAccessor.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/PhoenixAnomalyStoreAccessor.scala
@@ -17,23 +17,36 @@
 
 package org.apache.ambari.metrics.adservice.db
 
-import java.sql.{Connection, SQLException}
+import java.sql.{Connection, PreparedStatement, ResultSet, SQLException}
+import java.util.concurrent.TimeUnit.SECONDS
 
-import org.apache.ambari.metrics.adservice.common.{ADServiceConfiguration, PhoenixQueryConstants}
+import org.apache.ambari.metrics.adservice.app.AnomalyDetectionAppConfig
+import org.apache.ambari.metrics.adservice.common._
+import org.apache.ambari.metrics.adservice.configuration.HBaseConfiguration
+import org.apache.ambari.metrics.adservice.metadata.MetricKey
+import org.apache.ambari.metrics.adservice.model.AnomalyDetectionMethod.AnomalyDetectionMethod
+import org.apache.ambari.metrics.adservice.model.AnomalyType.AnomalyType
+import org.apache.ambari.metrics.adservice.model.{AnomalyDetectionMethod, AnomalyType, SingleMetricAnomalyInstance}
+import org.apache.ambari.metrics.adservice.subsystem.pointintime.PointInTimeAnomalyInstance
+import org.apache.ambari.metrics.adservice.subsystem.trend.TrendAnomalyInstance
 import org.apache.hadoop.hbase.util.RetryCounterFactory
 import org.apache.hadoop.metrics2.sink.timeline.query.{DefaultPhoenixDataSource, PhoenixConnectionProvider}
-import java.util.concurrent.TimeUnit.SECONDS
+
+import com.google.inject.Inject
 
 object PhoenixAnomalyStoreAccessor  {
 
-  private var datasource: PhoenixConnectionProvider = _
+  @Inject
+  var configuration: AnomalyDetectionAppConfig = _
+
+  var datasource: PhoenixConnectionProvider = _
 
   def initAnomalyMetricSchema(): Unit = {
 
-    val datasource: PhoenixConnectionProvider = new DefaultPhoenixDataSource(ADServiceConfiguration.getHBaseConf)
+    datasource = new DefaultPhoenixDataSource(HBaseConfiguration.getHBaseConf)
     val retryCounterFactory = new RetryCounterFactory(10, SECONDS.toMillis(3).toInt)
 
-    val ttl = ADServiceConfiguration.getAnomalyDataTtl
+    val ttl = configuration.getAdServiceConfiguration.getAnomalyDataTtl
     try {
       var conn = datasource.getConnectionRetryingOnException(retryCounterFactory)
       var stmt = conn.createStatement
@@ -64,4 +77,89 @@ object PhoenixAnomalyStoreAccessor  {
 
   @throws[SQLException]
   def getConnection: Connection = datasource.getConnection
+
+  def getSingleMetricAnomalies(anomalyType: AnomalyType, startTime: Long, endTime: Long, limit: Int) : scala.collection.mutable.MutableList[SingleMetricAnomalyInstance] = {
+    val anomalies = scala.collection.mutable.MutableList.empty[SingleMetricAnomalyInstance]
+    val conn : Connection = getConnection
+    var stmt : PreparedStatement = null
+    var rs : ResultSet = null
+
+    try {
+      stmt = prepareAnomalyMetricsGetSqlStatement(conn, anomalyType, startTime, endTime, limit)
+      rs = stmt.executeQuery
+      if (anomalyType.equals(AnomalyType.POINT_IN_TIME)) {
+        while (rs.next()) {
+          val uuid: Array[Byte] = rs.getBytes("METRIC_UUID")
+          val timestamp: Long = rs.getLong("ANOMALY_TIMESTAMP")
+          val metricValue: Double = rs.getDouble("METRIC_VALUE")
+          val methodType: AnomalyDetectionMethod = AnomalyDetectionMethod.withName(rs.getString("METHOD_NAME"))
+          val season: Season = Season.deserialize(rs.getString("SEASONAL_INFO"))
+          val anomalyScore: Double = rs.getDouble("ANOMALY_SCORE")
+          val modelSnapshot: String = rs.getString("MODEL_PARAMETERS")
+
+          val metricKey: MetricKey = null //MetricManager.getMetricKeyFromUuid(uuid)
+          val anomalyInstance: SingleMetricAnomalyInstance = new PointInTimeAnomalyInstance(metricKey, timestamp,
+            metricValue, methodType, anomalyScore, season, modelSnapshot)
+          anomalies.+=(anomalyInstance)
+        }
+      } else {
+        while (rs.next()) {
+          val uuid: Array[Byte] = rs.getBytes("METRIC_UUID")
+          val anomalyStart: Long = rs.getLong("ANOMALY_PERIOD_START")
+          val anomalyEnd: Long = rs.getLong("ANOMALY_PERIOD_END")
+          val referenceStart: Long = rs.getLong("TEST_PERIOD_START")
+          val referenceEnd: Long = rs.getLong("TEST_PERIOD_END")
+          val methodType: AnomalyDetectionMethod = AnomalyDetectionMethod.withName(rs.getString("METHOD_NAME"))
+          val season: Season = Season.deserialize(rs.getString("SEASONAL_INFO"))
+          val anomalyScore: Double = rs.getDouble("ANOMALY_SCORE")
+          val modelSnapshot: String = rs.getString("MODEL_PARAMETERS")
+
+          val metricKey: MetricKey = null //MetricManager.getMetricKeyFromUuid(uuid)
+          val anomalyInstance: SingleMetricAnomalyInstance = TrendAnomalyInstance(metricKey,
+            TimeRange(anomalyStart, anomalyEnd),
+            TimeRange(referenceStart, referenceEnd),
+            methodType, anomalyScore, season, modelSnapshot)
+          anomalies.+=(anomalyInstance)
+        }
+      }
+    } finally {
+      if (rs != null) rs.close()
+      if (stmt != null) stmt.close()
+      conn.close()
+    }
+
+    anomalies
+  }
+
+  @throws[SQLException]
+  def prepareAnomalyMetricsGetSqlStatement(connection: Connection, anomalyType: AnomalyType, startTime: Long, endTime: Long, limit: Int): PreparedStatement = {
+
+    val sb = new StringBuilder
+
+    if (anomalyType.equals(AnomalyType.POINT_IN_TIME)) {
+      sb.++=(String.format(PhoenixQueryConstants.GET_PIT_ANOMALY_METRIC_SQL, PhoenixQueryConstants.PIT_ANOMALY_METRICS_TABLE_NAME))
+    } else {
+      sb.++=(String.format(PhoenixQueryConstants.GET_TREND_ANOMALY_METRIC_SQL, PhoenixQueryConstants.TREND_ANOMALY_METRICS_TABLE_NAME))
+    }
+
+    sb.append(" LIMIT " + limit)
+    var stmt: java.sql.PreparedStatement = null
+    try {
+      stmt = connection.prepareStatement(sb.toString)
+      //Bind the two timestamp placeholders in order: 1 -> startTime, 2 -> endTime.
+      var pos = 1
+      stmt.setLong(pos, startTime)
+
+      pos += 1
+      stmt.setLong(pos, endTime)
+
+      stmt.setFetchSize(limit)
+
+    } catch {
+      case e: SQLException =>
+        if (stmt != null)
+          stmt.close()
+        throw e
+    }
+    stmt
+  }
 }
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/common/PhoenixQueryConstants.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/PhoenixQueryConstants.scala
similarity index 85%
rename from ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/common/PhoenixQueryConstants.scala
rename to ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/PhoenixQueryConstants.scala
index 17173ec..5379c91 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/common/PhoenixQueryConstants.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/PhoenixQueryConstants.scala
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 
-package org.apache.ambari.metrics.adservice.common
+package org.apache.ambari.metrics.adservice.db
 
 object PhoenixQueryConstants {
 
@@ -33,8 +33,6 @@ object PhoenixQueryConstants {
   /* CREATE statement constants */
   //////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
 
-  val CREATE_METRIC_PROFILE_TABLE = ""
-
   val CREATE_METHOD_PARAMETERS_TABLE: String = "CREATE TABLE IF NOT EXISTS %s (" +
     "METHOD_NAME VARCHAR, " +
     "METHOD_TYPE VARCHAR, " +
@@ -49,7 +47,7 @@ object PhoenixQueryConstants {
     "METRIC_VALUE DOUBLE, " +
     "SEASONAL_INFO VARCHAR, " +
     "ANOMALY_SCORE DOUBLE, " +
-    "MODEL_SNAPSHOT VARCHAR, " +
+    "MODEL_PARAMETERS VARCHAR, " +
     "DETECTION_TIME UNSIGNED_LONG " +
     "CONSTRAINT pk PRIMARY KEY (METRIC_UUID, METHOD_NAME, ANOMALY_TIMESTAMP)) " +
     "DATA_BLOCK_ENCODING='FAST_DIFF', IMMUTABLE_ROWS=true, TTL=%s, COMPRESSION='SNAPPY'"
@@ -61,8 +59,9 @@ object PhoenixQueryConstants {
     "TEST_PERIOD_START UNSIGNED_LONG NOT NULL, " +
     "TEST_PERIOD_END UNSIGNED_LONG NOT NULL, " +
     "METHOD_NAME VARCHAR, " +
+    "SEASONAL_INFO VARCHAR, " +
     "ANOMALY_SCORE DOUBLE, " +
-    "MODEL_SNAPSHOT VARCHAR, " +
+    "MODEL_PARAMETERS VARCHAR, " +
     "DETECTION_TIME UNSIGNED_LONG " +
     "CONSTRAINT pk PRIMARY KEY (METRIC_UUID, METHOD_NAME, ANOMALY_PERIOD_START, ANOMALY_PERIOD_END, TEST_PERIOD_START, TEST_PERIOD_END)) " +
     "DATA_BLOCK_ENCODING='FAST_DIFF' IMMUTABLE_ROWS=true, TTL=%s, COMPRESSION='SNAPPY'"
@@ -83,10 +82,10 @@ object PhoenixQueryConstants {
   val UPSERT_METHOD_PARAMETERS_SQL: String = "UPSERT INTO %s (METHOD_NAME, METHOD_TYPE, PARAMETERS) VALUES (?,?,?)"
 
   val UPSERT_PIT_ANOMALY_METRICS_SQL: String = "UPSERT INTO %s (METRIC_UUID, ANOMALY_TIMESTAMP, METRIC_VALUE, METHOD_NAME, " +
-    "SEASONAL_INFO, ANOMALY_SCORE, MODEL_SNAPSHOT, DETECTION_TIME) VALUES (?, ?, ?, ?, ?, ?, ?, ?)"
+    "SEASONAL_INFO, ANOMALY_SCORE, MODEL_PARAMETERS, DETECTION_TIME) VALUES (?, ?, ?, ?, ?, ?, ?, ?)"
 
   val UPSERT_TREND_ANOMALY_METRICS_SQL: String = "UPSERT INTO %s (METRIC_UUID, ANOMALY_PERIOD_START, ANOMALY_PERIOD_END, " +
-    "TEST_PERIOD_START, TEST_PERIOD_END, METHOD_NAME, ANOMALY_SCORE, MODEL_SNAPSHOT, DETECTION_TIME) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)"
+    "TEST_PERIOD_START, TEST_PERIOD_END, METHOD_NAME, ANOMALY_SCORE, MODEL_PARAMETERS, DETECTION_TIME) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)"
 
   val UPSERT_MODEL_SNAPSHOT_SQL: String = "UPSERT INTO %s (METRIC_UUID, METHOD_NAME, METHOD_TYPE, PARAMETERS) VALUES (?, ?, ?, ?)"
 
@@ -94,15 +93,15 @@ object PhoenixQueryConstants {
   /* GET statement constants */
   //////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
 
-  val GET_METHOD_PAREMETERS_SQL: String = "SELECT METHOD_NAME, METHOD_TYPE, PARAMETERS FROM %s WHERE METHOD_NAME = %s"
+  val GET_METHOD_PARAMETERS_SQL: String = "SELECT METHOD_NAME, METHOD_TYPE, PARAMETERS FROM %s WHERE METHOD_NAME = %s"
 
   val GET_PIT_ANOMALY_METRIC_SQL: String = "SELECT METRIC_UUID, ANOMALY_TIMESTAMP, METRIC_VALUE, METHOD_NAME, SEASONAL_INFO, " +
-    "ANOMALY_SCORE, MODEL_SNAPSHOT, DETECTION_TIME FROM %s WHERE METRIC_METRIC_UUID = ? AND ANOMALY_TIMESTAMP > ? AND ANOMALY_TIMESTAMP <= ? " +
+    "ANOMALY_SCORE, MODEL_PARAMETERS, DETECTION_TIME FROM %s WHERE ANOMALY_TIMESTAMP > ? AND ANOMALY_TIMESTAMP <= ? " +
     "ORDER BY ANOMALY_SCORE DESC"
 
-  val GET_TREND_ANOMALY_METRIC_SQL: String = "SELECT METRIC_METRIC_UUID, ANOMALY_PERIOD_START, ANOMALY_PERIOD_END, TEST_PERIOD_START, " +
-    "ANOMALY_PERIOD_START, METHOD_NAME, ANOMALY_SCORE, MODEL_SNAPSHOT, DETECTION_TIME FROM %s WHERE METHOD = ? AND ANOMALY_PERIOD_END > ? " +
-    "AND TEST_END_TIME <= ? ORDER BY ANOMALY_SCORE DESC"
+  val GET_TREND_ANOMALY_METRIC_SQL: String = "SELECT METRIC_UUID, ANOMALY_PERIOD_START, ANOMALY_PERIOD_END, TEST_PERIOD_START, " +
+    "TEST_PERIOD_END, METHOD_NAME, SEASONAL_INFO, ANOMALY_SCORE, MODEL_PARAMETERS, DETECTION_TIME FROM %s WHERE ANOMALY_PERIOD_END > ? " +
+    "AND ANOMALY_PERIOD_END <= ? ORDER BY ANOMALY_SCORE DESC"
 
   val GET_MODEL_SNAPSHOT_SQL: String = "SELECT METRIC_UUID, METHOD_NAME, METHOD_TYPE, PARAMETERS FROM %s WHERE UUID = %s AND METHOD_NAME = %s"
 
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/ADMetadataProvider.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/ADMetadataProvider.scala
new file mode 100644
index 0000000..801c5f5
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/ADMetadataProvider.scala
@@ -0,0 +1,112 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.ambari.metrics.adservice.metadata
+
+import java.net.{HttpURLConnection, URL}
+
+import org.apache.ambari.metrics.adservice.configuration.MetricCollectorConfiguration
+import org.apache.commons.lang.StringUtils
+import org.apache.hadoop.metrics2.sink.timeline.TimelineMetricKey
+
+import com.fasterxml.jackson.databind.ObjectMapper
+import com.fasterxml.jackson.module.scala.experimental.ScalaObjectMapper
+
+/**
+  * Class to invoke Metrics Collector metadata API.
+  * TODO : Instantiate a sync thread that regularly updates the internal maps by reading off AMS metadata.
+  */
+class ADMetadataProvider extends MetricMetadataProvider {
+
+  var metricCollectorHostPorts: Array[String] = Array.empty[String]
+  var metricMetadataPath: String = "/v1/timeline/metrics/metadata/keys"
+
+  val connectTimeout: Int = 10000
+  val readTimeout: Int = 10000
+  //TODO: Add retries for metrics collector GET call.
+  //val retries: Long = 5
+
+  def this(configuration: MetricCollectorConfiguration) {
+    this()
+    if (StringUtils.isNotEmpty(configuration.getHostPortList)) {
+      metricCollectorHostPorts = configuration.getHostPortList.split(",")
+    }
+    metricMetadataPath = configuration.getMetadataEndpoint
+  }
+
+  override def getMetricKeysForDefinitions(metricSourceDefinition: MetricSourceDefinition): (Map[MetricDefinition,
+    Set[MetricKey]], Set[MetricKey]) = {
+
+    val keysMap = scala.collection.mutable.Map[MetricDefinition, Set[MetricKey]]()
+    val metricKeySet: scala.collection.mutable.Set[MetricKey] = scala.collection.mutable.Set.empty[MetricKey]
+
+    for (metricDef <- metricSourceDefinition.metricDefinitions) {
+      for (hostPort <- metricCollectorHostPorts) {
+        val metricKeys: Set[MetricKey] = getKeysFromMetricsCollector(hostPort + metricMetadataPath, metricDef)
+        if (metricKeys != null) {
+          keysMap += (metricDef -> metricKeys)
+          //++= mutates the set in place; a bare ++ would return a copy and drop the keys.
+          metricKeySet ++= metricKeys
+        }
+      }
+    }
+    (keysMap.toMap, metricKeySet.toSet)
+  }
+
+  /**
+    * Make Metrics Collector REST API call to fetch keys.
+    *
+    * @param url
+    * @param metricDefinition
+    * @return
+    */
+  def getKeysFromMetricsCollector(url: String, metricDefinition: MetricDefinition): Set[MetricKey] = {
+
+    val mapper = new ObjectMapper() with ScalaObjectMapper
+    try {
+      val connection = new URL(url).openConnection.asInstanceOf[HttpURLConnection]
+      connection.setConnectTimeout(connectTimeout)
+      connection.setReadTimeout(readTimeout)
+      connection.setRequestMethod("GET")
+      val inputStream = connection.getInputStream
+      val content = scala.io.Source.fromInputStream(inputStream).mkString
+      if (inputStream != null) inputStream.close()
+      val metricKeySet: Set[MetricKey] = fromTimelineMetricKey(mapper.readValue[java.util.Set[TimelineMetricKey]](content))
+      return metricKeySet
+    } catch {
+      //SocketTimeoutException is an IOException, so a single case covers both.
+      case e: java.io.IOException => println("Unable to fetch metric keys from " + url + " : " + e.getMessage)
+    }
+    null
+  }
+
+  def fromTimelineMetricKey(timelineMetricKeys: java.util.Set[TimelineMetricKey]): Set[MetricKey] = {
+    val metricKeySet: scala.collection.mutable.Set[MetricKey] = scala.collection.mutable.Set.empty[MetricKey]
+    val iter = timelineMetricKeys.iterator()
+    while (iter.hasNext) {
+      val timelineMetricKey: TimelineMetricKey = iter.next()
+      val metricKey: MetricKey = MetricKey(timelineMetricKey.metricName,
+        timelineMetricKey.appId,
+        timelineMetricKey.instanceId,
+        timelineMetricKey.hostName,
+        timelineMetricKey.uuid)
+
+      metricKeySet.add(metricKey)
+    }
+    metricKeySet.toSet
+  }
+
+}
\ No newline at end of file
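
A sketch of wiring the provider from configuration and fetching keys for one
definition (host/port values are assumptions; note that hostPortList entries
must carry a scheme, since they are concatenated directly into a java.net.URL):

    val collectorConfig = new MetricCollectorConfiguration
    collectorConfig.setHostPortList("http://ams-host-1:6188")
    collectorConfig.setMetadataEndpoint("/v1/timeline/metrics/metadata/keys")

    val provider: MetricMetadataProvider = new ADMetadataProvider(collectorConfig)
    //sourceDefinition: any MetricSourceDefinition tracked by the MetricManagerService.
    val (keysByDefinition, allKeys) = provider.getMetricKeysForDefinitions(sourceDefinition)
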
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/InputMetricDefinitionParser.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/InputMetricDefinitionParser.scala
new file mode 100644
index 0000000..cc66c90
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/InputMetricDefinitionParser.scala
@@ -0,0 +1,52 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.ambari.metrics.adservice.metadata
+
+import java.io.File
+
+import com.fasterxml.jackson.databind.ObjectMapper
+import com.fasterxml.jackson.module.scala.experimental.ScalaObjectMapper
+
+object InputMetricDefinitionParser {
+
+  def parseInputDefinitionsFromDirectory(directory: String): List[MetricSourceDefinition] = {
+
+    if (directory == null) {
+      return List.empty[MetricSourceDefinition]
+    }
+    val mapper = new ObjectMapper() with ScalaObjectMapper
+
+    def metricSourceDefinitions: List[MetricSourceDefinition] =
+      for {
+        file <- getFilesInDirectory(directory)
+        definition: MetricSourceDefinition = mapper.readValue[MetricSourceDefinition](file)
+        if definition != null
+      } yield definition
+
+    metricSourceDefinitions
+  }
+
+  private def getFilesInDirectory(directory: String): List[File] = {
+    val dir = new File(directory)
+    if (dir.exists && dir.isDirectory) {
+      dir.listFiles.filter(_.isFile).toList
+    } else {
+      List[File]()
+    }
+  }
+}
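
Each file in the input directory is expected to hold one MetricSourceDefinition
serialized as JSON. A hypothetical example (field names inferred from how the
definition is used elsewhere in this patch):

    {
      "definitionName": "host_memory_source",
      "appId": "HOST",
      "metricDefinitions": [
        { "metricName": "mem_free", "appId": "HOST", "hosts": [] }
      ]
    }
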
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricDefinition.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricDefinition.scala
new file mode 100644
index 0000000..0a5e51f
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricDefinition.scala
@@ -0,0 +1,69 @@
+/**
+  * Licensed to the Apache Software Foundation (ASF) under one
+  * or more contributor license agreements.  See the NOTICE file
+  * distributed with this work for additional information
+  * regarding copyright ownership.  The ASF licenses this file
+  * to you under the Apache License, Version 2.0 (the
+  * "License"); you may not use this file except in compliance
+  * with the License.  You may obtain a copy of the License at
+  *
+  * http://www.apache.org/licenses/LICENSE-2.0
+  *
+  * Unless required by applicable law or agreed to in writing, software
+  * distributed under the License is distributed on an "AS IS" BASIS,
+  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  * See the License for the specific language governing permissions and
+  * limitations under the License.
+  */
+
+package org.apache.ambari.metrics.adservice.metadata
+
+/*
+   {
+       "metric-name": "mem_free",
+       "metric-description" : "Free memory on a Host.",
+       "troubleshooting-info" : "Sudden drop / hike in free memory on a host.",
+       "static-threshold" : 10,
+       "app-id" : "HOST"
+   }
+ */
+
+case class MetricDefinition (metricName: String,
+                             appId: String,
+                             hosts: List[String],
+                             metricDescription: String,
+                             troubleshootingInfo: String,
+                             staticThreshold: Double)  {
+
+  @Override
+  override def equals(obj: scala.Any): Boolean = {
+
+    if (obj == null || (getClass ne obj.getClass))
+      return false
+
+    val that = obj.asInstanceOf[MetricDefinition]
+
+    if (!(metricName == that.metricName))
+      return false
+
+    if (!(appId == that.appId))
+      return false
+
+    true
+  }
+}
+
+object MetricDefinition {
+
+  //The full-argument apply is already synthesized for the case class; only the
+  //short-hand factory is defined here.
+  def apply(metricName: String, appId: String, hosts: List[String]): MetricDefinition =
+    new MetricDefinition(metricName, appId, hosts, null, null, -1)
+
+}
\ No newline at end of file
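
Note that equality above is deliberately limited to (metricName, appId); two
definitions differing only in hosts or thresholds compare equal:

    val a = MetricDefinition("mem_free", "HOST", List("host1"))
    val b = MetricDefinition("mem_free", "HOST", List("host2"))
    assert(a.equals(b)) //hosts are ignored by equals
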
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/common/ADManagerConfigurationTest.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricKey.scala
similarity index 53%
rename from ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/common/ADManagerConfigurationTest.scala
rename to ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricKey.scala
index 40b9d6a..afad617 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/common/ADManagerConfigurationTest.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricKey.scala
@@ -15,26 +15,36 @@
   * See the License for the specific language governing permissions and
   * limitations under the License.
   */
-package org.apache.ambari.metrics.adservice.common
 
-import org.scalatest.FlatSpec
+package org.apache.ambari.metrics.adservice.metadata
 
-import scala.collection.mutable
+case class MetricKey (metricName: String, appId: String, instanceId: String, hostname: String, uuid: Array[Byte]) {
 
-class ADServiceConfigurationTest extends FlatSpec {
-
-  "A Stack" should "pop values in last-in-first-out order" in {
-    val stack = new mutable.Stack[Int]
-    stack.push(1)
-    stack.push(2)
-    assert(stack.pop() === 2)
-    assert(stack.pop() === 1)
+  @Override
+  override def toString: String = {
+  "MetricName=" + metricName + ",App=" + appId + ",InstanceId=" + instanceId + ",Host=" + hostname
   }
 
-  it should "throw NoSuchElementException if an empty stack is popped" in {
-    val emptyStack = new mutable.Stack[String]
-    assertThrows[NoSuchElementException] {
-      emptyStack.pop()
-    }
+  @Override
+  override def equals(obj: scala.Any): Boolean = {
+
+    if (obj == null || (getClass ne obj.getClass))
+      return false
+
+    val that = obj.asInstanceOf[MetricKey]
+
+    if (!(metricName == that.metricName))
+      return false
+
+    if (!(appId == that.appId))
+      return false
+
+    if (!(instanceId == that.instanceId))
+      return false
+
+    if (!(hostname == that.hostname))
+      return false
+
+    true
   }
 }
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricManagerService.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricManagerService.scala
new file mode 100644
index 0000000..12bd7e4
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricManagerService.scala
@@ -0,0 +1,64 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.ambari.metrics.adservice.metadata
+
+trait MetricManagerService {
+
+  /**
+    * Given a 'UUID', return the metric key associated with it.
+    * @param uuid UUID
+    * @return
+    */
+  def getMetricKeyFromUuid(uuid: Array[Byte]) : MetricKey
+
+  /**
+    * Given a component definition name, return the definition associated with it.
+    * @param name component definition name
+    * @return
+    */
+  def getDefinitionByName(name: String) : MetricSourceDefinition
+
+  /**
+    * Add a new definition.
+    * @param definition component definition JSON
+    * @return
+    */
+  def addDefinition(definition: MetricSourceDefinition) : Boolean
+
+  /**
+    * Update a component definition by name. Only definitions which were added by API can be modified through API.
+    * @param definition component definition name
+    * @return
+    */
+  def updateDefinition(definition: MetricSourceDefinition) : Boolean
+
+  /**
+    * Delete a component definition by name. Only definitions which were added by API can be deleted through API.
+    * @param name component definition name
+    * @return
+    */
+  def deleteDefinitionByName(name: String) : Boolean
+
+  /**
+    * Given an appId, return the list of definitions that are tracked for that appId.
+    * @param appId component definition appId
+    * @return
+    */
+  def getDefinitionByAppId(appId: String) : List[MetricSourceDefinition]
+
+}
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricManagerServiceImpl.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricManagerServiceImpl.scala
new file mode 100644
index 0000000..ce02775
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricManagerServiceImpl.scala
@@ -0,0 +1,183 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.ambari.metrics.adservice.metadata
+
+import org.apache.ambari.metrics.adservice.app.AnomalyDetectionAppConfig
+import org.apache.ambari.metrics.adservice.db.AdMetadataStoreAccessor
+
+import com.google.inject.{Inject, Singleton}
+
+@Singleton
+class MetricManagerServiceImpl extends MetricManagerService {
+
+  @Inject
+  var adMetadataStoreAccessor: AdMetadataStoreAccessor = _
+
+  var configuration: AnomalyDetectionAppConfig = _
+  var metricMetadataProvider: MetricMetadataProvider = _
+
+  var metricSourceDefinitionMap: Map[String, MetricSourceDefinition] = Map()
+  var metricKeys: Set[MetricKey] = Set.empty[MetricKey]
+  var metricDefinitionMetricKeyMap: Map[MetricDefinition, Set[MetricKey]] = Map()
+
+  @Inject
+  def this (anomalyDetectionAppConfig: AnomalyDetectionAppConfig) = {
+    this ()
+    //TODO : Create AD Metadata instance here (or inject)
+    configuration = anomalyDetectionAppConfig
+    initializeService()
+  }
+
+  def this (anomalyDetectionAppConfig: AnomalyDetectionAppConfig, adMetadataStoreAccessor: AdMetadataStoreAccessor) = {
+    this ()
+    //TODO : Create AD Metadata instance here (or inject). Pass in Schema information.
+    configuration = anomalyDetectionAppConfig
+    this.adMetadataStoreAccessor = adMetadataStoreAccessor
+    initializeService()
+  }
+
+  def initializeService() : Unit = {
+
+    //Create AD Metadata Schema
+    //TODO Make sure AD Metadata DB is initialized here.
+
+    //Initialize Metric Metadata Provider
+    metricMetadataProvider = new ADMetadataProvider(configuration.getMetricCollectorConfiguration)
+
+    loadMetricSourceDefinitions()
+  }
+
+  def loadMetricSourceDefinitions() : Unit = {
+
+    //Load definitions from metadata store
+    val definitionsFromStore: List[MetricSourceDefinition] = adMetadataStoreAccessor.getSavedInputDefinitions
+
+    //Load definitions from configs
+    val definitionsFromConfig: List[MetricSourceDefinition] = getInputDefinitionsFromConfig
+
+    //Union the 2 sources, with DB taking precedence.
+    //Save new definition list to DB.
+    metricSourceDefinitionMap = metricSourceDefinitionMap.++(combineDefinitionSources(definitionsFromConfig, definitionsFromStore))
+
+    //Reach out to AMS metadata and get metric keys: pass in the definitions, get back a Map[MetricDefinition, Set[MetricKey]] plus the full key set.
+    for (definition <- metricSourceDefinitionMap.values) {
+      val (definitionKeyMap: Map[MetricDefinition, Set[MetricKey]], keys: Set[MetricKey])= metricMetadataProvider.getMetricKeysForDefinitions(definition)
+      metricDefinitionMetricKeyMap = metricDefinitionMetricKeyMap.++(definitionKeyMap)
+      metricKeys = metricKeys.++(keys)
+    }
+  }
+
+  def getMetricKeyFromUuid(uuid: Array[Byte]): MetricKey = {
+    //Stop at the first matching key instead of scanning the whole set.
+    metricKeys.find(_.uuid.sameElements(uuid)).orNull
+  }
+
+  override def getDefinitionByName(name: String): MetricSourceDefinition = {
+    metricSourceDefinitionMap.apply(name)
+  }
+
+  override def addDefinition(definition: MetricSourceDefinition): Boolean = {
+    if (metricSourceDefinitionMap.contains(definition.definitionName)) {
+      return false
+    }
+    definition.definitionSource = MetricSourceDefinitionType.API
+
+    val success: Boolean = adMetadataStoreAccessor.saveInputDefinition(definition)
+    if (success) {
+      metricSourceDefinitionMap += definition.definitionName -> definition
+    }
+    success
+  }
+
+  override def updateDefinition(definition: MetricSourceDefinition): Boolean = {
+    if (!metricSourceDefinitionMap.contains(definition.definitionName)) {
+      return false
+    }
+
+    if (metricSourceDefinitionMap.apply(definition.definitionName).definitionSource != MetricSourceDefinitionType.API) {
+      return false
+    }
+
+    val success: Boolean = adMetadataStoreAccessor.saveInputDefinition(definition)
+    if (success) {
+      metricSourceDefinitionMap += definition.definitionName -> definition
+    }
+    success
+  }
+
+  override def deleteDefinitionByName(name: String): Boolean = {
+    if (!metricSourceDefinitionMap.contains(name)) {
+      return false
+    }
+
+    val definition : MetricSourceDefinition = metricSourceDefinitionMap.apply(name)
+    if (definition.definitionSource != MetricSourceDefinitionType.API) {
+      return false
+    }
+
+    val success: Boolean = adMetadataStoreAccessor.removeInputDefinition(name)
+    if (success) {
+      metricSourceDefinitionMap -= name
+    }
+    success
+  }
+
+  override def getDefinitionByAppId(appId: String): List[MetricSourceDefinition] = {
+
+    val defList : List[MetricSourceDefinition] = metricSourceDefinitionMap.values.toList
+    defList.filter(_.appId == appId)
+  }
+
+  def combineDefinitionSources(configDefinitions: List[MetricSourceDefinition], dbDefinitions: List[MetricSourceDefinition])
+  : Map[String, MetricSourceDefinition] = {
+
+    val combinedDefinitionMap: scala.collection.mutable.Map[String, MetricSourceDefinition] =
+      scala.collection.mutable.Map.empty[String, MetricSourceDefinition]
+
+    for (definitionFromDb <- dbDefinitions) {
+      combinedDefinitionMap(definitionFromDb.definitionName) = definitionFromDb
+    }
+
+    for (definition <- configDefinitions) {
+      if (!dbDefinitions.contains(definition)) {
+        adMetadataStoreAccessor.saveInputDefinition(definition)
+        combinedDefinitionMap(definition.definitionName) = definition
+      }
+    }
+    combinedDefinitionMap.toMap
+  }
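+
+  /*
+   Precedence example (hypothetical values): if both sources carry a "host-memory"
+   definition, the DB copy wins, because DB entries are inserted first and a config
+   entry with the same definitionName is treated as already present.
+
+     combineDefinitionSources(List(configHostMemory), List(dbHostMemory))
+       // -> Map("host-memory" -> dbHostMemory)
+  */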
+
+  def getInputDefinitionsFromConfig: List[MetricSourceDefinition] = {
+    val configDirectory = configuration.getMetricManagerServiceConfiguration.getInputDefinitionDirectory
+    InputMetricDefinitionParser.parseInputDefinitionsFromDirectory(configDirectory)
+  }
+
+  def setAdMetadataStoreAccessor(adMetadataStoreAccessor: AdMetadataStoreAccessor): Unit = {
+    this.adMetadataStoreAccessor = adMetadataStoreAccessor
+  }
+}
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricMetadataProvider.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricMetadataProvider.scala
new file mode 100644
index 0000000..5f9c0a0
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricMetadataProvider.scala
@@ -0,0 +1,31 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.ambari.metrics.adservice.metadata
+
+/**
+  * Metadata provider for maintaining the metric information in the Metric Definition Service.
+  */
+trait MetricMetadataProvider {
+
+  /**
+    * Return the set of Metric Keys for a given component definition.
+    * @param metricSourceDefinition component definition
+    * @return a map from each metric definition to its resolved metric keys, together with the full key set
+    */
+  def getMetricKeysForDefinitions(metricSourceDefinition: MetricSourceDefinition): (Map[MetricDefinition, Set[MetricKey]], Set[MetricKey])
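+
+  /*
+   Example (illustrative only): resolve the keys for one source definition.
+
+     val (keysByMetric, allKeys) = metadataProvider.getMetricKeysForDefinitions(definition)
+  */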
+}
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricSourceDefinition.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricSourceDefinition.scala
new file mode 100644
index 0000000..60198e0
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricSourceDefinition.scala
@@ -0,0 +1,119 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.ambari.metrics.adservice.metadata
+
+import javax.xml.bind.annotation.XmlRootElement
+
+import org.apache.ambari.metrics.adservice.metadata.MetricSourceDefinitionType.MetricSourceDefinitionType
+import org.apache.ambari.metrics.adservice.model.AnomalyType.AnomalyType
+
+import com.fasterxml.jackson.databind.ObjectMapper
+import com.fasterxml.jackson.module.scala.DefaultScalaModule
+import com.fasterxml.jackson.module.scala.experimental.ScalaObjectMapper
+
+/*
+Sample input definition JSON:
+
+{
+  "definition-name": "host-memory",
+  "app-id": "HOST",
+  "hosts": ["c6401.ambari.apache.org"],
+  "metric-definitions": [
+    {
+      "metric-name": "mem_free",
+      "metric-description": "Free memory on a Host.",
+      "troubleshooting-info": "Sudden drop / hike in free memory on a host.",
+      "static-threshold": 10,
+      "app-id": "HOST"
+    }
+  ],
+  "related-definition-names": ["host-cpu", "host-network"],
+  "anomaly-detection-subsystems": ["point-in-time", "trend"]
+}
+*/
+
+/*
+
+On Startup
+Read input definitions directory, parse the JSONs
+Create / Update the metric definitions in DB
+Convert metric definitions to Map<MetricKey, MetricDefinition>
+
+What do we want to have in memory?
+Map of Metric Key -> List<Component Definitions>
+
+What do we use metric definitions for?
+Anomaly GET - Associate definition information as well.
+Definition CRUD - Get definition given definition name
+Get set of metrics that are being tracked
+Return definition information for a metric key
+Given a metric definition name, return set of metrics.
+
+*/
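+
+/*
+Illustrative round trip (sample values assumed): a definition built in code can be
+serialized and parsed back through the companion object defined below.
+
+  val definition = new MetricSourceDefinition("host-memory", "HOST", MetricSourceDefinitionType.CONFIG)
+  definition.addMetricDefinition(MetricDefinition("mem_free", "HOST", List.empty[String]))
+  val json: String = MetricSourceDefinition.serialize(definition)
+  val restored: MetricSourceDefinition = MetricSourceDefinition.deserialize(json)
+  assert(definition == restored) // equality is keyed on definitionName
+*/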
+
+@XmlRootElement
+class MetricSourceDefinition {
+
+  var definitionName: String = _
+  var appId: String = _
+  var definitionSource: MetricSourceDefinitionType = MetricSourceDefinitionType.CONFIG
+  var hosts: List[String] = List.empty[String]
+  var relatedDefinitions: List[String] = List.empty[String]
+  var associatedAnomalySubsystems: List[AnomalyType] = List.empty[AnomalyType]
+
+  var metricDefinitions: scala.collection.mutable.MutableList[MetricDefinition] =
+    scala.collection.mutable.MutableList.empty[MetricDefinition]
+
+  def this(definitionName: String, appId: String, source: MetricSourceDefinitionType) = {
+    this()
+    this.definitionName = definitionName
+    this.appId = appId
+    this.definitionSource = source
+  }
+
+  def addMetricDefinition(metricDefinition: MetricDefinition): Unit = {
+    if (!metricDefinitions.contains(metricDefinition)) {
+      metricDefinitions.+=(metricDefinition)
+    }
+  }
+
+  def removeMetricDefinition(metricDefinition: MetricDefinition): Unit = {
+    metricDefinitions = metricDefinitions.filter(_ != metricDefinition)
+  }
+
+  //Equality is keyed on the definition name; a non-MetricSourceDefinition
+  //argument is rejected instead of triggering a ClassCastException.
+  override def equals(obj: scala.Any): Boolean = obj match {
+    case that: MetricSourceDefinition => definitionName.equals(that.definitionName)
+    case _ => false
+  }
+
+  //Keep hashCode consistent with equals so definitions behave correctly as map keys.
+  override def hashCode(): Int = definitionName.hashCode
+}
+
+object MetricSourceDefinition {
+  val mapper = new ObjectMapper() with ScalaObjectMapper
+  mapper.registerModule(DefaultScalaModule)
+
+  def serialize(definition: MetricSourceDefinition) : String = {
+    mapper.writeValueAsString(definition)
+  }
+
+  def deserialize(definitionString: String) : MetricSourceDefinition = {
+    mapper.readValue[MetricSourceDefinition](definitionString)
+  }
+}
\ No newline at end of file
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricSourceDefinitionType.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricSourceDefinitionType.scala
new file mode 100644
index 0000000..04ff95b
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricSourceDefinitionType.scala
@@ -0,0 +1,26 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.ambari.metrics.adservice.metadata
+
+import javax.xml.bind.annotation.XmlRootElement
+
+@XmlRootElement
+object MetricSourceDefinitionType extends Enumeration {
+  type MetricSourceDefinitionType = Value
+  val CONFIG, API = Value
+}
\ No newline at end of file
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/model/AnomalyDetectionMethod.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/model/AnomalyDetectionMethod.scala
new file mode 100644
index 0000000..81a7023
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/model/AnomalyDetectionMethod.scala
@@ -0,0 +1,23 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.ambari.metrics.adservice.model
+
+object AnomalyDetectionMethod extends Enumeration {
+  type AnomalyDetectionMethod = Value
+  val EMA, TUKEYS, KS, HSDEV, UNKNOWN = Value
+}
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionAppConfig.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/model/AnomalyType.scala
similarity index 78%
copy from ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionAppConfig.scala
copy to ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/model/AnomalyType.scala
index 9e6cc6d..817180e 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionAppConfig.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/model/AnomalyType.scala
@@ -1,7 +1,3 @@
-package org.apache.ambari.metrics.adservice.app
-
-import io.dropwizard.Configuration
-
 /**
   * Licensed to the Apache Software Foundation (ASF) under one
   * or more contributor license agreements.  See the NOTICE file
@@ -19,6 +15,12 @@ import io.dropwizard.Configuration
   * See the License for the specific language governing permissions and
   * limitations under the License.
   */
-class AnomalyDetectionAppConfig extends Configuration {
+package org.apache.ambari.metrics.adservice.model
+
+import javax.xml.bind.annotation.XmlRootElement
 
+@XmlRootElement
+object AnomalyType extends Enumeration {
+  type AnomalyType = Value
+  val POINT_IN_TIME, TREND, UNKNOWN = Value
 }
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionAppConfig.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/model/SingleMetricAnomalyInstance.scala
similarity index 73%
copy from ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionAppConfig.scala
copy to ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/model/SingleMetricAnomalyInstance.scala
index 9e6cc6d..981a893 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionAppConfig.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/model/SingleMetricAnomalyInstance.scala
@@ -1,7 +1,3 @@
-package org.apache.ambari.metrics.adservice.app
-
-import io.dropwizard.Configuration
-
 /**
   * Licensed to the Apache Software Foundation (ASF) under one
   * or more contributor license agreements.  See the NOTICE file
@@ -19,6 +15,15 @@ import io.dropwizard.Configuration
   * See the License for the specific language governing permissions and
   * limitations under the License.
   */
-class AnomalyDetectionAppConfig extends Configuration {
+
+package org.apache.ambari.metrics.adservice.model
+
+import org.apache.ambari.metrics.adservice.metadata.MetricKey
+import org.apache.ambari.metrics.adservice.model.AnomalyType.AnomalyType
+
+abstract class SingleMetricAnomalyInstance {
+
+  val metricKey: MetricKey
+  val anomalyType: AnomalyType
 
 }
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/resource/AnomalyResource.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/resource/AnomalyResource.scala
index fb9921a..c941ac3 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/resource/AnomalyResource.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/resource/AnomalyResource.scala
@@ -17,9 +17,9 @@
   */
 package org.apache.ambari.metrics.adservice.resource
 
-import javax.ws.rs.{GET, Path, Produces}
-import javax.ws.rs.core.Response
 import javax.ws.rs.core.MediaType.APPLICATION_JSON
+import javax.ws.rs.core.Response
+import javax.ws.rs.{GET, Path, Produces}
 
 import org.joda.time.DateTime
 
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/resource/MetricDefinitionResource.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/resource/MetricDefinitionResource.scala
new file mode 100644
index 0000000..aacea79
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/resource/MetricDefinitionResource.scala
@@ -0,0 +1,28 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.ambari.metrics.adservice.resource
+
+class MetricDefinitionResource {
+
+  /*
+    GET component definition
+    POST component definition
+    DELETE component definition
+    PUT component definition
+  */
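+
+  /*
+   A possible JAX-RS shape for these endpoints (sketch only, not wired up):
+
+     @GET @Path("/{name}") @Produces(APPLICATION_JSON)
+     def getDefinition(@PathParam("name") name: String): Response = ...
+
+     @POST @Consumes(APPLICATION_JSON)
+     def postDefinition(definition: MetricSourceDefinition): Response = ...
+  */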
+}
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/resource/RootResource.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/resource/RootResource.scala
index b92a145..22fe0ac 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/resource/RootResource.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/resource/RootResource.scala
@@ -17,9 +17,9 @@
   */
 package org.apache.ambari.metrics.adservice.resource
 
-import javax.ws.rs.{GET, Path, Produces}
-import javax.ws.rs.core.Response
 import javax.ws.rs.core.MediaType.APPLICATION_JSON
+import javax.ws.rs.core.Response
+import javax.ws.rs.{GET, Path, Produces}
 
 import org.joda.time.DateTime
 
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/resource/SubsystemResource.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/resource/SubsystemResource.scala
new file mode 100644
index 0000000..e7d7c9a
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/resource/SubsystemResource.scala
@@ -0,0 +1,26 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.ambari.metrics.adservice.resource
+
+class SubsystemResource {
+
+  /*
+    GET / UPDATE - parameters (which subsystem, parameters)
+    POST - Update sensitivity of a subsystem (which subsystem, increase or decrease, factor)
+   */
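+
+  /*
+   A possible POST payload for tuning sensitivity (field names assumed):
+
+     { "subsystem": "point-in-time", "direction": "increase", "factor": 1.5 }
+  */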
+}
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/service/ADQueryService.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/service/ADQueryService.scala
index 0161166..8e6f511 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/service/ADQueryService.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/service/ADQueryService.scala
@@ -17,6 +17,18 @@
   */
 package org.apache.ambari.metrics.adservice.service
 
+import org.apache.ambari.metrics.adservice.model.AnomalyType.AnomalyType
+import org.apache.ambari.metrics.adservice.model.SingleMetricAnomalyInstance
+
 trait ADQueryService {
 
+  /**
+    * API to return list of single metric anomalies satisfying a set of conditions from the anomaly store.
+    * @param anomalyType Type of the anomaly (Point In Time / Trend)
+    * @param startTime Start of time range
+    * @param endTime End of time range
+    * @param limit Maximum number of anomalies to return, ranked by anomaly score.
+    * @return list of anomalies matching the conditions
+    */
+  def getTopNAnomaliesByType(anomalyType: AnomalyType, startTime: Long, endTime: Long, limit: Int): List[SingleMetricAnomalyInstance]
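+
+  /*
+   Example (illustrative only): the 10 highest-scoring point-in-time anomalies
+   from the last hour.
+
+     val now = System.currentTimeMillis()
+     val top = adQueryService.getTopNAnomaliesByType(AnomalyType.POINT_IN_TIME, now - 3600000L, now, 10)
+  */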
 }
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/service/ADQueryServiceImpl.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/service/ADQueryServiceImpl.scala
index fe00f58..e5efa44 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/service/ADQueryServiceImpl.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/service/ADQueryServiceImpl.scala
@@ -16,7 +16,22 @@
   * limitations under the License.
   */
 package org.apache.ambari.metrics.adservice.service
+
+import org.apache.ambari.metrics.adservice.model.AnomalyType.AnomalyType
+import org.apache.ambari.metrics.adservice.model.SingleMetricAnomalyInstance
 
 class ADQueryServiceImpl extends ADQueryService {
 
+  /**
+    * Implementation to return list of anomalies satisfying a set of conditions from the anomaly store.
+    *
+    * @param anomalyType Type of the anomaly (Point In Time / Trend)
+    * @param startTime   Start of time range
+    * @param endTime     End of time range
+    * @param limit       Maximum number of anomalies to return, ranked by anomaly score.
+    * @return list of anomalies matching the conditions
+    */
+  override def getTopNAnomaliesByType(anomalyType: AnomalyType, startTime: Long, endTime: Long, limit: Int): List[SingleMetricAnomalyInstance] = {
+    //TODO : Query the anomaly store once it is wired in; stub returns an empty list for now.
+    val anomalies = List.empty[SingleMetricAnomalyInstance]
+    anomalies
+  }
 }
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/spark/prototype/MetricAnomalyDetector.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/spark/prototype/MetricAnomalyDetector.scala
index 6122f5e..90c564e 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/spark/prototype/MetricAnomalyDetector.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/spark/prototype/MetricAnomalyDetector.scala
@@ -16,22 +16,6 @@
  */
 package org.apache.ambari.metrics.adservice.spark.prototype
 
-import java.io.{FileInputStream, IOException, InputStream}
-import java.util
-import java.util.Properties
-import java.util.logging.LogManager
-
-import com.fasterxml.jackson.databind.ObjectMapper
-import org.apache.ambari.metrics.adservice.prototype.core.MetricsCollectorInterface
-import org.apache.spark.SparkConf
-import org.apache.spark.streaming._
-import org.apache.spark.streaming.kafka._
-import org.apache.ambari.metrics.adservice.prototype.methods.{AnomalyDetectionTechnique, MetricAnomaly}
-import org.apache.ambari.metrics.adservice.prototype.methods.ema.{EmaModelLoader, EmaTechnique}
-import org.apache.hadoop.metrics2.sink.timeline.TimelineMetrics
-import org.apache.log4j.Logger
-import org.apache.spark.storage.StorageLevel
-
 object MetricAnomalyDetector {
 
   /*
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/spark/prototype/SparkPhoenixReader.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/spark/prototype/SparkPhoenixReader.scala
index ac00764..466225f 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/spark/prototype/SparkPhoenixReader.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/spark/prototype/SparkPhoenixReader.scala
@@ -17,11 +17,6 @@
 
 package org.apache.ambari.metrics.adservice.spark.prototype
 
-import org.apache.ambari.metrics.adservice.prototype.methods.ema.EmaTechnique
-import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric
-import org.apache.spark.sql.SQLContext
-import org.apache.spark.{SparkConf, SparkContext}
-
 object SparkPhoenixReader {
 
   def main(args: Array[String]) {
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/subsystem/pointintime/PointInTimeAnomalyInstance.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/subsystem/pointintime/PointInTimeAnomalyInstance.scala
new file mode 100644
index 0000000..63cf8c7
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/subsystem/pointintime/PointInTimeAnomalyInstance.scala
@@ -0,0 +1,48 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.ambari.metrics.adservice.subsystem.pointintime
+
+import java.util.Date
+
+import org.apache.ambari.metrics.adservice.common.Season
+import org.apache.ambari.metrics.adservice.metadata.MetricKey
+import org.apache.ambari.metrics.adservice.model.AnomalyDetectionMethod.AnomalyDetectionMethod
+import org.apache.ambari.metrics.adservice.model.AnomalyType.AnomalyType
+import org.apache.ambari.metrics.adservice.model.{AnomalyType, SingleMetricAnomalyInstance}
+
+class PointInTimeAnomalyInstance(val metricKey: MetricKey,
+                                 val timestamp: Long,
+                                 val metricValue: Double,
+                                 val methodType: AnomalyDetectionMethod,
+                                 val anomalyScore: Double,
+                                 val anomalousSeason: Season,
+                                 val modelParameters: String) extends SingleMetricAnomalyInstance {
+
+  override val anomalyType: AnomalyType = AnomalyType.POINT_IN_TIME
+
+  private def anomalyToString : String = {
+      "Method=" + methodType + ", AnomalyScore=" + anomalyScore + ", Season=" + anomalousSeason.toString +
+        ", Model Parameters=" + modelParameters
+  }
+
+  override def toString: String = {
+    "Metric : [" + metricKey.toString + ", Metric Value=" + metricValue + " @ Time = " + new Date(timestamp) +  "], Anomaly : [" + anomalyToString + "]"
+  }
+
+}
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/subsystem/trend/TrendAnomalyInstance.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/subsystem/trend/TrendAnomalyInstance.scala
new file mode 100644
index 0000000..125da34
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/subsystem/trend/TrendAnomalyInstance.scala
@@ -0,0 +1,29 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.ambari.metrics.adservice.subsystem.trend
+
+import org.apache.ambari.metrics.adservice.common.{Season, TimeRange}
+import org.apache.ambari.metrics.adservice.metadata.MetricKey
+import org.apache.ambari.metrics.adservice.model.AnomalyDetectionMethod.AnomalyDetectionMethod
+import org.apache.ambari.metrics.adservice.model.AnomalyType.AnomalyType
+import org.apache.ambari.metrics.adservice.model.{AnomalyType, SingleMetricAnomalyInstance}
+
+case class TrendAnomalyInstance (metricKey: MetricKey,
+                                 anomalousPeriod: TimeRange,
+                                 referencePeriod: TimeRange,
+                                 methodType: AnomalyDetectionMethod,
+                                 anomalyScore: Double,
+                                 seasonInfo: Season,
+                                 modelParameters: String) extends SingleMetricAnomalyInstance {
+
+  override val anomalyType: AnomalyType = AnomalyType.TREND
+
+  private def anomalyToString : String = {
+    "Method=" + methodType + ", AnomalyScore=" + anomalyScore + ", Season=" + anomalousPeriod.toString +
+      ", Model Parameters=" + modelParameters
+  }
+
+  override def toString: String = {
+    "Metric : [" + metricKey.toString + ", AnomalousPeriod=" + anomalousPeriod + ", ReferencePeriod=" + referencePeriod +
+      "], Anomaly : [" + anomalyToString + "]"
+  }
+}
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/java/org/apache/ambari/metrics/adservice/prototype/TestTukeys.java b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/java/org/apache/ambari/metrics/adservice/prototype/TestTukeys.java
index 57a6f34..1077a9c 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/java/org/apache/ambari/metrics/adservice/prototype/TestTukeys.java
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/java/org/apache/ambari/metrics/adservice/prototype/TestTukeys.java
@@ -17,9 +17,9 @@
  */
 package org.apache.ambari.metrics.adservice.prototype;
 
-import org.apache.ambari.metrics.adservice.prototype.methods.MetricAnomaly;
 import org.apache.ambari.metrics.adservice.prototype.core.MetricsCollectorInterface;
 import org.apache.ambari.metrics.adservice.prototype.core.RFunctionInvoker;
+import org.apache.ambari.metrics.adservice.prototype.methods.MetricAnomaly;
 import org.apache.ambari.metrics.adservice.prototype.methods.ema.EmaTechnique;
 import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
 import org.apache.hadoop.metrics2.sink.timeline.TimelineMetrics;
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionAppConfigTest.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionAppConfigTest.scala
new file mode 100644
index 0000000..8e3a949
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionAppConfigTest.scala
@@ -0,0 +1,54 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.ambari.metrics.adservice.app
+
+import java.io.File
+
+import javax.validation.Validator
+
+import org.scalatest.FunSuite
+
+import com.fasterxml.jackson.databind.ObjectMapper
+
+import io.dropwizard.configuration.YamlConfigurationFactory
+import io.dropwizard.jackson.Jackson
+import io.dropwizard.jersey.validation.Validators
+
+class AnomalyDetectionAppConfigTest extends FunSuite {
+
+  test("testConfiguration") {
+
+    val objectMapper: ObjectMapper = Jackson.newObjectMapper()
+    val validator: Validator = Validators.newValidator
+    val factory: YamlConfigurationFactory[AnomalyDetectionAppConfig] =
+      new YamlConfigurationFactory[AnomalyDetectionAppConfig](classOf[AnomalyDetectionAppConfig], validator, objectMapper, "")
+
+    val classLoader = getClass.getClassLoader
+    val file = new File(classLoader.getResource("config.yml").getFile)
+    val config = factory.build(file)
+
+    assert(config.isInstanceOf[AnomalyDetectionAppConfig])
+
+    assert(config.getMetricManagerServiceConfiguration.getInputDefinitionDirectory == "/etc/adservice/conf/input-definitions-directory")
+
+    assert(config.getMetricCollectorConfiguration.getHostPortList == "host1:6188,host2:6188")
+
+    assert(config.getAdServiceConfiguration.getAnomalyDataTtl == 604800)
+  }
+
+}
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/app/DefaultADResourceSpecTest.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/app/DefaultADResourceSpecTest.scala
index c088855..65cf609 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/app/DefaultADResourceSpecTest.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/app/DefaultADResourceSpecTest.scala
@@ -21,8 +21,8 @@ import javax.ws.rs.client.Client
 import javax.ws.rs.core.MediaType.APPLICATION_JSON
 
 import org.apache.ambari.metrics.adservice.app.DropwizardAppRuleHelper.withAppRunning
-import org.glassfish.jersey.client.{ClientConfig, JerseyClientBuilder}
 import org.glassfish.jersey.client.ClientProperties.{CONNECT_TIMEOUT, READ_TIMEOUT}
+import org.glassfish.jersey.client.{ClientConfig, JerseyClientBuilder}
 import org.glassfish.jersey.filter.LoggingFilter
 import org.glassfish.jersey.jaxb.internal.XmlJaxbElementProvider
 import org.joda.time.DateTime
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/common/RangeTest.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/common/RangeTest.scala
new file mode 100644
index 0000000..b610b97
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/common/RangeTest.scala
@@ -0,0 +1,37 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.ambari.metrics.adservice.common
+
+import org.scalatest.FlatSpec
+
+class RangeTest extends FlatSpec {
+
+  "A Range " should " return true for inner and boundary values" in {
+    val range : Range = Range(4,6)
+    assert(range.withinRange(5))
+    assert(range.withinRange(6))
+    assert(range.withinRange(4))
+    assert(!range.withinRange(7))
+  }
+
+  it should "accept same lower and higher range values" in {
+    val range : Range = Range(4,4)
+    assert(range.withinRange(4))
+  }
+
+}
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/common/SeasonTest.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/common/SeasonTest.scala
new file mode 100644
index 0000000..4d542e8
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/common/SeasonTest.scala
@@ -0,0 +1,91 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.ambari.metrics.adservice.common
+
+import java.util.Calendar
+
+import org.scalatest.FunSuite
+
+class SeasonTest extends FunSuite {
+
+  test("testBelongsTo") {
+
+    //Create Season for weekdays. Mon to Friday and 9AM - 5PM
+    var season : Season = Season(Range(Calendar.MONDAY,Calendar.FRIDAY), Range(9,17))
+
+    //Try with a timestamp on a Monday, @ 9AM.
+    val c = Calendar.getInstance
+    c.set(2017, Calendar.OCTOBER, 30, 9, 0, 0)
+    assert(season.belongsTo(c.getTimeInMillis))
+
+    c.set(2017, Calendar.OCTOBER, 30, 18, 0, 0)
+    assert(!season.belongsTo(c.getTimeInMillis))
+
+    //Try with a timestamp on a Sunday, @ 9AM.
+    c.set(2017, Calendar.OCTOBER, 29, 9, 0, 0)
+    assert(!season.belongsTo(c.getTimeInMillis))
+
+    //Create Season for Monday 11AM - 12Noon.
+    season = Season(Range(Calendar.MONDAY,Calendar.MONDAY), Range(11,12))
+    c.set(2017, Calendar.OCTOBER, 30, 9, 0, 0)
+    assert(!season.belongsTo(c.getTimeInMillis))
+
+    c.set(2017, Calendar.OCTOBER, 30, 11, 30, 0)
+    assert(season.belongsTo(c.getTimeInMillis))
+
+
+    //Create Season from Friday to Monday and 9AM - 5PM
+    season = Season(Range(Calendar.FRIDAY,Calendar.MONDAY), Range(9,17))
+
+    //Try with a timestamp on a Monday, @ 9AM.
+    c.set(2017, Calendar.OCTOBER, 30, 9, 0, 0)
+    assert(season.belongsTo(c.getTimeInMillis))
+
+    //Try with a timestamp on a Sunday, @ 3PM.
+    c.set(2017, Calendar.OCTOBER, 29, 15, 0, 0)
+    assert(season.belongsTo(c.getTimeInMillis))
+
+    //Try with a timestamp on a Wednesday, @ 9AM.
+    c.set(2017, Calendar.NOVEMBER, 1, 9, 0, 0)
+    assert(!season.belongsTo(c.getTimeInMillis))
+  }
+
+  test("testEquals") {
+
+    var season1: Season =  Season(Range(4,5), Range(2,3))
+    var season2: Season =  Season(Range(4,5), Range(2,3))
+    assert(season1 == season2)
+
+    var season3: Season =  Season(Range(4,4), Range(2,3))
+    assert(!(season1 == season3))
+  }
+
+  test("testSerialize") {
+    val season1 : Season = Season(Range(Calendar.MONDAY,Calendar.FRIDAY), Range(9,17))
+
+    val seasonString = Season.serialize(season1)
+
+    val season2 : Season = Season.deserialize(seasonString)
+    assert(season1 == season2)
+
+    val season3 : Season = Season(Range(Calendar.MONDAY,Calendar.THURSDAY), Range(9,17))
+    assert(!(season2 == season3))
+
+  }
+
+}
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/metadata/AMSMetadataProviderTest.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/metadata/AMSMetadataProviderTest.scala
new file mode 100644
index 0000000..bd38e9a
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/metadata/AMSMetadataProviderTest.scala
@@ -0,0 +1,43 @@
+/**
+  * Licensed to the Apache Software Foundation (ASF) under one
+  * or more contributor license agreements.  See the NOTICE file
+  * distributed with this work for additional information
+  * regarding copyright ownership.  The ASF licenses this file
+  * to you under the Apache License, Version 2.0 (the
+  * "License"); you may not use this file except in compliance
+  * with the License.  You may obtain a copy of the License at
+  *
+  * http://www.apache.org/licenses/LICENSE-2.0
+  *
+  * Unless required by applicable law or agreed to in writing, software
+  * distributed under the License is distributed on an "AS IS" BASIS,
+  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  * See the License for the specific language governing permissions and
+  * limitations under the License.
+  */
+
+package org.apache.ambari.metrics.adservice.metadata
+
+import org.apache.ambari.metrics.adservice.configuration.MetricCollectorConfiguration
+import org.apache.hadoop.metrics2.sink.timeline.TimelineMetricKey
+import org.scalatest.FunSuite
+
+class AMSMetadataProviderTest extends FunSuite {
+
+  test("testFromTimelineMetricKey") {
+    val timelineMetricKeys: java.util.Set[TimelineMetricKey] = new java.util.HashSet[TimelineMetricKey]()
+
+    val uuid: Array[Byte] = Array.empty[Byte]
+
+    for (i <- 1 to 3) {
+      val key: TimelineMetricKey = new TimelineMetricKey("M" + i, "App", null, "H", uuid)
+      timelineMetricKeys.add(key)
+    }
+
+    val aMSMetadataProvider : ADMetadataProvider = new ADMetadataProvider(new MetricCollectorConfiguration)
+
+    val metricKeys : Set[MetricKey] = aMSMetadataProvider.fromTimelineMetricKey(timelineMetricKeys)
+    assert(metricKeys.size == 3)
+  }
+
+}
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/metadata/MetricManagerServiceTest.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/metadata/MetricManagerServiceTest.scala
new file mode 100644
index 0000000..8e19a0f
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/metadata/MetricManagerServiceTest.scala
@@ -0,0 +1,130 @@
+/**
+  * Licensed to the Apache Software Foundation (ASF) under one
+  * or more contributor license agreements.  See the NOTICE file
+  * distributed with this work for additional information
+  * regarding copyright ownership.  The ASF licenses this file
+  * to you under the Apache License, Version 2.0 (the
+  * "License"); you may not use this file except in compliance
+  * with the License.  You may obtain a copy of the License at
+  *
+  * http://www.apache.org/licenses/LICENSE-2.0
+  *
+  * Unless required by applicable law or agreed to in writing, software
+  * distributed under the License is distributed on an "AS IS" BASIS,
+  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  * See the License for the specific language governing permissions and
+  * limitations under the License.
+  */
+
+package org.apache.ambari.metrics.adservice.metadata
+
+import org.apache.ambari.metrics.adservice.app.AnomalyDetectionAppConfig
+import org.apache.ambari.metrics.adservice.db.AdMetadataStoreAccessor
+import org.easymock.EasyMock.{anyObject, expect, expectLastCall, replay}
+import org.scalatest.FunSuite
+import org.scalatest.easymock.EasyMockSugar
+
+class MetricManagerServiceTest extends FunSuite {
+
+  test("testAddDefinition") {
+
+    val definitions : scala.collection.mutable.MutableList[MetricSourceDefinition] = scala.collection.mutable.MutableList.empty[MetricSourceDefinition]
+
+    for (i <- 1 to 3) {
+      val msd1 : MetricSourceDefinition = new MetricSourceDefinition("TestDefinition" + i, "testAppId", MetricSourceDefinitionType.API)
+      definitions.+=(msd1)
+    }
+
+    val newDef : MetricSourceDefinition = new MetricSourceDefinition("NewDefinition", "testAppId", MetricSourceDefinitionType.API)
+
+    val adMetadataStoreAccessor: AdMetadataStoreAccessor = EasyMockSugar.niceMock[AdMetadataStoreAccessor]
+    expect(adMetadataStoreAccessor.getSavedInputDefinitions).andReturn(definitions.toList).once()
+    expect(adMetadataStoreAccessor.saveInputDefinition(newDef)).andReturn(true).once()
+    replay(adMetadataStoreAccessor)
+
+    val metricManagerService: MetricManagerServiceImpl = new MetricManagerServiceImpl(new AnomalyDetectionAppConfig, adMetadataStoreAccessor)
+
+    metricManagerService.setAdMetadataStoreAccessor(adMetadataStoreAccessor)
+
+    metricManagerService.addDefinition(newDef)
+
+    assert(metricManagerService.metricSourceDefinitionMap.size == 4)
+    assert(metricManagerService.metricSourceDefinitionMap.get("testDefinition") != null)
+  }
+
+  test("testGetDefinitionByName") {
+    val definitions : scala.collection.mutable.MutableList[MetricSourceDefinition] = scala.collection.mutable.MutableList.empty[MetricSourceDefinition]
+
+    for (i <- 1 to 3) {
+      val msd1 : MetricSourceDefinition = new MetricSourceDefinition("TestDefinition" + i, "testAppId", MetricSourceDefinitionType.API)
+      definitions.+=(msd1)
+    }
+
+    val adMetadataStoreAccessor: AdMetadataStoreAccessor = EasyMockSugar.niceMock[AdMetadataStoreAccessor]
+    expect(adMetadataStoreAccessor.getSavedInputDefinitions).andReturn(definitions.toList).once()
+    replay(adMetadataStoreAccessor)
+
+    val metricManagerService: MetricManagerServiceImpl = new MetricManagerServiceImpl(new AnomalyDetectionAppConfig, adMetadataStoreAccessor)
+
+    metricManagerService.setAdMetadataStoreAccessor(adMetadataStoreAccessor)
+    for (i <- 1 to 3) {
+      val definition: MetricSourceDefinition = metricManagerService.getDefinitionByName("TestDefinition" + i)
+      assert(definition != null)
+    }
+  }
+
+  test("testGetDefinitionByAppId") {
+    val definitions : scala.collection.mutable.MutableList[MetricSourceDefinition] = scala.collection.mutable.MutableList.empty[MetricSourceDefinition]
+
+    for (i <- 1 to 3) {
+      var msd1 : MetricSourceDefinition = null
+      if (i == 2) {
+        msd1 = new MetricSourceDefinition("TestDefinition" + i, null, MetricSourceDefinitionType.API)
+      } else {
+        msd1 = new MetricSourceDefinition("TestDefinition" + i, "testAppId", MetricSourceDefinitionType.API)
+      }
+      definitions.+=(msd1)
+    }
+
+    val adMetadataStoreAccessor: AdMetadataStoreAccessor = EasyMockSugar.niceMock[AdMetadataStoreAccessor]
+    expect(adMetadataStoreAccessor.getSavedInputDefinitions).andReturn(definitions.toList).once()
+    replay(adMetadataStoreAccessor)
+
+    val metricManagerService: MetricManagerServiceImpl = new MetricManagerServiceImpl(new AnomalyDetectionAppConfig, adMetadataStoreAccessor)
+
+    metricManagerService.setAdMetadataStoreAccessor(adMetadataStoreAccessor)
+    val definitionsByAppId: List[MetricSourceDefinition] = metricManagerService.getDefinitionByAppId("testAppId")
+    assert(definitionsByAppId.size == 2)
+  }
+
+  test("testDeleteDefinitionByName") {
+    val definitions : scala.collection.mutable.MutableList[MetricSourceDefinition] = scala.collection.mutable.MutableList.empty[MetricSourceDefinition]
+
+    for (i <- 1 to 3) {
+      var msd1 : MetricSourceDefinition = null
+      if (i == 2) {
+        msd1 = new MetricSourceDefinition("TestDefinition" + i, null, MetricSourceDefinitionType.CONFIG)
+      } else {
+        msd1 = new MetricSourceDefinition("TestDefinition" + i, "testAppId", MetricSourceDefinitionType.API)
+      }
+      definitions.+=(msd1)
+    }
+
+    val adMetadataStoreAccessor: AdMetadataStoreAccessor = EasyMockSugar.niceMock[AdMetadataStoreAccessor]
+    expect(adMetadataStoreAccessor.getSavedInputDefinitions).andReturn(definitions.toList).once()
+    expect(adMetadataStoreAccessor.removeInputDefinition(anyObject[String])).andReturn(true).times(2)
+    replay(adMetadataStoreAccessor)
+
+    val metricManagerService: MetricManagerServiceImpl = new MetricManagerServiceImpl(new AnomalyDetectionAppConfig, adMetadataStoreAccessor)
+
+    metricManagerService.setAdMetadataStoreAccessor(adMetadataStoreAccessor)
+
+    var success: Boolean = metricManagerService.deleteDefinitionByName("TestDefinition1")
+    assert(success)
+    success = metricManagerService.deleteDefinitionByName("TestDefinition2")
+    assert(!success)
+    success = metricManagerService.deleteDefinitionByName("TestDefinition3")
+    assert(success)
+  }
+
+}
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/metadata/MetricSourceDefinitionTest.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/metadata/MetricSourceDefinitionTest.scala
new file mode 100644
index 0000000..c4d639c
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/metadata/MetricSourceDefinitionTest.scala
@@ -0,0 +1,72 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.ambari.metrics.adservice.metadata
+
+import org.scalatest.FunSuite
+
+class MetricSourceDefinitionTest extends FunSuite {
+
+  test("createNewMetricSourceDefinition") {
+    val msd : MetricSourceDefinition = new MetricSourceDefinition("testDefinition", "testAppId", MetricSourceDefinitionType.API)
+
+    assert(msd.definitionName == "testDefinition")
+    assert(msd.appId == "testAppId")
+    assert(msd.definitionSource == MetricSourceDefinitionType.API)
+
+    assert(msd.hosts.isEmpty)
+    assert(msd.metricDefinitions.isEmpty)
+    assert(msd.associatedAnomalySubsystems.isEmpty)
+    assert(msd.relatedDefinitions.isEmpty)
+  }
+
+  test("testAddMetricDefinition") {
+    val msd : MetricSourceDefinition = new MetricSourceDefinition("testDefinition", "testAppId", MetricSourceDefinitionType.API)
+    assert(msd.metricDefinitions.isEmpty)
+
+    msd.addMetricDefinition(MetricDefinition("TestMetric", "TestApp", List.empty[String]))
+    assert(msd.metricDefinitions.nonEmpty)
+  }
+
+  test("testEquals") {
+    val msd1 : MetricSourceDefinition = new MetricSourceDefinition("testDefinition", "testAppId", MetricSourceDefinitionType.API)
+    val msd2 : MetricSourceDefinition = new MetricSourceDefinition("testDefinition", "testAppId2", MetricSourceDefinitionType.API)
+    assert(msd1 == msd2)
+  }
+
+  test("testRemoveMetricDefinition") {
+    val msd : MetricSourceDefinition = new MetricSourceDefinition("testDefinition", "testAppId", MetricSourceDefinitionType.API)
+    assert(msd.metricDefinitions.isEmpty)
+
+    msd.addMetricDefinition(MetricDefinition("TestMetric", "TestApp", List.empty[String]))
+    assert(msd.metricDefinitions.nonEmpty)
+
+    msd.removeMetricDefinition(MetricDefinition("TestMetric", "TestApp", List.empty[String]))
+    assert(msd.metricDefinitions.isEmpty)
+  }
+
+  test("serializeDeserialize") {
+    val msd : MetricSourceDefinition = new MetricSourceDefinition("testDefinition", "testAppId", MetricSourceDefinitionType.API)
+    val msdString: String = MetricSourceDefinition.serialize(msd)
+    assert(msdString.nonEmpty)
+
+    val msd2: MetricSourceDefinition = MetricSourceDefinition.deserialize(msdString)
+    assert(msd2 != null)
+    assert(msd == msd2)
+
+  }
+}
diff --git a/ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/TimelineMetricKey.java b/ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/TimelineMetricKey.java
new file mode 100644
index 0000000..7619811
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/TimelineMetricKey.java
@@ -0,0 +1,59 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.metrics2.sink.timeline;
+
+import org.apache.commons.lang.StringUtils;
+
+public class TimelineMetricKey {
+  public String metricName;
+  public String appId;
+  public String instanceId = null;
+  public String hostName;
+  public byte[] uuid;
+
+  public TimelineMetricKey(String metricName, String appId, String instanceId, String hostName, byte[] uuid) {
+    this.metricName = metricName;
+    this.appId = appId;
+    this.instanceId = instanceId;
+    this.hostName = hostName;
+    this.uuid = uuid;
+  }
+
+  @Override
+  public boolean equals(Object o) {
+    if (this == o) return true;
+    if (o == null || getClass() != o.getClass()) return false;
+
+    TimelineMetricKey that = (TimelineMetricKey) o;
+
+    if (!metricName.equals(that.metricName)) return false;
+    if (!appId.equals(that.appId)) return false;
+    if (!hostName.equals(that.hostName)) return false;
+    return (StringUtils.isNotEmpty(instanceId) ? instanceId.equals(that.instanceId) : StringUtils.isEmpty(that.instanceId));
+  }
+
+  @Override
+  public int hashCode() {
+    int result = metricName.hashCode();
+    result = 31 * result + (appId != null ? appId.hashCode() : 0);
+    result = 31 * result + (instanceId != null ? instanceId.hashCode() : 0);
+    result = 31 * result + (hostName != null ? hostName.hashCode() : 0);
+    return result;
+  }
+
+}
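
For context, a minimal sketch of how the new key behaves in hash-based
collections, using hypothetical metric values: equality is defined by the
name/app/instance/host tuple, and the uuid bytes do not participate in
equals()/hashCode(), so keys computed at different times dedupe cleanly.

    import java.util.HashSet;
    import java.util.Set;

    import org.apache.hadoop.metrics2.sink.timeline.TimelineMetricKey;

    public class TimelineMetricKeyExample {
      public static void main(String[] args) {
        Set<TimelineMetricKey> keys = new HashSet<>();
        // Same logical metric instance, different uuid payloads.
        keys.add(new TimelineMetricKey("cpu_user", "HOST", null, "host1", new byte[]{1}));
        keys.add(new TimelineMetricKey("cpu_user", "HOST", null, "host1", new byte[]{2}));
        System.out.println(keys.size()); // prints 1
      }
    }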
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/HBaseTimelineMetricsService.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/HBaseTimelineMetricsService.java
index 4450d65..a96be30 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/HBaseTimelineMetricsService.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/HBaseTimelineMetricsService.java
@@ -19,6 +19,31 @@ package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline
 
 import com.google.common.collect.ArrayListMultimap;
 import com.google.common.collect.Multimap;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.DEFAULT_TOPN_HOSTS_LIMIT;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.USE_GROUPBY_AGGREGATOR_QUERIES;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.availability.AggregationTaskRunner.ACTUAL_AGGREGATOR_NAMES;
+
+import java.io.IOException;
+import java.net.MalformedURLException;
+import java.net.URISyntaxException;
+import java.net.UnknownHostException;
+import java.sql.SQLException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.Executors;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.ThreadFactory;
+import java.util.concurrent.TimeUnit;
+import java.util.regex.Matcher;
+import java.util.regex.Pattern;
+
 import org.apache.commons.collections.MapUtils;
 import org.apache.commons.lang.StringUtils;
 import org.apache.commons.logging.Log;
@@ -29,6 +54,7 @@ import org.apache.hadoop.metrics2.sink.timeline.ContainerMetric;
 import org.apache.hadoop.metrics2.sink.timeline.MetricHostAggregate;
 import org.apache.hadoop.metrics2.sink.timeline.Precision;
 import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
+import org.apache.hadoop.metrics2.sink.timeline.TimelineMetricKey;
 import org.apache.hadoop.metrics2.sink.timeline.TimelineMetricMetadata;
 import org.apache.hadoop.metrics2.sink.timeline.TimelineMetricWithAggregatedValues;
 import org.apache.hadoop.metrics2.sink.timeline.TimelineMetrics;
@@ -51,29 +77,6 @@ import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.TopNCondition;
 
-import java.io.IOException;
-import java.net.MalformedURLException;
-import java.net.URISyntaxException;
-import java.net.UnknownHostException;
-import java.sql.SQLException;
-import java.util.ArrayList;
-import java.util.Collections;
-import java.util.HashMap;
-import java.util.HashSet;
-import java.util.Iterator;
-import java.util.List;
-import java.util.Map;
-import java.util.Set;
-import java.util.concurrent.ConcurrentHashMap;
-import java.util.concurrent.Executors;
-import java.util.concurrent.ScheduledExecutorService;
-import java.util.concurrent.ThreadFactory;
-import java.util.concurrent.TimeUnit;
-
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.DEFAULT_TOPN_HOSTS_LIMIT;
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.USE_GROUPBY_AGGREGATOR_QUERIES;
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.availability.AggregationTaskRunner.ACTUAL_AGGREGATOR_NAMES;
-
 public class HBaseTimelineMetricsService extends AbstractService implements TimelineMetricStore {
 
   static final Log LOG = LogFactory.getLog(HBaseTimelineMetricsService.class);
@@ -437,11 +440,17 @@ public class HBaseTimelineMetricsService extends AbstractService implements Time
   }
 
   @Override
-  public Map<String, List<TimelineMetricMetadata>> getTimelineMetricMetadata(String query) throws SQLException, IOException {
+  public Map<String, List<TimelineMetricMetadata>> getTimelineMetricMetadata(String appId, String metricPattern,
+                                                                             boolean includeBlacklistedMetrics) throws SQLException, IOException {
     Map<TimelineMetricMetadataKey, TimelineMetricMetadata> metadata =
       metricMetadataManager.getMetadataCache();
 
-    boolean includeBlacklistedMetrics = StringUtils.isNotEmpty(query) && "all".equalsIgnoreCase(query);
+    boolean filterByAppId = StringUtils.isNotEmpty(appId);
+    boolean filterByMetricName = StringUtils.isNotEmpty(metricPattern);
+    Pattern metricFilterPattern = null;
+    if (filterByMetricName) {
+      metricFilterPattern = Pattern.compile(metricPattern);
+    }
 
     // Group Metadata by AppId
     Map<String, List<TimelineMetricMetadata>> metadataByAppId = new HashMap<>();
@@ -450,10 +459,23 @@ public class HBaseTimelineMetricsService extends AbstractService implements Time
       if (!includeBlacklistedMetrics && !metricMetadata.isWhitelisted()) {
         continue;
       }
-      List<TimelineMetricMetadata> metadataList = metadataByAppId.get(metricMetadata.getAppId());
+
+      String currentAppId = metricMetadata.getAppId();
+      if (filterByAppId && !currentAppId.equals(appId)) {
+        continue;
+      }
+
+      if (filterByMetricName) {
+        Matcher m = metricFilterPattern.matcher(metricMetadata.getMetricName());
+        if (!m.find()) {
+          continue;
+        }
+      }
+
+      List<TimelineMetricMetadata> metadataList = metadataByAppId.get(currentAppId);
       if (metadataList == null) {
         metadataList = new ArrayList<>();
-        metadataByAppId.put(metricMetadata.getAppId(), metadataList);
+        metadataByAppId.put(currentAppId, metadataList);
       }
 
       metadataList.add(metricMetadata);
@@ -463,8 +485,42 @@ public class HBaseTimelineMetricsService extends AbstractService implements Time
   }
 
   @Override
-  public Map<String, TimelineMetricMetadataKey> getUuids() throws SQLException, IOException {
-    return metricMetadataManager.getUuidKeyMap();
+  public byte[] getUuid(String metricName, String appId, String instanceId, String hostname) throws SQLException, IOException {
+    return metricMetadataManager.getUuid(metricName, appId, instanceId, hostname);
+  }
+
+  /**
+   * Given a metricName, appId, instanceId and an optional hostname, return the
+   * set of TimelineMetricKey objects for every matching unique metric instance.
+   *
+   * @param metricName metric name to look up
+   * @param appId      id of the application that emits the metric
+   * @param instanceId instance id of the application (may be empty)
+   * @param hostname   host to look up; if empty, covers all hosts running appId
+   * @return set of matching TimelineMetricKey objects
+   * @throws SQLException
+   * @throws IOException
+   */
+  @Override
+  public Set<TimelineMetricKey> getTimelineMetricKey(String metricName, String appId, String instanceId, String hostname) throws SQLException, IOException {
+
+    if (StringUtils.isEmpty(hostname)) {
+      Set<String> hosts = new HashSet<>();
+      for (String host : metricMetadataManager.getHostedAppsCache().keySet()) {
+        if (metricMetadataManager.getHostedAppsCache().get(host).getHostedApps().contains(appId)) {
+          hosts.add(host);
+        }
+      }
+      Set<TimelineMetricKey> timelineMetricKeys = new HashSet<>();
+      for (String host : hosts) {
+        byte[] uuid = metricMetadataManager.getUuid(metricName, appId, instanceId, host);
+        timelineMetricKeys.add(new TimelineMetricKey(metricName, appId, instanceId, host, uuid));
+      }
+      return timelineMetricKeys;
+    } else {
+      byte[] uuid = metricMetadataManager.getUuid(metricName, appId, instanceId, hostname);
+      return Collections.singleton(new TimelineMetricKey(metricName, appId, instanceId, hostname, uuid));
+    }
   }
 
   @Override
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricStore.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricStore.java
index cdeefdc..f00bd91 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricStore.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricStore.java
@@ -21,11 +21,11 @@ import org.apache.hadoop.metrics2.sink.timeline.AggregationResult;
 import org.apache.hadoop.metrics2.sink.timeline.ContainerMetric;
 import org.apache.hadoop.metrics2.sink.timeline.Precision;
 import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
+import org.apache.hadoop.metrics2.sink.timeline.TimelineMetricKey;
 import org.apache.hadoop.metrics2.sink.timeline.TimelineMetricMetadata;
 import org.apache.hadoop.metrics2.sink.timeline.TimelineMetrics;
 import org.apache.hadoop.metrics2.sink.timeline.TopNConfig;
 import org.apache.hadoop.yarn.api.records.timeline.TimelinePutResponse;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.discovery.TimelineMetricMetadataKey;
 
 import java.io.IOException;
 import java.sql.SQLException;
@@ -81,7 +81,8 @@ public interface TimelineMetricStore {
    * @throws SQLException
    * @throws IOException
    */
-  Map<String, List<TimelineMetricMetadata>> getTimelineMetricMetadata(String query) throws SQLException, IOException;
+  Map<String, List<TimelineMetricMetadata>> getTimelineMetricMetadata(String appId, String metricPattern,
+                                                                             boolean includeBlacklistedMetrics) throws SQLException, IOException;
 
   TimelinePutResponse putHostAggregatedMetrics(AggregationResult aggregationResult) throws SQLException, IOException;
   /**
@@ -100,7 +101,7 @@ public interface TimelineMetricStore {
    */
   Map<String, Map<String,Set<String>>> getInstanceHostsMetadata(String instanceId, String appId) throws SQLException, IOException;
 
- Map<String, TimelineMetricMetadataKey> getUuids() throws SQLException, IOException;
+  byte[] getUuid(String metricName, String appId, String instanceId, String hostname) throws SQLException, IOException;
 
     /**
      * Return a list of known live collector nodes
@@ -109,4 +110,7 @@ public interface TimelineMetricStore {
   List<String> getLiveInstances();
 
   TimelineMetrics getAnomalyMetrics(String method, long startTime, long endTime, Integer limit) throws SQLException;
+
+  Set<TimelineMetricKey> getTimelineMetricKey(String metricName, String appId, String instanceId, String hostname) throws SQLException, IOException;
+
 }
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TimelineMetricMetadataManager.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TimelineMetricMetadataManager.java
index f9ad773..6b926ac 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TimelineMetricMetadataManager.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TimelineMetricMetadataManager.java
@@ -438,6 +438,16 @@ public class TimelineMetricMetadataManager {
     return ArrayUtils.addAll(metricUuid, hostUuid);
   }
 
+  public byte[] getUuid(String metricName, String appId, String instanceId, String hostname) {
+
+    byte[] metricUuid = getUuid(new TimelineClusterMetric(metricName, appId, instanceId, -1L));
+    if (StringUtils.isNotEmpty(hostname)) {
+      byte[] hostUuid = getUuidForHostname(hostname);
+      return ArrayUtils.addAll(metricUuid, hostUuid);
+    }
+    return metricUuid;
+  }
+
   public String getMetricNameFromUuid(byte[]  uuid) {
 
     byte[] metricUuid = uuid;
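
Because getUuid simply appends the host uuid to the metric uuid, a consumer
that knows the two widths can split the combined key back apart. A
hypothetical sketch (METRIC_UUID_LEN and the 16 + 4 split are assumed values
for illustration; the real widths come from the metadata manager's uuid
scheme, not from this patch):

    import java.util.Arrays;

    public class UuidSplitExample {
      static final int METRIC_UUID_LEN = 16; // assumed width of the metric part

      public static void main(String[] args) {
        byte[] combined = new byte[20]; // metric uuid + host uuid, e.g. 16 + 4
        byte[] metricUuid = Arrays.copyOfRange(combined, 0, METRIC_UUID_LEN);
        byte[] hostUuid = Arrays.copyOfRange(combined, METRIC_UUID_LEN, combined.length);
        System.out.println(metricUuid.length + " + " + hostUuid.length); // 16 + 4
      }
    }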
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/TimelineWebServices.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/TimelineWebServices.java
index 5d9bb35..db35686 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/TimelineWebServices.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/TimelineWebServices.java
@@ -27,6 +27,7 @@ import org.apache.hadoop.classification.InterfaceStability.Unstable;
 import org.apache.hadoop.metrics2.sink.timeline.AggregationResult;
 import org.apache.hadoop.metrics2.sink.timeline.ContainerMetric;
 import org.apache.hadoop.metrics2.sink.timeline.PrecisionLimitExceededException;
+import org.apache.hadoop.metrics2.sink.timeline.TimelineMetricKey;
 import org.apache.hadoop.metrics2.sink.timeline.TimelineMetricMetadata;
 import org.apache.hadoop.metrics2.sink.timeline.TopNConfig;
 import org.apache.hadoop.yarn.api.records.timeline.TimelineEntities;
@@ -50,6 +51,7 @@ import javax.servlet.http.HttpServletRequest;
 import javax.servlet.http.HttpServletResponse;
 import javax.ws.rs.Consumes;
 import javax.ws.rs.DELETE;
+import javax.ws.rs.DefaultValue;
 import javax.ws.rs.GET;
 import javax.ws.rs.POST;
 import javax.ws.rs.Path;
@@ -434,18 +436,24 @@ public class TimelineWebServices {
       throw new WebApplicationException(e, Response.Status.INTERNAL_SERVER_ERROR);
     }
   }
+
   @GET
   @Path("/metrics/metadata")
   @Produces({ MediaType.APPLICATION_JSON })
   public Map<String, List<TimelineMetricMetadata>> getTimelineMetricMetadata(
     @Context HttpServletRequest req,
     @Context HttpServletResponse res,
-    @QueryParam("query") String query
+    @QueryParam("appId") String appId,
+    @QueryParam("metricName") String metricPattern,
+    @QueryParam("includeAll") String includeBlacklistedMetrics
     ) {
     init(res);
 
     try {
-      return timelineMetricStore.getTimelineMetricMetadata(query);
+      return timelineMetricStore.getTimelineMetricMetadata(
+        parseStr(appId),
+        parseStr(metricPattern),
+        parseBoolean(includeBlacklistedMetrics));
     } catch (Exception e) {
       throw new WebApplicationException(e, Response.Status.INTERNAL_SERVER_ERROR);
     }
@@ -486,16 +494,40 @@ public class TimelineWebServices {
   }
 
   @GET
-  @Path("/metrics/uuids")
+  @Path("/metrics/uuid")
   @Produces({ MediaType.APPLICATION_JSON })
-  public Map<String, TimelineMetricMetadataKey> getUuids(
+  public byte[] getUuid(
     @Context HttpServletRequest req,
-    @Context HttpServletResponse res
+    @Context HttpServletResponse res,
+    @QueryParam("metricName") String metricName,
+    @QueryParam("appId") String appId,
+    @QueryParam("instanceId") String instanceId,
+    @QueryParam("hostname") String hostname
+    ) {
+    init(res);
+
+    try {
+      return timelineMetricStore.getUuid(metricName, appId, instanceId, hostname);
+    } catch (Exception e) {
+      throw new WebApplicationException(e, Response.Status.INTERNAL_SERVER_ERROR);
+    }
+  }
+
+  @GET
+  @Path("/metrics/metadata/key")
+  @Produces({ MediaType.APPLICATION_JSON })
+  public Set<TimelineMetricKey> getTimelineMetricKey(
+    @Context HttpServletRequest req,
+    @Context HttpServletResponse res,
+    @QueryParam("metricName") String metricName,
+    @QueryParam("appId") String appId,
+    @QueryParam("instanceId") String instanceId,
+    @QueryParam("hostname") String hostname
   ) {
     init(res);
 
     try {
-      return timelineMetricStore.getUuids();
+      return timelineMetricStore.getTimelineMetricKey(metricName, appId, instanceId, hostname);
     } catch (Exception e) {
       throw new WebApplicationException(e, Response.Status.INTERNAL_SERVER_ERROR);
     }
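
For reference, the reworked endpoints can be exercised directly against the
collector. These example requests assume the collector's usual
/ws/v1/timeline base path and default port 6188; adjust both for your
deployment. Note that metricName is treated as a regex pattern:

    GET http://collector:6188/ws/v1/timeline/metrics/metadata?appId=HOST&metricName=cpu.*&includeAll=true
    GET http://collector:6188/ws/v1/timeline/metrics/uuid?metricName=cpu_user&appId=HOST&hostname=host1
    GET http://collector:6188/ws/v1/timeline/metrics/metadata/key?metricName=cpu_user&appId=HOST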
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TestTimelineMetricStore.java b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TestTimelineMetricStore.java
index 7c879e1..42175a7 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TestTimelineMetricStore.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TestTimelineMetricStore.java
@@ -21,6 +21,7 @@ import org.apache.hadoop.metrics2.sink.timeline.AggregationResult;
 import org.apache.hadoop.metrics2.sink.timeline.ContainerMetric;
 import org.apache.hadoop.metrics2.sink.timeline.Precision;
 import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
+import org.apache.hadoop.metrics2.sink.timeline.TimelineMetricKey;
 import org.apache.hadoop.metrics2.sink.timeline.TimelineMetricMetadata;
 import org.apache.hadoop.metrics2.sink.timeline.TimelineMetrics;
 import org.apache.hadoop.metrics2.sink.timeline.TopNConfig;
@@ -90,7 +91,8 @@ public class TestTimelineMetricStore implements TimelineMetricStore {
   }
 
   @Override
-  public Map<String, List<TimelineMetricMetadata>> getTimelineMetricMetadata(String query) throws SQLException, IOException {
+  public Map<String, List<TimelineMetricMetadata>> getTimelineMetricMetadata(String appId, String metricPattern,
+                                                                             boolean includeBlacklistedMetrics) throws SQLException, IOException {
     return null;
   }
 
@@ -115,7 +117,7 @@ public class TestTimelineMetricStore implements TimelineMetricStore {
   }
 
   @Override
-  public Map<String, TimelineMetricMetadataKey> getUuids() throws SQLException, IOException {
+  public byte[] getUuid(String metricName, String appId, String instanceId, String hostname) throws SQLException, IOException {
     return null;
   }
 
@@ -124,4 +126,9 @@ public class TestTimelineMetricStore implements TimelineMetricStore {
     return null;
   }
 
+  @Override
+  public Set<TimelineMetricKey> getTimelineMetricKey(String metricName, String appId, String instanceId, String hostname) throws SQLException, IOException {
+    return Collections.emptySet();
+  }
+
 }

[ambari] 39/39: AMBARI-23250 : Fix deployment issues in AMS perf branch. (hbase-site change).

commit ba21ebe111ab119ccd4ca3305dc5267c5b4f0725
Author: Aravindan Vijayan <av...@hortonworks.com>
AuthorDate: Fri Mar 16 14:26:13 2018 -0700

    AMBARI-23250 : Fix deployment issues in AMS perf branch. (hbase-site change).
---
 .../0.1.0/configuration/ams-hbase-site.xml               | 16 ++++++++++++++++
 .../AMBARI_METRICS/0.1.0/service_advisor.py              |  1 +
 .../resources/stacks/HDF/2.0/services/stack_advisor.py   |  1 +
 3 files changed, 18 insertions(+)

diff --git a/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/configuration/ams-hbase-site.xml b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/configuration/ams-hbase-site.xml
index cf819d9..de4ebb0 100644
--- a/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/configuration/ams-hbase-site.xml
+++ b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/configuration/ams-hbase-site.xml
@@ -617,4 +617,20 @@
     </value-attributes>
     <on-ambari-upgrade add="true"/>
   </property>
+  <property>
+    <name>hbase.unsafe.stream.capability.enforce</name>
+    <value>false</value>
+    <description>
+      Controls whether HBase will check for stream capabilities (hflush/hsync).
+      Disable this if you intend to run on LocalFileSystem.
+      WARNING: Doing so may expose you to additional risk of data loss!
+    </description>
+    <depends-on>
+      <property>
+        <type>ams-site</type>
+        <name>timeline.metrics.service.operation.mode</name>
+      </property>
+    </depends-on>
+    <on-ambari-upgrade add="true"/>
+  </property>
 </configuration>
diff --git a/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/service_advisor.py b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/service_advisor.py
index eae98bf..c78d48a 100644
--- a/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/service_advisor.py
+++ b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/service_advisor.py
@@ -303,6 +303,7 @@ class AMBARI_METRICSRecommender(service_advisor.ServiceAdvisor):
     if operatingMode == "distributed":
       putAmsSiteProperty("timeline.metrics.service.watcher.disabled", 'true')
       putAmsHbaseSiteProperty("hbase.cluster.distributed", 'true')
+      putAmsHbaseSiteProperty("hbase.unsafe.stream.capability.enforce", 'true')
     else:
       putAmsSiteProperty("timeline.metrics.service.watcher.disabled", 'false')
       putAmsHbaseSiteProperty("hbase.cluster.distributed", 'false')
diff --git a/contrib/management-packs/hdf-ambari-mpack/src/main/resources/stacks/HDF/2.0/services/stack_advisor.py b/contrib/management-packs/hdf-ambari-mpack/src/main/resources/stacks/HDF/2.0/services/stack_advisor.py
index da33b95..2a6766a 100644
--- a/contrib/management-packs/hdf-ambari-mpack/src/main/resources/stacks/HDF/2.0/services/stack_advisor.py
+++ b/contrib/management-packs/hdf-ambari-mpack/src/main/resources/stacks/HDF/2.0/services/stack_advisor.py
@@ -617,6 +617,7 @@ class HDF20StackAdvisor(DefaultStackAdvisor):
       putAmsSiteProperty("timeline.metrics.service.watcher.disabled", 'true')
       putAmsSiteProperty("timeline.metrics.host.aggregator.ttl", 259200)
       putAmsHbaseSiteProperty("hbase.cluster.distributed", 'true')
+      putAmsHbaseSiteProperty("hbase.unsafe.stream.capability.enforce", 'true')
     else:
       putAmsSiteProperty("timeline.metrics.service.watcher.disabled", 'false')
       putAmsSiteProperty("timeline.metrics.host.aggregator.ttl", 86400)

[ambari] 17/39: Fixed rat errors from Merge trunk into branch-3.0-ams

commit 2eb160ce8713fd12e9cfc3eeba26b9b051258ba6
Author: Siddharth Wagle <sw...@hortonworks.com>
AuthorDate: Tue Sep 26 16:03:17 2017 -0700

    Fixed rat errors from Merge trunk into branch-3.0-ams
---
 .../src/main/resources/input-config.properties         | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/resources/input-config.properties b/ambari-metrics/ambari-metrics-alertservice/src/main/resources/input-config.properties
index 88304c7..ab106c4 100644
--- a/ambari-metrics/ambari-metrics-alertservice/src/main/resources/input-config.properties
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/resources/input-config.properties
@@ -1,3 +1,21 @@
+# Copyright 2011 The Apache Software Foundation
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
 appIds=HOST
 
 collectorHost=localhost

[ambari] 03/39: AMBARI-21106 : Ambari Metrics Anomaly detection prototype (Commit 2). (avijayan)

commit 485cdf4f632cdcc4bb3f5fc53f525c6686d33b14
Author: Aravindan Vijayan <av...@hortonworks.com>
AuthorDate: Tue May 30 13:17:28 2017 -0700

    AMBARI-21106 : Ambari Metrics Anomaly detection prototype (Commit 2). (avijayan)
---
 ambari-metrics/ambari-metrics-alertservice/pom.xml | 4 ++--
 ambari-metrics/ambari-metrics-spark/pom.xml        | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/ambari-metrics/ambari-metrics-alertservice/pom.xml b/ambari-metrics/ambari-metrics-alertservice/pom.xml
index 3a3545b..10f920a 100644
--- a/ambari-metrics/ambari-metrics-alertservice/pom.xml
+++ b/ambari-metrics/ambari-metrics-alertservice/pom.xml
@@ -5,11 +5,11 @@
     <parent>
         <artifactId>ambari-metrics</artifactId>
         <groupId>org.apache.ambari</groupId>
-        <version>2.5.1.0.0</version>
+        <version>2.0.0.0-SNAPSHOT</version>
     </parent>
     <modelVersion>4.0.0</modelVersion>
     <artifactId>ambari-metrics-alertservice</artifactId>
-    <version>2.5.1.0.0</version>
+    <version>2.0.0.0-SNAPSHOT</version>
     <build>
         <plugins>
             <plugin>
diff --git a/ambari-metrics/ambari-metrics-spark/pom.xml b/ambari-metrics/ambari-metrics-spark/pom.xml
index 33b4257..f1c8a13 100644
--- a/ambari-metrics/ambari-metrics-spark/pom.xml
+++ b/ambari-metrics/ambari-metrics-spark/pom.xml
@@ -3,11 +3,11 @@
     <parent>
         <artifactId>ambari-metrics</artifactId>
         <groupId>org.apache.ambari</groupId>
-        <version>2.5.1.0.0</version>
+        <version>2.0.0.0-SNAPSHOT</version>
     </parent>
     <modelVersion>4.0.0</modelVersion>
     <artifactId>ambari-metrics-spark</artifactId>
-    <version>2.5.1.0.0</version>
+    <version>2.0.0.0-SNAPSHOT</version>
     <properties>
         <scala.version>2.10.4</scala.version>
     </properties>

[ambari] 05/39: AMBARI-21079. Add ability to sink Raw metrics to external system via Http. Renamed files to fix build. (swagle)

commit 58e8d4ba9f2e42bc21a713597c994a3c345bc721
Author: Siddharth Wagle <sw...@hortonworks.com>
AuthorDate: Tue May 30 16:45:27 2017 -0700

    AMBARI-21079. Add ability to sink Raw metrics to external system via Http. Renamed files to fix build. (swagle)
---
 ...eTimelineMetricStoreTest.java => HBaseTimelineMetricsServiceTest.java} | 0
 1 file changed, 0 insertions(+), 0 deletions(-)

diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/HBaseTimelineMetricStoreTest.java b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/HBaseTimelineMetricsServiceTest.java
similarity index 100%
rename from ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/HBaseTimelineMetricStoreTest.java
rename to ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/HBaseTimelineMetricsServiceTest.java

[ambari] 24/39: AMBARI-22343. Add ability in AMS to tee metrics to a set of configured Kafka brokers. (swagle)

commit 64dcc02abc94859f9df316767b7f4589ea0b1502
Author: Siddharth Wagle <sw...@hortonworks.com>
AuthorDate: Tue Oct 31 11:34:45 2017 -0700

    AMBARI-22343. Add ability in AMS to tee metrics to a set of configured Kafka brokers. (swagle)
---
 .../libraries/functions/package_conditions.py      |   4 +
 .../pom.xml                                        |  11 ++
 .../src/main/resources/config.yml                  |   3 -
 .../ambari-metrics-timelineservice/pom.xml         | 127 ++++++++++++---------
 .../metrics/timeline/PhoenixHBaseAccessor.java     |  29 ++---
 .../timeline/TimelineMetricConfiguration.java      |  55 ++++++---
 .../timeline/sink/ExternalSinkProvider.java        |  12 +-
 .../metrics/timeline/sink/HttpSinkProvider.java    |   4 +-
 .../metrics/timeline/sink/KafkaSinkProvider.java   | 118 +++++++++++++++++++
 .../DefaultInternalMetricsSourceProvider.java      |   2 +-
 .../metrics/timeline/source/RawMetricsSource.java  |  17 +--
 .../0.1.0/configuration/ams-admanager-config.xml   |  60 ++++++++++
 .../0.1.0/configuration/ams-admanager-env.xml      | 105 +++++++++++++++++
 .../0.1.0/configuration/ams-site.xml               |   1 -
 .../AMBARI_METRICS/0.1.0/metainfo.xml              |  41 ++++++-
 .../AMBARI_METRICS/0.1.0/package/scripts/ams.py    |  35 ++++++
 .../0.1.0/package/scripts/ams_admanager.py         |  73 ++++++++++++
 .../AMBARI_METRICS/0.1.0/package/scripts/params.py |   9 ++
 .../AMBARI_METRICS/0.1.0/package/scripts/status.py |   2 +
 .../0.1.0/package/scripts/status_params.py         |   2 +
 .../package/templates/admanager_config.yaml.j2     |  24 ++++
 21 files changed, 619 insertions(+), 115 deletions(-)

diff --git a/ambari-common/src/main/python/resource_management/libraries/functions/package_conditions.py b/ambari-common/src/main/python/resource_management/libraries/functions/package_conditions.py
index ebc1aba..64cda98 100644
--- a/ambari-common/src/main/python/resource_management/libraries/functions/package_conditions.py
+++ b/ambari-common/src/main/python/resource_management/libraries/functions/package_conditions.py
@@ -50,6 +50,10 @@ def should_install_phoenix():
   has_phoenix = len(phoenix_hosts) > 0
   return phoenix_enabled or has_phoenix
 
+def should_install_ams_admanager():
+  config = Script.get_config()
+  return _has_applicable_local_component(config, ["AD_MANAGER"])
+
 def should_install_ams_collector():
   config = Script.get_config()
   return _has_applicable_local_component(config, ["METRICS_COLLECTOR"])
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/pom.xml b/ambari-metrics/ambari-metrics-anomaly-detection-service/pom.xml
index 554d026..e96e957 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/pom.xml
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/pom.xml
@@ -296,6 +296,12 @@
       <artifactId>spark-mllib_${scala.binary.version}</artifactId>
       <version>${spark.version}</version>
       <scope>provided</scope>
+      <exclusions>
+        <exclusion>
+          <groupId>com.fasterxml.jackson.core</groupId>
+          <artifactId>jackson-databind</artifactId>
+        </exclusion>
+      </exclusions>
     </dependency>
     <dependency>
       <groupId>org.apache.hadoop</groupId>
@@ -419,6 +425,11 @@
       <version>${jackson.version}</version>
     </dependency>
     <dependency>
+      <groupId>com.fasterxml.jackson.core</groupId>
+      <artifactId>jackson-databind</artifactId>
+      <version>${jackson.version}</version>
+    </dependency>
+    <dependency>
       <groupId>junit</groupId>
       <artifactId>junit</artifactId>
       <version>4.12</version>
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/resources/config.yml b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/resources/config.yml
index 9ca9e95..bd88d57 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/resources/config.yml
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/resources/config.yml
@@ -2,9 +2,6 @@ server:
   applicationConnectors:
    - type: http
      port: 9999
-  adminConnectors:
-    - type: http
-      port: 9990
   requestLog:
     type: external
 
diff --git a/ambari-metrics/ambari-metrics-timelineservice/pom.xml b/ambari-metrics/ambari-metrics-timelineservice/pom.xml
index 3d119f9..7794a11 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/pom.xml
+++ b/ambari-metrics/ambari-metrics-timelineservice/pom.xml
@@ -80,7 +80,8 @@
               <outputDirectory>${project.build.directory}/lib</outputDirectory>
               <includeScope>compile</includeScope>
               <excludeScope>test</excludeScope>
-              <excludeArtifactIds>jasper-runtime,jasper-compiler</excludeArtifactIds>
+              <excludeArtifactIds>jasper-runtime,jasper-compiler
+              </excludeArtifactIds>
             </configuration>
           </execution>
         </executions>
@@ -125,11 +126,13 @@
                 <source>
                   <location>target/lib</location>
                   <excludes>
-                  <exclude>*tests.jar</exclude>
+                    <exclude>*tests.jar</exclude>
                   </excludes>
                 </source>
                 <source>
-                  <location>${project.build.directory}/${project.artifactId}-${project.version}.jar</location>
+                  <location>
+                    ${project.build.directory}/${project.artifactId}-${project.version}.jar
+                  </location>
                 </source>
               </sources>
             </mapping>
@@ -214,7 +217,9 @@
                   <location>conf/unix/amshbase_metrics_whitelist</location>
                 </source>
                 <source>
-                  <location>target/embedded/${hbase.folder}/conf/hbase-site.xml</location>
+                  <location>
+                    target/embedded/${hbase.folder}/conf/hbase-site.xml
+                  </location>
                 </source>
               </sources>
             </mapping>
@@ -287,7 +292,8 @@
           <skip>true</skip>
           <attach>false</attach>
           <submodules>false</submodules>
-          <controlDir>${project.basedir}/../src/main/package/deb/control</controlDir>
+          <controlDir>${project.basedir}/../src/main/package/deb/control
+          </controlDir>
         </configuration>
       </plugin>
     </plugins>
@@ -657,23 +663,29 @@
       <scope>test</scope>
       <classifier>tests</classifier>
     </dependency>
-      <dependency>
-        <groupId>org.apache.hbase</groupId>
-        <artifactId>hbase-testing-util</artifactId>
-        <version>${hbase.version}</version>
-        <scope>test</scope>
-        <optional>true</optional>
-        <exclusions>
-          <exclusion>
-            <groupId>org.jruby</groupId>
-            <artifactId>jruby-complete</artifactId>
-          </exclusion>
-          <exclusion>
-            <artifactId>zookeeper</artifactId>
-            <groupId>org.apache.zookeeper</groupId>
-          </exclusion>
-        </exclusions>
-      </dependency>
+    <dependency>
+      <groupId>org.apache.hbase</groupId>
+      <artifactId>hbase-testing-util</artifactId>
+      <version>${hbase.version}</version>
+      <scope>test</scope>
+      <optional>true</optional>
+      <exclusions>
+        <exclusion>
+          <groupId>org.jruby</groupId>
+          <artifactId>jruby-complete</artifactId>
+        </exclusion>
+        <exclusion>
+          <artifactId>zookeeper</artifactId>
+          <groupId>org.apache.zookeeper</groupId>
+        </exclusion>
+      </exclusions>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.kafka</groupId>
+      <artifactId>kafka-clients</artifactId>
+      <version>0.11.0.1</version>
+    </dependency>
+
     <dependency>
       <groupId>org.powermock</groupId>
       <artifactId>powermock-module-junit4</artifactId>
@@ -731,17 +743,17 @@
                 </goals>
                 <configuration>
                   <target name="Download HBase">
-                    <mkdir dir="${project.build.directory}/embedded" />
+                    <mkdir dir="${project.build.directory}/embedded"/>
                     <get
-                      src="${hbase.tar}"
-                      dest="${project.build.directory}/embedded/hbase.tar.gz"
-                      usetimestamp="true"
-                      />
+                        src="${hbase.tar}"
+                        dest="${project.build.directory}/embedded/hbase.tar.gz"
+                        usetimestamp="true"
+                    />
                     <untar
-                      src="${project.build.directory}/embedded/hbase.tar.gz"
-                      dest="${project.build.directory}/embedded"
-                      compression="gzip"
-                      />
+                        src="${project.build.directory}/embedded/hbase.tar.gz"
+                        dest="${project.build.directory}/embedded"
+                        compression="gzip"
+                    />
                   </target>
                 </configuration>
               </execution>
@@ -755,19 +767,19 @@
                   <target name="Download Phoenix">
                     <mkdir dir="${project.build.directory}/embedded"/>
                     <get
-                      src="${phoenix.tar}"
-                      dest="${project.build.directory}/embedded/phoenix.tar.gz"
-                      usetimestamp="true"
-                      />
+                        src="${phoenix.tar}"
+                        dest="${project.build.directory}/embedded/phoenix.tar.gz"
+                        usetimestamp="true"
+                    />
                     <untar
-                      src="${project.build.directory}/embedded/phoenix.tar.gz"
-                      dest="${project.build.directory}/embedded"
-                      compression="gzip"
-                      />
+                        src="${project.build.directory}/embedded/phoenix.tar.gz"
+                        dest="${project.build.directory}/embedded"
+                        compression="gzip"
+                    />
                     <move
-                      file="${project.build.directory}/embedded/${phoenix.folder}/phoenix-${phoenix.version}-server.jar"
-                      tofile="${project.build.directory}/embedded/${hbase.folder}/lib/phoenix-${phoenix.version}-server.jar"
-                      />
+                        file="${project.build.directory}/embedded/${phoenix.folder}/phoenix-${phoenix.version}-server.jar"
+                        tofile="${project.build.directory}/embedded/${hbase.folder}/lib/phoenix-${phoenix.version}-server.jar"
+                    />
                   </target>
                 </configuration>
               </execution>
@@ -798,24 +810,24 @@
                 </goals>
                 <configuration>
                   <target name="Download HBase">
-                    <mkdir dir="${project.build.directory}/embedded" />
+                    <mkdir dir="${project.build.directory}/embedded"/>
                     <get
-                      src="${hbase.winpkg.zip}"
-                      dest="${project.build.directory}/embedded/hbase.zip"
-                      usetimestamp="true"
-                      />
+                        src="${hbase.winpkg.zip}"
+                        dest="${project.build.directory}/embedded/hbase.zip"
+                        usetimestamp="true"
+                    />
                     <unzip
-                      src="${project.build.directory}/embedded/hbase.zip"
-                      dest="${project.build.directory}/embedded/hbase.temp"
-                      />
+                        src="${project.build.directory}/embedded/hbase.zip"
+                        dest="${project.build.directory}/embedded/hbase.temp"
+                    />
                     <unzip
-                      src="${project.build.directory}/embedded/hbase.temp/resources/${hbase.winpkg.folder}.zip"
-                      dest="${project.build.directory}/embedded"
-                      />
+                        src="${project.build.directory}/embedded/hbase.temp/resources/${hbase.winpkg.folder}.zip"
+                        dest="${project.build.directory}/embedded"
+                    />
                     <copy
-                      file="${project.build.directory}/embedded/hbase.temp/resources/servicehost.exe"
-                      tofile="${project.build.directory}/embedded/${hbase.winpkg.folder}/bin/ams_hbase_master.exe"
-                      />
+                        file="${project.build.directory}/embedded/hbase.temp/resources/servicehost.exe"
+                        tofile="${project.build.directory}/embedded/${hbase.winpkg.folder}/bin/ams_hbase_master.exe"
+                    />
                   </target>
                 </configuration>
               </execution>
@@ -854,7 +866,8 @@
             <!-- The configuration of the plugin -->
             <configuration>
               <!-- Configuration of the archiver -->
-              <finalName>${project.artifactId}-simulator-${project.version}</finalName>
+              <finalName>${project.artifactId}-simulator-${project.version}
+              </finalName>
               <archive>
                 <!-- Manifest specific configuration -->
                 <manifest>
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/PhoenixHBaseAccessor.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/PhoenixHBaseAccessor.java
index f8d31f7..65b4614 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/PhoenixHBaseAccessor.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/PhoenixHBaseAccessor.java
@@ -120,7 +120,6 @@ import org.apache.hadoop.hbase.HColumnDescriptor;
 import org.apache.hadoop.hbase.HTableDescriptor;
 import org.apache.hadoop.hbase.client.Durability;
 import org.apache.hadoop.hbase.client.HBaseAdmin;
-import org.apache.hadoop.hbase.util.RetryCounter;
 import org.apache.hadoop.hbase.util.RetryCounterFactory;
 import org.apache.hadoop.metrics2.sink.timeline.ContainerMetric;
 import org.apache.hadoop.metrics2.sink.timeline.MetricClusterAggregate;
@@ -211,7 +210,7 @@ public class PhoenixHBaseAccessor {
   private HashMap<String, String> tableTTL = new HashMap<>();
 
   private final TimelineMetricConfiguration configuration;
-  private InternalMetricsSource rawMetricsSource;
+  private List<InternalMetricsSource> rawMetricsSources;
 
   public PhoenixHBaseAccessor(PhoenixConnectionProvider dataSource) {
     this(TimelineMetricConfiguration.getInstance(), dataSource);
@@ -278,15 +277,17 @@ public class PhoenixHBaseAccessor {
       LOG.info("Initialized aggregator sink class " + metricSinkClass);
     }
 
-    ExternalSinkProvider externalSinkProvider = configuration.getExternalSinkProvider();
+    List<ExternalSinkProvider> externalSinkProviderList = configuration.getExternalSinkProviderList();
     InternalSourceProvider internalSourceProvider = configuration.getInternalSourceProvider();
-    if (externalSinkProvider != null) {
-      ExternalMetricsSink rawMetricsSink = externalSinkProvider.getExternalMetricsSink(RAW_METRICS);
-      int interval = configuration.getExternalSinkInterval(RAW_METRICS);
-      if (interval == -1){
-        interval = cacheCommitInterval;
+    if (!externalSinkProviderList.isEmpty()) {
+      for (ExternalSinkProvider externalSinkProvider : externalSinkProviderList) {
+        ExternalMetricsSink rawMetricsSink = externalSinkProvider.getExternalMetricsSink(RAW_METRICS);
+        int interval = configuration.getExternalSinkInterval(externalSinkProvider.getClass().getSimpleName(), RAW_METRICS);
+        if (interval == -1) {
+          interval = cacheCommitInterval;
+        }
+        rawMetricsSources.add(internalSourceProvider.getInternalMetricsSource(RAW_METRICS, interval, rawMetricsSink));
       }
-      rawMetricsSource = internalSourceProvider.getInternalMetricsSource(RAW_METRICS, interval, rawMetricsSink);
     }
     TIMELINE_METRIC_READ_HELPER = new TimelineMetricReadHelper(this.metadataManagerInstance);
   }
@@ -303,8 +304,10 @@ public class PhoenixHBaseAccessor {
     }
     if (metricsList.size() > 0) {
       commitMetrics(metricsList);
-      if (rawMetricsSource != null) {
-        rawMetricsSource.publishTimelineMetrics(metricsList);
+      if (!rawMetricsSources.isEmpty()) {
+        for (InternalMetricsSource rawMetricsSource : rawMetricsSources) {
+          rawMetricsSource.publishTimelineMetrics(metricsList);
+        }
       }
     }
   }
@@ -316,10 +319,8 @@ public class PhoenixHBaseAccessor {
   private void commitAnomalyMetric(Connection conn, TimelineMetric metric) {
     PreparedStatement metricRecordStmt = null;
     try {
-
       Map<String, String> metricMetadata = metric.getMetadata();
-
-
+
       byte[] uuid = metadataManagerInstance.getUuid(metric);
       if (uuid == null) {
         LOG.error("Error computing UUID for metric. Cannot write metrics : " + metric.toString());
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricConfiguration.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricConfiguration.java
index 929fc8c..395ec7b 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricConfiguration.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricConfiguration.java
@@ -26,6 +26,7 @@ import java.net.MalformedURLException;
 import java.net.URISyntaxException;
 import java.net.URL;
 import java.net.UnknownHostException;
+import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.Collections;
 import java.util.HashSet;
@@ -51,8 +52,6 @@ import org.apache.log4j.Logger;
  * Configuration class that reads properties from ams-site.xml. All values
  * for time or intervals are given in seconds.
  */
-@InterfaceAudience.Public
-@InterfaceStability.Evolving
 public class TimelineMetricConfiguration {
   private static final Log LOG = LogFactory.getLog(TimelineMetricConfiguration.class);
 
@@ -343,14 +342,22 @@ public class TimelineMetricConfiguration {
   public static final String TIMELINE_METRICS_COLLECTOR_IGNITE_BACKUPS = "timeline.metrics.collector.ignite.nodes.backups";
 
   public static final String INTERNAL_CACHE_HEAP_PERCENT =
-    "timeline.metrics.service.cache.%s.heap.percent";
+    "timeline.metrics.internal.cache.%s.heap.percent";
 
   public static final String EXTERNAL_SINK_INTERVAL =
-    "timeline.metrics.service.external.sink.%s.interval";
+    "timeline.metrics.external.sink.%s.%s.interval";
 
   public static final String DEFAULT_EXTERNAL_SINK_DIR =
-    "timeline.metrics.service.external.sink.dir";
-
+    "timeline.metrics.external.sink.dir";
+
+  public static final String KAFKA_SERVERS = "timeline.metrics.external.sink.kafka.bootstrap.servers";
+  public static final String KAFKA_ACKS = "timeline.metrics.external.sink.kafka.acks";
+  public static final String KAFKA_RETRIES = "timeline.metrics.external.sink.kafka.bootstrap.retries";
+  public static final String KAFKA_BATCH_SIZE = "timeline.metrics.external.sink.kafka.batch.size";
+  public static final String KAFKA_LINGER_MS = "timeline.metrics.external.sink.kafka.linger.ms";
+  public static final String KAFKA_BUFFER_MEM = "timeline.metrics.external.sink.kafka.buffer.memory";
+  public static final String KAFKA_SINK_TIMEOUT_SECONDS = "timeline.metrics.external.sink.kafka.timeout.seconds";
+
   private Configuration hbaseConf;
   private Configuration metricsConf;
   private Configuration metricsSslConf;
@@ -601,8 +608,24 @@ public class TimelineMetricConfiguration {
     return false;
   }
 
-  public int getExternalSinkInterval(SOURCE_NAME sourceName) {
-    return Integer.parseInt(metricsConf.get(String.format(EXTERNAL_SINK_INTERVAL, sourceName), "-1"));
+  /**
+   * Get the sink interval for a metrics source.
+   * Determines how often the metrics will be written to the sink.
+   * This determines whether any caching will be needed on the collector
+   * side, default interval disables caching by writing at the same time as
+   * we get data.
+   *
+   * @param sinkProviderClassName Simple name of your implementation of {@link ExternalSinkProvider}
+   * @param sourceName {@link SOURCE_NAME}
+   * @return seconds
+   */
+  public int getExternalSinkInterval(String sinkProviderClassName,
+                                     SOURCE_NAME sourceName) {
+    String sinkProviderSimpleClassName = sinkProviderClassName.substring(
+      sinkProviderClassName.lastIndexOf(".") + 1);
+
+    return Integer.parseInt(metricsConf.get(
+      String.format(EXTERNAL_SINK_INTERVAL, sinkProviderSimpleClassName, sourceName), "-1"));
   }
 
   public InternalSourceProvider getInternalSourceProvider() {
@@ -612,12 +635,18 @@ public class TimelineMetricConfiguration {
     return ReflectionUtils.newInstance(providerClass, metricsConf);
   }
 
-  public ExternalSinkProvider getExternalSinkProvider() {
-    Class<?> providerClass = metricsConf.getClassByNameOrNull(TIMELINE_METRICS_SINK_PROVIDER_CLASS);
-    if (providerClass != null) {
-      return (ExternalSinkProvider) ReflectionUtils.newInstance(providerClass, metricsConf);
+  /**
+   * List of external sink provider classes. Comma-separated.
+   */
+  public List<ExternalSinkProvider> getExternalSinkProviderList() {
+    Class<?>[] providerClasses = metricsConf.getClasses(TIMELINE_METRICS_SINK_PROVIDER_CLASS);
+    List<ExternalSinkProvider> providerList = new ArrayList<>();
+    if (providerClasses != null) {
+      for (Class<?> providerClass : providerClasses) {
+        providerList.add((ExternalSinkProvider) ReflectionUtils.newInstance(providerClass, metricsConf));
+      }
     }
-    return null;
+    return providerList;
   }
 
   public String getInternalCacheHeapPercent(String instanceName) {
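
With the provider list now comma-separated and the interval key parameterized
by the provider's simple class name, a deployment teeing raw metrics to both
HTTP and Kafka would carry ams-site entries along these lines (illustrative
values; the provider-list property name is the one behind
TIMELINE_METRICS_SINK_PROVIDER_CLASS, which this hunk does not show):

    # provider list (property name defined by TIMELINE_METRICS_SINK_PROVIDER_CLASS):
    #   org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.sink.HttpSinkProvider,
    #   org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.sink.KafkaSinkProvider
    timeline.metrics.external.sink.HttpSinkProvider.RAW_METRICS.interval=60
    timeline.metrics.external.sink.KafkaSinkProvider.RAW_METRICS.interval=-1   # -1: fall back to the cache commit interval
    timeline.metrics.external.sink.http.timeout.seconds=10
    timeline.metrics.external.sink.kafka.bootstrap.servers=broker1:9092,broker2:9092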
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/sink/ExternalSinkProvider.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/sink/ExternalSinkProvider.java
index 48887d9..7c7683b 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/sink/ExternalSinkProvider.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/sink/ExternalSinkProvider.java
@@ -1,8 +1,3 @@
-package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.sink;
-
-import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.source.InternalSourceProvider;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.source.InternalSourceProvider.SOURCE_NAME;
-
 /**
  * Licensed to the Apache Software Foundation (ASF) under one
  * or more contributor license agreements.  See the NOTICE file
@@ -21,9 +16,14 @@ import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.
  * limitations under the License.
  */
 
+package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.sink;
+
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.source.InternalSourceProvider.SOURCE_NAME;
+
+
 /**
  * Configurable provider for sink classes that match the metrics sources.
- * Provider can return same sink of different sinks for each source.
+ * Provider can return same sink or different sinks for each source.
  */
 public interface ExternalSinkProvider {
 
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/sink/HttpSinkProvider.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/sink/HttpSinkProvider.java
index bb84c8a..9c2a93e 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/sink/HttpSinkProvider.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/sink/HttpSinkProvider.java
@@ -92,7 +92,7 @@ public class HttpSinkProvider implements ExternalSinkProvider {
 
   @Override
   public ExternalMetricsSink getExternalMetricsSink(InternalSourceProvider.SOURCE_NAME sourceName) {
-    return null;
+    return new DefaultHttpMetricsSink();
   }
 
   protected HttpURLConnection getConnection(String spec) throws IOException {
@@ -147,7 +147,7 @@ public class HttpSinkProvider implements ExternalSinkProvider {
     @Override
     public int getSinkTimeOutSeconds() {
       try {
-        return conf.getMetricsConf().getInt("timeline.metrics.service.external.http.sink.timeout.seconds", 10);
+        return conf.getMetricsConf().getInt("timeline.metrics.external.sink.http.timeout.seconds", 10);
       } catch (Exception e) {
         return 10;
       }
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/sink/KafkaSinkProvider.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/sink/KafkaSinkProvider.java
new file mode 100644
index 0000000..3b34b55
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/sink/KafkaSinkProvider.java
@@ -0,0 +1,118 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.sink;
+
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.KAFKA_ACKS;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.KAFKA_BATCH_SIZE;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.KAFKA_BUFFER_MEM;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.KAFKA_LINGER_MS;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.KAFKA_RETRIES;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.KAFKA_SERVERS;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.KAFKA_SINK_TIMEOUT_SECONDS;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.TIMELINE_METRICS_CACHE_COMMIT_INTERVAL;
+
+import java.util.Collection;
+import java.util.Properties;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.metrics2.sink.timeline.TimelineMetrics;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.source.InternalSourceProvider.SOURCE_NAME;
+import org.apache.kafka.clients.producer.KafkaProducer;
+import org.apache.kafka.clients.producer.Producer;
+import org.apache.kafka.clients.producer.ProducerConfig;
+import org.apache.kafka.clients.producer.ProducerRecord;
+import org.apache.kafka.clients.producer.RecordMetadata;
+
+import com.fasterxml.jackson.databind.JsonNode;
+import com.fasterxml.jackson.databind.ObjectMapper;
+
+/*
+  This sink is invoked by the single Metrics committer thread, so the export
+  must be non-blocking.
+ */
+public class KafkaSinkProvider implements ExternalSinkProvider {
+  private static final String TOPIC_NAME = "ambari-metrics-topic";
+  private static final Log LOG = LogFactory.getLog(KafkaSinkProvider.class);
+
+  private Producer<String, JsonNode> producer;
+  private int TIMEOUT_SECONDS = 10;
+  private int FLUSH_SECONDS = 3;
+
+  private final ObjectMapper objectMapper = new ObjectMapper();
+
+  public KafkaSinkProvider() {
+    TimelineMetricConfiguration configuration = TimelineMetricConfiguration.getInstance();
+
+    Properties configProperties = new Properties();
+    try {
+      configProperties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, configuration.getMetricsConf().getTrimmed(KAFKA_SERVERS));
+      configProperties.put(ProducerConfig.ACKS_CONFIG, configuration.getMetricsConf().getTrimmed(KAFKA_ACKS, "all"));
+      // Avoid duplicates - No transactional semantics
+      configProperties.put(ProducerConfig.RETRIES_CONFIG, configuration.getMetricsConf().getInt(KAFKA_RETRIES, 0));
+      configProperties.put(ProducerConfig.BATCH_SIZE_CONFIG, configuration.getMetricsConf().getInt(KAFKA_BATCH_SIZE, 128));
+      configProperties.put(ProducerConfig.LINGER_MS_CONFIG, configuration.getMetricsConf().getInt(KAFKA_LINGER_MS, 1));
+      configProperties.put(ProducerConfig.BUFFER_MEMORY_CONFIG, configuration.getMetricsConf().getLong(KAFKA_BUFFER_MEM, 33554432)); // 32 MB
+      FLUSH_SECONDS = configuration.getMetricsConf().getInt(TIMELINE_METRICS_CACHE_COMMIT_INTERVAL, 3);
+      TIMEOUT_SECONDS = configuration.getMetricsConf().getInt(KAFKA_SINK_TIMEOUT_SECONDS, 10);
+    } catch (Exception e) {
+      LOG.error("Configuration error!", e);
+      throw new ExceptionInInitializerError(e);
+    }
+    // Key type is String (records are currently sent without an explicit key), value is JSON.
+    configProperties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
+    configProperties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.connect.json.JsonSerializer");
+
+    producer = new KafkaProducer<>(configProperties);
+  }
+
+  @Override
+  public ExternalMetricsSink getExternalMetricsSink(SOURCE_NAME sourceName) {
+    switch (sourceName) {
+      case RAW_METRICS:
+        return new KafkaRawMetricsSink();
+      default:
+        throw new UnsupportedOperationException("Provider does not support " +
+          "the expected source " + sourceName);
+    }
+  }
+
+  class KafkaRawMetricsSink implements ExternalMetricsSink {
+
+    @Override
+    public int getSinkTimeOutSeconds() {
+      return TIMEOUT_SECONDS;
+    }
+
+    @Override
+    public int getFlushSeconds() {
+      return FLUSH_SECONDS;
+    }
+
+    @Override
+    public void sinkMetricData(Collection<TimelineMetrics> metrics) {
+      JsonNode jsonNode = objectMapper.valueToTree(metrics);
+      ProducerRecord<String, JsonNode> rec = new ProducerRecord<>(TOPIC_NAME, jsonNode);
+      // Async, fire-and-forget send keeps the committer thread non-blocking;
+      // failures are logged from the callback instead of an ignored Future.
+      producer.send(rec, (RecordMetadata metadata, Exception e) -> {
+        if (e != null) {
+          LOG.error("Error sending metrics to Kafka topic " + TOPIC_NAME, e);
+        }
+      });
+    }
+  }
+
+}
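
Because each flushed batch lands on the ambari-metrics-topic topic as one JSON document (the tree form of a Collection<TimelineMetrics>), a downstream consumer can stay very small. A sketch under stated assumptions: the broker address and group id are illustrative, and the JsonDeserializer pairing simply mirrors the JsonSerializer chosen above.

    import java.util.Collections;
    import java.util.Properties;

    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    import com.fasterxml.jackson.databind.JsonNode;

    public class MetricsTopicReader {
      public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker-1:6667"); // illustrative
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "ams-metrics-reader");     // illustrative
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
            "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
            "org.apache.kafka.connect.json.JsonDeserializer");

        try (KafkaConsumer<String, JsonNode> consumer = new KafkaConsumer<>(props)) {
          consumer.subscribe(Collections.singletonList("ambari-metrics-topic"));
          while (true) {
            for (ConsumerRecord<String, JsonNode> record : consumer.poll(1000)) {
              // Each record value is the JSON tree of one flushed Collection<TimelineMetrics>.
              System.out.println(record.value());
            }
          }
        }
      }
    }
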
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/source/DefaultInternalMetricsSourceProvider.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/source/DefaultInternalMetricsSourceProvider.java
index b97c39f..c6b071f 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/source/DefaultInternalMetricsSourceProvider.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/source/DefaultInternalMetricsSourceProvider.java
@@ -24,7 +24,7 @@ import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.
 public class DefaultInternalMetricsSourceProvider implements InternalSourceProvider {
   private static final Log LOG = LogFactory.getLog(DefaultInternalMetricsSourceProvider.class);
 
-  // TODO: Implement read based sources for higher level data
+  // TODO: Implement read based sources for higher order data
   @Override
   public InternalMetricsSource getInternalMetricsSource(SOURCE_NAME sourceName, int sinkIntervalSeconds, ExternalMetricsSink sink) {
     if (sink == null) {
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/source/RawMetricsSource.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/source/RawMetricsSource.java
index 967d819..879577a 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/source/RawMetricsSource.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/source/RawMetricsSource.java
@@ -63,21 +63,14 @@ public class RawMetricsSource implements InternalMetricsSource {
   }
 
   private void initializeFixedRateScheduler() {
-    executorService.scheduleAtFixedRate(new Runnable() {
-      @Override
-      public void run() {
-        rawMetricsSink.sinkMetricData(cache.evictAll());
-      }
-    }, rawMetricsSink.getFlushSeconds(), rawMetricsSink.getFlushSeconds(), TimeUnit.SECONDS);
+    executorService.scheduleAtFixedRate(() -> rawMetricsSink.sinkMetricData(cache.evictAll()),
+      rawMetricsSink.getFlushSeconds(), rawMetricsSink.getFlushSeconds(), TimeUnit.SECONDS);
   }
 
   private void submitDataWithTimeout(final Collection<TimelineMetrics> metrics) {
-    Future f = executorService.submit(new Callable<Object>() {
-      @Override
-      public Object call() throws Exception {
-        rawMetricsSink.sinkMetricData(metrics);
-        return null;
-      }
+    Future f = executorService.submit(() -> {
+      rawMetricsSink.sinkMetricData(metrics);
+      return null;
     });
     try {
       f.get(rawMetricsSink.getSinkTimeOutSeconds(), TimeUnit.SECONDS);
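
The lambda refactor above preserves the contract: each flush runs on the source's single-thread executor, and the caller waits at most getSinkTimeOutSeconds() for it. A standalone sketch of that bounded-submit pattern (the cancel-on-timeout handling is an assumption here, since the lines after f.get(...) fall outside this hunk):

    import java.util.concurrent.ExecutionException;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.TimeoutException;

    public class BoundedSubmit {
      private static final ExecutorService executor = Executors.newSingleThreadExecutor();

      static void submitWithTimeout(Runnable export, int timeoutSeconds) {
        Future<?> f = executor.submit(export);
        try {
          f.get(timeoutSeconds, TimeUnit.SECONDS);
        } catch (TimeoutException e) {
          f.cancel(true); // abandon a slow external sink rather than block the committer
        } catch (InterruptedException | ExecutionException e) {
          throw new RuntimeException(e);
        }
      }
    }
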
diff --git a/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/configuration/ams-admanager-config.xml b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/configuration/ams-admanager-config.xml
new file mode 100644
index 0000000..489850f
--- /dev/null
+++ b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/configuration/ams-admanager-config.xml
@@ -0,0 +1,60 @@
+<?xml version="1.0"?>
+<!--
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+-->
+<configuration>
+  <property>
+    <name>ambari.metrics.admanager.spark.operation.mode</name>
+    <value>stand-alone</value>
+    <display-name>Anomaly Detection Service operation mode</display-name>
+    <description>
+      Service Operation modes:
+      1) stand-alone: Standalone Spark cluster for AD jobs
+      2) spark-on-yarn: Spark running on YARN. (Recommended production setting)
+    </description>
+    <on-ambari-upgrade add="true"/>
+    <value-attributes>
+      <overridable>false</overridable>
+      <type>value-list</type>
+      <entries>
+        <entry>
+          <value>stand-alone</value>
+          <label>Stand Alone</label>
+        </entry>
+        <entry>
+          <value>spark-on-yarn</value>
+          <label>Spark on YARN</label>
+        </entry>
+      </entries>
+      <selection-cardinality>1</selection-cardinality>
+    </value-attributes>
+  </property>
+  <property>
+    <name>ambari.metrics.admanager.application.port</name>
+    <value>9090</value>
+    <display-name>AD Manager HTTP port</display-name>
+    <description>AMS Anomaly Detection Manager application port</description>
+    <value-attributes>
+      <type>int</type>
+      <overridable>false</overridable>
+    </value-attributes>
+    <on-ambari-upgrade add="true"/>
+  </property>
+</configuration>
\ No newline at end of file
diff --git a/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/configuration/ams-admanager-env.xml b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/configuration/ams-admanager-env.xml
new file mode 100644
index 0000000..99e93a6
--- /dev/null
+++ b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/configuration/ams-admanager-env.xml
@@ -0,0 +1,105 @@
+<?xml version="1.0"?>
+<!--
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+-->
+<configuration>
+  <property>
+    <name>ams_admanager_log_dir</name>
+    <value>/var/log/ambari-metrics-anomaly-detection</value>
+    <display-name>Anomaly Detection Manager log dir</display-name>
+    <description>AMS Anomaly Detection Manager log directory.</description>
+    <value-attributes>
+      <type>directory</type>
+    </value-attributes>
+    <on-ambari-upgrade add="true"/>
+  </property>
+  <property>
+    <name>ams_admanager_pid_dir</name>
+    <value>/var/run/ambari-metrics-anomaly-detection</value>
+    <display-name>Anomaly Detection Manager pid dir</display-name>
+    <description>AMS Anomaly Detection Manager pid directory.</description>
+    <value-attributes>
+      <type>directory</type>
+    </value-attributes>
+    <on-ambari-upgrade add="true"/>
+  </property>
+  <property>
+    <name>ams_admanager_data_dir</name>
+    <value>/var/lib/ambari-metrics-anomaly-detection</value>
+    <display-name>Anomaly Detection Manager data dir</display-name>
+    <description>AMS Anomaly Detection Manager data directory.</description>
+    <value-attributes>
+      <type>directory</type>
+    </value-attributes>
+    <on-ambari-upgrade add="true"/>
+  </property>
+  <property>
+    <name>ams_admanager_heapsize</name>
+    <value>1024</value>
+    <description>Anomaly Detection Manager Heap Size</description>
+    <display-name>Anomaly Detection Manager Heap Size</display-name>
+    <value-attributes>
+      <type>int</type>
+      <unit>MB</unit>
+      <minimum>512</minimum>
+      <maximum>16384</maximum>
+      <increment-step>256</increment-step>
+    </value-attributes>
+    <on-ambari-upgrade add="true"/>
+  </property>
+  <property>
+    <name>content</name>
+    <display-name>ams-admanager-env template</display-name>
+    <value>
+      # Set environment variables here.
+
+      # The java implementation to use. Java 1.8 required.
+      export JAVA_HOME={{java64_home}}
+
+      #  Anomaly Detection Manager Log directory for log4j
+      export AMS_AD_LOG_DIR={{ams_admanager_log_dir}}
+
+      # Anomaly Detection Manager pid directory
+      export AMS_AD_PID_DIR={{ams_admanager_pid_dir}}
+
+      # Anomaly Detection Manager heapsize
+      export AMS_AD_HEAPSIZE={{ams_admanager_heapsize}}
+
+      # Anomaly Detection Manager data dir
+      export AMS_AD_DATA_DIR={{ams_admanager_data_dir}}
+
+      # Anomaly Detection Manager options
+      export AMS_AD_OPTS=""
+      {% if security_enabled %}
+      export AMS_AD_OPTS="$AMS_AD_OPTS -Djava.security.auth.login.config={{ams_ad_jaas_config_file}}"
+      {% endif %}
+
+      # Anomaly Detection Manager GC options
+      export AMS_AD_GC_OPTS="-XX:+UseConcMarkSweepGC -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:{{ams_admanager_log_dir}}/admanager-gc.log-`date +'%Y%m%d%H%M'`"
+      export AMS_AD_OPTS="$AMS_AD_OPTS $AMS_AD_GC_OPTS"
+
+    </value>
+    <value-attributes>
+      <type>content</type>
+    </value-attributes>
+    <on-ambari-upgrade add="true"/>
+  </property>
+
+</configuration>
\ No newline at end of file
diff --git a/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/configuration/ams-site.xml b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/configuration/ams-site.xml
index 49dfd95..6bd25d2 100644
--- a/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/configuration/ams-site.xml
+++ b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/configuration/ams-site.xml
@@ -1,5 +1,4 @@
 <?xml version="1.0"?>
-<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
 <!--
 /**
  *
diff --git a/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/metainfo.xml b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/metainfo.xml
index 78014b0..0add0cd 100644
--- a/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/metainfo.xml
+++ b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/metainfo.xml
@@ -114,13 +114,37 @@
             <config-type>ams-grafana-env</config-type>
             <config-type>ams-grafana-ini</config-type>
           </configuration-dependencies>
+          <logs>
+            <log>
+              <logId>ams_grafana</logId>
+              <primary>true</primary>
+            </log>
+          </logs>
+        </component>
+
+        <component>
+          <name>AD_MANAGER</name>
+          <displayName>AD Manager</displayName>
+          <category>MASTER</category>
+          <cardinality>0-1</cardinality>
+          <versionAdvertised>false</versionAdvertised>
+          <commandScript>
+            <script>scripts/ams_admanager.py</script>
+            <scriptType>PYTHON</scriptType>
+            <timeout>1200</timeout>
+          </commandScript>
+          <configuration-dependencies>
+            <config-type>ams-hbase-site</config-type>
+            <config-type>ams-admanager-config</config-type>
+            <config-type>ams-admanager-env</config-type>
+          </configuration-dependencies>
+          <logs>
+            <log>
+              <logId>ams_anomaly_detection</logId>
+              <primary>true</primary>
+            </log>
+          </logs>
         </component>
-        <logs>
-          <log>
-            <logId>ams_grafana</logId>
-            <primary>true</primary>
-          </log>
-        </logs>
       </components>
 
       <themes>
@@ -153,6 +177,11 @@
               <condition>should_install_ams_grafana</condition>
             </package>
             <package>
+              <name>ambari-metrics-admanager</name>
+              <skipUpgrade>true</skipUpgrade>
+              <condition>should_install_ams_admanager</condition>
+            </package>
+            <package>
               <name>gcc</name>
             </package>
           </packages>
diff --git a/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/ams.py b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/ams.py
index 9b15fae..fe6b4ec 100644
--- a/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/ams.py
+++ b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/ams.py
@@ -512,6 +512,41 @@ def ams(name=None, action=None):
 
     pass
 
+  elif name == 'admanager':
+    ams_ad_directories = [
+      params.ams_ad_conf_dir,
+      params.ams_ad_log_dir,
+      params.ams_ad_data_dir,
+      params.ams_ad_pid_dir
+    ]
+
+    for ams_ad_dir in ams_ad_directories:
+      Directory(ams_ad_dir,
+                owner=params.ams_user,
+                group=params.user_group,
+                mode=0755,
+                create_parents=True,
+                recursive_ownership=True
+                )
+
+    # Assumes an ams_admanager_env_sh_template param backed by the 'content'
+    # property of ams-admanager-env; rendering the Grafana env template here
+    # was a copy/paste slip.
+    File(format("{ams_ad_conf_dir}/ams-admanager-env.sh"),
+         owner=params.ams_user,
+         group=params.user_group,
+         content=InlineTemplate(params.ams_admanager_env_sh_template)
+         )
+
+    File(format("{ams_ad_conf_dir}/config.yaml"),
+         content=Template("admanager_config.yaml.j2"),
+         owner=params.ams_user,
+         group=params.user_group
+         )
+
+    if action != 'stop':
+      for ams_ad_dir in ams_ad_directories:
+        Execute(('chown', '-R', params.ams_user, ams_ad_dir),
+                sudo=True
+                )
+
 def is_spnego_enabled(params):
   if 'core-site' in params.config['configurations'] \
       and 'hadoop.http.authentication.type' in params.config['configurations']['core-site'] \
diff --git a/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/ams_admanager.py b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/ams_admanager.py
new file mode 100644
index 0000000..96c4454
--- /dev/null
+++ b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/ams_admanager.py
@@ -0,0 +1,73 @@
+#!/usr/bin/env python
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+from resource_management import Script, Execute
+from resource_management.libraries.functions import format
+from status import check_service_status
+from ams import ams
+from resource_management.core.logger import Logger
+from resource_management.core import sudo
+
+class AmsADManager(Script):
+  def install(self, env):
+    import params
+    env.set_params(params)
+    self.install_packages(env)
+    self.configure(env) # for security
+
+  def configure(self, env, action = None):
+    import params
+    env.set_params(params)
+    ams(name='admanager', action=action)
+
+  def start(self, env):
+    import params
+    env.set_params(params)
+    self.configure(env, action = 'start')
+
+    start_cmd = format("{ams_admanager_script} start")
+    Execute(start_cmd,
+            user=params.ams_user
+            )
+    pidfile = format("{ams_ad_pid_dir}/admanager.pid")
+    if not sudo.path_exists(pidfile):
+      Logger.warning("Pid file doesn't exist after starting the component.")
+    else:
+      Logger.info("AD Manager Server has started with pid: {0}".format(sudo.read_file(pidfile).strip()))
+
+
+  def stop(self, env):
+    import params
+    env.set_params(params)
+    self.configure(env, action = 'stop')
+    Execute((format("{ams_admanager_script}"), 'stop'),
+            sudo=True
+            )
+
+  def status(self, env):
+    import status_params
+    env.set_params(status_params)
+    check_service_status(env, name='admanager')
+
+  def get_pid_files(self):
+    import status_params
+    return [status_params.ams_ad_pid_file]
+
+if __name__ == "__main__":
+  AmsADManager().execute()
diff --git a/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/params.py b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/params.py
index 6389696..5d49939 100644
--- a/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/params.py
+++ b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/params.py
@@ -187,6 +187,15 @@ ams_hbase_home_dir = "/usr/lib/ams-hbase/"
 
 ams_hbase_init_check_enabled = default("/configurations/ams-site/timeline.metrics.hbase.init.check.enabled", True)
 
+# AD Manager settings
+ams_ad_conf_dir = '/etc/ambari-metrics-anomaly-detection/conf'
+ams_ad_log_dir = default("/configurations/ams-admanager-env/ams_admanager_log_dir", '/var/log/ambari-metrics-anomaly-detection')
+ams_ad_pid_dir = status_params.ams_admanager_pid_dir
+ams_ad_data_dir = default("/configurations/ams-admanager-env/ams_admanager_data_dir", '/var/lib/ambari-metrics-anomaly-detection')
+
+ams_admanager_script = "/usr/sbin/ambari-metrics-admanager"
+ams_admanager_port = config['configurations']['ams-admanager-config']['ambari.metrics.admanager.application.port']
+
 #hadoop params
 
 hbase_excluded_hosts = config['commandParams']['excluded_hosts']
diff --git a/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/status.py b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/status.py
index 0b24ac0..e2af793 100644
--- a/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/status.py
+++ b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/status.py
@@ -43,6 +43,8 @@ def check_service_status(env, name):
     check_process_status(status_params.monitor_pid_file)
   elif name == 'grafana':
     check_process_status(status_params.grafana_pid_file)
+  elif name == 'admanager':
+    check_process_status(status_params.ams_ad_pid_file)
 
 @OsFamilyFuncImpl(os_family=OSConst.WINSRV_FAMILY)
 def check_service_status(name):
diff --git a/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/status_params.py b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/status_params.py
index 27c6020..bc9b7e3 100644
--- a/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/status_params.py
+++ b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/status_params.py
@@ -33,9 +33,11 @@ hbase_user = ams_user
 ams_collector_pid_dir = config['configurations']['ams-env']['metrics_collector_pid_dir']
 ams_monitor_pid_dir = config['configurations']['ams-env']['metrics_monitor_pid_dir']
 ams_grafana_pid_dir = config['configurations']['ams-grafana-env']['metrics_grafana_pid_dir']
+ams_admanager_pid_dir = config['configurations']['ams-admanager-env']['ams_admanager_pid_dir']
 
 monitor_pid_file = format("{ams_monitor_pid_dir}/ambari-metrics-monitor.pid")
 grafana_pid_file = format("{ams_grafana_pid_dir}/grafana-server.pid")
+ams_ad_pid_file = format("{ams_ad_pid_dir}/admanager.pid")
 
 security_enabled = config['configurations']['cluster-env']['security_enabled']
 ams_hbase_conf_dir = format("{hbase_conf_dir}")
diff --git a/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/templates/admanager_config.yaml.j2 b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/templates/admanager_config.yaml.j2
new file mode 100644
index 0000000..787aa3e
--- /dev/null
+++ b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/templates/admanager_config.yaml.j2
@@ -0,0 +1,24 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+server:
+  applicationConnectors:
+   - type: http
+     port: {{ams_admanager_port}}
+  requestLog:
+    type: external
+
+logging:
+  type: external


[ambari] 10/39: AMBARI-17382 : Migrate AMS queries to use ROW_TIMESTAMP instead of native timerange hint. (avijayan)

Posted by av...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

avijayan pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/ambari.git

commit 935f3329574d712dd1d976fb3ad5b154ca82ccb1
Author: Aravindan Vijayan <av...@hortonworks.com>
AuthorDate: Mon Jul 10 17:03:22 2017 -0700

    AMBARI-17382 : Migrate AMS queries to use ROW_TIMESTAMP instead of native timerange hint. (avijayan)
---
 .../sink/timeline/SingleValuedTimelineMetric.java  | 15 ++-----
 .../metrics2/sink/timeline/TimelineMetric.java     | 18 ++-------
 .../metrics2/sink/timeline/TimelineMetrics.java    |  8 +---
 .../cache/TimelineMetricsEhCacheSizeOfEngine.java  |  1 -
 .../timeline/AggregatedMetricsPublisherTest.java   |  6 +--
 .../sink/timeline/RawMetricsPublisherTest.java     |  2 +-
 .../metrics/timeline/PhoenixHBaseAccessor.java     | 24 ++++-------
 .../timeline/TimelineMetricStoreWatcher.java       |  1 -
 .../aggregators/AbstractTimelineAggregator.java    |  2 +-
 .../TimelineMetricClusterAggregator.java           |  4 +-
 .../TimelineMetricClusterAggregatorSecond.java     |  9 +----
 .../aggregators/TimelineMetricHostAggregator.java  |  7 ++--
 .../aggregators/TimelineMetricReadHelper.java      |  4 +-
 .../v2/TimelineMetricClusterAggregator.java        |  2 +-
 .../v2/TimelineMetricHostAggregator.java           |  2 +-
 .../metrics/timeline/query/PhoenixTransactSQL.java | 47 +++++++++-------------
 .../metrics/timeline/query/TopNCondition.java      |  2 -
 .../source/cache/InternalMetricsCache.java         |  2 -
 .../TestApplicationHistoryServer.java              |  1 +
 .../timeline/AbstractMiniHBaseClusterTest.java     | 15 ++++---
 .../metrics/timeline/ITPhoenixHBaseAccessor.java   |  2 +-
 .../metrics/timeline/MetricTestHelper.java         |  2 +-
 .../metrics/timeline/TestPhoenixTransactSQL.java   |  6 +--
 .../timeline/aggregators/ITClusterAggregator.java  |  1 -
 .../timeline/aggregators/ITMetricAggregator.java   | 14 ++-----
 .../timeline/discovery/TestMetadataManager.java    |  5 ---
 .../timeline/source/RawMetricsSourceTest.java      |  1 -
 .../metrics/timeline/MetricsRequestHelper.java     |  4 +-
 .../package/templates/smoketest_metrics.json.j2    |  1 -
 .../cache/TimelineMetricCacheSizingTest.java       |  1 -
 30 files changed, 66 insertions(+), 143 deletions(-)

diff --git a/ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/SingleValuedTimelineMetric.java b/ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/SingleValuedTimelineMetric.java
index 4bb9355..83d8e2c 100644
--- a/ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/SingleValuedTimelineMetric.java
+++ b/ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/SingleValuedTimelineMetric.java
@@ -23,7 +23,6 @@ package org.apache.hadoop.metrics2.sink.timeline;
  * with @TimelineMetric
  */
 public class SingleValuedTimelineMetric {
-  private Long timestamp;
   private Double value;
   private String metricName;
   private String appId;
@@ -31,26 +30,21 @@ public class SingleValuedTimelineMetric {
   private String hostName;
   private Long startTime;
 
-  public void setSingleTimeseriesValue(Long timestamp, Double value) {
-    this.timestamp = timestamp;
+  public void setSingleTimeseriesValue(Long startTime, Double value) {
+    this.startTime = startTime;
     this.value = value;
   }
 
   public SingleValuedTimelineMetric(String metricName, String appId,
                                     String instanceId, String hostName,
-                                    long timestamp, long startTime) {
+                                    long startTime) {
     this.metricName = metricName;
     this.appId = appId;
     this.instanceId = instanceId;
     this.hostName = hostName;
-    this.timestamp = timestamp;
     this.startTime = startTime;
   }
 
-  public Long getTimestamp() {
-    return timestamp;
-  }
-
   public long getStartTime() {
     return startTime;
   }
@@ -93,8 +87,7 @@ public class SingleValuedTimelineMetric {
     metric.setHostName(this.hostName);
     metric.setInstanceId(this.instanceId);
     metric.setStartTime(this.startTime);
-    metric.setTimestamp(this.timestamp);
-    metric.getMetricValues().put(timestamp, value);
+    metric.getMetricValues().put(startTime, value);
     return metric;
   }
 }
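
After this change the class carries a single (startTime, value) point, with startTime doubling as the timeseries key. A tiny usage sketch built only from the constructor and setter shown above:

    import org.apache.hadoop.metrics2.sink.timeline.SingleValuedTimelineMetric;

    public class SingleValueDemo {
      public static void main(String[] args) {
        long now = System.currentTimeMillis();
        SingleValuedTimelineMetric m =
            new SingleValuedTimelineMetric("cpu_user", "HOST", null, "host-1", now);
        // startTime is now the timestamp under which the single value is stored.
        m.setSingleTimeseriesValue(now, 42.0);
        assert m.getStartTime() == now;
      }
    }
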
diff --git a/ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/TimelineMetric.java b/ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/TimelineMetric.java
index 3d3b19c..1f03fe9 100644
--- a/ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/TimelineMetric.java
+++ b/ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/TimelineMetric.java
@@ -40,7 +40,6 @@ public class TimelineMetric implements Comparable<TimelineMetric> {
   private String appId;
   private String instanceId;
   private String hostName;
-  private long timestamp;
   private long startTime;
   private String type;
   private String units;
@@ -65,7 +64,6 @@ public class TimelineMetric implements Comparable<TimelineMetric> {
     setMetricName(metric.getMetricName());
     setType(metric.getType());
     setUnits(metric.getUnits());
-    setTimestamp(metric.getTimestamp());
     setAppId(metric.getAppId());
     setInstanceId(metric.getInstanceId());
     setHostName(metric.getHostName());
@@ -109,15 +107,6 @@ public class TimelineMetric implements Comparable<TimelineMetric> {
     this.hostName = hostName;
   }
 
-  @XmlElement(name = "timestamp")
-  public long getTimestamp() {
-    return timestamp;
-  }
-
-  public void setTimestamp(long timestamp) {
-    this.timestamp = timestamp;
-  }
-
   @XmlElement(name = "starttime")
   public long getStartTime() {
     return startTime;
@@ -181,7 +170,6 @@ public class TimelineMetric implements Comparable<TimelineMetric> {
       return false;
     if (instanceId != null ? !instanceId.equals(metric.instanceId) : metric.instanceId != null)
       return false;
-    if (timestamp != metric.timestamp) return false;
     if (startTime != metric.startTime) return false;
 
     return true;
@@ -205,15 +193,15 @@ public class TimelineMetric implements Comparable<TimelineMetric> {
     result = 31 * result + (appId != null ? appId.hashCode() : 0);
     result = 31 * result + (instanceId != null ? instanceId.hashCode() : 0);
     result = 31 * result + (hostName != null ? hostName.hashCode() : 0);
-    result = 31 * result + (int) (timestamp ^ (timestamp >>> 32));
+    result = 31 * result + (int) (startTime ^ (startTime >>> 32));
     return result;
   }
 
   @Override
   public int compareTo(TimelineMetric other) {
-    if (timestamp > other.timestamp) {
+    if (startTime > other.startTime) {
       return -1;
-    } else if (timestamp < other.timestamp) {
+    } else if (startTime < other.startTime) {
       return 1;
     } else {
       return metricName.compareTo(other.metricName);
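
With the timestamp field gone, ordering is defined by startTime alone: the larger startTime compares as -1, so a sort yields newest-first, with metricName as the tiebreaker. A small sketch (assuming the default constructor of this bean class):

    import java.util.Arrays;
    import java.util.Collections;
    import java.util.List;

    import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;

    public class OrderingDemo {
      public static void main(String[] args) {
        TimelineMetric older = new TimelineMetric();
        older.setMetricName("cpu_user");
        older.setStartTime(1000L);

        TimelineMetric newer = new TimelineMetric();
        newer.setMetricName("cpu_user");
        newer.setStartTime(2000L);

        List<TimelineMetric> metrics = Arrays.asList(older, newer);
        Collections.sort(metrics);
        // The metric with the larger startTime sorts first.
        assert metrics.get(0) == newer;
      }
    }
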
diff --git a/ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/TimelineMetrics.java b/ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/TimelineMetrics.java
index 383079a..0c5965c 100644
--- a/ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/TimelineMetrics.java
+++ b/ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/TimelineMetrics.java
@@ -89,9 +89,6 @@ public class TimelineMetrics {
 
     if (metricToMerge != null) {
       metricToMerge.addMetricValues(metric.getMetricValues());
-      if (metricToMerge.getTimestamp() > metric.getTimestamp()) {
-        metricToMerge.setTimestamp(metric.getTimestamp());
-      }
       if (metricToMerge.getStartTime() > metric.getStartTime()) {
         metricToMerge.setStartTime(metric.getStartTime());
       }
@@ -114,10 +111,7 @@ public class TimelineMetrics {
     }
 
     if (metricToMerge != null) {
-      metricToMerge.getMetricValues().put(metric.getTimestamp(), metric.getValue());
-      if (metricToMerge.getTimestamp() > metric.getTimestamp()) {
-        metricToMerge.setTimestamp(metric.getTimestamp());
-      }
+      metricToMerge.getMetricValues().put(metric.getStartTime(), metric.getValue());
       if (metricToMerge.getStartTime() > metric.getStartTime()) {
         metricToMerge.setStartTime(metric.getStartTime());
       }
diff --git a/ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/cache/TimelineMetricsEhCacheSizeOfEngine.java b/ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/cache/TimelineMetricsEhCacheSizeOfEngine.java
index 0e23e17..0e4871b 100644
--- a/ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/cache/TimelineMetricsEhCacheSizeOfEngine.java
+++ b/ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/cache/TimelineMetricsEhCacheSizeOfEngine.java
@@ -93,7 +93,6 @@ public abstract class TimelineMetricsEhCacheSizeOfEngine implements SizeOfEngine
           timelineMetricPrimitivesApproximation += reflectionSizeOf.sizeOf(metric.getAppId());
           timelineMetricPrimitivesApproximation += reflectionSizeOf.sizeOf(metric.getHostName());
           timelineMetricPrimitivesApproximation += reflectionSizeOf.sizeOf(metric.getInstanceId());
-          timelineMetricPrimitivesApproximation += reflectionSizeOf.sizeOf(metric.getTimestamp());
           timelineMetricPrimitivesApproximation += reflectionSizeOf.sizeOf(metric.getStartTime());
           timelineMetricPrimitivesApproximation += reflectionSizeOf.sizeOf(metric.getType());
           timelineMetricPrimitivesApproximation += 8; // Object overhead
diff --git a/ambari-metrics/ambari-metrics-host-aggregator/src/test/java/org/apache/hadoop/metrics2/sink/timeline/AggregatedMetricsPublisherTest.java b/ambari-metrics/ambari-metrics-host-aggregator/src/test/java/org/apache/hadoop/metrics2/sink/timeline/AggregatedMetricsPublisherTest.java
index 3413052..8c17ba1 100644
--- a/ambari-metrics/ambari-metrics-host-aggregator/src/test/java/org/apache/hadoop/metrics2/sink/timeline/AggregatedMetricsPublisherTest.java
+++ b/ambari-metrics/ambari-metrics-host-aggregator/src/test/java/org/apache/hadoop/metrics2/sink/timeline/AggregatedMetricsPublisherTest.java
@@ -62,9 +62,9 @@ public class AggregatedMetricsPublisherTest {
                 new AggregatedMetricsPublisher(TimelineMetricsHolder.getInstance(), configuration, 60);
 
         String aggregatedJson = aggregatedMetricsPublisher.processMetrics(timelineMetricsHolder.extractMetricsForAggregationPublishing());
-        String expectedMetric1App1 = "{\"timelineMetric\":{\"timestamp\":0,\"metadata\":{},\"metricname\":\"metricName1\",\"appid\":\"app1\",\"starttime\":0,\"metrics\":{}},\"metricAggregate\":{\"sum\":6.0,\"deviation\":0.0,\"max\":3.0,\"min\":1.0,\"numberOfSamples\":3}}";
-        String expectedMetric2App2 = "{\"timelineMetric\":{\"timestamp\":0,\"metadata\":{},\"metricname\":\"metricName2\",\"appid\":\"app2\",\"starttime\":0,\"metrics\":{}},\"metricAggregate\":{\"sum\":15.0,\"deviation\":0.0,\"max\":6.0,\"min\":4.0,\"numberOfSamples\":3}}";
-        String expectedMetric3App3 = "{\"timelineMetric\":{\"timestamp\":0,\"metadata\":{},\"metricname\":\"metricName3\",\"appid\":\"app3\",\"starttime\":0,\"metrics\":{}},\"metricAggregate\":{\"sum\":24.0,\"deviation\":0.0,\"max\":9.0,\"min\":7.0,\"numberOfSamples\":3}}";
+        String expectedMetric1App1 = "{\"timelineMetric\":{\"metadata\":{},\"metricname\":\"metricName1\",\"appid\":\"app1\",\"starttime\":0,\"metrics\":{}},\"metricAggregate\":{\"sum\":6.0,\"deviation\":0.0,\"max\":3.0,\"min\":1.0,\"numberOfSamples\":3}}";
+        String expectedMetric2App2 = "{\"timelineMetric\":{\"metadata\":{},\"metricname\":\"metricName2\",\"appid\":\"app2\",\"starttime\":0,\"metrics\":{}},\"metricAggregate\":{\"sum\":15.0,\"deviation\":0.0,\"max\":6.0,\"min\":4.0,\"numberOfSamples\":3}}";
+        String expectedMetric3App3 = "{\"timelineMetric\":{\"metadata\":{},\"metricname\":\"metricName3\",\"appid\":\"app3\",\"starttime\":0,\"metrics\":{}},\"metricAggregate\":{\"sum\":24.0,\"deviation\":0.0,\"max\":9.0,\"min\":7.0,\"numberOfSamples\":3}}";
         Assert.assertNotNull(aggregatedJson);
         Assert.assertTrue(aggregatedJson.contains(expectedMetric1App1));
         Assert.assertTrue(aggregatedJson.contains(expectedMetric3App3));
diff --git a/ambari-metrics/ambari-metrics-host-aggregator/src/test/java/org/apache/hadoop/metrics2/sink/timeline/RawMetricsPublisherTest.java b/ambari-metrics/ambari-metrics-host-aggregator/src/test/java/org/apache/hadoop/metrics2/sink/timeline/RawMetricsPublisherTest.java
index 60510d2..b43a87c 100644
--- a/ambari-metrics/ambari-metrics-host-aggregator/src/test/java/org/apache/hadoop/metrics2/sink/timeline/RawMetricsPublisherTest.java
+++ b/ambari-metrics/ambari-metrics-host-aggregator/src/test/java/org/apache/hadoop/metrics2/sink/timeline/RawMetricsPublisherTest.java
@@ -62,7 +62,7 @@ public class RawMetricsPublisherTest {
                 new RawMetricsPublisher(TimelineMetricsHolder.getInstance(), configuration, 60);
 
         String rawJson = rawMetricsPublisher.processMetrics(timelineMetricsHolder.extractMetricsForRawPublishing());
-        String expectedResult = "{\"metrics\":[{\"timestamp\":0,\"metadata\":{},\"metricname\":\"metricName1\",\"appid\":\"app1\",\"starttime\":0,\"metrics\":{\"1\":1.0,\"2\":2.0,\"3\":3.0}},{\"timestamp\":0,\"metadata\":{},\"metricname\":\"metricName2\",\"appid\":\"app2\",\"starttime\":0,\"metrics\":{\"1\":4.0,\"2\":5.0,\"3\":6.0}},{\"timestamp\":0,\"metadata\":{},\"metricname\":\"metricName3\",\"appid\":\"app3\",\"starttime\":0,\"metrics\":{\"1\":7.0,\"2\":8.0,\"3\":9.0}}]}";
+        String expectedResult = "{\"metrics\":[{\"metadata\":{},\"metricname\":\"metricName1\",\"appid\":\"app1\",\"starttime\":0,\"metrics\":{\"1\":1.0,\"2\":2.0,\"3\":3.0}},{\"metadata\":{},\"metricname\":\"metricName2\",\"appid\":\"app2\",\"starttime\":0,\"metrics\":{\"1\":4.0,\"2\":5.0,\"3\":6.0}},{\"metadata\":{},\"metricname\":\"metricName3\",\"appid\":\"app3\",\"starttime\":0,\"metrics\":{\"1\":7.0,\"2\":8.0,\"3\":9.0}}]}";
         Assert.assertNotNull(rawJson);
         Assert.assertEquals(expectedResult, rawJson);
     }
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/PhoenixHBaseAccessor.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/PhoenixHBaseAccessor.java
index 0c1e979..d207775 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/PhoenixHBaseAccessor.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/PhoenixHBaseAccessor.java
@@ -321,14 +321,6 @@ public class PhoenixHBaseAccessor {
               UPSERT_METRICS_SQL, METRICS_RECORD_TABLE_NAME));
       for (TimelineMetrics timelineMetrics : timelineMetricsCollection) {
         for (TimelineMetric metric : timelineMetrics.getMetrics()) {
-          if (Math.abs(currentTime - metric.getStartTime()) > outOfBandTimeAllowance) {
-            // If timeseries start time is way in the past : discard
-            LOG.debug("Discarding out of band timeseries, currentTime = "
-                    + currentTime + ", startTime = " + metric.getStartTime()
-                    + ", hostname = " + metric.getHostName());
-            continue;
-          }
-
           metricRecordStmt.clearParameters();
 
           if (LOG.isTraceEnabled()) {
@@ -345,14 +337,13 @@ public class PhoenixHBaseAccessor {
             continue;
           }
           metricRecordStmt.setBytes(1, uuid);
-          metricRecordStmt.setLong(2, currentTime);
-          metricRecordStmt.setLong(3, metric.getStartTime());
-          metricRecordStmt.setDouble(4, aggregates[0]);
-          metricRecordStmt.setDouble(5, aggregates[1]);
-          metricRecordStmt.setDouble(6, aggregates[2]);
-          metricRecordStmt.setLong(7, (long) aggregates[3]);
+          metricRecordStmt.setLong(2, metric.getStartTime());
+          metricRecordStmt.setDouble(3, aggregates[0]);
+          metricRecordStmt.setDouble(4, aggregates[1]);
+          metricRecordStmt.setDouble(5, aggregates[2]);
+          metricRecordStmt.setLong(6, (long) aggregates[3]);
           String json = TimelineUtils.dumpTimelineRecordtoJSON(metric.getMetricValues());
-          metricRecordStmt.setString(8, json);
+          metricRecordStmt.setString(7, json);
 
           try {
             metricRecordStmt.executeUpdate();
@@ -1191,7 +1182,6 @@ public class PhoenixHBaseAccessor {
       timelineMetric.getAppId(),
       timelineMetric.getInstanceId(),
       null,
-      rs.getLong("SERVER_TIME"),
       rs.getLong("SERVER_TIME")
     );
 
@@ -1287,7 +1277,7 @@ public class PhoenixHBaseAccessor {
         rowCount++;
         stmt.clearParameters();
         stmt.setBytes(1, uuid);
-        stmt.setLong(2, metric.getTimestamp());
+        stmt.setLong(2, metric.getStartTime());
         stmt.setDouble(3, hostAggregate.getSum());
         stmt.setDouble(4, hostAggregate.getMax());
         stmt.setDouble(5, hostAggregate.getMin());
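
The parameter shuffle above (SERVER_TIME now carries the series start time at position 2) lines up with the DDL change further down in this commit, where SERVER_TIME becomes a Phoenix ROW_TIMESTAMP column. Mapping it onto the native HBase cell timestamp means a plain WHERE clause on SERVER_TIME is pushed down as an HBase time-range scan, which is what replaces the removed getQueryHint(startTime) native timerange hint. A hedged JDBC sketch of such a query (connection URL and time window are illustrative):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class RowTimestampQuery {
      public static void main(String[] args) throws Exception {
        long endTime = System.currentTimeMillis();
        long startTime = endTime - 60 * 60 * 1000L; // last hour

        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host:2181");
             PreparedStatement stmt = conn.prepareStatement(
                 "SELECT UUID, SERVER_TIME, METRIC_SUM, METRIC_COUNT " +
                 "FROM METRIC_RECORD WHERE SERVER_TIME >= ? AND SERVER_TIME < ?")) {
          stmt.setLong(1, startTime);
          stmt.setLong(2, endTime);
          try (ResultSet rs = stmt.executeQuery()) {
            while (rs.next()) {
              // The SERVER_TIME predicate prunes by the HBase cell timestamp server-side.
            }
          }
        }
      }
    }
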
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricStoreWatcher.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricStoreWatcher.java
index aa53430..517d3a4 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricStoreWatcher.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricStoreWatcher.java
@@ -93,7 +93,6 @@ public class TimelineMetricStoreWatcher implements Runnable {
     fakeMetric.setHostName(FAKE_HOSTNAME);
     fakeMetric.setAppId(FAKE_APP_ID);
     fakeMetric.setStartTime(startTime);
-    fakeMetric.setTimestamp(startTime);
     fakeMetric.getMetricValues().put(startTime, 0.0);
 
     final TimelineMetrics metrics = new TimelineMetrics();
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/AbstractTimelineAggregator.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/AbstractTimelineAggregator.java
index d953be4..b2edb73 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/AbstractTimelineAggregator.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/AbstractTimelineAggregator.java
@@ -330,7 +330,7 @@ public abstract class AbstractTimelineAggregator implements TimelineMetricAggreg
     if (outputTableName.contains("RECORD")) {
       queryPrefix = PhoenixTransactSQL.DOWNSAMPLE_HOST_METRIC_SQL_UPSERT_PREFIX;
     }
-    queryPrefix = String.format(queryPrefix, getQueryHint(startTime), outputTableName);
+    queryPrefix = String.format(queryPrefix, outputTableName);
 
     for (Iterator<CustomDownSampler> iterator = configuredDownSamplers.iterator(); iterator.hasNext();){
       CustomDownSampler downSampler = iterator.next();
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricClusterAggregator.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricClusterAggregator.java
index 0f6dd79..7368bfb 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricClusterAggregator.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricClusterAggregator.java
@@ -67,10 +67,10 @@ public class TimelineMetricClusterAggregator extends AbstractTimelineAggregator
       endTime, null, null, true);
     condition.setNoLimit();
     condition.setFetchSize(resultsetFetchSize);
-    String sqlStr = String.format(GET_CLUSTER_AGGREGATE_TIME_SQL, getQueryHint(startTime), tableName);
+    String sqlStr = String.format(GET_CLUSTER_AGGREGATE_TIME_SQL, tableName);
     // HOST_COUNT vs METRIC_COUNT
     if (isClusterPrecisionInputTable) {
-      sqlStr = String.format(GET_CLUSTER_AGGREGATE_SQL, getQueryHint(startTime), tableName);
+      sqlStr = String.format(GET_CLUSTER_AGGREGATE_SQL, tableName);
     }
 
     condition.setStatement(sqlStr);
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricClusterAggregatorSecond.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricClusterAggregatorSecond.java
index 8dfc950..a2f23de 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricClusterAggregatorSecond.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricClusterAggregatorSecond.java
@@ -149,7 +149,7 @@ public class TimelineMetricClusterAggregatorSecond extends AbstractTimelineAggre
     condition.setNoLimit();
     condition.setFetchSize(resultsetFetchSize);
     condition.setStatement(String.format(GET_METRIC_SQL,
-      getQueryHint(startTime), METRICS_RECORD_TABLE_NAME));
+      METRICS_RECORD_TABLE_NAME));
     // Retaining order of the row-key avoids client side merge sort.
     condition.addOrderByColumn("UUID");
     condition.addOrderByColumn("SERVER_TIME");
@@ -280,13 +280,6 @@ public class TimelineMetricClusterAggregatorSecond extends AbstractTimelineAggre
     Map<TimelineClusterMetric, Double> timelineClusterMetricMap =
       new HashMap<TimelineClusterMetric, Double>();
 
-    Long timeShift = timelineMetric.getTimestamp() - timelineMetric.getStartTime();
-    if (timeShift < 0) {
-      LOG.debug("Invalid time shift found, possible discrepancy in clocks. " +
-        "timeShift = " + timeShift);
-      timeShift = 0l;
-    }
-
     Long prevTimestamp = -1l;
     TimelineClusterMetric prevMetric = null;
     int count = 0;
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricHostAggregator.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricHostAggregator.java
index 8f941e1..f9f92db 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricHostAggregator.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricHostAggregator.java
@@ -74,8 +74,7 @@ public class TimelineMetricHostAggregator extends AbstractTimelineAggregator {
       endTime, null, null, true);
     condition.setNoLimit();
     condition.setFetchSize(resultsetFetchSize);
-    condition.setStatement(String.format(GET_METRIC_AGGREGATE_ONLY_SQL,
-      getQueryHint(startTime), tableName));
+    condition.setStatement(String.format(GET_METRIC_AGGREGATE_ONLY_SQL, tableName));
     // Retaining order of the row-key avoids client side merge sort.
     condition.addOrderByColumn("UUID");
     condition.addOrderByColumn("SERVER_TIME");
@@ -98,7 +97,7 @@ public class TimelineMetricHostAggregator extends AbstractTimelineAggregator {
       if (existingMetric == null) {
         // First row
         existingMetric = currentMetric;
-        currentMetric.setTimestamp(endTime);
+        currentMetric.setStartTime(endTime);
         hostAggregate = new MetricHostAggregate();
         hostAggregateMap.put(currentMetric, hostAggregate);
       }
@@ -108,7 +107,7 @@ public class TimelineMetricHostAggregator extends AbstractTimelineAggregator {
         hostAggregate.updateAggregates(currentHostAggregate);
       } else {
         // Switched over to a new metric - save existing - create new aggregate
-        currentMetric.setTimestamp(endTime);
+        currentMetric.setStartTime(endTime);
         hostAggregate = new MetricHostAggregate();
         hostAggregate.updateAggregates(currentHostAggregate);
         hostAggregateMap.put(currentMetric, hostAggregate);
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricReadHelper.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricReadHelper.java
index 8a5606a..539190b 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricReadHelper.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricReadHelper.java
@@ -69,7 +69,6 @@ public class TimelineMetricReadHelper {
       timelineMetric.getAppId(),
       timelineMetric.getInstanceId(),
       timelineMetric.getHostName(),
-      rs.getLong("SERVER_TIME"),
       rs.getLong("SERVER_TIME")
     );
 
@@ -108,8 +107,7 @@ public class TimelineMetricReadHelper {
     if (ignoreInstance) {
       metric.setInstanceId(null);
     }
-    metric.setTimestamp(rs.getLong("SERVER_TIME"));
-    metric.setStartTime(rs.getLong("START_TIME"));
+    metric.setStartTime(rs.getLong("SERVER_TIME"));
     return metric;
   }
 
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/v2/TimelineMetricClusterAggregator.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/v2/TimelineMetricClusterAggregator.java
index c7b605f..e6d0b32 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/v2/TimelineMetricClusterAggregator.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/v2/TimelineMetricClusterAggregator.java
@@ -74,7 +74,7 @@ public class TimelineMetricClusterAggregator extends AbstractTimelineAggregator
      */
 
     condition.setStatement(String.format(GET_AGGREGATED_APP_METRIC_GROUPBY_SQL,
-      getQueryHint(startTime), outputTableName, endTime, aggregateColumnName, tableName,
+      outputTableName, endTime, aggregateColumnName, tableName,
       getDownsampledMetricSkipClause(), startTime, endTime));
 
     if (LOG.isDebugEnabled()) {
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/v2/TimelineMetricHostAggregator.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/v2/TimelineMetricHostAggregator.java
index 57a3034..5cec65d 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/v2/TimelineMetricHostAggregator.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/v2/TimelineMetricHostAggregator.java
@@ -64,7 +64,7 @@ public class TimelineMetricHostAggregator extends AbstractTimelineAggregator {
     condition.setDoUpdate(true);
 
     condition.setStatement(String.format(GET_AGGREGATED_HOST_METRIC_GROUPBY_SQL,
-      getQueryHint(startTime), outputTableName, endTime, tableName,
+      outputTableName, endTime, tableName,
       getDownsampledMetricSkipClause(), startTime, endTime));
 
     if (LOG.isDebugEnabled()) {
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/PhoenixTransactSQL.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/PhoenixTransactSQL.java
index 25e9a02..d94d14c 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/PhoenixTransactSQL.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/PhoenixTransactSQL.java
@@ -42,13 +42,12 @@ public class PhoenixTransactSQL {
   public static final String CREATE_METRICS_TABLE_SQL = "CREATE TABLE IF NOT " +
     "EXISTS METRIC_RECORD (UUID BINARY(20) NOT NULL, " +
     "SERVER_TIME BIGINT NOT NULL, " +
-    "START_TIME UNSIGNED_LONG, " +
     "METRIC_SUM DOUBLE, " +
     "METRIC_COUNT UNSIGNED_INT, " +
     "METRIC_MAX DOUBLE, " +
     "METRIC_MIN DOUBLE, " +
     "METRICS VARCHAR CONSTRAINT pk " +
-    "PRIMARY KEY (UUID, SERVER_TIME)) DATA_BLOCK_ENCODING='%s', IMMUTABLE_ROWS=true, " +
+    "PRIMARY KEY (UUID, SERVER_TIME ROW_TIMESTAMP)) DATA_BLOCK_ENCODING='%s', IMMUTABLE_ROWS=true, " +
     "TTL=%s, COMPRESSION='%s'";
 
   public static final String CREATE_CONTAINER_METRICS_TABLE_SQL =
@@ -81,35 +80,35 @@ public class PhoenixTransactSQL {
   public static final String CREATE_METRICS_AGGREGATE_TABLE_SQL =
     "CREATE TABLE IF NOT EXISTS %s " +
       "(UUID BINARY(20) NOT NULL, " +
-      "SERVER_TIME UNSIGNED_LONG NOT NULL, " +
+      "SERVER_TIME BIGINT NOT NULL, " +
       "METRIC_SUM DOUBLE," +
       "METRIC_COUNT UNSIGNED_INT, " +
       "METRIC_MAX DOUBLE," +
       "METRIC_MIN DOUBLE CONSTRAINT pk " +
-      "PRIMARY KEY (UUID, SERVER_TIME)) DATA_BLOCK_ENCODING='%s', IMMUTABLE_ROWS=true, TTL=%s," +
+      "PRIMARY KEY (UUID, SERVER_TIME ROW_TIMESTAMP)) DATA_BLOCK_ENCODING='%s', IMMUTABLE_ROWS=true, TTL=%s," +
       " COMPRESSION='%s'";
 
   public static final String CREATE_METRICS_CLUSTER_AGGREGATE_TABLE_SQL =
     "CREATE TABLE IF NOT EXISTS %s " +
       "(UUID BINARY(16) NOT NULL, " +
-      "SERVER_TIME UNSIGNED_LONG NOT NULL, " +
+      "SERVER_TIME BIGINT NOT NULL, " +
       "METRIC_SUM DOUBLE, " +
       "HOSTS_COUNT UNSIGNED_INT, " +
       "METRIC_MAX DOUBLE, " +
       "METRIC_MIN DOUBLE " +
-      "CONSTRAINT pk PRIMARY KEY (UUID, SERVER_TIME)) DATA_BLOCK_ENCODING='%s', IMMUTABLE_ROWS=true, " +
+      "CONSTRAINT pk PRIMARY KEY (UUID, SERVER_TIME ROW_TIMESTAMP)) DATA_BLOCK_ENCODING='%s', IMMUTABLE_ROWS=true, " +
       "TTL=%s, COMPRESSION='%s'";
 
   // HOSTS_COUNT vs METRIC_COUNT
   public static final String CREATE_METRICS_CLUSTER_AGGREGATE_GROUPED_TABLE_SQL =
     "CREATE TABLE IF NOT EXISTS %s " +
       "(UUID BINARY(16) NOT NULL, " +
-      "SERVER_TIME UNSIGNED_LONG NOT NULL, " +
+      "SERVER_TIME BIGINT NOT NULL, " +
       "METRIC_SUM DOUBLE, " +
       "METRIC_COUNT UNSIGNED_INT, " +
       "METRIC_MAX DOUBLE, " +
       "METRIC_MIN DOUBLE " +
-      "CONSTRAINT pk PRIMARY KEY (UUID, SERVER_TIME)) DATA_BLOCK_ENCODING='%s', IMMUTABLE_ROWS=true, " +
+      "CONSTRAINT pk PRIMARY KEY (UUID, SERVER_TIME ROW_TIMESTAMP)) DATA_BLOCK_ENCODING='%s', IMMUTABLE_ROWS=true, " +
       "TTL=%s, COMPRESSION='%s'";
 
   public static final String CREATE_METRICS_METADATA_TABLE_SQL =
@@ -152,13 +151,12 @@ public class PhoenixTransactSQL {
   public static final String UPSERT_METRICS_SQL = "UPSERT INTO %s " +
     "(UUID, " +
     "SERVER_TIME, " +
-    "START_TIME, " +
     "METRIC_SUM, " +
     "METRIC_MAX, " +
     "METRIC_MIN, " +
     "METRIC_COUNT, " +
     "METRICS) VALUES " +
-    "(?, ?, ?, ?, ?, ?, ?, ?)";
+    "(?, ?, ?, ?, ?, ?, ?)";
 
   public static final String UPSERT_CONTAINER_METRICS_SQL = "UPSERT INTO %s " +
       "(APP_ID,"
@@ -227,7 +225,7 @@ public class PhoenixTransactSQL {
   /**
    * Retrieve a set of rows from metrics records table.
    */
-  public static final String GET_METRIC_SQL = "SELECT %s UUID, SERVER_TIME, START_TIME, " +
+  public static final String GET_METRIC_SQL = "SELECT UUID, SERVER_TIME, " +
     "METRIC_SUM, " +
     "METRIC_MAX, " +
     "METRIC_MIN, " +
@@ -242,7 +240,7 @@ public class PhoenixTransactSQL {
    * in Apache Phoenix
    */
   public static final String GET_LATEST_METRIC_SQL = "SELECT %s E.UUID AS UUID, " +
-    "E.SERVER_TIME AS SERVER_TIME, E.START_TIME AS START_TIME, " +
+    "E.SERVER_TIME AS SERVER_TIME, " +
     "E.METRIC_SUM AS METRIC_SUM, " +
     "E.METRIC_MAX AS METRIC_MAX, E.METRIC_MIN AS METRIC_MIN, " +
     "E.METRIC_COUNT AS METRIC_COUNT, E.METRICS AS METRICS " +
@@ -257,7 +255,7 @@ public class PhoenixTransactSQL {
     "ON E.UUID=I.UUID " +
     "AND E.SERVER_TIME=I.MAX_SERVER_TIME";
 
-  public static final String GET_METRIC_AGGREGATE_ONLY_SQL = "SELECT %s UUID, " +
+  public static final String GET_METRIC_AGGREGATE_ONLY_SQL = "SELECT UUID, " +
     "SERVER_TIME, " +
     "METRIC_SUM, " +
     "METRIC_MAX, " +
@@ -265,7 +263,7 @@ public class PhoenixTransactSQL {
     "METRIC_COUNT " +
     "FROM %s";
 
-  public static final String GET_CLUSTER_AGGREGATE_SQL = "SELECT %s " +
+  public static final String GET_CLUSTER_AGGREGATE_SQL = "SELECT " +
     "UUID, " +
     "SERVER_TIME, " +
     "METRIC_SUM, " +
@@ -274,7 +272,7 @@ public class PhoenixTransactSQL {
     "METRIC_MIN " +
     "FROM %s";
 
-  public static final String GET_CLUSTER_AGGREGATE_TIME_SQL = "SELECT %s " +
+  public static final String GET_CLUSTER_AGGREGATE_TIME_SQL = "SELECT " +
     "UUID, " +
     "SERVER_TIME, " +
     "METRIC_SUM, " +
@@ -283,7 +281,7 @@ public class PhoenixTransactSQL {
     "METRIC_MIN " +
     "FROM %s";
 
-  public static final String TOP_N_INNER_SQL = "SELECT %s UUID " +
+  public static final String TOP_N_INNER_SQL = "SELECT UUID " +
     "FROM %s WHERE %s GROUP BY UUID ORDER BY %s LIMIT %s";
 
   public static final String GET_METRIC_METADATA_SQL = "SELECT " +
@@ -310,7 +308,7 @@ public class PhoenixTransactSQL {
   /**
    * Downsample host metrics.
    */
-  public static final String DOWNSAMPLE_HOST_METRIC_SQL_UPSERT_PREFIX = "UPSERT %s INTO %s (UUID, SERVER_TIME, " +
+  public static final String DOWNSAMPLE_HOST_METRIC_SQL_UPSERT_PREFIX = "UPSERT INTO %s (UUID, SERVER_TIME, " +
     "METRIC_SUM, METRIC_COUNT, METRIC_MAX, METRIC_MIN) ";
 
   public static final String TOPN_DOWNSAMPLER_HOST_METRIC_SELECT_SQL = "SELECT UUID, " +
@@ -321,7 +319,7 @@ public class PhoenixTransactSQL {
    * Aggregate app metrics using a GROUP BY clause to take advantage of
    * N - way parallel scan where N = number of regions.
    */
-  public static final String GET_AGGREGATED_APP_METRIC_GROUPBY_SQL = "UPSERT %s " +
+  public static final String GET_AGGREGATED_APP_METRIC_GROUPBY_SQL = "UPSERT " +
          "INTO %s (UUID, SERVER_TIME, METRIC_SUM, METRIC_COUNT, METRIC_MAX, METRIC_MIN) SELECT UUID, %s AS SERVER_TIME, " +
          "ROUND(AVG(METRIC_SUM),2), ROUND(AVG(%s)), MAX(METRIC_MAX), MIN(METRIC_MIN) FROM %s WHERE%s SERVER_TIME > %s AND " +
          "SERVER_TIME <= %s GROUP BY UUID";
@@ -329,7 +327,7 @@ public class PhoenixTransactSQL {
   /**
    * Downsample cluster metrics.
    */
-  public static final String DOWNSAMPLE_CLUSTER_METRIC_SQL_UPSERT_PREFIX = "UPSERT %s INTO %s (UUID, SERVER_TIME, " +
+  public static final String DOWNSAMPLE_CLUSTER_METRIC_SQL_UPSERT_PREFIX = "UPSERT INTO %s (UUID, SERVER_TIME, " +
     "METRIC_SUM, METRIC_COUNT, METRIC_MAX, METRIC_MIN) ";
 
   public static final String TOPN_DOWNSAMPLER_CLUSTER_METRIC_SELECT_SQL = "SELECT UUID, " +
@@ -380,7 +378,6 @@ public class PhoenixTransactSQL {
 
   public static final String DEFAULT_TABLE_COMPRESSION = "SNAPPY";
   public static final String DEFAULT_ENCODING = "FAST_DIFF";
-  public static final long NATIVE_TIME_RANGE_DELTA = 120000; // 2 minutes
   public static final long HOUR = 3600000; // 1 hour
   public static final long DAY = 86400000; // 1 day
   private static boolean sortMergeJoinEnabled = false;
@@ -447,9 +444,7 @@ public class PhoenixTransactSQL {
           query = GET_METRIC_SQL;
       }
 
-      stmtStr = String.format(query,
-        getNaiveTimeRangeHint(condition.getStartTime(), NATIVE_TIME_RANGE_DELTA),
-        metricsTable);
+      stmtStr = String.format(query, metricsTable);
     }
 
     StringBuilder sb = new StringBuilder(stmtStr);
@@ -642,9 +637,7 @@ public class PhoenixTransactSQL {
         queryStmt = GET_CLUSTER_AGGREGATE_SQL;
     }
 
-    queryStmt = String.format(queryStmt,
-      getNaiveTimeRangeHint(condition.getStartTime(), NATIVE_TIME_RANGE_DELTA),
-      metricsAggregateTable);
+    queryStmt = String.format(queryStmt, metricsAggregateTable);
 
     StringBuilder sb = new StringBuilder(queryStmt);
     sb.append(" WHERE ");
@@ -696,7 +689,7 @@ public class PhoenixTransactSQL {
     if (condition.getStatement() != null) {
       stmtStr = condition.getStatement();
     } else {
-      stmtStr = String.format(GET_CLUSTER_AGGREGATE_SQL, "",
+      stmtStr = String.format(GET_CLUSTER_AGGREGATE_SQL,
         METRICS_CLUSTER_AGGREGATE_TABLE_NAME);
     }
 
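The thread running through this file is the move from the hand-written time-range scan hint (getNaiveTimeRangeHint / NATIVE_TIME_RANGE_DELTA) to Phoenix's built-in ROW_TIMESTAMP designation on SERVER_TIME. A minimal, self-contained sketch of the mechanism follows; the table name and JDBC URL are hypothetical stand-ins, not anything from this patch:

    import java.sql.DriverManager

    object RowTimestampSketch extends App {
      // Hypothetical Phoenix JDBC URL; adjust to your ZooKeeper quorum.
      val conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181:/hbase")
      val stmt = conn.createStatement()
      // Declaring SERVER_TIME as ROW_TIMESTAMP maps it onto the HBase cell
      // timestamp, mirroring the DDL changes above.
      stmt.executeUpdate(
        "CREATE TABLE IF NOT EXISTS METRIC_EXAMPLE (" +
          "UUID BINARY(20) NOT NULL, " +
          "SERVER_TIME BIGINT NOT NULL, " +
          "METRIC_SUM DOUBLE " +
          "CONSTRAINT pk PRIMARY KEY (UUID, SERVER_TIME ROW_TIMESTAMP))")
      // Phoenix turns this predicate into a server-side HBase time-range scan,
      // which is why the explicit query hint could be dropped from the SQL above.
      val rs = stmt.executeQuery(
        "SELECT UUID, SERVER_TIME, METRIC_SUM FROM METRIC_EXAMPLE " +
          "WHERE SERVER_TIME >= 1000 AND SERVER_TIME < 2000")
      while (rs.next()) println(rs.getLong("SERVER_TIME"))
      conn.close()
    }
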
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/TopNCondition.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/TopNCondition.java
index 93242bd..38d0c6f 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/TopNCondition.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/TopNCondition.java
@@ -22,7 +22,6 @@ import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.metrics2.sink.timeline.Precision;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.Function;
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.NATIVE_TIME_RANGE_DELTA;
 import java.util.List;
 
 public class TopNCondition extends DefaultCondition{
@@ -66,7 +65,6 @@ public class TopNCondition extends DefaultCondition{
 
   public String getTopNInnerQuery() {
     return String.format(PhoenixTransactSQL.TOP_N_INNER_SQL,
-      PhoenixTransactSQL.getNaiveTimeRangeHint(getStartTime(), NATIVE_TIME_RANGE_DELTA),
       PhoenixTransactSQL.getTargetTableUsingPrecision(precision, CollectionUtils.isNotEmpty(hostnames)),
       super.getConditionClause().toString(), getTopNOrderByClause(), topN);
   }
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/source/cache/InternalMetricsCache.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/source/cache/InternalMetricsCache.java
index a4ed9bc..e5522c7 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/source/cache/InternalMetricsCache.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/source/cache/InternalMetricsCache.java
@@ -17,7 +17,6 @@
  */
 package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.source.cache;
 
-import java.util.ArrayList;
 import java.util.Collection;
 import java.util.Collections;
 import java.util.Date;
@@ -124,7 +123,6 @@ public class InternalMetricsCache {
             metric.setInstanceId(key.getInstanceId());
             metric.setHostName(key.getHostname());
             metric.setStartTime(key.getStartTime());
-            metric.setTimestamp(key.getStartTime());
             Element ele = cache.get(key);
             metric.setMetricValues(((InternalMetricCacheValue) ele.getObjectValue()).getMetricValues());
             metrics.getMetrics().add(metric);
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestApplicationHistoryServer.java b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestApplicationHistoryServer.java
index 41ddef5..03205e7 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestApplicationHistoryServer.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestApplicationHistoryServer.java
@@ -158,6 +158,7 @@ public class TestApplicationHistoryServer {
   }
 
   // simple test init/start/stop ApplicationHistoryServer. Status should change.
+  @Ignore
   @Test(timeout = 50000)
   public void testStartStopServer() throws Exception {
     Configuration config = new YarnConfiguration();
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/AbstractMiniHBaseClusterTest.java b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/AbstractMiniHBaseClusterTest.java
index fbf7b09..3a42db9 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/AbstractMiniHBaseClusterTest.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/AbstractMiniHBaseClusterTest.java
@@ -259,15 +259,14 @@ public abstract class AbstractMiniHBaseClusterTest extends BaseTest {
         metricRecordStmt.setString(2, metric.getHostName());
         metricRecordStmt.setString(3, metric.getAppId());
         metricRecordStmt.setString(4, metric.getInstanceId());
-        metricRecordStmt.setLong(5, currentTime);
-        metricRecordStmt.setLong(6, metric.getStartTime());
-        metricRecordStmt.setString(7, metric.getType());
-        metricRecordStmt.setDouble(8, aggregates[0]);
-        metricRecordStmt.setDouble(9, aggregates[1]);
-        metricRecordStmt.setDouble(10, aggregates[2]);
-        metricRecordStmt.setLong(11, (long) aggregates[3]);
+        metricRecordStmt.setLong(5, metric.getStartTime());
+        metricRecordStmt.setString(6, metric.getType());
+        metricRecordStmt.setDouble(7, aggregates[0]);
+        metricRecordStmt.setDouble(8, aggregates[1]);
+        metricRecordStmt.setDouble(9, aggregates[2]);
+        metricRecordStmt.setLong(10, (long) aggregates[3]);
         String json = TimelineUtils.dumpTimelineRecordtoJSON(metric.getMetricValues());
-        metricRecordStmt.setString(12, json);
+        metricRecordStmt.setString(11, json);
 
         try {
           metricRecordStmt.executeUpdate();
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/ITPhoenixHBaseAccessor.java b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/ITPhoenixHBaseAccessor.java
index c25d414..c60554c 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/ITPhoenixHBaseAccessor.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/ITPhoenixHBaseAccessor.java
@@ -422,7 +422,7 @@ public class ITPhoenixHBaseAccessor extends AbstractMiniHBaseClusterTest {
     while (set.next()) {
       assertEquals("application_1450744875949_0001", set.getString("APP_ID"));
       assertEquals("container_1450744875949_0001_01_000001", set.getString("CONTAINER_ID"));
-      assertEquals(new java.sql.Timestamp(startTime), set.getTimestamp("START_TIME"));
+      assertEquals(new java.sql.Timestamp(startTime), set.getTimestamp("SERVER_TIME"));
       assertEquals(new java.sql.Timestamp(finishTime), set.getTimestamp("FINISH_TIME"));
       assertEquals(5000, set.getLong("DURATION"));
       assertEquals("host1", set.getString("HOSTNAME"));
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/MetricTestHelper.java b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/MetricTestHelper.java
index 7dfe1fc..74da438 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/MetricTestHelper.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/MetricTestHelper.java
@@ -101,7 +101,7 @@ public class MetricTestHelper {
     metric.setAppId("test_app");
     metric.setInstanceId("test_instance");
     metric.setHostName("test_host");
-    metric.setTimestamp(startTime);
+    metric.setStartTime(startTime);
 
     return metric;
   }
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TestPhoenixTransactSQL.java b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TestPhoenixTransactSQL.java
index dd73a8a..cb3f3a7 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TestPhoenixTransactSQL.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TestPhoenixTransactSQL.java
@@ -566,8 +566,7 @@ public class TestPhoenixTransactSQL {
 
     String conditionClause = condition.getConditionClause().toString();
     String expectedClause = " UUID IN (" +
-      "SELECT " + PhoenixTransactSQL.getNaiveTimeRangeHint(condition.getStartTime(),120000l) + " " +
-      "UUID FROM METRIC_RECORD WHERE " +
+      "SELECT UUID FROM METRIC_RECORD WHERE " +
           "(UUID IN (?, ?)) AND " +
           "SERVER_TIME >= ? AND SERVER_TIME < ? " +
           "GROUP BY UUID ORDER BY MAX(METRIC_MAX) DESC LIMIT 2) AND SERVER_TIME >= ? AND SERVER_TIME < ?";
@@ -585,8 +584,7 @@ public class TestPhoenixTransactSQL {
 
     String conditionClause = condition.getConditionClause().toString();
     String expectedClause = " UUID IN (" +
-      "SELECT " + PhoenixTransactSQL.getNaiveTimeRangeHint(condition.getStartTime(),120000l) +
-      " UUID FROM METRIC_RECORD WHERE " +
+      "SELECT UUID FROM METRIC_RECORD WHERE " +
       "(UUID IN (?, ?, ?)) AND " +
       "SERVER_TIME >= ? AND SERVER_TIME < ? " +
       "GROUP BY UUID ORDER BY MAX(METRIC_MAX) DESC LIMIT 2) AND SERVER_TIME >= ? AND SERVER_TIME < ?";
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/ITClusterAggregator.java b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/ITClusterAggregator.java
index e66e65d..a9f2b4d 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/ITClusterAggregator.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/ITClusterAggregator.java
@@ -28,7 +28,6 @@ import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.ti
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.GET_CLUSTER_AGGREGATE_SQL;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.METRICS_CLUSTER_AGGREGATE_HOURLY_TABLE_NAME;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.METRICS_CLUSTER_AGGREGATE_TABLE_NAME;
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.NATIVE_TIME_RANGE_DELTA;
 
 import java.sql.Connection;
 import java.sql.PreparedStatement;
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/ITMetricAggregator.java b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/ITMetricAggregator.java
index 14ac4d7..1890819 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/ITMetricAggregator.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/ITMetricAggregator.java
@@ -45,7 +45,6 @@ import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.ti
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.METRICS_AGGREGATE_DAILY_TABLE_NAME;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.METRICS_AGGREGATE_HOURLY_TABLE_NAME;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.METRICS_AGGREGATE_MINUTE_TABLE_NAME;
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.NATIVE_TIME_RANGE_DELTA;
 import static org.assertj.core.api.Assertions.assertThat;
 
 public class ITMetricAggregator extends AbstractMiniHBaseClusterTest {
@@ -105,7 +104,6 @@ public class ITMetricAggregator extends AbstractMiniHBaseClusterTest {
     Condition condition = new DefaultCondition(null, null, null, null, startTime,
       endTime, null, null, true);
     condition.setStatement(String.format(GET_METRIC_AGGREGATE_ONLY_SQL,
-      PhoenixTransactSQL.getNaiveTimeRangeHint(startTime, NATIVE_TIME_RANGE_DELTA),
       METRICS_AGGREGATE_MINUTE_TABLE_NAME));
 
     PreparedStatement pstmt = PhoenixTransactSQL.prepareGetMetricsSqlStmt(conn, condition);
@@ -181,9 +179,7 @@ public class ITMetricAggregator extends AbstractMiniHBaseClusterTest {
     //THEN
     Condition condition = new DefaultCondition(null, null, null, null, startTime,
       endTime, null, null, true);
-    condition.setStatement(String.format(GET_METRIC_AGGREGATE_ONLY_SQL,
-      PhoenixTransactSQL.getNaiveTimeRangeHint(startTime, NATIVE_TIME_RANGE_DELTA),
-      METRICS_AGGREGATE_HOURLY_TABLE_NAME));
+    condition.setStatement(String.format(GET_METRIC_AGGREGATE_ONLY_SQL, METRICS_AGGREGATE_HOURLY_TABLE_NAME));
 
     PreparedStatement pstmt = PhoenixTransactSQL.prepareGetMetricsSqlStmt(conn, condition);
     ResultSet rs = pstmt.executeQuery();
@@ -243,9 +239,7 @@ public class ITMetricAggregator extends AbstractMiniHBaseClusterTest {
     //THEN
     Condition condition = new DefaultCondition(null, null, null, null, startTime,
       endTime, null, null, true);
-    condition.setStatement(String.format(GET_METRIC_AGGREGATE_ONLY_SQL,
-      PhoenixTransactSQL.getNaiveTimeRangeHint(startTime, NATIVE_TIME_RANGE_DELTA),
-      METRICS_AGGREGATE_DAILY_TABLE_NAME));
+    condition.setStatement(String.format(GET_METRIC_AGGREGATE_ONLY_SQL, METRICS_AGGREGATE_DAILY_TABLE_NAME));
 
     PreparedStatement pstmt = PhoenixTransactSQL.prepareGetMetricsSqlStmt(conn, condition);
     ResultSet rs = pstmt.executeQuery();
@@ -289,9 +283,7 @@ public class ITMetricAggregator extends AbstractMiniHBaseClusterTest {
 
     Condition condition = new DefaultCondition(null, null, null, null, startTime,
       endTime, null, null, true);
-    condition.setStatement(String.format(GET_METRIC_AGGREGATE_ONLY_SQL,
-      PhoenixTransactSQL.getNaiveTimeRangeHint(startTime, NATIVE_TIME_RANGE_DELTA),
-      METRICS_AGGREGATE_MINUTE_TABLE_NAME));
+    condition.setStatement(String.format(GET_METRIC_AGGREGATE_ONLY_SQL, METRICS_AGGREGATE_MINUTE_TABLE_NAME));
 
     PreparedStatement pstmt = PhoenixTransactSQL.prepareGetMetricsSqlStmt(conn, condition);
     ResultSet rs = pstmt.executeQuery();
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TestMetadataManager.java b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TestMetadataManager.java
index ca1fc20..f9a1036 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TestMetadataManager.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TestMetadataManager.java
@@ -56,7 +56,6 @@ public class TestMetadataManager extends AbstractMiniHBaseClusterTest {
     TimelineMetric metric1 = new TimelineMetric();
     metric1.setMetricName("dummy_metric1");
     metric1.setHostName("dummy_host1");
-    metric1.setTimestamp(now);
     metric1.setStartTime(now - 1000);
     metric1.setAppId("dummy_app1");
     metric1.setType("Integer");
@@ -69,7 +68,6 @@ public class TestMetadataManager extends AbstractMiniHBaseClusterTest {
     TimelineMetric metric2 = new TimelineMetric();
     metric2.setMetricName("dummy_metric2");
     metric2.setHostName("dummy_host2");
-    metric2.setTimestamp(now);
     metric2.setStartTime(now - 1000);
     metric2.setAppId("dummy_app2");
     metric2.setType("Integer");
@@ -175,7 +173,6 @@ public class TestMetadataManager extends AbstractMiniHBaseClusterTest {
     TimelineMetric metric1 = new TimelineMetric();
     metric1.setMetricName("dummy_m1");
     metric1.setHostName("dummy_host1");
-    metric1.setTimestamp(now);
     metric1.setStartTime(now - 1000);
     metric1.setAppId("dummy_app1");
     metric1.setType("Integer");
@@ -189,7 +186,6 @@ public class TestMetadataManager extends AbstractMiniHBaseClusterTest {
     TimelineMetric metric2 = new TimelineMetric();
     metric2.setMetricName("dummy_m2");
     metric2.setHostName("dummy_host2");
-    metric2.setTimestamp(now);
     metric2.setStartTime(now - 1000);
     metric2.setAppId("dummy_app2");
     metric2.setType("Integer");
@@ -203,7 +199,6 @@ public class TestMetadataManager extends AbstractMiniHBaseClusterTest {
     TimelineMetric metric3 = new TimelineMetric();
     metric3.setMetricName("gummy_3");
     metric3.setHostName("dummy_3h");
-    metric3.setTimestamp(now);
     metric3.setStartTime(now - 1000);
     metric3.setAppId("dummy_app3");
     metric3.setType("Integer");
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/source/RawMetricsSourceTest.java b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/source/RawMetricsSourceTest.java
index 5d3aacb..254ee6c 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/source/RawMetricsSourceTest.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/source/RawMetricsSourceTest.java
@@ -121,7 +121,6 @@ public class RawMetricsSourceTest {
     metric1.setInstanceId("i1");
     metric1.setHostName("h1");
     metric1.setStartTime(now - 200);
-    metric1.setTimestamp(now - 200);
     metric1.setMetricValues(new TreeMap<Long, Double>() {{
       put(now - 100, 1.0);
       put(now - 200, 2.0);
diff --git a/ambari-server/src/main/java/org/apache/ambari/server/controller/metrics/timeline/MetricsRequestHelper.java b/ambari-server/src/main/java/org/apache/ambari/server/controller/metrics/timeline/MetricsRequestHelper.java
index 71f40e8..d7fbe31 100644
--- a/ambari-server/src/main/java/org/apache/ambari/server/controller/metrics/timeline/MetricsRequestHelper.java
+++ b/ambari-server/src/main/java/org/apache/ambari/server/controller/metrics/timeline/MetricsRequestHelper.java
@@ -102,8 +102,8 @@ public class MetricsRequestHelper {
 
       if (LOG.isTraceEnabled()) {
         for (TimelineMetric metric : timelineMetrics.getMetrics()) {
-          LOG.trace("metric: {}, size = {}, host = {}, app = {}, instance = {}, time = {}, startTime = {}",
-            metric.getMetricName(), metric.getMetricValues().size(), metric.getHostName(), metric.getAppId(), metric.getInstanceId(), metric.getTimestamp(),
+          LOG.trace("metric: {}, size = {}, host = {}, app = {}, instance = {}, startTime = {}",
+            metric.getMetricName(), metric.getMetricValues().size(), metric.getHostName(), metric.getAppId(), metric.getInstanceId(),
             new Date(metric.getStartTime()));
         }
       }
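Taken together with the setter changes above, TimelineMetric now carries a single time-of-record field: setStartTime replaces the old setTimestamp/setStartTime pair. A short sketch of building a metric under the new API; the metric, host, and app names are made up:

    import java.util.TreeMap
    import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric

    object StartTimeOnlySketch extends App {
      val now = System.currentTimeMillis()
      val metric = new TimelineMetric()
      metric.setMetricName("regionserver.Server.totalRequestCount") // made-up name
      metric.setAppId("hbase")
      metric.setHostName("host1.example.com")
      metric.setStartTime(now) // the only time field; setTimestamp() is removed
      val values = new TreeMap[java.lang.Long, java.lang.Double]()
      values.put(now, 1.0)
      metric.setMetricValues(values)
    }
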
diff --git a/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/templates/smoketest_metrics.json.j2 b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/templates/smoketest_metrics.json.j2
index 2ee0efa..38c99eb 100644
--- a/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/templates/smoketest_metrics.json.j2
+++ b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/templates/smoketest_metrics.json.j2
@@ -4,7 +4,6 @@
       "metricname": "AMBARI_METRICS.SmokeTest.FakeMetric",
       "appid": "amssmoketestfake",
       "hostname": "{{hostname}}",
-      "timestamp": {{current_time}},
       "starttime": {{current_time}},
       "metrics": {
         "{{current_time}}": {{random1}},
diff --git a/ambari-server/src/test/java/org/apache/ambari/server/controller/metrics/timeline/cache/TimelineMetricCacheSizingTest.java b/ambari-server/src/test/java/org/apache/ambari/server/controller/metrics/timeline/cache/TimelineMetricCacheSizingTest.java
index 4418706..cd76e2c 100644
--- a/ambari-server/src/test/java/org/apache/ambari/server/controller/metrics/timeline/cache/TimelineMetricCacheSizingTest.java
+++ b/ambari-server/src/test/java/org/apache/ambari/server/controller/metrics/timeline/cache/TimelineMetricCacheSizingTest.java
@@ -41,7 +41,6 @@ public class TimelineMetricCacheSizingTest {
     metric.setAppId("KAFKA_BROKER");
     metric.setInstanceId("NULL");
     metric.setHostName("my.privatehostname.of.average.length");
-    metric.setTimestamp(System.currentTimeMillis());
     metric.setStartTime(System.currentTimeMillis());
     metric.setType("LONG");
 


[ambari] 20/39: AMBARI-22163 : Anomaly Storage: Design Metric anomalies schema. (avijayan)

Posted by av...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

avijayan pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/ambari.git

commit 4d629372eb7ca7be9e55777db3be261b2dafa802
Author: Aravindan Vijayan <av...@hortonworks.com>
AuthorDate: Fri Oct 6 10:53:28 2017 -0700

    AMBARI-22163 : Anomaly Storage: Design Metric anomalies schema. (avijayan)
---
 ambari-logsearch/ambari-logsearch-it/pom.xml       |   2 +-
 .../pom.xml                                        |  33 ++++++-
 .../adservice/common/ADServiceConfiguration.scala  |  74 ++++++++++++++
 .../adservice/common/PhoenixQueryConstants.scala   | 109 +++++++++++++++++++++
 .../adservice/db/PhoenixAnomalyStoreAccessor.scala |  67 +++++++++++++
 .../spark/prototype/SparkPhoenixReader.scala       |  92 ++++++++---------
 .../common/ADManagerConfigurationTest.scala        |  23 +++++
 .../db/PhoenixAnomalyStoreAccessorTest.scala       |  26 +++++
 ambari-metrics/ambari-metrics-common/pom.xml       |  46 +++++++++
 .../sink}/timeline/query/ConnectionProvider.java   |   5 +-
 .../timeline/query/DefaultPhoenixDataSource.java   |  20 +++-
 .../timeline/query/PhoenixConnectionProvider.java  |   2 +-
 .../metrics/timeline/PhoenixHBaseAccessor.java     |  23 +----
 .../TestApplicationHistoryServer.java              |   2 +-
 .../timeline/AbstractMiniHBaseClusterTest.java     |   6 +-
 .../metrics/timeline/PhoenixHBaseAccessorTest.java |   4 +-
 16 files changed, 454 insertions(+), 80 deletions(-)

diff --git a/ambari-logsearch/ambari-logsearch-it/pom.xml b/ambari-logsearch/ambari-logsearch-it/pom.xml
index db3e09f..b3a1d45 100644
--- a/ambari-logsearch/ambari-logsearch-it/pom.xml
+++ b/ambari-logsearch/ambari-logsearch-it/pom.xml
@@ -122,7 +122,7 @@
   </dependencies>
 
   <build>
-    <testOutputDirectory>target/classes</testOutputDirectory>
+    <testOutputDirectory>test/target/classes</testOutputDirectory>
     <testResources>
       <testResource>
         <directory>src/test/java/</directory>
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/pom.xml b/ambari-metrics/ambari-metrics-anomaly-detection-service/pom.xml
index 1a10f86..6f8f8c1 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/pom.xml
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/pom.xml
@@ -29,8 +29,9 @@
     <artifactId>ambari-metrics-anomaly-detection-service</artifactId>
     <version>2.0.0.0-SNAPSHOT</version>
     <properties>
-        <scala.version>2.10.4</scala.version>
+        <scala.version>2.11.1</scala.version>
         <scala.binary.version>2.11</scala.binary.version>
+        <hadoop.version>2.7.3.2.6.0.3-8</hadoop.version>
     </properties>
 
     <repositories>
@@ -201,5 +202,35 @@
             <version>2.1.1</version>
             <scope>provided</scope>
         </dependency>
+        <dependency>
+            <groupId>org.apache.hadoop</groupId>
+            <artifactId>hadoop-common</artifactId>
+            <version>${hadoop.version}</version>
+            <scope>provided</scope>
+            <exclusions>
+                <exclusion>
+                    <groupId>commons-el</groupId>
+                    <artifactId>commons-el</artifactId>
+                </exclusion>
+                <exclusion>
+                    <groupId>tomcat</groupId>
+                    <artifactId>jasper-runtime</artifactId>
+                </exclusion>
+                <exclusion>
+                    <groupId>tomcat</groupId>
+                    <artifactId>jasper-compiler</artifactId>
+                </exclusion>
+                <exclusion>
+                    <groupId>org.mortbay.jetty</groupId>
+                    <artifactId>jsp-2.1-jetty</artifactId>
+                </exclusion>
+            </exclusions>
+        </dependency>
+        <dependency>
+            <groupId>org.scalatest</groupId>
+            <artifactId>scalatest_2.11</artifactId>
+            <version>3.0.1</version>
+            <scope>test</scope>
+        </dependency>
     </dependencies>
 </project>
\ No newline at end of file
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/common/ADServiceConfiguration.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/common/ADServiceConfiguration.scala
new file mode 100644
index 0000000..248c74e
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/common/ADServiceConfiguration.scala
@@ -0,0 +1,74 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.metrics.adservice.common
+
+import java.net.{MalformedURLException, URISyntaxException}
+
+import org.apache.hadoop.conf.Configuration
+
+object ADServiceConfiguration {
+
+  private val AMS_AD_SITE_CONFIGURATION_FILE = "ams-ad-site.xml"
+  private val HBASE_SITE_CONFIGURATION_FILE = "hbase-site.xml"
+
+  val ANOMALY_METRICS_TTL = "timeline.metrics.anomaly.data.ttl"
+
+  private var hbaseConf: org.apache.hadoop.conf.Configuration = _
+  private var adConf: org.apache.hadoop.conf.Configuration = _
+
+  def initConfigs(): Unit = {
+
+    var classLoader: ClassLoader = Thread.currentThread.getContextClassLoader
+    if (classLoader == null) classLoader = getClass.getClassLoader
+
+    try {
+      val hbaseResUrl = classLoader.getResource(HBASE_SITE_CONFIGURATION_FILE)
+      if (hbaseResUrl == null) throw new IllegalStateException("Unable to initialize the AD subsystem. No hbase-site present in the classpath.")
+
+      hbaseConf = new Configuration(true)
+      hbaseConf.addResource(hbaseResUrl.toURI.toURL)
+
+      val adSystemConfigUrl = classLoader.getResource(AMS_AD_SITE_CONFIGURATION_FILE)
+      if (adSystemConfigUrl == null) throw new IllegalStateException("Unable to initialize the AD subsystem. No ams-ad-site present in the classpath")
+
+      adConf = new Configuration(true)
+      adConf.addResource(adSystemConfigUrl.toURI.toURL)
+
+    } catch {
+      case me : MalformedURLException => println("MalformedURLException while loading AD configuration: " + me.getMessage)
+      case ue : URISyntaxException => println("URISyntaxException while loading AD configuration: " + ue.getMessage)
+    }
+  }
+
+  def getHBaseConf: org.apache.hadoop.conf.Configuration = {
+    hbaseConf
+  }
+
+  def getAdConf: org.apache.hadoop.conf.Configuration = {
+    adConf
+  }
+
+  def getAnomalyDataTtl: Int = {
+    if (adConf != null) return adConf.get(ANOMALY_METRICS_TTL, "604800").toInt
+    604800
+  }
+
+  /**
+    * Note: the TTL above is expressed in seconds;
+    * the default of 604800 corresponds to 7 days.
+    */
+}
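A hedged usage sketch for the object above: initialize the configurations once at service startup, then read the anomaly-data TTL (in seconds) that the schema-creation code further below passes into its DDL.

    import org.apache.ambari.metrics.adservice.common.ADServiceConfiguration

    object AdConfigBootstrapSketch extends App {
      // Requires hbase-site.xml and ams-ad-site.xml on the classpath;
      // otherwise initConfigs() raises IllegalStateException.
      ADServiceConfiguration.initConfigs()
      val ttlSeconds = ADServiceConfiguration.getAnomalyDataTtl // default 604800 (7 days)
      println(s"Anomaly data TTL: $ttlSeconds seconds")
    }
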
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/common/PhoenixQueryConstants.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/common/PhoenixQueryConstants.scala
new file mode 100644
index 0000000..5e90d2b
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/common/PhoenixQueryConstants.scala
@@ -0,0 +1,109 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.ambari.metrics.adservice.common
+
+object PhoenixQueryConstants {
+
+  //////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+  /* Table Name constants */
+  //////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+
+  val METRIC_PROFILE_TABLE_NAME = "METRIC_PROFILE"
+  val METHOD_PARAMETERS_TABLE_NAME = "METHOD_PARAMETERS"
+  val PIT_ANOMALY_METRICS_TABLE_NAME = "PIT_METRIC_ANOMALIES"
+  val TREND_ANOMALY_METRICS_TABLE_NAME = "TREND_METRIC_ANOMALIES"
+  val MODEL_SNAPSHOT = "MODEL_SNAPSHOT"
+
+  //////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+  /* CREATE statement constants */
+  //////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+
+  val CREATE_METRIC_PROFILE_TABLE = ""
+
+  val CREATE_METHOD_PARAMETERS_TABLE: String = "CREATE TABLE IF NOT EXISTS %s (" +
+    "METHOD_NAME VARCHAR, " +
+    "METHOD_TYPE VARCHAR, " +
+    "PARAMETERS VARCHAR " +
+    "CONSTRAINT pk PRIMARY KEY (METHOD_NAME)) " +
+    "DATA_BLOCK_ENCODING='FAST_DIFF', IMMUTABLE_ROWS=true, COMPRESSION='SNAPPY'"
+
+  val CREATE_PIT_ANOMALY_METRICS_TABLE_SQL: String = "CREATE TABLE IF NOT EXISTS %s (" +
+    "METRIC_UUID BINARY(20) NOT NULL, " +
+    "METHOD_NAME VARCHAR, " +
+    "ANOMALY_TIMESTAMP UNSIGNED_LONG NOT NULL, " +
+    "METRIC_VALUE DOUBLE, " +
+    "SEASONAL_INFO VARCHAR, " +
+    "ANOMALY_SCORE DOUBLE, " +
+    "MODEL_SNAPSHOT VARCHAR, " +
+    "DETECTION_TIME UNSIGNED_LONG " +
+    "CONSTRAINT pk PRIMARY KEY (METRIC_UUID, METHOD_NAME, ANOMALY_TIMESTAMP)) " +
+    "DATA_BLOCK_ENCODING='FAST_DIFF', IMMUTABLE_ROWS=true, TTL=%s, COMPRESSION='SNAPPY'"
+
+  val CREATE_TREND_ANOMALY_METRICS_TABLE_SQL: String = "CREATE TABLE IF NOT EXISTS %s (" +
+    "METRIC_UUID BINARY(20) NOT NULL, " +
+    "ANOMALY_PERIOD_START UNSIGNED_LONG NOT NULL, " +
+    "ANOMALY_PERIOD_END UNSIGNED_LONG NOT NULL, " +
+    "TEST_PERIOD_START UNSIGNED_LONG NOT NULL, " +
+    "TEST_PERIOD_END UNSIGNED_LONG NOT NULL, " +
+    "METHOD_NAME VARCHAR, " +
+    "ANOMALY_SCORE DOUBLE, " +
+    "MODEL_SNAPSHOT VARCHAR, " +
+    "DETECTION_TIME UNSIGNED_LONG " +
+    "CONSTRAINT pk PRIMARY KEY (METRIC_UUID, METHOD_NAME, ANOMALY_PERIOD_START, ANOMALY_PERIOD_END, TEST_PERIOD_START, TEST_PERIOD_END)) " +
+    "DATA_BLOCK_ENCODING='FAST_DIFF' IMMUTABLE_ROWS=true, TTL=%s, COMPRESSION='SNAPPY'"
+
+  val CREATE_MODEL_SNAPSHOT_TABLE: String = "CREATE TABLE IF NOT EXISTS %s (" +
+    "METRIC_UUID BINARY(20), " +
+    "METHOD_NAME VARCHAR, " +
+    "METHOD_TYPE VARCHAR, " +
+    "PARAMETERS VARCHAR " +
+    "SNAPSHOT_TIME UNSIGNED LONG NOT NULL "
+    "CONSTRAINT pk PRIMARY KEY (METRIC_UUID, METHOD_NAME)) " +
+    "DATA_BLOCK_ENCODING='FAST_DIFF', IMMUTABLE_ROWS=true, COMPRESSION='SNAPPY'"
+
+  //////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+  /* UPSERT statement constants */
+  //////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+
+  val UPSERT_METHOD_PARAMETERS_SQL: String = "UPSERT INTO %s (METHOD_NAME, METHOD_TYPE, PARAMETERS) VALUES (?,?,?)"
+
+  val UPSERT_PIT_ANOMALY_METRICS_SQL: String = "UPSERT INTO %s (METRIC_UUID, ANOMALY_TIMESTAMP, METRIC_VALUE, METHOD_NAME, " +
+    "SEASONAL_INFO, ANOMALY_SCORE, MODEL_SNAPSHOT, DETECTION_TIME) VALUES (?, ?, ?, ?, ?, ?, ?, ?)"
+
+  val UPSERT_TREND_ANOMALY_METRICS_SQL: String = "UPSERT INTO %s (METRIC_UUID, ANOMALY_PERIOD_START, ANOMALY_PERIOD_END, " +
+    "TEST_PERIOD_START, TEST_PERIOD_END, METHOD_NAME, ANOMALY_SCORE, MODEL_SNAPSHOT, DETECTION_TIME) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)"
+
+  val UPSERT_MODEL_SNAPSHOT_SQL: String = "UPSERT INTO %s (METRIC_UUID, METHOD_NAME, METHOD_TYPE, PARAMETERS) VALUES (?, ?, ?, ?)"
+
+  //////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+  /* GET statement constants */
+  //////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+
+  val GET_METHOD_PARAMETERS_SQL: String = "SELECT METHOD_NAME, METHOD_TYPE, PARAMETERS FROM %s WHERE METHOD_NAME = %s"
+
+  val GET_PIT_ANOMALY_METRIC_SQL: String = "SELECT METRIC_UUID, ANOMALY_TIMESTAMP, METRIC_VALUE, METHOD_NAME, SEASONAL_INFO, " +
+    "ANOMALY_SCORE, MODEL_SNAPSHOT, DETECTION_TIME FROM %s WHERE METRIC_METRIC_UUID = ? AND ANOMALY_TIMESTAMP > ? AND ANOMALY_TIMESTAMP <= ? " +
+    "ORDER BY ANOMALY_SCORE DESC"
+
+  val GET_TREND_ANOMALY_METRIC_SQL: String = "SELECT METRIC_UUID, ANOMALY_PERIOD_START, ANOMALY_PERIOD_END, TEST_PERIOD_START, " +
+    "TEST_PERIOD_END, METHOD_NAME, ANOMALY_SCORE, MODEL_SNAPSHOT, DETECTION_TIME FROM %s WHERE METHOD_NAME = ? AND ANOMALY_PERIOD_END > ? " +
+    "AND TEST_PERIOD_END <= ? ORDER BY ANOMALY_SCORE DESC"
+
+  val GET_MODEL_SNAPSHOT_SQL: String = "SELECT METRIC_UUID, METHOD_NAME, METHOD_TYPE, PARAMETERS FROM %s WHERE METRIC_UUID = %s AND METHOD_NAME = %s"
+
+}
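To make the upsert template concrete, here is a hedged sketch of writing one point-in-time anomaly row with the constants above. The connection is assumed to come from PhoenixAnomalyStoreAccessor (next file), and the method id "ema" is illustrative only:

    import java.sql.Connection
    import org.apache.ambari.metrics.adservice.common.PhoenixQueryConstants

    object PitAnomalyWriterSketch {
      def writePitAnomaly(conn: Connection, uuid: Array[Byte], ts: Long,
                          value: Double, score: Double): Unit = {
        val sql = String.format(PhoenixQueryConstants.UPSERT_PIT_ANOMALY_METRICS_SQL,
          PhoenixQueryConstants.PIT_ANOMALY_METRICS_TABLE_NAME)
        val stmt = conn.prepareStatement(sql)
        stmt.setBytes(1, uuid)                      // METRIC_UUID
        stmt.setLong(2, ts)                         // ANOMALY_TIMESTAMP
        stmt.setDouble(3, value)                    // METRIC_VALUE
        stmt.setString(4, "ema")                    // METHOD_NAME (illustrative)
        stmt.setString(5, null)                     // SEASONAL_INFO
        stmt.setDouble(6, score)                    // ANOMALY_SCORE
        stmt.setString(7, null)                     // MODEL_SNAPSHOT
        stmt.setLong(8, System.currentTimeMillis()) // DETECTION_TIME
        stmt.executeUpdate()
        conn.commit()
      }
    }
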
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/PhoenixAnomalyStoreAccessor.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/PhoenixAnomalyStoreAccessor.scala
new file mode 100644
index 0000000..6f33e56
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/PhoenixAnomalyStoreAccessor.scala
@@ -0,0 +1,67 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.ambari.metrics.adservice.db
+
+import java.sql.{Connection, SQLException}
+
+import org.apache.ambari.metrics.adservice.common.{ADServiceConfiguration, PhoenixQueryConstants}
+import org.apache.hadoop.hbase.util.RetryCounterFactory
+import org.apache.hadoop.metrics2.sink.timeline.query.{DefaultPhoenixDataSource, PhoenixConnectionProvider}
+import java.util.concurrent.TimeUnit.SECONDS
+
+object PhoenixAnomalyStoreAccessor  {
+
+  private var datasource: PhoenixConnectionProvider = _
+
+  def initAnomalyMetricSchema(): Unit = {
+
+    datasource = new DefaultPhoenixDataSource(ADServiceConfiguration.getHBaseConf) // assign the object field; a local val here would leave getConnection with a null datasource
+    val retryCounterFactory = new RetryCounterFactory(10, SECONDS.toMillis(3).toInt)
+
+    val ttl = ADServiceConfiguration.getAnomalyDataTtl
+    try {
+      val conn = datasource.getConnectionRetryingOnException(retryCounterFactory)
+      val stmt = conn.createStatement
+
+      val methodParametersSql = String.format(PhoenixQueryConstants.CREATE_METHOD_PARAMETERS_TABLE,
+        PhoenixQueryConstants.METHOD_PARAMETERS_TABLE_NAME)
+      stmt.executeUpdate(methodParametersSql)
+
+      val pointInTimeAnomalySql = String.format(PhoenixQueryConstants.CREATE_PIT_ANOMALY_METRICS_TABLE_SQL,
+        PhoenixQueryConstants.PIT_ANOMALY_METRICS_TABLE_NAME,
+        ttl.asInstanceOf[Object])
+      stmt.executeUpdate(pointInTimeAnomalySql)
+
+      val trendAnomalySql = String.format(PhoenixQueryConstants.CREATE_TREND_ANOMALY_METRICS_TABLE_SQL,
+        PhoenixQueryConstants.TREND_ANOMALY_METRICS_TABLE_NAME,
+        ttl.asInstanceOf[Object])
+      stmt.executeUpdate(trendAnomalySql)
+
+      val snapshotSql = String.format(PhoenixQueryConstants.CREATE_MODEL_SNAPSHOT_TABLE,
+        PhoenixQueryConstants.MODEL_SNAPSHOT)
+      stmt.executeUpdate(snapshotSql)
+
+      conn.commit()
+    } catch {
+      case e: SQLException => throw e
+    }
+  }
+
+  @throws[SQLException]
+  def getConnection: Connection = datasource.getConnection
+}
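And a minimal bootstrap sketch tying the two objects together, assuming the configuration files are on the classpath:

    import org.apache.ambari.metrics.adservice.common.ADServiceConfiguration
    import org.apache.ambari.metrics.adservice.db.PhoenixAnomalyStoreAccessor

    object AnomalyStoreBootstrapSketch extends App {
      ADServiceConfiguration.initConfigs()
      // Creates the METHOD_PARAMETERS, PIT/TREND anomaly and MODEL_SNAPSHOT tables.
      PhoenixAnomalyStoreAccessor.initAnomalyMetricSchema()
      val conn = PhoenixAnomalyStoreAccessor.getConnection
      // ... run the UPSERT/SELECT statements from PhoenixQueryConstants ...
      conn.close()
    }
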
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/spark/prototype/SparkPhoenixReader.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/spark/prototype/SparkPhoenixReader.scala
index 6e1ae07..ac00764 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/spark/prototype/SparkPhoenixReader.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/spark/prototype/SparkPhoenixReader.scala
@@ -26,52 +26,52 @@ object SparkPhoenixReader {
 
   def main(args: Array[String]) {
 
-    if (args.length < 6) {
-      System.err.println("Usage: SparkPhoenixReader <metric_name> <appId> <hostname> <weight> <timessdev> <phoenixConnectionString> <model_dir>")
-      System.exit(1)
-    }
-
-    var metricName = args(0)
-    var appId = args(1)
-    var hostname = args(2)
-    var weight = args(3).toDouble
-    var timessdev = args(4).toInt
-    var phoenixConnectionString = args(5) //avijayan-ams-3.openstacklocal:61181:/ams-hbase-unsecure
-    var modelDir = args(6)
-
-    val conf = new SparkConf()
-    conf.set("spark.app.name", "AMSAnomalyModelBuilder")
-    //conf.set("spark.master", "spark://avijayan-ams-2.openstacklocal:7077")
-
-    var sc = new SparkContext(conf)
-    val sqlContext = new SQLContext(sc)
-
-    val currentTime = System.currentTimeMillis()
-    val oneDayBack = currentTime - 24*60*60*1000
-
-    val df = sqlContext.load("org.apache.phoenix.spark", Map("table" -> "METRIC_RECORD", "zkUrl" -> phoenixConnectionString))
-    df.registerTempTable("METRIC_RECORD")
-    val result = sqlContext.sql("SELECT METRIC_NAME, HOSTNAME, APP_ID, SERVER_TIME, METRIC_SUM, METRIC_COUNT FROM METRIC_RECORD " +
-      "WHERE METRIC_NAME = '" + metricName + "' AND HOSTNAME = '" + hostname + "' AND APP_ID = '" + appId + "' AND SERVER_TIME < " + currentTime + " AND SERVER_TIME > " + oneDayBack)
-
-    var metricValues = new java.util.TreeMap[java.lang.Long, java.lang.Double]
-    result.collect().foreach(
-      t => metricValues.put(t.getLong(3), t.getDouble(4) / t.getInt(5))
-    )
-
-    //val seriesName = result.head().getString(0)
-    //val hostname = result.head().getString(1)
-    //val appId = result.head().getString(2)
-
-    val timelineMetric = new TimelineMetric()
-    timelineMetric.setMetricName(metricName)
-    timelineMetric.setAppId(appId)
-    timelineMetric.setHostName(hostname)
-    timelineMetric.setMetricValues(metricValues)
-
-    var emaModel = new EmaTechnique(weight, timessdev)
-    emaModel.test(timelineMetric)
-    emaModel.save(sc, modelDir)
+//    if (args.length < 6) {
+//      System.err.println("Usage: SparkPhoenixReader <metric_name> <appId> <hostname> <weight> <timessdev> <phoenixConnectionString> <model_dir>")
+//      System.exit(1)
+//    }
+//
+//    var metricName = args(0)
+//    var appId = args(1)
+//    var hostname = args(2)
+//    var weight = args(3).toDouble
+//    var timessdev = args(4).toInt
+//    var phoenixConnectionString = args(5) //avijayan-ams-3.openstacklocal:61181:/ams-hbase-unsecure
+//    var modelDir = args(6)
+//
+//    val conf = new SparkConf()
+//    conf.set("spark.app.name", "AMSAnomalyModelBuilder")
+//    //conf.set("spark.master", "spark://avijayan-ams-2.openstacklocal:7077")
+//
+//    var sc = new SparkContext(conf)
+//    val sqlContext = new SQLContext(sc)
+//
+//    val currentTime = System.currentTimeMillis()
+//    val oneDayBack = currentTime - 24*60*60*1000
+//
+//    val df = sqlContext.load("org.apache.phoenix.spark", Map("table" -> "METRIC_RECORD", "zkUrl" -> phoenixConnectionString))
+//    df.registerTempTable("METRIC_RECORD")
+//    val result = sqlContext.sql("SELECT METRIC_NAME, HOSTNAME, APP_ID, SERVER_TIME, METRIC_SUM, METRIC_COUNT FROM METRIC_RECORD " +
+//      "WHERE METRIC_NAME = '" + metricName + "' AND HOSTNAME = '" + hostname + "' AND APP_ID = '" + appId + "' AND SERVER_TIME < " + currentTime + " AND SERVER_TIME > " + oneDayBack)
+//
+//    var metricValues = new java.util.TreeMap[java.lang.Long, java.lang.Double]
+//    result.collect().foreach(
+//      t => metricValues.put(t.getLong(3), t.getDouble(4) / t.getInt(5))
+//    )
+//
+//    //val seriesName = result.head().getString(0)
+//    //val hostname = result.head().getString(1)
+//    //val appId = result.head().getString(2)
+//
+//    val timelineMetric = new TimelineMetric()
+//    timelineMetric.setMetricName(metricName)
+//    timelineMetric.setAppId(appId)
+//    timelineMetric.setHostName(hostname)
+//    timelineMetric.setMetricValues(metricValues)
+//
+//    var emaModel = new EmaTechnique(weight, timessdev)
+//    emaModel.test(timelineMetric)
+//    emaModel.save(sc, modelDir)
 
   }
 
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/common/ADManagerConfigurationTest.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/common/ADManagerConfigurationTest.scala
new file mode 100644
index 0000000..535dc9e
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/common/ADManagerConfigurationTest.scala
@@ -0,0 +1,23 @@
+package org.apache.ambari.metrics.adservice.common
+
+import org.scalatest.FlatSpec
+
+import scala.collection.mutable
+
+class ADManagerConfigurationTest extends FlatSpec {
+
+  "A Stack" should "pop values in last-in-first-out order" in {
+    val stack = new mutable.Stack[Int]
+    stack.push(1)
+    stack.push(2)
+    assert(stack.pop() === 2)
+    assert(stack.pop() === 1)
+  }
+
+  it should "throw NoSuchElementException if an empty stack is popped" in {
+    val emptyStack = new mutable.Stack[String]
+    assertThrows[NoSuchElementException] {
+      emptyStack.pop()
+    }
+  }
+}
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/db/PhoenixAnomalyStoreAccessorTest.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/db/PhoenixAnomalyStoreAccessorTest.scala
new file mode 100644
index 0000000..142e98a
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/db/PhoenixAnomalyStoreAccessorTest.scala
@@ -0,0 +1,26 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.metrics.adservice.db
+
+import org.scalatest.FunSuite
+
+class PhoenixAnomalyStoreAccessorTest extends FunSuite {
+
+  test("testInitAnomalyMetricSchema") {
+  }
+
+}
diff --git a/ambari-metrics/ambari-metrics-common/pom.xml b/ambari-metrics/ambari-metrics-common/pom.xml
index 4f08820..5477270 100644
--- a/ambari-metrics/ambari-metrics-common/pom.xml
+++ b/ambari-metrics/ambari-metrics-common/pom.xml
@@ -26,6 +26,13 @@
   <modelVersion>4.0.0</modelVersion>
   <artifactId>ambari-metrics-common</artifactId>
   <name>Ambari Metrics Common</name>
+
+  <properties>
+    <hadoop.version>2.7.3.2.6.0.3-8</hadoop.version>
+    <hbase.version>1.1.2.2.6.0.3-8</hbase.version>
+    <phoenix.version>4.7.0.2.6.0.3-8</phoenix.version>
+  </properties>
+
   <build>
     <plugins>
       <plugin>
@@ -126,6 +133,45 @@
 
   <dependencies>
     <dependency>
+      <groupId>org.apache.phoenix</groupId>
+      <artifactId>phoenix-core</artifactId>
+      <version>${phoenix.version}</version>
+      <exclusions>
+        <exclusion>
+          <groupId>org.apache.hadoop</groupId>
+          <artifactId>hadoop-common</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>org.apache.hadoop</groupId>
+          <artifactId>hadoop-annotations</artifactId>
+        </exclusion>
+      </exclusions>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.hadoop</groupId>
+      <artifactId>hadoop-common</artifactId>
+      <version>${hadoop.version}</version>
+      <scope>provided</scope>
+      <exclusions>
+        <exclusion>
+          <groupId>commons-el</groupId>
+          <artifactId>commons-el</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>tomcat</groupId>
+          <artifactId>jasper-runtime</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>tomcat</groupId>
+          <artifactId>jasper-compiler</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>org.mortbay.jetty</groupId>
+          <artifactId>jsp-2.1-jetty</artifactId>
+        </exclusion>
+      </exclusions>
+    </dependency>
+    <dependency>
       <groupId>net.sf.ehcache</groupId>
       <artifactId>ehcache</artifactId>
       <version>2.10.0</version>
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/ConnectionProvider.java b/ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/query/ConnectionProvider.java
similarity index 79%
rename from ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/ConnectionProvider.java
rename to ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/query/ConnectionProvider.java
index 24239a0..72e5fb5 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/ConnectionProvider.java
+++ b/ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/query/ConnectionProvider.java
@@ -15,9 +15,11 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query;
+package org.apache.hadoop.metrics2.sink.timeline.query;
 
 
+import org.apache.hadoop.hbase.util.RetryCounterFactory;
+
 import java.sql.Connection;
 import java.sql.SQLException;
 
@@ -26,4 +28,5 @@ import java.sql.SQLException;
  */
 public interface ConnectionProvider {
   public Connection getConnection() throws SQLException;
+  public Connection getConnectionRetryingOnException(RetryCounterFactory retryCounterFactory) throws SQLException, InterruptedException;
 }
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/DefaultPhoenixDataSource.java b/ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/query/DefaultPhoenixDataSource.java
similarity index 81%
rename from ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/DefaultPhoenixDataSource.java
rename to ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/query/DefaultPhoenixDataSource.java
index c5761f7..a28a433 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/DefaultPhoenixDataSource.java
+++ b/ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/query/DefaultPhoenixDataSource.java
@@ -15,7 +15,7 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query;
+package org.apache.hadoop.metrics2.sink.timeline.query;
 
 
 import org.apache.commons.logging.Log;
@@ -23,6 +23,8 @@ import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.client.ConnectionFactory;
 import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.util.RetryCounter;
+import org.apache.hadoop.hbase.util.RetryCounterFactory;
 
 import java.io.IOException;
 import java.sql.Connection;
@@ -87,4 +89,20 @@ public class DefaultPhoenixDataSource implements PhoenixConnectionProvider {
     }
   }
 
+  public Connection getConnectionRetryingOnException(RetryCounterFactory retryCounterFactory)
+    throws SQLException, InterruptedException {
+    RetryCounter retryCounter = retryCounterFactory.create();
+    while (true) {
+      try{
+        return getConnection();
+      } catch (SQLException e) {
+        if(!retryCounter.shouldRetry()){
+          LOG.error("HBaseAccessor getConnection failed after "
+            + retryCounter.getMaxAttempts() + " attempts");
+          throw e;
+        }
+      }
+      retryCounter.sleepUntilNextRetry();
+    }
+  }
 }
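
A hypothetical caller of the relocated retry API, matching how PhoenixHBaseAccessor invokes it later in this patch. The HBase RetryCounterFactory constructor used here takes (maxAttempts, sleepIntervalMillis); the 10-attempt/2-second values are illustrative.

  import java.sql.Connection;
  import java.sql.SQLException;
  import org.apache.hadoop.hbase.util.RetryCounterFactory;
  import org.apache.hadoop.metrics2.sink.timeline.query.ConnectionProvider;

  public class RetryingConnectSketch {
    // Tries getConnection() up to 10 times, sleeping 2s between failed attempts.
    static Connection connect(ConnectionProvider provider)
        throws SQLException, InterruptedException {
      RetryCounterFactory retryFactory = new RetryCounterFactory(10, 2000);
      return provider.getConnectionRetryingOnException(retryFactory);
    }
  }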
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/PhoenixConnectionProvider.java b/ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/query/PhoenixConnectionProvider.java
similarity index 92%
rename from ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/PhoenixConnectionProvider.java
rename to ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/query/PhoenixConnectionProvider.java
index cacbcfb..194c769 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/PhoenixConnectionProvider.java
+++ b/ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/query/PhoenixConnectionProvider.java
@@ -1,4 +1,4 @@
-package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query;
+package org.apache.hadoop.metrics2.sink.timeline.query;
 
 import org.apache.hadoop.hbase.client.HBaseAdmin;
 
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/PhoenixHBaseAccessor.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/PhoenixHBaseAccessor.java
index f470c58..f8d31f7 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/PhoenixHBaseAccessor.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/PhoenixHBaseAccessor.java
@@ -140,8 +140,8 @@ import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.discovery.TimelineMetricMetadataKey;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.discovery.TimelineMetricMetadataManager;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.Condition;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.DefaultPhoenixDataSource;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixConnectionProvider;
+import org.apache.hadoop.metrics2.sink.timeline.query.DefaultPhoenixDataSource;
+import org.apache.hadoop.metrics2.sink.timeline.query.PhoenixConnectionProvider;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.SplitByMetricNamesCondition;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.sink.ExternalMetricsSink;
@@ -458,23 +458,6 @@ public class PhoenixHBaseAccessor {
     return mapper.readValue(json, metricValuesTypeRef);
   }
 
-  private Connection getConnectionRetryingOnException()
-    throws SQLException, InterruptedException {
-    RetryCounter retryCounter = retryCounterFactory.create();
-    while (true) {
-      try{
-        return getConnection();
-      } catch (SQLException e) {
-        if(!retryCounter.shouldRetry()){
-          LOG.error("HBaseAccessor getConnection failed after "
-            + retryCounter.getMaxAttempts() + " attempts");
-          throw e;
-        }
-      }
-      retryCounter.sleepUntilNextRetry();
-    }
-  }
-
   /**
    * Get JDBC connection to HBase store. Assumption is that the hbase
    * configuration is present on the classpath and loaded by the caller into
@@ -507,7 +490,7 @@ public class PhoenixHBaseAccessor {
 
     try {
       LOG.info("Initializing metrics schema...");
-      conn = getConnectionRetryingOnException();
+      conn = dataSource.getConnectionRetryingOnException(retryCounterFactory);
       stmt = conn.createStatement();
 
       // Metadata
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestApplicationHistoryServer.java b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestApplicationHistoryServer.java
index 03205e7..7b70a80 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestApplicationHistoryServer.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestApplicationHistoryServer.java
@@ -29,7 +29,7 @@ import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.PhoenixHBaseAccessor;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.availability.MetricCollectorHAController;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.DefaultPhoenixDataSource;
+import org.apache.hadoop.metrics2.sink.timeline.query.DefaultPhoenixDataSource;
 import org.apache.zookeeper.ClientCnxn;
 import org.easymock.EasyMock;
 import org.junit.After;
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/AbstractMiniHBaseClusterTest.java b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/AbstractMiniHBaseClusterTest.java
index 3a42db9..40691d6 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/AbstractMiniHBaseClusterTest.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/AbstractMiniHBaseClusterTest.java
@@ -22,13 +22,9 @@ import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.ti
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.UPSERT_METRICS_SQL;
 import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
 import static org.assertj.core.api.Assertions.assertThat;
-import static org.easymock.EasyMock.expect;
-import static org.easymock.EasyMock.replay;
 import static org.powermock.api.easymock.PowerMock.mockStatic;
-import static org.powermock.api.easymock.PowerMock.replayAll;
 
 import java.io.IOException;
-import java.lang.reflect.Field;
 import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.PreparedStatement;
@@ -48,7 +44,7 @@ import org.apache.hadoop.hbase.client.HBaseAdmin;
 import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
 import org.apache.hadoop.metrics2.sink.timeline.TimelineMetrics;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.AggregatorUtils;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixConnectionProvider;
+import org.apache.hadoop.metrics2.sink.timeline.query.PhoenixConnectionProvider;
 import org.apache.hadoop.yarn.util.timeline.TimelineUtils;
 import org.apache.log4j.Level;
 import org.apache.log4j.Logger;
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/PhoenixHBaseAccessorTest.java b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/PhoenixHBaseAccessorTest.java
index 7be3c0d..97d2512 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/PhoenixHBaseAccessorTest.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/PhoenixHBaseAccessorTest.java
@@ -32,19 +32,17 @@ import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.discovery.TimelineMetricMetadataManager;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.Condition;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.DefaultCondition;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixConnectionProvider;
+import org.apache.hadoop.metrics2.sink.timeline.query.PhoenixConnectionProvider;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL;
 import org.apache.phoenix.exception.PhoenixIOException;
 import org.easymock.EasyMock;
 import org.junit.Before;
 import org.junit.Test;
 import org.junit.runner.RunWith;
-import org.powermock.api.easymock.PowerMock;
 import org.powermock.core.classloader.annotations.PrepareForTest;
 import org.powermock.modules.junit4.PowerMockRunner;
 
 import java.io.IOException;
-import java.lang.reflect.Field;
 import java.sql.Connection;
 import java.sql.PreparedStatement;
 import java.sql.ResultSet;

-- 
To stop receiving notification emails like this one, please contact
avijayan@apache.org.

[ambari] 07/39: AMBARI-21079. Add ability to sink Raw metrics to external system via Http. Compilation error fix. (swagle)

Posted by av...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

avijayan pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/ambari.git

commit 13dee7c1a3cc1ec0abdc64813b5dc1f6a67813fd
Author: Siddharth Wagle <sw...@hortonworks.com>
AuthorDate: Thu Jun 8 16:14:24 2017 -0700

    AMBARI-21079. Add ability to sink Raw metrics to external system via Http. Compilation error fix. (swagle)
---
 ambari-metrics/ambari-metrics-common/pom.xml       |  2 +-
 .../cache/TimelineMetricsEhCacheSizeOfEngine.java  | 22 +++++++
 .../cache/InternalMetricsCacheSizeOfEngine.java    | 71 +++++++++-------------
 .../cache/TimelineMetricsCacheSizeOfEngine.java    | 17 +-----
 4 files changed, 53 insertions(+), 59 deletions(-)

diff --git a/ambari-metrics/ambari-metrics-common/pom.xml b/ambari-metrics/ambari-metrics-common/pom.xml
index bd94ad1..4f08820 100644
--- a/ambari-metrics/ambari-metrics-common/pom.xml
+++ b/ambari-metrics/ambari-metrics-common/pom.xml
@@ -74,7 +74,7 @@
                 </relocation>
                 <relocation>
                   <pattern>org.apache.commons.io</pattern>
-                  <shadedPattern>org.apache.ambari.metrics.sink.relocated.commons.io</shadedPattern>StormTimelineMetricsReporter
+                  <shadedPattern>org.apache.ambari.metrics.sink.relocated.commons.io</shadedPattern>
                 </relocation>
                 <relocation>
                   <pattern>org.apache.commons.lang</pattern>
diff --git a/ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/cache/TimelineMetricsEhCacheSizeOfEngine.java b/ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/cache/TimelineMetricsEhCacheSizeOfEngine.java
index ea694b7..0e23e17 100644
--- a/ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/cache/TimelineMetricsEhCacheSizeOfEngine.java
+++ b/ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/cache/TimelineMetricsEhCacheSizeOfEngine.java
@@ -24,6 +24,9 @@ import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
 import org.apache.hadoop.metrics2.sink.timeline.TimelineMetrics;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
+
+import net.sf.ehcache.Element;
+import net.sf.ehcache.pool.Size;
 import net.sf.ehcache.pool.SizeOfEngine;
 import net.sf.ehcache.pool.impl.DefaultSizeOfEngine;
 import net.sf.ehcache.pool.sizeof.ReflectionSizeOf;
@@ -51,6 +54,7 @@ public abstract class TimelineMetricsEhCacheSizeOfEngine implements SizeOfEngine
   // Map entry sizing
   private long sizeOfMapEntry;
   private long sizeOfMapEntryOverhead;
+  private long sizeOfElement;
 
   protected TimelineMetricsEhCacheSizeOfEngine(SizeOfEngine underlying) {
     this.underlying = underlying;
@@ -62,6 +66,8 @@ public abstract class TimelineMetricsEhCacheSizeOfEngine implements SizeOfEngine
     this.sizeOfMapEntry = reflectionSizeOf.sizeOf(new Long(1)) +
       reflectionSizeOf.sizeOf(new Double(2.0));
 
+    this.sizeOfElement = reflectionSizeOf.sizeOf(new Element(new Object(), new Object()));
+
     //SizeOfMapEntryOverhead = SizeOfMapWithOneEntry - (SizeOfEmptyMap + SizeOfOneEntry)
     TreeMap<Long, Double> map = new TreeMap<>();
     long emptyMapSize = reflectionSizeOf.sizeOf(map);
@@ -112,4 +118,20 @@ public abstract class TimelineMetricsEhCacheSizeOfEngine implements SizeOfEngine
     }
     return size;
   }
+
+  // Get size of the Cache entry for final size calculation
+  protected abstract long getSizeOfEntry(Object key, Object value);
+
+  @Override
+  public Size sizeOf(Object key, Object value, Object container) {
+    return new Size(sizeOfElement + getSizeOfEntry(key, value), false);
+  }
+
+  @Override
+  public SizeOfEngine copyWith(int maxDepth, boolean abortWhenMaxDepthExceeded) {
+    LOG.debug("Copying tracing sizeof engine, maxdepth: {}, abort: {}", maxDepth, abortWhenMaxDepthExceeded);
+
+    return underlying.copyWith(maxDepth, abortWhenMaxDepthExceeded);
+  }
+
 }
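
The change above turns the engine into a template method: sizeOf() adds the shared ehcache Element overhead and wraps the total in a non-exact Size, so subclasses only size their own key/value types in getSizeOfEntry(). A minimal hypothetical subclass, assuming the protected reflectionSizeOf and getValueMapSize helpers that the real subclasses below rely on:

  import java.util.TreeMap;
  import org.apache.hadoop.metrics2.sink.timeline.cache.TimelineMetricsEhCacheSizeOfEngine;

  public class StringKeyedSizeOfEngine extends TimelineMetricsEhCacheSizeOfEngine {
    @Override
    @SuppressWarnings("unchecked")
    protected long getSizeOfEntry(Object key, Object value) {
      long size = 0;
      if (key instanceof String) {
        size += reflectionSizeOf.sizeOf(key);                   // base-class helper
      }
      if (value instanceof TreeMap) {
        size += getValueMapSize((TreeMap<Long, Double>) value); // base-class helper
      }
      return size;
    }
  }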
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/source/cache/InternalMetricsCacheSizeOfEngine.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/source/cache/InternalMetricsCacheSizeOfEngine.java
index 071dcd4..e36c981 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/source/cache/InternalMetricsCacheSizeOfEngine.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/source/cache/InternalMetricsCacheSizeOfEngine.java
@@ -20,48 +20,33 @@ package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline
 import org.apache.hadoop.metrics2.sink.timeline.cache.TimelineMetricsEhCacheSizeOfEngine;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
-import net.sf.ehcache.pool.Size;
-import net.sf.ehcache.pool.SizeOfEngine;
 
-public class InternalMetricsCacheSizeOfEngine {
-// extends TimelineMetricsEhCacheSizeOfEngine {
-//  private final static Logger LOG = LoggerFactory.getLogger(InternalMetricsCacheSizeOfEngine.class);
-//
-//  private InternalMetricsCacheSizeOfEngine(SizeOfEngine underlying) {
-//    super(underlying);
-//  }
-//
-//  public InternalMetricsCacheSizeOfEngine() {
-//    // Invoke default constructor in base class
-//  }
-//
-//  @Override
-//  public Size sizeOf(Object key, Object value, Object container) {
-//    try {
-//      LOG.debug("BEGIN - Sizeof, key: {}, value: {}", key, value);
-//      long size = 0;
-//      if (key instanceof InternalMetricCacheKey) {
-//        InternalMetricCacheKey metricCacheKey = (InternalMetricCacheKey) key;
-//        size += reflectionSizeOf.sizeOf(metricCacheKey.getMetricName());
-//        size += reflectionSizeOf.sizeOf(metricCacheKey.getAppId());
-//        size += reflectionSizeOf.sizeOf(metricCacheKey.getInstanceId()); // null safe
-//        size += reflectionSizeOf.sizeOf(metricCacheKey.getHostname());
-//      }
-//      if (value instanceof InternalMetricCacheValue) {
-//        size += getValueMapSize(((InternalMetricCacheValue) value).getMetricValues());
-//      }
-//      // Mark size as not being exact
-//      return new Size(size, false);
-//    } finally {
-//      LOG.debug("END - Sizeof, key: {}", key);
-//    }
-//  }
-//
-//  @Override
-//  public SizeOfEngine copyWith(int maxDepth, boolean abortWhenMaxDepthExceeded) {
-//    LOG.debug("Copying tracing sizeof engine, maxdepth: {}, abort: {}",
-//      maxDepth, abortWhenMaxDepthExceeded);
-//
-//    return new InternalMetricsCacheSizeOfEngine(underlying.copyWith(maxDepth, abortWhenMaxDepthExceeded));
-//  }
+public class InternalMetricsCacheSizeOfEngine extends TimelineMetricsEhCacheSizeOfEngine {
+  private final static Logger LOG = LoggerFactory.getLogger(InternalMetricsCacheSizeOfEngine.class);
+
+  public InternalMetricsCacheSizeOfEngine() {
+    // Invoke default constructor in base class
+  }
+
+  @Override
+  protected long getSizeOfEntry(Object key, Object value) {
+    try {
+      LOG.debug("BEGIN - Sizeof, key: {}, value: {}", key, value);
+      long size = 0;
+      if (key instanceof InternalMetricCacheKey) {
+        InternalMetricCacheKey metricCacheKey = (InternalMetricCacheKey) key;
+        size += reflectionSizeOf.sizeOf(metricCacheKey.getMetricName());
+        size += reflectionSizeOf.sizeOf(metricCacheKey.getAppId());
+        size += reflectionSizeOf.sizeOf(metricCacheKey.getInstanceId()); // null safe
+        size += reflectionSizeOf.sizeOf(metricCacheKey.getHostname());
+      }
+      if (value instanceof InternalMetricCacheValue) {
+        size += getValueMapSize(((InternalMetricCacheValue) value).getMetricValues());
+      }
+      // The base class marks this size as not exact when wrapping it in a Size
+      return size;
+    } finally {
+      LOG.debug("END - Sizeof, key: {}", key);
+    }
+  }
 }
diff --git a/ambari-server/src/main/java/org/apache/ambari/server/controller/metrics/timeline/cache/TimelineMetricsCacheSizeOfEngine.java b/ambari-server/src/main/java/org/apache/ambari/server/controller/metrics/timeline/cache/TimelineMetricsCacheSizeOfEngine.java
index a3fd5f3..8b54017 100644
--- a/ambari-server/src/main/java/org/apache/ambari/server/controller/metrics/timeline/cache/TimelineMetricsCacheSizeOfEngine.java
+++ b/ambari-server/src/main/java/org/apache/ambari/server/controller/metrics/timeline/cache/TimelineMetricsCacheSizeOfEngine.java
@@ -21,9 +21,6 @@ import org.apache.hadoop.metrics2.sink.timeline.cache.TimelineMetricsEhCacheSize
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-import net.sf.ehcache.pool.Size;
-import net.sf.ehcache.pool.SizeOfEngine;
-
 /**
  * Cache sizing engine that reduces reflective calls over the Object graph to
  * find total Heap usage.
@@ -32,16 +29,12 @@ public class TimelineMetricsCacheSizeOfEngine extends TimelineMetricsEhCacheSize
 
   private final static Logger LOG = LoggerFactory.getLogger(TimelineMetricsCacheSizeOfEngine.class);
 
-  private TimelineMetricsCacheSizeOfEngine(SizeOfEngine underlying) {
-    super(underlying);
-  }
-
   public TimelineMetricsCacheSizeOfEngine() {
     // Invoke default constructor in base class
   }
 
   @Override
-  public Size sizeOf(Object key, Object value, Object container) {
+  public long getSizeOfEntry(Object key, Object value) {
     try {
       LOG.debug("BEGIN - Sizeof, key: {}, value: {}", key, value);
 
@@ -55,7 +48,7 @@ public class TimelineMetricsCacheSizeOfEngine extends TimelineMetricsEhCacheSize
         size += getTimelineMetricCacheValueSize((TimelineMetricsCacheValue) value);
       }
-      // Mark size as not being exact
-      return new Size(size, false);
+      // The base class marks this size as not exact when wrapping it in a Size
+      return size;
     } finally {
       LOG.debug("END - Sizeof, key: {}", key);
     }
@@ -86,11 +79,5 @@ public class TimelineMetricsCacheSizeOfEngine extends TimelineMetricsEhCacheSize
     return size;
   }
 
-  @Override
-  public SizeOfEngine copyWith(int maxDepth, boolean abortWhenMaxDepthExceeded) {
-    LOG.debug("Copying tracing sizeof engine, maxdepth: {}, abort: {}",
-      maxDepth, abortWhenMaxDepthExceeded);
 
-    return new TimelineMetricsCacheSizeOfEngine(underlying.copyWith(maxDepth, abortWhenMaxDepthExceeded));
-  }
 }

-- 
To stop receiving notification emails like this one, please contact
avijayan@apache.org.

[ambari] 08/39: AMBARI-21214 : Use a uuid vs long row key for metrics in AMS schema. (avijayan)

Posted by av...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

avijayan pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/ambari.git

commit 46c0af89a17021b45b14bc4b6c184d58f05405cf
Author: Aravindan Vijayan <av...@hortonworks.com>
AuthorDate: Mon Jun 19 10:55:44 2017 -0700

    AMBARI-21214 : Use a uuid vs long row key for metrics in AMS schema. (avijayan)
---
 .../sink/timeline/SingleValuedTimelineMetric.java  |    9 +-
 .../metrics2/sink/timeline/TimelineMetric.java     |    8 +
 .../sink/timeline/TimelineMetricMetadata.java      |   37 +-
 .../timeline/HBaseTimelineMetricsService.java      |   61 +-
 .../metrics/timeline/PhoenixHBaseAccessor.java     |  178 ++-
 .../timeline/TimelineMetricConfiguration.java      |    5 +
 .../metrics/timeline/TimelineMetricStore.java      |   12 +-
 .../metrics/timeline/TimelineMetricsFilter.java    |    7 -
 .../aggregators/AbstractTimelineAggregator.java    |   45 +-
 .../aggregators/TimelineClusterMetric.java         |    6 +-
 .../TimelineMetricAggregatorFactory.java           |   12 +
 .../aggregators/TimelineMetricAppAggregator.java   |   28 +-
 .../TimelineMetricClusterAggregator.java           |    9 +-
 .../TimelineMetricClusterAggregatorSecond.java     |   30 +-
 .../aggregators/TimelineMetricHostAggregator.java  |   10 +-
 .../aggregators/TimelineMetricReadHelper.java      |   61 +-
 .../TimelineMetricHostMetadata.java}               |   63 +-
 .../discovery/TimelineMetricMetadataKey.java       |   26 +-
 .../discovery/TimelineMetricMetadataManager.java   |  290 +++-
 .../discovery/TimelineMetricMetadataSync.java      |   18 +-
 .../metrics/timeline/query/Condition.java          |    1 +
 .../metrics/timeline/query/ConditionBuilder.java   |   10 +-
 .../metrics/timeline/query/DefaultCondition.java   |   60 +-
 .../metrics/timeline/query/EmptyCondition.java     |    5 +
 .../metrics/timeline/query/PhoenixTransactSQL.java |  279 +---
 .../query/SplitByMetricNamesCondition.java         |   40 +-
 .../metrics/timeline/query/TopNCondition.java      |   63 +-
 .../timeline/uuid/HashBasedUuidGenStrategy.java    |  202 +++
 .../timeline/uuid/MetricUuidGenStrategy.java       |   49 +
 .../timeline/uuid/RandomUuidGenStrategy.java       |   53 +
 .../webapp/TimelineWebServices.java                |   17 +
 .../main/resources/metrics_def/AMBARI_SERVER.dat   |   40 +
 .../resources/metrics_def/JOBHISTORYSERVER.dat     |   58 +
 .../main/resources/metrics_def/MASTER_HBASE.dat    |  230 ++-
 .../src/main/resources/metrics_def/SLAVE_HBASE.dat |  700 +++++++--
 .../metrics/timeline/ITPhoenixHBaseAccessor.java   |    6 +-
 .../metrics/timeline/MetricTestHelper.java         |    2 +-
 .../metrics/timeline/PhoenixHBaseAccessorTest.java |   10 +-
 .../metrics/timeline/TestPhoenixTransactSQL.java   |  105 +-
 .../metrics/timeline/TestTimelineMetricStore.java  |    9 +-
 .../TimelineMetricsAggregatorMemorySink.java       |    4 +-
 .../timeline/aggregators/DownSamplerTest.java      |    2 +
 .../timeline/aggregators/ITClusterAggregator.java  |   15 +-
 .../timeline/aggregators/ITMetricAggregator.java   |    8 +-
 .../TimelineMetricClusterAggregatorSecondTest.java |   65 +-
 .../timeline/discovery/TestMetadataManager.java    |  173 ++-
 .../timeline/discovery/TestMetadataSync.java       |   32 +-
 .../uuid/TimelineMetricUuidManagerTest.java        |  184 +++
 .../test/resources/test_data/full_whitelist.dat    | 1615 ++++++++++++++++++++
 49 files changed, 4050 insertions(+), 902 deletions(-)

diff --git a/ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/SingleValuedTimelineMetric.java b/ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/SingleValuedTimelineMetric.java
index 8ecca54..4bb9355 100644
--- a/ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/SingleValuedTimelineMetric.java
+++ b/ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/SingleValuedTimelineMetric.java
@@ -30,7 +30,6 @@ public class SingleValuedTimelineMetric {
   private String instanceId;
   private String hostName;
   private Long startTime;
-  private String type;
 
   public void setSingleTimeseriesValue(Long timestamp, Double value) {
     this.timestamp = timestamp;
@@ -39,14 +38,13 @@ public class SingleValuedTimelineMetric {
 
   public SingleValuedTimelineMetric(String metricName, String appId,
                                     String instanceId, String hostName,
-                                    long timestamp, long startTime, String type) {
+                                    long timestamp, long startTime) {
     this.metricName = metricName;
     this.appId = appId;
     this.instanceId = instanceId;
     this.hostName = hostName;
     this.timestamp = timestamp;
     this.startTime = startTime;
-    this.type = type;
   }
 
   public Long getTimestamp() {
@@ -57,10 +55,6 @@ public class SingleValuedTimelineMetric {
     return startTime;
   }
 
-  public String getType() {
-    return type;
-  }
-
   public Double getValue() {
     return value;
   }
@@ -97,7 +91,6 @@ public class SingleValuedTimelineMetric {
     metric.setMetricName(this.metricName);
     metric.setAppId(this.appId);
     metric.setHostName(this.hostName);
-    metric.setType(this.type);
     metric.setInstanceId(this.instanceId);
     metric.setStartTime(this.startTime);
     metric.setTimestamp(this.timestamp);
diff --git a/ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/TimelineMetric.java b/ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/TimelineMetric.java
index edace52..3d3b19c 100644
--- a/ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/TimelineMetric.java
+++ b/ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/TimelineMetric.java
@@ -52,6 +52,14 @@ public class TimelineMetric implements Comparable<TimelineMetric> {
 
   }
 
+  // To reconstruct TimelineMetric from UUID.
+  public TimelineMetric(String metricName, String hostname, String appId, String instanceId) {
+    this.metricName = metricName;
+    this.hostName = hostname;
+    this.appId = appId;
+    this.instanceId = instanceId;
+  }
+
   // copy constructor
   public TimelineMetric(TimelineMetric metric) {
     setMetricName(metric.getMetricName());
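
A short sketch of the new reconstruction constructor in use. The identity fields would come back from a UUID lookup (see getMetricFromUuid later in this patch); the literal values are illustrative.

  import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;

  public class UuidReconstructSketch {
    static TimelineMetric rebuild() {
      // Argument order is (metricName, hostname, appId, instanceId); instanceId may be null.
      return new TimelineMetric("cpu_user", "host1.example.com", "HOST", null);
    }
  }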
diff --git a/ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/TimelineMetricMetadata.java b/ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/TimelineMetricMetadata.java
index 727becc..6c9712f 100644
--- a/ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/TimelineMetricMetadata.java
+++ b/ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/TimelineMetricMetadata.java
@@ -17,6 +17,8 @@
  */
 package org.apache.hadoop.metrics2.sink.timeline;
 
+import org.apache.commons.lang.ArrayUtils;
+import org.apache.commons.lang.StringUtils;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.codehaus.jackson.annotate.JsonIgnore;
@@ -32,6 +34,8 @@ import javax.xml.bind.annotation.XmlRootElement;
 public class TimelineMetricMetadata {
   private String metricName;
   private String appId;
+  private String instanceId;
+  private byte[] uuid;
   private String units;
   private String type = "UNDEFINED";
   private Long seriesStartTime;
@@ -51,11 +55,12 @@ public class TimelineMetricMetadata {
   public TimelineMetricMetadata() {
   }
 
-  public TimelineMetricMetadata(String metricName, String appId, String units,
+  public TimelineMetricMetadata(String metricName, String appId, String instanceId, String units,
                                 String type, Long seriesStartTime,
                                 boolean supportsAggregates, boolean isWhitelisted) {
     this.metricName = metricName;
     this.appId = appId;
+    this.instanceId = instanceId;
     this.units = units;
     this.type = type;
     this.seriesStartTime = seriesStartTime;
@@ -82,6 +87,24 @@ public class TimelineMetricMetadata {
     this.appId = appId;
   }
 
+  @XmlElement(name = "instanceId")
+  public String getInstanceId() {
+    return instanceId;
+  }
+
+  public void setInstanceId(String instanceId) {
+    this.instanceId = instanceId;
+  }
+
+  @XmlElement(name = "uuid")
+  public byte[] getUuid() {
+    return uuid;
+  }
+
+  public void setUuid(byte[] uuid) {
+    this.uuid = uuid;
+  }
+
   @XmlElement(name = "units")
   public String getUnits() {
     return units;
@@ -102,7 +125,7 @@ public class TimelineMetricMetadata {
 
   @XmlElement(name = "seriesStartTime")
   public Long getSeriesStartTime() {
-    return seriesStartTime;
+    return (seriesStartTime != null) ? seriesStartTime : 0L;
   }
 
   public void setSeriesStartTime(Long seriesStartTime) {
@@ -138,9 +161,10 @@ public class TimelineMetricMetadata {
    */
   public boolean needsToBeSynced(TimelineMetricMetadata metadata) throws MetadataException {
     if (!this.metricName.equals(metadata.getMetricName()) ||
-        !this.appId.equals(metadata.getAppId())) {
+        !this.appId.equals(metadata.getAppId()) ||
+      !(StringUtils.isNotEmpty(instanceId) ? instanceId.equals(metadata.instanceId) : StringUtils.isEmpty(metadata.instanceId))) {
       throw new MetadataException("Unexpected argument: metricName = " +
-        metadata.getMetricName() + ", appId = " + metadata.getAppId());
+        metadata.getMetricName() + ", appId = " + metadata.getAppId() + ", instanceId = " + metadata.getInstanceId());
     }
 
     // Series start time should never change
@@ -159,14 +183,15 @@ public class TimelineMetricMetadata {
     TimelineMetricMetadata that = (TimelineMetricMetadata) o;
 
     if (!metricName.equals(that.metricName)) return false;
-    return !(appId != null ? !appId.equals(that.appId) : that.appId != null);
-
+    if (!appId.equals(that.appId)) return false;
+    return (StringUtils.isNotEmpty(instanceId) ? instanceId.equals(that.instanceId) : StringUtils.isEmpty(that.instanceId));
   }
 
   @Override
   public int hashCode() {
     int result = metricName.hashCode();
     result = 31 * result + (appId != null ? appId.hashCode() : 0);
+    result = 31 * result + (instanceId != null ? instanceId.hashCode() : 0);
     return result;
   }
 }
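
A sketch of the widened constructor: instanceId is now part of the metadata identity (null or empty on single-cluster deployments), and the equals() above treats null and empty instanceId as equivalent. The metric, units, and type strings are illustrative assumptions.

  import org.apache.hadoop.metrics2.sink.timeline.TimelineMetricMetadata;

  public class MetadataKeySketch {
    static TimelineMetricMetadata sample() {
      return new TimelineMetricMetadata(
        "regionserver.Server.readRequestCount", // metricName
        "hbase",                                // appId
        null,                                   // instanceId
        "Number",                               // units
        "COUNTER",                              // type
        System.currentTimeMillis(),             // seriesStartTime
        true,                                   // supportsAggregates
        true);                                  // isWhitelisted
    }
  }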
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/HBaseTimelineMetricsService.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/HBaseTimelineMetricsService.java
index 1ba01bc..2d890c0 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/HBaseTimelineMetricsService.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/HBaseTimelineMetricsService.java
@@ -19,7 +19,6 @@ package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline
 
 import com.google.common.collect.ArrayListMultimap;
 import com.google.common.collect.Multimap;
-import org.apache.ambari.metrics.alertservice.spark.AmsKafkaProducer;
 import org.apache.commons.collections.MapUtils;
 import org.apache.commons.lang.StringUtils;
 import org.apache.commons.logging.Log;
@@ -41,15 +40,16 @@ import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.TimelineMetricAggregatorFactory;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.availability.AggregationTaskRunner.AGGREGATOR_NAME;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.availability.MetricCollectorHAController;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.discovery.TimelineMetricHostMetadata;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.discovery.TimelineMetricMetadataKey;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.discovery.TimelineMetricMetadataManager;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.function.SeriesAggregateFunction;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.function.TimelineMetricsSeriesAggregateFunction;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.function.TimelineMetricsSeriesAggregateFunctionFactory;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.Condition;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.ConditionBuilder;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.TopNCondition;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.function.SeriesAggregateFunction;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.function.TimelineMetricsSeriesAggregateFunction;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.function.TimelineMetricsSeriesAggregateFunctionFactory;
 
 import java.io.IOException;
 import java.net.MalformedURLException;
@@ -64,11 +64,14 @@ import java.util.Iterator;
 import java.util.List;
 import java.util.Map;
 import java.util.Set;
-import java.util.concurrent.*;
+import java.util.concurrent.Executors;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.ThreadFactory;
+import java.util.concurrent.TimeUnit;
 
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.DEFAULT_TOPN_HOSTS_LIMIT;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.TIMELINE_METRICS_HOST_INMEMORY_AGGREGATION;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.USE_GROUPBY_AGGREGATOR_QUERIES;
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.DEFAULT_TOPN_HOSTS_LIMIT;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.availability.AggregationTaskRunner.ACTUAL_AGGREGATOR_NAMES;
 
 public class HBaseTimelineMetricsService extends AbstractService implements TimelineMetricStore {
@@ -83,7 +86,7 @@ public class HBaseTimelineMetricsService extends AbstractService implements Time
   private Integer defaultTopNHostsLimit;
   private MetricCollectorHAController haController;
   private boolean containerMetricsDisabled = false;
-  private AmsKafkaProducer kafkaProducer;
+
   /**
    * Construct the service.
    *
@@ -143,8 +146,6 @@ public class HBaseTimelineMetricsService extends AbstractService implements Time
         LOG.info("Using group by aggregators for aggregating host and cluster metrics.");
       }
 
-      kafkaProducer = new AmsKafkaProducer(metricsConf.get("kafka.bootstrap.servers")); //104.196.85.21:6667
-
       // Start the cluster aggregator second
       TimelineMetricAggregator secondClusterAggregator =
         TimelineMetricAggregatorFactory.createTimelineClusterAggregatorSecond(
@@ -154,19 +155,19 @@ public class HBaseTimelineMetricsService extends AbstractService implements Time
       // Start the minute cluster aggregator
       TimelineMetricAggregator minuteClusterAggregator =
         TimelineMetricAggregatorFactory.createTimelineClusterAggregatorMinute(
-          hBaseAccessor, metricsConf, haController);
+          hBaseAccessor, metricsConf, metricMetadataManager, haController);
       scheduleAggregatorThread(minuteClusterAggregator);
 
       // Start the hourly cluster aggregator
       TimelineMetricAggregator hourlyClusterAggregator =
         TimelineMetricAggregatorFactory.createTimelineClusterAggregatorHourly(
-          hBaseAccessor, metricsConf, haController);
+          hBaseAccessor, metricsConf, metricMetadataManager, haController);
       scheduleAggregatorThread(hourlyClusterAggregator);
 
       // Start the daily cluster aggregator
       TimelineMetricAggregator dailyClusterAggregator =
         TimelineMetricAggregatorFactory.createTimelineClusterAggregatorDaily(
-          hBaseAccessor, metricsConf, haController);
+          hBaseAccessor, metricsConf, metricMetadataManager, haController);
       scheduleAggregatorThread(dailyClusterAggregator);
 
       // Start the minute host aggregator
@@ -175,20 +176,20 @@ public class HBaseTimelineMetricsService extends AbstractService implements Time
       } else {
         TimelineMetricAggregator minuteHostAggregator =
           TimelineMetricAggregatorFactory.createTimelineMetricAggregatorMinute(
-            hBaseAccessor, metricsConf, haController);
+            hBaseAccessor, metricsConf, metricMetadataManager, haController);
         scheduleAggregatorThread(minuteHostAggregator);
       }
 
       // Start the hourly host aggregator
       TimelineMetricAggregator hourlyHostAggregator =
         TimelineMetricAggregatorFactory.createTimelineMetricAggregatorHourly(
-          hBaseAccessor, metricsConf, haController);
+          hBaseAccessor, metricsConf, metricMetadataManager, haController);
       scheduleAggregatorThread(hourlyHostAggregator);
 
       // Start the daily host aggregator
       TimelineMetricAggregator dailyHostAggregator =
         TimelineMetricAggregatorFactory.createTimelineMetricAggregatorDaily(
-          hBaseAccessor, metricsConf, haController);
+          hBaseAccessor, metricsConf, metricMetadataManager, haController);
       scheduleAggregatorThread(dailyHostAggregator);
 
       if (!configuration.isTimelineMetricsServiceWatcherDisabled()) {
@@ -238,6 +239,8 @@ public class HBaseTimelineMetricsService extends AbstractService implements Time
     Multimap<String, List<Function>> metricFunctions =
       parseMetricNamesToAggregationFunctions(metricNames);
 
+    List<byte[]> uuids = metricMetadataManager.getUuids(metricFunctions.keySet(), hostnames, applicationId, instanceId);
+
     ConditionBuilder conditionBuilder = new ConditionBuilder(new ArrayList<String>(metricFunctions.keySet()))
       .hostnames(hostnames)
       .appId(applicationId)
@@ -246,7 +249,8 @@ public class HBaseTimelineMetricsService extends AbstractService implements Time
       .endTime(endTime)
       .precision(precision)
       .limit(limit)
-      .grouped(groupedByHosts);
+      .grouped(groupedByHosts)
+      .uuid(uuids);
 
     if (topNConfig != null) {
       if (TopNCondition.isTopNHostCondition(metricNames, hostnames) ^ //Only 1 condition should be true.
@@ -372,13 +376,6 @@ public class HBaseTimelineMetricsService extends AbstractService implements Time
     // Error indicated by the Sql exception
     TimelinePutResponse response = new TimelinePutResponse();
 
-    try {
-      if (!metrics.getMetrics().isEmpty() && metrics.getMetrics().get(0).getAppId().equals("HOST")) {
-        kafkaProducer.sendMetrics(fromTimelineMetrics(metrics));
-      }
-    } catch (InterruptedException | ExecutionException e) {
-      LOG.error(e);
-    }
     hBaseAccessor.insertMetricRecordsWithMetadata(metricMetadataManager, metrics, false);
 
     return response;
@@ -449,8 +446,18 @@ public class HBaseTimelineMetricsService extends AbstractService implements Time
   }
 
   @Override
+  public Map<String, TimelineMetricMetadataKey> getUuids() throws SQLException, IOException {
+    return metricMetadataManager.getUuidKeyMap();
+  }
+
+  @Override
   public Map<String, Set<String>> getHostAppsMetadata() throws SQLException, IOException {
-    return metricMetadataManager.getHostedAppsCache();
+    Map<String, TimelineMetricHostMetadata> hostsMetadata = metricMetadataManager.getHostedAppsCache();
+    Map<String, Set<String>> hostAppMap = new HashMap<>();
+    for (String hostname : hostsMetadata.keySet()) {
+      hostAppMap.put(hostname, hostsMetadata.get(hostname).getHostedApps());
+    }
+    return hostAppMap;
   }
 
   @Override
@@ -469,7 +476,7 @@ public class HBaseTimelineMetricsService extends AbstractService implements Time
   public Map<String, Map<String,Set<String>>> getInstanceHostsMetadata(String instanceId, String appId)
           throws SQLException, IOException {
 
-    Map<String, Set<String>> hostedApps = metricMetadataManager.getHostedAppsCache();
+    Map<String, TimelineMetricHostMetadata> hostedApps = metricMetadataManager.getHostedAppsCache();
     Map<String, Set<String>> instanceHosts = new HashMap<>();
     if (configuration.getTimelineMetricsMultipleClusterSupport()) {
       instanceHosts = metricMetadataManager.getHostedInstanceCache();
@@ -480,7 +487,7 @@ public class HBaseTimelineMetricsService extends AbstractService implements Time
     if (MapUtils.isEmpty(instanceHosts)) {
       Map<String, Set<String>> appHostMap = new HashMap<String, Set<String>>();
       for (String host : hostedApps.keySet()) {
-        for (String app : hostedApps.get(host)) {
+        for (String app : hostedApps.get(host).getHostedApps()) {
           if (!appHostMap.containsKey(app)) {
             appHostMap.put(app, new HashSet<String>());
           }
@@ -499,7 +506,7 @@ public class HBaseTimelineMetricsService extends AbstractService implements Time
 
         Set<String> hostsWithInstance = instanceHosts.get(instance);
         for (String host : hostsWithInstance) {
-          for (String app : hostedApps.get(host)) {
+          for (String app : hostedApps.get(host).getHostedApps()) {
             if (StringUtils.isNotEmpty(appId) && !app.equals(appId)) {
               continue;
             }
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/PhoenixHBaseAccessor.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/PhoenixHBaseAccessor.java
index bab6bb2..0c1e979 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/PhoenixHBaseAccessor.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/PhoenixHBaseAccessor.java
@@ -132,6 +132,7 @@ import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.Function;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.TimelineClusterMetric;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.TimelineMetricReadHelper;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.discovery.TimelineMetricHostMetadata;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.discovery.TimelineMetricMetadataKey;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.discovery.TimelineMetricMetadataManager;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.Condition;
@@ -171,7 +172,7 @@ public class PhoenixHBaseAccessor {
   private static final int POINTS_PER_MINUTE = 6;
   public static int RESULTSET_LIMIT = (int)TimeUnit.HOURS.toMinutes(2) * METRICS_PER_MINUTE * POINTS_PER_MINUTE ;
 
-  static final TimelineMetricReadHelper TIMELINE_METRIC_READ_HELPER = new TimelineMetricReadHelper();
+  static TimelineMetricReadHelper TIMELINE_METRIC_READ_HELPER = new TimelineMetricReadHelper();
   static ObjectMapper mapper = new ObjectMapper();
   static TypeReference<TreeMap<Long, Double>> metricValuesTypeRef = new TypeReference<TreeMap<Long, Double>>() {};
 
@@ -190,6 +191,7 @@ public class PhoenixHBaseAccessor {
   private final boolean skipBlockCacheForAggregatorsEnabled;
   private final String timelineMetricsTablesDurability;
   private final String timelineMetricsPrecisionTableDurability;
+  private TimelineMetricMetadataManager metadataManagerInstance;
 
   static final String HSTORE_COMPACTION_CLASS_KEY =
     "hbase.hstore.defaultengine.compactionpolicy.class";
@@ -282,6 +284,7 @@ public class PhoenixHBaseAccessor {
       }
       rawMetricsSource = internalSourceProvider.getInternalMetricsSource(RAW_METRICS, interval, rawMetricsSink);
     }
+    TIMELINE_METRIC_READ_HELPER = new TimelineMetricReadHelper(this.metadataManagerInstance);
   }
 
   public boolean isInsertCacheEmpty() {
@@ -336,19 +339,20 @@ public class PhoenixHBaseAccessor {
           double[] aggregates = AggregatorUtils.calculateAggregates(
                   metric.getMetricValues());
 
-          metricRecordStmt.setString(1, metric.getMetricName());
-          metricRecordStmt.setString(2, metric.getHostName());
-          metricRecordStmt.setString(3, metric.getAppId());
-          metricRecordStmt.setString(4, metric.getInstanceId());
-          metricRecordStmt.setLong(5, currentTime);
-          metricRecordStmt.setLong(6, metric.getStartTime());
-          metricRecordStmt.setString(7, metric.getUnits());
-          metricRecordStmt.setDouble(8, aggregates[0]);
-          metricRecordStmt.setDouble(9, aggregates[1]);
-          metricRecordStmt.setDouble(10, aggregates[2]);
-          metricRecordStmt.setLong(11, (long) aggregates[3]);
+          byte[] uuid = metadataManagerInstance.getUuid(metric);
+          if (uuid == null) {
+            LOG.error("Error computing UUID for metric. Cannot write metrics : " + metric.toString());
+            continue;
+          }
+          metricRecordStmt.setBytes(1, uuid);
+          metricRecordStmt.setLong(2, currentTime);
+          metricRecordStmt.setLong(3, metric.getStartTime());
+          metricRecordStmt.setDouble(4, aggregates[0]);
+          metricRecordStmt.setDouble(5, aggregates[1]);
+          metricRecordStmt.setDouble(6, aggregates[2]);
+          metricRecordStmt.setLong(7, (long) aggregates[3]);
           String json = TimelineUtils.dumpTimelineRecordtoJSON(metric.getMetricValues());
-          metricRecordStmt.setString(12, json);
+          metricRecordStmt.setString(8, json);
 
           try {
             metricRecordStmt.executeUpdate();
@@ -477,20 +481,12 @@ public class PhoenixHBaseAccessor {
       // Host level
       String precisionSql = String.format(CREATE_METRICS_TABLE_SQL,
         encoding, tableTTL.get(METRICS_RECORD_TABLE_NAME), compression);
-      String splitPoints = metricsConf.get(PRECISION_TABLE_SPLIT_POINTS);
-      if (!StringUtils.isEmpty(splitPoints)) {
-        precisionSql += getSplitPointsStr(splitPoints);
-      }
       stmt.executeUpdate(precisionSql);
 
       String hostMinuteAggregrateSql = String.format(CREATE_METRICS_AGGREGATE_TABLE_SQL,
         METRICS_AGGREGATE_MINUTE_TABLE_NAME, encoding,
         tableTTL.get(METRICS_AGGREGATE_MINUTE_TABLE_NAME),
         compression);
-      splitPoints = metricsConf.get(AGGREGATE_TABLE_SPLIT_POINTS);
-      if (!StringUtils.isEmpty(splitPoints)) {
-        hostMinuteAggregrateSql += getSplitPointsStr(splitPoints);
-      }
       stmt.executeUpdate(hostMinuteAggregrateSql);
 
       stmt.executeUpdate(String.format(CREATE_METRICS_AGGREGATE_TABLE_SQL,
@@ -507,10 +503,7 @@ public class PhoenixHBaseAccessor {
         METRICS_CLUSTER_AGGREGATE_TABLE_NAME, encoding,
         tableTTL.get(METRICS_CLUSTER_AGGREGATE_TABLE_NAME),
         compression);
-      splitPoints = metricsConf.get(AGGREGATE_TABLE_SPLIT_POINTS);
-      if (!StringUtils.isEmpty(splitPoints)) {
-        aggregateSql += getSplitPointsStr(splitPoints);
-      }
+
       stmt.executeUpdate(aggregateSql);
       stmt.executeUpdate(String.format(CREATE_METRICS_CLUSTER_AGGREGATE_GROUPED_TABLE_SQL,
         METRICS_CLUSTER_AGGREGATE_MINUTE_TABLE_NAME, encoding,
@@ -961,7 +954,8 @@ public class PhoenixHBaseAccessor {
   private void appendMetricFromResultSet(TimelineMetrics metrics, Condition condition,
                                          Multimap<String, List<Function>> metricFunctions,
                                          ResultSet rs) throws SQLException, IOException {
-    String metricName = rs.getString("METRIC_NAME");
+    byte[] uuid = rs.getBytes("UUID");
+    String metricName = metadataManagerInstance.getMetricNameFromUuid(uuid);
     Collection<List<Function>> functionList = findMetricFunctions(metricFunctions, metricName);
 
     for (List<Function> functions : functionList) {
@@ -1103,7 +1097,8 @@ public class PhoenixHBaseAccessor {
       Condition condition, Multimap<String, List<Function>> metricFunctions,
       ResultSet rs) throws SQLException {
 
-    String metricName = rs.getString("METRIC_NAME");
+    byte[] uuid = rs.getBytes("UUID");
+    String metricName = metadataManagerInstance.getMetricNameFromUuid(uuid);
     Collection<List<Function>> functionList = findMetricFunctions(metricFunctions, metricName);
 
     for (List<Function> functions : functionList) {
@@ -1136,14 +1131,15 @@ public class PhoenixHBaseAccessor {
     SplitByMetricNamesCondition splitCondition =
       new SplitByMetricNamesCondition(condition);
 
-    for (String metricName: splitCondition.getOriginalMetricNames()) {
+    for (byte[] uuid: condition.getUuids()) {
 
-      splitCondition.setCurrentMetric(metricName);
+      splitCondition.setCurrentUuid(uuid);
       stmt = PhoenixTransactSQL.prepareGetLatestAggregateMetricSqlStmt(conn, splitCondition);
       ResultSet rs = null;
       try {
         rs = stmt.executeQuery();
         while (rs.next()) {
+          String metricName = metadataManagerInstance.getMetricNameFromUuid(uuid);
           Collection<List<Function>> functionList = findMetricFunctions(metricFunctions, metricName);
           for (List<Function> functions : functionList) {
             if (functions != null) {
@@ -1187,14 +1183,16 @@ public class PhoenixHBaseAccessor {
       countColumnName = "HOSTS_COUNT";
     }
 
+    byte[] uuid = rs.getBytes("UUID");
+    TimelineMetric timelineMetric = metadataManagerInstance.getMetricFromUuid(uuid);
+
     SingleValuedTimelineMetric metric = new SingleValuedTimelineMetric(
-      rs.getString("METRIC_NAME") + f.getSuffix(),
-      rs.getString("APP_ID"),
-      rs.getString("INSTANCE_ID"),
+      timelineMetric.getMetricName() + f.getSuffix(),
+      timelineMetric.getAppId(),
+      timelineMetric.getInstanceId(),
       null,
       rs.getLong("SERVER_TIME"),
-      rs.getLong("SERVER_TIME"),
-      rs.getString("UNITS")
+      rs.getLong("SERVER_TIME")
     );
 
     double value;
@@ -1281,18 +1279,19 @@ public class PhoenixHBaseAccessor {
         TimelineMetric metric = metricAggregate.getKey();
         MetricHostAggregate hostAggregate = metricAggregate.getValue();
 
+        byte[] uuid = metadataManagerInstance.getUuid(metric);
+        if (uuid == null) {
+          LOG.error("Error computing UUID for metric. Cannot write metric : " + metric.toString());
+          continue;
+        }
         rowCount++;
         stmt.clearParameters();
-        stmt.setString(1, metric.getMetricName());
-        stmt.setString(2, metric.getHostName());
-        stmt.setString(3, metric.getAppId());
-        stmt.setString(4, metric.getInstanceId());
-        stmt.setLong(5, metric.getTimestamp());
-        stmt.setString(6, metric.getType());
-        stmt.setDouble(7, hostAggregate.getSum());
-        stmt.setDouble(8, hostAggregate.getMax());
-        stmt.setDouble(9, hostAggregate.getMin());
-        stmt.setDouble(10, hostAggregate.getNumberOfSamples());
+        stmt.setBytes(1, uuid);
+        stmt.setLong(2, metric.getTimestamp());
+        stmt.setDouble(3, hostAggregate.getSum());
+        stmt.setDouble(4, hostAggregate.getMax());
+        stmt.setDouble(5, hostAggregate.getMin());
+        stmt.setDouble(6, hostAggregate.getNumberOfSamples());
 
         try {
           stmt.executeUpdate();
@@ -1376,16 +1375,18 @@ public class PhoenixHBaseAccessor {
         }
 
         rowCount++;
+        byte[] uuid =  metadataManagerInstance.getUuid(clusterMetric);
+        if (uuid == null) {
+          LOG.error("Error computing UUID for metric. Cannot write metrics : " + clusterMetric.toString());
+          continue;
+        }
         stmt.clearParameters();
-        stmt.setString(1, clusterMetric.getMetricName());
-        stmt.setString(2, clusterMetric.getAppId());
-        stmt.setString(3, clusterMetric.getInstanceId());
-        stmt.setLong(4, clusterMetric.getTimestamp());
-        stmt.setString(5, clusterMetric.getType());
-        stmt.setDouble(6, aggregate.getSum());
-        stmt.setInt(7, aggregate.getNumberOfHosts());
-        stmt.setDouble(8, aggregate.getMax());
-        stmt.setDouble(9, aggregate.getMin());
+        stmt.setBytes(1, uuid);
+        stmt.setLong(2, clusterMetric.getTimestamp());
+        stmt.setDouble(3, aggregate.getSum());
+        stmt.setInt(4, aggregate.getNumberOfHosts());
+        stmt.setDouble(5, aggregate.getMax());
+        stmt.setDouble(6, aggregate.getMin());
 
         try {
           stmt.executeUpdate();
@@ -1462,17 +1463,20 @@ public class PhoenixHBaseAccessor {
             "aggregate = " + aggregate);
         }
 
+        byte[] uuid = metadataManagerInstance.getUuid(clusterMetric);
+        if (uuid == null) {
+          LOG.error("Error computing UUID for metric. Cannot write metric : " + clusterMetric.toString());
+          continue;
+        }
+
         rowCount++;
         stmt.clearParameters();
-        stmt.setString(1, clusterMetric.getMetricName());
-        stmt.setString(2, clusterMetric.getAppId());
-        stmt.setString(3, clusterMetric.getInstanceId());
-        stmt.setLong(4, clusterMetric.getTimestamp());
-        stmt.setString(5, clusterMetric.getType());
-        stmt.setDouble(6, aggregate.getSum());
-        stmt.setLong(7, aggregate.getNumberOfSamples());
-        stmt.setDouble(8, aggregate.getMax());
-        stmt.setDouble(9, aggregate.getMin());
+        stmt.setBytes(1, uuid);
+        stmt.setLong(2, clusterMetric.getTimestamp());
+        stmt.setDouble(3, aggregate.getSum());
+        stmt.setLong(4, aggregate.getNumberOfSamples());
+        stmt.setDouble(5, aggregate.getMax());
+        stmt.setDouble(6, aggregate.getMin());
 
         try {
           stmt.executeUpdate();
@@ -1560,21 +1564,23 @@ public class PhoenixHBaseAccessor {
    * One time save of metadata when discovering topology during aggregation.
    * @throws SQLException
    */
-  public void saveHostAppsMetadata(Map<String, Set<String>> hostedApps) throws SQLException {
+  public void saveHostAppsMetadata(Map<String, TimelineMetricHostMetadata> hostMetadata) throws SQLException {
     Connection conn = getConnection();
     PreparedStatement stmt = null;
     try {
       stmt = conn.prepareStatement(UPSERT_HOSTED_APPS_METADATA_SQL);
       int rowCount = 0;
 
-      for (Map.Entry<String, Set<String>> hostedAppsEntry : hostedApps.entrySet()) {
+      for (Map.Entry<String, TimelineMetricHostMetadata> hostedAppsEntry : hostMetadata.entrySet()) {
+        TimelineMetricHostMetadata timelineMetricHostMetadata = hostedAppsEntry.getValue();
         if (LOG.isTraceEnabled()) {
           LOG.trace("HostedAppsMetadata: " + hostedAppsEntry);
         }
 
         stmt.clearParameters();
         stmt.setString(1, hostedAppsEntry.getKey());
-        stmt.setString(2, StringUtils.join(hostedAppsEntry.getValue(), ","));
+        stmt.setBytes(2, timelineMetricHostMetadata.getUuid());
+        stmt.setString(3, StringUtils.join(timelineMetricHostMetadata.getHostedApps(), ","));
         try {
           stmt.executeUpdate();
           rowCount++;
@@ -1678,15 +1684,21 @@ public class PhoenixHBaseAccessor {
             + ", seriesStartTime = " + metadata.getSeriesStartTime()
           );
         }
-
-        stmt.clearParameters();
-        stmt.setString(1, metadata.getMetricName());
-        stmt.setString(2, metadata.getAppId());
-        stmt.setString(3, metadata.getUnits());
-        stmt.setString(4, metadata.getType());
-        stmt.setLong(5, metadata.getSeriesStartTime());
-        stmt.setBoolean(6, metadata.isSupportsAggregates());
-        stmt.setBoolean(7, metadata.isWhitelisted());
+        try {
+          stmt.clearParameters();
+          stmt.setString(1, metadata.getMetricName());
+          stmt.setString(2, metadata.getAppId());
+          stmt.setString(3, metadata.getInstanceId());
+          stmt.setBytes(4, metadata.getUuid());
+          stmt.setString(5, metadata.getUnits());
+          stmt.setString(6, metadata.getType());
+          stmt.setLong(7, metadata.getSeriesStartTime());
+          stmt.setBoolean(8, metadata.isSupportsAggregates());
+          stmt.setBoolean(9, metadata.isWhitelisted());
+        } catch (Exception e) {
+          LOG.error("Exception in saving metric metadata entry. ");
+          continue;
+        }
 
         try {
           stmt.executeUpdate();
@@ -1717,8 +1729,8 @@ public class PhoenixHBaseAccessor {
     }
   }
 
-  public Map<String, Set<String>> getHostedAppsMetadata() throws SQLException {
-    Map<String, Set<String>> hostedAppMap = new HashMap<>();
+  public Map<String, TimelineMetricHostMetadata> getHostedAppsMetadata() throws SQLException {
+    Map<String, TimelineMetricHostMetadata> hostedAppMap = new HashMap<>();
     Connection conn = getConnection();
     PreparedStatement stmt = null;
     ResultSet rs = null;
@@ -1728,8 +1740,9 @@ public class PhoenixHBaseAccessor {
       rs = stmt.executeQuery();
 
       while (rs.next()) {
-        hostedAppMap.put(rs.getString("HOSTNAME"),
-          new HashSet<>(Arrays.asList(StringUtils.split(rs.getString("APP_IDS"), ","))));
+        TimelineMetricHostMetadata hostMetadata = new TimelineMetricHostMetadata(new HashSet<>(Arrays.asList(StringUtils.split(rs.getString("APP_IDS"), ","))));
+        hostMetadata.setUuid(rs.getBytes("UUID"));
+        hostedAppMap.put(rs.getString("HOSTNAME"), hostMetadata);
       }
 
     } finally {
@@ -1820,9 +1833,11 @@ public class PhoenixHBaseAccessor {
       while (rs.next()) {
         String metricName = rs.getString("METRIC_NAME");
         String appId = rs.getString("APP_ID");
+        String instanceId = rs.getString("INSTANCE_ID");
         TimelineMetricMetadata metadata = new TimelineMetricMetadata(
           metricName,
           appId,
+          instanceId,
           rs.getString("UNITS"),
           rs.getString("TYPE"),
           rs.getLong("START_TIME"),
@@ -1830,8 +1845,9 @@ public class PhoenixHBaseAccessor {
           rs.getBoolean("IS_WHITELISTED")
         );
 
-        TimelineMetricMetadataKey key = new TimelineMetricMetadataKey(metricName, appId);
+        TimelineMetricMetadataKey key = new TimelineMetricMetadataKey(metricName, appId, instanceId);
         metadata.setIsPersisted(true); // Always true on retrieval
+        metadata.setUuid(rs.getBytes("UUID"));
         metadataMap.put(key, metadata);
       }
 
@@ -1862,4 +1878,8 @@ public class PhoenixHBaseAccessor {
     return metadataMap;
   }
 
+  public void setMetadataInstance(TimelineMetricMetadataManager metadataManager) {
+    this.metadataManagerInstance = metadataManager;
+    TIMELINE_METRIC_READ_HELPER = new TimelineMetricReadHelper(this.metadataManagerInstance);
+  }
 }
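
All of the aggregate writers above now follow the same pattern: resolve the metric to its UUID through the metadata manager, skip the row if no UUID can be computed, and bind the UUID as the first statement parameter ahead of the time and value columns. A minimal sketch of that pattern (the method name and surrounding class are illustrative, not part of the patch):

    // Sketch: UUID-first parameter binding used by the aggregate writers above.
    private void writeHostAggregate(PreparedStatement stmt, TimelineMetric metric,
                                    MetricHostAggregate agg) throws SQLException {
      byte[] uuid = metadataManagerInstance.getUuid(metric);
      if (uuid == null) {
        // Without a UUID the row key cannot be formed, so the record is skipped.
        LOG.error("Error computing UUID for metric. Cannot write metric: " + metric);
        return;
      }
      stmt.clearParameters();
      stmt.setBytes(1, uuid);                  // row key: metric UUID
      stmt.setLong(2, metric.getTimestamp());  // row key: server time
      stmt.setDouble(3, agg.getSum());
      stmt.setDouble(4, agg.getMax());
      stmt.setDouble(5, agg.getMin());
      stmt.setDouble(6, agg.getNumberOfSamples());
      stmt.executeUpdate();
    }
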
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricConfiguration.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricConfiguration.java
index b1ecc51..6083859 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricConfiguration.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricConfiguration.java
@@ -313,11 +313,16 @@ public class TimelineMetricConfiguration {
   public static final String TIMELINE_METRICS_PRECISION_TABLE_HBASE_BLOCKING_STORE_FILES =
     "timeline.metrics.precision.table.hbase.hstore.blockingStoreFiles";
 
+<<<<<<< HEAD
   public static final String TIMELINE_METRICS_SUPPORT_MULTIPLE_CLUSTERS =
     "timeline.metrics.support.multiple.clusters";
 
   public static final String TIMELINE_METRICS_EVENT_METRIC_PATTERNS =
     "timeline.metrics.downsampler.event.metric.patterns";
+
+  public static final String TIMELINE_METRICS_UUID_GEN_STRATEGY =
+    "timeline.metrics.uuid.gen.strategy";
 
   public static final String HOST_APP_ID = "HOST";
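
The new timeline.metrics.uuid.gen.strategy property selects how metric UUIDs are minted; this patch ships a hash-based strategy (the default) and a random one. A hedged sketch of the selection logic, assuming the accepted value for the random strategy is the string "random":

    // Sketch: pick a UUID generation strategy from configuration.
    // The accepted string values are an assumption; only the default is confirmed.
    MetricUuidGenStrategy uuidGenStrategy;
    String strategy = metricsConf.get(TIMELINE_METRICS_UUID_GEN_STRATEGY, "hash");
    if ("random".equalsIgnoreCase(strategy)) {
      uuidGenStrategy = new RandomUuidGenStrategy();
    } else {
      uuidGenStrategy = new HashBasedUuidGenStrategy(); // default
    }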
 
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricStore.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricStore.java
index d052d54..dab4494 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricStore.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricStore.java
@@ -25,6 +25,8 @@ import org.apache.hadoop.metrics2.sink.timeline.TimelineMetricMetadata;
 import org.apache.hadoop.metrics2.sink.timeline.TimelineMetrics;
 import org.apache.hadoop.metrics2.sink.timeline.TopNConfig;
 import org.apache.hadoop.yarn.api.records.timeline.TimelinePutResponse;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.discovery.TimelineMetricMetadataKey;
+
 import java.io.IOException;
 import java.sql.SQLException;
 import java.util.List;
@@ -98,9 +100,11 @@ public interface TimelineMetricStore {
    */
   Map<String, Map<String,Set<String>>> getInstanceHostsMetadata(String instanceId, String appId) throws SQLException, IOException;
 
-  /**
-   * Return a list of known live collector nodes
-   * @return [ hostname ]
-   */
+ Map<String, TimelineMetricMetadataKey> getUuids() throws SQLException, IOException;
+
+    /**
+     * Return a list of known live collector nodes
+     * @return [ hostname ]
+     */
   List<String> getLiveInstances();
 }
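
getUuids() exposes the reverse index from UUID to metadata key that the metadata manager maintains. A short usage sketch (the store variable and the println output are illustrative):

    // Sketch: enumerate the UUID -> metadata-key index of a store.
    Map<String, TimelineMetricMetadataKey> uuids = store.getUuids();
    for (Map.Entry<String, TimelineMetricMetadataKey> entry : uuids.entrySet()) {
      System.out.println(entry.getKey() + " -> " + entry.getValue());
    }
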
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricsFilter.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricsFilter.java
index 63cc510..ef7186c 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricsFilter.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricsFilter.java
@@ -167,11 +167,4 @@ public class TimelineMetricsFilter {
     return false;
   }
 
-  public static void addToWhitelist(String metricName) {
-
-    if (StringUtils.isNotEmpty(metricName)) {
-      whitelistedMetrics.add(metricName);
-    }
-  }
-
 }
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/AbstractTimelineAggregator.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/AbstractTimelineAggregator.java
index e161abe..d953be4 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/AbstractTimelineAggregator.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/AbstractTimelineAggregator.java
@@ -272,7 +272,8 @@ public abstract class AbstractTimelineAggregator implements TimelineMetricAggreg
         conn.commit();
         LOG.info(rows + " row(s) updated in aggregation.");
 
-        downsample(conn, startTime, endTime);
+        // TODO: Fix downsampling after UUID change.
+        //downsample(conn, startTime, endTime);
       } else {
         rs = stmt.executeQuery();
       }
@@ -280,7 +281,7 @@ public abstract class AbstractTimelineAggregator implements TimelineMetricAggreg
 
       aggregate(rs, startTime, endTime);
 
-    } catch (SQLException | IOException e) {
+    } catch (Exception e) {
       LOG.error("Exception during aggregating metrics.", e);
       success = false;
     } finally {
@@ -455,25 +456,29 @@ public abstract class AbstractTimelineAggregator implements TimelineMetricAggreg
    * @return
    */
   protected String getDownsampledMetricSkipClause() {
-    if (CollectionUtils.isEmpty(this.downsampleMetricPatterns)) {
-      return StringUtils.EMPTY;
-    }
-
-    StringBuilder sb = new StringBuilder();
-
-    for (int i = 0; i < downsampleMetricPatterns.size(); i++) {
-      sb.append(" METRIC_NAME");
-      sb.append(" NOT");
-      sb.append(" LIKE ");
-      sb.append("'" + downsampleMetricPatterns.get(i) + "'");
 
-      if (i < downsampleMetricPatterns.size() - 1) {
-        sb.append(" AND ");
-      }
-    }
-
-    sb.append(" AND ");
-    return sb.toString();
+    // TODO: Fix downsampling after UUID change.
+    return StringUtils.EMPTY;
+
+//    if (CollectionUtils.isEmpty(this.downsampleMetricPatterns)) {
+//      return StringUtils.EMPTY;
+//    }
+//
+//    StringBuilder sb = new StringBuilder();
+//
+//    for (int i = 0; i < downsampleMetricPatterns.size(); i++) {
+//      sb.append(" METRIC_NAME");
+//      sb.append(" NOT");
+//      sb.append(" LIKE ");
+//      sb.append("'" + downsampleMetricPatterns.get(i) + "'");
+//
+//      if (i < downsampleMetricPatterns.size() - 1) {
+//        sb.append(" AND ");
+//      }
+//    }
+//
+//    sb.append(" AND ");
+//    return sb.toString();
   }
 
   /**
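
getDownsampledMetricSkipClause() is stubbed out because the UUID schema no longer stores METRIC_NAME in the data tables, so the old NOT LIKE pattern filter has nothing to match against. One plausible replacement, sketched here under the assumption that the skip patterns are first resolved to UUIDs via the metadata manager, is a NOT IN clause over the UUID column:

    // Hypothetical fix: filter on pre-resolved UUIDs instead of METRIC_NAME.
    protected String getDownsampledMetricSkipClause(List<byte[]> skipUuids) {
      if (skipUuids == null || skipUuids.isEmpty()) {
        return StringUtils.EMPTY;
      }
      StringBuilder sb = new StringBuilder(" UUID NOT IN (");
      for (int i = 0; i < skipUuids.size(); i++) {
        sb.append("?");
        if (i < skipUuids.size() - 1) {
          sb.append(", ");
        }
      }
      sb.append(") AND ");
      return sb.toString();
    }
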
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineClusterMetric.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineClusterMetric.java
index b7d9110..6e793e1 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineClusterMetric.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineClusterMetric.java
@@ -22,15 +22,13 @@ public class TimelineClusterMetric {
   private String appId;
   private String instanceId;
   private long timestamp;
-  private String type;
 
   public TimelineClusterMetric(String metricName, String appId, String instanceId,
-                        long timestamp, String type) {
+                        long timestamp) {
     this.metricName = metricName;
     this.appId = appId;
     this.instanceId = instanceId;
     this.timestamp = timestamp;
-    this.type = type;
   }
 
   public String getMetricName() {
@@ -49,8 +47,6 @@ public class TimelineClusterMetric {
     return timestamp;
   }
 
-  public String getType() { return type; }
-
   @Override
   public boolean equals(Object o) {
     if (this == o) return true;
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricAggregatorFactory.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricAggregatorFactory.java
index 2eb3553..081e610 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricAggregatorFactory.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricAggregatorFactory.java
@@ -95,6 +95,7 @@ public class TimelineMetricAggregatorFactory {
    */
   public static TimelineMetricAggregator createTimelineMetricAggregatorMinute
     (PhoenixHBaseAccessor hBaseAccessor, Configuration metricsConf,
+     TimelineMetricMetadataManager metadataManager,
      MetricCollectorHAController haController) {
 
     String checkpointDir = metricsConf.get(
@@ -128,6 +129,7 @@ public class TimelineMetricAggregatorFactory {
 
     return new TimelineMetricHostAggregator(
       METRIC_RECORD_MINUTE,
+      metadataManager,
       hBaseAccessor, metricsConf,
       checkpointLocation,
       sleepIntervalMillis,
@@ -145,6 +147,7 @@ public class TimelineMetricAggregatorFactory {
    */
   public static TimelineMetricAggregator createTimelineMetricAggregatorHourly
     (PhoenixHBaseAccessor hBaseAccessor, Configuration metricsConf,
+     TimelineMetricMetadataManager metadataManager,
      MetricCollectorHAController haController) {
 
     String checkpointDir = metricsConf.get(
@@ -178,6 +181,7 @@ public class TimelineMetricAggregatorFactory {
 
     return new TimelineMetricHostAggregator(
       METRIC_RECORD_HOURLY,
+      metadataManager,
       hBaseAccessor, metricsConf,
       checkpointLocation,
       sleepIntervalMillis,
@@ -195,6 +199,7 @@ public class TimelineMetricAggregatorFactory {
    */
   public static TimelineMetricAggregator createTimelineMetricAggregatorDaily
     (PhoenixHBaseAccessor hBaseAccessor, Configuration metricsConf,
+     TimelineMetricMetadataManager metadataManager,
      MetricCollectorHAController haController) {
 
     String checkpointDir = metricsConf.get(
@@ -228,6 +233,7 @@ public class TimelineMetricAggregatorFactory {
 
     return new TimelineMetricHostAggregator(
       METRIC_RECORD_DAILY,
+      metadataManager,
       hBaseAccessor, metricsConf,
       checkpointLocation,
       sleepIntervalMillis,
@@ -291,6 +297,7 @@ public class TimelineMetricAggregatorFactory {
    */
   public static TimelineMetricAggregator createTimelineClusterAggregatorMinute(
     PhoenixHBaseAccessor hBaseAccessor, Configuration metricsConf,
+    TimelineMetricMetadataManager metadataManager,
     MetricCollectorHAController haController) {
 
     String checkpointDir = metricsConf.get(
@@ -326,6 +333,7 @@ public class TimelineMetricAggregatorFactory {
 
     return new TimelineMetricClusterAggregator(
       METRIC_AGGREGATE_MINUTE,
+      metadataManager,
       hBaseAccessor, metricsConf,
       checkpointLocation,
       sleepIntervalMillis,
@@ -344,6 +352,7 @@ public class TimelineMetricAggregatorFactory {
    */
   public static TimelineMetricAggregator createTimelineClusterAggregatorHourly(
     PhoenixHBaseAccessor hBaseAccessor, Configuration metricsConf,
+    TimelineMetricMetadataManager metadataManager,
     MetricCollectorHAController haController) {
 
     String checkpointDir = metricsConf.get(
@@ -379,6 +388,7 @@ public class TimelineMetricAggregatorFactory {
 
     return new TimelineMetricClusterAggregator(
       METRIC_AGGREGATE_HOURLY,
+      metadataManager,
       hBaseAccessor, metricsConf,
       checkpointLocation,
       sleepIntervalMillis,
@@ -397,6 +407,7 @@ public class TimelineMetricAggregatorFactory {
    */
   public static TimelineMetricAggregator createTimelineClusterAggregatorDaily(
     PhoenixHBaseAccessor hBaseAccessor, Configuration metricsConf,
+    TimelineMetricMetadataManager metadataManager,
     MetricCollectorHAController haController) {
 
     String checkpointDir = metricsConf.get(
@@ -432,6 +443,7 @@ public class TimelineMetricAggregatorFactory {
 
     return new TimelineMetricClusterAggregator(
       METRIC_AGGREGATE_DAILY,
+      metadataManager,
       hBaseAccessor, metricsConf,
       checkpointLocation,
       sleepIntervalMillis,
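
Every factory method now threads the metadata manager into the aggregator it builds, because the read and write helpers need it to translate between UUIDs and metric identities. A wiring sketch (variable names are illustrative):

    // Sketch: aggregators are constructed with the shared metadata manager.
    TimelineMetricMetadataManager metadataManager =
        new TimelineMetricMetadataManager(hBaseAccessor);
    TimelineMetricAggregator minuteAggregator =
        TimelineMetricAggregatorFactory.createTimelineMetricAggregatorMinute(
            hBaseAccessor, metricsConf, metadataManager, haController);
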
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricAppAggregator.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricAppAggregator.java
index 9eaf456..55104de 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricAppAggregator.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricAppAggregator.java
@@ -24,6 +24,7 @@ import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.metrics2.sink.timeline.MetricClusterAggregate;
 import org.apache.hadoop.metrics2.sink.timeline.TimelineMetricMetadata;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricsFilter;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.discovery.TimelineMetricHostMetadata;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.discovery.TimelineMetricMetadataKey;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.discovery.TimelineMetricMetadataManager;
 
@@ -48,14 +49,14 @@ public class TimelineMetricAppAggregator {
   private static final Log LOG = LogFactory.getLog(TimelineMetricAppAggregator.class);
   // Lookup to check candidacy of an app
   private final List<String> appIdsToAggregate;
-  private final Map<String, Set<String>> hostedAppsMap;
+  private final Map<String, TimelineMetricHostMetadata> hostMetadata;
   Map<TimelineClusterMetric, MetricClusterAggregate> aggregateClusterMetrics = new HashMap<>();
   TimelineMetricMetadataManager metadataManagerInstance;
 
   public TimelineMetricAppAggregator(TimelineMetricMetadataManager metadataManager,
                                      Configuration metricsConf) {
     appIdsToAggregate = getAppIdsForHostAggregation(metricsConf);
-    hostedAppsMap = metadataManager.getHostedAppsCache();
+    hostMetadata = metadataManager.getHostedAppsCache();
     metadataManagerInstance = metadataManager;
     LOG.info("AppIds configured for aggregation: " + appIdsToAggregate);
   }
@@ -95,17 +96,20 @@ public class TimelineMetricAppAggregator {
     // If metric is a host metric and host has apps on it
     if (appId.equalsIgnoreCase(HOST_APP_ID)) {
       // Candidate metric, update app aggregates
-      if (hostedAppsMap.containsKey(hostname)) {
+      if (hostMetadata.containsKey(hostname)) {
         updateAppAggregatesFromHostMetric(clusterMetric, hostname, metricValue);
       }
     } else {
       // Build the hostedapps map if not a host metric
       // Check app candidacy for host aggregation
       if (appIdsToAggregate.contains(appId)) {
-        Set<String> appIds = hostedAppsMap.get(hostname);
-        if (appIds == null) {
+        TimelineMetricHostMetadata timelineMetricHostMetadata = hostMetadata.get(hostname);
+        Set<String> appIds;
+        if (timelineMetricHostMetadata == null) {
           appIds = new HashSet<>();
-          hostedAppsMap.put(hostname, appIds);
+          hostMetadata.put(hostname, new TimelineMetricHostMetadata(appIds));
+        } else {
+          appIds = timelineMetricHostMetadata.getHostedApps();
         }
         if (!appIds.contains(appId)) {
           appIds.add(appId);
@@ -127,20 +131,20 @@ public class TimelineMetricAppAggregator {
       return;
     }
 
-    TimelineMetricMetadataKey appKey =  new TimelineMetricMetadataKey(clusterMetric.getMetricName(), HOST_APP_ID);
-    Set<String> apps = hostedAppsMap.get(hostname);
+    TimelineMetricMetadataKey appKey = new TimelineMetricMetadataKey(clusterMetric.getMetricName(), HOST_APP_ID, clusterMetric.getInstanceId());
+    Set<String> apps = hostMetadata.get(hostname).getHostedApps();
     for (String appId : apps) {
       if (appIdsToAggregate.contains(appId)) {
 
         appKey.setAppId(appId);
         TimelineMetricMetadata appMetadata = metadataManagerInstance.getMetadataCacheValue(appKey);
         if (appMetadata == null) {
-          TimelineMetricMetadataKey key = new TimelineMetricMetadataKey(clusterMetric.getMetricName(), HOST_APP_ID);
+          TimelineMetricMetadataKey key = new TimelineMetricMetadataKey(clusterMetric.getMetricName(), HOST_APP_ID, clusterMetric.getInstanceId());
           TimelineMetricMetadata hostMetricMetadata = metadataManagerInstance.getMetadataCacheValue(key);
 
           if (hostMetricMetadata != null) {
             TimelineMetricMetadata timelineMetricMetadata = new TimelineMetricMetadata(clusterMetric.getMetricName(),
-              appId, hostMetricMetadata.getUnits(), hostMetricMetadata.getType(), hostMetricMetadata.getSeriesStartTime(),
+              appId, clusterMetric.getInstanceId(), hostMetricMetadata.getUnits(), hostMetricMetadata.getType(), hostMetricMetadata.getSeriesStartTime(),
               hostMetricMetadata.isSupportsAggregates(), TimelineMetricsFilter.acceptMetric(clusterMetric.getMetricName(), appId));
             metadataManagerInstance.putIfModifiedTimelineMetricMetadata(timelineMetricMetadata);
           }
@@ -151,9 +155,7 @@ public class TimelineMetricAppAggregator {
           new TimelineClusterMetric(clusterMetric.getMetricName(),
             appId,
             clusterMetric.getInstanceId(),
-            clusterMetric.getTimestamp(),
-            clusterMetric.getType()
-          );
+            clusterMetric.getTimestamp());
 
         MetricClusterAggregate clusterAggregate = aggregateClusterMetrics.get(appTimelineClusterMetric);
 
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricClusterAggregator.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricClusterAggregator.java
index 74d4013..0f6dd79 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricClusterAggregator.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricClusterAggregator.java
@@ -23,6 +23,7 @@ import org.apache.hadoop.metrics2.sink.timeline.MetricHostAggregate;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.PhoenixHBaseAccessor;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.availability.AggregationTaskRunner.AGGREGATOR_NAME;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.availability.MetricCollectorHAController;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.discovery.TimelineMetricMetadataManager;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.Condition;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.DefaultCondition;
 
@@ -37,10 +38,11 @@ import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.ti
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.METRICS_CLUSTER_AGGREGATE_TABLE_NAME;
 
 public class TimelineMetricClusterAggregator extends AbstractTimelineAggregator {
-  private final TimelineMetricReadHelper readHelper = new TimelineMetricReadHelper(true);
+  private final TimelineMetricReadHelper readHelper;
   private final boolean isClusterPrecisionInputTable;
 
   public TimelineMetricClusterAggregator(AGGREGATOR_NAME aggregatorName,
+                                         TimelineMetricMetadataManager metricMetadataManager,
                                          PhoenixHBaseAccessor hBaseAccessor,
                                          Configuration metricsConf,
                                          String checkpointLocation,
@@ -56,6 +58,7 @@ public class TimelineMetricClusterAggregator extends AbstractTimelineAggregator
       hostAggregatorDisabledParam, inputTableName, outputTableName,
       nativeTimeRangeDelay, haController);
     isClusterPrecisionInputTable = inputTableName.equals(METRICS_CLUSTER_AGGREGATE_TABLE_NAME);
+    readHelper = new TimelineMetricReadHelper(metricMetadataManager, true);
   }
 
   @Override
@@ -71,9 +74,7 @@ public class TimelineMetricClusterAggregator extends AbstractTimelineAggregator
     }
 
     condition.setStatement(sqlStr);
-    condition.addOrderByColumn("METRIC_NAME");
-    condition.addOrderByColumn("APP_ID");
-    condition.addOrderByColumn("INSTANCE_ID");
+    condition.addOrderByColumn("UUID");
     condition.addOrderByColumn("SERVER_TIME");
     return condition;
   }
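
With UUID leading the row key, ordering scans by UUID and then SERVER_TIME keeps results in row-key order and avoids a client-side merge sort. The same two-column ordering is applied by the other aggregators below; as a sketch:

    // Sketch: retain row-key order (UUID, SERVER_TIME) on aggregation scans.
    private static void applyRowKeyOrder(Condition condition) {
      condition.addOrderByColumn("UUID");
      condition.addOrderByColumn("SERVER_TIME");
    }
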
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricClusterAggregatorSecond.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricClusterAggregatorSecond.java
index ca457f0..8dfc950 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricClusterAggregatorSecond.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricClusterAggregatorSecond.java
@@ -61,7 +61,7 @@ import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.
  */
 public class TimelineMetricClusterAggregatorSecond extends AbstractTimelineAggregator {
   public Long timeSliceIntervalMillis;
-  private TimelineMetricReadHelper timelineMetricReadHelper = new TimelineMetricReadHelper(true);
+  private TimelineMetricReadHelper timelineMetricReadHelper;
   // Aggregator to perform app-level aggregates for host metrics
   private final TimelineMetricAppAggregator appAggregator;
   // 1 minute client side buffering adjustment
@@ -69,8 +69,12 @@ public class TimelineMetricClusterAggregatorSecond extends AbstractTimelineAggre
   private final boolean interpolationEnabled;
   private TimelineMetricMetadataManager metadataManagerInstance;
   private String skipAggrPatternStrings;
+<<<<<<< HEAD
   private String skipInterpolationMetricPatternStrings;
   private Set<Pattern> skipInterpolationMetricPatterns = new HashSet<>();
+  private static final String liveHostsMetricName = "live_hosts";
 
   public TimelineMetricClusterAggregatorSecond(AGGREGATOR_NAME aggregatorName,
                                                TimelineMetricMetadataManager metadataManager,
@@ -95,6 +99,7 @@ public class TimelineMetricClusterAggregatorSecond extends AbstractTimelineAggre
     this.serverTimeShiftAdjustment = Long.parseLong(metricsConf.get(SERVER_SIDE_TIMESIFT_ADJUSTMENT, "90000"));
     this.interpolationEnabled = Boolean.parseBoolean(metricsConf.get(TIMELINE_METRICS_CLUSTER_AGGREGATOR_INTERPOLATION_ENABLED, "true"));
     this.skipAggrPatternStrings = metricsConf.get(TIMELINE_METRIC_AGGREGATION_SQL_FILTERS);
+<<<<<<< HEAD
     this.skipInterpolationMetricPatternStrings = metricsConf.get(TIMELINE_METRICS_EVENT_METRIC_PATTERNS, "");
 
     if (StringUtils.isNotEmpty(skipInterpolationMetricPatternStrings)) {
@@ -104,6 +109,9 @@ public class TimelineMetricClusterAggregatorSecond extends AbstractTimelineAggre
         skipInterpolationMetricPatterns.add(Pattern.compile(javaPatternString));
       }
     }
+    this.timelineMetricReadHelper = new TimelineMetricReadHelper(metadataManager, true);
   }
 
   @Override
@@ -143,10 +151,7 @@ public class TimelineMetricClusterAggregatorSecond extends AbstractTimelineAggre
     condition.setStatement(String.format(GET_METRIC_SQL,
       getQueryHint(startTime), METRICS_RECORD_TABLE_NAME));
     // Retaining order of the row-key avoids client side merge sort.
-    condition.addOrderByColumn("METRIC_NAME");
-    condition.addOrderByColumn("HOSTNAME");
-    condition.addOrderByColumn("APP_ID");
-    condition.addOrderByColumn("INSTANCE_ID");
+    condition.addOrderByColumn("UUID");
     condition.addOrderByColumn("SERVER_TIME");
     return condition;
   }
@@ -228,7 +233,7 @@ public class TimelineMetricClusterAggregatorSecond extends AbstractTimelineAggre
   protected int processAggregateClusterMetrics(Map<TimelineClusterMetric, MetricClusterAggregate> aggregateClusterMetrics,
                                               TimelineMetric metric, List<Long[]> timeSlices) {
     // Create time slices
-    TimelineMetricMetadataKey appKey =  new TimelineMetricMetadataKey(metric.getMetricName(), metric.getAppId());
+    TimelineMetricMetadataKey appKey =  new TimelineMetricMetadataKey(metric.getMetricName(), metric.getAppId(), metric.getInstanceId());
     TimelineMetricMetadata metricMetadata = metadataManagerInstance.getMetadataCacheValue(appKey);
 
     if (metricMetadata != null && !metricMetadata.isSupportsAggregates()) {
@@ -301,8 +306,7 @@ public class TimelineMetricClusterAggregatorSecond extends AbstractTimelineAggre
           timelineMetric.getMetricName(),
           timelineMetric.getAppId(),
           timelineMetric.getInstanceId(),
-          timestamp,
-          timelineMetric.getType());
+          timestamp);
 
         if (prevTimestamp < 0 || timestamp.equals(prevTimestamp)) {
           Double newValue = metric.getValue();
@@ -369,8 +373,7 @@ public class TimelineMetricClusterAggregatorSecond extends AbstractTimelineAggre
               timelineMetric.getMetricName(),
               timelineMetric.getAppId(),
               timelineMetric.getInstanceId(),
-              entry.getKey(),
-              timelineMetric.getType());
+              entry.getKey());
 
             timelineClusterMetricMap.put(clusterMetric, interpolatedValue);
           } else {
@@ -427,8 +430,7 @@ public class TimelineMetricClusterAggregatorSecond extends AbstractTimelineAggre
               timelineMetric.getMetricName(),
               timelineMetric.getAppId(),
               timelineMetric.getInstanceId(),
-              timeSlice[1],
-              timelineMetric.getType());
+              timeSlice[1]);
 
             LOG.debug("Interpolated value : " + interpolatedValue);
             timelineClusterMetricMap.put(clusterMetric, interpolatedValue);
@@ -458,13 +460,15 @@ public class TimelineMetricClusterAggregatorSecond extends AbstractTimelineAggre
 
     for (Map.Entry<String, MutableInt> appHostsEntry : appHostsCount.entrySet()) {
       TimelineClusterMetric timelineClusterMetric = new TimelineClusterMetric(
-        "live_hosts", appHostsEntry.getKey(), null, timestamp, null);
+        liveHostsMetricName, appHostsEntry.getKey(), null, timestamp);
 
       Integer numOfHosts = appHostsEntry.getValue().intValue();
 
       MetricClusterAggregate metricClusterAggregate = new MetricClusterAggregate(
         (double) numOfHosts, 1, null, (double) numOfHosts, (double) numOfHosts);
 
+      // Ensure a UUID is generated and cached for the live_hosts metric before the aggregate is written.
+      metadataManagerInstance.getUuid(timelineClusterMetric);
+
       aggregateClusterMetrics.put(timelineClusterMetric, metricClusterAggregate);
     }
 
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricHostAggregator.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricHostAggregator.java
index a17433b..8f941e1 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricHostAggregator.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricHostAggregator.java
@@ -25,6 +25,7 @@ import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.PhoenixHBaseAccessor;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.availability.AggregationTaskRunner.AGGREGATOR_NAME;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.availability.MetricCollectorHAController;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.discovery.TimelineMetricMetadataManager;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.Condition;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.DefaultCondition;
 
@@ -38,9 +39,10 @@ import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.ti
 
 public class TimelineMetricHostAggregator extends AbstractTimelineAggregator {
   private static final Log LOG = LogFactory.getLog(TimelineMetricHostAggregator.class);
-  TimelineMetricReadHelper readHelper = new TimelineMetricReadHelper(false);
+  TimelineMetricReadHelper readHelper;
 
   public TimelineMetricHostAggregator(AGGREGATOR_NAME aggregatorName,
+                                      TimelineMetricMetadataManager metricMetadataManager,
                                       PhoenixHBaseAccessor hBaseAccessor,
                                       Configuration metricsConf,
                                       String checkpointLocation,
@@ -54,6 +56,7 @@ public class TimelineMetricHostAggregator extends AbstractTimelineAggregator {
     super(aggregatorName, hBaseAccessor, metricsConf, checkpointLocation,
       sleepIntervalMillis, checkpointCutOffMultiplier, hostAggregatorDisabledParam,
       tableName, outputTableName, nativeTimeRangeDelay, haController);
+    readHelper = new TimelineMetricReadHelper(metricMetadataManager, false);
   }
 
   @Override
@@ -74,11 +77,8 @@ public class TimelineMetricHostAggregator extends AbstractTimelineAggregator {
     condition.setStatement(String.format(GET_METRIC_AGGREGATE_ONLY_SQL,
       getQueryHint(startTime), tableName));
     // Retaining order of the row-key avoids client side merge sort.
-    condition.addOrderByColumn("METRIC_NAME");
-    condition.addOrderByColumn("HOSTNAME");
+    condition.addOrderByColumn("UUID");
     condition.addOrderByColumn("SERVER_TIME");
-    condition.addOrderByColumn("APP_ID");
-    condition.addOrderByColumn("INSTANCE_ID");
     return condition;
   }
 
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricReadHelper.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricReadHelper.java
index be21f5a..8a5606a 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricReadHelper.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricReadHelper.java
@@ -23,16 +23,17 @@ import org.apache.hadoop.metrics2.sink.timeline.MetricHostAggregate;
 import org.apache.hadoop.metrics2.sink.timeline.SingleValuedTimelineMetric;
 import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.PhoenixHBaseAccessor;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.discovery.TimelineMetricMetadataManager;
 
 import java.io.IOException;
 import java.sql.ResultSet;
 import java.sql.SQLException;
-import java.util.Map;
 import java.util.TreeMap;
 
 public class TimelineMetricReadHelper {
 
   private boolean ignoreInstance = false;
+  private TimelineMetricMetadataManager metadataManagerInstance = null;
 
   public TimelineMetricReadHelper() {}
 
@@ -40,6 +41,15 @@ public class TimelineMetricReadHelper {
     this.ignoreInstance = ignoreInstance;
   }
 
+  public TimelineMetricReadHelper(TimelineMetricMetadataManager timelineMetricMetadataManager) {
+    this.metadataManagerInstance = timelineMetricMetadataManager;
+  }
+
+  public TimelineMetricReadHelper(TimelineMetricMetadataManager timelineMetricMetadataManager, boolean ignoreInstance) {
+    this.metadataManagerInstance = timelineMetricMetadataManager;
+    this.ignoreInstance = ignoreInstance;
+  }
+
   public TimelineMetric getTimelineMetricFromResultSet(ResultSet rs)
       throws SQLException, IOException {
     TimelineMetric metric = getTimelineMetricCommonsFromResultSet(rs);
@@ -51,15 +61,16 @@ public class TimelineMetricReadHelper {
   public SingleValuedTimelineMetric getAggregatedTimelineMetricFromResultSet(ResultSet rs,
       Function f) throws SQLException, IOException {
 
+    byte[] uuid = rs.getBytes("UUID");
+    TimelineMetric timelineMetric = metadataManagerInstance.getMetricFromUuid(uuid);
     Function function = (f != null) ? f : Function.DEFAULT_VALUE_FUNCTION;
     SingleValuedTimelineMetric metric = new SingleValuedTimelineMetric(
-      rs.getString("METRIC_NAME") + function.getSuffix(),
-      rs.getString("APP_ID"),
-      rs.getString("INSTANCE_ID"),
-      rs.getString("HOSTNAME"),
-      rs.getLong("SERVER_TIME"),
+      timelineMetric.getMetricName() + function.getSuffix(),
+      timelineMetric.getAppId(),
+      timelineMetric.getInstanceId(),
+      timelineMetric.getHostName(),
       rs.getLong("SERVER_TIME"),
-      rs.getString("UNITS")
+      rs.getLong("SERVER_TIME")
     );
 
     double value;
@@ -91,16 +102,14 @@ public class TimelineMetricReadHelper {
    */
   public TimelineMetric getTimelineMetricCommonsFromResultSet(ResultSet rs)
       throws SQLException {
-    TimelineMetric metric = new TimelineMetric();
-    metric.setMetricName(rs.getString("METRIC_NAME"));
-    metric.setAppId(rs.getString("APP_ID"));
-    if (!ignoreInstance) {
-      metric.setInstanceId(rs.getString("INSTANCE_ID"));
+
+    byte[] uuid = rs.getBytes("UUID");
+    TimelineMetric metric = metadataManagerInstance.getMetricFromUuid(uuid);
+    if (ignoreInstance) {
+      metric.setInstanceId(null);
     }
-    metric.setHostName(rs.getString("HOSTNAME"));
     metric.setTimestamp(rs.getLong("SERVER_TIME"));
     metric.setStartTime(rs.getLong("START_TIME"));
-    metric.setType(rs.getString("UNITS"));
     return metric;
   }
 
@@ -130,14 +139,16 @@ public class TimelineMetricReadHelper {
     return agg;
   }
 
-
   public TimelineClusterMetric fromResultSet(ResultSet rs) throws SQLException {
+
+    byte[] uuid = rs.getBytes("UUID");
+    TimelineMetric timelineMetric = metadataManagerInstance.getMetricFromUuid(uuid);
+
     return new TimelineClusterMetric(
-      rs.getString("METRIC_NAME"),
-      rs.getString("APP_ID"),
-      ignoreInstance ? null : rs.getString("INSTANCE_ID"),
-      rs.getLong("SERVER_TIME"),
-      rs.getString("UNITS"));
+      timelineMetric.getMetricName(),
+      timelineMetric.getAppId(),
+      ignoreInstance ? null : timelineMetric.getInstanceId(),
+      rs.getLong("SERVER_TIME"));
   }
 
   public MetricHostAggregate getMetricHostAggregateFromResultSet(ResultSet rs)
@@ -154,14 +165,8 @@ public class TimelineMetricReadHelper {
 
   public TimelineMetric getTimelineMetricKeyFromResultSet(ResultSet rs)
       throws SQLException, IOException {
-    TimelineMetric metric = new TimelineMetric();
-    metric.setMetricName(rs.getString("METRIC_NAME"));
-    metric.setAppId(rs.getString("APP_ID"));
-    metric.setInstanceId(rs.getString("INSTANCE_ID"));
-    metric.setHostName(rs.getString("HOSTNAME"));
-    metric.setTimestamp(rs.getLong("SERVER_TIME"));
-    metric.setType(rs.getString("UNITS"));
-    return metric;
+    byte[] uuid = rs.getBytes("UUID");
+    return metadataManagerInstance.getMetricFromUuid(uuid);
   }
 }
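
On the read side, the helper now reconstructs a metric's identity (name, appId, instanceId, hostname) from the UUID column instead of reading individual identity columns; only the time and value columns come from the row itself. Condensed, the common read path is:

    // Sketch: identity from the UUID, time columns from the row.
    byte[] uuid = rs.getBytes("UUID");
    TimelineMetric metric = metadataManagerInstance.getMetricFromUuid(uuid);
    metric.setTimestamp(rs.getLong("SERVER_TIME"));
    metric.setStartTime(rs.getLong("START_TIME"));

Note that the UNITS column is no longer read anywhere in this class; units now come from the metric's metadata entry.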
 
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/Condition.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TimelineMetricHostMetadata.java
similarity index 52%
copy from ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/Condition.java
copy to ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TimelineMetricHostMetadata.java
index 9aa64bd..06e9279 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/Condition.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TimelineMetricHostMetadata.java
@@ -1,9 +1,3 @@
-package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query;
-
-import org.apache.hadoop.metrics2.sink.timeline.Precision;
-
-import java.util.List;
-
 /**
  * Licensed to the Apache Software Foundation (ASF) under one
  * or more contributor license agreements.  See the NOTICE file
@@ -21,28 +15,37 @@ import java.util.List;
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-public interface Condition {
-  boolean isEmpty();
-
-  List<String> getMetricNames();
-  boolean isPointInTime();
-  boolean isGrouped();
-  void setStatement(String statement);
-  List<String> getHostnames();
-  Precision getPrecision();
-  void setPrecision(Precision precision);
-  String getAppId();
-  String getInstanceId();
-  StringBuilder getConditionClause();
-  String getOrderByClause(boolean asc);
-  String getStatement();
-  Long getStartTime();
-  Long getEndTime();
-  Integer getLimit();
-  Integer getFetchSize();
-  void setFetchSize(Integer fetchSize);
-  void addOrderByColumn(String column);
-  void setNoLimit();
-  boolean doUpdate();
-  void setMetricNamesNotCondition(boolean metricNamesNotCondition);
+
+package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.discovery;
+
+import java.util.HashSet;
+import java.util.Set;
+
+public class TimelineMetricHostMetadata {
+  private Set<String> hostedApps = new HashSet<>();
+  private byte[] uuid;
+
+  // Default constructor
+  public TimelineMetricHostMetadata() {
+  }
+
+  public TimelineMetricHostMetadata(Set<String> hostedApps) {
+    this.hostedApps = hostedApps;
+  }
+
+  public Set<String> getHostedApps() {
+    return hostedApps;
+  }
+
+  public void setHostedApps(Set<String> hostedApps) {
+    this.hostedApps = hostedApps;
+  }
+
+  public byte[] getUuid() {
+    return uuid;
+  }
+
+  public void setUuid(byte[] uuid) {
+    this.uuid = uuid;
+  }
 }
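
TimelineMetricHostMetadata replaces the bare Set&lt;String&gt; of hosted apps so that a per-host UUID (4 bytes, per the hostnameUuidLength constant introduced below) can travel with the app list. A usage sketch with illustrative values:

    // Sketch: host metadata carries its hosted apps plus a short host UUID.
    TimelineMetricHostMetadata hostMetadata =
        new TimelineMetricHostMetadata(new HashSet<>(Arrays.asList("HOST", "datanode")));
    hostMetadata.setUuid(new byte[] {0x01, 0x02, 0x03, 0x04}); // illustrative 4-byte id
    Set<String> apps = hostMetadata.getHostedApps();
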
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TimelineMetricMetadataKey.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TimelineMetricMetadataKey.java
index 504b502..6aeb2dd 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TimelineMetricMetadataKey.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TimelineMetricMetadataKey.java
@@ -17,13 +17,20 @@
  */
 package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.discovery;
 
+import org.apache.commons.lang3.StringUtils;
+
+import javax.xml.bind.annotation.XmlRootElement;
+
+@XmlRootElement
 public class TimelineMetricMetadataKey {
   String metricName;
   String appId;
+  String instanceId;
 
-  public TimelineMetricMetadataKey(String metricName, String appId) {
+  public TimelineMetricMetadataKey(String metricName, String appId, String instanceId) {
     this.metricName = metricName;
     this.appId = appId;
+    this.instanceId = instanceId;
   }
 
   public String getMetricName() {
@@ -34,6 +41,10 @@ public class TimelineMetricMetadataKey {
     return appId;
   }
 
+  public String getInstanceId() {
+    return instanceId;
+  }
+
   public void setAppId(String appId) {
     this.appId = appId;
   }
@@ -46,15 +57,24 @@ public class TimelineMetricMetadataKey {
     TimelineMetricMetadataKey that = (TimelineMetricMetadataKey) o;
 
     if (!metricName.equals(that.metricName)) return false;
-    return !(appId != null ? !appId.equals(that.appId) : that.appId != null);
-
+    if (!appId.equals(that.appId)) return false;
+    return (StringUtils.isNotEmpty(instanceId) ? instanceId.equals(that.instanceId) : StringUtils.isEmpty(that.instanceId));
   }
 
   @Override
   public int hashCode() {
     int result = metricName.hashCode();
     result = 31 * result + (appId != null ? appId.hashCode() : 0);
+    result = 31 * result + (instanceId != null ? instanceId.hashCode() : 0);
     return result;
   }
 
+  @Override
+  public String toString() {
+    return "TimelineMetricMetadataKey{" +
+      "metricName='" + metricName + '\'' +
+      ", appId='" + appId + '\'' +
+      ", instanceId='" + instanceId + '\'' +
+      '}';
+  }
 }
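
equals() deliberately treats a null instanceId and an empty one as the same key, and hashCode() remains consistent with that because "".hashCode() is 0, the same value contributed for null. For example:

    // Sketch: null and empty instanceId produce equal keys.
    TimelineMetricMetadataKey a = new TimelineMetricMetadataKey("cpu_user", "HOST", null);
    TimelineMetricMetadataKey b = new TimelineMetricMetadataKey("cpu_user", "HOST", "");
    assert a.equals(b) && a.hashCode() == b.hashCode();
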
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TimelineMetricMetadataManager.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TimelineMetricMetadataManager.java
index 8a71756..e00c045 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TimelineMetricMetadataManager.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TimelineMetricMetadataManager.java
@@ -19,6 +19,7 @@ package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline
 
 import org.apache.commons.collections.CollectionUtils;
 import org.apache.commons.collections.MapUtils;
+import org.apache.commons.lang.ArrayUtils;
 import org.apache.commons.lang.StringUtils;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
@@ -29,6 +30,9 @@ import org.apache.hadoop.metrics2.sink.timeline.TimelineMetricMetadata;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.PhoenixHBaseAccessor;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.TimelineClusterMetric;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.uuid.HashBasedUuidGenStrategy;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.uuid.MetricUuidGenStrategy;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.uuid.RandomUuidGenStrategy;
 
 import java.net.MalformedURLException;
 import java.net.URISyntaxException;
@@ -48,6 +52,7 @@ import java.util.concurrent.atomic.AtomicBoolean;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.DISABLE_METRIC_METADATA_MGMT;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.METRICS_METADATA_SYNC_INIT_DELAY;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.METRICS_METADATA_SYNC_SCHEDULE_DELAY;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.TIMELINE_METRICS_UUID_GEN_STRATEGY;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.TIMELINE_METRIC_METADATA_FILTERS;
 
 public class TimelineMetricMetadataManager {
@@ -55,12 +60,17 @@ public class TimelineMetricMetadataManager {
   private boolean isDisabled = false;
   // Cache all metadata on retrieval
   private final Map<TimelineMetricMetadataKey, TimelineMetricMetadata> METADATA_CACHE = new ConcurrentHashMap<>();
+  private final Map<String, TimelineMetricMetadataKey> uuidKeyMap = new ConcurrentHashMap<>();
   // Map to lookup apps on a host
-  private final Map<String, Set<String>> HOSTED_APPS_MAP = new ConcurrentHashMap<>();
+  private final Map<String, TimelineMetricHostMetadata> HOSTED_APPS_MAP = new ConcurrentHashMap<>();
+  private final Map<String, String> uuidHostMap = new ConcurrentHashMap<>();
   private final Map<String, Set<String>> INSTANCE_HOST_MAP = new ConcurrentHashMap<>();
   // Sync only when needed
   AtomicBoolean SYNC_HOSTED_APPS_METADATA = new AtomicBoolean(false);
   AtomicBoolean SYNC_HOSTED_INSTANCES_METADATA = new AtomicBoolean(false);
+  private MetricUuidGenStrategy uuidGenStrategy = new HashBasedUuidGenStrategy();
+  private static final int timelineMetricUuidLength = 16;
+  private static final int hostnameUuidLength = 4;
 
   // Single thread to sync back new writes to the store
   private final ScheduledExecutorService executorService = Executors.newSingleThreadScheduledExecutor();
@@ -81,6 +91,8 @@ public class TimelineMetricMetadataManager {
     if (!StringUtils.isEmpty(patternStrings)) {
       metricNameFilters.addAll(Arrays.asList(patternStrings.split(",")));
     }
+
+    uuidGenStrategy = getUuidStrategy(metricsConf);
   }
 
   public TimelineMetricMetadataManager(PhoenixHBaseAccessor hBaseAccessor) throws MalformedURLException, URISyntaxException {
@@ -108,11 +120,14 @@ public class TimelineMetricMetadataManager {
         // Store in the cache
         METADATA_CACHE.putAll(metadata);
 
-        Map<String, Set<String>> hostedAppData = getHostedAppsFromStore();
+        Map<String, TimelineMetricHostMetadata> hostedAppData = getHostedAppsFromStore();
 
         LOG.info("Retrieved " + hostedAppData.size() + " host objects from store.");
         HOSTED_APPS_MAP.putAll(hostedAppData);
 
+        loadUuidMapsOnInit();
+
+        hBaseAccessor.setMetadataInstance(this);
       } catch (SQLException e) {
         LOG.warn("Exception loading metric metadata", e);
       }
@@ -127,7 +142,7 @@ public class TimelineMetricMetadataManager {
     return METADATA_CACHE.get(key);
   }
 
-  public Map<String, Set<String>> getHostedAppsCache() {
+  public Map<String, TimelineMetricHostMetadata> getHostedAppsCache() {
     return HOSTED_APPS_MAP;
   }
 
@@ -172,7 +187,7 @@ public class TimelineMetricMetadataManager {
     }
 
     TimelineMetricMetadataKey key = new TimelineMetricMetadataKey(
-      metadata.getMetricName(), metadata.getAppId());
+      metadata.getMetricName(), metadata.getAppId(), metadata.getInstanceId());
 
     TimelineMetricMetadata metadataFromCache = METADATA_CACHE.get(key);
 
@@ -197,10 +212,15 @@ public class TimelineMetricMetadataManager {
    * @param appId Application Id
    */
   public void putIfModifiedHostedAppsMetadata(String hostname, String appId) {
-    Set<String> apps = HOSTED_APPS_MAP.get(hostname);
+    TimelineMetricHostMetadata timelineMetricHostMetadata = HOSTED_APPS_MAP.get(hostname);
+    Set<String> apps = (timelineMetricHostMetadata != null) ? timelineMetricHostMetadata.getHostedApps() : null;
     if (apps == null) {
       apps = new HashSet<>();
-      HOSTED_APPS_MAP.put(hostname, apps);
+      if (timelineMetricHostMetadata == null) {
+        HOSTED_APPS_MAP.put(hostname, new TimelineMetricHostMetadata(apps));
+      } else {
+        HOSTED_APPS_MAP.get(hostname).setHostedApps(apps);
+      }
     }
 
     if (!apps.contains(appId)) {
@@ -230,7 +250,7 @@ public class TimelineMetricMetadataManager {
     hBaseAccessor.saveMetricMetadata(metadata);
   }
 
-  public void persistHostedAppsMetadata(Map<String, Set<String>> hostedApps) throws SQLException {
+  public void persistHostedAppsMetadata(Map<String, TimelineMetricHostMetadata> hostedApps) throws SQLException {
     hBaseAccessor.saveHostAppsMetadata(hostedApps);
   }
 
@@ -242,6 +262,7 @@ public class TimelineMetricMetadataManager {
     return new TimelineMetricMetadata(
       timelineMetric.getMetricName(),
       timelineMetric.getAppId(),
+      timelineMetric.getInstanceId(),
       timelineMetric.getUnits(),
       timelineMetric.getType(),
       timelineMetric.getStartTime(),
@@ -255,7 +276,7 @@ public class TimelineMetricMetadataManager {
   }
 
   boolean isDistributedModeEnabled() {
-    return metricsConf.get("timeline.metrics.service.operation.mode", "").equals("distributed");
+    return "distributed".equals(metricsConf.get("timeline.metrics.service.operation.mode"));
   }
 
   /**
@@ -270,7 +291,7 @@ public class TimelineMetricMetadataManager {
    * Fetch hosted apps from store
    * @throws SQLException
    */
-  Map<String, Set<String>> getHostedAppsFromStore() throws SQLException {
+  Map<String, TimelineMetricHostMetadata> getHostedAppsFromStore() throws SQLException {
     return hBaseAccessor.getHostedAppsMetadata();
   }
 
@@ -282,4 +303,255 @@ public class TimelineMetricMetadataManager {
     return MapUtils.isEmpty(metric.getMetadata()) ||
       !(String.valueOf(true).equals(metric.getMetadata().get("skipAggregation")));
   }
+
+  //////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+  // UUID Management
+  //////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
+
+
+  /**
+   * Populate the in-memory UUID maps from the metadata and hosted-apps caches on startup.
+   */
+  private void loadUuidMapsOnInit() {
+
+    for (TimelineMetricMetadataKey key : METADATA_CACHE.keySet()) {
+      TimelineMetricMetadata timelineMetricMetadata = METADATA_CACHE.get(key);
+      if (timelineMetricMetadata != null && timelineMetricMetadata.getUuid() != null) {
+        uuidKeyMap.put(new String(timelineMetricMetadata.getUuid()), key);
+      }
+    }
+
+    for (String host : HOSTED_APPS_MAP.keySet()) {
+      TimelineMetricHostMetadata timelineMetricHostMetadata = HOSTED_APPS_MAP.get(host);
+      if (timelineMetricHostMetadata != null && timelineMetricHostMetadata.getUuid() != null) {
+        uuidHostMap.put(new String(timelineMetricHostMetadata.getUuid()), host);
+      }
+    }
+  }
+
+  /**
+   * Returns the configured UUID generation strategy.
+   * @param configuration the metrics configuration
+   * @return RandomUuidGenStrategy if 'random' is configured, otherwise HashBasedUuidGenStrategy
+   */
+  private MetricUuidGenStrategy getUuidStrategy(Configuration configuration) {
+    String strategy = configuration.get(TIMELINE_METRICS_UUID_GEN_STRATEGY, "");
+    if ("random".equalsIgnoreCase(strategy)) {
+      return new RandomUuidGenStrategy();
+    } else {
+      return new HashBasedUuidGenStrategy();
+    }
+  }
+
+  /**
+   * Given a hostname, returns its UUID, computing and caching one of length 'hostnameUuidLength' if absent.
+   * @param hostname the host to compute the UUID for
+   * @return uuid byte array of length 'hostnameUuidLength', or null if the computed UUID collides with another host's
+   */
+  private byte[] getUuidForHostname(String hostname) {
+
+    TimelineMetricHostMetadata timelineMetricHostMetadata = HOSTED_APPS_MAP.get(hostname);
+    if (timelineMetricHostMetadata != null) {
+      byte[] uuid = timelineMetricHostMetadata.getUuid();
+      if (uuid != null) {
+        return uuid;
+      }
+    }
+
+    byte[] uuid = uuidGenStrategy.computeUuid(hostname, hostnameUuidLength);
+
+    String uuidStr = new String(uuid);
+    if (uuidHostMap.containsKey(uuidStr)) {
+      LOG.error("Duplicate key computed for " + hostname +", Collides with  " + uuidHostMap.get(uuidStr));
+      return null;
+    }
+
+    if (timelineMetricHostMetadata == null) {
+      timelineMetricHostMetadata = new TimelineMetricHostMetadata();
+      HOSTED_APPS_MAP.put(hostname, timelineMetricHostMetadata);
+    }
+    timelineMetricHostMetadata.setUuid(uuid);
+    uuidHostMap.put(uuidStr, hostname);
+
+    return uuid;
+  }
+
+  /**
+   * Given a timelineClusterMetric instance, generates a UUID for Metric-App-Instance combination.
+   * @param timelineClusterMetric
+   * @return uuid byte array of length 'timelineMetricUuidLength'
+   */
+  public byte[] getUuid(TimelineClusterMetric timelineClusterMetric) {
+    TimelineMetricMetadataKey key = new TimelineMetricMetadataKey(timelineClusterMetric.getMetricName(),
+      timelineClusterMetric.getAppId(), timelineClusterMetric.getInstanceId());
+
+    TimelineMetricMetadata timelineMetricMetadata = METADATA_CACHE.get(key);
+    if (timelineMetricMetadata != null) {
+      byte[] uuid = timelineMetricMetadata.getUuid();
+      if (uuid != null) {
+        return uuid;
+      }
+    }
+
+    byte[] uuid = uuidGenStrategy.computeUuid(timelineClusterMetric, timelineMetricUuidLength);
+
+    String uuidStr = new String(uuid);
+    if (uuidKeyMap.containsKey(uuidStr) && !uuidKeyMap.get(uuidStr).equals(key)) {
+      TimelineMetricMetadataKey collidingKey = uuidKeyMap.get(uuidStr);
+      LOG.error("Duplicate UUID " + Arrays.toString(uuid) + " computed for " + timelineClusterMetric.toString() + ", collides with " + collidingKey.toString());
+      return null;
+    }
+
+    if (timelineMetricMetadata == null) {
+      timelineMetricMetadata = new TimelineMetricMetadata();
+      timelineMetricMetadata.setMetricName(timelineClusterMetric.getMetricName());
+      timelineMetricMetadata.setAppId(timelineClusterMetric.getAppId());
+      timelineMetricMetadata.setInstanceId(timelineClusterMetric.getInstanceId());
+      METADATA_CACHE.put(key, timelineMetricMetadata);
+    }
+
+    timelineMetricMetadata.setUuid(uuid);
+    timelineMetricMetadata.setIsPersisted(false);
+    uuidKeyMap.put(uuidStr, key);
+    return uuid;
+  }
+
+  /**
+   * Given a timelineMetric instance, generates a UUID for the Metric-App-Instance-Host combination.
+   * @param timelineMetric
+   * @return uuid byte array of length 'timelineMetricUuidLength' + 'hostnameUuidLength'
+   */
+  public byte[] getUuid(TimelineMetric timelineMetric) {
+
+    byte[] metricUuid = getUuid(new TimelineClusterMetric(timelineMetric.getMetricName(), timelineMetric.getAppId(),
+      timelineMetric.getInstanceId(), -1L));
+    byte[] hostUuid = getUuidForHostname(timelineMetric.getHostName());
+
+    return ArrayUtils.addAll(metricUuid, hostUuid);
+  }
+
+  public String getMetricNameFromUuid(byte[] uuid) {
+
+    byte[] metricUuid = uuid;
+    if (uuid.length == timelineMetricUuidLength + hostnameUuidLength) {
+      metricUuid = ArrayUtils.subarray(uuid, 0, timelineMetricUuidLength);
+    }
+
+    TimelineMetricMetadataKey key = uuidKeyMap.get(new String(metricUuid));
+    return key != null ? key.getMetricName() : null;
+  }
+
+  public TimelineMetric getMetricFromUuid(byte[] uuid) {
+    if (uuid == null) {
+      return null;
+    }
+
+    if (uuid.length == timelineMetricUuidLength) {
+      TimelineMetricMetadataKey key = uuidKeyMap.get(new String(uuid));
+      return key != null ? new TimelineMetric(key.metricName, null, key.appId, key.instanceId) : null;
+    } else {
+      byte[] metricUuid = ArrayUtils.subarray(uuid, 0, timelineMetricUuidLength);
+      TimelineMetricMetadataKey key = uuidKeyMap.get(new String(metricUuid));
+      if (key == null) {
+        LOG.error("TimelineMetricMetadataKey is null for : " + Arrays.toString(uuid));
+        return null;
+      }
+      TimelineMetric timelineMetric = new TimelineMetric();
+      timelineMetric.setMetricName(key.metricName);
+      timelineMetric.setAppId(key.appId);
+      timelineMetric.setInstanceId(key.instanceId);
+
+      byte[] hostUuid = ArrayUtils.subarray(uuid, timelineMetricUuidLength, hostnameUuidLength + timelineMetricUuidLength);
+      timelineMetric.setHostName(uuidHostMap.get(new String(hostUuid)));
+      return timelineMetric;
+    }
+  }
+
+  /**
+   * Returns the list of UUIDs for a given GET request. Wildcards (%) in metric names and hostnames are resolved against the metadata caches.
+   * @param metricNames
+   * @param hostnames
+   * @param appId
+   * @param instanceId
+   * @return List of UUIDs
+   */
+  public List<byte[]> getUuids(Collection<String> metricNames, List<String> hostnames, String appId, String instanceId) {
+
+    Collection<String> sanitizedMetricNames = new HashSet<>();
+
+    for (String metricName : metricNames) {
+      if (metricName.contains("%")) {
+        String metricRegEx;
+        //Special case handling for metric name with * and __%.
+        //For example, dfs.NNTopUserOpCounts.windowMs=300000.op=*.user=%.count
+        // or dfs.NNTopUserOpCounts.windowMs=300000.op=__%.user=%.count
+        if (metricName.contains("*") || metricName.contains("__%")) {
+          String metricNameWithEscSeq = metricName.replace("*", "\\*").replace("__%", "..%");
+          metricRegEx = metricNameWithEscSeq.replace("%", ".*");
+        } else {
+          metricRegEx = metricName.replace("%", ".*");
+        }
+        for (TimelineMetricMetadataKey key : METADATA_CACHE.keySet()) {
+          String metricNameFromMetadata = key.getMetricName();
+          if (metricNameFromMetadata.matches(metricRegEx)) {
+            sanitizedMetricNames.add(metricNameFromMetadata);
+          }
+        }
+      } else {
+        sanitizedMetricNames.add(metricName);
+      }
+    }
+
+    Set<String> sanitizedHostNames = new HashSet<>();
+    if (CollectionUtils.isNotEmpty(hostnames)) {
+      for (String hostname : hostnames) {
+        if (hostname.contains("%")) {
+          String hostRegEx;
+          hostRegEx = hostname.replace("%", ".*");
+          for (String host : HOSTED_APPS_MAP.keySet()) {
+            if (host.matches(hostRegEx)) {
+              sanitizedHostNames.add(host);
+            }
+          }
+        } else {
+          sanitizedHostNames.add(hostname);
+        }
+      }
+    }
+
+    List<byte[]> uuids = new ArrayList<>();
+
+    if (!(appId.equals("HOST") || appId.equals("FLUME_HANDLER"))) { //HACK.. Why??
+      appId = appId.toLowerCase();
+    }
+    if (CollectionUtils.isNotEmpty(sanitizedHostNames)) {
+      for (String metricName : sanitizedMetricNames) {
+        TimelineMetric metric = new TimelineMetric();
+        metric.setMetricName(metricName);
+        metric.setAppId(appId);
+        metric.setInstanceId(instanceId);
+        for (String hostname : sanitizedHostNames) {
+          metric.setHostName(hostname);
+          byte[] uuid = getUuid(metric);
+          if (uuid != null) {
+            uuids.add(uuid);
+          }
+        }
+      }
+    } else {
+      for (String metricName : sanitizedMetricNames) {
+        TimelineClusterMetric metric = new TimelineClusterMetric(metricName, appId, instanceId, -1L);
+        byte[] uuid = getUuid(metric);
+        if (uuid != null) {
+          uuids.add(uuid);
+        }
+      }
+    }
+
+    return uuids;
+  }
+
+  public Map<String, TimelineMetricMetadataKey> getUuidKeyMap() {
+    return uuidKeyMap;
+  }
 }
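
A note on the row-key layout above: getUuid(TimelineMetric) composes the per-host key as a 16-byte metric UUID followed by a 4-byte hostname UUID, and getMetricFromUuid() splits it back at the same offsets. A minimal standalone sketch of that layout, assuming only commons-lang ArrayUtils (class and method names here are illustrative, not part of this patch):

    import org.apache.commons.lang.ArrayUtils;

    public class UuidLayoutSketch {
      private static final int METRIC_UUID_LENGTH = 16; // mirrors timelineMetricUuidLength
      private static final int HOST_UUID_LENGTH = 4;    // mirrors hostnameUuidLength

      // Concatenate metric and host UUIDs into the 20-byte per-host row key.
      static byte[] compose(byte[] metricUuid, byte[] hostUuid) {
        return ArrayUtils.addAll(metricUuid, hostUuid);
      }

      // Recover the metric part: the leading 16 bytes.
      static byte[] metricPart(byte[] rowKey) {
        return ArrayUtils.subarray(rowKey, 0, METRIC_UUID_LENGTH);
      }

      // Recover the host part: the trailing 4 bytes.
      static byte[] hostPart(byte[] rowKey) {
        return ArrayUtils.subarray(rowKey, METRIC_UUID_LENGTH, METRIC_UUID_LENGTH + HOST_UUID_LENGTH);
      }
    }
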
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TimelineMetricMetadataSync.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TimelineMetricMetadataSync.java
index 6d519f6..f808cd7 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TimelineMetricMetadataSync.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TimelineMetricMetadataSync.java
@@ -81,7 +81,7 @@ public class TimelineMetricMetadataSync implements Runnable {
     if (markSuccess) {
       for (TimelineMetricMetadata metadata : metadataToPersist) {
         TimelineMetricMetadataKey key = new TimelineMetricMetadataKey(
-          metadata.getMetricName(), metadata.getAppId()
+          metadata.getMetricName(), metadata.getAppId(), metadata.getInstanceId()
         );
 
         // Mark entry as being persisted
@@ -119,7 +119,7 @@ public class TimelineMetricMetadataSync implements Runnable {
    */
   private void persistHostAppsMetadata() {
     if (cacheManager.syncHostedAppsMetadata()) {
-      Map<String, Set<String>> persistedData = null;
+      Map<String, TimelineMetricHostMetadata> persistedData = null;
       try {
         persistedData = cacheManager.getHostedAppsFromStore();
       } catch (SQLException e) {
@@ -127,14 +127,14 @@ public class TimelineMetricMetadataSync implements Runnable {
         return; // Something wrong with store
       }
 
-      Map<String, Set<String>> cachedData = cacheManager.getHostedAppsCache();
-      Map<String, Set<String>> dataToSync = new HashMap<>();
+      Map<String, TimelineMetricHostMetadata> cachedData = cacheManager.getHostedAppsCache();
+      Map<String, TimelineMetricHostMetadata> dataToSync = new HashMap<>();
       if (cachedData != null && !cachedData.isEmpty()) {
-        for (Map.Entry<String, Set<String>> cacheEntry : cachedData.entrySet()) {
+        for (Map.Entry<String, TimelineMetricHostMetadata> cacheEntry : cachedData.entrySet()) {
           // No persistence / stale data in store
           if (persistedData == null || persistedData.isEmpty() ||
             !persistedData.containsKey(cacheEntry.getKey()) ||
-            !persistedData.get(cacheEntry.getKey()).containsAll(cacheEntry.getValue())) {
+            !persistedData.get(cacheEntry.getKey()).getHostedApps().containsAll(cacheEntry.getValue().getHostedApps())) {
             dataToSync.put(cacheEntry.getKey(), cacheEntry.getValue());
           }
         }
@@ -189,16 +189,16 @@ public class TimelineMetricMetadataSync implements Runnable {
    * Read all hosted apps metadata and update cached values - HA
    */
   private void refreshHostAppsMetadata() {
-    Map<String, Set<String>> hostedAppsDataFromStore = null;
+    Map<String, TimelineMetricHostMetadata> hostedAppsDataFromStore = null;
     try {
       hostedAppsDataFromStore = cacheManager.getHostedAppsFromStore();
     } catch (SQLException e) {
       LOG.warn("Error refreshing metadata from store.", e);
     }
     if (hostedAppsDataFromStore != null) {
-      Map<String, Set<String>> cachedData = cacheManager.getHostedAppsCache();
+      Map<String, TimelineMetricHostMetadata> cachedData = cacheManager.getHostedAppsCache();
 
-      for (Map.Entry<String, Set<String>> storeEntry : hostedAppsDataFromStore.entrySet()) {
+      for (Map.Entry<String, TimelineMetricHostMetadata> storeEntry : hostedAppsDataFromStore.entrySet()) {
         if (!cachedData.containsKey(storeEntry.getKey())) {
           cachedData.put(storeEntry.getKey(), storeEntry.getValue());
         }
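
persistHostAppsMetadata() above persists only the hosted-apps entries that are absent from the store or whose cached app set is no longer fully covered by it. A standalone sketch of that compare-then-persist step, with a generic persist callback standing in for the Phoenix accessor (an assumption, not this patch's API):

    import java.util.HashMap;
    import java.util.Map;
    import java.util.Set;
    import java.util.function.Consumer;

    public class HostedAppsSyncSketch {
      // Persist only entries whose cached app set is not fully contained in the store.
      static void syncDelta(Map<String, Set<String>> cached,
                            Map<String, Set<String>> persisted,
                            Consumer<Map<String, Set<String>>> persist) {
        Map<String, Set<String>> dataToSync = new HashMap<>();
        for (Map.Entry<String, Set<String>> entry : cached.entrySet()) {
          Set<String> stored = (persisted == null) ? null : persisted.get(entry.getKey());
          if (stored == null || !stored.containsAll(entry.getValue())) {
            dataToSync.put(entry.getKey(), entry.getValue());
          }
        }
        if (!dataToSync.isEmpty()) {
          persist.accept(dataToSync);
        }
      }
    }
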
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/Condition.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/Condition.java
index 9aa64bd..9714e1a 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/Condition.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/Condition.java
@@ -24,6 +24,7 @@ import java.util.List;
 public interface Condition {
   boolean isEmpty();
 
+  List<byte[]> getUuids();
   List<String> getMetricNames();
   boolean isPointInTime();
   boolean isGrouped();
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/ConditionBuilder.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/ConditionBuilder.java
index 32c1e84..f395c3e 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/ConditionBuilder.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/ConditionBuilder.java
@@ -42,6 +42,7 @@ public class ConditionBuilder {
   private Integer topN;
   private boolean isBottomN;
   private Function topNFunction;
+  private List<byte[]> uuids;
 
   public ConditionBuilder(List<String> metricNames) {
     this.metricNames = metricNames;
@@ -122,14 +123,19 @@ public class ConditionBuilder {
     return this;
   }
 
+  public ConditionBuilder uuid(List<byte[]> uuids) {
+    this.uuids = uuids;
+    return this;
+  }
+
   public Condition build() {
     if (topN == null) {
       return new DefaultCondition(
-        metricNames,
+        uuids, metricNames,
         hostnames, appId, instanceId, startTime, endTime,
         precision, limit, grouped);
     } else {
-      return new TopNCondition(metricNames, hostnames, appId, instanceId,
+      return new TopNCondition(uuids, metricNames, hostnames, appId, instanceId,
         startTime, endTime, precision, limit, grouped, topN, topNFunction, isBottomN);
     }
   }
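
With the new uuid() setter, callers are expected to resolve metric and host names to UUIDs first and then carry them into the condition. A hedged usage sketch (variable names and the surrounding setters are assumptions based on this diff's build() call, not code from this patch):

    // Resolve names to UUIDs via the metadata manager, then build the condition.
    List<byte[]> uuids = metadataManager.getUuids(metricNames, hostnames, appId, instanceId);

    Condition condition = new ConditionBuilder(metricNames)
      .hostnames(hostnames)      // pre-existing builder setters
      .appId(appId)
      .instanceId(instanceId)
      .startTime(startTime)
      .endTime(endTime)
      .uuid(uuids)               // new in this patch
      .build();
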
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/DefaultCondition.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/DefaultCondition.java
index a4f7014..3c03dca 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/DefaultCondition.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/DefaultCondition.java
@@ -43,6 +43,7 @@ public class DefaultCondition implements Condition {
   String statement;
   Set<String> orderByColumns = new LinkedHashSet<String>();
   boolean metricNamesNotCondition = false;
+  List<byte[]> uuids = new ArrayList<>();
 
   private static final Log LOG = LogFactory.getLog(DefaultCondition.class);
 
@@ -60,6 +61,21 @@ public class DefaultCondition implements Condition {
     this.grouped = grouped;
   }
 
+  public DefaultCondition(List<byte[]> uuids, List<String> metricNames, List<String> hostnames, String appId,
+                          String instanceId, Long startTime, Long endTime, Precision precision,
+                          Integer limit, boolean grouped) {
+    this.uuids = uuids;
+    this.metricNames = metricNames;
+    this.hostnames = hostnames;
+    this.appId = appId;
+    this.instanceId = instanceId;
+    this.startTime = startTime;
+    this.endTime = endTime;
+    this.precision = precision;
+    this.limit = limit;
+    this.grouped = grouped;
+  }
+
   public String getStatement() {
     return statement;
   }
@@ -74,13 +90,7 @@ public class DefaultCondition implements Condition {
 
   public StringBuilder getConditionClause() {
     StringBuilder sb = new StringBuilder();
-
-    boolean appendConjunction = appendMetricNameClause(sb);
-
-    appendConjunction = appendHostnameClause(sb, appendConjunction);
-
-    appendConjunction = append(sb, appendConjunction, getAppId(), " APP_ID = ?");
-    appendConjunction = append(sb, appendConjunction, getInstanceId(), " INSTANCE_ID = ?");
+    boolean appendConjunction = appendUuidClause(sb);
     appendConjunction = append(sb, appendConjunction, getStartTime(), " SERVER_TIME >= ?");
     append(sb, appendConjunction, getEndTime(), " SERVER_TIME < ?");
 
@@ -216,6 +226,37 @@ public class DefaultCondition implements Condition {
     return null;
   }
 
+  protected boolean appendUuidClause(StringBuilder sb) {
+    boolean appendConjunction = false;
+
+    if (CollectionUtils.isNotEmpty(uuids)) {
+      // Build a parenthesized IN clause: (UUID [NOT] IN (?, ?, ..., ?))
+      sb.append("(UUID");
+      if (metricNamesNotCondition) {
+        sb.append(" NOT");
+      }
+      sb.append(" IN (");
+      // Append one '?' placeholder per UUID
+      for (int i = 0; i < uuids.size(); i++) {
+        sb.append("?");
+        if (i < uuids.size() - 1) {
+          sb.append(", ");
+        }
+      }
+      sb.append("))");
+      appendConjunction = true;
+    }
+
+    return appendConjunction;
+  }
+
   protected boolean appendMetricNameClause(StringBuilder sb) {
     boolean appendConjunction = false;
     List<String> metricsLike = new ArrayList<>();
@@ -381,4 +422,9 @@ public class DefaultCondition implements Condition {
   public void setMetricNamesNotCondition(boolean metricNamesNotCondition) {
     this.metricNamesNotCondition = metricNamesNotCondition;
   }
+
+  @Override
+  public List<byte[]> getUuids() {
+    return uuids;
+  }
 }
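
For a query with three UUIDs and a start/end time, the clause built above comes out as a parameterized IN list followed by the time-range predicates. A small sketch that reproduces just that string-building step (standalone, not the class itself):

    // Builds the WHERE fragment the condition produces, e.g. for uuidCount = 3:
    // (UUID IN (?, ?, ?)) AND SERVER_TIME >= ? AND SERVER_TIME < ?
    static String buildWhereFragment(int uuidCount) {
      StringBuilder sb = new StringBuilder("(UUID IN (");
      for (int i = 0; i < uuidCount; i++) {
        sb.append(i == 0 ? "?" : ", ?");
      }
      sb.append("))");
      sb.append(" AND SERVER_TIME >= ? AND SERVER_TIME < ?");
      return sb.toString();
    }
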
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/EmptyCondition.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/EmptyCondition.java
index 43ab88c..b667df3 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/EmptyCondition.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/EmptyCondition.java
@@ -35,6 +35,11 @@ public class EmptyCondition implements Condition {
   }
 
   @Override
+  public List<byte[]> getUuids() {
+    return null;
+  }
+
+  @Override
   public List<String> getMetricNames() {
     return null;
   }
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/PhoenixTransactSQL.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/PhoenixTransactSQL.java
index e55ff61..25e9a02 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/PhoenixTransactSQL.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/PhoenixTransactSQL.java
@@ -40,20 +40,15 @@ public class PhoenixTransactSQL {
    * Create table to store individual metric records.
    */
   public static final String CREATE_METRICS_TABLE_SQL = "CREATE TABLE IF NOT " +
-    "EXISTS METRIC_RECORD (METRIC_NAME VARCHAR, " +
-    "HOSTNAME VARCHAR, " +
-    "SERVER_TIME UNSIGNED_LONG NOT NULL, " +
-    "APP_ID VARCHAR, " +
-    "INSTANCE_ID VARCHAR, " +
+    "EXISTS METRIC_RECORD (UUID BINARY(20) NOT NULL, " +
+    "SERVER_TIME BIGINT NOT NULL, " +
     "START_TIME UNSIGNED_LONG, " +
-    "UNITS CHAR(20), " +
     "METRIC_SUM DOUBLE, " +
     "METRIC_COUNT UNSIGNED_INT, " +
     "METRIC_MAX DOUBLE, " +
     "METRIC_MIN DOUBLE, " +
     "METRICS VARCHAR CONSTRAINT pk " +
-    "PRIMARY KEY (METRIC_NAME, HOSTNAME, SERVER_TIME, APP_ID, " +
-    "INSTANCE_ID)) DATA_BLOCK_ENCODING='%s', IMMUTABLE_ROWS=true, " +
+    "PRIMARY KEY (UUID, SERVER_TIME)) DATA_BLOCK_ENCODING='%s', IMMUTABLE_ROWS=true, " +
     "TTL=%s, COMPRESSION='%s'";
 
   public static final String CREATE_CONTAINER_METRICS_TABLE_SQL =
@@ -85,55 +80,44 @@ public class PhoenixTransactSQL {
 
   public static final String CREATE_METRICS_AGGREGATE_TABLE_SQL =
     "CREATE TABLE IF NOT EXISTS %s " +
-      "(METRIC_NAME VARCHAR, " +
-      "HOSTNAME VARCHAR, " +
-      "APP_ID VARCHAR, " +
-      "INSTANCE_ID VARCHAR, " +
+      "(UUID BINARY(20) NOT NULL, " +
       "SERVER_TIME UNSIGNED_LONG NOT NULL, " +
-      "UNITS CHAR(20), " +
       "METRIC_SUM DOUBLE," +
       "METRIC_COUNT UNSIGNED_INT, " +
       "METRIC_MAX DOUBLE," +
       "METRIC_MIN DOUBLE CONSTRAINT pk " +
-      "PRIMARY KEY (METRIC_NAME, HOSTNAME, APP_ID, INSTANCE_ID, " +
-      "SERVER_TIME)) DATA_BLOCK_ENCODING='%s', IMMUTABLE_ROWS=true, TTL=%s," +
+      "PRIMARY KEY (UUID, SERVER_TIME)) DATA_BLOCK_ENCODING='%s', IMMUTABLE_ROWS=true, TTL=%s," +
       " COMPRESSION='%s'";
 
   public static final String CREATE_METRICS_CLUSTER_AGGREGATE_TABLE_SQL =
     "CREATE TABLE IF NOT EXISTS %s " +
-      "(METRIC_NAME VARCHAR, " +
-      "APP_ID VARCHAR, " +
-      "INSTANCE_ID VARCHAR, " +
+      "(UUID BINARY(16) NOT NULL, " +
       "SERVER_TIME UNSIGNED_LONG NOT NULL, " +
-      "UNITS CHAR(20), " +
       "METRIC_SUM DOUBLE, " +
       "HOSTS_COUNT UNSIGNED_INT, " +
       "METRIC_MAX DOUBLE, " +
       "METRIC_MIN DOUBLE " +
-      "CONSTRAINT pk PRIMARY KEY (METRIC_NAME, APP_ID, INSTANCE_ID, " +
-      "SERVER_TIME)) DATA_BLOCK_ENCODING='%s', IMMUTABLE_ROWS=true, " +
+      "CONSTRAINT pk PRIMARY KEY (UUID, SERVER_TIME)) DATA_BLOCK_ENCODING='%s', IMMUTABLE_ROWS=true, " +
       "TTL=%s, COMPRESSION='%s'";
 
   // HOSTS_COUNT vs METRIC_COUNT
   public static final String CREATE_METRICS_CLUSTER_AGGREGATE_GROUPED_TABLE_SQL =
     "CREATE TABLE IF NOT EXISTS %s " +
-      "(METRIC_NAME VARCHAR, " +
-      "APP_ID VARCHAR, " +
-      "INSTANCE_ID VARCHAR, " +
+      "(UUID BINARY(16) NOT NULL, " +
       "SERVER_TIME UNSIGNED_LONG NOT NULL, " +
-      "UNITS CHAR(20), " +
       "METRIC_SUM DOUBLE, " +
       "METRIC_COUNT UNSIGNED_INT, " +
       "METRIC_MAX DOUBLE, " +
       "METRIC_MIN DOUBLE " +
-      "CONSTRAINT pk PRIMARY KEY (METRIC_NAME, APP_ID, INSTANCE_ID, " +
-      "SERVER_TIME)) DATA_BLOCK_ENCODING='%s', IMMUTABLE_ROWS=true, " +
+      "CONSTRAINT pk PRIMARY KEY (UUID, SERVER_TIME)) DATA_BLOCK_ENCODING='%s', IMMUTABLE_ROWS=true, " +
       "TTL=%s, COMPRESSION='%s'";
 
   public static final String CREATE_METRICS_METADATA_TABLE_SQL =
     "CREATE TABLE IF NOT EXISTS METRICS_METADATA " +
       "(METRIC_NAME VARCHAR, " +
       "APP_ID VARCHAR, " +
+      "INSTANCE_ID VARCHAR, " +
+      "UUID BINARY(16), " +
       "UNITS CHAR(20), " +
       "TYPE CHAR(20), " +
       "START_TIME UNSIGNED_LONG, " +
@@ -144,7 +128,7 @@ public class PhoenixTransactSQL {
 
   public static final String CREATE_HOSTED_APPS_METADATA_TABLE_SQL =
     "CREATE TABLE IF NOT EXISTS HOSTED_APPS_METADATA " +
-      "(HOSTNAME VARCHAR, APP_IDS VARCHAR, " +
+      "(HOSTNAME VARCHAR, UUID BINARY(4), APP_IDS VARCHAR, " +
       "CONSTRAINT pk PRIMARY KEY (HOSTNAME))" +
       "DATA_BLOCK_ENCODING='%s', COMPRESSION='%s'";
 
@@ -166,14 +150,15 @@ public class PhoenixTransactSQL {
    * Insert into metric records table.
    */
   public static final String UPSERT_METRICS_SQL = "UPSERT INTO %s " +
-    "(METRIC_NAME, HOSTNAME, APP_ID, INSTANCE_ID, SERVER_TIME, START_TIME, " +
-    "UNITS, " +
+    "(UUID, " +
+    "SERVER_TIME, " +
+    "START_TIME, " +
     "METRIC_SUM, " +
     "METRIC_MAX, " +
     "METRIC_MIN, " +
     "METRIC_COUNT, " +
     "METRICS) VALUES " +
-    "(?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)";
+    "(?, ?, ?, ?, ?, ?, ?, ?)";
 
   public static final String UPSERT_CONTAINER_METRICS_SQL = "UPSERT INTO %s " +
       "(APP_ID,"
@@ -201,40 +186,40 @@ public class PhoenixTransactSQL {
       "(?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)";
 
   public static final String UPSERT_CLUSTER_AGGREGATE_SQL = "UPSERT INTO " +
-    "%s (METRIC_NAME, APP_ID, INSTANCE_ID, SERVER_TIME, " +
-    "UNITS, " +
+    "%s (UUID, " +
+    "SERVER_TIME, " +
     "METRIC_SUM, " +
     "HOSTS_COUNT, " +
     "METRIC_MAX, " +
     "METRIC_MIN) " +
-    "VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)";
+    "VALUES (?, ?, ?, ?, ?, ?)";
 
   public static final String UPSERT_CLUSTER_AGGREGATE_TIME_SQL = "UPSERT INTO" +
-    " %s (METRIC_NAME, APP_ID, INSTANCE_ID, SERVER_TIME, " +
+    " %s (UUID, SERVER_TIME, " +
     "UNITS, " +
     "METRIC_SUM, " +
     "METRIC_COUNT, " +
     "METRIC_MAX, " +
     "METRIC_MIN) " +
-    "VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)";
+    "VALUES (?, ?, ?, ?, ?, ?)";
 
   public static final String UPSERT_AGGREGATE_RECORD_SQL = "UPSERT INTO " +
-    "%s (METRIC_NAME, HOSTNAME, APP_ID, INSTANCE_ID, " +
+    "%s (UUID, " +
     "SERVER_TIME, " +
     "UNITS, " +
     "METRIC_SUM, " +
     "METRIC_MAX, " +
     "METRIC_MIN," +
     "METRIC_COUNT) " +
-    "VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)";
+    "VALUES (?, ?, ?, ?, ?, ?)";
 
   public static final String UPSERT_METADATA_SQL =
-    "UPSERT INTO METRICS_METADATA (METRIC_NAME, APP_ID, UNITS, TYPE, " +
+    "UPSERT INTO METRICS_METADATA (METRIC_NAME, APP_ID, INSTANCE_ID, UUID, UNITS, TYPE, " +
       "START_TIME, SUPPORTS_AGGREGATION, IS_WHITELISTED) " +
-      "VALUES (?, ?, ?, ?, ?, ?, ?)";
+      "VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)";
 
   public static final String UPSERT_HOSTED_APPS_METADATA_SQL =
-    "UPSERT INTO HOSTED_APPS_METADATA (HOSTNAME, APP_IDS) VALUES (?, ?)";
+    "UPSERT INTO HOSTED_APPS_METADATA (HOSTNAME, UUID, APP_IDS) VALUES (?, ?, ?)";
 
   public static final String UPSERT_INSTANCE_HOST_METADATA_SQL =
     "UPSERT INTO INSTANCE_HOST_METADATA (INSTANCE_ID, HOSTNAME) VALUES (?, ?)";
@@ -242,8 +227,7 @@ public class PhoenixTransactSQL {
   /**
    * Retrieve a set of rows from metrics records table.
    */
-  public static final String GET_METRIC_SQL = "SELECT %s METRIC_NAME, " +
-    "HOSTNAME, APP_ID, INSTANCE_ID, SERVER_TIME, START_TIME, UNITS, " +
+  public static final String GET_METRIC_SQL = "SELECT %s UUID, SERVER_TIME, START_TIME, " +
     "METRIC_SUM, " +
     "METRIC_MAX, " +
     "METRIC_MIN, " +
@@ -257,31 +241,24 @@ public class PhoenixTransactSQL {
    * Different queries for a number and a single hosts are used due to bug
    * in Apache Phoenix
    */
-  public static final String GET_LATEST_METRIC_SQL = "SELECT %s " +
-    "E.METRIC_NAME AS METRIC_NAME, E.HOSTNAME AS HOSTNAME, " +
-    "E.APP_ID AS APP_ID, E.INSTANCE_ID AS INSTANCE_ID, " +
+  public static final String GET_LATEST_METRIC_SQL = "SELECT %s E.UUID AS UUID, " +
     "E.SERVER_TIME AS SERVER_TIME, E.START_TIME AS START_TIME, " +
-    "E.UNITS AS UNITS, E.METRIC_SUM AS METRIC_SUM, " +
+    "E.METRIC_SUM AS METRIC_SUM, " +
     "E.METRIC_MAX AS METRIC_MAX, E.METRIC_MIN AS METRIC_MIN, " +
     "E.METRIC_COUNT AS METRIC_COUNT, E.METRICS AS METRICS " +
     "FROM %s AS E " +
     "INNER JOIN " +
-    "(SELECT METRIC_NAME, HOSTNAME, MAX(SERVER_TIME) AS MAX_SERVER_TIME, " +
-    "APP_ID, INSTANCE_ID " +
+    "(SELECT UUID, MAX(SERVER_TIME) AS MAX_SERVER_TIME " +
     "FROM %s " +
     "WHERE " +
     "%s " +
-    "GROUP BY METRIC_NAME, HOSTNAME, APP_ID, INSTANCE_ID) " +
+    "GROUP BY UUID) " +
     "AS I " +
-    "ON E.METRIC_NAME=I.METRIC_NAME " +
-    "AND E.HOSTNAME=I.HOSTNAME " +
-    "AND E.SERVER_TIME=I.MAX_SERVER_TIME " +
-    "AND E.APP_ID=I.APP_ID " +
-    "AND E.INSTANCE_ID=I.INSTANCE_ID";
-
-  public static final String GET_METRIC_AGGREGATE_ONLY_SQL = "SELECT %s " +
-    "METRIC_NAME, HOSTNAME, APP_ID, INSTANCE_ID, SERVER_TIME, " +
-    "UNITS, " +
+    "ON E.UUID=I.UUID " +
+    "AND E.SERVER_TIME=I.MAX_SERVER_TIME";
+
+  public static final String GET_METRIC_AGGREGATE_ONLY_SQL = "SELECT %s UUID, " +
+    "SERVER_TIME, " +
     "METRIC_SUM, " +
     "METRIC_MAX, " +
     "METRIC_MIN, " +
@@ -289,9 +266,8 @@ public class PhoenixTransactSQL {
     "FROM %s";
 
   public static final String GET_CLUSTER_AGGREGATE_SQL = "SELECT %s " +
-    "METRIC_NAME, APP_ID, " +
-    "INSTANCE_ID, SERVER_TIME, " +
-    "UNITS, " +
+    "UUID, " +
+    "SERVER_TIME, " +
     "METRIC_SUM, " +
     "HOSTS_COUNT, " +
     "METRIC_MAX, " +
@@ -299,24 +275,23 @@ public class PhoenixTransactSQL {
     "FROM %s";
 
   public static final String GET_CLUSTER_AGGREGATE_TIME_SQL = "SELECT %s " +
-    "METRIC_NAME, APP_ID, " +
-    "INSTANCE_ID, SERVER_TIME, " +
-    "UNITS, " +
+    "UUID, " +
+    "SERVER_TIME, " +
     "METRIC_SUM, " +
     "METRIC_COUNT, " +
     "METRIC_MAX, " +
     "METRIC_MIN " +
     "FROM %s";
 
-  public static final String TOP_N_INNER_SQL = "SELECT %s %s " +
-    "FROM %s WHERE %s GROUP BY %s ORDER BY %s LIMIT %s";
+  public static final String TOP_N_INNER_SQL = "SELECT %s UUID " +
+    "FROM %s WHERE %s GROUP BY UUID ORDER BY %s LIMIT %s";
 
   public static final String GET_METRIC_METADATA_SQL = "SELECT " +
-    "METRIC_NAME, APP_ID, UNITS, TYPE, START_TIME, " +
+    "METRIC_NAME, APP_ID, INSTANCE_ID, UUID, UNITS, TYPE, START_TIME, " +
     "SUPPORTS_AGGREGATION, IS_WHITELISTED FROM METRICS_METADATA";
 
   public static final String GET_HOSTED_APPS_METADATA_SQL = "SELECT " +
-    "HOSTNAME, APP_IDS FROM HOSTED_APPS_METADATA";
+    "HOSTNAME, UUID, APP_IDS FROM HOSTED_APPS_METADATA";
 
   public static final String GET_INSTANCE_HOST_METADATA_SQL = "SELECT " +
     "INSTANCE_ID, HOSTNAME FROM INSTANCE_HOST_METADATA";
@@ -325,44 +300,41 @@ public class PhoenixTransactSQL {
    * Aggregate host metrics using a GROUP BY clause to take advantage of
    * N - way parallel scan where N = number of regions.
    */
-  public static final String GET_AGGREGATED_HOST_METRIC_GROUPBY_SQL = "UPSERT %s " +
-    "INTO %s (METRIC_NAME, HOSTNAME, APP_ID, INSTANCE_ID, SERVER_TIME, UNITS, " +
-    "METRIC_SUM, METRIC_COUNT, METRIC_MAX, METRIC_MIN) " +
-    "SELECT METRIC_NAME, HOSTNAME, APP_ID, INSTANCE_ID, %s AS SERVER_TIME, UNITS, " +
+  public static final String GET_AGGREGATED_HOST_METRIC_GROUPBY_SQL = "UPSERT " +
+    "INTO %s (UUID, SERVER_TIME, METRIC_SUM, METRIC_COUNT, METRIC_MAX, METRIC_MIN) " +
+    "SELECT UUID, %s AS SERVER_TIME, " +
     "ROUND(SUM(METRIC_SUM)/SUM(METRIC_COUNT),2), SUM(METRIC_COUNT), MAX(METRIC_MAX), MIN(METRIC_MIN) " +
     "FROM %s WHERE%s SERVER_TIME > %s AND SERVER_TIME <= %s " +
-    "GROUP BY METRIC_NAME, HOSTNAME, APP_ID, INSTANCE_ID, UNITS";
+    "GROUP BY UUID";
 
   /**
    * Downsample host metrics.
    */
-  public static final String DOWNSAMPLE_HOST_METRIC_SQL_UPSERT_PREFIX = "UPSERT %s INTO %s (METRIC_NAME, HOSTNAME, " +
-    "APP_ID, INSTANCE_ID, SERVER_TIME, UNITS, METRIC_SUM, METRIC_COUNT, METRIC_MAX, METRIC_MIN) ";
+  public static final String DOWNSAMPLE_HOST_METRIC_SQL_UPSERT_PREFIX = "UPSERT %s INTO %s (UUID, SERVER_TIME, " +
+    "METRIC_SUM, METRIC_COUNT, METRIC_MAX, METRIC_MIN) ";
 
-  public static final String TOPN_DOWNSAMPLER_HOST_METRIC_SELECT_SQL = "SELECT METRIC_NAME, HOSTNAME, APP_ID, INSTANCE_ID, " +
-    "%s AS SERVER_TIME, UNITS, %s, 1, %s, %s FROM %s WHERE METRIC_NAME LIKE %s AND SERVER_TIME > %s AND SERVER_TIME <= %s " +
-    "GROUP BY METRIC_NAME, HOSTNAME, APP_ID, INSTANCE_ID, UNITS ORDER BY %s DESC LIMIT %s";
+  public static final String TOPN_DOWNSAMPLER_HOST_METRIC_SELECT_SQL = "SELECT UUID, " +
+    "%s AS SERVER_TIME, %s, 1, %s, %s FROM %s WHERE UUID IN %s AND SERVER_TIME > %s AND SERVER_TIME <= %s " +
+    "GROUP BY UUID ORDER BY %s DESC LIMIT %s";
 
   /**
    * Aggregate app metrics using a GROUP BY clause to take advantage of
    * N - way parallel scan where N = number of regions.
    */
   public static final String GET_AGGREGATED_APP_METRIC_GROUPBY_SQL = "UPSERT %s " +
-    "INTO %s (METRIC_NAME, APP_ID, INSTANCE_ID, SERVER_TIME, UNITS, " +
-    "METRIC_SUM, METRIC_COUNT, METRIC_MAX, METRIC_MIN) SELECT METRIC_NAME, APP_ID, " +
-    "INSTANCE_ID, %s AS SERVER_TIME, UNITS, ROUND(AVG(METRIC_SUM),2), ROUND(AVG(%s)), " +
-    "MAX(METRIC_MAX), MIN(METRIC_MIN) FROM %s WHERE%s SERVER_TIME > %s AND " +
-    "SERVER_TIME <= %s GROUP BY METRIC_NAME, APP_ID, INSTANCE_ID, UNITS";
+         "INTO %s (UUID, SERVER_TIME, METRIC_SUM, METRIC_COUNT, METRIC_MAX, METRIC_MIN) SELECT UUID, %s AS SERVER_TIME, " +
+         "ROUND(AVG(METRIC_SUM),2), ROUND(AVG(%s)), MAX(METRIC_MAX), MIN(METRIC_MIN) FROM %s WHERE%s SERVER_TIME > %s AND " +
+         "SERVER_TIME <= %s GROUP BY UUID";
 
   /**
    * Downsample cluster metrics.
    */
-  public static final String DOWNSAMPLE_CLUSTER_METRIC_SQL_UPSERT_PREFIX = "UPSERT %s INTO %s (METRIC_NAME, APP_ID, " +
-    "INSTANCE_ID, SERVER_TIME, UNITS, METRIC_SUM, METRIC_COUNT, METRIC_MAX, METRIC_MIN) ";
+  public static final String DOWNSAMPLE_CLUSTER_METRIC_SQL_UPSERT_PREFIX = "UPSERT %s INTO %s (UUID, SERVER_TIME, " +
+    "METRIC_SUM, METRIC_COUNT, METRIC_MAX, METRIC_MIN) ";
 
-  public static final String TOPN_DOWNSAMPLER_CLUSTER_METRIC_SELECT_SQL = "SELECT METRIC_NAME, APP_ID, INSTANCE_ID," +
-    " %s AS SERVER_TIME, UNITS, %s, 1, %s, %s FROM %s WHERE METRIC_NAME LIKE %s AND SERVER_TIME > %s AND SERVER_TIME <= %s " +
-    "GROUP BY METRIC_NAME, APP_ID, INSTANCE_ID, UNITS ORDER BY %s DESC LIMIT %s";
+  public static final String TOPN_DOWNSAMPLER_CLUSTER_METRIC_SELECT_SQL = "SELECT UUID, " +
+    "%s AS SERVER_TIME, %s, 1, %s, %s FROM %s WHERE UUID IN %s AND SERVER_TIME > %s AND SERVER_TIME <= %s " +
+    "GROUP BY UUID ORDER BY %s DESC LIMIT %s";
 
   /**
    * Event based downsampler SELECT query.
@@ -489,7 +461,7 @@ public class PhoenixTransactSQL {
       if (orderByClause != null) {
         sb.append(orderByClause);
       } else {
-        sb.append(" ORDER BY METRIC_NAME, SERVER_TIME ");
+        sb.append(" ORDER BY UUID, SERVER_TIME ");
       }
     }
 
@@ -505,30 +477,13 @@ public class PhoenixTransactSQL {
     try {
       stmt = connection.prepareStatement(sb.toString());
       int pos = 1;
-      pos = addMetricNames(condition, pos, stmt);
+      pos = addUuids(condition, pos, stmt);
 
       if (condition instanceof TopNCondition) {
-        TopNCondition topNCondition = (TopNCondition) condition;
-        if (topNCondition.isTopNHostCondition()) {
-          pos = addMetricNames(condition, pos, stmt);
-        }
-      }
-
-      pos = addHostNames(condition, pos, stmt);
-
-      if (condition instanceof TopNCondition) {
-        pos = addAppId(condition, pos, stmt);
-        pos = addInstanceId(condition, pos, stmt);
         pos = addStartTime(condition, pos, stmt);
         pos = addEndTime(condition, pos, stmt);
-        TopNCondition topNCondition = (TopNCondition) condition;
-        if (topNCondition.isTopNMetricCondition()) {
-          pos = addHostNames(condition, pos, stmt);
-        }
       }
 
-      pos = addAppId(condition, pos, stmt);
-      pos = addInstanceId(condition, pos, stmt);
       pos = addStartTime(condition, pos, stmt);
       addEndTime(condition, pos, stmt);
 
@@ -542,6 +497,9 @@ public class PhoenixTransactSQL {
       throw e;
     }
 
+    if (condition instanceof TopNCondition) {
+      LOG.info(sb.toString());
+    }
     return stmt;
   }
 
@@ -639,36 +597,11 @@ public class PhoenixTransactSQL {
     int pos = 1;
     //For GET_LATEST_METRIC_SQL_SINGLE_HOST parameters should be set 2 times
     do {
-      if (condition.getMetricNames() != null) {
-        for (String metricName : condition.getMetricNames()) {
-          if (LOG.isDebugEnabled()) {
-            LOG.debug("Setting pos: " + pos + ", value = " + metricName);
-          }
-          stmt.setString(pos++, metricName);
-        }
-      }
-      if (condition.getHostnames() != null) {
-        for (String hostname : condition.getHostnames()) {
-          if (LOG.isDebugEnabled()) {
-            LOG.debug("Setting pos: " + pos + ", value: " + hostname);
-          }
-          stmt.setString(pos++, hostname);
+      if (condition.getUuids() != null) {
+        for (byte[] uuid : condition.getUuids()) {
+          stmt.setBytes(pos++, uuid);
         }
       }
-      if (condition.getAppId() != null) {
-        if (LOG.isDebugEnabled()) {
-          LOG.debug("Setting pos: " + pos + ", value: " + condition.getAppId());
-        }
-        stmt.setString(pos++, condition.getAppId());
-      }
-      if (condition.getInstanceId() != null) {
-        if (LOG.isDebugEnabled()) {
-          LOG.debug("Setting pos: " + pos +
-            ", value: " + condition.getInstanceId());
-        }
-        stmt.setString(pos++, condition.getInstanceId());
-      }
-
       if (condition.getFetchSize() != null) {
         stmt.setFetchSize(condition.getFetchSize());
         pos++;
@@ -716,7 +649,7 @@ public class PhoenixTransactSQL {
     StringBuilder sb = new StringBuilder(queryStmt);
     sb.append(" WHERE ");
     sb.append(condition.getConditionClause());
-    sb.append(" ORDER BY METRIC_NAME, SERVER_TIME");
+    sb.append(" ORDER BY UUID, SERVER_TIME");
     if (condition.getLimit() != null) {
       sb.append(" LIMIT ").append(condition.getLimit());
     }
@@ -731,20 +664,16 @@ public class PhoenixTransactSQL {
       stmt = connection.prepareStatement(query);
       int pos = 1;
 
-      pos = addMetricNames(condition, pos, stmt);
+      pos = addUuids(condition, pos, stmt);
 
       if (condition instanceof TopNCondition) {
-        pos = addAppId(condition, pos, stmt);
-        pos = addInstanceId(condition, pos, stmt);
         pos = addStartTime(condition, pos, stmt);
         pos = addEndTime(condition, pos, stmt);
       }
 
       // TODO: Upper case all strings on POST
-      pos = addAppId(condition, pos, stmt);
-      pos = addInstanceId(condition, pos, stmt);
       pos = addStartTime(condition, pos, stmt);
-      pos = addEndTime(condition, pos, stmt);
+      addEndTime(condition, pos, stmt);
     } catch (SQLException e) {
       if (stmt != null) {
         stmt.close();
@@ -752,11 +681,14 @@ public class PhoenixTransactSQL {
       throw e;
     }
 
+    if (condition instanceof TopNCondition) {
+      LOG.info(sb.toString());
+    }
     return stmt;
   }
 
   public static PreparedStatement prepareGetLatestAggregateMetricSqlStmt(
-    Connection connection, Condition condition) throws SQLException {
+    Connection connection, SplitByMetricNamesCondition condition) throws SQLException {
 
     validateConditionIsNotEmpty(condition);
 
@@ -775,7 +707,7 @@ public class PhoenixTransactSQL {
     if (orderByClause != null) {
       sb.append(orderByClause);
     } else {
-      sb.append(" ORDER BY METRIC_NAME DESC, SERVER_TIME DESC  ");
+      sb.append(" ORDER BY UUID DESC, SERVER_TIME DESC  ");
     }
 
     sb.append(" LIMIT ").append(condition.getMetricNames().size());
@@ -791,18 +723,9 @@ public class PhoenixTransactSQL {
       int pos = 1;
       if (condition.getMetricNames() != null) {
         for (; pos <= condition.getMetricNames().size(); pos++) {
-          stmt.setString(pos, condition.getMetricNames().get(pos - 1));
+          stmt.setBytes(pos, condition.getCurrentUuid());
         }
       }
-      if (condition.getAppId() != null) {
-        if (LOG.isDebugEnabled()) {
-          LOG.debug("Setting pos: " + pos + ", value: " + condition.getAppId());
-        }
-        stmt.setString(pos++, condition.getAppId());
-      }
-      if (condition.getInstanceId() != null) {
-        stmt.setString(pos, condition.getInstanceId());
-      }
     } catch (SQLException e) {
       if (stmt != null) {
 
@@ -856,50 +779,14 @@ public class PhoenixTransactSQL {
     return inputTable;
   }
 
-  private static int addMetricNames(Condition condition, int pos, PreparedStatement stmt) throws SQLException {
-    if (condition.getMetricNames() != null) {
-      for (int pos2 = 1 ; pos2 <= condition.getMetricNames().size(); pos2++,pos++) {
+  private static int addUuids(Condition condition, int pos, PreparedStatement stmt) throws SQLException {
+    if (condition.getUuids() != null) {
+      for (int pos2 = 1 ; pos2 <= condition.getUuids().size(); pos2++,pos++) {
         if (LOG.isDebugEnabled()) {
-          LOG.debug("Setting pos: " + pos + ", value = " + condition.getMetricNames().get(pos2 - 1));
+          LOG.debug("Setting pos: " + pos + ", value = " + condition.getUuids().get(pos2 - 1));
         }
-        stmt.setString(pos, condition.getMetricNames().get(pos2 - 1));
-      }
-    }
-    return pos;
-  }
-
-  private static int addHostNames(Condition condition, int pos, PreparedStatement stmt) throws SQLException {
-    int i = pos;
-    if (condition.getHostnames() != null) {
-      for (String hostname : condition.getHostnames()) {
-        if (LOG.isDebugEnabled()) {
-          LOG.debug("Setting pos: " + pos + ", value: " + hostname);
-        }
-        stmt.setString(i++, hostname);
-      }
-    }
-    return i;
-  }
-
-
-  private static int addAppId(Condition condition, int pos, PreparedStatement stmt) throws SQLException {
-
-    if (condition.getAppId() != null) {
-      if (LOG.isDebugEnabled()) {
-        LOG.debug("Setting pos: " + pos + ", value: " + condition.getAppId());
-      }
-      stmt.setString(pos++, condition.getAppId());
-    }
-    return pos;
-  }
-
-  private static int addInstanceId(Condition condition, int pos, PreparedStatement stmt) throws SQLException {
-
-    if (condition.getInstanceId() != null) {
-      if (LOG.isDebugEnabled()) {
-        LOG.debug("Setting pos: " + pos + ", value: " + condition.getInstanceId());
+        stmt.setBytes(pos, condition.getUuids().get(pos2 - 1));
       }
-      stmt.setString(pos++, condition.getInstanceId());
     }
     return pos;
   }
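
Since the row key is now a fixed-width BINARY column, parameters are bound with setBytes() rather than setString(). A minimal JDBC sketch against the reworked UPSERT_METRICS_SQL parameter order (the connection and the metric values are assumptions):

    String sql = String.format(PhoenixTransactSQL.UPSERT_METRICS_SQL, "METRIC_RECORD");
    try (PreparedStatement stmt = connection.prepareStatement(sql)) {
      stmt.setBytes(1, uuid);          // UUID: 20-byte binary row key from the metadata manager
      stmt.setLong(2, serverTime);     // SERVER_TIME
      stmt.setLong(3, startTime);      // START_TIME
      stmt.setDouble(4, sum);          // METRIC_SUM
      stmt.setDouble(5, max);          // METRIC_MAX
      stmt.setDouble(6, min);          // METRIC_MIN
      stmt.setInt(7, count);           // METRIC_COUNT
      stmt.setString(8, metricsJson);  // METRICS (serialized datapoints)
      stmt.executeUpdate();
    }
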
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/SplitByMetricNamesCondition.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/SplitByMetricNamesCondition.java
index bb4dced..45ea74c 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/SplitByMetricNamesCondition.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/SplitByMetricNamesCondition.java
@@ -24,7 +24,7 @@ import java.util.List;
 // TODO get rid of this class
 public class SplitByMetricNamesCondition implements Condition {
   private final Condition adaptee;
-  private String currentMetric;
+  private byte[] currentUuid;
   private boolean metricNamesNotCondition = false;
 
   public SplitByMetricNamesCondition(Condition condition){
@@ -37,8 +37,13 @@ public class SplitByMetricNamesCondition implements Condition {
   }
 
   @Override
+  public List<byte[]> getUuids() {
+    return adaptee.getUuids();
+  }
+
+  @Override
   public List<String> getMetricNames() {
-    return Collections.singletonList(currentMetric);
+    return Collections.singletonList(new String(currentUuid));
   }
 
   @Override
@@ -91,31 +96,12 @@ public class SplitByMetricNamesCondition implements Condition {
         if (sb.length() > 1) {
           sb.append(" OR ");
         }
-        sb.append("METRIC_NAME = ?");
+        sb.append("UUID = ?");
       }
 
       appendConjunction = true;
     }
-    // TODO prevent user from using this method with multiple hostnames and SQL LIMIT clause
-    if (getHostnames() != null && getHostnames().size() > 1) {
-      StringBuilder hostnamesCondition = new StringBuilder();
-      for (String hostname: getHostnames()) {
-        if (hostnamesCondition.length() > 0) {
-          hostnamesCondition.append(" ,");
-        } else {
-          hostnamesCondition.append(" HOSTNAME IN (");
-        }
-        hostnamesCondition.append('?');
-      }
-      hostnamesCondition.append(')');
-      appendConjunction = DefaultCondition.append(sb, appendConjunction, getHostnames(), hostnamesCondition.toString());
-    } else {
-      appendConjunction = DefaultCondition.append(sb, appendConjunction, getHostnames(), " HOSTNAME = ?");
-    }
-    appendConjunction = DefaultCondition.append(sb, appendConjunction,
-      getAppId(), " APP_ID = ?");
-    appendConjunction = DefaultCondition.append(sb, appendConjunction,
-      getInstanceId(), " INSTANCE_ID = ?");
+
     appendConjunction = DefaultCondition.append(sb, appendConjunction,
       getStartTime(), " SERVER_TIME >= ?");
     DefaultCondition.append(sb, appendConjunction, getEndTime(),
@@ -178,8 +164,12 @@ public class SplitByMetricNamesCondition implements Condition {
     return adaptee.getMetricNames();
   }
 
-  public void setCurrentMetric(String currentMetric) {
-    this.currentMetric = currentMetric;
+  public void setCurrentUuid(byte[] uuid) {
+    this.currentUuid = uuid;
+  }
+
+  public byte[] getCurrentUuid() {
+    return currentUuid;
   }
 
  @Override
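
The adaptor above now carries a single current UUID per pass rather than a metric name; callers iterate the full UUID list and run one latest-metric query per UUID. A hedged sketch of that driving loop (executeQuery is a stand-in, not this patch's API):

    SplitByMetricNamesCondition splitCondition = new SplitByMetricNamesCondition(condition);
    for (byte[] uuid : splitCondition.getUuids()) {
      splitCondition.setCurrentUuid(uuid);  // binds "UUID = ?" for this pass
      executeQuery(splitCondition);         // hypothetical per-UUID query execution
    }
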
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/TopNCondition.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/TopNCondition.java
index 0f2a02c..93242bd 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/TopNCondition.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/TopNCondition.java
@@ -32,11 +32,11 @@ public class TopNCondition extends DefaultCondition{
   private Function topNFunction;
   private static final Log LOG = LogFactory.getLog(TopNCondition.class);
 
-  public TopNCondition(List<String> metricNames, List<String> hostnames, String appId,
+  public TopNCondition(List<byte[]> uuids, List<String> metricNames, List<String> hostnames, String appId,
                           String instanceId, Long startTime, Long endTime, Precision precision,
                           Integer limit, boolean grouped, Integer topN, Function topNFunction,
                           boolean isBottomN) {
-    super(metricNames, hostnames, appId, instanceId, startTime, endTime, precision, limit, grouped);
+    super(uuids, metricNames, hostnames, appId, instanceId, startTime, endTime, precision, limit, grouped);
     this.topN = topN;
     this.isBottomN = isBottomN;
     this.topNFunction = topNFunction;
@@ -44,34 +44,20 @@ public class TopNCondition extends DefaultCondition{
 
   @Override
   public StringBuilder getConditionClause() {
-    StringBuilder sb = new StringBuilder();
-    boolean appendConjunction = false;
-
-    if (isTopNHostCondition(metricNames, hostnames)) {
-      appendConjunction = appendMetricNameClause(sb);
-
-      StringBuilder hostnamesCondition = new StringBuilder();
-      hostnamesCondition.append(" HOSTNAME IN (");
-      hostnamesCondition.append(getTopNInnerQuery());
-      hostnamesCondition.append(")");
-      appendConjunction = append(sb, appendConjunction, getHostnames(), hostnamesCondition.toString());
-
-    } else if (isTopNMetricCondition(metricNames, hostnames)) {
-
-      StringBuilder metricNamesCondition = new StringBuilder();
-      metricNamesCondition.append(" METRIC_NAME IN (");
-      metricNamesCondition.append(getTopNInnerQuery());
-      metricNamesCondition.append(")");
-      appendConjunction = append(sb, appendConjunction, getMetricNames(), metricNamesCondition.toString());
-      appendConjunction = appendHostnameClause(sb, appendConjunction);
-    } else {
+
+
+    if (!(isTopNHostCondition(metricNames, hostnames) || isTopNMetricCondition(metricNames, hostnames))) {
       LOG.error("Unsupported TopN Operation requested. Query can have either multiple hosts or multiple metric names " +
         "but not both.");
       return null;
     }
 
-    appendConjunction = append(sb, appendConjunction, getAppId(), " APP_ID = ?");
-    appendConjunction = append(sb, appendConjunction, getInstanceId(), " INSTANCE_ID = ?");
+    StringBuilder sb = new StringBuilder();
+    sb.append(" UUID IN (");
+    sb.append(getTopNInnerQuery());
+    sb.append(")");
+
+    boolean appendConjunction = true;
     appendConjunction = append(sb, appendConjunction, getStartTime(), " SERVER_TIME >= ?");
     append(sb, appendConjunction, getEndTime(), " SERVER_TIME < ?");
 
@@ -79,29 +65,10 @@ public class TopNCondition extends DefaultCondition{
   }
 
   public String getTopNInnerQuery() {
-    String innerQuery = null;
-
-    if (isTopNHostCondition(metricNames, hostnames)) {
-      String groupByClause = "METRIC_NAME, HOSTNAME, APP_ID";
-      String orderByClause = getTopNOrderByClause();
-
-      innerQuery = String.format(PhoenixTransactSQL.TOP_N_INNER_SQL, PhoenixTransactSQL.getNaiveTimeRangeHint(getStartTime(), NATIVE_TIME_RANGE_DELTA),
-        "HOSTNAME", PhoenixTransactSQL.getTargetTableUsingPrecision(precision, true), super.getConditionClause().toString(),
-        groupByClause, orderByClause, topN);
-
-
-    } else if (isTopNMetricCondition(metricNames, hostnames)) {
-
-      String groupByClause = "METRIC_NAME, APP_ID";
-      String orderByClause = getTopNOrderByClause();
-
-      innerQuery = String.format(PhoenixTransactSQL.TOP_N_INNER_SQL, PhoenixTransactSQL.getNaiveTimeRangeHint(getStartTime(), NATIVE_TIME_RANGE_DELTA),
-        "METRIC_NAME", PhoenixTransactSQL.getTargetTableUsingPrecision(precision, (hostnames != null && hostnames.size() == 1)),
-        super.getConditionClause().toString(),
-        groupByClause, orderByClause, topN);
-    }
-
-    return innerQuery;
+    return String.format(PhoenixTransactSQL.TOP_N_INNER_SQL,
+      PhoenixTransactSQL.getNaiveTimeRangeHint(getStartTime(), NATIVE_TIME_RANGE_DELTA),
+      PhoenixTransactSQL.getTargetTableUsingPrecision(precision, CollectionUtils.isNotEmpty(hostnames)),
+      super.getConditionClause().toString(), getTopNOrderByClause(), topN);
   }
 
   private String getTopNOrderByClause() {
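Net effect of the change above: because the UUID row key already encodes metric name, hostname, appId and instanceId, both TopN variants (top hosts for one metric, top metrics for one host) can share a single outer clause. A minimal sketch of the clause shape this produces; the inner SELECT is a hypothetical stand-in for the formatted PhoenixTransactSQL.TOP_N_INNER_SQL:

    public class TopNClauseShapeSketch {
      public static void main(String[] args) {
        // Hypothetical stand-in for the formatted TOP_N_INNER_SQL inner query.
        String innerQuery =
            "SELECT UUID FROM METRIC_RECORD GROUP BY UUID ORDER BY MAX(METRIC_MAX) DESC LIMIT 3";

        StringBuilder sb = new StringBuilder();
        sb.append(" UUID IN (").append(innerQuery).append(")");
        sb.append(" AND SERVER_TIME >= ?");  // start/end times bound later as statement parameters
        sb.append(" AND SERVER_TIME < ?");

        System.out.println(sb);
      }
    }
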
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/uuid/HashBasedUuidGenStrategy.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/uuid/HashBasedUuidGenStrategy.java
new file mode 100644
index 0000000..f35c23a
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/uuid/HashBasedUuidGenStrategy.java
@@ -0,0 +1,202 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.uuid;
+
+import org.apache.commons.lang.ArrayUtils;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.TimelineClusterMetric;
+
+import java.nio.ByteBuffer;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+
+public class HashBasedUuidGenStrategy implements MetricUuidGenStrategy {
+
+  /**
+   * Computes the UUID for a TimelineClusterMetric.
+   * @param timelineClusterMetric the metric whose name, appId and instanceId are encoded
+   * @param maxLength the requested UUID length in bytes
+   * @return the UUID as a byte array (metric name, appId and instanceId portions concatenated)
+   */
+  @Override
+  public byte[] computeUuid(TimelineClusterMetric timelineClusterMetric, int maxLength) {
+
+    int metricNameUuidLength = 12;
+    String metricName = timelineClusterMetric.getMetricName();
+
+    //Compute the individual splits.
+    String[] splits = getIndividualSplits(metricName);
+
+    /*
+    Compute the ascii sum of every split in the metric name. (asciiSum += (int) splits[s].charAt(i))
+    For the last split, use weighted sum instead of ascii sum. (asciiSum += ((i+1) * (int) splits[s].charAt(i)))
+    These weighted sums are 'appended' to get the unique ID for metric name.
+     */
+    StringBuilder splitSums = new StringBuilder();
+    if (splits.length > 0) {
+      for (int s = 0; s < splits.length; s++) {
+        int asciiSum = 0;
+        if (s < splits.length - 1) {
+          for (int i = 0; i < splits[s].length(); i++) {
+            asciiSum += (int) splits[s].charAt(i); // Get Ascii Sum.
+          }
+        } else {
+          for (int i = 0; i < splits[s].length(); i++) {
+            asciiSum += ((i+1) * (int) splits[s].charAt(i)); //weighted sum for last split.
+          }
+        }
+        splitSums.append(asciiSum); //Append the sum to the array of sums.
+      }
+    }
+
+    //Compute a unique metric seed for the stemmed metric name
+    String stemmedMetric = stem(metricName);
+    long metricSeed = 100123456789L;
+    for (int i = 0; i < stemmedMetric.length(); i++) {
+      metricSeed += stemmedMetric.charAt(i);
+    }
+
+    //Reverse the computed seed to get a metric UUID portion which is used optionally.
+    byte[] metricUuidPortion = StringUtils.reverse(String.valueOf(metricSeed)).getBytes();
+    String splitSumString = splitSums.toString();
+    int splitLength = splitSumString.length();
+
+    //If splitSums length > required metric UUID length, use only the required length suffix substring of the splitSums as metric UUID.
+    if (splitLength > metricNameUuidLength) {
+      metricUuidPortion = ArrayUtils.subarray(splitSumString.getBytes(), splitLength - metricNameUuidLength, splitLength);
+    } else {
+      //If splitSums is not enough for required metric UUID length, pad with the metric uuid portion.
+      int pad = metricNameUuidLength - splitLength;
+      metricUuidPortion = ArrayUtils.addAll(splitSumString.getBytes(), ArrayUtils.subarray(metricUuidPortion, 0, pad));
+    }
+
+    /*
+      For appId and instanceId the logic is similar. Use a seed integer to start with and compute ascii sum.
+      Based on required length, use a suffix of the computed uuid.
+     */
+    String appId = timelineClusterMetric.getAppId();
+    int appidSeed = 11;
+    for (int i = 0; i < appId.length(); i++) {
+      appidSeed += appId.charAt(i);
+    }
+    String appIdSeedStr = String.valueOf(appidSeed);
+    byte[] appUuidPortion = ArrayUtils.subarray(appIdSeedStr.getBytes(), appIdSeedStr.length() - 2, appIdSeedStr.length());
+
+    String instanceId = timelineClusterMetric.getInstanceId();
+    ByteBuffer buffer = ByteBuffer.allocate(4);
+    byte[] instanceUuidPortion = new byte[2];
+    if (StringUtils.isNotEmpty(instanceId)) {
+      int instanceIdSeed = 1489;
+      for (int i = 0; i < instanceId.length(); i++) {
+        instanceIdSeed += instanceId.charAt(i);
+      }
+      buffer.putInt(instanceIdSeed);
+      instanceUuidPortion = ArrayUtils.subarray(buffer.array(), 2, 4); //Use the low-order bytes.
+    }
+
+    // Concatenate all UUIDs together (metric uuid + appId uuid + instanceId uuid)
+    return ArrayUtils.addAll(ArrayUtils.addAll(metricUuidPortion, appUuidPortion), instanceUuidPortion);
+  }
+
+  /**
+   * Splits the metric name into individual tokens.
+   * For example,
+   *  kafka.server.ReplicaManager.LeaderCount -> [kafka, server, ReplicaManager, LeaderCount]
+   *  default.General.api_drop_table_15min_rate -> [default, General, api, drop, table, 15min, rate]
+   * @param metricName the metric name to tokenize
+   * @return the individual tokens of the metric name
+   */
+  private String[] getIndividualSplits(String metricName) {
+    if (metricName.contains(".")) {
+      List<String> tokens = new ArrayList<>();
+      for (String split : metricName.split("\\.")) {
+        if (split.contains("_")) {
+          tokens.addAll(Arrays.asList(split.split("_")));
+        } else {
+          tokens.add(split);
+        }
+      }
+      return tokens.toArray(new String[tokens.size()]);
+    }
+
+    String[] splits = metricName.split("_");
+    if (splits.length > 1) {
+      return splits;
+    }
+    return metricName.split("=");
+  }
+
+  /**
+   * Stems the metric name by removing a set of commonly used separator characters.
+   * @param metricName the metric name to stem
+   * @return the lower-cased metric name with '.', '_', '%', '-' and '=' removed
+   */
+  private String stem(String metricName) {
+    String metric = metricName.toLowerCase();
+    String regex = "[\\.\\_\\%\\-\\=]";
+    return StringUtils.removePattern(metric, regex);
+  }
+
+  /**
+   * Computes the UUID of a String value (for example, a hostname).
+   * Uses the ascii sum of the String; runs of digits are treated as actual numerical values
+   * rather than individual ascii codes.
+   * @param value the String to encode
+   * @param maxLength the requested UUID length in bytes
+   * @return byte array of length 'maxLength', or null if the value is empty or the seed is too short
+   */
+  @Override
+  public byte[] computeUuid(String value, int maxLength) {
+
+    if (StringUtils.isEmpty(value)) {
+      return null;
+    }
+    int len = value.length();
+    int numericValue = 0;
+    int seed = 1489;
+    for (int i = 0; i < len; i++) {
+      int ascii = value.charAt(i);
+      if (48 <= ascii && ascii <= 57) {
+        numericValue = numericValue * 10 + (ascii - 48); //Accumulate the digit run as a number.
+      } else {
+        if (numericValue > 0) {
+          seed += numericValue;
+          numericValue = 0;
+        }
+        seed += ascii;
+      }
+    }
+    if (numericValue > 0) { //Fold in a trailing digit run, e.g. the "1" in "host1".
+      seed += numericValue;
+    }
+
+    String seedStr = String.valueOf(seed);
+    if (seedStr.length() < maxLength) {
+      return null;
+    }
+    return seedStr.substring(seedStr.length() - maxLength).getBytes();
+  }
+}
\ No newline at end of file
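A self-contained sketch of the split-sum scheme implemented above, for an assumed sample metric name: plain ascii sums for every token but the last, a position-weighted sum for the last token, with the concatenated sums then trimmed or padded to the 12-byte metric-name portion:

    public class SplitSumSketch {
      public static void main(String[] args) {
        // Tokens as getIndividualSplits would produce them for "regionserver.Server.Get_num_ops".
        String[] splits = {"regionserver", "Server", "Get", "num", "ops"};
        StringBuilder splitSums = new StringBuilder();
        for (int s = 0; s < splits.length; s++) {
          int asciiSum = 0;
          for (int i = 0; i < splits[s].length(); i++) {
            // Last token gets a position-weighted sum; earlier tokens a plain ascii sum.
            asciiSum += (s < splits.length - 1)
                ? splits[s].charAt(i)
                : (i + 1) * splits[s].charAt(i);
          }
          splitSums.append(asciiSum);
        }
        System.out.println(splitSums);  // concatenated per-token sums seeding the UUID portion
      }
    }
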
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/uuid/MetricUuidGenStrategy.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/uuid/MetricUuidGenStrategy.java
new file mode 100644
index 0000000..9aab96a
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/uuid/MetricUuidGenStrategy.java
@@ -0,0 +1,49 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.uuid;
+
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.TimelineClusterMetric;
+
+public interface MetricUuidGenStrategy {
+
+  /**
+   * Compute the UUID for a TimelineClusterMetric.
+   * @param timelineClusterMetric the metric to encode
+   * @param maxLength the requested UUID length in bytes
+   * @return the UUID as a byte array
+   */
+  byte[] computeUuid(TimelineClusterMetric timelineClusterMetric, int maxLength);
+
+  /**
+   * Compute the UUID for a String value (for example, a hostname).
+   * @param value the String to encode
+   * @param maxLength the requested UUID length in bytes
+   * @return the UUID as a byte array
+   */
+  byte[] computeUuid(String value, int maxLength);
+
+}
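A short usage sketch of the interface via the String overload both implementations provide; the hostname and length here are illustrative values:

    import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.uuid.HashBasedUuidGenStrategy;
    import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.uuid.MetricUuidGenStrategy;

    public class UuidStrategySketch {
      public static void main(String[] args) {
        MetricUuidGenStrategy strategy = new HashBasedUuidGenStrategy();
        // Encode a hostname into a fixed-length key fragment.
        byte[] hostUuid = strategy.computeUuid("host1.example.com", 4);
        System.out.println(hostUuid == null ? "seed too short" : hostUuid.length + " bytes");
      }
    }
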
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/uuid/RandomUuidGenStrategy.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/uuid/RandomUuidGenStrategy.java
new file mode 100644
index 0000000..39d9549
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/uuid/RandomUuidGenStrategy.java
@@ -0,0 +1,53 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.uuid;
+
+import com.google.common.primitives.Longs;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.TimelineClusterMetric;
+
+import java.security.SecureRandom;
+
+public class RandomUuidGenStrategy implements MetricUuidGenStrategy {
+  private final SecureRandom randomGenerator;
+
+  public RandomUuidGenStrategy() {
+    randomGenerator = new SecureRandom(
+      Longs.toByteArray(System.currentTimeMillis()));
+  }
+
+  @Override
+  public byte[] computeUuid(TimelineClusterMetric timelineClusterMetric, int maxLength) {
+    final byte[] bytes = new byte[maxLength];
+    randomGenerator.nextBytes(bytes);
+    return bytes;
+  }
+
+  @Override
+  public byte[] computeUuid(String value, int maxLength) {
+    final byte[] bytes = new byte[maxLength];
+    randomGenerator.nextBytes(bytes);
+    return bytes;
+  }
+}
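The practical difference between the two strategies is determinism: HashBasedUuidGenStrategy recomputes identical bytes for identical input (so UUIDs can be re-derived after a collector restart), whereas RandomUuidGenStrategy returns fresh bytes on every call and therefore requires the generated mapping to be persisted. A small sketch, with illustrative input:

    import java.util.Arrays;

    import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.uuid.HashBasedUuidGenStrategy;
    import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.uuid.MetricUuidGenStrategy;
    import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.uuid.RandomUuidGenStrategy;

    public class StrategyDeterminismSketch {
      public static void main(String[] args) {
        MetricUuidGenStrategy hashed = new HashBasedUuidGenStrategy();
        MetricUuidGenStrategy random = new RandomUuidGenStrategy();

        // Hash-based: same input always yields the same bytes.
        System.out.println(Arrays.equals(
            hashed.computeUuid("host1.example.com", 4),
            hashed.computeUuid("host1.example.com", 4)));  // true

        // Random: different bytes per call (with overwhelming probability).
        System.out.println(Arrays.equals(
            random.computeUuid("host1.example.com", 4),
            random.computeUuid("host1.example.com", 4)));  // false
      }
    }
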
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/TimelineWebServices.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/TimelineWebServices.java
index 50cfb08..472a787 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/TimelineWebServices.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/TimelineWebServices.java
@@ -38,6 +38,7 @@ import org.apache.hadoop.metrics2.sink.timeline.TimelineMetrics;
 import org.apache.hadoop.yarn.api.records.timeline.TimelinePutResponse;
 import org.apache.hadoop.metrics2.sink.timeline.Precision;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricStore;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.discovery.TimelineMetricMetadataKey;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.timeline.EntityIdentifier;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.timeline.GenericObjectMapper;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.timeline.NameValuePair;
@@ -461,6 +462,22 @@ public class TimelineWebServices {
     }
   }
 
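+  /**
+   * Returns the mapping of metric UUIDs to metric metadata keys.
+   */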
+  @GET
+  @Path("/metrics/uuids")
+  @Produces({ MediaType.APPLICATION_JSON })
+  public Map<String, TimelineMetricMetadataKey> getUuids(
+    @Context HttpServletRequest req,
+    @Context HttpServletResponse res
+  ) {
+    init(res);
+
+    try {
+      return timelineMetricStore.getUuids();
+    } catch (Exception e) {
+      throw new WebApplicationException(e, Response.Status.INTERNAL_SERVER_ERROR);
+    }
+  }
+
   /**
    * This is a discovery endpoint that advertises known live collector
    * instances. Note: It will always answer with current instance as live.
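For completeness, a client-side sketch against the new endpoint; the host name is hypothetical, and the port (6188) and base path (/ws/v1/timeline) are assumptions based on typical AMS collector defaults:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class UuidEndpointSketch {
      public static void main(String[] args) throws Exception {
        // Hypothetical collector host; port and base path assumed from AMS defaults.
        URL url = new URL("http://ams-collector.example.com:6188/ws/v1/timeline/metrics/uuids");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestProperty("Accept", "application/json");
        try (BufferedReader in = new BufferedReader(
            new InputStreamReader(conn.getInputStream()))) {
          String line;
          while ((line = in.readLine()) != null) {
            System.out.println(line);  // JSON map of uuid -> metadata key
          }
        }
      }
    }
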
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/resources/metrics_def/AMBARI_SERVER.dat b/ambari-metrics/ambari-metrics-timelineservice/src/main/resources/metrics_def/AMBARI_SERVER.dat
new file mode 100644
index 0000000..407b0f8
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/resources/metrics_def/AMBARI_SERVER.dat
@@ -0,0 +1,40 @@
+jvm.buffers.direct.capacity
+jvm.buffers.direct.count
+jvm.buffers.direct.used
+jvm.buffers.mapped.capacity
+jvm.buffers.mapped.count
+jvm.buffers.mapped.used
+jvm.file.open.descriptor.ratio
+jvm.gc.ConcurrentMarkSweep.count
+jvm.gc.ConcurrentMarkSweep.time
+jvm.gc.ParNew.count
+jvm.gc.ParNew.time
+jvm.memory.heap.committed
+jvm.memory.heap.init
+jvm.memory.heap.max
+jvm.memory.heap.usage
+jvm.memory.heap.used
+jvm.memory.non-heap.committed
+jvm.memory.non-heap.init
+jvm.memory.non-heap.max
+jvm.memory.non-heap.usage
+jvm.memory.non-heap.used
+jvm.memory.pools.CMS-Old-Gen.usage
+jvm.memory.pools.Code-Cache.usage
+jvm.memory.pools.Compressed-Class-Space.usage
+jvm.memory.pools.Metaspace.usage
+jvm.memory.pools.Par-Eden-Space.usage
+jvm.memory.pools.Par-Survivor-Space.usage
+jvm.memory.total.committed
+jvm.memory.total.init
+jvm.memory.total.max
+jvm.memory.total.used
+jvm.threads.blocked.count
+jvm.threads.count
+jvm.threads.daemon.count
+jvm.threads.deadlock.count
+jvm.threads.new.count
+jvm.threads.runnable.count
+jvm.threads.terminated.count
+jvm.threads.timed_waiting.count
+jvm.threads.waiting.count
\ No newline at end of file
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/resources/metrics_def/JOBHISTORYSERVER.dat b/ambari-metrics/ambari-metrics-timelineservice/src/main/resources/metrics_def/JOBHISTORYSERVER.dat
new file mode 100644
index 0000000..f4eccce
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/resources/metrics_def/JOBHISTORYSERVER.dat
@@ -0,0 +1,58 @@
+jvm.JvmMetrics.GcCount
+jvm.JvmMetrics.GcCountCopy
+jvm.JvmMetrics.GcCountMarkSweepCompact
+jvm.JvmMetrics.GcTimeMillis
+jvm.JvmMetrics.GcTimeMillisCopy
+jvm.JvmMetrics.GcTimeMillisMarkSweepCompact
+jvm.JvmMetrics.LogError
+jvm.JvmMetrics.LogFatal
+jvm.JvmMetrics.LogInfo
+jvm.JvmMetrics.LogWarn
+jvm.JvmMetrics.MemHeapCommittedM
+jvm.JvmMetrics.MemHeapMaxM
+jvm.JvmMetrics.MemHeapUsedM
+jvm.JvmMetrics.MemMaxM
+jvm.JvmMetrics.MemNonHeapCommittedM
+jvm.JvmMetrics.MemNonHeapMaxM
+jvm.JvmMetrics.MemNonHeapUsedM
+jvm.JvmMetrics.ThreadsBlocked
+jvm.JvmMetrics.ThreadsNew
+jvm.JvmMetrics.ThreadsRunnable
+jvm.JvmMetrics.ThreadsTerminated
+jvm.JvmMetrics.ThreadsTimedWaiting
+jvm.JvmMetrics.ThreadsWaiting
+metricssystem.MetricsSystem.DroppedPubAll
+metricssystem.MetricsSystem.NumActiveSinks
+metricssystem.MetricsSystem.NumActiveSources
+metricssystem.MetricsSystem.NumAllSinks
+metricssystem.MetricsSystem.NumAllSources
+metricssystem.MetricsSystem.PublishAvgTime
+metricssystem.MetricsSystem.PublishNumOps
+metricssystem.MetricsSystem.Sink_timelineAvgTime
+metricssystem.MetricsSystem.Sink_timelineDropped
+metricssystem.MetricsSystem.Sink_timelineNumOps
+metricssystem.MetricsSystem.Sink_timelineQsize
+metricssystem.MetricsSystem.SnapshotAvgTime
+metricssystem.MetricsSystem.SnapshotNumOps
+rpc.rpc.CallQueueLength
+rpc.rpc.NumOpenConnections
+rpc.rpc.ReceivedBytes
+rpc.rpc.RpcAuthenticationFailures
+rpc.rpc.RpcAuthenticationSuccesses
+rpc.rpc.RpcAuthorizationFailures
+rpc.rpc.RpcAuthorizationSuccesses
+rpc.rpc.RpcClientBackoff
+rpc.rpc.RpcProcessingTimeAvgTime
+rpc.rpc.RpcProcessingTimeNumOps
+rpc.rpc.RpcQueueTimeAvgTime
+rpc.rpc.RpcQueueTimeNumOps
+rpc.rpc.RpcSlowCalls
+rpc.rpc.SentBytes
+ugi.UgiMetrics.GetGroupsAvgTime
+ugi.UgiMetrics.GetGroupsNumOps
+ugi.UgiMetrics.LoginFailureAvgTime
+ugi.UgiMetrics.LoginFailureNumOps
+ugi.UgiMetrics.LoginSuccessAvgTime
+ugi.UgiMetrics.LoginSuccessNumOps
+ugi.UgiMetrics.RenewalFailures
+ugi.UgiMetrics.RenewalFailuresTotal
\ No newline at end of file
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/resources/metrics_def/MASTER_HBASE.dat b/ambari-metrics/ambari-metrics-timelineservice/src/main/resources/metrics_def/MASTER_HBASE.dat
index 9ba90f1..bce85f2 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/resources/metrics_def/MASTER_HBASE.dat
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/resources/metrics_def/MASTER_HBASE.dat
@@ -25,29 +25,213 @@ ipc.IPC.QueueCallTime_num_ops
 ipc.IPC.queueSize 
 ipc.IPC.receivedBytes 
 ipc.IPC.sentBytes 
-jvm.JvmMetrics.GcCount 
-jvm.JvmMetrics.GcCountConcurrentMarkSweep 
-jvm.JvmMetrics.GcCountCopy 
-jvm.JvmMetrics.GcTimeMillis 
-jvm.JvmMetrics.GcTimeMillisConcurrentMarkSweep 
-jvm.JvmMetrics.GcTimeMillisCopy 
-jvm.JvmMetrics.LogError 
-jvm.JvmMetrics.LogFatal 
-jvm.JvmMetrics.LogInfo 
-jvm.JvmMetrics.LogWarn 
-jvm.JvmMetrics.MemHeapCommittedM 
-jvm.JvmMetrics.MemHeapMaxM 
-jvm.JvmMetrics.MemHeapUsedM 
-jvm.JvmMetrics.MemMaxM 
-jvm.JvmMetrics.MemNonHeapCommittedM 
-jvm.JvmMetrics.MemNonHeapMaxM 
-jvm.JvmMetrics.MemNonHeapUsedM 
-jvm.JvmMetrics.ThreadsBlocked 
-jvm.JvmMetrics.ThreadsNew 
-jvm.JvmMetrics.ThreadsRunnable 
-jvm.JvmMetrics.ThreadsTerminated 
-jvm.JvmMetrics.ThreadsTimedWaiting 
-jvm.JvmMetrics.ThreadsWaiting 
+jvm.Master.JvmMetrics.GcCount
+jvm.Master.JvmMetrics.GcCountConcurrentMarkSweep
+jvm.Master.JvmMetrics.GcCountParNew
+jvm.Master.JvmMetrics.GcTimeMillis
+jvm.Master.JvmMetrics.GcTimeMillisConcurrentMarkSweep
+jvm.Master.JvmMetrics.GcTimeMillisParNew
+jvm.Master.JvmMetrics.LogError
+jvm.Master.JvmMetrics.LogFatal
+jvm.Master.JvmMetrics.LogInfo
+jvm.Master.JvmMetrics.LogWarn
+jvm.Master.JvmMetrics.MemHeapCommittedM
+jvm.Master.JvmMetrics.MemHeapMaxM
+jvm.Master.JvmMetrics.MemHeapUsedM
+jvm.Master.JvmMetrics.MemMaxM
+jvm.Master.JvmMetrics.MemNonHeapCommittedM
+jvm.Master.JvmMetrics.MemNonHeapMaxM
+jvm.Master.JvmMetrics.MemNonHeapUsedM
+jvm.Master.JvmMetrics.ThreadsBlocked
+jvm.Master.JvmMetrics.ThreadsNew
+jvm.Master.JvmMetrics.ThreadsRunnable
+jvm.Master.JvmMetrics.ThreadsTerminated
+jvm.Master.JvmMetrics.ThreadsTimedWaiting
+jvm.Master.JvmMetrics.ThreadsWaiting
+master.AssignmentManger.Assign_25th_percentile
+master.AssignmentManger.Assign_75th_percentile
+master.AssignmentManger.Assign_90th_percentile
+master.AssignmentManger.Assign_95th_percentile
+master.AssignmentManger.Assign_98th_percentile
+master.AssignmentManger.Assign_99.9th_percentile
+master.AssignmentManger.Assign_99th_percentile
+master.AssignmentManger.Assign_max
+master.AssignmentManger.Assign_mean
+master.AssignmentManger.Assign_median
+master.AssignmentManger.Assign_min
+master.AssignmentManger.Assign_num_ops
+master.AssignmentManger.BulkAssign_25th_percentile
+master.AssignmentManger.BulkAssign_75th_percentile
+master.AssignmentManger.BulkAssign_90th_percentile
+master.AssignmentManger.BulkAssign_95th_percentile
+master.AssignmentManger.BulkAssign_98th_percentile
+master.AssignmentManger.BulkAssign_99.9th_percentile
+master.AssignmentManger.BulkAssign_99th_percentile
+master.AssignmentManger.BulkAssign_max
+master.AssignmentManger.BulkAssign_mean
+master.AssignmentManger.BulkAssign_median
+master.AssignmentManger.BulkAssign_min
+master.AssignmentManger.BulkAssign_num_ops
+master.AssignmentManger.ritCount
+master.AssignmentManger.ritCountOverThreshold
+master.AssignmentManger.ritOldestAge
+master.Balancer.BalancerCluster_25th_percentile
+master.Balancer.BalancerCluster_75th_percentile
+master.Balancer.BalancerCluster_90th_percentile
+master.Balancer.BalancerCluster_95th_percentile
+master.Balancer.BalancerCluster_98th_percentile
+master.Balancer.BalancerCluster_99.9th_percentile
+master.Balancer.BalancerCluster_99th_percentile
+master.Balancer.BalancerCluster_max
+master.Balancer.BalancerCluster_mean
+master.Balancer.BalancerCluster_median
+master.Balancer.BalancerCluster_min
+master.Balancer.BalancerCluster_num_ops
+master.Balancer.miscInvocationCount
+master.FileSystem.HlogSplitSize_25th_percentile
+master.FileSystem.HlogSplitSize_75th_percentile
+master.FileSystem.HlogSplitSize_90th_percentile
+master.FileSystem.HlogSplitSize_95th_percentile
+master.FileSystem.HlogSplitSize_98th_percentile
+master.FileSystem.HlogSplitSize_99.9th_percentile
+master.FileSystem.HlogSplitSize_99th_percentile
+master.FileSystem.HlogSplitSize_max
+master.FileSystem.HlogSplitSize_mean
+master.FileSystem.HlogSplitSize_median
+master.FileSystem.HlogSplitSize_min
+master.FileSystem.HlogSplitSize_num_ops
+master.FileSystem.HlogSplitTime_25th_percentile
+master.FileSystem.HlogSplitTime_75th_percentile
+master.FileSystem.HlogSplitTime_90th_percentile
+master.FileSystem.HlogSplitTime_95th_percentile
+master.FileSystem.HlogSplitTime_98th_percentile
+master.FileSystem.HlogSplitTime_99.9th_percentile
+master.FileSystem.HlogSplitTime_99th_percentile
+master.FileSystem.HlogSplitTime_max
+master.FileSystem.HlogSplitTime_mean
+master.FileSystem.HlogSplitTime_median
+master.FileSystem.HlogSplitTime_min
+master.FileSystem.HlogSplitTime_num_ops
+master.FileSystem.MetaHlogSplitSize_25th_percentile
+master.FileSystem.MetaHlogSplitSize_75th_percentile
+master.FileSystem.MetaHlogSplitSize_90th_percentile
+master.FileSystem.MetaHlogSplitSize_95th_percentile
+master.FileSystem.MetaHlogSplitSize_98th_percentile
+master.FileSystem.MetaHlogSplitSize_99.9th_percentile
+master.FileSystem.MetaHlogSplitSize_99th_percentile
+master.FileSystem.MetaHlogSplitSize_max
+master.FileSystem.MetaHlogSplitSize_mean
+master.FileSystem.MetaHlogSplitSize_median
+master.FileSystem.MetaHlogSplitSize_min
+master.FileSystem.MetaHlogSplitSize_num_ops
+master.FileSystem.MetaHlogSplitTime_25th_percentile
+master.FileSystem.MetaHlogSplitTime_75th_percentile
+master.FileSystem.MetaHlogSplitTime_90th_percentile
+master.FileSystem.MetaHlogSplitTime_95th_percentile
+master.FileSystem.MetaHlogSplitTime_98th_percentile
+master.FileSystem.MetaHlogSplitTime_99.9th_percentile
+master.FileSystem.MetaHlogSplitTime_99th_percentile
+master.FileSystem.MetaHlogSplitTime_max
+master.FileSystem.MetaHlogSplitTime_mean
+master.FileSystem.MetaHlogSplitTime_median
+master.FileSystem.MetaHlogSplitTime_min
+master.FileSystem.MetaHlogSplitTime_num_ops
+master.Master.ProcessCallTime_25th_percentile
+master.Master.ProcessCallTime_75th_percentile
+master.Master.ProcessCallTime_90th_percentile
+master.Master.ProcessCallTime_95th_percentile
+master.Master.ProcessCallTime_98th_percentile
+master.Master.ProcessCallTime_99.9th_percentile
+master.Master.ProcessCallTime_99th_percentile
+master.Master.ProcessCallTime_TimeRangeCount_0-1
+master.Master.ProcessCallTime_max
+master.Master.ProcessCallTime_mean
+master.Master.ProcessCallTime_median
+master.Master.ProcessCallTime_min
+master.Master.ProcessCallTime_num_ops
+master.Master.QueueCallTime_25th_percentile
+master.Master.QueueCallTime_75th_percentile
+master.Master.QueueCallTime_90th_percentile
+master.Master.QueueCallTime_95th_percentile
+master.Master.QueueCallTime_98th_percentile
+master.Master.QueueCallTime_99.9th_percentile
+master.Master.QueueCallTime_99th_percentile
+master.Master.QueueCallTime_TimeRangeCount_0-1
+master.Master.QueueCallTime_TimeRangeCount_1-3
+master.Master.QueueCallTime_max
+master.Master.QueueCallTime_mean
+master.Master.QueueCallTime_median
+master.Master.QueueCallTime_min
+master.Master.QueueCallTime_num_ops
+master.Master.RequestSize_25th_percentile
+master.Master.RequestSize_75th_percentile
+master.Master.RequestSize_90th_percentile
+master.Master.RequestSize_95th_percentile
+master.Master.RequestSize_98th_percentile
+master.Master.RequestSize_99.9th_percentile
+master.Master.RequestSize_99th_percentile
+master.Master.RequestSize_SizeRangeCount_100-1000
+master.Master.RequestSize_max
+master.Master.RequestSize_mean
+master.Master.RequestSize_median
+master.Master.RequestSize_min
+master.Master.RequestSize_num_ops
+master.Master.ResponseSize_25th_percentile
+master.Master.ResponseSize_75th_percentile
+master.Master.ResponseSize_90th_percentile
+master.Master.ResponseSize_95th_percentile
+master.Master.ResponseSize_98th_percentile
+master.Master.ResponseSize_99.9th_percentile
+master.Master.ResponseSize_99th_percentile
+master.Master.ResponseSize_SizeRangeCount_0-10
+master.Master.ResponseSize_max
+master.Master.ResponseSize_mean
+master.Master.ResponseSize_median
+master.Master.ResponseSize_min
+master.Master.ResponseSize_num_ops
+master.Master.TotalCallTime_25th_percentile
+master.Master.TotalCallTime_75th_percentile
+master.Master.TotalCallTime_90th_percentile
+master.Master.TotalCallTime_95th_percentile
+master.Master.TotalCallTime_98th_percentile
+master.Master.TotalCallTime_99.9th_percentile
+master.Master.TotalCallTime_99th_percentile
+master.Master.TotalCallTime_TimeRangeCount_0-1
+master.Master.TotalCallTime_TimeRangeCount_1-3
+master.Master.TotalCallTime_max
+master.Master.TotalCallTime_mean
+master.Master.TotalCallTime_median
+master.Master.TotalCallTime_min
+master.Master.TotalCallTime_num_ops
+master.Master.authenticationFailures
+master.Master.authenticationSuccesses
+master.Master.authorizationFailures
+master.Master.authorizationSuccesses
+master.Master.exceptions
+master.Master.exceptions.FailedSanityCheckException
+master.Master.exceptions.NotServingRegionException
+master.Master.exceptions.OutOfOrderScannerNextException
+master.Master.exceptions.RegionMovedException
+master.Master.exceptions.RegionTooBusyException
+master.Master.exceptions.ScannerResetException
+master.Master.exceptions.UnknownScannerException
+master.Master.numActiveHandler
+master.Master.numCallsInGeneralQueue
+master.Master.numCallsInPriorityQueue
+master.Master.numCallsInReplicationQueue
+master.Master.numGeneralCallsDropped
+master.Master.numLifoModeSwitches
+master.Master.numOpenConnections
+master.Master.queueSize
+master.Master.receivedBytes
+master.Master.sentBytes
+master.Procedure.numMasterWALs
+master.Server.averageLoad
+master.Server.clusterRequests
+master.Server.masterActiveTime
+master.Server.masterStartTime
+master.Server.numDeadRegionServers
+master.Server.numRegionServers
 metricssystem.MetricsSystem.DroppedPubAll 
 metricssystem.MetricsSystem.NumActiveSinks 
 metricssystem.MetricsSystem.NumActiveSources 
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/resources/metrics_def/SLAVE_HBASE.dat b/ambari-metrics/ambari-metrics-timelineservice/src/main/resources/metrics_def/SLAVE_HBASE.dat
index 38b870f..3b8e586 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/resources/metrics_def/SLAVE_HBASE.dat
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/resources/metrics_def/SLAVE_HBASE.dat
@@ -25,29 +25,29 @@ ipc.IPC.QueueCallTime_num_ops
 ipc.IPC.queueSize 
 ipc.IPC.receivedBytes 
 ipc.IPC.sentBytes 
-jvm.JvmMetrics.GcCount 
-jvm.JvmMetrics.GcCountConcurrentMarkSweep 
-jvm.JvmMetrics.GcCountCopy 
-jvm.JvmMetrics.GcTimeMillis 
-jvm.JvmMetrics.GcTimeMillisConcurrentMarkSweep 
-jvm.JvmMetrics.GcTimeMillisCopy 
-jvm.JvmMetrics.LogError 
-jvm.JvmMetrics.LogFatal 
-jvm.JvmMetrics.LogInfo 
-jvm.JvmMetrics.LogWarn 
-jvm.JvmMetrics.MemHeapCommittedM 
-jvm.JvmMetrics.MemHeapMaxM 
-jvm.JvmMetrics.MemHeapUsedM 
-jvm.JvmMetrics.MemMaxM 
-jvm.JvmMetrics.MemNonHeapCommittedM 
-jvm.JvmMetrics.MemNonHeapMaxM 
-jvm.JvmMetrics.MemNonHeapUsedM 
-jvm.JvmMetrics.ThreadsBlocked 
-jvm.JvmMetrics.ThreadsNew 
-jvm.JvmMetrics.ThreadsRunnable 
-jvm.JvmMetrics.ThreadsTerminated 
-jvm.JvmMetrics.ThreadsTimedWaiting 
-jvm.JvmMetrics.ThreadsWaiting 
+jvm.RegionServer.JvmMetrics.GcCount
+jvm.RegionServer.JvmMetrics.GcCountConcurrentMarkSweep
+jvm.RegionServer.JvmMetrics.GcCountParNew
+jvm.RegionServer.JvmMetrics.GcTimeMillis
+jvm.RegionServer.JvmMetrics.GcTimeMillisConcurrentMarkSweep
+jvm.RegionServer.JvmMetrics.GcTimeMillisParNew
+jvm.RegionServer.JvmMetrics.LogError
+jvm.RegionServer.JvmMetrics.LogFatal
+jvm.RegionServer.JvmMetrics.LogInfo
+jvm.RegionServer.JvmMetrics.LogWarn
+jvm.RegionServer.JvmMetrics.MemHeapCommittedM
+jvm.RegionServer.JvmMetrics.MemHeapMaxM
+jvm.RegionServer.JvmMetrics.MemHeapUsedM
+jvm.RegionServer.JvmMetrics.MemMaxM
+jvm.RegionServer.JvmMetrics.MemNonHeapCommittedM
+jvm.RegionServer.JvmMetrics.MemNonHeapMaxM
+jvm.RegionServer.JvmMetrics.MemNonHeapUsedM
+jvm.RegionServer.JvmMetrics.ThreadsBlocked
+jvm.RegionServer.JvmMetrics.ThreadsNew
+jvm.RegionServer.JvmMetrics.ThreadsRunnable
+jvm.RegionServer.JvmMetrics.ThreadsTerminated
+jvm.RegionServer.JvmMetrics.ThreadsTimedWaiting
+jvm.RegionServer.JvmMetrics.ThreadsWaiting
 metricssystem.MetricsSystem.DroppedPubAll 
 metricssystem.MetricsSystem.NumActiveSinks 
 metricssystem.MetricsSystem.NumActiveSources 
@@ -60,119 +60,541 @@ metricssystem.MetricsSystem.Sink_timelineDropped
 metricssystem.MetricsSystem.Sink_timelineNumOps 
 metricssystem.MetricsSystem.Sink_timelineQsize 
 metricssystem.MetricsSystem.SnapshotAvgTime 
-metricssystem.MetricsSystem.SnapshotNumOps 
-regionserver.Server.Append_75th_percentile 
-regionserver.Server.Append_95th_percentile 
-regionserver.Server.Append_99th_percentile 
-regionserver.Server.Append_max 
-regionserver.Server.Append_mean 
-regionserver.Server.Append_median 
-regionserver.Server.Append_min 
-regionserver.Server.Append_num_ops 
-regionserver.Server.blockCacheCount 
-regionserver.Server.blockCacheEvictionCount 
-regionserver.Server.blockCacheExpressHitPercent 
-regionserver.Server.blockCacheFreeSize 
-regionserver.Server.blockCacheHitCount 
-regionserver.Server.blockCacheMissCount 
-regionserver.Server.blockCacheSize 
-regionserver.Server.blockCountHitPercent 
-regionserver.Server.checkMutateFailedCount 
-regionserver.Server.checkMutatePassedCount 
-regionserver.Server.compactionQueueLength 
-regionserver.Server.Delete_75th_percentile 
-regionserver.Server.Delete_95th_percentile 
-regionserver.Server.Delete_99th_percentile 
-regionserver.Server.Delete_max 
-regionserver.Server.Delete_mean 
-regionserver.Server.Delete_median 
-regionserver.Server.Delete_min 
-regionserver.Server.Delete_num_ops 
-regionserver.Server.flushQueueLength 
-regionserver.Server.Get_75th_percentile 
-regionserver.Server.Get_95th_percentile 
-regionserver.Server.Get_99th_percentile 
-regionserver.Server.Get_max 
-regionserver.Server.Get_mean 
-regionserver.Server.Get_median 
-regionserver.Server.Get_min 
-regionserver.Server.Get_num_ops 
-regionserver.Server.hlogFileCount 
-regionserver.Server.hlogFileSize 
-regionserver.Server.Increment_75th_percentile 
-regionserver.Server.Increment_95th_percentile 
-regionserver.Server.Increment_99th_percentile 
-regionserver.Server.Increment_max 
-regionserver.Server.Increment_mean 
-regionserver.Server.Increment_median 
-regionserver.Server.Increment_min 
-regionserver.Server.Increment_num_ops 
-regionserver.Server.memStoreSize 
-regionserver.Server.Mutate_75th_percentile 
-regionserver.Server.Mutate_95th_percentile 
-regionserver.Server.Mutate_99th_percentile 
-regionserver.Server.Mutate_max 
-regionserver.Server.Mutate_mean 
-regionserver.Server.Mutate_median 
-regionserver.Server.Mutate_min 
-regionserver.Server.Mutate_num_ops 
-regionserver.Server.mutationsWithoutWALCount 
-regionserver.Server.mutationsWithoutWALSize 
-regionserver.Server.percentFilesLocal 
-regionserver.Server.readRequestCount 
-regionserver.Server.regionCount 
-regionserver.Server.regionServerStartTime 
-regionserver.Server.Replay_75th_percentile 
-regionserver.Server.Replay_95th_percentile 
-regionserver.Server.Replay_99th_percentile 
-regionserver.Server.Replay_max 
-regionserver.Server.Replay_mean 
-regionserver.Server.Replay_median 
-regionserver.Server.Replay_min 
-regionserver.Server.Replay_num_ops 
-regionserver.Server.slowAppendCount 
-regionserver.Server.slowDeleteCount 
-regionserver.Server.slowGetCount 
-regionserver.Server.slowIncrementCount 
-regionserver.Server.slowPutCount 
-regionserver.Server.staticBloomSize 
-regionserver.Server.staticIndexSize 
-regionserver.Server.storeCount 
-regionserver.Server.storeFileCount 
-regionserver.Server.storeFileIndexSize 
-regionserver.Server.storeFileSize 
-regionserver.Server.totalRequestCount 
-regionserver.Server.updatesBlockedTime 
-regionserver.Server.writeRequestCount 
-regionserver.WAL.appendCount 
-regionserver.WAL.AppendSize_75th_percentile 
-regionserver.WAL.AppendSize_95th_percentile 
-regionserver.WAL.AppendSize_99th_percentile 
-regionserver.WAL.AppendSize_max 
-regionserver.WAL.AppendSize_mean 
-regionserver.WAL.AppendSize_median 
-regionserver.WAL.AppendSize_min 
-regionserver.WAL.AppendSize_num_ops 
-regionserver.WAL.AppendTime_75th_percentile 
-regionserver.WAL.AppendTime_95th_percentile 
-regionserver.WAL.AppendTime_99th_percentile 
-regionserver.WAL.AppendTime_max 
-regionserver.WAL.AppendTime_mean 
-regionserver.WAL.AppendTime_median 
-regionserver.WAL.AppendTime_min 
-regionserver.WAL.AppendTime_num_ops 
-regionserver.WAL.slowAppendCount 
-regionserver.WAL.SyncTime_75th_percentile 
-regionserver.WAL.SyncTime_95th_percentile 
-regionserver.WAL.SyncTime_99th_percentile 
-regionserver.WAL.SyncTime_max 
-regionserver.WAL.SyncTime_mean 
-regionserver.WAL.SyncTime_median 
-regionserver.WAL.SyncTime_min 
-regionserver.WAL.SyncTime_num_ops 
-ugi.UgiMetrics.GetGroupsAvgTime 
-ugi.UgiMetrics.GetGroupsNumOps 
-ugi.UgiMetrics.LoginFailureAvgTime 
-ugi.UgiMetrics.LoginFailureNumOps 
-ugi.UgiMetrics.LoginSuccessAvgTime 
-ugi.UgiMetrics.LoginSuccessNumOps
\ No newline at end of file
+metricssystem.MetricsSystem.SnapshotNumOps
+regionserver.RegionServer.ProcessCallTime_25th_percentile
+regionserver.RegionServer.ProcessCallTime_75th_percentile
+regionserver.RegionServer.ProcessCallTime_90th_percentile
+regionserver.RegionServer.ProcessCallTime_95th_percentile
+regionserver.RegionServer.ProcessCallTime_98th_percentile
+regionserver.RegionServer.ProcessCallTime_99.9th_percentile
+regionserver.RegionServer.ProcessCallTime_99th_percentile
+regionserver.RegionServer.ProcessCallTime_max
+regionserver.RegionServer.ProcessCallTime_mean
+regionserver.RegionServer.ProcessCallTime_median
+regionserver.RegionServer.ProcessCallTime_min
+regionserver.RegionServer.ProcessCallTime_num_ops
+regionserver.RegionServer.QueueCallTime_25th_percentile
+regionserver.RegionServer.QueueCallTime_75th_percentile
+regionserver.RegionServer.QueueCallTime_90th_percentile
+regionserver.RegionServer.QueueCallTime_95th_percentile
+regionserver.RegionServer.QueueCallTime_98th_percentile
+regionserver.RegionServer.QueueCallTime_99.9th_percentile
+regionserver.RegionServer.QueueCallTime_99th_percentile
+regionserver.RegionServer.QueueCallTime_max
+regionserver.RegionServer.QueueCallTime_mean
+regionserver.RegionServer.QueueCallTime_median
+regionserver.RegionServer.QueueCallTime_min
+regionserver.RegionServer.QueueCallTime_num_ops
+regionserver.RegionServer.RequestSize_25th_percentile
+regionserver.RegionServer.RequestSize_75th_percentile
+regionserver.RegionServer.RequestSize_90th_percentile
+regionserver.RegionServer.RequestSize_95th_percentile
+regionserver.RegionServer.RequestSize_98th_percentile
+regionserver.RegionServer.RequestSize_99.9th_percentile
+regionserver.RegionServer.RequestSize_99th_percentile
+regionserver.RegionServer.RequestSize_max
+regionserver.RegionServer.RequestSize_mean
+regionserver.RegionServer.RequestSize_median
+regionserver.RegionServer.RequestSize_min
+regionserver.RegionServer.RequestSize_num_ops
+regionserver.RegionServer.ResponseSize_25th_percentile
+regionserver.RegionServer.ResponseSize_75th_percentile
+regionserver.RegionServer.ResponseSize_90th_percentile
+regionserver.RegionServer.ResponseSize_95th_percentile
+regionserver.RegionServer.ResponseSize_98th_percentile
+regionserver.RegionServer.ResponseSize_99.9th_percentile
+regionserver.RegionServer.ResponseSize_99th_percentile
+regionserver.RegionServer.ResponseSize_max
+regionserver.RegionServer.ResponseSize_mean
+regionserver.RegionServer.ResponseSize_median
+regionserver.RegionServer.ResponseSize_min
+regionserver.RegionServer.ResponseSize_num_ops
+regionserver.RegionServer.TotalCallTime_25th_percentile
+regionserver.RegionServer.TotalCallTime_75th_percentile
+regionserver.RegionServer.TotalCallTime_90th_percentile
+regionserver.RegionServer.TotalCallTime_95th_percentile
+regionserver.RegionServer.TotalCallTime_98th_percentile
+regionserver.RegionServer.TotalCallTime_99.9th_percentile
+regionserver.RegionServer.TotalCallTime_99th_percentile
+regionserver.RegionServer.TotalCallTime_max
+regionserver.RegionServer.TotalCallTime_mean
+regionserver.RegionServer.TotalCallTime_median
+regionserver.RegionServer.TotalCallTime_min
+regionserver.RegionServer.TotalCallTime_num_ops
+regionserver.RegionServer.authenticationFailures
+regionserver.RegionServer.authenticationSuccesses
+regionserver.RegionServer.authorizationFailures
+regionserver.RegionServer.authorizationSuccesses
+regionserver.RegionServer.exceptions
+regionserver.RegionServer.exceptions.FailedSanityCheckException
+regionserver.RegionServer.exceptions.NotServingRegionException
+regionserver.RegionServer.exceptions.OutOfOrderScannerNextException
+regionserver.RegionServer.exceptions.RegionMovedException
+regionserver.RegionServer.exceptions.RegionTooBusyException
+regionserver.RegionServer.exceptions.ScannerResetException
+regionserver.RegionServer.exceptions.UnknownScannerException
+regionserver.RegionServer.numActiveHandler
+regionserver.RegionServer.numCallsInGeneralQueue
+regionserver.RegionServer.numCallsInPriorityQueue
+regionserver.RegionServer.numCallsInReplicationQueue
+regionserver.RegionServer.numGeneralCallsDropped
+regionserver.RegionServer.numLifoModeSwitches
+regionserver.RegionServer.numOpenConnections
+regionserver.RegionServer.queueSize
+regionserver.RegionServer.receivedBytes
+regionserver.RegionServer.sentBytes
+regionserver.Replication.sink.ageOfLastAppliedOp
+regionserver.Replication.sink.appliedBatches
+regionserver.Replication.sink.appliedHFiles
+regionserver.Replication.sink.appliedOps
+regionserver.Server.Append_25th_percentile
+regionserver.Server.Append_75th_percentile
+regionserver.Server.Append_90th_percentile
+regionserver.Server.Append_95th_percentile
+regionserver.Server.Append_98th_percentile
+regionserver.Server.Append_99.9th_percentile
+regionserver.Server.Append_99th_percentile
+regionserver.Server.Append_max
+regionserver.Server.Append_mean
+regionserver.Server.Append_median
+regionserver.Server.Append_min
+regionserver.Server.Append_num_ops
+regionserver.Server.CompactionInputFileCount_25th_percentile
+regionserver.Server.CompactionInputFileCount_75th_percentile
+regionserver.Server.CompactionInputFileCount_90th_percentile
+regionserver.Server.CompactionInputFileCount_95th_percentile
+regionserver.Server.CompactionInputFileCount_98th_percentile
+regionserver.Server.CompactionInputFileCount_99.9th_percentile
+regionserver.Server.CompactionInputFileCount_99th_percentile
+regionserver.Server.CompactionInputFileCount_max
+regionserver.Server.CompactionInputFileCount_mean
+regionserver.Server.CompactionInputFileCount_median
+regionserver.Server.CompactionInputFileCount_min
+regionserver.Server.CompactionInputFileCount_num_ops
+regionserver.Server.CompactionInputSize_25th_percentile
+regionserver.Server.CompactionInputSize_75th_percentile
+regionserver.Server.CompactionInputSize_90th_percentile
+regionserver.Server.CompactionInputSize_95th_percentile
+regionserver.Server.CompactionInputSize_98th_percentile
+regionserver.Server.CompactionInputSize_99.9th_percentile
+regionserver.Server.CompactionInputSize_99th_percentile
+regionserver.Server.CompactionInputSize_SizeRangeCount_100-1000
+regionserver.Server.CompactionInputSize_max
+regionserver.Server.CompactionInputSize_mean
+regionserver.Server.CompactionInputSize_median
+regionserver.Server.CompactionInputSize_min
+regionserver.Server.CompactionInputSize_num_ops
+regionserver.Server.CompactionOutputFileCount_25th_percentile
+regionserver.Server.CompactionOutputFileCount_75th_percentile
+regionserver.Server.CompactionOutputFileCount_90th_percentile
+regionserver.Server.CompactionOutputFileCount_95th_percentile
+regionserver.Server.CompactionOutputFileCount_98th_percentile
+regionserver.Server.CompactionOutputFileCount_99.9th_percentile
+regionserver.Server.CompactionOutputFileCount_99th_percentile
+regionserver.Server.CompactionOutputFileCount_max
+regionserver.Server.CompactionOutputFileCount_mean
+regionserver.Server.CompactionOutputFileCount_median
+regionserver.Server.CompactionOutputFileCount_min
+regionserver.Server.CompactionOutputFileCount_num_ops
+regionserver.Server.CompactionOutputSize_25th_percentile
+regionserver.Server.CompactionOutputSize_75th_percentile
+regionserver.Server.CompactionOutputSize_90th_percentile
+regionserver.Server.CompactionOutputSize_95th_percentile
+regionserver.Server.CompactionOutputSize_98th_percentile
+regionserver.Server.CompactionOutputSize_99.9th_percentile
+regionserver.Server.CompactionOutputSize_99th_percentile
+regionserver.Server.CompactionOutputSize_SizeRangeCount_100-1000
+regionserver.Server.CompactionOutputSize_max
+regionserver.Server.CompactionOutputSize_mean
+regionserver.Server.CompactionOutputSize_median
+regionserver.Server.CompactionOutputSize_min
+regionserver.Server.CompactionOutputSize_num_ops
+regionserver.Server.CompactionTime_25th_percentile
+regionserver.Server.CompactionTime_75th_percentile
+regionserver.Server.CompactionTime_90th_percentile
+regionserver.Server.CompactionTime_95th_percentile
+regionserver.Server.CompactionTime_98th_percentile
+regionserver.Server.CompactionTime_99.9th_percentile
+regionserver.Server.CompactionTime_99th_percentile
+regionserver.Server.CompactionTime_TimeRangeCount_100-300
+regionserver.Server.CompactionTime_max
+regionserver.Server.CompactionTime_mean
+regionserver.Server.CompactionTime_median
+regionserver.Server.CompactionTime_min
+regionserver.Server.CompactionTime_num_ops
+regionserver.Server.Delete_25th_percentile
+regionserver.Server.Delete_75th_percentile
+regionserver.Server.Delete_90th_percentile
+regionserver.Server.Delete_95th_percentile
+regionserver.Server.Delete_98th_percentile
+regionserver.Server.Delete_99.9th_percentile
+regionserver.Server.Delete_99th_percentile
+regionserver.Server.Delete_max
+regionserver.Server.Delete_mean
+regionserver.Server.Delete_median
+regionserver.Server.Delete_min
+regionserver.Server.Delete_num_ops
+regionserver.Server.FlushMemstoreSize_25th_percentile
+regionserver.Server.FlushMemstoreSize_75th_percentile
+regionserver.Server.FlushMemstoreSize_90th_percentile
+regionserver.Server.FlushMemstoreSize_95th_percentile
+regionserver.Server.FlushMemstoreSize_98th_percentile
+regionserver.Server.FlushMemstoreSize_99.9th_percentile
+regionserver.Server.FlushMemstoreSize_99th_percentile
+regionserver.Server.FlushMemstoreSize_SizeRangeCount_100-1000
+regionserver.Server.FlushMemstoreSize_max
+regionserver.Server.FlushMemstoreSize_mean
+regionserver.Server.FlushMemstoreSize_median
+regionserver.Server.FlushMemstoreSize_min
+regionserver.Server.FlushMemstoreSize_num_ops
+regionserver.Server.FlushOutputSize_25th_percentile
+regionserver.Server.FlushOutputSize_75th_percentile
+regionserver.Server.FlushOutputSize_90th_percentile
+regionserver.Server.FlushOutputSize_95th_percentile
+regionserver.Server.FlushOutputSize_98th_percentile
+regionserver.Server.FlushOutputSize_99.9th_percentile
+regionserver.Server.FlushOutputSize_99th_percentile
+regionserver.Server.FlushOutputSize_SizeRangeCount_100-1000
+regionserver.Server.FlushOutputSize_max
+regionserver.Server.FlushOutputSize_mean
+regionserver.Server.FlushOutputSize_median
+regionserver.Server.FlushOutputSize_min
+regionserver.Server.FlushOutputSize_num_ops
+regionserver.Server.FlushTime_25th_percentile
+regionserver.Server.FlushTime_75th_percentile
+regionserver.Server.FlushTime_90th_percentile
+regionserver.Server.FlushTime_95th_percentile
+regionserver.Server.FlushTime_98th_percentile
+regionserver.Server.FlushTime_99.9th_percentile
+regionserver.Server.FlushTime_99th_percentile
+regionserver.Server.FlushTime_max
+regionserver.Server.FlushTime_mean
+regionserver.Server.FlushTime_median
+regionserver.Server.FlushTime_min
+regionserver.Server.FlushTime_num_ops
+regionserver.Server.Get_25th_percentile
+regionserver.Server.Get_75th_percentile
+regionserver.Server.Get_90th_percentile
+regionserver.Server.Get_95th_percentile
+regionserver.Server.Get_98th_percentile
+regionserver.Server.Get_99.9th_percentile
+regionserver.Server.Get_99th_percentile
+regionserver.Server.Get_max
+regionserver.Server.Get_mean
+regionserver.Server.Get_median
+regionserver.Server.Get_min
+regionserver.Server.Get_num_ops
+regionserver.Server.Increment_25th_percentile
+regionserver.Server.Increment_75th_percentile
+regionserver.Server.Increment_90th_percentile
+regionserver.Server.Increment_95th_percentile
+regionserver.Server.Increment_98th_percentile
+regionserver.Server.Increment_99.9th_percentile
+regionserver.Server.Increment_99th_percentile
+regionserver.Server.Increment_max
+regionserver.Server.Increment_mean
+regionserver.Server.Increment_median
+regionserver.Server.Increment_min
+regionserver.Server.Increment_num_ops
+regionserver.Server.MajorCompactionInputFileCount_25th_percentile
+regionserver.Server.MajorCompactionInputFileCount_75th_percentile
+regionserver.Server.MajorCompactionInputFileCount_90th_percentile
+regionserver.Server.MajorCompactionInputFileCount_95th_percentile
+regionserver.Server.MajorCompactionInputFileCount_98th_percentile
+regionserver.Server.MajorCompactionInputFileCount_99.9th_percentile
+regionserver.Server.MajorCompactionInputFileCount_99th_percentile
+regionserver.Server.MajorCompactionInputFileCount_max
+regionserver.Server.MajorCompactionInputFileCount_mean
+regionserver.Server.MajorCompactionInputFileCount_median
+regionserver.Server.MajorCompactionInputFileCount_min
+regionserver.Server.MajorCompactionInputFileCount_num_ops
+regionserver.Server.MajorCompactionInputSize_25th_percentile
+regionserver.Server.MajorCompactionInputSize_75th_percentile
+regionserver.Server.MajorCompactionInputSize_90th_percentile
+regionserver.Server.MajorCompactionInputSize_95th_percentile
+regionserver.Server.MajorCompactionInputSize_98th_percentile
+regionserver.Server.MajorCompactionInputSize_99.9th_percentile
+regionserver.Server.MajorCompactionInputSize_99th_percentile
+regionserver.Server.MajorCompactionInputSize_max
+regionserver.Server.MajorCompactionInputSize_mean
+regionserver.Server.MajorCompactionInputSize_median
+regionserver.Server.MajorCompactionInputSize_min
+regionserver.Server.MajorCompactionInputSize_num_ops
+regionserver.Server.MajorCompactionOutputFileCount_25th_percentile
+regionserver.Server.MajorCompactionOutputFileCount_75th_percentile
+regionserver.Server.MajorCompactionOutputFileCount_90th_percentile
+regionserver.Server.MajorCompactionOutputFileCount_95th_percentile
+regionserver.Server.MajorCompactionOutputFileCount_98th_percentile
+regionserver.Server.MajorCompactionOutputFileCount_99.9th_percentile
+regionserver.Server.MajorCompactionOutputFileCount_99th_percentile
+regionserver.Server.MajorCompactionOutputFileCount_max
+regionserver.Server.MajorCompactionOutputFileCount_mean
+regionserver.Server.MajorCompactionOutputFileCount_median
+regionserver.Server.MajorCompactionOutputFileCount_min
+regionserver.Server.MajorCompactionOutputFileCount_num_ops
+regionserver.Server.MajorCompactionOutputSize_25th_percentile
+regionserver.Server.MajorCompactionOutputSize_75th_percentile
+regionserver.Server.MajorCompactionOutputSize_90th_percentile
+regionserver.Server.MajorCompactionOutputSize_95th_percentile
+regionserver.Server.MajorCompactionOutputSize_98th_percentile
+regionserver.Server.MajorCompactionOutputSize_99.9th_percentile
+regionserver.Server.MajorCompactionOutputSize_99th_percentile
+regionserver.Server.MajorCompactionOutputSize_max
+regionserver.Server.MajorCompactionOutputSize_mean
+regionserver.Server.MajorCompactionOutputSize_median
+regionserver.Server.MajorCompactionOutputSize_min
+regionserver.Server.MajorCompactionOutputSize_num_ops
+regionserver.Server.MajorCompactionTime_25th_percentile
+regionserver.Server.MajorCompactionTime_75th_percentile
+regionserver.Server.MajorCompactionTime_90th_percentile
+regionserver.Server.MajorCompactionTime_95th_percentile
+regionserver.Server.MajorCompactionTime_98th_percentile
+regionserver.Server.MajorCompactionTime_99.9th_percentile
+regionserver.Server.MajorCompactionTime_99th_percentile
+regionserver.Server.MajorCompactionTime_max
+regionserver.Server.MajorCompactionTime_mean
+regionserver.Server.MajorCompactionTime_median
+regionserver.Server.MajorCompactionTime_min
+regionserver.Server.MajorCompactionTime_num_ops
+regionserver.Server.Mutate_25th_percentile
+regionserver.Server.Mutate_75th_percentile
+regionserver.Server.Mutate_90th_percentile
+regionserver.Server.Mutate_95th_percentile
+regionserver.Server.Mutate_98th_percentile
+regionserver.Server.Mutate_99.9th_percentile
+regionserver.Server.Mutate_99th_percentile
+regionserver.Server.Mutate_max
+regionserver.Server.Mutate_mean
+regionserver.Server.Mutate_median
+regionserver.Server.Mutate_min
+regionserver.Server.Mutate_num_ops
+regionserver.Server.PauseTimeWithGc_25th_percentile
+regionserver.Server.PauseTimeWithGc_75th_percentile
+regionserver.Server.PauseTimeWithGc_90th_percentile
+regionserver.Server.PauseTimeWithGc_95th_percentile
+regionserver.Server.PauseTimeWithGc_98th_percentile
+regionserver.Server.PauseTimeWithGc_99.9th_percentile
+regionserver.Server.PauseTimeWithGc_99th_percentile
+regionserver.Server.PauseTimeWithGc_max
+regionserver.Server.PauseTimeWithGc_mean
+regionserver.Server.PauseTimeWithGc_median
+regionserver.Server.PauseTimeWithGc_min
+regionserver.Server.PauseTimeWithGc_num_ops
+regionserver.Server.PauseTimeWithoutGc_25th_percentile
+regionserver.Server.PauseTimeWithoutGc_75th_percentile
+regionserver.Server.PauseTimeWithoutGc_90th_percentile
+regionserver.Server.PauseTimeWithoutGc_95th_percentile
+regionserver.Server.PauseTimeWithoutGc_98th_percentile
+regionserver.Server.PauseTimeWithoutGc_99.9th_percentile
+regionserver.Server.PauseTimeWithoutGc_99th_percentile
+regionserver.Server.PauseTimeWithoutGc_max
+regionserver.Server.PauseTimeWithoutGc_mean
+regionserver.Server.PauseTimeWithoutGc_median
+regionserver.Server.PauseTimeWithoutGc_min
+regionserver.Server.PauseTimeWithoutGc_num_ops
+regionserver.Server.Replay_25th_percentile
+regionserver.Server.Replay_75th_percentile
+regionserver.Server.Replay_90th_percentile
+regionserver.Server.Replay_95th_percentile
+regionserver.Server.Replay_98th_percentile
+regionserver.Server.Replay_99.9th_percentile
+regionserver.Server.Replay_99th_percentile
+regionserver.Server.Replay_max
+regionserver.Server.Replay_mean
+regionserver.Server.Replay_median
+regionserver.Server.Replay_min
+regionserver.Server.Replay_num_ops
+regionserver.Server.ScanSize_25th_percentile
+regionserver.Server.ScanSize_75th_percentile
+regionserver.Server.ScanSize_90th_percentile
+regionserver.Server.ScanSize_95th_percentile
+regionserver.Server.ScanSize_98th_percentile
+regionserver.Server.ScanSize_99.9th_percentile
+regionserver.Server.ScanSize_99th_percentile
+regionserver.Server.ScanSize_max
+regionserver.Server.ScanSize_mean
+regionserver.Server.ScanSize_median
+regionserver.Server.ScanSize_min
+regionserver.Server.ScanSize_num_ops
+regionserver.Server.ScanTime_25th_percentile
+regionserver.Server.ScanTime_75th_percentile
+regionserver.Server.ScanTime_90th_percentile
+regionserver.Server.ScanTime_95th_percentile
+regionserver.Server.ScanTime_98th_percentile
+regionserver.Server.ScanTime_99.9th_percentile
+regionserver.Server.ScanTime_99th_percentile
+regionserver.Server.ScanTime_max
+regionserver.Server.ScanTime_mean
+regionserver.Server.ScanTime_median
+regionserver.Server.ScanTime_min
+regionserver.Server.ScanTime_num_ops
+regionserver.Server.SplitTime_25th_percentile
+regionserver.Server.SplitTime_75th_percentile
+regionserver.Server.SplitTime_90th_percentile
+regionserver.Server.SplitTime_95th_percentile
+regionserver.Server.SplitTime_98th_percentile
+regionserver.Server.SplitTime_99.9th_percentile
+regionserver.Server.SplitTime_99th_percentile
+regionserver.Server.SplitTime_max
+regionserver.Server.SplitTime_mean
+regionserver.Server.SplitTime_median
+regionserver.Server.SplitTime_min
+regionserver.Server.SplitTime_num_ops
+regionserver.Server.averageRegionSize
+regionserver.Server.avgStoreFileAge
+regionserver.Server.blockCacheBloomChunkHitCount
+regionserver.Server.blockCacheBloomChunkMissCount
+regionserver.Server.blockCacheCount
+regionserver.Server.blockCacheCountHitPercent
+regionserver.Server.blockCacheDataHitCount
+regionserver.Server.blockCacheDataMissCount
+regionserver.Server.blockCacheDeleteFamilyBloomHitCount
+regionserver.Server.blockCacheDeleteFamilyBloomMissCount
+regionserver.Server.blockCacheEvictionCount
+regionserver.Server.blockCacheEvictionCountPrimary
+regionserver.Server.blockCacheExpressHitPercent
+regionserver.Server.blockCacheFileInfoHitCount
+regionserver.Server.blockCacheFileInfoMissCount
+regionserver.Server.blockCacheFreeSize
+regionserver.Server.blockCacheGeneralBloomMetaHitCount
+regionserver.Server.blockCacheGeneralBloomMetaMissCount
+regionserver.Server.blockCacheHitCount
+regionserver.Server.blockCacheHitCountPrimary
+regionserver.Server.blockCacheIntermediateIndexHitCount
+regionserver.Server.blockCacheIntermediateIndexMissCount
+regionserver.Server.blockCacheLeafIndexHitCount
+regionserver.Server.blockCacheLeafIndexMissCount
+regionserver.Server.blockCacheMetaHitCount
+regionserver.Server.blockCacheMetaMissCount
+regionserver.Server.blockCacheMissCount
+regionserver.Server.blockCacheMissCountPrimary
+regionserver.Server.blockCacheRootIndexHitCount
+regionserver.Server.blockCacheRootIndexMissCount
+regionserver.Server.blockCacheSize
+regionserver.Server.blockCacheTrailerHitCount
+regionserver.Server.blockCacheTrailerMissCount
+regionserver.Server.blockedRequestCount
+regionserver.Server.cellsCountCompactedFromMob
+regionserver.Server.cellsCountCompactedToMob
+regionserver.Server.cellsSizeCompactedFromMob
+regionserver.Server.cellsSizeCompactedToMob
+regionserver.Server.checkMutateFailedCount
+regionserver.Server.checkMutatePassedCount
+regionserver.Server.compactedCellsCount
+regionserver.Server.compactedCellsSize
+regionserver.Server.compactedInputBytes
+regionserver.Server.compactedOutputBytes
+regionserver.Server.compactionQueueLength
+regionserver.Server.flushQueueLength
+regionserver.Server.flushedCellsCount
+regionserver.Server.flushedCellsSize
+regionserver.Server.flushedMemstoreBytes
+regionserver.Server.flushedOutputBytes
+regionserver.Server.hlogFileCount
+regionserver.Server.hlogFileSize
+regionserver.Server.largeCompactionQueueLength
+regionserver.Server.majorCompactedCellsCount
+regionserver.Server.majorCompactedCellsSize
+regionserver.Server.majorCompactedInputBytes
+regionserver.Server.majorCompactedOutputBytes
+regionserver.Server.maxStoreFileAge
+regionserver.Server.memStoreSize
+regionserver.Server.minStoreFileAge
+regionserver.Server.mobFileCacheAccessCount
+regionserver.Server.mobFileCacheCount
+regionserver.Server.mobFileCacheEvictedCount
+regionserver.Server.mobFileCacheHitPercent
+regionserver.Server.mobFileCacheMissCount
+regionserver.Server.mobFlushCount
+regionserver.Server.mobFlushedCellsCount
+regionserver.Server.mobFlushedCellsSize
+regionserver.Server.mobScanCellsCount
+regionserver.Server.mobScanCellsSize
+regionserver.Server.mutationsWithoutWALCount
+regionserver.Server.mutationsWithoutWALSize
+regionserver.Server.numReferenceFiles
+regionserver.Server.pauseInfoThresholdExceeded
+regionserver.Server.pauseWarnThresholdExceeded
+regionserver.Server.percentFilesLocal
+regionserver.Server.percentFilesLocalSecondaryRegions
+regionserver.Server.readRequestCount
+regionserver.Server.regionCount
+regionserver.Server.regionServerStartTime
+regionserver.Server.rpcGetRequestCount
+regionserver.Server.rpcMultiRequestCount
+regionserver.Server.rpcMutateRequestCount
+regionserver.Server.rpcScanRequestCount
+regionserver.Server.slowAppendCount
+regionserver.Server.slowDeleteCount
+regionserver.Server.slowGetCount
+regionserver.Server.slowIncrementCount
+regionserver.Server.slowPutCount
+regionserver.Server.smallCompactionQueueLength
+regionserver.Server.splitQueueLength
+regionserver.Server.splitRequestCount
+regionserver.Server.splitSuccessCount
+regionserver.Server.staticBloomSize
+regionserver.Server.staticIndexSize
+regionserver.Server.storeCount
+regionserver.Server.storeFileCount
+regionserver.Server.storeFileIndexSize
+regionserver.Server.storeFileSize
+regionserver.Server.totalRequestCount
+regionserver.Server.updatesBlockedTime
+regionserver.Server.writeRequestCount
+regionserver.WAL.AppendSize_25th_percentile
+regionserver.WAL.AppendSize_75th_percentile
+regionserver.WAL.AppendSize_90th_percentile
+regionserver.WAL.AppendSize_95th_percentile
+regionserver.WAL.AppendSize_98th_percentile
+regionserver.WAL.AppendSize_99.9th_percentile
+regionserver.WAL.AppendSize_99th_percentile
+regionserver.WAL.AppendSize_SizeRangeCount_100-1000
+regionserver.WAL.AppendSize_max
+regionserver.WAL.AppendSize_mean
+regionserver.WAL.AppendSize_median
+regionserver.WAL.AppendSize_min
+regionserver.WAL.AppendSize_num_ops
+regionserver.WAL.AppendTime_25th_percentile
+regionserver.WAL.AppendTime_75th_percentile
+regionserver.WAL.AppendTime_90th_percentile
+regionserver.WAL.AppendTime_95th_percentile
+regionserver.WAL.AppendTime_98th_percentile
+regionserver.WAL.AppendTime_99.9th_percentile
+regionserver.WAL.AppendTime_99th_percentile
+regionserver.WAL.AppendTime_TimeRangeCount_0-1
+regionserver.WAL.AppendTime_max
+regionserver.WAL.AppendTime_mean
+regionserver.WAL.AppendTime_median
+regionserver.WAL.AppendTime_min
+regionserver.WAL.AppendTime_num_ops
+regionserver.WAL.SyncTime_25th_percentile
+regionserver.WAL.SyncTime_75th_percentile
+regionserver.WAL.SyncTime_90th_percentile
+regionserver.WAL.SyncTime_95th_percentile
+regionserver.WAL.SyncTime_98th_percentile
+regionserver.WAL.SyncTime_99.9th_percentile
+regionserver.WAL.SyncTime_99th_percentile
+regionserver.WAL.SyncTime_TimeRangeCount_0-1
+regionserver.WAL.SyncTime_TimeRangeCount_1-3
+regionserver.WAL.SyncTime_TimeRangeCount_10-30
+regionserver.WAL.SyncTime_TimeRangeCount_3-10
+regionserver.WAL.SyncTime_TimeRangeCount_30-100
+regionserver.WAL.SyncTime_max
+regionserver.WAL.SyncTime_mean
+regionserver.WAL.SyncTime_median
+regionserver.WAL.SyncTime_min
+regionserver.WAL.SyncTime_num_ops
+regionserver.WAL.appendCount
+regionserver.WAL.lowReplicaRollRequest
+regionserver.WAL.rollRequest
+regionserver.WAL.slowAppendCount
+regionserver.WAL.writtenBytes
+ugi.UgiMetrics.GetGroupsAvgTime
+ugi.UgiMetrics.GetGroupsNumOps
+ugi.UgiMetrics.LoginFailureAvgTime
+ugi.UgiMetrics.LoginFailureNumOps
+ugi.UgiMetrics.LoginSuccessAvgTime
+ugi.UgiMetrics.LoginSuccessNumOps
+ugi.UgiMetrics.RenewalFailures
+ugi.UgiMetrics.RenewalFailuresTotal
\ No newline at end of file
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/ITPhoenixHBaseAccessor.java b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/ITPhoenixHBaseAccessor.java
index f6d69f6..c25d414 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/ITPhoenixHBaseAccessor.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/ITPhoenixHBaseAccessor.java
@@ -111,7 +111,7 @@ public class ITPhoenixHBaseAccessor extends AbstractMiniHBaseClusterTest {
   public void testGetMetricRecordsMinutes() throws IOException, SQLException {
     // GIVEN
     TimelineMetricAggregator aggregatorMinute =
-      TimelineMetricAggregatorFactory.createTimelineMetricAggregatorMinute(hdb, new Configuration(), null);
+      TimelineMetricAggregatorFactory.createTimelineMetricAggregatorMinute(hdb, new Configuration(), null, null);
 
     long startTime = System.currentTimeMillis();
     long ctime = startTime;
@@ -149,7 +149,7 @@ public class ITPhoenixHBaseAccessor extends AbstractMiniHBaseClusterTest {
   public void testGetMetricRecordsHours() throws IOException, SQLException {
     // GIVEN
     TimelineMetricAggregator aggregator =
-      TimelineMetricAggregatorFactory.createTimelineMetricAggregatorHourly(hdb, new Configuration(), null);
+      TimelineMetricAggregatorFactory.createTimelineMetricAggregatorHourly(hdb, new Configuration(), null, null);
 
     MetricHostAggregate expectedAggregate =
       createMetricHostAggregate(2.0, 0.0, 20, 15.0);
@@ -283,7 +283,7 @@ public class ITPhoenixHBaseAccessor extends AbstractMiniHBaseClusterTest {
   public void testGetClusterMetricRecordsHours() throws Exception {
     // GIVEN
     TimelineMetricAggregator agg =
-      TimelineMetricAggregatorFactory.createTimelineClusterAggregatorHourly(hdb, new Configuration(), null);
+      TimelineMetricAggregatorFactory.createTimelineClusterAggregatorHourly(hdb, new Configuration(), null, null);
 
     long startTime = System.currentTimeMillis();
     long ctime = startTime;
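
Note: the aggregator factory methods in the hunks above now take a fourth argument, which these tests simply pass as null. A minimal sketch of the wiring, under the assumption that one of the two trailing parameters is the TimelineMetricMetadataManager that the UUID changes elsewhere in this patch thread into the aggregators (hdb is the accessor from the test fixture):

    // Hypothetical wiring; the parameter meaning is an assumption, and the
    // tests in this patch pass null for both trailing arguments.
    TimelineMetricMetadataManager metadataManager =
        new TimelineMetricMetadataManager(new Configuration(), hdb);
    TimelineMetricAggregator aggregatorMinute =
        TimelineMetricAggregatorFactory.createTimelineMetricAggregatorMinute(
            hdb, new Configuration(), metadataManager, null);
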
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/MetricTestHelper.java b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/MetricTestHelper.java
index 7eeb9c4..7dfe1fc 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/MetricTestHelper.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/MetricTestHelper.java
@@ -109,7 +109,7 @@ public class MetricTestHelper {
   public static TimelineClusterMetric createEmptyTimelineClusterMetric(
       String name, long startTime) {
     TimelineClusterMetric metric = new TimelineClusterMetric(name,
-        "test_app", "instance_id", startTime, null);
+        "test_app", "instance_id", startTime);
 
     return metric;
   }
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/PhoenixHBaseAccessorTest.java b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/PhoenixHBaseAccessorTest.java
index bf9246d..7be3c0d 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/PhoenixHBaseAccessorTest.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/PhoenixHBaseAccessorTest.java
@@ -29,6 +29,7 @@ import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
 import org.apache.hadoop.metrics2.sink.timeline.TimelineMetrics;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.TimelineClusterMetric;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.Function;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.discovery.TimelineMetricMetadataManager;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.Condition;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.DefaultCondition;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixConnectionProvider;
@@ -55,6 +56,7 @@ import java.util.LinkedList;
 import java.util.List;
 import java.util.Map;
 
+import static org.easymock.EasyMock.anyObject;
 import static org.easymock.EasyMock.expect;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.fail;
@@ -256,7 +258,7 @@ public class PhoenixHBaseAccessorTest {
 
     TimelineClusterMetric clusterMetric =
         new TimelineClusterMetric("metricName", "appId", "instanceId",
-            System.currentTimeMillis(), "type");
+            System.currentTimeMillis());
     TimelineMetric timelineMetric = new TimelineMetric();
     timelineMetric.setMetricName("Metric1");
     timelineMetric.setType("type1");
@@ -268,6 +270,12 @@ public class PhoenixHBaseAccessorTest {
     clusterTimeAggregateMap.put(clusterMetric, new MetricHostAggregate());
     hostAggregateMap.put(timelineMetric, new MetricHostAggregate());
 
+    TimelineMetricMetadataManager metricMetadataManagerMock = EasyMock.createMock(TimelineMetricMetadataManager.class);
+    expect(metricMetadataManagerMock.getUuid(anyObject(TimelineClusterMetric.class))).andReturn(new byte[16]).times(2);
+    expect(metricMetadataManagerMock.getUuid(anyObject(TimelineMetric.class))).andReturn(new byte[20]).once();
+    replay(metricMetadataManagerMock);
+
+    accessor.setMetadataInstance(metricMetadataManagerMock);
     accessor.saveClusterAggregateRecords(clusterAggregateMap);
     accessor.saveHostAggregateRecords(hostAggregateMap,
         PhoenixTransactSQL.METRICS_AGGREGATE_MINUTE_TABLE_NAME);
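
Note: the mocks above pin down the UUID sizing convention the new schema appears to use: getUuid(TimelineClusterMetric) returns 16 bytes, getUuid(TimelineMetric) returns 20. A minimal sketch of that layout, assuming the extra 4 bytes are a per-host suffix (the real encoding lives in TimelineMetricMetadataManager):

    import java.nio.ByteBuffer;

    public class UuidLayoutSketch {
      // A 16-byte cluster UUID (metric name + appId + instanceId) plus an
      // assumed 4-byte host component gives the 20-byte host-level UUID
      // returned for a TimelineMetric in the mocks above.
      static byte[] hostUuid(byte[] clusterUuid, int hostComponent) {
        ByteBuffer buf = ByteBuffer.allocate(20);
        buf.put(clusterUuid);       // 16 bytes, as for TimelineClusterMetric
        buf.putInt(hostComponent);  // 4 bytes, inferred from the 20 - 16 gap
        return buf.array();
      }

      public static void main(String[] args) {
        System.out.println(hostUuid(new byte[16], 1).length); // prints 20
      }
    }
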
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TestPhoenixTransactSQL.java b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TestPhoenixTransactSQL.java
index e988a61..dd73a8a 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TestPhoenixTransactSQL.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TestPhoenixTransactSQL.java
@@ -26,6 +26,7 @@ import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.TopNCondition;
 import org.easymock.Capture;
 import org.junit.Assert;
+import org.junit.Ignore;
 import org.junit.Test;
 import java.sql.Connection;
 import java.sql.ParameterMetaData;
@@ -46,13 +47,12 @@ import org.easymock.EasyMock;
 public class TestPhoenixTransactSQL {
   @Test
   public void testConditionClause() throws Exception {
-    Condition condition = new DefaultCondition(
+    Condition condition = new DefaultCondition(Arrays.asList(new byte[8], new byte[8]),
       new ArrayList<>(Arrays.asList("cpu_user", "mem_free")), Collections.singletonList("h1"),
       "a1", "i1", 1407959718L, 1407959918L, null, null, false);
 
     String preparedClause = condition.getConditionClause().toString();
-    String expectedClause = "(METRIC_NAME IN (?, ?)) AND HOSTNAME = ? AND " +
-      "APP_ID = ? AND INSTANCE_ID = ? AND SERVER_TIME >= ? AND SERVER_TIME < ?";
+    String expectedClause = "(UUID IN (?, ?)) AND SERVER_TIME >= ? AND SERVER_TIME < ?";
 
     Assert.assertNotNull(preparedClause);
     Assert.assertEquals(expectedClause, preparedClause);
@@ -60,21 +60,21 @@ public class TestPhoenixTransactSQL {
 
   @Test
   public void testSplitByMetricNamesCondition() throws Exception {
-    Condition c = new DefaultCondition(
+    Condition c = new DefaultCondition(Arrays.asList(new byte[8], new byte[8]),
       Arrays.asList("cpu_user", "mem_free"), Collections.singletonList("h1"),
       "a1", "i1", 1407959718L, 1407959918L, null, null, false);
 
     SplitByMetricNamesCondition condition = new SplitByMetricNamesCondition(c);
-    condition.setCurrentMetric(c.getMetricNames().get(0));
+    condition.setCurrentUuid(new byte[8]);
 
     String preparedClause = condition.getConditionClause().toString();
-    String expectedClause = "METRIC_NAME = ? AND HOSTNAME = ? AND " +
-      "APP_ID = ? AND INSTANCE_ID = ? AND SERVER_TIME >= ? AND SERVER_TIME < ?";
+    String expectedClause = "UUID = ? AND SERVER_TIME >= ? AND SERVER_TIME < ?";
 
     Assert.assertNotNull(preparedClause);
     Assert.assertEquals(expectedClause, preparedClause);
   }
 
+  @Ignore
   @Test
   public void testLikeConditionClause() throws Exception {
     Condition condition = new DefaultCondition(
@@ -363,7 +363,7 @@ public class TestPhoenixTransactSQL {
 
   @Test
   public void testPrepareGetLatestMetricSqlStmtMultipleHostNames() throws SQLException {
-    Condition condition = new DefaultCondition(
+    Condition condition = new DefaultCondition(Arrays.asList(new byte[16], new byte[16], new byte[16], new byte[16]),
       new ArrayList<>(Arrays.asList("cpu_user", "mem_free")), Arrays.asList("h1", "h2"),
       "a1", "i1", null, null, null, null, false);
     Connection connection = createNiceMock(Connection.class);
@@ -376,7 +376,7 @@ public class TestPhoenixTransactSQL {
       .andReturn(parameterMetaData).once();
    // 4 = 4 UUIDs (2 metric names x 2 hostnames)
     expect(parameterMetaData.getParameterCount())
-      .andReturn(6).once();
+      .andReturn(4).once();
 
     replay(connection, preparedStatement, parameterMetaData);
     PhoenixTransactSQL.prepareGetLatestMetricSqlStmt(connection, condition);
@@ -389,7 +389,7 @@ public class TestPhoenixTransactSQL {
   @Test
   public void testPrepareGetLatestMetricSqlStmtSortMergeJoinAlgorithm()
     throws SQLException {
-    Condition condition = new DefaultCondition(
+    Condition condition = new DefaultCondition(Arrays.asList(new byte[16], new byte[16]),
       new ArrayList<>(Arrays.asList("cpu_user", "mem_free")), Arrays.asList("h1"),
       "a1", "i1", null, null, null, null, false);
     Connection connection = createNiceMock(Connection.class);
@@ -401,7 +401,7 @@ public class TestPhoenixTransactSQL {
     expect(preparedStatement.getParameterMetaData())
       .andReturn(parameterMetaData).anyTimes();
     expect(parameterMetaData.getParameterCount())
-      .andReturn(6).anyTimes();
+      .andReturn(2).anyTimes();
 
     replay(connection, preparedStatement, parameterMetaData);
     PhoenixTransactSQL.setSortMergeJoinEnabled(true);
@@ -558,22 +558,19 @@ public class TestPhoenixTransactSQL {
 
   @Test
   public void testTopNHostsConditionClause() throws Exception {
-    List<String> hosts = Arrays.asList("h1", "h2", "h3", "h4");
+    List<String> hosts = Arrays.asList("h1", "h2");
+    List<byte[]> uuids = Arrays.asList(new byte[16], new byte[16]);
 
-    Condition condition = new TopNCondition(
-      new ArrayList<>(Collections.singletonList("cpu_user")), hosts,
+    Condition condition = new TopNCondition(uuids, new ArrayList<>(Collections.singletonList("cpu_user")), hosts,
       "a1", "i1", 1407959718L, 1407959918L, null, null, false, 2, null, false);
 
     String conditionClause = condition.getConditionClause().toString();
-    String expectedClause = "(METRIC_NAME IN (?)) AND HOSTNAME IN (" +
-      "SELECT " + PhoenixTransactSQL.getNaiveTimeRangeHint(condition.getStartTime(),120000l) +
-      " HOSTNAME FROM METRIC_RECORD WHERE " +
-          "(METRIC_NAME IN (?)) AND " +
-          "HOSTNAME IN (? ,? ,? ,?) AND " +
-          "APP_ID = ? AND INSTANCE_ID = ? AND " +
+    String expectedClause = " UUID IN (" +
+      "SELECT " + PhoenixTransactSQL.getNaiveTimeRangeHint(condition.getStartTime(),120000l) + " " +
+      "UUID FROM METRIC_RECORD WHERE " +
+          "(UUID IN (?, ?)) AND " +
           "SERVER_TIME >= ? AND SERVER_TIME < ? " +
-          "GROUP BY METRIC_NAME, HOSTNAME, APP_ID ORDER BY MAX(METRIC_MAX) DESC LIMIT 2) " +
-      "AND APP_ID = ? AND INSTANCE_ID = ? AND SERVER_TIME >= ? AND SERVER_TIME < ?";
+          "GROUP BY UUID ORDER BY MAX(METRIC_MAX) DESC LIMIT 2) AND SERVER_TIME >= ? AND SERVER_TIME < ?";
 
     Assert.assertEquals(expectedClause, conditionClause);
   }
@@ -581,21 +578,18 @@ public class TestPhoenixTransactSQL {
   @Test
   public void testTopNMetricsConditionClause() throws Exception {
     List<String> metricNames = new ArrayList<>(Arrays.asList("m1", "m2", "m3"));
+    List<byte[]> uuids = Arrays.asList(new byte[16], new byte[16], new byte[16]);
 
-    Condition condition = new TopNCondition(
-      metricNames, Collections.singletonList("h1"),
+    Condition condition = new TopNCondition(uuids, metricNames, Collections.singletonList("h1"),
       "a1", "i1", 1407959718L, 1407959918L, null, null, false, 2, null, false);
 
     String conditionClause = condition.getConditionClause().toString();
-    String expectedClause = " METRIC_NAME IN (" +
+    String expectedClause = " UUID IN (" +
       "SELECT " + PhoenixTransactSQL.getNaiveTimeRangeHint(condition.getStartTime(),120000l) +
-      " METRIC_NAME FROM METRIC_RECORD WHERE " +
-      "(METRIC_NAME IN (?, ?, ?)) AND " +
-      "HOSTNAME = ? AND " +
-      "APP_ID = ? AND INSTANCE_ID = ? AND " +
+      " UUID FROM METRIC_RECORD WHERE " +
+      "(UUID IN (?, ?, ?)) AND " +
       "SERVER_TIME >= ? AND SERVER_TIME < ? " +
-      "GROUP BY METRIC_NAME, APP_ID ORDER BY MAX(METRIC_MAX) DESC LIMIT 2) " +
-      "AND HOSTNAME = ? AND APP_ID = ? AND INSTANCE_ID = ? AND SERVER_TIME >= ? AND SERVER_TIME < ?";
+      "GROUP BY UUID ORDER BY MAX(METRIC_MAX) DESC LIMIT 2) AND SERVER_TIME >= ? AND SERVER_TIME < ?";
 
     Assert.assertEquals(expectedClause, conditionClause);
   }
@@ -605,57 +599,12 @@ public class TestPhoenixTransactSQL {
     List<String> metricNames = new ArrayList<>(Arrays.asList("m1", "m2"));
 
     List<String> hosts = Arrays.asList("h1", "h2");
+    List<byte[]> uuids = Arrays.asList(new byte[16], new byte[16], new byte[16], new byte[16]);
 
-    Condition condition = new TopNCondition(
-      metricNames, hosts,
+    Condition condition = new TopNCondition(uuids, metricNames, hosts,
       "a1", "i1", 1407959718L, 1407959918L, null, null, false, 2, null, false);
 
     Assert.assertEquals(condition.getConditionClause(), null);
   }
 
-  @Test
-  public void testHostsRegexpConditionClause() {
-    Condition condition = new TopNCondition(
-      new ArrayList<>(Arrays.asList("m1")), Arrays.asList("%.ambari", "host1.apache"),
-            "a1", "i1", 1407959718L, 1407959918L, null, null, false, 2, null, false);
-
-    String conditionClause = condition.getConditionClause().toString();
-    String expectedClause = "(METRIC_NAME IN (?)) AND HOSTNAME IN (SELECT " +
-      PhoenixTransactSQL.getNaiveTimeRangeHint(condition.getStartTime(),120000l) +
-      " HOSTNAME FROM METRIC_RECORD WHERE (METRIC_NAME IN (?)) " +
-      "AND (HOSTNAME LIKE ? OR HOSTNAME LIKE ?) AND APP_ID = ? AND INSTANCE_ID = ? AND SERVER_TIME >= ? AND SERVER_TIME < ? GROUP BY " +
-      "METRIC_NAME, HOSTNAME, APP_ID ORDER BY MAX(METRIC_MAX) DESC LIMIT 2) AND APP_ID = ? AND INSTANCE_ID = ? AND " +
-      "SERVER_TIME >= ? AND SERVER_TIME < ?";
-    Assert.assertEquals(expectedClause, conditionClause);
-
-    condition = new TopNCondition(
-      new ArrayList<>(Arrays.asList("m1")), Arrays.asList("%.ambari"),
-            "a1", "i1", 1407959718L, 1407959918L, null, null, false, 2, null, false);
-
-    conditionClause = condition.getConditionClause().toString();
-    expectedClause = "(METRIC_NAME IN (?)) AND HOSTNAME IN (SELECT " +
-      PhoenixTransactSQL.getNaiveTimeRangeHint(condition.getStartTime(),120000l) +
-      " HOSTNAME FROM METRIC_RECORD WHERE (METRIC_NAME IN (?)) " +
-      "AND (HOSTNAME LIKE ?) AND APP_ID = ? AND INSTANCE_ID = ? AND SERVER_TIME >= ? AND SERVER_TIME < ? GROUP BY " +
-      "METRIC_NAME, HOSTNAME, APP_ID ORDER BY MAX(METRIC_MAX) DESC LIMIT 2) AND APP_ID = ? AND INSTANCE_ID = ? AND " +
-      "SERVER_TIME >= ? AND SERVER_TIME < ?";
-
-    Assert.assertEquals(expectedClause, conditionClause);
-
-    condition = new TopNCondition(
-      new ArrayList<>(Arrays.asList("m1", "m2", "m3")), Arrays.asList("h1.ambari"),
-            "a1", "i1", 1407959718L, 1407959918L, null, null, false, 2, null, false);
-
-    conditionClause = condition.getConditionClause().toString();
-    expectedClause = " METRIC_NAME IN (" +
-            "SELECT " + PhoenixTransactSQL.getNaiveTimeRangeHint(condition.getStartTime(),120000l) +
-            " METRIC_NAME FROM METRIC_RECORD WHERE " +
-            "(METRIC_NAME IN (?, ?, ?)) AND " +
-            "HOSTNAME = ? AND " +
-            "APP_ID = ? AND INSTANCE_ID = ? AND " +
-            "SERVER_TIME >= ? AND SERVER_TIME < ? " +
-            "GROUP BY METRIC_NAME, APP_ID ORDER BY MAX(METRIC_MAX) DESC LIMIT 2) " +
-            "AND HOSTNAME = ? AND APP_ID = ? AND INSTANCE_ID = ? AND SERVER_TIME >= ? AND SERVER_TIME < ?";
-    Assert.assertEquals(expectedClause, conditionClause);
-  }
 }
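
Note: the rewritten expectations show the shape of the new condition clause: the METRIC_NAME, HOSTNAME, APP_ID and INSTANCE_ID predicates all collapse into a single UUID predicate, since those dimensions are folded into the UUID row key. A minimal sketch (not the actual DefaultCondition code) of assembling that clause, with one bind parameter per UUID plus the two SERVER_TIME bounds:

    import java.util.Arrays;
    import java.util.List;
    import java.util.stream.Collectors;

    public class UuidClauseSketch {
      static String conditionClause(List<byte[]> uuids) {
        // One "?" placeholder per UUID, matching "(UUID IN (?, ?)) AND ..."
        // in the expected clauses above.
        String in = uuids.stream().map(u -> "?").collect(Collectors.joining(", "));
        return "(UUID IN (" + in + ")) AND SERVER_TIME >= ? AND SERVER_TIME < ?";
      }

      public static void main(String[] args) {
        System.out.println(conditionClause(Arrays.asList(new byte[8], new byte[8])));
        // (UUID IN (?, ?)) AND SERVER_TIME >= ? AND SERVER_TIME < ?
      }
    }
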
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TestTimelineMetricStore.java b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TestTimelineMetricStore.java
index 8abcd83..07e0daa 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TestTimelineMetricStore.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TestTimelineMetricStore.java
@@ -25,6 +25,8 @@ import org.apache.hadoop.metrics2.sink.timeline.TimelineMetricMetadata;
 import org.apache.hadoop.metrics2.sink.timeline.TimelineMetrics;
 import org.apache.hadoop.metrics2.sink.timeline.TopNConfig;
 import org.apache.hadoop.yarn.api.records.timeline.TimelinePutResponse;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.discovery.TimelineMetricMetadataKey;
+
 import java.io.IOException;
 import java.sql.SQLException;
 import java.util.ArrayList;
@@ -111,5 +113,10 @@ public class TestTimelineMetricStore implements TimelineMetricStore {
   public List<String> getLiveInstances() {
     return Collections.emptyList();
   }
-  
+
+  @Override
+  public Map<String, TimelineMetricMetadataKey> getUuids() throws SQLException, IOException {
+    return null;
+  }
+
 }
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricsAggregatorMemorySink.java b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricsAggregatorMemorySink.java
index 53f6f6c..51cde4a 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricsAggregatorMemorySink.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricsAggregatorMemorySink.java
@@ -92,7 +92,7 @@ public class TimelineMetricsAggregatorMemorySink
       TimelineClusterMetric clusterMetricClone =
           new TimelineClusterMetric(clusterMetric.getMetricName(),
               clusterMetric.getAppId(), clusterMetric.getInstanceId(),
-              clusterMetric.getTimestamp(), clusterMetric.getType());
+              clusterMetric.getTimestamp());
       MetricHostAggregate hostAggregate = entry.getValue();
       MetricHostAggregate hostAggregateClone = new MetricHostAggregate(
           hostAggregate.getSum(), (int) hostAggregate.getNumberOfSamples(),
@@ -116,7 +116,7 @@ public class TimelineMetricsAggregatorMemorySink
       TimelineClusterMetric clusterMetricClone =
           new TimelineClusterMetric(clusterMetric.getMetricName(),
               clusterMetric.getAppId(), clusterMetric.getInstanceId(),
-              clusterMetric.getTimestamp(), clusterMetric.getType());
+              clusterMetric.getTimestamp());
       MetricClusterAggregate clusterAggregate = entry.getValue();
       MetricClusterAggregate clusterAggregateClone = new MetricClusterAggregate(
           clusterAggregate.getSum(), (int) clusterAggregate.getNumberOfHosts(),
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/DownSamplerTest.java b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/DownSamplerTest.java
index d02d2a8..7fb8e78 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/DownSamplerTest.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/DownSamplerTest.java
@@ -20,6 +20,7 @@ package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline
 
 import junit.framework.Assert;
 import org.apache.hadoop.conf.Configuration;
+import org.junit.Ignore;
 import org.junit.Test;
 
 import java.util.List;
@@ -57,6 +58,7 @@ public class DownSamplerTest {
     Assert.assertTrue(downSamplers.get(0) instanceof TopNDownSampler);
   }
 
+  @Ignore
   @Test
   public void testPrepareTopNDownSamplingStatement() throws Exception {
     Configuration configuration = new Configuration();
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/ITClusterAggregator.java b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/ITClusterAggregator.java
index 86c9b40..e66e65d 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/ITClusterAggregator.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/ITClusterAggregator.java
@@ -94,7 +94,6 @@ public class ITClusterAggregator extends AbstractMiniHBaseClusterTest {
     Condition condition = new DefaultCondition(null, null, null, null, startTime,
       endTime, null, null, true);
     condition.setStatement(String.format(GET_CLUSTER_AGGREGATE_SQL,
-      PhoenixTransactSQL.getNaiveTimeRangeHint(startTime, NATIVE_TIME_RANGE_DELTA),
       METRICS_CLUSTER_AGGREGATE_TABLE_NAME));
 
     PreparedStatement pstmt = PhoenixTransactSQL.prepareGetMetricsSqlStmt(conn, condition);
@@ -169,7 +168,6 @@ public class ITClusterAggregator extends AbstractMiniHBaseClusterTest {
     Condition condition = new DefaultCondition(null, null, null, null, startTime,
       endTime, null, null, true);
     condition.setStatement(String.format(GET_CLUSTER_AGGREGATE_SQL,
-      PhoenixTransactSQL.getNaiveTimeRangeHint(startTime, NATIVE_TIME_RANGE_DELTA),
       METRICS_CLUSTER_AGGREGATE_TABLE_NAME));
 
     PreparedStatement pstmt = PhoenixTransactSQL.prepareGetMetricsSqlStmt(conn, condition);
@@ -229,7 +227,6 @@ public class ITClusterAggregator extends AbstractMiniHBaseClusterTest {
     Condition condition = new DefaultCondition(null, null, null, null, startTime,
       endTime, null, null, true);
     condition.setStatement(String.format(GET_CLUSTER_AGGREGATE_SQL,
-      PhoenixTransactSQL.getNaiveTimeRangeHint(startTime, NATIVE_TIME_RANGE_DELTA),
       METRICS_CLUSTER_AGGREGATE_TABLE_NAME));
 
     PreparedStatement pstmt = PhoenixTransactSQL.prepareGetMetricsSqlStmt(conn, condition);
@@ -263,7 +260,7 @@ public class ITClusterAggregator extends AbstractMiniHBaseClusterTest {
   public void testAggregateDailyClusterMetrics() throws Exception {
     // GIVEN
     TimelineMetricAggregator agg =
-      TimelineMetricAggregatorFactory.createTimelineClusterAggregatorDaily(hdb, getConfigurationForTest(false), null);
+      TimelineMetricAggregatorFactory.createTimelineClusterAggregatorDaily(hdb, getConfigurationForTest(false), null, null);
 
     // this time can be virtualized! or made independent from real clock
     long startTime = System.currentTimeMillis();
@@ -308,7 +305,7 @@ public class ITClusterAggregator extends AbstractMiniHBaseClusterTest {
   public void testShouldAggregateClusterOnMinuteProperly() throws Exception {
 
     TimelineMetricAggregator agg =
-      TimelineMetricAggregatorFactory.createTimelineClusterAggregatorMinute(hdb, getConfigurationForTest(false), null);
+      TimelineMetricAggregatorFactory.createTimelineClusterAggregatorMinute(hdb, getConfigurationForTest(false), null, null);
 
     long startTime = System.currentTimeMillis();
     long ctime = startTime;
@@ -375,7 +372,7 @@ public class ITClusterAggregator extends AbstractMiniHBaseClusterTest {
   public void testShouldAggregateClusterOnHourProperly() throws Exception {
     // GIVEN
     TimelineMetricAggregator agg =
-      TimelineMetricAggregatorFactory.createTimelineClusterAggregatorHourly(hdb, getConfigurationForTest(false), null);
+      TimelineMetricAggregatorFactory.createTimelineClusterAggregatorHourly(hdb, getConfigurationForTest(false), null, null);
 
     // this time can be virtualized! or made independent from real clock
     long startTime = System.currentTimeMillis();
@@ -419,7 +416,7 @@ public class ITClusterAggregator extends AbstractMiniHBaseClusterTest {
   public void testShouldAggregateDifferentMetricsOnHourProperly() throws Exception {
     // GIVEN
     TimelineMetricAggregator agg =
-      TimelineMetricAggregatorFactory.createTimelineClusterAggregatorHourly(hdb, getConfigurationForTest(false), null);
+      TimelineMetricAggregatorFactory.createTimelineClusterAggregatorHourly(hdb, getConfigurationForTest(false), null, null);
 
     long startTime = System.currentTimeMillis();
     long ctime = startTime;
@@ -507,7 +504,6 @@ public class ITClusterAggregator extends AbstractMiniHBaseClusterTest {
       new ArrayList<String>() {{ add("cpu_user"); }}, null, "app1", null,
       startTime, endTime, null, null, true);
     condition.setStatement(String.format(GET_CLUSTER_AGGREGATE_SQL,
-      PhoenixTransactSQL.getNaiveTimeRangeHint(startTime, NATIVE_TIME_RANGE_DELTA),
       METRICS_CLUSTER_AGGREGATE_TABLE_NAME));
 
     PreparedStatement pstmt = PhoenixTransactSQL.prepareGetMetricsSqlStmt(conn, condition);
@@ -583,7 +579,6 @@ public class ITClusterAggregator extends AbstractMiniHBaseClusterTest {
     Condition condition = new DefaultCondition(null, null, null, null, startTime,
       endTime, null, null, true);
     condition.setStatement(String.format(GET_CLUSTER_AGGREGATE_SQL,
-      PhoenixTransactSQL.getNaiveTimeRangeHint(startTime, NATIVE_TIME_RANGE_DELTA),
       METRICS_CLUSTER_AGGREGATE_TABLE_NAME));
 
     PreparedStatement pstmt = PhoenixTransactSQL.prepareGetMetricsSqlStmt(conn, condition);
@@ -611,7 +606,7 @@ public class ITClusterAggregator extends AbstractMiniHBaseClusterTest {
   public void testAggregationUsingGroupByQuery() throws Exception {
     // GIVEN
     TimelineMetricAggregator agg =
-      TimelineMetricAggregatorFactory.createTimelineClusterAggregatorHourly(hdb, getConfigurationForTest(true), null);
+      TimelineMetricAggregatorFactory.createTimelineClusterAggregatorHourly(hdb, getConfigurationForTest(true), null, null);
 
     long startTime = System.currentTimeMillis();
     long ctime = startTime;
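
Note: in the hunks above the SQL template no longer receives the getNaiveTimeRangeHint(...) argument; GET_CLUSTER_AGGREGATE_SQL is now formatted with the table name alone. A minimal sketch of the statement setup after the change, using the same names as the test:

    // Before: String.format(GET_CLUSTER_AGGREGATE_SQL,
    //                       PhoenixTransactSQL.getNaiveTimeRangeHint(startTime, NATIVE_TIME_RANGE_DELTA),
    //                       METRICS_CLUSTER_AGGREGATE_TABLE_NAME)
    // After: the template keeps a single %s slot, the table name.
    condition.setStatement(String.format(GET_CLUSTER_AGGREGATE_SQL,
        METRICS_CLUSTER_AGGREGATE_TABLE_NAME));
    PreparedStatement pstmt = PhoenixTransactSQL.prepareGetMetricsSqlStmt(conn, condition);
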
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/ITMetricAggregator.java b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/ITMetricAggregator.java
index 75b3f91..14ac4d7 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/ITMetricAggregator.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/ITMetricAggregator.java
@@ -85,7 +85,7 @@ public class ITMetricAggregator extends AbstractMiniHBaseClusterTest {
     // GIVEN
     TimelineMetricAggregator aggregatorMinute =
       TimelineMetricAggregatorFactory.createTimelineMetricAggregatorMinute(hdb,
-        getConfigurationForTest(false), null);
+        getConfigurationForTest(false), null, null);
     TimelineMetricReadHelper readHelper = new TimelineMetricReadHelper(false);
 
     long startTime = System.currentTimeMillis();
@@ -146,7 +146,7 @@ public class ITMetricAggregator extends AbstractMiniHBaseClusterTest {
     // GIVEN
     TimelineMetricAggregator aggregator =
       TimelineMetricAggregatorFactory.createTimelineMetricAggregatorHourly(hdb,
-        getConfigurationForTest(false), null);
+        getConfigurationForTest(false), null, null);
     TimelineMetricReadHelper readHelper = new TimelineMetricReadHelper(false);
     long startTime = System.currentTimeMillis();
 
@@ -209,7 +209,7 @@ public class ITMetricAggregator extends AbstractMiniHBaseClusterTest {
     // GIVEN
     TimelineMetricAggregator aggregator =
       TimelineMetricAggregatorFactory.createTimelineMetricAggregatorDaily(hdb,
-        getConfigurationForTest(false), null);
+        getConfigurationForTest(false), null, null);
     TimelineMetricReadHelper readHelper = new TimelineMetricReadHelper(false);
     long startTime = System.currentTimeMillis();
 
@@ -271,7 +271,7 @@ public class ITMetricAggregator extends AbstractMiniHBaseClusterTest {
     // GIVEN
     TimelineMetricAggregator aggregatorMinute =
       TimelineMetricAggregatorFactory.createTimelineMetricAggregatorMinute(hdb,
-        getConfigurationForTest(true), null);
+        getConfigurationForTest(true), null, null);
     TimelineMetricReadHelper readHelper = new TimelineMetricReadHelper(false);
 
     long startTime = System.currentTimeMillis();
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricClusterAggregatorSecondTest.java b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricClusterAggregatorSecondTest.java
index 6541b2c..937dd80 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricClusterAggregatorSecondTest.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricClusterAggregatorSecondTest.java
@@ -19,9 +19,11 @@ package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline
 
 
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.availability.AggregationTaskRunner.AGGREGATOR_NAME.METRIC_AGGREGATE_SECOND;
+import static org.easymock.EasyMock.anyBoolean;
 import static org.easymock.EasyMock.anyObject;
 import static org.easymock.EasyMock.createNiceMock;
 import static org.easymock.EasyMock.expect;
+import static org.easymock.EasyMock.expectLastCall;
 import static org.easymock.EasyMock.replay;
 
 import java.sql.ResultSet;
@@ -51,6 +53,8 @@ public class TimelineMetricClusterAggregatorSecondTest {
 
     Configuration configuration = new Configuration();
     TimelineMetricMetadataManager metricMetadataManagerMock = createNiceMock(TimelineMetricMetadataManager.class);
+    expect(metricMetadataManagerMock.getUuid(anyObject(TimelineClusterMetric.class))).andReturn(new byte[16]).once();
+    replay(metricMetadataManagerMock);
 
     TimelineMetricClusterAggregatorSecond secondAggregator = new TimelineMetricClusterAggregatorSecond(
       METRIC_AGGREGATE_SECOND, metricMetadataManagerMock, null,
@@ -84,7 +88,7 @@ public class TimelineMetricClusterAggregatorSecondTest {
     Map<TimelineClusterMetric, Double> timelineClusterMetricMap = secondAggregator.sliceFromTimelineMetric(counterMetric, timeSlices);
 
     TimelineClusterMetric timelineClusterMetric = new TimelineClusterMetric(counterMetric.getMetricName(), counterMetric.getAppId(),
-      counterMetric.getInstanceId(), 0l, null);
+      counterMetric.getInstanceId(), 0l);
 
     timelineClusterMetric.setTimestamp(roundedStartTime + 2*sliceInterval);
     Assert.assertTrue(timelineClusterMetricMap.containsKey(timelineClusterMetric));
@@ -103,7 +107,7 @@ public class TimelineMetricClusterAggregatorSecondTest {
     timelineClusterMetricMap = secondAggregator.sliceFromTimelineMetric(metric, timeSlices);
 
     timelineClusterMetric = new TimelineClusterMetric(metric.getMetricName(), metric.getAppId(),
-      metric.getInstanceId(), 0l, null);
+      metric.getInstanceId(), 0l);
 
     timelineClusterMetric.setTimestamp(roundedStartTime + 2*sliceInterval);
     Assert.assertTrue(timelineClusterMetricMap.containsKey(timelineClusterMetric));
@@ -168,7 +172,7 @@ public class TimelineMetricClusterAggregatorSecondTest {
 
     Assert.assertEquals(aggregateClusterMetrics.size(), 4);
     TimelineClusterMetric timelineClusterMetric = new TimelineClusterMetric(timelineMetric.getMetricName(),
-      timelineMetric.getAppId(), timelineMetric.getInstanceId(), startTime + 30*seconds, timelineMetric.getType());
+      timelineMetric.getAppId(), timelineMetric.getInstanceId(), startTime + 30*seconds);
 
     Assert.assertTrue(aggregateClusterMetrics.containsKey(timelineClusterMetric));
     Assert.assertEquals(aggregateClusterMetrics.get(timelineClusterMetric).getSum(), 1.0);
@@ -330,6 +334,29 @@ public class TimelineMetricClusterAggregatorSecondTest {
     TimelineMetricMetadataManager metricMetadataManagerMock = createNiceMock(TimelineMetricMetadataManager.class);
 
     expect(metricMetadataManagerMock.getMetadataCacheValue((TimelineMetricMetadataKey) anyObject())).andReturn(null).anyTimes();
+
+    /*
+    m1-h1-a1
+    m2-h1-a1
+    m2-h1-a2
+    m2-h2-a1
+    m2-h2-a2
+    m2-h3-a2
+
+    So live_hosts : a1 = 2
+       live_hosts : a2 = 3
+     */
+
+    TimelineMetric metric1  = new TimelineMetric("m1", "h1", "a1", null);
+    TimelineMetric metric2  = new TimelineMetric("m2", "h1", "a1", null);
+    TimelineMetric metric3  = new TimelineMetric("m2", "h1", "a2", null);
+    TimelineMetric metric4  = new TimelineMetric("m2", "h2", "a1", null);
+    TimelineMetric metric5  = new TimelineMetric("m2", "h2", "a2", null);
+    TimelineMetric metric6  = new TimelineMetric("m2", "h3", "a2", null);
+
+    expect(metricMetadataManagerMock.getMetricFromUuid((byte[]) anyObject())).
+      andReturn(metric1).andReturn(metric2).andReturn(metric3).
+      andReturn(metric4).andReturn(metric5).andReturn(metric6);
     replay(metricMetadataManagerMock);
 
     TimelineMetricClusterAggregatorSecond secondAggregator = new TimelineMetricClusterAggregatorSecond(
@@ -344,40 +371,16 @@ public class TimelineMetricClusterAggregatorSecondTest {
     ResultSet rs = createNiceMock(ResultSet.class);
 
     TreeMap<Long, Double> metricValues = new TreeMap<>();
-    metricValues.put(startTime + 15*seconds, 1.0);
-    metricValues.put(startTime + 45*seconds, 2.0);
-    metricValues.put(startTime + 75*seconds, 3.0);
-    metricValues.put(startTime + 105*seconds, 4.0);
+    metricValues.put(startTime + 15 * seconds, 1.0);
+    metricValues.put(startTime + 45 * seconds, 2.0);
+    metricValues.put(startTime + 75 * seconds, 3.0);
+    metricValues.put(startTime + 105 * seconds, 4.0);
 
     expect(rs.next()).andReturn(true).times(6);
     expect(rs.next()).andReturn(false);
 
-    /*
-    m1-h1-a1
-    m2-h1-a1
-    m2-h1-a2
-    m2-h2-a1
-    m2-h2-a2
-    m2-h3-a2
-
-    So live_hosts : a1 = 2
-       live_hosts : a2 = 3
-     */
-    expect(rs.getString("METRIC_NAME")).andReturn("m1").times(1);
-    expect(rs.getString("METRIC_NAME")).andReturn("m2").times(5);
-
-    expect(rs.getString("HOSTNAME")).andReturn("h1").times(3);
-    expect(rs.getString("HOSTNAME")).andReturn("h2").times(2);
-    expect(rs.getString("HOSTNAME")).andReturn("h3").times(1);
-
-    expect(rs.getString("APP_ID")).andReturn("a1").times(2);
-    expect(rs.getString("APP_ID")).andReturn("a2").times(1);
-    expect(rs.getString("APP_ID")).andReturn("a1").times(1);
-    expect(rs.getString("APP_ID")).andReturn("a2").times(2);
-
     expect(rs.getLong("SERVER_TIME")).andReturn(now - 150000).times(6);
     expect(rs.getLong("START_TIME")).andReturn(now - 150000).times(6);
-    expect(rs.getString("UNITS")).andReturn(null).times(6);
 
     ObjectMapper mapper = new ObjectMapper();
     expect(rs.getString("METRICS")).andReturn(mapper.writeValueAsString(metricValues)).times(6);
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TestMetadataManager.java b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TestMetadataManager.java
index 3adf770..ca1fc20 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TestMetadataManager.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TestMetadataManager.java
@@ -17,6 +17,7 @@
  */
 package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.discovery;
 
+import static org.easymock.EasyMock.createNiceMock;
 import static org.easymock.EasyMock.expect;
 import static org.easymock.EasyMock.replay;
 
@@ -27,7 +28,7 @@ import org.apache.hadoop.metrics2.sink.timeline.TimelineMetricMetadata;
 import org.apache.hadoop.metrics2.sink.timeline.TimelineMetrics;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.AbstractMiniHBaseClusterTest;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricsFilter;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.TimelineClusterMetric;
 import org.easymock.EasyMock;
 import org.junit.Before;
 import org.junit.Test;
@@ -35,6 +36,10 @@ import org.junit.Test;
 import java.io.IOException;
 import java.net.URISyntaxException;
 import java.sql.SQLException;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.List;
 import java.util.Map;
 import java.util.Set;
 import java.util.TreeMap;
@@ -44,8 +49,7 @@ public class TestMetadataManager extends AbstractMiniHBaseClusterTest {
 
   @Before
   public void insertDummyRecords() throws IOException, SQLException, URISyntaxException {
-    // Initialize new manager
-    metadataManager = new TimelineMetricMetadataManager(new Configuration(), hdb);
+
     final long now = System.currentTimeMillis();
 
     TimelineMetrics timelineMetrics = new TimelineMetrics();
@@ -77,29 +81,13 @@ public class TestMetadataManager extends AbstractMiniHBaseClusterTest {
     }});
     timelineMetrics.getMetrics().add(metric2);
 
+    Configuration metricsConf = createNiceMock(Configuration.class);
+    expect(metricsConf.get("timeline.metrics.service.operation.mode")).andReturn("distributed").anyTimes();
+    replay(metricsConf);
 
-    //Test whitelisting
-    TimelineMetric metric3 = new TimelineMetric();
-    metric3.setMetricName("dummy_metric3");
-    metric3.setHostName("dummy_host3");
-    metric3.setTimestamp(now);
-    metric3.setStartTime(now - 1000);
-    metric3.setAppId("dummy_app3");
-    metric3.setType("Integer");
-    metric3.setMetricValues(new TreeMap<Long, Double>() {{
-      put(now - 100, 1.0);
-      put(now - 200, 2.0);
-      put(now - 300, 3.0);
-    }});
-    timelineMetrics.getMetrics().add(metric3);
-
-    Configuration metricsConf = new Configuration();
-    TimelineMetricConfiguration configuration = EasyMock.createNiceMock(TimelineMetricConfiguration.class);
-    expect(configuration.getMetricsConf()).andReturn(metricsConf).once();
-    replay(configuration);
-    TimelineMetricsFilter.initializeMetricFilter(configuration);
-    TimelineMetricsFilter.addToWhitelist("dummy_metric1");
-    TimelineMetricsFilter.addToWhitelist("dummy_metric2");
+    // Initialize new manager
+    metadataManager = new TimelineMetricMetadataManager(metricsConf, hdb);
+    hdb.setMetadataInstance(metadataManager);
 
     hdb.insertMetricRecordsWithMetadata(metadataManager, timelineMetrics, true);
   }
@@ -109,20 +97,16 @@ public class TestMetadataManager extends AbstractMiniHBaseClusterTest {
     Map<TimelineMetricMetadataKey, TimelineMetricMetadata> cachedData = metadataManager.getMetadataCache();
 
     Assert.assertNotNull(cachedData);
-    Assert.assertEquals(3, cachedData.size());
-    TimelineMetricMetadataKey key1 = new TimelineMetricMetadataKey("dummy_metric1", "dummy_app1");
-    TimelineMetricMetadataKey key2 = new TimelineMetricMetadataKey("dummy_metric2", "dummy_app2");
-    TimelineMetricMetadataKey key3 = new TimelineMetricMetadataKey("dummy_metric3", "dummy_app3");
+    Assert.assertEquals(2, cachedData.size());
+    TimelineMetricMetadataKey key1 = new TimelineMetricMetadataKey("dummy_metric1", "dummy_app1", null);
+    TimelineMetricMetadataKey key2 = new TimelineMetricMetadataKey("dummy_metric2", "dummy_app2", "instance2");
     TimelineMetricMetadata value1 = new TimelineMetricMetadata("dummy_metric1",
-      "dummy_app1", "Integer", null, 1L, true, false);
+      "dummy_app1", null, null, "Integer", 1L, true, true);
     TimelineMetricMetadata value2 = new TimelineMetricMetadata("dummy_metric2",
-      "dummy_app2", "Integer", null, 1L, true, false);
-    TimelineMetricMetadata value3 = new TimelineMetricMetadata("dummy_metric3",
-      "dummy_app3", "Integer", null, 1L, true, true);
+      "dummy_app2", "instance2", null, "Integer", 1L, true, true);
 
     Assert.assertEquals(value1, cachedData.get(key1));
     Assert.assertEquals(value2, cachedData.get(key2));
-    Assert.assertEquals(value3, cachedData.get(key3));
 
     TimelineMetricMetadataSync syncRunnable = new TimelineMetricMetadataSync(metadataManager);
     syncRunnable.run();
@@ -131,26 +115,125 @@ public class TestMetadataManager extends AbstractMiniHBaseClusterTest {
       hdb.getTimelineMetricMetadata();
 
     Assert.assertNotNull(savedData);
-    Assert.assertEquals(3, savedData.size());
+    Assert.assertEquals(2, savedData.size());
     Assert.assertEquals(value1, savedData.get(key1));
     Assert.assertEquals(value2, savedData.get(key2));
-    Assert.assertEquals(value3, savedData.get(key3));
 
-    Map<String, Set<String>> cachedHostData = metadataManager.getHostedAppsCache();
-    Map<String, Set<String>> savedHostData = metadataManager.getHostedAppsFromStore();
+    Map<String, TimelineMetricHostMetadata> cachedHostData = metadataManager.getHostedAppsCache();
+    Map<String, TimelineMetricHostMetadata> savedHostData = metadataManager.getHostedAppsFromStore();
     Assert.assertEquals(cachedData.size(), savedData.size());
-    Assert.assertEquals("dummy_app1", cachedHostData.get("dummy_host1").iterator().next());
-    Assert.assertEquals("dummy_app2", cachedHostData.get("dummy_host2").iterator().next());
-    Assert.assertEquals("dummy_app3", cachedHostData.get("dummy_host3").iterator().next());
-    Assert.assertEquals("dummy_app1", savedHostData.get("dummy_host1").iterator().next());
-    Assert.assertEquals("dummy_app2", savedHostData.get("dummy_host2").iterator().next());
-    Assert.assertEquals("dummy_app3", cachedHostData.get("dummy_host3").iterator().next());
-
+    Assert.assertEquals("dummy_app1", cachedHostData.get("dummy_host1").getHostedApps().iterator().next());
+    Assert.assertEquals("dummy_app2", cachedHostData.get("dummy_host2").getHostedApps().iterator().next());
+    Assert.assertEquals("dummy_app1", savedHostData.get("dummy_host1").getHostedApps().iterator().next());
+    Assert.assertEquals("dummy_app2", savedHostData.get("dummy_host2").getHostedApps().iterator().next());
 
     Map<String, Set<String>> cachedHostInstanceData = metadataManager.getHostedInstanceCache();
     Map<String, Set<String>> savedHostInstanceData = metadataManager.getHostedInstancesFromStore();
     Assert.assertEquals(cachedHostInstanceData.size(), savedHostInstanceData.size());
     Assert.assertEquals("dummy_host2", cachedHostInstanceData.get("instance2").iterator().next());
+  }
 
+  @Test
+  public void testGenerateUuidFromMetric() throws SQLException {
+
+    TimelineMetric timelineMetric = new TimelineMetric();
+    timelineMetric.setMetricName("regionserver.Server.blockCacheExpressHitPercent");
+    timelineMetric.setAppId("hbase");
+    timelineMetric.setHostName("avijayan-ams-2.openstacklocal");
+    timelineMetric.setInstanceId("test1");
+
+    byte[] uuid = metadataManager.getUuid(timelineMetric);
+    Assert.assertNotNull(uuid);
+    Assert.assertEquals(uuid.length, 20);
+
+    byte[] uuidWithoutHost = metadataManager.getUuid(new TimelineClusterMetric(timelineMetric.getMetricName(), timelineMetric.getAppId(), timelineMetric.getInstanceId(), -1));
+    Assert.assertNotNull(uuidWithoutHost);
+    Assert.assertEquals(uuidWithoutHost.length, 16);
+
+    TimelineMetric metric2 = metadataManager.getMetricFromUuid(uuid);
+    Assert.assertEquals(metric2, timelineMetric);
+    TimelineMetric metric3 = metadataManager.getMetricFromUuid(uuidWithoutHost);
+    Assert.assertEquals(metric3.getMetricName(), timelineMetric.getMetricName());
+    Assert.assertEquals(metric3.getAppId(), timelineMetric.getAppId());
+    Assert.assertEquals(metric3.getInstanceId(), timelineMetric.getInstanceId());
+    Assert.assertEquals(metric3.getHostName(), null);
+
+    String metricName1 = metadataManager.getMetricNameFromUuid(uuid);
+    Assert.assertEquals(metricName1, "regionserver.Server.blockCacheExpressHitPercent");
+    String metricName2 = metadataManager.getMetricNameFromUuid(uuidWithoutHost);
+    Assert.assertEquals(metricName2, "regionserver.Server.blockCacheExpressHitPercent");
   }
+
+  @Test
+  public void testWildcardSanitization() throws IOException, SQLException, URISyntaxException {
+    // Initialize new manager
+    metadataManager = new TimelineMetricMetadataManager(new Configuration(), hdb);
+    final long now = System.currentTimeMillis();
+
+    TimelineMetrics timelineMetrics = new TimelineMetrics();
+
+    TimelineMetric metric1 = new TimelineMetric();
+    metric1.setMetricName("dummy_m1");
+    metric1.setHostName("dummy_host1");
+    metric1.setTimestamp(now);
+    metric1.setStartTime(now - 1000);
+    metric1.setAppId("dummy_app1");
+    metric1.setType("Integer");
+    metric1.setMetricValues(new TreeMap<Long, Double>() {{
+      put(now - 100, 1.0);
+      put(now - 200, 2.0);
+      put(now - 300, 3.0);
+    }});
+    timelineMetrics.getMetrics().add(metric1);
+
+    TimelineMetric metric2 = new TimelineMetric();
+    metric2.setMetricName("dummy_m2");
+    metric2.setHostName("dummy_host2");
+    metric2.setTimestamp(now);
+    metric2.setStartTime(now - 1000);
+    metric2.setAppId("dummy_app2");
+    metric2.setType("Integer");
+    metric2.setMetricValues(new TreeMap<Long, Double>() {{
+      put(now - 100, 1.0);
+      put(now - 200, 2.0);
+      put(now - 300, 3.0);
+    }});
+    timelineMetrics.getMetrics().add(metric2);
+
+    TimelineMetric metric3 = new TimelineMetric();
+    metric3.setMetricName("gummy_3");
+    metric3.setHostName("dummy_3h");
+    metric3.setTimestamp(now);
+    metric3.setStartTime(now - 1000);
+    metric3.setAppId("dummy_app3");
+    metric3.setType("Integer");
+    metric3.setMetricValues(new TreeMap<Long, Double>() {{
+      put(now - 100, 1.0);
+      put(now - 200, 2.0);
+      put(now - 300, 3.0);
+    }});
+    timelineMetrics.getMetrics().add(metric3);
+
+    Configuration metricsConf = new Configuration();
+    TimelineMetricConfiguration configuration = EasyMock.createNiceMock(TimelineMetricConfiguration.class);
+    expect(configuration.getMetricsConf()).andReturn(metricsConf).once();
+    replay(configuration);
+
+    hdb.insertMetricRecordsWithMetadata(metadataManager, timelineMetrics, true);
+
+    List<byte[]> uuids = metadataManager.getUuids(Collections.singletonList("dummy_m%"),
+      Collections.singletonList("dummy_host2"), "dummy_app1", null);
+    Assert.assertTrue(uuids.size() == 2);
+
+    uuids = metadataManager.getUuids(Collections.singletonList("dummy_m%"),
+      Collections.singletonList("dummy_host%"), "dummy_app2", null);
+    Assert.assertTrue(uuids.size() == 4);
+
+    Collection<String> metrics = Arrays.asList("dummy_m%", "dummy_3", "dummy_m2");
+    List<String> hosts = Arrays.asList("dummy_host%", "dummy_3h");
+    uuids = metadataManager.getUuids(metrics, hosts, "dummy_app2", null);
+    Assert.assertTrue(uuids.size() == 9);
+  }
+
+
 }
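
Note: a short usage recap of the round-trip exercised by testGenerateUuidFromMetric above: a host-level metric yields a 20-byte UUID, its host-less cluster counterpart a 16-byte one, and both resolve back through the same manager:

    TimelineMetric m = new TimelineMetric();
    m.setMetricName("regionserver.Server.blockCacheExpressHitPercent");
    m.setAppId("hbase");
    m.setHostName("avijayan-ams-2.openstacklocal");
    m.setInstanceId("test1");

    byte[] hostUuid = metadataManager.getUuid(m);             // length 20
    byte[] clusterUuid = metadataManager.getUuid(             // length 16
        new TimelineClusterMetric(m.getMetricName(), m.getAppId(), m.getInstanceId(), -1));

    TimelineMetric roundTrip = metadataManager.getMetricFromUuid(hostUuid);  // equals m
    String name = metadataManager.getMetricNameFromUuid(clusterUuid);        // name only
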
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TestMetadataSync.java b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TestMetadataSync.java
index a524b13..8d486e1 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TestMetadataSync.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TestMetadataSync.java
@@ -41,19 +41,19 @@ public class TestMetadataSync {
     PhoenixHBaseAccessor hBaseAccessor = createNiceMock(PhoenixHBaseAccessor.class);
 
     final TimelineMetricMetadata testMetadata1 = new TimelineMetricMetadata(
-      "m1", "a1", "", GAUGE.name(), System.currentTimeMillis(), true, false);
+      "m1", "a1", null, "", GAUGE.name(), System.currentTimeMillis(), true, false);
     final TimelineMetricMetadata testMetadata2 = new TimelineMetricMetadata(
-      "m2", "a2", "", GAUGE.name(), System.currentTimeMillis(), true, false);
+      "m2", "a2", null, "", GAUGE.name(), System.currentTimeMillis(), true, false);
 
     Map<TimelineMetricMetadataKey, TimelineMetricMetadata> metadata =
       new HashMap<TimelineMetricMetadataKey, TimelineMetricMetadata>() {{
-        put(new TimelineMetricMetadataKey("m1", "a1"), testMetadata1);
-        put(new TimelineMetricMetadataKey("m2", "a2"), testMetadata2);
+        put(new TimelineMetricMetadataKey("m1", "a1", null), testMetadata1);
+        put(new TimelineMetricMetadataKey("m2", "a2", null), testMetadata2);
       }};
 
-    Map<String, Set<String>> hostedApps = new HashMap<String, Set<String>>() {{
-      put("h1", new HashSet<>(Arrays.asList("a1")));
-      put("h2", new HashSet<>(Arrays.asList("a1", "a2")));
+    Map<String, TimelineMetricHostMetadata> hostedApps = new HashMap<String, TimelineMetricHostMetadata>() {{
+      put("h1", new TimelineMetricHostMetadata(new HashSet<>(Arrays.asList("a1"))));
+      put("h2", new TimelineMetricHostMetadata(new HashSet<>(Arrays.asList("a1", "a2"))));
     }};
 
     Map<String, Set<String>> hostedInstances = new HashMap<String, Set<String>>() {{
@@ -61,14 +61,14 @@ public class TestMetadataSync {
       put("i2", new HashSet<>(Arrays.asList("h1", "h2")));
     }};
 
-    expect(configuration.get("timeline.metrics.service.operation.mode", "")).andReturn("distributed");
+    expect(configuration.get("timeline.metrics.service.operation.mode")).andReturn("distributed");
     expect(hBaseAccessor.getTimelineMetricMetadata()).andReturn(metadata);
     expect(hBaseAccessor.getHostedAppsMetadata()).andReturn(hostedApps);
     expect(hBaseAccessor.getInstanceHostsMetdata()).andReturn(hostedInstances);
 
     replay(configuration, hBaseAccessor);
 
-    TimelineMetricMetadataManager metadataManager = new TimelineMetricMetadataManager(new Configuration(), hBaseAccessor);
+    TimelineMetricMetadataManager metadataManager = new TimelineMetricMetadataManager(configuration, hBaseAccessor);
 
     metadataManager.metricMetadataSync = new TimelineMetricMetadataSync(metadataManager);
 
@@ -78,13 +78,13 @@ public class TestMetadataSync {
 
     metadata = metadataManager.getMetadataCache();
     Assert.assertEquals(2, metadata.size());
-    Assert.assertTrue(metadata.containsKey(new TimelineMetricMetadataKey("m1", "a1")));
-    Assert.assertTrue(metadata.containsKey(new TimelineMetricMetadataKey("m2", "a2")));
+    Assert.assertTrue(metadata.containsKey(new TimelineMetricMetadataKey("m1", "a1", null)));
+    Assert.assertTrue(metadata.containsKey(new TimelineMetricMetadataKey("m2", "a2", null)));
 
     hostedApps = metadataManager.getHostedAppsCache();
     Assert.assertEquals(2, hostedApps.size());
-    Assert.assertEquals(1, hostedApps.get("h1").size());
-    Assert.assertEquals(2, hostedApps.get("h2").size());
+    Assert.assertEquals(1, hostedApps.get("h1").getHostedApps().size());
+    Assert.assertEquals(2, hostedApps.get("h2").getHostedApps().size());
 
     hostedInstances = metadataManager.getHostedInstanceCache();
     Assert.assertEquals(2, hostedInstances.size());
@@ -99,11 +99,11 @@ public class TestMetadataSync {
     PhoenixHBaseAccessor hBaseAccessor = createNiceMock(PhoenixHBaseAccessor.class);
 
     TimelineMetricMetadata metadata1 = new TimelineMetricMetadata(
-      "xxx.abc.yyy", "a1", "", GAUGE.name(), System.currentTimeMillis(), true, false);
+      "xxx.abc.yyy", "a1", null, "", GAUGE.name(), System.currentTimeMillis(), true, false);
     TimelineMetricMetadata metadata2 = new TimelineMetricMetadata(
-      "xxx.cdef.yyy", "a2", "", GAUGE.name(), System.currentTimeMillis(), true, false);
+      "xxx.cdef.yyy", "a2", null, "", GAUGE.name(), System.currentTimeMillis(), true, false);
     TimelineMetricMetadata metadata3 = new TimelineMetricMetadata(
-      "xxx.pqr.zzz", "a3", "", GAUGE.name(), System.currentTimeMillis(), true, false);
+      "xxx.pqr.zzz", "a3", null, "", GAUGE.name(), System.currentTimeMillis(), true, false);
 
     expect(configuration.get(TIMELINE_METRIC_METADATA_FILTERS)).andReturn("abc,cde");
 
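
The extra null argument threaded through the constructors above reflects TimelineMetricMetadataKey and TimelineMetricMetadata gaining an instanceId dimension. Since the key is used in a HashMap-backed cache, equals() and hashCode() have to cover all three fields; a minimal sketch of the usual value-object shape (class and field names here are illustrative, the real class may differ):

    import java.util.Objects;

    public final class MetadataKeySketch {
      private final String metricName;
      private final String appId;
      private final String instanceId; // null when the metric is not instance-scoped

      public MetadataKeySketch(String metricName, String appId, String instanceId) {
        this.metricName = metricName;
        this.appId = appId;
        this.instanceId = instanceId;
      }

      @Override
      public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof MetadataKeySketch)) return false;
        MetadataKeySketch k = (MetadataKeySketch) o;
        return Objects.equals(metricName, k.metricName)
            && Objects.equals(appId, k.appId)
            && Objects.equals(instanceId, k.instanceId);
      }

      @Override
      public int hashCode() {
        return Objects.hash(metricName, appId, instanceId);
      }
    }

With Objects.equals(), two keys carrying a null instanceId still compare equal, which is what the containsKey assertions on TimelineMetricMetadataKey("m1", "a1", null) above rely on.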
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/uuid/TimelineMetricUuidManagerTest.java b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/uuid/TimelineMetricUuidManagerTest.java
new file mode 100644
index 0000000..d1b3f01
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/uuid/TimelineMetricUuidManagerTest.java
@@ -0,0 +1,184 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.uuid;
+
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.TimelineClusterMetric;
+import org.junit.Assert;
+import org.junit.Test;
+
+import java.io.BufferedReader;
+import java.io.FileInputStream;
+import java.io.IOException;
+import java.io.InputStreamReader;
+import java.net.URL;
+import java.sql.SQLException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+public class TimelineMetricUuidManagerTest {
+
+  private List<String> apps = Arrays.asList("namenode",
+    "datanode", "master_hbase", "slave_hbase", "kafka_broker", "nimbus", "ams-hbase",
+    "accumulo", "nodemanager", "resourcemanager", "ambari_server", "HOST", "timeline_metric_store_watcher",
+    "jobhistoryserver", "hiveserver2", "hivemetastore", "applicationhistoryserver", "amssmoketestfake");
+
+  private Map<String, Set<String>> metricSet = new HashMap<>(populateMetricWhitelistFromFile());
+
+  @Test
+  public void testHashBasedUuidForMetricName() throws SQLException {
+
+    MetricUuidGenStrategy strategy = new HashBasedUuidGenStrategy();
+    Map<String, TimelineClusterMetric> uuids = new HashMap<>();
+    for (String app : metricSet.keySet()) {
+      Set<String> metrics = metricSet.get(app);
+      for (String metric : metrics) {
+        TimelineClusterMetric timelineClusterMetric = new TimelineClusterMetric(metric, app, null, -1L);
+        byte[] uuid = strategy.computeUuid(timelineClusterMetric, 16);
+        Assert.assertNotNull(uuid);
+        Assert.assertEquals(16, uuid.length);
+        String uuidStr = new String(uuid);
+        // Log the colliding metric before asserting, so a failure is diagnosable.
+        if (uuids.containsKey(uuidStr) && !uuids.containsValue(timelineClusterMetric)) {
+          System.out.println("COLLISION : " + timelineClusterMetric.toString() + " = " + uuids.get(uuidStr));
+        }
+        Assert.assertFalse(uuids.containsKey(uuidStr) && !uuids.containsValue(timelineClusterMetric));
+        uuids.put(uuidStr, timelineClusterMetric);
+      }
+    }
+  }
+
+  @Test
+  public void testHashBasedUuidForAppIds() throws SQLException {
+
+    MetricUuidGenStrategy strategy = new HashBasedUuidGenStrategy();
+    Map<String, TimelineClusterMetric> uuids = new HashMap<>();
+    for (String app : metricSet.keySet()) {
+      TimelineClusterMetric timelineClusterMetric = new TimelineClusterMetric("TestMetric", app, null, -1L);
+      byte[] uuid = strategy.computeUuid(timelineClusterMetric, 16);
+      String uuidStr = new String(uuid);
+      if (uuids.containsKey(uuidStr)) {
+        if (!uuids.containsValue(timelineClusterMetric)) {
+          System.out.println("COLLISION : " + timelineClusterMetric.toString() + " = " + uuids.get(uuidStr));
+        }
+      }
+      uuids.put(uuidStr, timelineClusterMetric);
+    }
+  }
+
+  @Test
+  public void testHashBasedUuidForHostnames() throws SQLException {
+
+    MetricUuidGenStrategy strategy = new HashBasedUuidGenStrategy();
+    Map<String, String> uuids = new HashMap<>();
+
+    List<String> hosts = new ArrayList<>();
+    String hostPrefix = "TestHost.";
+    String hostSuffix = ".ambari.apache.org";
+
+    for (int i=0; i<=2000; i++) {
+      hosts.add(hostPrefix + i + hostSuffix);
+    }
+
+    for (String host : hosts) {
+      byte[] uuid = strategy.computeUuid(host, 4);
+      Assert.assertNotNull(uuid);
+      Assert.assertEquals(4, uuid.length);
+      String uuidStr = new String(uuid);
+      Assert.assertFalse(uuids.containsKey(uuidStr));
+      uuids.put(uuidStr, host);
+    }
+  }
+
+  @Test
+  public void testRandomUuidForWhitelistedMetrics() throws SQLException {
+
+    MetricUuidGenStrategy strategy = new RandomUuidGenStrategy();
+    Map<String, String> uuids = new HashMap<>();
+    for (String app : metricSet.keySet()) {
+      Set<String> metrics = metricSet.get(app);
+      for (String metric : metrics) {
+        byte[] uuid = strategy.computeUuid(new TimelineClusterMetric(metric, app, null, -1L), 16);
+        Assert.assertNotNull(uuid);
+        Assert.assertEquals(16, uuid.length);
+        String uuidStr = new String(uuid);
+        Assert.assertFalse(uuids.containsKey(uuidStr) && !uuids.containsValue(metric));
+        uuids.put(uuidStr, metric);
+      }
+    }
+  }
+
+  public Map<String, Set<String>> populateMetricWhitelistFromFile() {
+
+    Map<String, Set<String>> metricSet = new HashMap<>();
+    FileInputStream fstream = null;
+    BufferedReader br = null;
+    String strLine;
+    for (String appId : apps) {
+      URL fileUrl = ClassLoader.getSystemResource("metrics_def/" + appId.toUpperCase() + ".dat");
+
+      Set<String> metricsForApp = new HashSet<>();
+      try {
+        fstream = new FileInputStream(fileUrl.getPath());
+        br = new BufferedReader(new InputStreamReader(fstream));
+        while ((strLine = br.readLine()) != null) {
+          metricsForApp.add(strLine.trim());
+        }
+      } catch (Exception ioEx) {
+        System.out.println("Metrics for AppId " + appId + " not found.");
+      } finally {
+        if (br != null) {
+          try {
+            br.close();
+          } catch (IOException e) {
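+            // Ignored: failure to close the reader doesn't affect the metrics already read.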
+          }
+        }
+
+        if (fstream != null) {
+          try {
+            fstream.close();
+          } catch (IOException e) {
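+            // Ignored: nothing useful to do if the stream fails to close.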
+          }
+        }
+      }
+      metricsForApp.add("live_hosts");
+      metricSet.put(appId.contains("hbase") ? "hbase" : appId, metricsForApp);
+      System.out.println("Found " + metricsForApp.size() + " metrics for appId = " + appId);
+    }
+    return metricSet;
+  }
+}
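
What the collision tests above are probing: a hash-based strategy has to pack an unbounded metric identity into a fixed byte budget (16 bytes per metric and 4 per host in these tests), so distinct identities can in principle map to the same id. A minimal sketch of one such scheme, assuming an MD5 digest truncated to the requested width (the patch's HashBasedUuidGenStrategy may compute its bytes differently):

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.security.NoSuchAlgorithmException;
    import java.util.Arrays;

    public class TruncatedHashUuidSketch {
      // Hypothetical: hash the identity fields and keep the first n bytes.
      // Truncation shrinks the keyspace, which is why the tests sweep whole
      // whitelists and host ranges looking for collisions.
      static byte[] computeUuid(String metricName, String appId, String instanceId, int n) {
        try {
          MessageDigest md = MessageDigest.getInstance("MD5");
          String key = metricName + ":" + appId + ":" + (instanceId == null ? "" : instanceId);
          byte[] digest = md.digest(key.getBytes(StandardCharsets.UTF_8));
          return Arrays.copyOf(digest, n);
        } catch (NoSuchAlgorithmException e) {
          throw new IllegalStateException(e); // MD5 is always present in the JRE
        }
      }

      public static void main(String[] args) {
        byte[] id = computeUuid("regionserver.Server.Get_max", "hbase", null, 16);
        System.out.println(id.length); // 16
      }
    }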
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/test/resources/test_data/full_whitelist.dat b/ambari-metrics/ambari-metrics-timelineservice/src/test/resources/test_data/full_whitelist.dat
new file mode 100644
index 0000000..0e22ffb
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/test/resources/test_data/full_whitelist.dat
@@ -0,0 +1,1615 @@
+AMBARI_METRICS.SmokeTest.FakeMetric
+TimelineMetricStoreWatcher.FakeMetric
+boottime
+bytes_in
+bytes_out
+cpu_idle
+cpu_intr
+cpu_nice
+cpu_num
+cpu_sintr
+cpu_steal
+cpu_system
+cpu_user
+cpu_wio
+disk_free
+disk_num
+disk_percent
+disk_total
+disk_used
+load_fifteen
+load_five
+load_one
+mem_buffered
+mem_cached
+mem_free
+mem_shared
+mem_total
+mem_used
+pkts_in
+pkts_out
+proc_run
+proc_total
+read_bps
+read_bytes
+read_count
+read_time
+sdisk_vda1_read_bytes
+sdisk_vda1_read_count
+sdisk_vda1_read_time
+sdisk_vda1_write_bytes
+sdisk_vda1_write_count
+sdisk_vda1_write_time
+sdisk_vdb_read_bytes
+sdisk_vdb_read_count
+sdisk_vdb_read_time
+sdisk_vdb_write_bytes
+sdisk_vdb_write_count
+sdisk_vdb_write_time
+swap_free
+swap_in
+swap_out
+swap_total
+swap_used
+write_bps
+write_bytes
+write_count
+write_time
+regionserver.WAL.SyncTime_min
+regionserver.WAL.SyncTime_num_ops
+regionserver.WAL.appendCount
+regionserver.WAL.slowAppendCount
+jvm.JvmMetrics.GcTimeMillis
+jvm.JvmMetrics.GcTimeMillisConcurrentMarkSweep
+jvm.JvmMetrics.GcTimeMillisParNew
+ugi.UgiMetrics.GetGroupsAvgTime
+ugi.UgiMetrics.GetGroupsNumOps
+ugi.UgiMetrics.LoginFailureNumOps
+ugi.UgiMetrics.LoginSuccessAvgTime
+ugi.UgiMetrics.LoginSuccessNumOps
+ugi.UgiMetrics.LoginFailureAvgTime
+jvm.JvmMetrics.LogError
+jvm.JvmMetrics.LogFatal
+jvm.JvmMetrics.LogInfo
+jvm.JvmMetrics.LogWarn
+jvm.JvmMetrics.MemHeapCommittedM
+jvm.JvmMetrics.MemHeapMaxM
+jvm.JvmMetrics.MemHeapUsedM
+jvm.JvmMetrics.MemMaxM
+jvm.JvmMetrics.MemNonHeapCommittedM
+jvm.JvmMetrics.MemNonHeapMaxM
+jvm.JvmMetrics.MemNonHeapUsedM
+jvm.JvmMetrics.ThreadsBlocked
+jvm.JvmMetrics.ThreadsNew
+jvm.JvmMetrics.ThreadsRunnable
+jvm.JvmMetrics.ThreadsTerminated
+jvm.JvmMetrics.ThreadsTimedWaiting
+master.AssignmentManger.Assign_75th_percentile
+master.AssignmentManger.Assign_95th_percentile
+master.AssignmentManger.Assign_99th_percentile
+master.AssignmentManger.Assign_max
+master.AssignmentManger.Assign_mean
+master.AssignmentManger.Assign_median
+master.AssignmentManger.Assign_min
+jvm.JvmMetrics.ThreadsWaiting
+master.AssignmentManger.Assign_num_ops
+master.AssignmentManger.BulkAssign_75th_percentile
+master.AssignmentManger.BulkAssign_95th_percentile
+master.AssignmentManger.BulkAssign_99th_percentile
+master.AssignmentManger.BulkAssign_max
+master.AssignmentManger.BulkAssign_mean
+master.AssignmentManger.BulkAssign_median
+master.AssignmentManger.BulkAssign_min
+master.AssignmentManger.BulkAssign_num_ops
+master.AssignmentManger.ritCount
+master.AssignmentManger.ritCountOverThreshold
+master.AssignmentManger.ritOldestAge
+master.Balancer.BalancerCluster_75th_percentile
+master.Balancer.BalancerCluster_95th_percentile
+master.Balancer.BalancerCluster_99th_percentile
+master.Balancer.BalancerCluster_max
+master.Balancer.BalancerCluster_mean
+master.Balancer.BalancerCluster_median
+master.Balancer.BalancerCluster_min
+master.Balancer.BalancerCluster_num_ops
+master.Balancer.miscInvocationCount
+master.FileSystem.HlogSplitSize_75th_percentile
+master.FileSystem.HlogSplitSize_95th_percentile
+master.FileSystem.HlogSplitSize_99th_percentile
+master.FileSystem.HlogSplitSize_max
+master.FileSystem.HlogSplitSize_mean
+master.FileSystem.HlogSplitSize_median
+master.FileSystem.HlogSplitSize_min
+master.FileSystem.HlogSplitSize_num_ops
+master.FileSystem.HlogSplitTime_75th_percentile
+master.FileSystem.HlogSplitTime_95th_percentile
+master.FileSystem.HlogSplitTime_99th_percentile
+master.FileSystem.HlogSplitTime_max
+master.FileSystem.HlogSplitTime_mean
+master.FileSystem.HlogSplitTime_median
+master.FileSystem.HlogSplitTime_min
+master.FileSystem.HlogSplitTime_num_ops
+master.FileSystem.MetaHlogSplitSize_75th_percentile
+master.FileSystem.MetaHlogSplitSize_95th_percentile
+master.FileSystem.MetaHlogSplitSize_99th_percentile
+master.FileSystem.MetaHlogSplitSize_max
+master.FileSystem.MetaHlogSplitSize_mean
+master.FileSystem.MetaHlogSplitSize_median
+master.FileSystem.MetaHlogSplitSize_min
+master.FileSystem.MetaHlogSplitSize_num_ops
+master.FileSystem.MetaHlogSplitTime_75th_percentile
+master.FileSystem.MetaHlogSplitTime_95th_percentile
+master.FileSystem.MetaHlogSplitTime_99th_percentile
+master.FileSystem.MetaHlogSplitTime_max
+master.FileSystem.MetaHlogSplitTime_mean
+master.FileSystem.MetaHlogSplitTime_median
+master.FileSystem.MetaHlogSplitTime_min
+master.FileSystem.MetaHlogSplitTime_num_ops
+master.Server.averageLoad
+master.Server.clusterRequests
+master.Server.masterActiveTime
+master.Server.masterStartTime
+master.Server.numDeadRegionServers
+master.Server.numRegionServers
+metricssystem.MetricsSystem.DroppedPubAll
+metricssystem.MetricsSystem.NumActiveSinks
+ipc.IPC.ProcessCallTime_75th_percentile
+ipc.IPC.ProcessCallTime_95th_percentile
+metricssystem.MetricsSystem.NumActiveSources
+metricssystem.MetricsSystem.NumAllSinks
+ipc.IPC.ProcessCallTime_99th_percentile
+metricssystem.MetricsSystem.NumAllSources
+metricssystem.MetricsSystem.PublishAvgTime
+metricssystem.MetricsSystem.PublishNumOps
+ipc.IPC.ProcessCallTime_max
+ipc.IPC.ProcessCallTime_mean
+metricssystem.MetricsSystem.Sink_timelineAvgTime
+ipc.IPC.ProcessCallTime_median
+metricssystem.MetricsSystem.Sink_timelineDropped
+metricssystem.MetricsSystem.Sink_timelineNumOps
+ipc.IPC.ProcessCallTime_num_ops
+metricssystem.MetricsSystem.Sink_timelineQsize
+metricssystem.MetricsSystem.SnapshotAvgTime
+ipc.IPC.QueueCallTime_95th_percentile
+metricssystem.MetricsSystem.SnapshotNumOps
+ipc.IPC.ProcessCallTime_min
+ipc.IPC.QueueCallTime_75th_percentile
+ipc.IPC.QueueCallTime_99th_percentile
+ipc.IPC.QueueCallTime_max
+ipc.IPC.QueueCallTime_mean
+ipc.IPC.QueueCallTime_median
+ipc.IPC.QueueCallTime_min
+regionserver.Server.Append_75th_percentile
+regionserver.Server.Append_95th_percentile
+ipc.IPC.QueueCallTime_num_ops
+ipc.IPC.authenticationFailures
+regionserver.Server.Append_99th_percentile
+regionserver.Server.Append_max
+ipc.IPC.authenticationSuccesses
+regionserver.Server.Append_mean
+regionserver.Server.Append_median
+regionserver.Server.Append_min
+regionserver.Server.Append_num_ops
+regionserver.Server.Delete_75th_percentile
+regionserver.Server.Delete_95th_percentile
+ipc.IPC.authorizationFailures
+regionserver.Server.Delete_99th_percentile
+regionserver.Server.Delete_max
+regionserver.Server.Delete_mean
+regionserver.Server.Delete_median
+regionserver.Server.Delete_min
+regionserver.Server.Delete_num_ops
+ipc.IPC.authorizationSuccesses
+ipc.IPC.numActiveHandler
+ipc.IPC.numCallsInGeneralQueue
+regionserver.Server.Get_75th_percentile
+regionserver.Server.Get_95th_percentile
+regionserver.Server.Get_99th_percentile
+regionserver.Server.Get_max
+regionserver.Server.Get_mean
+regionserver.Server.Get_median
+ipc.IPC.numCallsInPriorityQueue
+regionserver.Server.Get_min
+regionserver.Server.Get_num_ops
+regionserver.Server.Increment_75th_percentile
+regionserver.Server.Increment_95th_percentile
+regionserver.Server.Increment_99th_percentile
+regionserver.Server.Increment_max
+regionserver.Server.Increment_mean
+regionserver.Server.Increment_median
+ipc.IPC.numCallsInReplicationQueue
+ipc.IPC.numOpenConnections
+regionserver.Server.Increment_min
+regionserver.Server.Increment_num_ops
+ipc.IPC.queueSize
+regionserver.Server.Mutate_75th_percentile
+regionserver.Server.Mutate_95th_percentile
+regionserver.Server.Mutate_99th_percentile
+regionserver.Server.Mutate_max
+regionserver.Server.Mutate_mean
+regionserver.Server.Mutate_median
+ipc.IPC.receivedBytes
+regionserver.Server.Mutate_min
+regionserver.Server.Mutate_num_ops
+regionserver.Server.Replay_75th_percentile
+regionserver.Server.Replay_95th_percentile
+regionserver.Server.Replay_99th_percentile
+regionserver.Server.Replay_max
+regionserver.Server.Replay_mean
+regionserver.Server.Replay_median
+ipc.IPC.sentBytes
+jvm.JvmMetrics.GcCount
+regionserver.Server.Replay_min
+regionserver.Server.Replay_num_ops
+regionserver.Server.blockCacheCount
+regionserver.Server.blockCacheEvictionCount
+regionserver.Server.blockCacheExpressHitPercent
+regionserver.Server.blockCacheFreeSize
+regionserver.Server.blockCacheHitCount
+regionserver.Server.blockCacheMissCount
+regionserver.Server.blockCacheSize
+regionserver.Server.blockCountHitPercent
+regionserver.Server.checkMutateFailedCount
+regionserver.Server.checkMutatePassedCount
+regionserver.Server.compactionQueueLength
+regionserver.Server.flushQueueLength
+jvm.JvmMetrics.GcCountConcurrentMarkSweep
+regionserver.Server.hlogFileCount
+regionserver.Server.hlogFileSize
+regionserver.Server.memStoreSize
+regionserver.Server.mutationsWithoutWALCount
+regionserver.Server.mutationsWithoutWALSize
+regionserver.Server.percentFilesLocal
+regionserver.Server.readRequestCount
+regionserver.Server.regionCount
+regionserver.Server.regionServerStartTime
+regionserver.Server.slowAppendCount
+regionserver.Server.slowDeleteCount
+regionserver.Server.slowGetCount
+regionserver.Server.slowIncrementCount
+regionserver.Server.slowPutCount
+regionserver.Server.staticBloomSize
+regionserver.Server.staticIndexSize
+regionserver.Server.storeCount
+regionserver.Server.storeFileCount
+regionserver.Server.storeFileIndexSize
+regionserver.Server.storeFileSize
+regionserver.Server.totalRequestCount
+regionserver.Server.updatesBlockedTime
+regionserver.Server.writeRequestCount
+regionserver.WAL.AppendSize_75th_percentile
+regionserver.WAL.AppendSize_95th_percentile
+regionserver.WAL.AppendSize_99th_percentile
+regionserver.WAL.AppendSize_max
+regionserver.WAL.AppendSize_mean
+regionserver.WAL.AppendSize_median
+regionserver.WAL.SyncTime_median
+jvm.JvmMetrics.GcCountParNew
+regionserver.WAL.AppendSize_min
+regionserver.WAL.AppendSize_num_ops
+regionserver.WAL.SyncTime_max
+regionserver.WAL.AppendTime_75th_percentile
+regionserver.WAL.AppendTime_95th_percentile
+regionserver.WAL.AppendTime_99th_percentile
+regionserver.WAL.AppendTime_max
+regionserver.WAL.SyncTime_95th_percentile
+regionserver.WAL.AppendTime_mean
+regionserver.WAL.AppendTime_median
+regionserver.WAL.AppendTime_min
+regionserver.WAL.AppendTime_num_ops
+regionserver.WAL.SyncTime_75th_percentile
+regionserver.WAL.SyncTime_99th_percentile
+regionserver.WAL.SyncTime_mean
+BatchCompleteCount
+BatchEmptyCount
+BatchUnderflowCount
+ChannelCapacity
+ChannelFillPercentage
+ChannelSize
+ConnectionClosedCount
+ConnectionCreatedCount
+ConnectionFailedCount
+EventDrainAttemptCount
+EventDrainSuccessCount
+EventPutAttemptCount
+EventPutSuccessCount
+EventTakeSuccessCount
+EventTakeAttemptCount
+StartTime
+StopTime
+regionserver.WAL.SyncTime_min
+regionserver.WAL.SyncTime_num_ops
+regionserver.WAL.appendCount
+regionserver.WAL.slowAppendCount
+jvm.JvmMetrics.GcTimeMillis
+jvm.JvmMetrics.GcTimeMillisConcurrentMarkSweep
+jvm.JvmMetrics.GcTimeMillisParNew
+ugi.UgiMetrics.GetGroupsAvgTime
+ugi.UgiMetrics.GetGroupsNumOps
+ugi.UgiMetrics.LoginFailureNumOps
+ugi.UgiMetrics.LoginSuccessAvgTime
+ugi.UgiMetrics.LoginSuccessNumOps
+ugi.UgiMetrics.LoginFailureAvgTime
+jvm.JvmMetrics.LogError
+jvm.JvmMetrics.LogFatal
+jvm.JvmMetrics.LogInfo
+jvm.JvmMetrics.LogWarn
+jvm.JvmMetrics.MemHeapCommittedM
+jvm.JvmMetrics.MemHeapMaxM
+jvm.JvmMetrics.MemHeapUsedM
+jvm.JvmMetrics.MemMaxM
+jvm.JvmMetrics.MemNonHeapCommittedM
+jvm.JvmMetrics.MemNonHeapMaxM
+jvm.JvmMetrics.MemNonHeapUsedM
+jvm.JvmMetrics.ThreadsBlocked
+jvm.JvmMetrics.ThreadsNew
+jvm.JvmMetrics.ThreadsRunnable
+jvm.JvmMetrics.ThreadsTerminated
+jvm.JvmMetrics.ThreadsTimedWaiting
+master.AssignmentManger.Assign_75th_percentile
+master.AssignmentManger.Assign_95th_percentile
+master.AssignmentManger.Assign_99th_percentile
+master.AssignmentManger.Assign_max
+master.AssignmentManger.Assign_mean
+master.AssignmentManger.Assign_median
+master.AssignmentManger.Assign_min
+jvm.JvmMetrics.ThreadsWaiting
+master.AssignmentManger.Assign_num_ops
+master.AssignmentManger.BulkAssign_75th_percentile
+master.AssignmentManger.BulkAssign_95th_percentile
+master.AssignmentManger.BulkAssign_99th_percentile
+master.AssignmentManger.BulkAssign_max
+master.AssignmentManger.BulkAssign_mean
+master.AssignmentManger.BulkAssign_median
+master.AssignmentManger.BulkAssign_min
+master.AssignmentManger.BulkAssign_num_ops
+master.AssignmentManger.ritCount
+master.AssignmentManger.ritCountOverThreshold
+master.AssignmentManger.ritOldestAge
+master.Balancer.BalancerCluster_75th_percentile
+master.Balancer.BalancerCluster_95th_percentile
+master.Balancer.BalancerCluster_99th_percentile
+master.Balancer.BalancerCluster_max
+master.Balancer.BalancerCluster_mean
+master.Balancer.BalancerCluster_median
+master.Balancer.BalancerCluster_min
+master.Balancer.BalancerCluster_num_ops
+master.Balancer.miscInvocationCount
+master.FileSystem.HlogSplitSize_75th_percentile
+master.FileSystem.HlogSplitSize_95th_percentile
+master.FileSystem.HlogSplitSize_99th_percentile
+master.FileSystem.HlogSplitSize_max
+master.FileSystem.HlogSplitSize_mean
+master.FileSystem.HlogSplitSize_median
+master.FileSystem.HlogSplitSize_min
+master.FileSystem.HlogSplitSize_num_ops
+master.FileSystem.HlogSplitTime_75th_percentile
+master.FileSystem.HlogSplitTime_95th_percentile
+master.FileSystem.HlogSplitTime_99th_percentile
+master.FileSystem.HlogSplitTime_max
+master.FileSystem.HlogSplitTime_mean
+master.FileSystem.HlogSplitTime_median
+master.FileSystem.HlogSplitTime_min
+master.FileSystem.HlogSplitTime_num_ops
+master.FileSystem.MetaHlogSplitSize_75th_percentile
+master.FileSystem.MetaHlogSplitSize_95th_percentile
+master.FileSystem.MetaHlogSplitSize_99th_percentile
+master.FileSystem.MetaHlogSplitSize_max
+master.FileSystem.MetaHlogSplitSize_mean
+master.FileSystem.MetaHlogSplitSize_median
+master.FileSystem.MetaHlogSplitSize_min
+master.FileSystem.MetaHlogSplitSize_num_ops
+master.FileSystem.MetaHlogSplitTime_75th_percentile
+master.FileSystem.MetaHlogSplitTime_95th_percentile
+master.FileSystem.MetaHlogSplitTime_99th_percentile
+master.FileSystem.MetaHlogSplitTime_max
+master.FileSystem.MetaHlogSplitTime_mean
+master.FileSystem.MetaHlogSplitTime_median
+master.FileSystem.MetaHlogSplitTime_min
+master.FileSystem.MetaHlogSplitTime_num_ops
+master.Server.averageLoad
+master.Server.clusterRequests
+master.Server.masterActiveTime
+master.Server.masterStartTime
+master.Server.numDeadRegionServers
+master.Server.numRegionServers
+metricssystem.MetricsSystem.DroppedPubAll
+metricssystem.MetricsSystem.NumActiveSinks
+ipc.IPC.ProcessCallTime_75th_percentile
+ipc.IPC.ProcessCallTime_95th_percentile
+metricssystem.MetricsSystem.NumActiveSources
+metricssystem.MetricsSystem.NumAllSinks
+ipc.IPC.ProcessCallTime_99th_percentile
+metricssystem.MetricsSystem.NumAllSources
+metricssystem.MetricsSystem.PublishAvgTime
+metricssystem.MetricsSystem.PublishNumOps
+ipc.IPC.ProcessCallTime_max
+ipc.IPC.ProcessCallTime_mean
+metricssystem.MetricsSystem.Sink_timelineAvgTime
+ipc.IPC.ProcessCallTime_median
+metricssystem.MetricsSystem.Sink_timelineDropped
+metricssystem.MetricsSystem.Sink_timelineNumOps
+ipc.IPC.ProcessCallTime_num_ops
+metricssystem.MetricsSystem.Sink_timelineQsize
+metricssystem.MetricsSystem.SnapshotAvgTime
+ipc.IPC.QueueCallTime_95th_percentile
+metricssystem.MetricsSystem.SnapshotNumOps
+ipc.IPC.ProcessCallTime_min
+ipc.IPC.QueueCallTime_75th_percentile
+ipc.IPC.QueueCallTime_99th_percentile
+ipc.IPC.QueueCallTime_max
+ipc.IPC.QueueCallTime_mean
+ipc.IPC.QueueCallTime_median
+ipc.IPC.QueueCallTime_min
+regionserver.Server.Append_75th_percentile
+regionserver.Server.Append_95th_percentile
+ipc.IPC.QueueCallTime_num_ops
+ipc.IPC.authenticationFailures
+regionserver.Server.Append_99th_percentile
+regionserver.Server.Append_max
+ipc.IPC.authenticationSuccesses
+regionserver.Server.Append_mean
+regionserver.Server.Append_median
+regionserver.Server.Append_min
+regionserver.Server.Append_num_ops
+regionserver.Server.Delete_75th_percentile
+regionserver.Server.Delete_95th_percentile
+ipc.IPC.authorizationFailures
+regionserver.Server.Delete_99th_percentile
+regionserver.Server.Delete_max
+regionserver.Server.Delete_mean
+regionserver.Server.Delete_median
+regionserver.Server.Delete_min
+regionserver.Server.Delete_num_ops
+ipc.IPC.authorizationSuccesses
+ipc.IPC.numActiveHandler
+ipc.IPC.numCallsInGeneralQueue
+regionserver.Server.Get_75th_percentile
+regionserver.Server.Get_95th_percentile
+regionserver.Server.Get_99th_percentile
+regionserver.Server.Get_max
+regionserver.Server.Get_mean
+regionserver.Server.Get_median
+ipc.IPC.numCallsInPriorityQueue
+regionserver.Server.Get_min
+regionserver.Server.Get_num_ops
+regionserver.Server.Increment_75th_percentile
+regionserver.Server.Increment_95th_percentile
+regionserver.Server.Increment_99th_percentile
+regionserver.Server.Increment_max
+regionserver.Server.Increment_mean
+regionserver.Server.Increment_median
+ipc.IPC.numCallsInReplicationQueue
+ipc.IPC.numOpenConnections
+regionserver.Server.Increment_min
+regionserver.Server.Increment_num_ops
+ipc.IPC.queueSize
+regionserver.Server.Mutate_75th_percentile
+regionserver.Server.Mutate_95th_percentile
+regionserver.Server.Mutate_99th_percentile
+regionserver.Server.Mutate_max
+regionserver.Server.Mutate_mean
+regionserver.Server.Mutate_median
+ipc.IPC.receivedBytes
+regionserver.Server.Mutate_min
+regionserver.Server.Mutate_num_ops
+regionserver.Server.Replay_75th_percentile
+regionserver.Server.Replay_95th_percentile
+regionserver.Server.Replay_99th_percentile
+regionserver.Server.Replay_max
+regionserver.Server.Replay_mean
+regionserver.Server.Replay_median
+ipc.IPC.sentBytes
+jvm.JvmMetrics.GcCount
+regionserver.Server.Replay_min
+regionserver.Server.Replay_num_ops
+regionserver.Server.blockCacheCount
+regionserver.Server.blockCacheEvictionCount
+regionserver.Server.blockCacheExpressHitPercent
+regionserver.Server.blockCacheFreeSize
+regionserver.Server.blockCacheHitCount
+regionserver.Server.blockCacheMissCount
+regionserver.Server.blockCacheSize
+regionserver.Server.blockCountHitPercent
+regionserver.Server.checkMutateFailedCount
+regionserver.Server.checkMutatePassedCount
+regionserver.Server.compactionQueueLength
+regionserver.Server.flushQueueLength
+jvm.JvmMetrics.GcCountConcurrentMarkSweep
+regionserver.Server.hlogFileCount
+regionserver.Server.hlogFileSize
+regionserver.Server.memStoreSize
+regionserver.Server.mutationsWithoutWALCount
+regionserver.Server.mutationsWithoutWALSize
+regionserver.Server.percentFilesLocal
+regionserver.Server.readRequestCount
+regionserver.Server.regionCount
+regionserver.Server.regionServerStartTime
+regionserver.Server.slowAppendCount
+regionserver.Server.slowDeleteCount
+regionserver.Server.slowGetCount
+regionserver.Server.slowIncrementCount
+regionserver.Server.slowPutCount
+regionserver.Server.staticBloomSize
+regionserver.Server.staticIndexSize
+regionserver.Server.storeCount
+regionserver.Server.storeFileCount
+regionserver.Server.storeFileIndexSize
+regionserver.Server.storeFileSize
+regionserver.Server.totalRequestCount
+regionserver.Server.updatesBlockedTime
+regionserver.Server.writeRequestCount
+regionserver.WAL.AppendSize_75th_percentile
+regionserver.WAL.AppendSize_95th_percentile
+regionserver.WAL.AppendSize_99th_percentile
+regionserver.WAL.AppendSize_max
+regionserver.WAL.AppendSize_mean
+regionserver.WAL.AppendSize_median
+regionserver.WAL.SyncTime_median
+jvm.JvmMetrics.GcCountParNew
+regionserver.WAL.AppendSize_min
+regionserver.WAL.AppendSize_num_ops
+regionserver.WAL.SyncTime_max
+regionserver.WAL.AppendTime_75th_percentile
+regionserver.WAL.AppendTime_95th_percentile
+regionserver.WAL.AppendTime_99th_percentile
+regionserver.WAL.AppendTime_max
+regionserver.WAL.SyncTime_95th_percentile
+regionserver.WAL.AppendTime_mean
+regionserver.WAL.AppendTime_median
+regionserver.WAL.AppendTime_min
+regionserver.WAL.AppendTime_num_ops
+regionserver.WAL.SyncTime_75th_percentile
+regionserver.WAL.SyncTime_99th_percentile
+regionserver.WAL.SyncTime_mean
+regionserver.WAL.SyncTime_median
+regionserver.WAL.SyncTime_min
+regionserver.WAL.SyncTime_num_ops
+regionserver.WAL.appendCount
+regionserver.Server.majorCompactedCellsSize
+regionserver.WAL.rollRequest
+regionserver.WAL.AppendTime_99th_percentile
+regionserver.WAL.slowAppendCount
+regionserver.WAL.AppendTime_num_ops
+regionserver.WAL.SyncTime_95th_percentile
+regionserver.Server.Mutate_median
+regionserver.WAL.AppendTime_75th_percentile
+regionserver.WAL.AppendSize_num_ops
+regionserver.Server.Mutate_max
+regionserver.WAL.AppendSize_min
+regionserver.WAL.AppendTime_min
+regionserver.WAL.SyncTime_99th_percentile
+regionserver.Server.Mutate_95th_percentile
+regionserver.WAL.AppendSize_mean
+regionserver.WAL.SyncTime_mean
+regionserver.WAL.AppendSize_99th_percentile
+jvm.JvmMetrics.GcTimeMillis
+regionserver.WAL.AppendSize_75th_percentile
+jvm.JvmMetrics.GcTimeMillisConcurrentMarkSweep
+regionserver.WAL.SyncTime_max
+regionserver.Server.Increment_median
+regionserver.Server.updatesBlockedTime
+regionserver.Server.Increment_max
+ugi.UgiMetrics.GetGroupsAvgTime
+regionserver.WAL.lowReplicaRollRequest
+ugi.UgiMetrics.GetGroupsNumOps
+regionserver.Server.storeFileSize
+regionserver.Server.Increment_95th_percentile
+jvm.JvmMetrics.GcTimeMillisParNew
+ugi.UgiMetrics.LoginFailureAvgTime
+ugi.UgiMetrics.LoginFailureNumOps
+regionserver.Server.storeFileCount
+ugi.UgiMetrics.LoginSuccessNumOps
+regionserver.Server.staticIndexSize
+jvm.JvmMetrics.LogError
+regionserver.Server.splitQueueLength
+regionserver.Server.Get_median
+regionserver.Server.slowPutCount
+regionserver.Server.Get_max
+jvm.JvmMetrics.LogFatal
+regionserver.Server.slowGetCount
+jvm.JvmMetrics.LogInfo
+regionserver.Server.slowAppendCount
+regionserver.Server.Get_95th_percentile
+jvm.JvmMetrics.LogWarn
+regionserver.Server.regionCount
+regionserver.Server.FlushTime_num_ops
+regionserver.Server.FlushTime_min
+regionserver.Server.readRequestCount
+jvm.JvmMetrics.MemHeapCommittedM
+regionserver.Server.percentFilesLocalSecondaryRegions
+regionserver.Server.percentFilesLocal
+regionserver.Server.FlushTime_max
+regionserver.Server.FlushTime_99th_percentile
+regionserver.Server.FlushTime_95th_percentile
+regionserver.Server.Delete_num_ops
+jvm.JvmMetrics.MemHeapMaxM
+regionserver.Server.mutationsWithoutWALCount
+jvm.JvmMetrics.MemHeapUsedM
+regionserver.Server.Delete_median
+regionserver.Server.ScanNext_max
+regionserver.Server.ScanNext_99th_percentile
+regionserver.Server.majorCompactedCellsCount
+regionserver.Server.hlogFileSize
+regionserver.Server.flushedCellsCount
+jvm.JvmMetrics.MemMaxM
+regionserver.Server.hlogFileCount
+regionserver.Server.Delete_95th_percentile
+jvm.JvmMetrics.MemNonHeapCommittedM
+jvm.JvmMetrics.MemNonHeapMaxM
+jvm.JvmMetrics.MemNonHeapUsedM
+regionserver.Server.Append_num_ops
+regionserver.Server.flushQueueLength
+jvm.JvmMetrics.ThreadsBlocked
+regionserver.Server.Append_median
+jvm.JvmMetrics.ThreadsNew
+regionserver.Server.checkMutatePassedCount
+regionserver.Server.compactedCellsSize
+jvm.JvmMetrics.ThreadsRunnable
+jvm.JvmMetrics.ThreadsTerminated
+jvm.JvmMetrics.ThreadsTimedWaiting
+master.AssignmentManger.Assign_75th_percentile
+master.AssignmentManger.Assign_95th_percentile
+master.AssignmentManger.Assign_99th_percentile
+master.AssignmentManger.Assign_max
+regionserver.Server.Append_95th_percentile
+master.AssignmentManger.Assign_mean
+master.AssignmentManger.Assign_median
+regionserver.Replication.sink.appliedOps
+regionserver.Replication.sink.appliedBatches
+regionserver.Replication.sink.ageOfLastAppliedOp
+regionserver.WAL.SyncTime_75th_percentile
+regionserver.RegionServer.receivedBytes
+regionserver.RegionServer.queueSize
+regionserver.RegionServer.numOpenConnections
+regionserver.RegionServer.numCallsInPriorityQueue
+regionserver.Server.Replay_num_ops
+master.AssignmentManger.Assign_min
+master.AssignmentManger.Assign_num_ops
+regionserver.Server.checkMutateFailedCount
+regionserver.RegionServer.exceptions.RegionTooBusyException
+regionserver.RegionServer.exceptions.RegionMovedException
+regionserver.RegionServer.exceptions.OutOfOrderScannerNextException
+master.AssignmentManger.BulkAssign_75th_percentile
+master.AssignmentManger.BulkAssign_95th_percentile
+regionserver.RegionServer.exceptions.FailedSanityCheckException
+regionserver.RegionServer.exceptions
+regionserver.RegionServer.authorizationSuccesses
+regionserver.RegionServer.authenticationSuccesses
+regionserver.RegionServer.authenticationFailures
+regionserver.RegionServer.TotalCallTime_num_ops
+master.AssignmentManger.BulkAssign_99th_percentile
+jvm.JvmMetrics.ThreadsWaiting
+regionserver.RegionServer.TotalCallTime_median
+regionserver.RegionServer.TotalCallTime_mean
+master.AssignmentManger.BulkAssign_max
+regionserver.RegionServer.TotalCallTime_95th_percentile
+regionserver.RegionServer.TotalCallTime_75th_percentile
+regionserver.RegionServer.QueueCallTime_num_ops
+master.AssignmentManger.BulkAssign_mean
+master.AssignmentManger.BulkAssign_median
+regionserver.RegionServer.QueueCallTime_median
+regionserver.RegionServer.QueueCallTime_mean
+regionserver.RegionServer.QueueCallTime_max
+regionserver.RegionServer.QueueCallTime_95th_percentile
+regionserver.RegionServer.QueueCallTime_75th_percentile
+regionserver.RegionServer.ProcessCallTime_num_ops
+regionserver.RegionServer.ProcessCallTime_median
+regionserver.RegionServer.ProcessCallTime_mean
+regionserver.Server.ScanNext_num_ops
+master.AssignmentManger.BulkAssign_num_ops
+master.AssignmentManger.BulkAssign_min
+regionserver.RegionServer.ProcessCallTime_95th_percentile
+master.AssignmentManger.ritCount
+master.AssignmentManger.ritCountOverThreshold
+master.AssignmentManger.ritOldestAge
+master.Balancer.BalancerCluster_75th_percentile
+master.Balancer.BalancerCluster_95th_percentile
+master.Balancer.BalancerCluster_99th_percentile
+ugi.UgiMetrics.LoginSuccessAvgTime
+master.Balancer.BalancerCluster_max
+master.Balancer.BalancerCluster_mean
+master.Balancer.BalancerCluster_median
+master.Balancer.BalancerCluster_min
+regionserver.Server.ScanNext_median
+master.Balancer.BalancerCluster_num_ops
+master.Balancer.miscInvocationCount
+master.FileSystem.HlogSplitSize_75th_percentile
+master.FileSystem.HlogSplitSize_95th_percentile
+master.FileSystem.HlogSplitSize_max
+master.FileSystem.HlogSplitSize_99th_percentile
+master.FileSystem.HlogSplitSize_mean
+master.FileSystem.HlogSplitSize_median
+master.FileSystem.HlogSplitSize_min
+master.FileSystem.HlogSplitSize_num_ops
+master.FileSystem.HlogSplitTime_75th_percentile
+master.FileSystem.HlogSplitTime_95th_percentile
+regionserver.Server.SplitTime_median
+master.FileSystem.HlogSplitTime_max
+master.FileSystem.HlogSplitTime_99th_percentile
+master.FileSystem.HlogSplitTime_mean
+master.FileSystem.HlogSplitTime_median
+master.FileSystem.HlogSplitTime_min
+master.FileSystem.HlogSplitTime_num_ops
+master.FileSystem.MetaHlogSplitSize_75th_percentile
+master.FileSystem.MetaHlogSplitSize_95th_percentile
+master.FileSystem.MetaHlogSplitSize_max
+master.FileSystem.MetaHlogSplitSize_99th_percentile
+master.FileSystem.MetaHlogSplitSize_mean
+master.FileSystem.MetaHlogSplitSize_median
+master.FileSystem.MetaHlogSplitSize_min
+master.FileSystem.MetaHlogSplitSize_num_ops
+master.FileSystem.MetaHlogSplitTime_75th_percentile
+master.FileSystem.MetaHlogSplitTime_95th_percentile
+master.FileSystem.MetaHlogSplitTime_max
+master.FileSystem.MetaHlogSplitTime_99th_percentile
+master.FileSystem.MetaHlogSplitTime_mean
+master.FileSystem.MetaHlogSplitTime_median
+master.FileSystem.MetaHlogSplitTime_min
+master.FileSystem.MetaHlogSplitTime_num_ops
+master.Master.ProcessCallTime_75th_percentile
+master.Master.ProcessCallTime_95th_percentile
+master.Master.ProcessCallTime_99th_percentile
+master.Master.ProcessCallTime_max
+master.Master.ProcessCallTime_mean
+master.Master.ProcessCallTime_median
+master.Master.ProcessCallTime_min
+master.Master.ProcessCallTime_num_ops
+master.Master.QueueCallTime_75th_percentile
+master.Master.QueueCallTime_95th_percentile
+master.Master.QueueCallTime_99th_percentile
+master.Master.QueueCallTime_max
+master.Master.QueueCallTime_mean
+regionserver.Server.blockCacheCountHitPercent
+master.Master.QueueCallTime_median
+master.Master.QueueCallTime_min
+master.Master.QueueCallTime_num_ops
+master.Master.TotalCallTime_75th_percentile
+master.Master.TotalCallTime_95th_percentile
+master.Master.TotalCallTime_99th_percentile
+master.Master.TotalCallTime_max
+master.Master.TotalCallTime_mean
+master.Master.TotalCallTime_median
+master.Master.TotalCallTime_min
+master.Master.TotalCallTime_num_ops
+master.Master.authenticationFailures
+master.Master.authenticationSuccesses
+master.Master.authorizationFailures
+master.Master.authorizationSuccesses
+master.Master.exceptions
+master.Master.exceptions.FailedSanityCheckException
+master.Master.exceptions.NotServingRegionException
+master.Master.exceptions.OutOfOrderScannerNextException
+master.Master.exceptions.RegionMovedException
+master.Master.exceptions.RegionTooBusyException
+master.Master.exceptions.UnknownScannerException
+master.Master.numActiveHandler
+master.Master.numCallsInGeneralQueue
+master.Master.numCallsInPriorityQueue
+master.Master.numCallsInReplicationQueue
+regionserver.Server.blockCacheSize
+master.Master.numOpenConnections
+master.Master.queueSize
+master.Master.receivedBytes
+master.Master.sentBytes
+master.Server.averageLoad
+master.Server.clusterRequests
+master.Server.masterActiveTime
+master.Server.numDeadRegionServers
+master.Server.masterStartTime
+master.Server.numRegionServers
+metricssystem.MetricsSystem.DroppedPubAll
+regionserver.Server.SplitTime_min
+regionserver.Server.blockCacheHitCount
+metricssystem.MetricsSystem.NumActiveSinks
+metricssystem.MetricsSystem.NumActiveSources
+metricssystem.MetricsSystem.NumAllSinks
+metricssystem.MetricsSystem.NumAllSources
+regionserver.Server.blockCacheExpressHitPercent
+metricssystem.MetricsSystem.PublishAvgTime
+metricssystem.MetricsSystem.PublishNumOps
+metricssystem.MetricsSystem.Sink_timelineAvgTime
+regionserver.Server.SplitTime_num_ops
+metricssystem.MetricsSystem.Sink_timelineDropped
+metricssystem.MetricsSystem.Sink_timelineNumOps
+regionserver.Server.SplitTime_max
+regionserver.Server.ScanNext_min
+metricssystem.MetricsSystem.Sink_timelineQsize
+metricssystem.MetricsSystem.SnapshotAvgTime
+metricssystem.MetricsSystem.SnapshotNumOps
+regionserver.Server.SplitTime_95th_percentile
+regionserver.Server.SplitTime_99th_percentile
+regionserver.RegionServer.ProcessCallTime_75th_percentile
+regionserver.RegionServer.ProcessCallTime_99th_percentile
+regionserver.RegionServer.ProcessCallTime_max
+regionserver.RegionServer.ProcessCallTime_min
+regionserver.RegionServer.QueueCallTime_99th_percentile
+regionserver.RegionServer.QueueCallTime_min
+regionserver.RegionServer.TotalCallTime_99th_percentile
+regionserver.RegionServer.TotalCallTime_max
+regionserver.RegionServer.TotalCallTime_min
+regionserver.RegionServer.authorizationFailures
+regionserver.RegionServer.exceptions.NotServingRegionException
+regionserver.RegionServer.exceptions.UnknownScannerException
+regionserver.RegionServer.numActiveHandler
+regionserver.RegionServer.numCallsInGeneralQueue
+regionserver.Server.ScanNext_95th_percentile
+regionserver.RegionServer.numCallsInReplicationQueue
+regionserver.RegionServer.sentBytes
+regionserver.Server.Append_75th_percentile
+regionserver.Server.Append_99th_percentile
+regionserver.Server.Append_max
+regionserver.Server.Append_mean
+regionserver.Server.Append_min
+regionserver.Server.Delete_75th_percentile
+regionserver.Server.Delete_99th_percentile
+regionserver.Server.Delete_max
+regionserver.Server.Delete_mean
+regionserver.Server.Delete_min
+regionserver.Server.FlushTime_75th_percentile
+regionserver.Server.FlushTime_mean
+regionserver.Server.FlushTime_median
+regionserver.Server.Get_75th_percentile
+regionserver.Server.Get_99th_percentile
+regionserver.Server.Get_mean
+regionserver.Server.Get_min
+regionserver.Server.Get_num_ops
+regionserver.Server.Increment_75th_percentile
+regionserver.Server.Increment_99th_percentile
+regionserver.Server.Increment_mean
+regionserver.Server.Increment_min
+regionserver.Server.Increment_num_ops
+regionserver.Server.Mutate_75th_percentile
+regionserver.Server.Mutate_99th_percentile
+regionserver.Server.Mutate_mean
+regionserver.Server.Mutate_min
+regionserver.Server.Mutate_num_ops
+regionserver.Server.Replay_75th_percentile
+regionserver.Server.Replay_99th_percentile
+regionserver.Server.Replay_mean
+regionserver.Server.Replay_min
+regionserver.Server.ScanNext_75th_percentile
+regionserver.Server.ScanNext_mean
+regionserver.Server.SplitTime_75th_percentile
+jvm.JvmMetrics.GcCount
+regionserver.Server.SplitTime_mean
+regionserver.Server.Replay_max
+regionserver.Server.blockCacheCount
+regionserver.Server.blockCacheEvictionCount
+regionserver.Server.blockCacheFreeSize
+regionserver.Server.blockCacheMissCount
+regionserver.Server.Replay_median
+regionserver.Server.blockedRequestCount
+regionserver.Server.compactedCellsCount
+regionserver.Server.compactionQueueLength
+regionserver.Server.flushedCellsSize
+regionserver.Server.memStoreSize
+regionserver.Server.mutationsWithoutWALSize
+jvm.JvmMetrics.GcCountConcurrentMarkSweep
+regionserver.Server.regionServerStartTime
+regionserver.Server.slowDeleteCount
+regionserver.Server.slowIncrementCount
+regionserver.Server.splitRequestCount
+regionserver.Server.splitSuccessCount
+regionserver.Server.staticBloomSize
+regionserver.Server.storeCount
+regionserver.Server.storeFileIndexSize
+regionserver.Server.totalRequestCount
+regionserver.Server.writeRequestCount
+regionserver.WAL.AppendSize_95th_percentile
+regionserver.WAL.AppendSize_max
+regionserver.WAL.AppendSize_median
+regionserver.Server.Replay_95th_percentile
+regionserver.WAL.AppendTime_95th_percentile
+regionserver.WAL.AppendTime_median
+regionserver.WAL.AppendTime_max
+jvm.JvmMetrics.GcCountParNew
+regionserver.WAL.AppendTime_mean
+FSDatasetState.org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.CacheCapacity
+FSDatasetState.org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.CacheUsed
+FSDatasetState.org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.Capacity
+FSDatasetState.org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.DfsUsed
+FSDatasetState.org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.EstimatedCapacityLostTotal
+FSDatasetState.org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.LastVolumeFailureDate
+FSDatasetState.org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.NumBlocksCached
+FSDatasetState.org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.NumBlocksFailedToCache
+FSDatasetState.org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.NumBlocksFailedToUnCache
+FSDatasetState.org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.NumFailedVolumes
+FSDatasetState.org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.Remaining
+default.StartupProgress.ElapsedTime
+default.StartupProgress.LoadingEditsCount
+default.StartupProgress.LoadingEditsElapsedTime
+default.StartupProgress.LoadingEditsPercentComplete
+default.StartupProgress.LoadingEditsTotal
+default.StartupProgress.LoadingFsImageCount
+default.StartupProgress.LoadingFsImageElapsedTime
+default.StartupProgress.LoadingFsImagePercentComplete
+default.StartupProgress.LoadingFsImageTotal
+default.StartupProgress.PercentComplete
+default.StartupProgress.SafeModeCount
+default.StartupProgress.SafeModeElapsedTime
+default.StartupProgress.SafeModePercentComplete
+default.StartupProgress.SafeModeTotal
+default.StartupProgress.SavingCheckpointCount
+default.StartupProgress.SavingCheckpointElapsedTime
+default.StartupProgress.SavingCheckpointPercentComplete
+default.StartupProgress.SavingCheckpointTotal
+dfs.FSNamesystem.BlockCapacity
+dfs.FSNamesystem.BlocksTotal
+dfs.FSNamesystem.CapacityRemaining
+dfs.FSNamesystem.CapacityRemainingGB
+dfs.FSNamesystem.CapacityTotal
+dfs.FSNamesystem.CapacityTotalGB
+dfs.FSNamesystem.CapacityUsed
+dfs.FSNamesystem.CapacityUsedGB
+dfs.FSNamesystem.CapacityUsedNonDFS
+dfs.FSNamesystem.CorruptBlocks
+dfs.FSNamesystem.ExcessBlocks
+dfs.FSNamesystem.ExpiredHeartbeats
+dfs.FSNamesystem.FilesTotal
+dfs.FSNamesystem.LastCheckpointTime
+dfs.FSNamesystem.LastWrittenTransactionId
+dfs.FSNamesystem.LockQueueLength
+dfs.FSNamesystem.MillisSinceLastLoadedEdits
+dfs.FSNamesystem.MissingBlocks
+dfs.FSNamesystem.MissingReplOneBlocks
+dfs.FSNamesystem.NumActiveClients
+dfs.FSNamesystem.NumFilesUnderConstruction
+dfs.FSNamesystem.NumTimedOutPendingReplications
+dfs.FSNamesystem.PendingDataNodeMessageCount
+dfs.FSNamesystem.PendingDeletionBlocks
+dfs.FSNamesystem.PendingReplicationBlocks
+dfs.FSNamesystem.PostponedMisreplicatedBlocks
+dfs.FSNamesystem.ScheduledReplicationBlocks
+dfs.FSNamesystem.Snapshots
+dfs.FSNamesystem.SnapshottableDirectories
+dfs.FSNamesystem.StaleDataNodes
+dfs.FSNamesystem.TotalFiles
+dfs.FSNamesystem.TotalLoad
+dfs.FSNamesystem.TotalSyncCount
+dfs.FSNamesystem.TransactionsSinceLastCheckpoint
+dfs.FSNamesystem.TransactionsSinceLastLogRoll
+dfs.FSNamesystem.UnderReplicatedBlocks
+dfs.FsVolume.DataFileIoRateAvgTime
+dfs.FsVolume.DataFileIoRateNumOps
+dfs.FsVolume.FileIoErrorRateAvgTime
+dfs.FsVolume.FileIoErrorRateNumOps
+dfs.FsVolume.FlushIoRateAvgTime
+dfs.FsVolume.FlushIoRateNumOps
+dfs.FsVolume.MetadataOperationRateAvgTime
+dfs.FsVolume.MetadataOperationRateNumOps
+dfs.FsVolume.ReadIoRateAvgTime
+dfs.FsVolume.ReadIoRateNumOps
+dfs.FsVolume.SyncIoRateAvgTime
+dfs.FsVolume.SyncIoRateNumOps
+dfs.FsVolume.TotalDataFileIos
+dfs.FsVolume.TotalFileIoErrors
+dfs.FsVolume.TotalMetadataOperations
+dfs.FsVolume.WriteIoRateAvgTime
+dfs.FsVolume.WriteIoRateNumOps
+dfs.datanode.BlockChecksumOpAvgTime
+dfs.datanode.BlockChecksumOpNumOps
+dfs.datanode.BlockReportsAvgTime
+dfs.datanode.BlockReportsNumOps
+dfs.datanode.BlockVerificationFailures
+dfs.datanode.BlocksCached
+dfs.datanode.BlocksGetLocalPathInfo
+dfs.datanode.BlocksRead
+dfs.datanode.BlocksRemoved
+dfs.datanode.BlocksReplicated
+dfs.datanode.BlocksUncached
+dfs.datanode.BlocksVerified
+dfs.datanode.BlocksWritten
+dfs.datanode.BytesRead
+dfs.datanode.BytesWritten
+dfs.datanode.CacheReportsAvgTime
+dfs.datanode.CacheReportsNumOps
+dfs.datanode.CopyBlockOpAvgTime
+dfs.datanode.CopyBlockOpNumOps
+dfs.datanode.DataNodeActiveXceiversCount
+dfs.datanode.DatanodeNetworkErrors
+dfs.datanode.FlushNanosAvgTime
+dfs.datanode.FlushNanosNumOps
+dfs.datanode.FsyncCount
+dfs.datanode.FsyncNanosAvgTime
+dfs.datanode.FsyncNanosNumOps
+dfs.datanode.HeartbeatsAvgTime
+dfs.datanode.HeartbeatsNumOps
+dfs.datanode.HeartbeatsTotalAvgTime
+dfs.datanode.HeartbeatsTotalNumOps
+dfs.datanode.IncrementalBlockReportsAvgTime
+dfs.datanode.IncrementalBlockReportsNumOps
+dfs.datanode.LifelinesAvgTime
+dfs.datanode.LifelinesNumOps
+dfs.datanode.PacketAckRoundTripTimeNanosAvgTime
+dfs.datanode.PacketAckRoundTripTimeNanosNumOps
+dfs.datanode.RamDiskBlocksDeletedBeforeLazyPersisted
+dfs.datanode.RamDiskBlocksEvicted
+dfs.datanode.RamDiskBlocksEvictedWithoutRead
+dfs.datanode.RamDiskBlocksEvictionWindowMsAvgTime
+dfs.datanode.RamDiskBlocksEvictionWindowMsNumOps
+dfs.datanode.RamDiskBlocksLazyPersistWindowMsAvgTime
+dfs.datanode.RamDiskBlocksLazyPersistWindowMsNumOps
+dfs.datanode.RamDiskBlocksLazyPersisted
+dfs.datanode.RamDiskBlocksReadHits
+dfs.datanode.RamDiskBlocksWrite
+dfs.datanode.RamDiskBlocksWriteFallback
+dfs.datanode.RamDiskBytesLazyPersisted
+dfs.datanode.RamDiskBytesWrite
+dfs.datanode.ReadBlockOpAvgTime
+dfs.datanode.ReadBlockOpNumOps
+dfs.datanode.ReadsFromLocalClient
+dfs.datanode.ReadsFromRemoteClient
+dfs.datanode.RemoteBytesRead
+dfs.datanode.RemoteBytesWritten
+dfs.datanode.ReplaceBlockOpAvgTime
+dfs.datanode.ReplaceBlockOpNumOps
+dfs.datanode.SendDataPacketBlockedOnNetworkNanosAvgTime
+dfs.datanode.SendDataPacketBlockedOnNetworkNanosNumOps
+dfs.datanode.SendDataPacketTransferNanosAvgTime
+dfs.datanode.SendDataPacketTransferNanosNumOps
+dfs.datanode.TotalReadTime
+dfs.datanode.TotalWriteTime
+dfs.datanode.VolumeFailures
+dfs.datanode.WriteBlockOpAvgTime
+dfs.datanode.WriteBlockOpNumOps
+dfs.datanode.WritesFromLocalClient
+dfs.datanode.WritesFromRemoteClient
+dfs.namenode.AddBlockOps
+dfs.namenode.AllowSnapshotOps
+dfs.namenode.BlockOpsBatched
+dfs.namenode.BlockOpsQueued
+dfs.namenode.BlockReceivedAndDeletedOps
+dfs.namenode.BlockReportAvgTime
+dfs.namenode.BlockReportNumOps
+dfs.namenode.CacheReportAvgTime
+dfs.namenode.CacheReportNumOps
+dfs.namenode.CreateFileOps
+dfs.namenode.CreateSnapshotOps
+dfs.namenode.CreateSymlinkOps
+dfs.namenode.DeleteFileOps
+dfs.namenode.DeleteSnapshotOps
+dfs.namenode.DisallowSnapshotOps
+dfs.namenode.FileInfoOps
+dfs.namenode.FilesAppended
+dfs.namenode.FilesCreated
+dfs.namenode.FilesDeleted
+dfs.namenode.FilesInGetListingOps
+dfs.namenode.FilesRenamed
+dfs.namenode.FilesTruncated
+dfs.namenode.FsImageLoadTime
+dfs.namenode.GetAdditionalDatanodeOps
+dfs.namenode.GetBlockLocations
+dfs.namenode.GetEditAvgTime
+dfs.namenode.GetEditNumOps
+dfs.namenode.GetImageAvgTime
+dfs.namenode.GetImageNumOps
+dfs.namenode.GetLinkTargetOps
+dfs.namenode.GetListingOps
+dfs.namenode.ListSnapshottableDirOps
+dfs.namenode.PutImageAvgTime
+dfs.namenode.PutImageNumOps
+dfs.namenode.RenameSnapshotOps
+dfs.namenode.SafeModeTime
+dfs.namenode.SnapshotDiffReportOps
+dfs.namenode.StorageBlockReportOps
+dfs.namenode.SyncsAvgTime
+dfs.namenode.SyncsNumOps
+dfs.namenode.TotalFileOps
+dfs.namenode.TransactionsAvgTime
+dfs.namenode.TransactionsBatchedInSync
+dfs.namenode.TransactionsNumOps
+jvm.JvmMetrics.GcCount
+jvm.JvmMetrics.GcCountConcurrentMarkSweep
+jvm.JvmMetrics.GcCountParNew
+jvm.JvmMetrics.GcNumInfoThresholdExceeded
+jvm.JvmMetrics.GcNumWarnThresholdExceeded
+jvm.JvmMetrics.GcTimeMillis
+jvm.JvmMetrics.GcTimeMillisConcurrentMarkSweep
+jvm.JvmMetrics.GcTimeMillisParNew
+jvm.JvmMetrics.GcTotalExtraSleepTime
+jvm.JvmMetrics.LogError
+jvm.JvmMetrics.LogFatal
+jvm.JvmMetrics.LogInfo
+jvm.JvmMetrics.LogWarn
+jvm.JvmMetrics.MemHeapCommittedM
+jvm.JvmMetrics.MemHeapMaxM
+jvm.JvmMetrics.MemHeapUsedM
+jvm.JvmMetrics.MemMaxM
+jvm.JvmMetrics.MemNonHeapCommittedM
+jvm.JvmMetrics.MemNonHeapMaxM
+jvm.JvmMetrics.MemNonHeapUsedM
+jvm.JvmMetrics.ThreadsBlocked
+jvm.JvmMetrics.ThreadsNew
+jvm.JvmMetrics.ThreadsRunnable
+jvm.JvmMetrics.ThreadsTerminated
+jvm.JvmMetrics.ThreadsTimedWaiting
+jvm.JvmMetrics.ThreadsWaiting
+metricssystem.MetricsSystem.DroppedPubAll
+metricssystem.MetricsSystem.NumActiveSinks
+metricssystem.MetricsSystem.NumActiveSources
+metricssystem.MetricsSystem.NumAllSinks
+metricssystem.MetricsSystem.NumAllSources
+metricssystem.MetricsSystem.PublishAvgTime
+metricssystem.MetricsSystem.PublishNumOps
+metricssystem.MetricsSystem.Sink_timelineAvgTime
+metricssystem.MetricsSystem.Sink_timelineDropped
+metricssystem.MetricsSystem.Sink_timelineNumOps
+metricssystem.MetricsSystem.Sink_timelineQsize
+metricssystem.MetricsSystem.SnapshotAvgTime
+metricssystem.MetricsSystem.SnapshotNumOps
+rpc.RetryCache.NameNodeRetryCache.CacheCleared
+rpc.RetryCache.NameNodeRetryCache.CacheHit
+rpc.RetryCache.NameNodeRetryCache.CacheUpdated
+rpc.rpc.CallQueueLength
+rpc.rpc.NumOpenConnections
+rpc.rpc.ReceivedBytes
+rpc.rpc.RpcAuthenticationFailures
+rpc.rpc.RpcAuthenticationSuccesses
+rpc.rpc.RpcAuthorizationFailures
+rpc.rpc.RpcAuthorizationSuccesses
+rpc.rpc.RpcClientBackoff
+rpc.rpc.RpcProcessingTimeAvgTime
+rpc.rpc.RpcProcessingTimeNumOps
+rpc.rpc.RpcQueueTimeAvgTime
+rpc.rpc.RpcQueueTimeNumOps
+rpc.rpc.RpcSlowCalls
+rpc.rpc.SentBytes
+rpc.rpc.client.CallQueueLength
+rpc.rpc.client.NumOpenConnections
+rpc.rpc.client.ReceivedBytes
+rpc.rpc.client.RpcAuthenticationFailures
+rpc.rpc.client.RpcAuthenticationSuccesses
+rpc.rpc.client.RpcAuthorizationFailures
+rpc.rpc.client.RpcAuthorizationSuccesses
+rpc.rpc.client.RpcClientBackoff
+rpc.rpc.client.RpcProcessingTimeAvgTime
+rpc.rpc.client.RpcProcessingTimeNumOps
+rpc.rpc.client.RpcQueueTimeAvgTime
+rpc.rpc.client.RpcQueueTimeNumOps
+rpc.rpc.client.RpcSlowCalls
+rpc.rpc.client.SentBytes
+rpcdetailed.rpcdetailed.InitReplicaRecoveryAvgTime
+rpcdetailed.rpcdetailed.InitReplicaRecoveryNumOps
+rpcdetailed.rpcdetailed.UpdateReplicaUnderRecoveryAvgTime
+rpcdetailed.rpcdetailed.UpdateReplicaUnderRecoveryNumOps
+rpcdetailed.rpcdetailed.client.AddBlockAvgTime
+rpcdetailed.rpcdetailed.client.AddBlockNumOps
+rpcdetailed.rpcdetailed.client.BlockReceivedAndDeletedAvgTime
+rpcdetailed.rpcdetailed.client.BlockReceivedAndDeletedNumOps
+rpcdetailed.rpcdetailed.client.BlockReportAvgTime
+rpcdetailed.rpcdetailed.client.BlockReportNumOps
+rpcdetailed.rpcdetailed.client.CheckAccessAvgTime
+rpcdetailed.rpcdetailed.client.CheckAccessNumOps
+rpcdetailed.rpcdetailed.client.CommitBlockSynchronizationAvgTime
+rpcdetailed.rpcdetailed.client.CommitBlockSynchronizationNumOps
+rpcdetailed.rpcdetailed.client.CompleteAvgTime
+rpcdetailed.rpcdetailed.client.CompleteNumOps
+rpcdetailed.rpcdetailed.client.CreateAvgTime
+rpcdetailed.rpcdetailed.client.CreateNumOps
+rpcdetailed.rpcdetailed.client.DeleteAvgTime
+rpcdetailed.rpcdetailed.client.DeleteNumOps
+rpcdetailed.rpcdetailed.client.FsyncAvgTime
+rpcdetailed.rpcdetailed.client.FsyncNumOps
+rpcdetailed.rpcdetailed.client.GetBlockLocationsAvgTime
+rpcdetailed.rpcdetailed.client.GetBlockLocationsNumOps
+rpcdetailed.rpcdetailed.client.GetEditLogManifestAvgTime
+rpcdetailed.rpcdetailed.client.GetEditLogManifestNumOps
+rpcdetailed.rpcdetailed.client.GetFileInfoAvgTime
+rpcdetailed.rpcdetailed.client.GetFileInfoNumOps
+rpcdetailed.rpcdetailed.client.GetListingAvgTime
+rpcdetailed.rpcdetailed.client.GetListingNumOps
+rpcdetailed.rpcdetailed.client.GetServerDefaultsAvgTime
+rpcdetailed.rpcdetailed.client.GetServerDefaultsNumOps
+rpcdetailed.rpcdetailed.client.GetTransactionIdAvgTime
+rpcdetailed.rpcdetailed.client.GetTransactionIdNumOps
+rpcdetailed.rpcdetailed.client.IsRollingUpgradeAvgTime
+rpcdetailed.rpcdetailed.client.IsRollingUpgradeNumOps
+rpcdetailed.rpcdetailed.client.ListEncryptionZonesAvgTime
+rpcdetailed.rpcdetailed.client.ListEncryptionZonesNumOps
+rpcdetailed.rpcdetailed.client.MkdirsAvgTime
+rpcdetailed.rpcdetailed.client.MkdirsNumOps
+rpcdetailed.rpcdetailed.client.PathIsNotEmptyDirectoryExceptionAvgTime
+rpcdetailed.rpcdetailed.client.PathIsNotEmptyDirectoryExceptionNumOps
+rpcdetailed.rpcdetailed.client.RecoverLeaseAvgTime
+rpcdetailed.rpcdetailed.client.RecoverLeaseNumOps
+rpcdetailed.rpcdetailed.client.RegisterDatanodeAvgTime
+rpcdetailed.rpcdetailed.client.RegisterDatanodeNumOps
+rpcdetailed.rpcdetailed.client.Rename2AvgTime
+rpcdetailed.rpcdetailed.client.Rename2NumOps
+rpcdetailed.rpcdetailed.client.RenameAvgTime
+rpcdetailed.rpcdetailed.client.RenameNumOps
+rpcdetailed.rpcdetailed.client.RenewLeaseAvgTime
+rpcdetailed.rpcdetailed.client.RenewLeaseNumOps
+rpcdetailed.rpcdetailed.client.RetriableExceptionAvgTime
+rpcdetailed.rpcdetailed.client.RetriableExceptionNumOps
+rpcdetailed.rpcdetailed.client.RollEditLogAvgTime
+rpcdetailed.rpcdetailed.client.RollEditLogNumOps
+rpcdetailed.rpcdetailed.client.SafeModeExceptionAvgTime
+rpcdetailed.rpcdetailed.client.SafeModeExceptionNumOps
+rpcdetailed.rpcdetailed.client.SendHeartbeatAvgTime
+rpcdetailed.rpcdetailed.client.SendHeartbeatNumOps
+rpcdetailed.rpcdetailed.client.SetPermissionAvgTime
+rpcdetailed.rpcdetailed.client.SetPermissionNumOps
+rpcdetailed.rpcdetailed.client.SetReplicationAvgTime
+rpcdetailed.rpcdetailed.client.SetReplicationNumOps
+rpcdetailed.rpcdetailed.client.SetSafeModeAvgTime
+rpcdetailed.rpcdetailed.client.SetSafeModeNumOps
+rpcdetailed.rpcdetailed.client.SetTimesAvgTime
+rpcdetailed.rpcdetailed.client.SetTimesNumOps
+rpcdetailed.rpcdetailed.client.VersionRequestAvgTime
+rpcdetailed.rpcdetailed.client.VersionRequestNumOps
+ugi.UgiMetrics.GetGroupsAvgTime
+ugi.UgiMetrics.GetGroupsNumOps
+ugi.UgiMetrics.LoginFailureAvgTime
+ugi.UgiMetrics.LoginFailureNumOps
+ugi.UgiMetrics.LoginSuccessAvgTime
+ugi.UgiMetrics.LoginSuccessNumOps
+ugi.UgiMetrics.RenewalFailures
+ugi.UgiMetrics.RenewalFailuresTotal
+default.General.active_calls_api_create_table
+default.General.active_calls_api_drop_table
+default.General.active_calls_api_get_all_databases
+default.General.active_calls_api_get_database
+default.General.active_calls_api_get_functions
+default.General.active_calls_api_get_table
+default.General.active_calls_api_get_tables
+default.General.api_create_table_15min_rate
+default.General.api_create_table_1min_rate
+default.General.api_create_table_5min_rate
+default.General.api_create_table_75thpercentile
+default.General.api_create_table_95thpercentile
+default.General.api_create_table_98thpercentile
+default.General.api_create_table_999thpercentile
+default.General.api_create_table_99thpercentile
+default.General.api_create_table_count
+default.General.api_create_table_max
+default.General.api_create_table_mean
+default.General.api_create_table_mean_rate
+default.General.api_create_table_median
+default.General.api_create_table_min
+default.General.api_create_table_stddev
+default.General.api_drop_table_15min_rate
+default.General.api_drop_table_1min_rate
+default.General.api_drop_table_5min_rate
+default.General.api_drop_table_75thpercentile
+default.General.api_drop_table_95thpercentile
+default.General.api_drop_table_98thpercentile
+default.General.api_drop_table_999thpercentile
+default.General.api_drop_table_99thpercentile
+default.General.api_drop_table_count
+default.General.api_drop_table_max
+default.General.api_drop_table_mean
+default.General.api_drop_table_mean_rate
+default.General.api_drop_table_median
+default.General.api_drop_table_min
+default.General.api_drop_table_stddev
+default.General.api_get_all_databases_15min_rate
+default.General.api_get_all_databases_1min_rate
+default.General.api_get_all_databases_5min_rate
+default.General.api_get_all_databases_75thpercentile
+default.General.api_get_all_databases_95thpercentile
+default.General.api_get_all_databases_98thpercentile
+default.General.api_get_all_databases_999thpercentile
+default.General.api_get_all_databases_99thpercentile
+default.General.api_get_all_databases_count
+default.General.api_get_all_databases_max
+default.General.api_get_all_databases_mean
+default.General.api_get_all_databases_mean_rate
+default.General.api_get_all_databases_median
+default.General.api_get_all_databases_min
+default.General.api_get_all_databases_stddev
+default.General.api_get_database_15min_rate
+default.General.api_get_database_1min_rate
+default.General.api_get_database_5min_rate
+default.General.api_get_database_75thpercentile
+default.General.api_get_database_95thpercentile
+default.General.api_get_database_98thpercentile
+default.General.api_get_database_999thpercentile
+default.General.api_get_database_99thpercentile
+default.General.api_get_database_count
+default.General.api_get_database_max
+default.General.api_get_database_mean
+default.General.api_get_database_mean_rate
+default.General.api_get_database_median
+default.General.api_get_database_min
+default.General.api_get_database_stddev
+default.General.api_get_functions_15min_rate
+default.General.api_get_functions_1min_rate
+default.General.api_get_functions_5min_rate
+default.General.api_get_functions_75thpercentile
+default.General.api_get_functions_95thpercentile
+default.General.api_get_functions_98thpercentile
+default.General.api_get_functions_999thpercentile
+default.General.api_get_functions_99thpercentile
+default.General.api_get_functions_count
+default.General.api_get_functions_max
+default.General.api_get_functions_mean
+default.General.api_get_functions_mean_rate
+default.General.api_get_functions_median
+default.General.api_get_functions_min
+default.General.api_get_functions_stddev
+default.General.api_get_table_15min_rate
+default.General.api_get_table_1min_rate
+default.General.api_get_table_5min_rate
+default.General.api_get_table_75thpercentile
+default.General.api_get_table_95thpercentile
+default.General.api_get_table_98thpercentile
+default.General.api_get_table_999thpercentile
+default.General.api_get_table_99thpercentile
+default.General.api_get_table_count
+default.General.api_get_table_max
+default.General.api_get_table_mean
+default.General.api_get_table_mean_rate
+default.General.api_get_table_median
+default.General.api_get_table_min
+default.General.api_get_table_stddev
+default.General.api_get_tables_15min_rate
+default.General.api_get_tables_1min_rate
+default.General.api_get_tables_5min_rate
+default.General.api_get_tables_75thpercentile
+default.General.api_get_tables_95thpercentile
+default.General.api_get_tables_98thpercentile
+default.General.api_get_tables_999thpercentile
+default.General.api_get_tables_99thpercentile
+default.General.api_get_tables_count
+default.General.api_get_tables_max
+default.General.api_get_tables_mean
+default.General.api_get_tables_mean_rate
+default.General.api_get_tables_median
+default.General.api_get_tables_min
+default.General.api_get_tables_stddev
+default.General.buffers.direct.capacity
+default.General.buffers.direct.count
+default.General.buffers.direct.used
+default.General.buffers.mapped.capacity
+default.General.buffers.mapped.count
+default.General.buffers.mapped.used
+default.General.classLoading.loaded
+default.General.classLoading.unloaded
+default.General.create_total_count_tables
+default.General.delete_total_count_tables
+default.General.gc.PS-MarkSweep.count
+default.General.gc.PS-MarkSweep.time
+default.General.gc.PS-Scavenge.count
+default.General.gc.PS-Scavenge.time
+default.General.init_total_count_dbs
+default.General.init_total_count_partitions
+default.General.init_total_count_tables
+default.General.jvm.pause.extraSleepTime
+default.General.memory.heap.committed
+default.General.memory.heap.init
+default.General.memory.heap.max
+default.General.memory.heap.usage
+default.General.memory.heap.used
+default.General.memory.non-heap.committed
+default.General.memory.non-heap.init
+default.General.memory.non-heap.max
+default.General.memory.non-heap.usage
+default.General.memory.non-heap.used
+default.General.memory.pools.Code-Cache.usage
+default.General.memory.pools.Compressed-Class-Space.usage
+default.General.memory.pools.Metaspace.usage
+default.General.memory.pools.PS-Eden-Space.usage
+default.General.memory.pools.PS-Old-Gen.usage
+default.General.memory.pools.PS-Survivor-Space.usage
+default.General.memory.total.committed
+default.General.memory.total.init
+default.General.memory.total.max
+default.General.memory.total.used
+default.General.open_connections
+default.General.threads.blocked.count
+default.General.threads.count
+default.General.threads.daemon.count
+default.General.threads.deadlock.count
+default.General.threads.new.count
+default.General.threads.runnable.count
+default.General.threads.terminated.count
+default.General.threads.timed_waiting.count
+default.General.threads.waiting.count
+metricssystem.MetricsSystem.DroppedPubAll
+metricssystem.MetricsSystem.NumActiveSinks
+metricssystem.MetricsSystem.NumActiveSources
+metricssystem.MetricsSystem.NumAllSinks
+metricssystem.MetricsSystem.NumAllSources
+metricssystem.MetricsSystem.PublishAvgTime
+metricssystem.MetricsSystem.PublishNumOps
+metricssystem.MetricsSystem.Sink_timelineAvgTime
+metricssystem.MetricsSystem.Sink_timelineDropped
+metricssystem.MetricsSystem.Sink_timelineNumOps
+metricssystem.MetricsSystem.Sink_timelineQsize
+metricssystem.MetricsSystem.SnapshotAvgTime
+metricssystem.MetricsSystem.SnapshotNumOps
+ugi.UgiMetrics.GetGroupsAvgTime
+ugi.UgiMetrics.GetGroupsNumOps
+ugi.UgiMetrics.LoginFailureAvgTime
+ugi.UgiMetrics.LoginFailureNumOps
+ugi.UgiMetrics.LoginSuccessAvgTime
+ugi.UgiMetrics.LoginSuccessNumOps
+ugi.UgiMetrics.RenewalFailures
+ugi.UgiMetrics.RenewalFailuresTotal
+Supervisors
+Total Tasks
+Total Slots
+Used Slots
+Topologies
+Total Executors
+Free Slots
+jvm.JvmMetrics.GcCount
+jvm.JvmMetrics.GcCountPS
+jvm.JvmMetrics.GcTimeMillis
+jvm.JvmMetrics.GcTimeMillisPS
+jvm.JvmMetrics.LogError
+jvm.JvmMetrics.LogFatal
+jvm.JvmMetrics.LogInfo
+jvm.JvmMetrics.LogWarn
+jvm.JvmMetrics.MemHeapCommittedM
+jvm.JvmMetrics.MemHeapMaxM
+jvm.JvmMetrics.MemHeapUsedM
+jvm.JvmMetrics.MemMaxM
+jvm.JvmMetrics.MemNonHeapCommittedM
+jvm.JvmMetrics.MemNonHeapMaxM
+jvm.JvmMetrics.MemNonHeapUsedM
+jvm.JvmMetrics.ThreadsBlocked
+jvm.JvmMetrics.ThreadsNew
+jvm.JvmMetrics.ThreadsRunnable
+jvm.JvmMetrics.ThreadsTerminated
+jvm.JvmMetrics.ThreadsTimedWaiting
+jvm.JvmMetrics.ThreadsWaiting
+mapred.ShuffleMetrics.ShuffleConnections
+mapred.ShuffleMetrics.ShuffleOutputBytes
+mapred.ShuffleMetrics.ShuffleOutputsFailed
+mapred.ShuffleMetrics.ShuffleOutputsOK
+metricssystem.MetricsSystem.DroppedPubAll
+metricssystem.MetricsSystem.NumActiveSinks
+metricssystem.MetricsSystem.NumActiveSources
+metricssystem.MetricsSystem.NumAllSinks
+metricssystem.MetricsSystem.NumAllSources
+metricssystem.MetricsSystem.PublishAvgTime
+metricssystem.MetricsSystem.PublishNumOps
+metricssystem.MetricsSystem.Sink_timelineAvgTime
+metricssystem.MetricsSystem.Sink_timelineDropped
+metricssystem.MetricsSystem.Sink_timelineNumOps
+metricssystem.MetricsSystem.Sink_timelineQsize
+metricssystem.MetricsSystem.SnapshotAvgTime
+metricssystem.MetricsSystem.SnapshotNumOps
+rpc.rpc.CallQueueLength
+rpc.rpc.NumOpenConnections
+rpc.rpc.ReceivedBytes
+rpc.rpc.RpcAuthenticationFailures
+rpc.rpc.RpcAuthenticationSuccesses
+rpc.rpc.RpcAuthorizationFailures
+rpc.rpc.RpcAuthorizationSuccesses
+rpc.rpc.RpcClientBackoff
+rpc.rpc.RpcProcessingTimeAvgTime
+rpc.rpc.RpcProcessingTimeNumOps
+rpc.rpc.RpcQueueTimeAvgTime
+rpc.rpc.RpcQueueTimeNumOps
+rpc.rpc.RpcSlowCalls
+rpc.rpc.SentBytes
+rpcdetailed.rpcdetailed.AllocateAvgTime
+rpcdetailed.rpcdetailed.AllocateNumOps
+rpcdetailed.rpcdetailed.FinishApplicationMasterAvgTime
+rpcdetailed.rpcdetailed.FinishApplicationMasterNumOps
+rpcdetailed.rpcdetailed.GetApplicationReportAvgTime
+rpcdetailed.rpcdetailed.GetApplicationReportNumOps
+rpcdetailed.rpcdetailed.GetClusterMetricsAvgTime
+rpcdetailed.rpcdetailed.GetClusterMetricsNumOps
+rpcdetailed.rpcdetailed.GetClusterNodesAvgTime
+rpcdetailed.rpcdetailed.GetClusterNodesNumOps
+rpcdetailed.rpcdetailed.GetContainerStatusesAvgTime
+rpcdetailed.rpcdetailed.GetContainerStatusesNumOps
+rpcdetailed.rpcdetailed.GetNewApplicationAvgTime
+rpcdetailed.rpcdetailed.GetNewApplicationNumOps
+rpcdetailed.rpcdetailed.GetQueueInfoAvgTime
+rpcdetailed.rpcdetailed.GetQueueInfoNumOps
+rpcdetailed.rpcdetailed.GetQueueUserAclsAvgTime
+rpcdetailed.rpcdetailed.GetQueueUserAclsNumOps
+rpcdetailed.rpcdetailed.HeartbeatAvgTime
+rpcdetailed.rpcdetailed.HeartbeatNumOps
+rpcdetailed.rpcdetailed.NodeHeartbeatAvgTime
+rpcdetailed.rpcdetailed.NodeHeartbeatNumOps
+rpcdetailed.rpcdetailed.RegisterApplicationMasterAvgTime
+rpcdetailed.rpcdetailed.RegisterApplicationMasterNumOps
+rpcdetailed.rpcdetailed.RegisterNodeManagerAvgTime
+rpcdetailed.rpcdetailed.RegisterNodeManagerNumOps
+rpcdetailed.rpcdetailed.StartContainersAvgTime
+rpcdetailed.rpcdetailed.StartContainersNumOps
+rpcdetailed.rpcdetailed.StopContainersAvgTime
+rpcdetailed.rpcdetailed.StopContainersNumOps
+rpcdetailed.rpcdetailed.SubmitApplicationAvgTime
+rpcdetailed.rpcdetailed.SubmitApplicationNumOps
+ugi.UgiMetrics.GetGroupsAvgTime
+ugi.UgiMetrics.GetGroupsNumOps
+ugi.UgiMetrics.LoginFailureAvgTime
+ugi.UgiMetrics.LoginFailureNumOps
+ugi.UgiMetrics.LoginSuccessAvgTime
+ugi.UgiMetrics.LoginSuccessNumOps
+yarn.ClusterMetrics.AMLaunchDelayAvgTime
+yarn.ClusterMetrics.AMLaunchDelayNumOps
+yarn.ClusterMetrics.AMRegisterDelayAvgTime
+yarn.ClusterMetrics.AMRegisterDelayNumOps
+yarn.ClusterMetrics.NumActiveNMs
+yarn.ClusterMetrics.NumDecommissionedNMs
+yarn.ClusterMetrics.NumLostNMs
+yarn.ClusterMetrics.NumRebootedNMs
+yarn.ClusterMetrics.NumUnhealthyNMs
+yarn.NodeManagerMetrics.AllocatedContainers
+yarn.NodeManagerMetrics.AllocatedGB
+yarn.NodeManagerMetrics.AllocatedVCores
+yarn.NodeManagerMetrics.AvailableGB
+yarn.NodeManagerMetrics.AvailableVCores
+yarn.NodeManagerMetrics.BadLocalDirs
+yarn.NodeManagerMetrics.BadLogDirs
+yarn.NodeManagerMetrics.ContainerLaunchDurationAvgTime
+yarn.NodeManagerMetrics.ContainerLaunchDurationNumOps
+yarn.NodeManagerMetrics.ContainersCompleted
+yarn.NodeManagerMetrics.ContainersFailed
+yarn.NodeManagerMetrics.ContainersIniting
+yarn.NodeManagerMetrics.ContainersKilled
+yarn.NodeManagerMetrics.ContainersLaunched
+yarn.NodeManagerMetrics.ContainersRunning
+yarn.NodeManagerMetrics.GoodLocalDirsDiskUtilizationPerc
+yarn.NodeManagerMetrics.GoodLogDirsDiskUtilizationPerc
+yarn.QueueMetrics.Queue=root.AMResourceLimitMB
+yarn.QueueMetrics.Queue=root.AMResourceLimitVCores
+yarn.QueueMetrics.Queue=root.ActiveApplications
+yarn.QueueMetrics.Queue=root.ActiveUsers
+yarn.QueueMetrics.Queue=root.AggregateContainersAllocated
+yarn.QueueMetrics.Queue=root.AggregateContainersReleased
+yarn.QueueMetrics.Queue=root.AllocatedContainers
+yarn.QueueMetrics.Queue=root.AllocatedMB
+yarn.QueueMetrics.Queue=root.AllocatedVCores
+yarn.QueueMetrics.Queue=root.AppAttemptFirstContainerAllocationDelayAvgTime
+yarn.QueueMetrics.Queue=root.AppAttemptFirstContainerAllocationDelayNumOps
+yarn.QueueMetrics.Queue=root.AppsCompleted
+yarn.QueueMetrics.Queue=root.AppsFailed
+yarn.QueueMetrics.Queue=root.AppsKilled
+yarn.QueueMetrics.Queue=root.AppsPending
+yarn.QueueMetrics.Queue=root.AppsRunning
+yarn.QueueMetrics.Queue=root.AppsSubmitted
+yarn.QueueMetrics.Queue=root.AvailableMB
+yarn.QueueMetrics.Queue=root.AvailableVCores
+yarn.QueueMetrics.Queue=root.PendingContainers
+yarn.QueueMetrics.Queue=root.PendingMB
+yarn.QueueMetrics.Queue=root.PendingVCores
+yarn.QueueMetrics.Queue=root.ReservedContainers
+yarn.QueueMetrics.Queue=root.ReservedMB
+yarn.QueueMetrics.Queue=root.ReservedVCores
+yarn.QueueMetrics.Queue=root.UsedAMResourceMB
+yarn.QueueMetrics.Queue=root.UsedAMResourceVCores
+yarn.QueueMetrics.Queue=root.default.AMResourceLimitMB
+yarn.QueueMetrics.Queue=root.default.AMResourceLimitVCores
+yarn.QueueMetrics.Queue=root.default.ActiveApplications
+yarn.QueueMetrics.Queue=root.default.ActiveUsers
+yarn.QueueMetrics.Queue=root.default.AggregateContainersAllocated
+yarn.QueueMetrics.Queue=root.default.AggregateContainersReleased
+yarn.QueueMetrics.Queue=root.default.AllocatedContainers
+yarn.QueueMetrics.Queue=root.default.AllocatedMB
+yarn.QueueMetrics.Queue=root.default.AllocatedVCores
+yarn.QueueMetrics.Queue=root.default.AppAttemptFirstContainerAllocationDelayAvgTime
+yarn.QueueMetrics.Queue=root.default.AppAttemptFirstContainerAllocationDelayNumOps
+yarn.QueueMetrics.Queue=root.default.AppsCompleted
+yarn.QueueMetrics.Queue=root.default.AppsFailed
+yarn.QueueMetrics.Queue=root.default.AppsKilled
+yarn.QueueMetrics.Queue=root.default.AppsPending
+yarn.QueueMetrics.Queue=root.default.AppsRunning
+yarn.QueueMetrics.Queue=root.default.AppsSubmitted
+yarn.QueueMetrics.Queue=root.default.AvailableMB
+yarn.QueueMetrics.Queue=root.default.AvailableVCores
+yarn.QueueMetrics.Queue=root.default.PendingContainers
+yarn.QueueMetrics.Queue=root.default.PendingMB
+yarn.QueueMetrics.Queue=root.default.PendingVCores
+yarn.QueueMetrics.Queue=root.default.ReservedContainers
+yarn.QueueMetrics.Queue=root.default.ReservedMB
+yarn.QueueMetrics.Queue=root.default.ReservedVCores
+yarn.QueueMetrics.Queue=root.default.UsedAMResourceMB
+yarn.QueueMetrics.Queue=root.default.UsedAMResourceVCores
+yarn.QueueMetrics.Queue=root.default.running_0
+yarn.QueueMetrics.Queue=root.default.running_1440
+yarn.QueueMetrics.Queue=root.default.running_300
+yarn.QueueMetrics.Queue=root.default.running_60
+yarn.QueueMetrics.Queue=root.running_0
+yarn.QueueMetrics.Queue=root.running_1440
+yarn.QueueMetrics.Queue=root.running_300
+yarn.QueueMetrics.Queue=root.running_60
\ No newline at end of file

-- 
To stop receiving notification emails like this one, please contact
avijayan@apache.org.

[ambari] 21/39: AMBARI-22215 Refine cluster second aggregator by aligning sink publish times to 1 minute boundaries. (dsen)

Posted by av...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

avijayan pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/ambari.git

commit 166fff69c2b3414bbb9e907e4d049342779ff3f2
Author: Dmytro Sen <ds...@apache.org>
AuthorDate: Thu Oct 12 12:49:22 2017 +0300

    AMBARI-22215 Refine cluster second aggregator by aligning sink publish times to 1 minute boundaries. (dsen)
---
 .../sink/timeline/AbstractTimelineMetricsSink.java |  95 +++++++-
 .../metrics2/sink/timeline/TimelineMetric.java     |   3 +
 .../timeline/AbstractTimelineMetricSinkTest.java   | 240 +++++++++++++++++++++
 .../AbstractTimelineMetricSinkTest.java            | 113 ----------
 .../sink/timeline/HadoopTimelineMetricsSink.java   |   2 +-
 .../timeline/HadoopTimelineMetricsSinkTest.java    |   4 +-
 .../src/main/python/core/application_metric_map.py |  52 ++++-
 .../test/python/core/TestApplicationMetricMap.py   |  38 +++-
 .../timeline/TimelineMetricConfiguration.java      |   3 -
 .../timeline/TimelineMetricsIgniteCache.java       |  14 +-
 .../timeline/aggregators/AggregatorUtils.java      |   2 +-
 .../TimelineMetricAggregatorFactory.java           |   7 +-
 ...tricClusterAggregatorSecondWithCacheSource.java |  38 +---
 .../timeline/TimelineMetricsIgniteCacheTest.java   |  56 -----
 ...ClusterAggregatorSecondWithCacheSourceTest.java |  65 +-----
 15 files changed, 437 insertions(+), 295 deletions(-)
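
The core of this change, seen in isolation: the sketch below is illustrative only, not code from this patch — a plain HashMap stands in for the Guava post cache, and the 10-second collection period is an assumed default. It reproduces the two iterations described in the alignMetricsByMinuteMark javadoc further down.

    import java.util.HashMap;
    import java.util.Map;
    import java.util.TreeMap;

    // Illustrative sketch of minute-boundary alignment with a held-back cache.
    public class MinuteAlignmentSketch {
      static final long MINUTE = 60_000L;
      static final long PERIOD = 10_000L; // assumed sink collection period
      static final Map<String, TreeMap<Long, Double>> postCache = new HashMap<>();

      static TreeMap<Long, Double> align(String name, TreeMap<Long, Double> values) {
        TreeMap<Long, Double> cached = postCache.remove(name);
        if (cached != null) {
          values.putAll(cached); // merge the minute held back on the previous call
        }
        // If another datapoint could still arrive in the last minute, hold that minute back.
        if (values.lastKey() % MINUTE <= MINUTE - PERIOD) {
          long lastMinuteStart = values.lastKey() / MINUTE * MINUTE;
          TreeMap<Long, Double> toCache = new TreeMap<>(values.tailMap(lastMinuteStart));
          values.keySet().removeAll(toCache.keySet());
          if (!toCache.isEmpty()) {
            postCache.put(name, toCache);
          }
        }
        return values; // only complete minutes remain
      }

      public static void main(String[] args) {
        TreeMap<Long, Double> first = new TreeMap<>();
        first.put(15_000L, 1.0);   // 00m15s
        first.put(75_000L, 2.0);   // 01m15s -> held back
        System.out.println(align("m1", first));  // {15000=1.0}

        TreeMap<Long, Double> second = new TreeMap<>();
        second.put(85_000L, 3.0);  // 01m25s
        second.put(175_000L, 4.0); // 02m55s -> minute cannot grow, post everything
        System.out.println(align("m1", second)); // {75000=2.0, 85000=3.0, 175000=4.0}
      }
    }
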

diff --git a/ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/AbstractTimelineMetricsSink.java b/ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/AbstractTimelineMetricsSink.java
index 3c06032..739e9dc 100644
--- a/ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/AbstractTimelineMetricsSink.java
+++ b/ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/AbstractTimelineMetricsSink.java
@@ -19,6 +19,8 @@ package org.apache.hadoop.metrics2.sink.timeline;
 
 import com.google.common.base.Supplier;
 import com.google.common.base.Suppliers;
+import com.google.common.cache.Cache;
+import com.google.common.cache.CacheBuilder;
 import com.google.common.reflect.TypeToken;
 import com.google.gson.Gson;
 import com.google.gson.JsonSyntaxException;
@@ -58,6 +60,7 @@ import java.util.List;
 import java.util.Random;
 import java.util.Set;
 import java.util.SortedSet;
+import java.util.TreeMap;
 import java.util.TreeSet;
 import java.util.concurrent.TimeUnit;
 import java.util.concurrent.atomic.AtomicInteger;
@@ -130,6 +133,13 @@ public abstract class AbstractTimelineMetricsSink {
   private static final int COLLECTOR_HOST_CACHE_MAX_EXPIRATION_MINUTES = 75;
   private static final int COLLECTOR_HOST_CACHE_MIN_EXPIRATION_MINUTES = 60;
 
+  // Sink collection period; 10 seconds by default.
+  protected int collectionPeriodMillis = 10000;
+
+  private int cacheExpireTimeMinutesDefault = 10;
+
+  private volatile Cache<String, TimelineMetric> metricsPostCache = CacheBuilder.newBuilder().expireAfterAccess(cacheExpireTimeMinutesDefault, TimeUnit.MINUTES).build();
+
   static {
     mapper = new ObjectMapper();
     AnnotationIntrospector introspector = new JaxbAnnotationIntrospector();
@@ -289,7 +299,21 @@ public abstract class AbstractTimelineMetricsSink {
     return collectorHost;
   }
 
+  /**
+   * @param metrics metrics to post; metric values will be aligned on the minute mark,
+   *                and the last incomplete minute will be cached and posted in a future iteration
+   */
   protected boolean emitMetrics(TimelineMetrics metrics) {
+    return emitMetrics(metrics, false);
+  }
+
+  /**
+   * @param metrics metrics to post; if postAllCachedMetrics is false, metric values will be aligned
+   *                on the minute mark and the last incomplete minute will be cached and posted in a future iteration
+   * @param postAllCachedMetrics if true, all cached metrics are posted as well, ignoring the minute alignment
+   * @return true if the metrics were posted successfully
+   */
+  protected boolean emitMetrics(TimelineMetrics metrics, boolean postAllCachedMetrics) {
     String connectUrl;
     boolean validCollectorHost = true;
 
@@ -307,11 +331,20 @@ public abstract class AbstractTimelineMetricsSink {
       connectUrl = getCollectorUri(collectorHost);
     }
 
+    TimelineMetrics metricsToEmit = alignMetricsByMinuteMark(metrics);
+
+    if (postAllCachedMetrics) {
+      for (TimelineMetric timelineMetric : metricsPostCache.asMap().values()) {
+        metricsToEmit.addOrMergeTimelineMetric(timelineMetric);
+      }
+      metricsPostCache.invalidateAll();
+    }
+
     if (validCollectorHost) {
       String jsonData = null;
       LOG.debug("EmitMetrics connectUrl = "  + connectUrl);
       try {
-        jsonData = mapper.writeValueAsString(metrics);
+        jsonData = mapper.writeValueAsString(metricsToEmit);
       } catch (IOException e) {
         LOG.error("Unable to parse metrics", e);
       }
@@ -335,6 +368,61 @@ public abstract class AbstractTimelineMetricsSink {
   }
 
   /**
+   * Align metrics on minute boundaries so that only complete minutes are sent.
+   * Data points from the incomplete last minute are cached and posted once that minute completes.
+   * Cached metrics are merged with the metrics currently being posted, e.g.:
+   * first iteration:  metrics from 00m15s to 01m15s are processed;
+   *                   metrics from 00m15s to 00m59s are posted
+   *                   and metrics from 01m00s to 01m15s are cached
+   * second iteration: metrics from 01m25s to 02m55s are processed;
+   *                   the cached metrics from the previous call are merged in;
+   *                   metrics from 01m00s to 02m55s are posted and the cache is left empty
+   * @param metrics metrics to align
+   * @return metrics covering only complete minutes, ready to post
+   */
+  protected TimelineMetrics alignMetricsByMinuteMark(TimelineMetrics metrics) {
+    TimelineMetrics allMetricsToPost = new TimelineMetrics();
+
+    for (TimelineMetric metric : metrics.getMetrics()) {
+      TimelineMetric cachedMetric = metricsPostCache.getIfPresent(metric.getMetricName());
+      if (cachedMetric != null) {
+        metric.addMetricValues(cachedMetric.getMetricValues());
+        metricsPostCache.invalidate(metric.getMetricName());
+      }
+    }
+
+    for (TimelineMetric metric : metrics.getMetrics()) {
+      TreeMap<Long, Double> valuesToCache = new TreeMap<>();
+      TreeMap<Long, Double> valuesToPost = metric.getMetricValues();
+
+      // If no more datapoints can arrive in the last minute, just post the metrics;
+      // otherwise cut off and cache the last incomplete minute.
+      if (!(valuesToPost.lastKey() % 60000 > 60000 - collectionPeriodMillis)) {
+        Long lastMinute = valuesToPost.lastKey() / 60000;
+        while (!valuesToPost.isEmpty() && valuesToPost.lastKey() / 60000 == lastMinute) {
+          valuesToCache.put(valuesToPost.lastKey(), valuesToPost.get(valuesToPost.lastKey()));
+          valuesToPost.remove(valuesToPost.lastKey());
+        }
+      }
+
+      if (!valuesToCache.isEmpty()) {
+        TimelineMetric metricToCache = new TimelineMetric(metric);
+        metricToCache.setMetricValues(valuesToCache);
+        metricsPostCache.put(metricToCache.getMetricName(), metricToCache);
+      }
+
+      if (!valuesToPost.isEmpty()) {
+        TimelineMetric metricToPost = new TimelineMetric(metric);
+        metricToPost.setMetricValues(valuesToPost);
+        allMetricsToPost.addOrMergeTimelineMetric(metricToPost);
+      }
+    }
+
+    return allMetricsToPost;
+  }
+
+  /**
    * Cleans up and closes an input stream
    * see http://docs.oracle.com/javase/6/docs/technotes/guides/net/http-keepalive.html
    * @param is the InputStream to clean up
@@ -609,6 +697,11 @@ public abstract class AbstractTimelineMetricsSink {
       rand.nextInt(zookeeperMaxBackoffTimeMins - zookeeperMinBackoffTimeMins + 1)) * 60*1000l;
   }
 
+  // Exposed for testing only.
+  protected Cache<String, TimelineMetric> getMetricsPostCache() {
+    return metricsPostCache;
+  }
+
   /**
    * Get a pre-formatted URI for the collector
    */
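
The post cache itself is a Guava cache with a 10-minute expire-after-access policy, so datapoints held back for a metric that then stops reporting are eventually dropped rather than pinned forever. A minimal sketch of that cache behavior (names and values here are illustrative, not from the patch):

    import com.google.common.cache.Cache;
    import com.google.common.cache.CacheBuilder;
    import java.util.concurrent.TimeUnit;

    // Entries expire 10 minutes after the last read or write.
    public class PostCacheSketch {
      public static void main(String[] args) {
        Cache<String, String> cache = CacheBuilder.newBuilder()
            .expireAfterAccess(10, TimeUnit.MINUTES)
            .build();
        cache.put("metric1", "held-back minute of datapoints");
        System.out.println(cache.getIfPresent("metric1")); // hit while fresh
        cache.invalidateAll();                             // the flush-on-close path
        System.out.println(cache.size());                  // 0
      }
    }
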
diff --git a/ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/TimelineMetric.java b/ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/TimelineMetric.java
index 3dfcf4e..b376048 100644
--- a/ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/TimelineMetric.java
+++ b/ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/TimelineMetric.java
@@ -146,6 +146,9 @@ public class TimelineMetric implements Comparable<TimelineMetric>, Serializable
 
   public void addMetricValues(Map<Long, Double> metricValues) {
     this.metricValues.putAll(metricValues);
+    if (!this.metricValues.isEmpty()) {
+      this.setStartTime(this.metricValues.firstKey());
+    }
   }
 
   @XmlElement(name = "metadata")
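
A small, hypothetical usage of the TimelineMetric change above; getStartTime() is assumed to be the accessor paired with the setStartTime(...) call in the diff, and the constructor matches the one used in the new test below:

    import java.util.TreeMap;
    import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;

    public class StartTimeSketch {
      public static void main(String[] args) {
        TimelineMetric metric = new TimelineMetric("metric1", "host1", "app1", "instance1");
        TreeMap<Long, Double> values = new TreeMap<>();
        values.put(120_000L, 2.0);
        values.put(60_000L, 1.0);
        metric.addMetricValues(values);
        // After the merge, the start time tracks the earliest datapoint:
        // metric.getStartTime() == 60_000L (the firstKey of the merged TreeMap)
      }
    }
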
diff --git a/ambari-metrics/ambari-metrics-common/src/test/java/org/apache/hadoop/metrics2/sink/timeline/AbstractTimelineMetricSinkTest.java b/ambari-metrics/ambari-metrics-common/src/test/java/org/apache/hadoop/metrics2/sink/timeline/AbstractTimelineMetricSinkTest.java
new file mode 100644
index 0000000..634d18c
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-common/src/test/java/org/apache/hadoop/metrics2/sink/timeline/AbstractTimelineMetricSinkTest.java
@@ -0,0 +1,240 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.metrics2.sink.timeline;
+
+import junit.framework.Assert;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.powermock.api.easymock.PowerMock;
+import org.powermock.core.classloader.annotations.PrepareForTest;
+import org.powermock.modules.junit4.PowerMockRunner;
+
+import java.io.OutputStream;
+import java.net.HttpURLConnection;
+import java.net.URL;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.TreeMap;
+
+import static org.easymock.EasyMock.anyString;
+import static org.easymock.EasyMock.expect;
+import static org.powermock.api.easymock.PowerMock.expectNew;
+import static org.powermock.api.easymock.PowerMock.replayAll;
+
+@RunWith(PowerMockRunner.class)
+@PrepareForTest({AbstractTimelineMetricsSink.class, HttpURLConnection.class})
+public class AbstractTimelineMetricSinkTest {
+
+  @Test
+  public void testParseHostsStringIntoCollection() {
+    AbstractTimelineMetricsSink sink = new TestTimelineMetricsSink();
+    Collection<String> hosts;
+
+    hosts = sink.parseHostsStringIntoCollection("");
+    Assert.assertTrue(hosts.isEmpty());
+
+    hosts = sink.parseHostsStringIntoCollection("test1.123.abc.def.local");
+    Assert.assertTrue(hosts.size() == 1);
+    Assert.assertTrue(hosts.contains("test1.123.abc.def.local"));
+
+    hosts = sink.parseHostsStringIntoCollection("test1.123.abc.def.local ");
+    Assert.assertTrue(hosts.size() == 1);
+    Assert.assertTrue(hosts.contains("test1.123.abc.def.local"));
+
+    hosts = sink.parseHostsStringIntoCollection("test1.123.abc.def.local,test1.456.abc.def.local");
+    Assert.assertTrue(hosts.size() == 2);
+
+    hosts = sink.parseHostsStringIntoCollection("test1.123.abc.def.local, test1.456.abc.def.local");
+    Assert.assertTrue(hosts.size() == 2);
+    Assert.assertTrue(hosts.contains("test1.123.abc.def.local"));
+    Assert.assertTrue(hosts.contains("test1.456.abc.def.local"));
+  }
+
+  @Test
+  @PrepareForTest({URL.class, OutputStream.class, AbstractTimelineMetricsSink.class, HttpURLConnection.class, TimelineMetric.class})
+  public void testEmitMetrics() throws Exception {
+    HttpURLConnection connection = PowerMock.createNiceMock(HttpURLConnection.class);
+    URL url = PowerMock.createNiceMock(URL.class);
+    expectNew(URL.class, anyString()).andReturn(url).anyTimes();
+    expect(url.openConnection()).andReturn(connection).anyTimes();
+    expect(connection.getResponseCode()).andReturn(200).anyTimes();
+    OutputStream os = PowerMock.createNiceMock(OutputStream.class);
+    expect(connection.getOutputStream()).andReturn(os).anyTimes();
+
+
+    TestTimelineMetricsSink sink = new TestTimelineMetricsSink();
+    TimelineMetrics timelineMetrics = new TimelineMetrics();
+    long startTime = System.currentTimeMillis() / 60000 * 60000;
+
+    long seconds = 1000;
+    TreeMap<Long, Double> metricValues = new TreeMap<>();
+    /*
+
+    0        +30s      +60s
+    |         |         |
+      (1)(2)(3) (4)(5)   (6)  m1
+
+    */
+    // (6) should be cached; the rest should be posted
+
+    metricValues.put(startTime + 4*seconds, 1.0);
+    metricValues.put(startTime + 14*seconds, 2.0);
+    metricValues.put(startTime + 24*seconds, 3.0);
+    metricValues.put(startTime + 34*seconds, 4.0);
+    metricValues.put(startTime + 44*seconds, 5.0);
+    metricValues.put(startTime + 64*seconds, 6.0);
+
+    TimelineMetric timelineMetric = new TimelineMetric("metric1", "host1", "app1", "instance1");
+    timelineMetric.setStartTime(metricValues.firstKey());
+    timelineMetric.addMetricValues(metricValues);
+
+    timelineMetrics.addOrMergeTimelineMetric(timelineMetric);
+
+    replayAll();
+    sink.emitMetrics(timelineMetrics);
+    Assert.assertEquals(1, sink.getMetricsPostCache().size());
+    metricValues = new TreeMap<>();
+    metricValues.put(startTime + 64*seconds, 6.0);
+    Assert.assertEquals(metricValues, sink.getMetricsPostCache().getIfPresent("metric1").getMetricValues());
+
+    timelineMetrics = new TimelineMetrics();
+    metricValues = new TreeMap<>();
+    /*
+
+    +60      +90s     +120s     +150s     +180s
+    |         |         |         |         |
+       (7)      (8)       (9)           (10)   (11)   m1
+
+    */
+    // (6) from previous post should be merged with current data
+    // (6),(7),(8),(9),(10) should be posted; (11) should be cached
+    metricValues.put(startTime + 74*seconds, 7.0);
+    metricValues.put(startTime + 94*seconds, 8.0);
+    metricValues.put(startTime + 124*seconds, 9.0);
+    metricValues.put(startTime + 154*seconds, 10.0);
+    metricValues.put(startTime + 184*seconds, 11.0);
+
+    timelineMetric = new TimelineMetric("metric1", "host1", "app1", "instance1");
+    timelineMetric.setStartTime(metricValues.firstKey());
+    timelineMetric.addMetricValues(metricValues);
+
+    timelineMetrics.addOrMergeTimelineMetric(timelineMetric);
+    sink.emitMetrics(timelineMetrics);
+
+    Assert.assertEquals(1, sink.getMetricsPostCache().size());
+    metricValues = new TreeMap<>();
+    metricValues.put(startTime + 184*seconds, 11.0);
+    Assert.assertEquals(metricValues, sink.getMetricsPostCache().getIfPresent("metric1").getMetricValues());
+
+    timelineMetrics = new TimelineMetrics();
+
+    metricValues = new TreeMap<>();
+    /*
+
+    +180s   +210s   +240s
+    |         |       |
+       (12)        (13)
+
+    */
+    // (11) from previous post should be merged with current data
+    // (11),(12),(13) should be posted; the cache should be empty
+    metricValues.put(startTime + 194*seconds, 12.0);
+    metricValues.put(startTime + 239*seconds, 13.0);
+
+    timelineMetric = new TimelineMetric("metric1", "host1", "app1", "instance1");
+    timelineMetric.setStartTime(metricValues.firstKey());
+    timelineMetric.addMetricValues(metricValues);
+
+    timelineMetrics.addOrMergeTimelineMetric(timelineMetric);
+    sink.emitMetrics(timelineMetrics);
+
+    Assert.assertEquals(0, sink.getMetricsPostCache().size());
+
+    metricValues = new TreeMap<>();
+    /*
+
+    +240s   +270s   +300s   +330s
+    |         |       |       |
+       (14)        (15)   (16)
+
+    */
+    // since postAllCachedMetrics is true in this emitMetrics call,
+    // (14),(15),(16) should be posted and the cache should be empty
+    metricValues.put(startTime + 245*seconds, 14.0);
+    metricValues.put(startTime + 294*seconds, 15.0);
+    metricValues.put(startTime + 315*seconds, 16.0);
+
+    timelineMetric = new TimelineMetric("metric1", "host1", "app1", "instance1");
+    timelineMetric.setStartTime(metricValues.firstKey());
+    timelineMetric.addMetricValues(metricValues);
+
+    timelineMetrics.addOrMergeTimelineMetric(timelineMetric);
+    sink.emitMetrics(timelineMetrics, true);
+
+    Assert.assertEquals(0, sink.getMetricsPostCache().size());
+  }
+
+  private class TestTimelineMetricsSink extends AbstractTimelineMetricsSink {
+    @Override
+    protected String getCollectorUri(String host) {
+      return "";
+    }
+
+    @Override
+    protected String getCollectorProtocol() {
+      return "http";
+    }
+
+    @Override
+    protected String getCollectorPort() {
+      return "2181";
+    }
+
+    @Override
+    protected int getTimeoutSeconds() {
+      return 10;
+    }
+
+    @Override
+    protected String getZookeeperQuorum() {
+      return "localhost:2181";
+    }
+
+    @Override
+    protected Collection<String> getConfiguredCollectorHosts() {
+      return Arrays.asList("localhost");
+    }
+
+    @Override
+    protected String getHostname() {
+      return "h1";
+    }
+
+    @Override
+    protected boolean isHostInMemoryAggregationEnabled() {
+      return true;
+    }
+
+    @Override
+    protected int getHostInMemoryAggregationPort() {
+      return 61888;
+    }
+
+    @Override
+    protected String getHostInMemoryAggregationProtocol() {
+      return "http";
+    }
+  }
+}
diff --git a/ambari-metrics/ambari-metrics-common/src/test/java/org/apache/hadoop/metrics2/sink/timeline/availability/AbstractTimelineMetricSinkTest.java b/ambari-metrics/ambari-metrics-common/src/test/java/org/apache/hadoop/metrics2/sink/timeline/availability/AbstractTimelineMetricSinkTest.java
deleted file mode 100644
index 396d08d..0000000
--- a/ambari-metrics/ambari-metrics-common/src/test/java/org/apache/hadoop/metrics2/sink/timeline/availability/AbstractTimelineMetricSinkTest.java
+++ /dev/null
@@ -1,113 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.hadoop.metrics2.sink.timeline.availability;
-
-import junit.framework.Assert;
-import org.apache.hadoop.metrics2.sink.timeline.AbstractTimelineMetricsSink;
-import org.apache.hadoop.metrics2.sink.timeline.TimelineMetrics;
-import org.junit.Test;
-
-import java.util.Arrays;
-import java.util.Collection;
-
-public class AbstractTimelineMetricSinkTest {
-
-  @Test
-  public void testParseHostsStringIntoCollection() {
-    AbstractTimelineMetricsSink sink = new TestTimelineMetricsSink();
-    Collection<String> hosts;
-
-    hosts = sink.parseHostsStringIntoCollection("");
-    Assert.assertTrue(hosts.isEmpty());
-
-    hosts = sink.parseHostsStringIntoCollection("test1.123.abc.def.local");
-    Assert.assertTrue(hosts.size() == 1);
-    Assert.assertTrue(hosts.contains("test1.123.abc.def.local"));
-
-    hosts = sink.parseHostsStringIntoCollection("test1.123.abc.def.local ");
-    Assert.assertTrue(hosts.size() == 1);
-    Assert.assertTrue(hosts.contains("test1.123.abc.def.local"));
-
-    hosts = sink.parseHostsStringIntoCollection("test1.123.abc.def.local,test1.456.abc.def.local");
-    Assert.assertTrue(hosts.size() == 2);
-
-    hosts = sink.parseHostsStringIntoCollection("test1.123.abc.def.local, test1.456.abc.def.local");
-    Assert.assertTrue(hosts.size() == 2);
-    Assert.assertTrue(hosts.contains("test1.123.abc.def.local"));
-    Assert.assertTrue(hosts.contains("test1.456.abc.def.local"));
-
-  }
-
-  private class TestTimelineMetricsSink extends AbstractTimelineMetricsSink {
-    @Override
-    protected String getCollectorUri(String host) {
-      return "";
-    }
-
-    @Override
-    protected String getCollectorProtocol() {
-      return "http";
-    }
-
-    @Override
-    protected String getCollectorPort() {
-      return "2181";
-    }
-
-    @Override
-    protected int getTimeoutSeconds() {
-      return 10;
-    }
-
-    @Override
-    protected String getZookeeperQuorum() {
-      return "localhost:2181";
-    }
-
-    @Override
-    protected Collection<String> getConfiguredCollectorHosts() {
-      return Arrays.asList("localhost");
-    }
-
-    @Override
-    protected String getHostname() {
-      return "h1";
-    }
-
-    @Override
-    protected boolean isHostInMemoryAggregationEnabled() {
-      return true;
-    }
-
-    @Override
-    protected int getHostInMemoryAggregationPort() {
-      return 61888;
-    }
-
-    @Override
-    protected String getHostInMemoryAggregationProtocol() {
-      return "http";
-    }
-
-    @Override
-    public boolean emitMetrics(TimelineMetrics metrics) {
-      super.init();
-      return super.emitMetrics(metrics);
-    }
-  }
-}
diff --git a/ambari-metrics/ambari-metrics-hadoop-sink/src/main/java/org/apache/hadoop/metrics2/sink/timeline/HadoopTimelineMetricsSink.java b/ambari-metrics/ambari-metrics-hadoop-sink/src/main/java/org/apache/hadoop/metrics2/sink/timeline/HadoopTimelineMetricsSink.java
index f0eefc2..1a4a5fd 100644
--- a/ambari-metrics/ambari-metrics-hadoop-sink/src/main/java/org/apache/hadoop/metrics2/sink/timeline/HadoopTimelineMetricsSink.java
+++ b/ambari-metrics/ambari-metrics-hadoop-sink/src/main/java/org/apache/hadoop/metrics2/sink/timeline/HadoopTimelineMetricsSink.java
@@ -509,7 +509,7 @@ public class HadoopTimelineMetricsSink extends AbstractTimelineMetricsSink imple
         LOG.debug("Closing HadoopTimelineMetricSink. Flushing metrics to collector...");
         TimelineMetrics metrics = metricsCache.getAllMetrics();
         if (metrics != null) {
-          emitMetrics(metrics);
+          emitMetrics(metrics, true);
         }
       }
     });
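
The extra boolean matters on shutdown: minute alignment intentionally holds back the newest datapoints, so a close path that posted only complete minutes would lose them. A self-contained sketch of the flush-on-close idea (DummySink is illustrative only, not the real sink API):

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;

    public class FlushOnCloseSketch {
      static class DummySink {
        final List<String> cached = new ArrayList<>();
        void emit(List<String> batch, boolean postAllCached) {
          List<String> toPost = new ArrayList<>(batch);
          if (postAllCached) {
            toPost.addAll(cached); // drain the held-back values too
            cached.clear();
          }
          System.out.println("posting: " + toPost);
        }
      }

      public static void main(String[] args) {
        DummySink sink = new DummySink();
        sink.cached.add("held-back datapoint from the last incomplete minute");
        // On shutdown, flush everything, including the cached values.
        Runtime.getRuntime().addShutdownHook(new Thread(
            () -> sink.emit(Arrays.asList("final batch"), true)));
      }
    }
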
diff --git a/ambari-metrics/ambari-metrics-hadoop-sink/src/test/java/org/apache/hadoop/metrics2/sink/timeline/HadoopTimelineMetricsSinkTest.java b/ambari-metrics/ambari-metrics-hadoop-sink/src/test/java/org/apache/hadoop/metrics2/sink/timeline/HadoopTimelineMetricsSinkTest.java
index a92b436..addbbda 100644
--- a/ambari-metrics/ambari-metrics-hadoop-sink/src/test/java/org/apache/hadoop/metrics2/sink/timeline/HadoopTimelineMetricsSinkTest.java
+++ b/ambari-metrics/ambari-metrics-hadoop-sink/src/test/java/org/apache/hadoop/metrics2/sink/timeline/HadoopTimelineMetricsSinkTest.java
@@ -180,7 +180,7 @@ public class HadoopTimelineMetricsSinkTest {
       createMockBuilder(HadoopTimelineMetricsSink.class)
         .withConstructor().addMockedMethod("appendPrefix")
         .addMockedMethod("findLiveCollectorHostsFromKnownCollector")
-        .addMockedMethod("emitMetrics").createNiceMock();
+        .addMockedMethod("emitMetrics", TimelineMetrics.class).createNiceMock();
 
     SubsetConfiguration conf = PowerMock.createNiceMock(SubsetConfiguration.class);
     expect(conf.getString("slave.host.name")).andReturn("localhost").anyTimes();
@@ -311,7 +311,7 @@ public class HadoopTimelineMetricsSinkTest {
       createMockBuilder(HadoopTimelineMetricsSink.class)
         .withConstructor().addMockedMethod("appendPrefix")
         .addMockedMethod("findLiveCollectorHostsFromKnownCollector")
-        .addMockedMethod("emitMetrics").createNiceMock();
+        .addMockedMethod("emitMetrics", TimelineMetrics.class).createNiceMock();
 
     SubsetConfiguration conf = PowerMock.createNiceMock(SubsetConfiguration.class);
     expect(conf.getString("slave.host.name")).andReturn("localhost").anyTimes();
diff --git a/ambari-metrics/ambari-metrics-host-monitoring/src/main/python/core/application_metric_map.py b/ambari-metrics/ambari-metrics-host-monitoring/src/main/python/core/application_metric_map.py
index 34a6787..bd957a0 100644
--- a/ambari-metrics/ambari-metrics-host-monitoring/src/main/python/core/application_metric_map.py
+++ b/ambari-metrics/ambari-metrics-host-monitoring/src/main/python/core/application_metric_map.py
@@ -41,6 +41,7 @@ class ApplicationMetricMap:
     self.ip_address = ip_address
     self.lock = RLock()
     self.app_metric_map = {}
+    self.cached_metric_map = {}
   pass
 
   def put_metric(self, application_id, metric_id_to_value_map, timestamp):
@@ -98,7 +99,7 @@ class ApplicationMetricMap:
             "appid" : "HOST",
             "instanceid" : result_instanceid,
             "starttime" : self.get_start_time(appId, metricId),
-            "metrics" : metricData
+            "metrics" : self.align_values_by_minute_mark(appId, metricId, metricData) if clear_once_flattened else metricData
           }
           timeline_metrics[ "metrics" ].append( timeline_metric )
         pass
@@ -114,6 +115,10 @@ class ApplicationMetricMap:
 
   def get_start_time(self, app_id, metric_id):
     with self.lock:
+      if self.cached_metric_map.has_key(app_id):
+        if self.cached_metric_map.get(app_id).has_key(metric_id):
+          metrics = self.cached_metric_map.get(app_id).get(metric_id)
+          return min(metrics.iterkeys())
       if self.app_metric_map.has_key(app_id):
         if self.app_metric_map.get(app_id).has_key(metric_id):
           metrics = self.app_metric_map.get(app_id).get(metric_id)
@@ -137,3 +142,48 @@ class ApplicationMetricMap:
     with self.lock:
       self.app_metric_map.clear()
   pass
+
+  # Align metrics on minute boundaries so that only complete minutes are sent.
+  # Data points from the incomplete last minute are cached and posted once that minute completes.
+  # Cached metrics are merged with the metrics currently being posted, e.g.:
+  # first iteration:  metrics from 00m15s to 01m15s are processed;
+  #                   metrics from 00m15s to 00m59s are posted
+  #                   and metrics from 01m00s to 01m15s are cached
+  # second iteration: metrics from 01m25s to 02m55s are processed;
+  #                   the cached metrics from the previous call are merged in;
+  #                   metrics from 01m00s to 02m55s are posted and the cache is left empty
+  def align_values_by_minute_mark(self, appId, metricId, metricData):
+    with self.lock:
+      # merge in the values cached on the previous call
+      if self.cached_metric_map.get(appId) and self.cached_metric_map.get(appId).get(metricId):
+        metricData.update(self.cached_metric_map[appId][metricId])
+        self.cached_metric_map[appId].pop(metricId)
+
+      # Check whether the last minute needs to be cached:
+      # if no more datapoints can arrive in the last minute, just post the metrics;
+      # otherwise cut off and cache the last incomplete minute.
+      max_time = max(metricData.iterkeys())
+      if max_time % 60000 <= 60000 - 10000:
+        max_minute = max_time / 60000
+        metric_data_copy = metricData.copy()
+        for time,value in metric_data_copy.iteritems():
+          if time / 60000 == max_minute:
+            cached_metric_map = self.cached_metric_map.get(appId)
+            if not cached_metric_map:
+              cached_metric_map = { metricId : { time : value } }
+              self.cached_metric_map[ appId ] = cached_metric_map
+            else:
+              cached_metric_id_map = cached_metric_map.get(metricId)
+              if not cached_metric_id_map:
+                cached_metric_id_map = { time : value }
+                cached_metric_map[ metricId ] = cached_metric_id_map
+              else:
+                cached_metric_map[ metricId ].update( { time : value } )
+              pass
+            pass
+            metricData.pop(time)
+          pass
+        pass
+
+      return metricData
\ No newline at end of file
diff --git a/ambari-metrics/ambari-metrics-host-monitoring/src/test/python/core/TestApplicationMetricMap.py b/ambari-metrics/ambari-metrics-host-monitoring/src/test/python/core/TestApplicationMetricMap.py
index a956a78..d9ea55d 100644
--- a/ambari-metrics/ambari-metrics-host-monitoring/src/test/python/core/TestApplicationMetricMap.py
+++ b/ambari-metrics/ambari-metrics-host-monitoring/src/test/python/core/TestApplicationMetricMap.py
@@ -50,7 +50,7 @@ class TestApplicationMetricMap(TestCase):
     self.assertEqual(p['metrics'][0]['metrics'][str(timestamp)], 'bv')
     
     self.assertEqual(application_metric_map.get_start_time(application_id, "b"), timestamp)
-    
+
     metrics = {}
     metrics.update({"b" : 'bv'})
     metrics.update({"a" : 'av'})
@@ -71,4 +71,38 @@ class TestApplicationMetricMap(TestCase):
     json_data = json.loads(application_metric_map.flatten('A1', True))
     self.assertEqual(len(json_data['metrics']), 1)
     self.assertTrue(json_data['metrics'][0]['metricname'] == 'a')
-    self.assertFalse(application_metric_map.app_metric_map)
\ No newline at end of file
+    self.assertFalse(application_metric_map.app_metric_map)
+
+  def test_flatten_and_align_values_by_minute_mark(self):
+    application_metric_map = ApplicationMetricMap("host", "10.10.10.10")
+    second = 1000
+    timestamp = int(round(1415390640.3806491 * second))
+    application_id = application_metric_map.format_app_id("A","1")
+    metrics = {}
+    metrics.update({"b" : 'bv'})
+
+    #   0s    60s   120s
+    #   (0) (1)   (2)    (3)
+    # (3) should be cut off and cached
+    application_metric_map.put_metric(application_id, metrics, timestamp)
+    application_metric_map.put_metric(application_id, metrics, timestamp + second*24)
+    application_metric_map.put_metric(application_id, metrics, timestamp + second*84)
+    application_metric_map.put_metric(application_id, metrics, timestamp + second*124)
+
+    json_data = json.loads(application_metric_map.flatten(application_id, True))
+    self.assertEqual(len(json_data['metrics'][0]['metrics']), 3)
+    self.assertEqual(len(application_metric_map.cached_metric_map.get(application_id).get("b")), 1)
+    self.assertEqual(application_metric_map.cached_metric_map.get(application_id).get("b"), {timestamp + second*124 : 'bv'})
+
+    #   120s    180s
+    #      (3)  (4)
+    # cached (3) should be added to the posted batch;
+    # (4) should be posted as well because no more data points can arrive in its minute
+    application_metric_map.put_metric(application_id, metrics, timestamp + second * 176)
+
+    json_data = json.loads(application_metric_map.flatten(application_id, True))
+    self.assertEqual(len(json_data['metrics'][0]['metrics']), 2)
+
+    # starttime should be set to (3)
+    self.assertEqual(json_data['metrics'][0]['starttime'], timestamp + second*124)
+    self.assertEqual(len(application_metric_map.cached_metric_map.get(application_id)), 0)
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricConfiguration.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricConfiguration.java
index 85dad1f..929fc8c 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricConfiguration.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricConfiguration.java
@@ -121,9 +121,6 @@ public class TimelineMetricConfiguration {
   public static final String CLUSTER_AGGREGATOR_TIMESLICE_INTERVAL =
     "timeline.metrics.cluster.aggregator.second.timeslice.interval";
 
-  public static final String CLUSTER_CACHE_AGGREGATOR_TIMESLICE_INTERVAL =
-    "timeline.metrics.cluster.cache.aggregator.second.timeslice.interval";
-
   public static final String AGGREGATOR_CHECKPOINT_DELAY =
     "timeline.metrics.service.checkpointDelay";
 
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricsIgniteCache.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricsIgniteCache.java
index aeaa4ba..6441c9c 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricsIgniteCache.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricsIgniteCache.java
@@ -50,7 +50,6 @@ import java.net.URISyntaxException;
 import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.Collection;
-import java.util.Date;
 import java.util.HashMap;
 import java.util.List;
 import java.util.Map;
@@ -60,12 +59,11 @@ import java.util.concurrent.locks.Lock;
 
 import static java.util.concurrent.TimeUnit.SECONDS;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.CLUSTER_AGGREGATOR_SECOND_SLEEP_INTERVAL;
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.CLUSTER_CACHE_AGGREGATOR_TIMESLICE_INTERVAL;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.CLUSTER_AGGREGATOR_TIMESLICE_INTERVAL;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.HOST_APP_ID;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.TIMELINE_METRICS_CLUSTER_AGGREGATOR_INTERPOLATION_ENABLED;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.TIMELINE_METRICS_COLLECTOR_IGNITE_BACKUPS;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.TIMELINE_METRICS_COLLECTOR_IGNITE_NODES;
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.TIMELINE_METRICS_SINK_COLLECTION_PERIOD;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.TIMELINE_METRIC_AGGREGATION_SQL_FILTERS;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.TIMELINE_SERVICE_HTTP_POLICY;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.AggregatorUtils.getRoundedCheckPointTimeMillis;
@@ -77,7 +75,6 @@ public class TimelineMetricsIgniteCache implements TimelineMetricDistributedCach
       LogFactory.getLog(TimelineMetricsIgniteCache.class);
   private IgniteCache<TimelineClusterMetric, MetricClusterAggregate> igniteCache;
   private long cacheSliceIntervalMillis;
-  private int collectionPeriodMillis;
   private boolean interpolationEnabled;
   private List<String> skipAggrPatternStrings = new ArrayList<>();
   private List<String> appIdsToAggregate;
@@ -110,8 +107,7 @@ public class TimelineMetricsIgniteCache implements TimelineMetricDistributedCach
     //aggregation parameters
     appIdsToAggregate = timelineMetricConfiguration.getAppIdsForHostAggregation();
     interpolationEnabled = Boolean.parseBoolean(metricConf.get(TIMELINE_METRICS_CLUSTER_AGGREGATOR_INTERPOLATION_ENABLED, "true"));
-    collectionPeriodMillis = (int) SECONDS.toMillis(metricConf.getInt(TIMELINE_METRICS_SINK_COLLECTION_PERIOD, 10));
-    cacheSliceIntervalMillis = SECONDS.toMillis(metricConf.getInt(CLUSTER_CACHE_AGGREGATOR_TIMESLICE_INTERVAL, 30));
+    cacheSliceIntervalMillis = SECONDS.toMillis(metricConf.getInt(CLUSTER_AGGREGATOR_TIMESLICE_INTERVAL, 30));
     Long aggregationInterval = metricConf.getLong(CLUSTER_AGGREGATOR_SECOND_SLEEP_INTERVAL, 120L);
 
     String filteredMetricPatterns = metricConf.get(TIMELINE_METRIC_AGGREGATION_SQL_FILTERS);
@@ -215,12 +211,6 @@ public class TimelineMetricsIgniteCache implements TimelineMetricDistributedCach
 
       if (slicedClusterMetrics != null) {
         for (Map.Entry<TimelineClusterMetric, Double> metricDoubleEntry : slicedClusterMetrics.entrySet()) {
-          if (metricDoubleEntry.getKey().getTimestamp() == timeSlices.get(timeSlices.size()-1)[1] && metricDoubleEntry.getKey().getTimestamp() - metric.getMetricValues().lastKey() > collectionPeriodMillis) {
-            if(LOG.isDebugEnabled()) {
-              LOG.debug("Last skipped timestamp @ " + new Date(metric.getMetricValues().lastKey()) + " slice timestamp @ " + new Date(metricDoubleEntry.getKey().getTimestamp()));
-            }
-            continue;
-          }
           MetricClusterAggregate newMetricClusterAggregate  = new MetricClusterAggregate(
               metricDoubleEntry.getValue(), 1, null, metricDoubleEntry.getValue(), metricDoubleEntry.getValue());
           //put app metric into cache
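
With the collection-period skip removed above, every sliced value is wrapped into a single-host aggregate before going into the cache, including points in the last time slice that the deleted check used to drop. A minimal sketch of that wrapping under a simplified aggregate type (the record below is a hypothetical stand-in for MetricClusterAggregate; parameter meaning as in the call above: sum, host count, deviation, max, min):

    import java.util.Map;
    import java.util.TreeMap;

    public class SliceToAggregateDemo {
      // Hypothetical stand-in for MetricClusterAggregate(sum, numHosts, deviation, max, min).
      record Agg(double sum, int numHosts, Double deviation, double max, double min) {}

      public static void main(String[] args) {
        Map<Long, Double> slicedClusterMetrics = new TreeMap<>();
        slicedClusterMetrics.put(30_000L, 1.0);
        slicedClusterMetrics.put(60_000L, 8.0);
        // Every sliced point now becomes a one-host aggregate (sum = min = max = value).
        slicedClusterMetrics.forEach((ts, v) ->
            System.out.println(ts + " -> " + new Agg(v, 1, null, v, v)));
      }
    }
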
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/AggregatorUtils.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/AggregatorUtils.java
index b12cb86..b8338fb 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/AggregatorUtils.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/AggregatorUtils.java
@@ -223,7 +223,7 @@ public class AggregatorUtils {
    */
   public static Long getSliceTimeForMetric(List<Long[]> timeSlices, Long timestamp) {
     for (Long[] timeSlice : timeSlices) {
-      if (timestamp > timeSlice[0] && timestamp <= timeSlice[1]) {
+      if (timestamp >= timeSlice[0] && timestamp < timeSlice[1]) {
         return timeSlice[1];
       }
     }
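
The relaxed comparison above changes which slice owns a point that lands exactly on a boundary: with an inclusive start and exclusive end, a boundary timestamp now belongs to the slice it opens rather than the one it closes. A self-contained sketch of the same check (slice layout hypothetical):

    import java.util.Arrays;
    import java.util.List;

    public class SliceBoundaryDemo {
      // Mirrors getSliceTimeForMetric: return the end time of the slice owning ts.
      static Long sliceTimeFor(List<Long[]> timeSlices, long ts) {
        for (Long[] slice : timeSlices) {
          if (ts >= slice[0] && ts < slice[1]) {
            return slice[1];
          }
        }
        return -1L;
      }

      public static void main(String[] args) {
        List<Long[]> slices = Arrays.asList(
            new Long[]{0L, 30_000L}, new Long[]{30_000L, 60_000L});
        // A point exactly at 30s now falls into [30s, 60s) and is stamped 60s;
        // the old "> start && <= end" check attributed it to the first slice (30s).
        System.out.println(sliceTimeFor(slices, 30_000L)); // prints 60000
      }
    }
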
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricAggregatorFactory.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricAggregatorFactory.java
index c27d712..9e493ea 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricAggregatorFactory.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricAggregatorFactory.java
@@ -41,7 +41,6 @@ import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.ti
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.CLUSTER_AGGREGATOR_SECOND_DISABLED;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.CLUSTER_AGGREGATOR_SECOND_SLEEP_INTERVAL;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.CLUSTER_AGGREGATOR_TIMESLICE_INTERVAL;
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.CLUSTER_CACHE_AGGREGATOR_TIMESLICE_INTERVAL;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.DEFAULT_CHECKPOINT_LOCATION;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.HOST_AGGREGATOR_DAILY_CHECKPOINT_CUTOFF_MULTIPLIER;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.HOST_AGGREGATOR_DAILY_DISABLED;
@@ -273,9 +272,6 @@ public class TimelineMetricAggregatorFactory {
     long timeSliceIntervalMillis = SECONDS.toMillis(metricsConf.getInt
       (CLUSTER_AGGREGATOR_TIMESLICE_INTERVAL, 30));
 
-    long cacheTimeSliceIntervalMillis = SECONDS.toMillis(metricsConf.getInt
-      (CLUSTER_CACHE_AGGREGATOR_TIMESLICE_INTERVAL, 30));
-
     int checkpointCutOffMultiplier =
       metricsConf.getInt(CLUSTER_AGGREGATOR_SECOND_CHECKPOINT_CUTOFF_MULTIPLIER, 2);
 
@@ -297,8 +293,7 @@ public class TimelineMetricAggregatorFactory {
         120000l,
         timeSliceIntervalMillis,
         haController,
-        distributedCache,
-        cacheTimeSliceIntervalMillis
+        distributedCache
       );
     }
 
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricClusterAggregatorSecondWithCacheSource.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricClusterAggregatorSecondWithCacheSource.java
index 0c030b6..888044a 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricClusterAggregatorSecondWithCacheSource.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricClusterAggregatorSecondWithCacheSource.java
@@ -31,19 +31,16 @@ import java.util.HashMap;
 import java.util.List;
 import java.util.Map;
 
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.AggregatorUtils.getSliceTimeForMetric;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.AggregatorUtils.getTimeSlices;
 
 public class TimelineMetricClusterAggregatorSecondWithCacheSource extends TimelineMetricClusterAggregatorSecond {
   private TimelineMetricDistributedCache distributedCache;
-  private Long cacheTimeSliceIntervalMillis;
   public TimelineMetricClusterAggregatorSecondWithCacheSource(AggregationTaskRunner.AGGREGATOR_NAME metricAggregateSecond, TimelineMetricMetadataManager metricMetadataManager, PhoenixHBaseAccessor hBaseAccessor, Configuration metricsConf, String checkpointLocation, long sleepIntervalMillis, int checkpointCutOffMultiplier, String aggregatorDisabledParam, String inputTableName, String outputTableName,
                                                               Long nativeTimeRangeDelay,
                                                               Long timeSliceInterval,
-                                                              MetricCollectorHAController haController, TimelineMetricDistributedCache distributedCache, Long cacheTimeSliceIntervalMillis) {
+                                                              MetricCollectorHAController haController, TimelineMetricDistributedCache distributedCache) {
     super(metricAggregateSecond, metricMetadataManager, hBaseAccessor, metricsConf, checkpointLocation, sleepIntervalMillis, checkpointCutOffMultiplier, aggregatorDisabledParam, inputTableName, outputTableName, nativeTimeRangeDelay, timeSliceInterval, haController);
     this.distributedCache = distributedCache;
-    this.cacheTimeSliceIntervalMillis = cacheTimeSliceIntervalMillis;
   }
 
   @Override
@@ -81,36 +78,11 @@ public class TimelineMetricClusterAggregatorSecondWithCacheSource extends Timeli
 
   //Slices in cache could be different from aggregate slices, so need to recalculate. Counts hosted apps
   Map<TimelineClusterMetric, MetricClusterAggregate> aggregateMetricsFromMetricClusterAggregates(Map<TimelineClusterMetric, MetricClusterAggregate> metricsFromCache, List<Long[]> timeSlices) {
-    Map<TimelineClusterMetric, MetricClusterAggregate> result = new HashMap<>();
-
-    //normalize if slices in cache are different from the aggregation slices
-    //TODO add basic interpolation, current implementation assumes that cacheTimeSliceIntervalMillis <= timeSliceIntervalMillis
-    if (cacheTimeSliceIntervalMillis.equals(timeSliceIntervalMillis)) {
-      result = metricsFromCache;
-    } else {
-      for (Map.Entry<TimelineClusterMetric, MetricClusterAggregate> clusterMetricAggregateEntry : metricsFromCache.entrySet()) {
-        Long timestamp = getSliceTimeForMetric(timeSlices, clusterMetricAggregateEntry.getKey().getTimestamp());
-        if (timestamp <= 0) {
-          LOG.warn("Entry doesn't match any slice. Slices : " + timeSlices + " metric timestamp : " + clusterMetricAggregateEntry.getKey().getTimestamp());
-          continue;
-        }
-        TimelineClusterMetric timelineClusterMetric = new TimelineClusterMetric(clusterMetricAggregateEntry.getKey().getMetricName(), clusterMetricAggregateEntry.getKey().getAppId(), clusterMetricAggregateEntry.getKey().getInstanceId(), timestamp);
-        if (result.containsKey(timelineClusterMetric)) {
-          MetricClusterAggregate metricClusterAggregate = result.get(timelineClusterMetric);
-          metricClusterAggregate.updateMax(clusterMetricAggregateEntry.getValue().getMax());
-          metricClusterAggregate.updateMin(clusterMetricAggregateEntry.getValue().getMin());
-          metricClusterAggregate.setSum((metricClusterAggregate.getSum() + clusterMetricAggregateEntry.getValue().getSum()) / 2D);
-          metricClusterAggregate.setNumberOfHosts(Math.max(metricClusterAggregate.getNumberOfHosts(), clusterMetricAggregateEntry.getValue().getNumberOfHosts()));
-        } else {
-          result.put(timelineClusterMetric, clusterMetricAggregateEntry.getValue());
-        }
-      }
-    }
-
+    //TODO add basic interpolation
     //TODO investigate if needed, maybe add config to disable/enable
     //count hosted apps
     Map<String, MutableInt> hostedAppCounter = new HashMap<>();
-    for (Map.Entry<TimelineClusterMetric, MetricClusterAggregate> clusterMetricAggregateEntry : result.entrySet()) {
+    for (Map.Entry<TimelineClusterMetric, MetricClusterAggregate> clusterMetricAggregateEntry : metricsFromCache.entrySet()) {
       int numHosts = clusterMetricAggregateEntry.getValue().getNumberOfHosts();
       String appId = clusterMetricAggregateEntry.getKey().getAppId();
       if (!hostedAppCounter.containsKey(appId)) {
@@ -124,9 +96,9 @@ public class TimelineMetricClusterAggregatorSecondWithCacheSource extends Timeli
     }
 
     // Add liveHosts per AppId metrics.
-    processLiveAppCountMetrics(result, hostedAppCounter, timeSlices.get(timeSlices.size() - 1)[1]);
+    processLiveAppCountMetrics(metricsFromCache, hostedAppCounter, timeSlices.get(timeSlices.size() - 1)[1]);
 
-    return result;
+    return metricsFromCache;
   }
 
 }
\ No newline at end of file
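
After dropping the normalization, aggregateMetricsFromMetricClusterAggregates only tallies the largest host count seen per appId and publishes it as a live-host metric. The core of that tally reduces to a per-key maximum, sketched here with simplified types (names hypothetical):

    import java.util.HashMap;
    import java.util.Map;

    public class LiveHostTallyDemo {
      public static void main(String[] args) {
        // appId -> max numberOfHosts observed across cache entries
        Map<String, Integer> hostedAppCounter = new HashMap<>();
        int[] hostCountsForApp1 = {2, 3, 1};   // numberOfHosts from successive entries
        for (int numHosts : hostCountsForApp1) {
          hostedAppCounter.merge("app1", numHosts, Math::max);
        }
        System.out.println(hostedAppCounter);  // {app1=3}
      }
    }
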
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricsIgniteCacheTest.java b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricsIgniteCacheTest.java
index d3c6061..2cb66ba 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricsIgniteCacheTest.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricsIgniteCacheTest.java
@@ -167,62 +167,6 @@ public class TimelineMetricsIgniteCacheTest {
     metricValues.clear();
     timelineMetrics.clear();
 
-    /*
-
-    0      +30s    +60s    +90s
-    |       |       |       |
-     (1)      (2)                h1
-                (3)       (4)    h2
-                 (5)      (6)    h1
-
-    */
-    // Case 3 : merging host data points, ignore (2) for h1 as it will conflict with (5), two hosts.
-    metricValues = new TreeMap<>();
-    metricValues.put(startTime + 15*seconds, 1.0);
-    metricValues.put(startTime + 45*seconds, 2.0);
-    timelineMetric = new TimelineMetric("metric1", "host1", "app1", "instance1");
-    timelineMetric.setMetricValues(metricValues);
-    timelineMetrics.add(timelineMetric);
-
-    metricValues = new TreeMap<>();
-    metricValues.put(startTime + 45*seconds, 3.0);
-    metricValues.put(startTime + 85*seconds, 4.0);
-    timelineMetric = new TimelineMetric("metric1", "host2", "app1", "instance1");
-    timelineMetric.setMetricValues(metricValues);
-    timelineMetrics.add(timelineMetric);
-
-    metricValues = new TreeMap<>();
-    metricValues.put(startTime + 55*seconds, 5.0);
-    metricValues.put(startTime + 85*seconds, 6.0);
-    timelineMetric = new TimelineMetric("metric1", "host1", "app1", "instance1");
-    timelineMetric.setMetricValues(metricValues);
-    timelineMetrics.add(timelineMetric);
-
-    timelineMetricsIgniteCache.putMetrics(timelineMetrics, metricMetadataManagerMock);
-
-    aggregateMap = timelineMetricsIgniteCache.evictMetricAggregates(startTime, startTime + 120*seconds);
-
-    Assert.assertEquals(aggregateMap.size(), 3);
-    timelineClusterMetric = new TimelineClusterMetric(timelineMetric.getMetricName(),
-      timelineMetric.getAppId(), timelineMetric.getInstanceId(), startTime + 30*seconds);
-
-    Assert.assertTrue(aggregateMap.containsKey(timelineClusterMetric));
-    Assert.assertEquals(1.0, aggregateMap.get(timelineClusterMetric).getSum());
-    Assert.assertEquals(1, aggregateMap.get(timelineClusterMetric).getNumberOfHosts());
-
-    timelineClusterMetric.setTimestamp(startTime + 2*30*seconds);
-    Assert.assertTrue(aggregateMap.containsKey(timelineClusterMetric));
-    Assert.assertEquals(8.0, aggregateMap.get(timelineClusterMetric).getSum());
-    Assert.assertEquals(2, aggregateMap.get(timelineClusterMetric).getNumberOfHosts());
-
-    timelineClusterMetric.setTimestamp(startTime + 3*30*seconds);
-    Assert.assertTrue(aggregateMap.containsKey(timelineClusterMetric));
-    Assert.assertEquals(10.0, aggregateMap.get(timelineClusterMetric).getSum());
-    Assert.assertEquals(2, aggregateMap.get(timelineClusterMetric).getNumberOfHosts());
-
-    metricValues.clear();
-    timelineMetrics.clear();
-
     Assert.assertEquals(0d, timelineMetricsIgniteCache.getPointInTimeCacheMetrics().get("Cluster_KeySize"));
   }
 
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricClusterAggregatorSecondWithCacheSourceTest.java b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricClusterAggregatorSecondWithCacheSourceTest.java
index 7cddb00..e8a9dc2 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricClusterAggregatorSecondWithCacheSourceTest.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricClusterAggregatorSecondWithCacheSourceTest.java
@@ -79,7 +79,7 @@ public class TimelineMetricClusterAggregatorSecondWithCacheSourceTest {
     TimelineMetricClusterAggregatorSecondWithCacheSource secondAggregator = new TimelineMetricClusterAggregatorSecondWithCacheSource(
         METRIC_AGGREGATE_SECOND, metricMetadataManagerMock, null, configuration, null,
         aggregatorInterval, 2, "false", "", "", aggregatorInterval,
-        sliceInterval, null, timelineMetricsIgniteCache, 30L);
+        sliceInterval, null, timelineMetricsIgniteCache);
 
     long now = System.currentTimeMillis();
     long startTime = now - 120000;
@@ -112,67 +112,4 @@ public class TimelineMetricClusterAggregatorSecondWithCacheSourceTest {
     Assert.assertEquals(2d, a1.getSum());
     Assert.assertEquals(5d, a2.getSum());
   }
-
-  @Test
-  public void testSlicesRecalculation() throws Exception {
-    long aggregatorInterval = 120000;
-    long sliceInterval = 30000;
-
-    Configuration configuration = new Configuration();
-
-    TimelineMetricMetadataManager metricMetadataManagerMock = createNiceMock(TimelineMetricMetadataManager.class);
-    expect(metricMetadataManagerMock.getMetadataCacheValue((TimelineMetricMetadataKey) anyObject())).andReturn(null).anyTimes();
-    replay(metricMetadataManagerMock);
-
-    TimelineMetricClusterAggregatorSecondWithCacheSource secondAggregator = new TimelineMetricClusterAggregatorSecondWithCacheSource(
-        METRIC_AGGREGATE_SECOND, metricMetadataManagerMock, null, configuration, null,
-        aggregatorInterval, 2, "false", "", "", aggregatorInterval,
-        sliceInterval, null, timelineMetricsIgniteCache, 30L);
-
-    long seconds = 1000;
-    long now = getRoundedCheckPointTimeMillis(System.currentTimeMillis(), 120*seconds);
-    long startTime = now - 120*seconds;
-
-    Map<TimelineClusterMetric, MetricClusterAggregate> metricsFromCache = new HashMap<>();
-    metricsFromCache.put(new TimelineClusterMetric("m1", "a1", "i1",startTime + 5 * seconds),
-        new MetricClusterAggregate(1.0, 2, 1.0, 1.0, 1.0));
-    metricsFromCache.put(new TimelineClusterMetric("m1", "a1", "i1",startTime + 25 * seconds),
-        new MetricClusterAggregate(2.0, 2, 1.0, 2.0, 2.0));
-    metricsFromCache.put(new TimelineClusterMetric("m1", "a1", "i1",startTime + 45 * seconds),
-        new MetricClusterAggregate(3.0, 2, 1.0, 1.0, 1.0));
-    metricsFromCache.put(new TimelineClusterMetric("m1", "a1", "i1",startTime + 65 * seconds),
-        new MetricClusterAggregate(4.0, 2, 1.0, 4.0, 4.0));
-    metricsFromCache.put(new TimelineClusterMetric("m1", "a1", "i1",startTime + 85 * seconds),
-        new MetricClusterAggregate(5.0, 2, 1.0, 5.0, 5.0));
-
-    List<Long[]> timeslices = getTimeSlices(startTime, startTime + 120*seconds, 30*seconds);
-
-    Map<TimelineClusterMetric, MetricClusterAggregate> aggregates = secondAggregator.aggregateMetricsFromMetricClusterAggregates(metricsFromCache, timeslices);
-
-    Assert.assertNotNull(aggregates);
-    Assert.assertEquals(4, aggregates.size());
-
-    TimelineClusterMetric timelineClusterMetric = new TimelineClusterMetric("m1", "a1", "i1", startTime + 30*seconds);
-    MetricClusterAggregate metricClusterAggregate = aggregates.get(timelineClusterMetric);
-    Assert.assertNotNull(metricClusterAggregate);
-    Assert.assertEquals(1.5, metricClusterAggregate.getSum());
-    Assert.assertEquals(1d, metricClusterAggregate.getMin());
-    Assert.assertEquals(2d, metricClusterAggregate.getMax());
-    Assert.assertEquals(2, metricClusterAggregate.getNumberOfHosts());
-
-    timelineClusterMetric.setTimestamp(startTime + 60*seconds);
-    metricClusterAggregate = aggregates.get(timelineClusterMetric);
-    Assert.assertNotNull(metricClusterAggregate);
-    Assert.assertEquals(3d, metricClusterAggregate.getSum());
-
-    timelineClusterMetric.setTimestamp(startTime + 90*seconds);
-    metricClusterAggregate = aggregates.get(timelineClusterMetric);
-    Assert.assertNotNull(metricClusterAggregate);
-    Assert.assertEquals(4.5d, metricClusterAggregate.getSum());
-
-    timelineClusterMetric = new TimelineClusterMetric("live_hosts", "a1", null, startTime + 120*seconds);
-    metricClusterAggregate = aggregates.get(timelineClusterMetric);
-    Assert.assertNotNull(metricClusterAggregate);
-    Assert.assertEquals(2d, metricClusterAggregate.getSum());
-  }
 }

[ambari] 23/39: AMBARI-22192. Setup an application server for hosting the AD System Manager. (avijayan)

Posted by av...@apache.org.

avijayan pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/ambari.git

commit 6ffb35b1676edf1b84d707c582c78b92aba3fcc1
Author: Aravindan Vijayan <av...@hortonworks.com>
AuthorDate: Wed Oct 18 10:16:28 2017 -0700

    AMBARI-22192. Setup an application server for hosting the AD System Manager. (avijayan)
---
 .../pom.xml                                        | 52 +++++++++++++++++++++-
 .../prototype/core/AmbariServerInterface.java      | 34 +++++++-------
 .../adservice/app/AnomalyDetectionApp.scala        |  2 +
 .../timeline/AbstractMiniHBaseClusterTest.java     | 13 ++++++
 .../metrics/timeline/PhoenixHBaseAccessorTest.java | 13 +++++-
 5 files changed, 94 insertions(+), 20 deletions(-)

diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/pom.xml b/ambari-metrics/ambari-metrics-anomaly-detection-service/pom.xml
index c9bb7b7..554d026 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/pom.xml
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/pom.xml
@@ -35,7 +35,7 @@
     <scala.version>2.12.3</scala.version>
     <scala.binary.version>2.11</scala.binary.version>
     <hadoop.version>2.7.3.2.6.0.3-8</hadoop.version>
-    <jackson.version>2.8.9</jackson.version>
+    <jackson.version>2.9.1</jackson.version>
     <dropwizard.version>1.2.0</dropwizard.version>
     <spark.version>2.2.0</spark.version>
   </properties>
@@ -223,6 +223,11 @@
       <version>0.10.1.0</version>
     </dependency>
     <dependency>
+      <groupId>com.fasterxml.jackson.core</groupId>
+      <artifactId>jackson-databind</artifactId>
+      <version>${jackson.version}</version>
+    </dependency>
+    <dependency>
       <groupId>org.apache.kafka</groupId>
       <artifactId>connect-json</artifactId>
       <version>0.10.1.0</version>
@@ -236,6 +241,28 @@
       <groupId>org.apache.phoenix</groupId>
       <artifactId>phoenix-spark</artifactId>
       <version>4.10.0-HBase-1.1</version>
+      <exclusions>
+        <exclusion>
+          <artifactId>jersey-server</artifactId>
+          <groupId>com.sun.jersey</groupId>
+        </exclusion>
+        <exclusion>
+          <artifactId>jersey-core</artifactId>
+          <groupId>com.sun.jersey</groupId>
+        </exclusion>
+        <exclusion>
+          <artifactId>jersey-client</artifactId>
+          <groupId>com.sun.jersey</groupId>
+        </exclusion>
+        <exclusion>
+          <artifactId>jersey-guice</artifactId>
+          <groupId>com.sun.jersey.contribs</groupId>
+        </exclusion>
+        <exclusion>
+          <artifactId>jersey-json</artifactId>
+          <groupId>com.sun.jersey</groupId>
+        </exclusion>
+      </exclusions>
     </dependency>
     <dependency>
       <groupId>org.apache.ambari</groupId>
@@ -257,6 +284,12 @@
       <artifactId>spark-core_${scala.binary.version}</artifactId>
       <version>${spark.version}</version>
       <scope>provided</scope>
+      <exclusions>
+        <exclusion>
+          <groupId>com.fasterxml.jackson.module</groupId>
+          <artifactId>jackson-module-scala_2.11</artifactId>
+        </exclusion>
+      </exclusions>
     </dependency>
     <dependency>
       <groupId>org.apache.spark</groupId>
@@ -286,6 +319,18 @@
           <groupId>org.mortbay.jetty</groupId>
           <artifactId>jsp-2.1-jetty</artifactId>
         </exclusion>
+        <exclusion>
+          <artifactId>jersey-server</artifactId>
+          <groupId>com.sun.jersey</groupId>
+        </exclusion>
+        <exclusion>
+          <artifactId>jersey-core</artifactId>
+          <groupId>com.sun.jersey</groupId>
+        </exclusion>
+        <exclusion>
+          <artifactId>jersey-json</artifactId>
+          <groupId>com.sun.jersey</groupId>
+        </exclusion>
       </exclusions>
     </dependency>
     <dependency>
@@ -385,5 +430,10 @@
       <version>21.0</version>
       <scope>test</scope>
     </dependency>
+    <dependency>
+      <groupId>io.dropwizard.metrics</groupId>
+      <artifactId>metrics-core</artifactId>
+      <version>3.2.5</version>
+    </dependency>
   </dependencies>
 </project>
\ No newline at end of file
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/core/AmbariServerInterface.java b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/core/AmbariServerInterface.java
index 920d758..ac50c54 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/core/AmbariServerInterface.java
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/core/AmbariServerInterface.java
@@ -20,8 +20,6 @@ package org.apache.ambari.metrics.adservice.prototype.core;
 
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
-import org.codehaus.jettison.json.JSONArray;
-import org.codehaus.jettison.json.JSONObject;
 
 import java.io.BufferedReader;
 import java.io.IOException;
@@ -72,22 +70,22 @@ public class AmbariServerInterface implements Serializable{
         responseJsonSb.append(line);
       }
 
-      JSONObject jsonObject = new JSONObject(responseJsonSb.toString());
-      JSONArray array = jsonObject.getJSONArray("items");
-      for(int i = 0 ; i < array.length() ; i++){
-        JSONObject alertDefn = array.getJSONObject(i).getJSONObject("AlertDefinition");
-        if (alertDefn.get("name") != null && alertDefn.get("name").equals("point_in_time_metrics_anomalies")) {
-          JSONObject sourceNode = alertDefn.getJSONObject("source");
-          JSONArray params = sourceNode.getJSONArray("parameters");
-          for(int j = 0 ; j < params.length() ; j++){
-            JSONObject param = params.getJSONObject(j);
-            if (param.get("name").equals("sensitivity")) {
-              return param.getInt("value");
-            }
-          }
-          break;
-        }
-      }
+//      JSONObject jsonObject = new JSONObject(responseJsonSb.toString());
+//      JSONArray array = jsonObject.getJSONArray("items");
+//      for(int i = 0 ; i < array.length() ; i++){
+//        JSONObject alertDefn = array.getJSONObject(i).getJSONObject("AlertDefinition");
+//        if (alertDefn.get("name") != null && alertDefn.get("name").equals("point_in_time_metrics_anomalies")) {
+//          JSONObject sourceNode = alertDefn.getJSONObject("source");
+//          JSONArray params = sourceNode.getJSONArray("parameters");
+//          for(int j = 0 ; j < params.length() ; j++){
+//            JSONObject param = params.getJSONObject(j);
+//            if (param.get("name").equals("sensitivity")) {
+//              return param.getInt("value");
+//            }
+//          }
+//          break;
+//        }
+//      }
 
     } catch (Exception e) {
       LOG.error(e);
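
Since the same commit adds jackson-databind to the pom, the commented-out jettison parsing above could later be restored with Jackson's tree model. A sketch only (response shape taken from the commented code; class and method names hypothetical):

    import com.fasterxml.jackson.databind.JsonNode;
    import com.fasterxml.jackson.databind.ObjectMapper;

    public class SensitivityLookupSketch {
      // Returns the "sensitivity" parameter of the point_in_time_metrics_anomalies
      // alert definition, or -1 if absent.
      static int getPointInTimeSensitivity(String responseJson) throws Exception {
        JsonNode root = new ObjectMapper().readTree(responseJson);
        for (JsonNode item : root.path("items")) {
          JsonNode alertDefn = item.path("AlertDefinition");
          if ("point_in_time_metrics_anomalies".equals(alertDefn.path("name").asText())) {
            for (JsonNode param : alertDefn.path("source").path("parameters")) {
              if ("sensitivity".equals(param.path("name").asText())) {
                return param.path("value").asInt();
              }
            }
            break;
          }
        }
        return -1;
      }
    }
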
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionApp.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionApp.scala
index 2cf0fc5..b7f217e 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionApp.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionApp.scala
@@ -61,6 +61,8 @@ class AnomalyDetectionApp extends Application[AnomalyDetectionAppConfig] {
     provider.setMapper(objectMapper)
     provider
   }
+
+  override def bootstrapLogging(): Unit = {}
 }
 
 
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/AbstractMiniHBaseClusterTest.java b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/AbstractMiniHBaseClusterTest.java
index 40691d6..9c55305 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/AbstractMiniHBaseClusterTest.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/AbstractMiniHBaseClusterTest.java
@@ -41,6 +41,7 @@ import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.IntegrationTestingUtility;
 import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.util.RetryCounterFactory;
 import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
 import org.apache.hadoop.metrics2.sink.timeline.TimelineMetrics;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.AggregatorUtils;
@@ -222,6 +223,18 @@ public abstract class AbstractMiniHBaseClusterTest extends BaseTest {
             }
             return connection;
           }
+
+          @Override
+          public Connection getConnectionRetryingOnException(RetryCounterFactory retryCounterFactory) throws SQLException, InterruptedException {
+            Connection connection = null;
+            try {
+              connection = DriverManager.getConnection(getUrl());
+            } catch (SQLException e) {
+              LOG.warn("Unable to connect to HBase store using Phoenix.", e);
+            }
+            return connection;
+          }
+
         });
   }
 
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/PhoenixHBaseAccessorTest.java b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/PhoenixHBaseAccessorTest.java
index 97d2512..5d81faa 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/PhoenixHBaseAccessorTest.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/PhoenixHBaseAccessorTest.java
@@ -22,6 +22,7 @@ import com.google.common.collect.Multimap;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.DoNotRetryIOException;
 import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.util.RetryCounterFactory;
 import org.apache.hadoop.metrics2.sink.timeline.MetricClusterAggregate;
 import org.apache.hadoop.metrics2.sink.timeline.MetricHostAggregate;
 import org.apache.hadoop.metrics2.sink.timeline.Precision;
@@ -94,7 +95,12 @@ public class PhoenixHBaseAccessorTest {
       public Connection getConnection() throws SQLException {
         return null;
       }
-    };
+
+      @Override
+      public Connection getConnectionRetryingOnException(RetryCounterFactory retryCounterFactory) throws SQLException, InterruptedException {
+        return null;
+      }
+      };
 
     accessor = new PhoenixHBaseAccessor(connectionProvider);
   }
@@ -250,6 +256,11 @@ public class PhoenixHBaseAccessorTest {
       public Connection getConnection() throws SQLException {
         return connection;
       }
+
+      @Override
+      public Connection getConnectionRetryingOnException(RetryCounterFactory retryCounterFactory) throws SQLException, InterruptedException {
+        return connection;
+      }
     };
 
     accessor = new PhoenixHBaseAccessor(connectionProvider);

[ambari] 38/39: AMBARI-23250 : Fix deployment issues in AMS perf branch.

Posted by av...@apache.org.

avijayan pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/ambari.git

commit c5c9d03328bd1dbf7ad86434d209709592b0ac27
Author: Aravindan Vijayan <av...@hortonworks.com>
AuthorDate: Fri Mar 16 11:08:56 2018 -0700

    AMBARI-23250 : Fix deployment issues in AMS perf branch.
---
 .../ambari-metrics-timelineservice/pom.xml         | 64 ++++++++++++++++++----
 .../timeline/uuid/HashBasedUuidGenStrategy.java    | 60 ++++++++++----------
 .../src/main/resources/metrics_def/HOST.dat        |  6 ++
 .../webapp/TestTimelineWebServices.java            |  6 --
 4 files changed, 90 insertions(+), 46 deletions(-)

diff --git a/ambari-metrics/ambari-metrics-timelineservice/pom.xml b/ambari-metrics/ambari-metrics-timelineservice/pom.xml
index 6a6dc3e..13076f5 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/pom.xml
+++ b/ambari-metrics/ambari-metrics-timelineservice/pom.xml
@@ -336,6 +336,22 @@
           <groupId>net.sourceforge.findbugs</groupId>
           <artifactId>annotations</artifactId>
         </exclusion>
+        <exclusion>
+          <artifactId>jersey-server</artifactId>
+          <groupId>org.glassfish.jersey.core</groupId>
+        </exclusion>
+        <exclusion>
+          <artifactId>jersey-common</artifactId>
+          <groupId>org.glassfish.jersey.core</groupId>
+        </exclusion>
+        <exclusion>
+          <artifactId>jersey-container-servlet-core</artifactId>
+          <groupId>org.glassfish.jersey.containers</groupId>
+        </exclusion>
+        <exclusion>
+          <artifactId>javax.ws.rs-api</artifactId>
+          <groupId>javax.ws.rs</groupId>
+        </exclusion>
       </exclusions>
     </dependency>
 
@@ -450,6 +466,16 @@
       <artifactId>jersey-server</artifactId>
       <version>1.11</version>
     </dependency>
+    <dependency>
+      <groupId>com.sun.jersey</groupId>
+      <artifactId>jersey-core</artifactId>
+      <version>1.11</version>
+    </dependency>
+    <dependency>
+      <groupId>com.sun.jersey</groupId>
+      <artifactId>jersey-client</artifactId>
+      <version>1.11</version>
+    </dependency>
     <!-- 'mvn dependency:analyze' fails to detect use of this dependency -->
     <dependency>
       <groupId>org.apache.hadoop</groupId>
@@ -481,16 +507,6 @@
       <version>1.1</version>
     </dependency>
     <dependency>
-      <groupId>com.sun.jersey</groupId>
-      <artifactId>jersey-core</artifactId>
-      <version>1.11</version>
-    </dependency>
-    <dependency>
-      <groupId>com.sun.jersey</groupId>
-      <artifactId>jersey-client</artifactId>
-      <version>1.11</version>
-    </dependency>
-    <dependency>
       <groupId>org.apache.httpcomponents</groupId>
       <artifactId>httpclient</artifactId>
       <version>4.5.2</version>
@@ -653,6 +669,24 @@
       <groupId>org.apache.phoenix</groupId>
       <artifactId>phoenix-core</artifactId>
       <type>test-jar</type>
+      <exclusions>
+        <exclusion>
+          <artifactId>jersey-server</artifactId>
+          <groupId>org.glassfish.jersey.core</groupId>
+        </exclusion>
+        <exclusion>
+          <artifactId>jersey-common</artifactId>
+          <groupId>org.glassfish.jersey.core</groupId>
+        </exclusion>
+        <exclusion>
+          <artifactId>jersey-container-servlet-core</artifactId>
+          <groupId>org.glassfish.jersey.containers</groupId>
+        </exclusion>
+        <exclusion>
+          <artifactId>javax.ws.rs-api</artifactId>
+          <groupId>javax.ws.rs</groupId>
+        </exclusion>
+      </exclusions>
       <version>${phoenix.version}</version>
       <scope>test</scope>
     </dependency>
@@ -661,6 +695,16 @@
       <artifactId>hbase-it</artifactId>
       <version>${hbase.version}</version>
       <scope>test</scope>
+      <exclusions>
+        <exclusion>
+          <artifactId>jersey-common</artifactId>
+          <groupId>org.glassfish.jersey.core</groupId>
+        </exclusion>
+        <exclusion>
+          <artifactId>javax.ws.rs-api</artifactId>
+          <groupId>javax.ws.rs</groupId>
+        </exclusion>
+      </exclusions>
       <classifier>tests</classifier>
     </dependency>
     <dependency>
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/uuid/HashBasedUuidGenStrategy.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/uuid/HashBasedUuidGenStrategy.java
index 3acf656..16d1bf2 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/uuid/HashBasedUuidGenStrategy.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/uuid/HashBasedUuidGenStrategy.java
@@ -37,7 +37,6 @@ public class HashBasedUuidGenStrategy implements MetricUuidGenStrategy {
    */
   @Override
   public byte[] computeUuid(TimelineClusterMetric timelineClusterMetric, int maxLength) {
-
     int metricNameUuidLength = 12;
     String metricName = timelineClusterMetric.getMetricName();
 
@@ -51,16 +50,10 @@ public class HashBasedUuidGenStrategy implements MetricUuidGenStrategy {
      */
     StringBuilder splitSums = new StringBuilder();
     if (splits.length > 0) {
-      for (int s = 0; s < splits.length; s++) {
+      for (String split : splits) {
         int asciiSum = 0;
-        if ( s < splits.length -1) {
-          for (int i = 0; i < splits[s].length(); i++) {
-            asciiSum += (int) splits[s].charAt(i); // Get Ascii Sum.
-          }
-        } else {
-          for (int i = 0; i < splits[s].length(); i++) {
-            asciiSum += ((i+1) * (int) splits[s].charAt(i)); //weighted sum for last split.
-          }
+        for (int i = 0; i < split.length(); i++) {
+          asciiSum += ((i + 1) * (int) split.charAt(i)); //weighted sum for split.
         }
         splitSums.append(asciiSum); //Append the sum to the array of sums.
       }
@@ -69,9 +62,7 @@ public class HashBasedUuidGenStrategy implements MetricUuidGenStrategy {
     //Compute a unique metric seed for the stemmed metric name
     String stemmedMetric = stem(metricName);
     long metricSeed = 100123456789L;
-    for (int i = 0; i < stemmedMetric.length(); i++) {
-      metricSeed += stemmedMetric.charAt(i);
-    }
+    metricSeed += computeWeightedNumericalAsciiSum(stemmedMetric);
 
     //Reverse the computed seed to get a metric UUID portion which is used optionally.
     byte[] metricUuidPortion = StringUtils.reverse(String.valueOf(metricSeed)).getBytes();
@@ -80,7 +71,9 @@ public class HashBasedUuidGenStrategy implements MetricUuidGenStrategy {
 
     //If splitSums length > required metric UUID length, use only the required length suffix substring of the splitSums as metric UUID.
     if (splitLength > metricNameUuidLength) {
-      metricUuidPortion = ArrayUtils.subarray(splitSumString.getBytes(), splitLength - metricNameUuidLength, splitLength);
+      int pad = (int)(0.25 * splitLength);
+      metricUuidPortion = ArrayUtils.addAll(ArrayUtils.subarray(splitSumString.getBytes(), splitLength - metricNameUuidLength + pad, splitLength)
+        , ArrayUtils.subarray(metricUuidPortion, 0, pad));
     } else {
       //If splitSums is not enough for required metric UUID length, pad with the metric uuid portion.
       int pad = metricNameUuidLength - splitLength;
@@ -94,7 +87,7 @@ public class HashBasedUuidGenStrategy implements MetricUuidGenStrategy {
     String appId = timelineClusterMetric.getAppId();
     int appidSeed = 11;
     for (int i = 0; i < appId.length(); i++) {
-      appidSeed += appId.charAt(i);
+      appidSeed += ((i+1) * appId.charAt(i));
     }
     String appIdSeedStr = String.valueOf(appidSeed);
     byte[] appUuidPortion = ArrayUtils.subarray(appIdSeedStr.getBytes(), appIdSeedStr.length() - 2, appIdSeedStr.length());
@@ -105,7 +98,7 @@ public class HashBasedUuidGenStrategy implements MetricUuidGenStrategy {
     if (StringUtils.isNotEmpty(instanceId)) {
       int instanceIdSeed = 1489;
       for (int i = 0; i < appId.length(); i++) {
-        instanceIdSeed += appId.charAt(i);
+        instanceIdSeed += ((i+1)* appId.charAt(i));
       }
       buffer.putInt(instanceIdSeed);
       ArrayUtils.subarray(buffer.array(), 2, 4);
@@ -126,11 +119,12 @@ public class HashBasedUuidGenStrategy implements MetricUuidGenStrategy {
   private String[] getIndidivualSplits(String metricName) {
     List<String> tokens = new ArrayList<>();
     String[] splits = new String[0];
-    if (metricName.contains("\\.")) {
+    if (metricName.contains(".")) {
       splits = metricName.split("\\.");
       for (String split : splits) {
         if (split.contains("_")) {
-          tokens.addAll(Arrays.asList(split.split("_")));
+          String[] subSplits = split.split("\\_");
+          tokens.addAll(Arrays.asList(subSplits));
         } else {
           tokens.add(split);
         }
@@ -176,31 +170,37 @@ public class HashBasedUuidGenStrategy implements MetricUuidGenStrategy {
     if (StringUtils.isEmpty(value)) {
       return null;
     }
+
+    int customAsciiSum = 1489 + (int) computeWeightedNumericalAsciiSum(value); //seed = 1489
+
+    String customAsciiSumStr = String.valueOf(customAsciiSum);
+    if (customAsciiSumStr.length() < maxLength) {
+      return null;
+    } else {
+      return customAsciiSumStr.substring(customAsciiSumStr.length() - maxLength, customAsciiSumStr.length()).getBytes();
+    }
+  }
+
+  private long computeWeightedNumericalAsciiSum(String value) {
     int len = value.length();
-    int numericValue = 0;
-    int seed = 1489;
+    long numericValue = 0;
+    int sum = 0;
     for (int i = 0; i < len; i++) {
       int ascii = value.charAt(i);
       if (48 <= ascii && ascii <= 57) {
         numericValue += numericValue * 10 + (ascii - 48);
       } else {
         if (numericValue > 0) {
-          seed += numericValue;
+          sum += numericValue;
           numericValue = 0;
         }
-        seed+= value.charAt(i);
+        sum += value.charAt(i);
       }
     }
 
     if (numericValue != 0) {
-      seed+=numericValue;
-    }
-
-    String seedStr = String.valueOf(seed);
-    if (seedStr.length() < maxLength) {
-      return null;
-    } else {
-      return seedStr.substring(seedStr.length() - maxLength, seedStr.length()).getBytes();
+      sum +=numericValue;
     }
+    return sum;
   }
 }
\ No newline at end of file
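
As a standalone reading of computeWeightedNumericalAsciiSum above: runs of digits are folded into a single number and added to the sum, while every other character contributes its char code. The sketch below uses the conventional run * 10 + digit folding; note the committed loop writes numericValue += numericValue * 10 + (ascii - 48), which accumulates 11 * run + digit per digit instead.

    public class AsciiSumDemo {
      // Hypothetical standalone version of the digit-run summation.
      static long numericalAsciiSum(String value) {
        long sum = 0;
        long run = 0;                     // current run of digits, folded into one number
        for (int i = 0; i < value.length(); i++) {
          char c = value.charAt(i);
          if (c >= '0' && c <= '9') {
            run = run * 10 + (c - '0');
          } else {
            sum += run;                   // flush the pending digit run
            run = 0;
            sum += c;                     // non-digits contribute their char code
          }
        }
        return sum + run;
      }

      public static void main(String[] args) {
        // "dm-12": 'd'(100) + 'm'(109) + '-'(45) + 12 = 266
        System.out.println(numericalAsciiSum("dm-12"));
      }
    }
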
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/resources/metrics_def/HOST.dat b/ambari-metrics/ambari-metrics-timelineservice/src/main/resources/metrics_def/HOST.dat
index 3758140..6a7034b 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/resources/metrics_def/HOST.dat
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/resources/metrics_def/HOST.dat
@@ -53,3 +53,9 @@ write_bps
 write_bytes
 write_count
 write_time
+sdisk_dm-12_write_count
+sdisk_dm-21_write_count
+sdisk_dm-40_write_bytes
+sdisk_dm-22_write_bytes
+sdisk_dm-26_write_time
+sdisk_dm-17_write_time
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/TestTimelineWebServices.java b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/TestTimelineWebServices.java
index 83e2a27..cd20470 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/TestTimelineWebServices.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/TestTimelineWebServices.java
@@ -23,14 +23,8 @@ import static org.junit.Assert.assertEquals;
 import javax.ws.rs.core.MediaType;
 
 import org.apache.hadoop.metrics2.sink.timeline.TimelineMetrics;
-import org.apache.hadoop.yarn.api.records.timeline.TimelineEntities;
-import org.apache.hadoop.yarn.api.records.timeline.TimelineEntity;
-import org.apache.hadoop.yarn.api.records.timeline.TimelineEvent;
-import org.apache.hadoop.yarn.api.records.timeline.TimelineEvents;
-import org.apache.hadoop.yarn.api.records.timeline.TimelinePutResponse;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TestTimelineMetricStore;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricStore;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.timeline.TimelineStore;
 import org.apache.hadoop.yarn.webapp.GenericExceptionHandler;
 import org.apache.hadoop.yarn.webapp.YarnJacksonJaxbJsonProvider;
 import org.junit.Test;

[ambari] 02/39: AMBARI-21106 : Ambari Metrics Anomaly detection prototype.(avijayan)

Posted by av...@apache.org.

avijayan pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/ambari.git

commit e10b5da6638fbccab6a1330047babfb234913679
Author: Aravindan Vijayan <av...@hortonworks.com>
AuthorDate: Tue May 30 12:34:18 2017 -0700

    AMBARI-21106 : Ambari Metrics Anomaly detection prototype.(avijayan)
---
 ambari-metrics/ambari-metrics-alertservice/pom.xml | 121 +++++++++++
 .../ambari/metrics/alertservice/R/AmsRTest.java    | 130 ++++++++++++
 .../metrics/alertservice/R/RFunctionInvoker.java   | 180 +++++++++++++++++
 .../metrics/alertservice/common/DataSet.java       |  21 ++
 .../metrics/alertservice/common/MethodResult.java  |  10 +
 .../metrics/alertservice/common/MetricAnomaly.java |  52 +++++
 .../metrics/alertservice/common/ResultSet.java     |  26 +++
 .../common/SingleValuedTimelineMetric.java         |  86 ++++++++
 .../alertservice/common/StatisticUtils.java        |  60 ++++++
 .../alertservice/common/TimelineMetric.java        | 221 +++++++++++++++++++++
 .../alertservice/common/TimelineMetrics.java       | 112 +++++++++++
 .../alertservice/methods/MetricAnomalyModel.java   |  12 ++
 .../metrics/alertservice/methods/ema/EmaDS.java    |  56 ++++++
 .../metrics/alertservice/methods/ema/EmaModel.java | 114 +++++++++++
 .../alertservice/methods/ema/EmaModelLoader.java   |  29 +++
 .../alertservice/methods/ema/EmaResult.java        |  19 ++
 .../alertservice/methods/ema/TestEmaModel.java     |  51 +++++
 .../alertservice/spark/AmsKafkaProducer.java       |  75 +++++++
 .../alertservice/spark/AnomalyMetricPublisher.java | 181 +++++++++++++++++
 .../alertservice/spark/MetricAnomalyDetector.java  | 134 +++++++++++++
 ambari-metrics/ambari-metrics-spark/pom.xml        | 133 +++++++++++++
 .../metrics/spark/MetricAnomalyDetector.scala      |  97 +++++++++
 .../ambari/metrics/spark/SparkPhoenixReader.scala  |  67 +++++++
 .../ambari-metrics-timelineservice/pom.xml         |   5 +
 ...Store.java => HBaseTimelineMetricsService.java} |  39 +++-
 ambari-metrics/pom.xml                             |   2 +
 26 files changed, 2029 insertions(+), 4 deletions(-)

diff --git a/ambari-metrics/ambari-metrics-alertservice/pom.xml b/ambari-metrics/ambari-metrics-alertservice/pom.xml
new file mode 100644
index 0000000..3a3545b
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-alertservice/pom.xml
@@ -0,0 +1,121 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+    <parent>
+        <artifactId>ambari-metrics</artifactId>
+        <groupId>org.apache.ambari</groupId>
+        <version>2.5.1.0.0</version>
+    </parent>
+    <modelVersion>4.0.0</modelVersion>
+    <artifactId>ambari-metrics-alertservice</artifactId>
+    <version>2.5.1.0.0</version>
+    <build>
+        <plugins>
+            <plugin>
+                <groupId>org.apache.maven.plugins</groupId>
+                <artifactId>maven-compiler-plugin</artifactId>
+                <configuration>
+                    <source>1.8</source>
+                    <target>1.8</target>
+                </configuration>
+            </plugin>
+        </plugins>
+    </build>
+    <name>Ambari Metrics Alert Service</name>
+    <packaging>jar</packaging>
+
+    <dependencies>
+        <dependency>
+            <groupId>org.apache.ambari</groupId>
+            <artifactId>ambari-metrics-common</artifactId>
+            <version>${project.version}</version>
+        </dependency>
+
+        <dependency>
+            <groupId>commons-lang</groupId>
+            <artifactId>commons-lang</artifactId>
+            <version>2.5</version>
+        </dependency>
+
+        <dependency>
+            <groupId>org.slf4j</groupId>
+            <artifactId>slf4j-api</artifactId>
+            <version>1.7.2</version>
+        </dependency>
+
+        <dependency>
+            <groupId>org.slf4j</groupId>
+            <artifactId>slf4j-log4j12</artifactId>
+            <version>1.7.2</version>
+        </dependency>
+
+        <dependency>
+            <groupId>com.github.lucarosellini.rJava</groupId>
+            <artifactId>JRI</artifactId>
+            <version>0.9-7</version>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.spark</groupId>
+            <artifactId>spark-streaming_2.11</artifactId>
+            <version>2.1.1</version>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.kafka</groupId>
+            <artifactId>kafka_2.10</artifactId>
+            <version>0.10.1.0</version>
+            <exclusions>
+                <exclusion>
+                    <groupId>com.sun.jdmk</groupId>
+                    <artifactId>jmxtools</artifactId>
+                </exclusion>
+                <exclusion>
+                    <groupId>com.sun.jmx</groupId>
+                    <artifactId>jmxri</artifactId>
+                </exclusion>
+                <exclusion>
+                    <groupId>javax.mail</groupId>
+                    <artifactId>mail</artifactId>
+                </exclusion>
+                <exclusion>
+                    <groupId>javax.jms</groupId>
+                    <artifactId>jmx</artifactId>
+                </exclusion>
+                <exclusion>
+                    <groupId>javax.jms</groupId>
+                    <artifactId>jms</artifactId>
+                </exclusion>
+            </exclusions>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.kafka</groupId>
+            <artifactId>kafka-clients</artifactId>
+            <version>0.10.1.0</version>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.kafka</groupId>
+            <artifactId>connect-json</artifactId>
+            <version>0.10.1.0</version>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.spark</groupId>
+            <artifactId>spark-streaming-kafka_2.10</artifactId>
+            <version>1.6.3</version>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.spark</groupId>
+            <artifactId>spark-sql_2.10</artifactId>
+            <version>1.6.3</version>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.phoenix</groupId>
+            <artifactId>phoenix-spark</artifactId>
+            <version>4.7.0-HBase-1.0</version>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.spark</groupId>
+            <artifactId>spark-mllib_2.10</artifactId>
+            <version>1.3.0</version>
+        </dependency>
+    </dependencies>
+</project>
\ No newline at end of file
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/R/AmsRTest.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/R/AmsRTest.java
new file mode 100644
index 0000000..0929f4c
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/R/AmsRTest.java
@@ -0,0 +1,130 @@
+package org.apache.ambari.metrics.alertservice.R;
+
+import org.apache.ambari.metrics.alertservice.common.ResultSet;
+import org.apache.ambari.metrics.alertservice.common.DataSet;
+import org.apache.commons.lang.ArrayUtils;
+import org.rosuda.JRI.REXP;
+import org.rosuda.JRI.RVector;
+import org.rosuda.JRI.Rengine;
+
+import java.util.HashMap;
+import java.util.Map;
+import java.util.Random;
+
+public class AmsRTest {
+
+    public static void main(String[] args) {
+
+        String metricName = "TestMetric";
+        double[] ts = getTS(1000);
+
+        double[] train_ts = ArrayUtils.subarray(ts, 0,750);
+        double[] train_x = getData(750);
+        DataSet trainData = new DataSet(metricName, train_ts, train_x);
+
+        double[] test_ts = ArrayUtils.subarray(ts, 750,1000);
+        double[] test_x = getData(250);
+        test_x[50] = 5.5; //Anomaly
+        DataSet testData = new DataSet(metricName, test_ts, test_x);
+        ResultSet rs;
+
+        Map<String, String> configs = new HashMap();
+
+        System.out.println("TUKEYS");
+        configs.put("tukeys.n", "3");
+        rs = RFunctionInvoker.tukeys(trainData, testData, configs);
+        rs.print();
+        System.out.println("--------------");
+
+        System.out.println("EMA Global");
+        configs.put("ema.n", "3");
+        configs.put("ema.w", "0.8");
+        rs = RFunctionInvoker.ema_global(trainData, testData, configs);
+        rs.print();
+        System.out.println("--------------");
+
+        System.out.println("EMA Daily");
+        rs = RFunctionInvoker.ema_daily(trainData, testData, configs);
+        rs.print();
+        System.out.println("--------------");
+
+        configs.put("ks.p_value", "0.05");
+        System.out.println("KS Test");
+        rs = RFunctionInvoker.ksTest(trainData, testData, configs);
+        rs.print();
+        System.out.println("--------------");
+
+        ts = getTS(5000);
+        train_ts = ArrayUtils.subarray(ts, 30,4800);
+        train_x = getData(4800);
+        trainData = new DataSet(metricName, train_ts, train_x);
+        test_ts = ArrayUtils.subarray(ts, 4800,5000);
+        test_x = getData(200);
+        for (int i =0; i<200;i++) {
+            test_x[i] = test_x[i]*5;
+        }
+        testData = new DataSet(metricName, test_ts, test_x);
+        configs.put("hsdev.n", "3");
+        configs.put("hsdev.nhp", "3");
+        configs.put("hsdev.interval", "86400000");
+        configs.put("hsdev.period", "604800000");
+        System.out.println("HSdev");
+        rs = RFunctionInvoker.hsdev(trainData, testData, configs);
+        rs.print();
+        System.out.println("--------------");
+
+    }
+
+    static double[] getTS(int n) {
+        long currentTime = System.currentTimeMillis();
+        double[] ts = new double[n];
+        currentTime = currentTime - (currentTime % (5*60*1000));
+
+        for (int i = 0,j=n-1; i<n; i++,j--) {
+            ts[j] = currentTime;
+            currentTime = currentTime - (5*60*1000);
+        }
+        return ts;
+    }
+
+    static void testBasic() {
+        Rengine r = new Rengine(new String[]{"--no-save"}, false, null);
+        try {
+            r.eval("library(ambarimetricsAD)");
+            r.eval("source('~/dev/AMS/AD/ambarimetricsAD/org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.alerting.R/test.org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.alerting.R', echo=TRUE)");
+            r.eval("source('~/dev/AMS/AD/ambarimetricsAD/org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.alerting.R/util.org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.alerting.R', echo=TRUE)");
+            r.eval("source('~/dev/AMS/AD/ambarimetricsAD/org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.alerting.R/tukeys.org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.alerting.R', echo=TRUE)");
+            double[] ts = getTS(5000);
+            double[] x = getData(5000);
+            r.assign("ts", ts);
+            r.assign("x", x);
+            r.eval("x[1000] <- 4.5");
+            r.eval("x[2000] <- 4.75");
+            r.eval("x[3000] <- 3.5");
+            r.eval("x[4000] <- 5.5");
+            r.eval("x[5000] <- 5.0");
+            r.eval("data <- data.frame(ts,x)");
+            r.eval("names(data) <- c(\"TS\", \"Metric\")");
+            System.out.println(r.eval("data"));
+            REXP exp = r.eval("t_an <- test_methods(data)");
+            exp = r.eval("t_an");
+            String strExp = exp.asString();
+            System.out.println("result:" + exp);
+            RVector cont = (RVector) exp.getContent();
+            double[] an_ts = cont.at(0).asDoubleArray();
+            double[] an_x = cont.at(1).asDoubleArray();
+            System.out.println("result:" + strExp);
+        }
+        finally {
+            r.end();
+        }
+    }
+    static double[] getData(int n) {
+        double[] metrics = new double[n];
+        Random random = new Random();
+        for (int i = 0; i<n; i++) {
+            metrics[i] = random.nextDouble();
+        }
+        return metrics;
+    }
+}
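
getTS above aligns its synthetic series to 5-minute boundaries by truncating the epoch time with a modulo, the same rounding idea behind getRoundedCheckPointTimeMillis. Reduced to a sketch:

    public class RoundingDemo {
      // Round an epoch-millis timestamp down to the nearest interval boundary.
      static long roundDown(long timeMillis, long intervalMillis) {
        return timeMillis - (timeMillis % intervalMillis);
      }

      public static void main(String[] args) {
        System.out.println(roundDown(1_234_567_890L, 300_000L)); // 1234500000
      }
    }
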
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/R/RFunctionInvoker.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/R/RFunctionInvoker.java
new file mode 100644
index 0000000..8d1e520
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/R/RFunctionInvoker.java
@@ -0,0 +1,180 @@
+package org.apache.ambari.metrics.alertservice.R;
+
+
+import org.apache.ambari.metrics.alertservice.common.ResultSet;
+import org.apache.ambari.metrics.alertservice.common.DataSet;
+import org.rosuda.JRI.REXP;
+import org.rosuda.JRI.RVector;
+import org.rosuda.JRI.Rengine;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+
+public class RFunctionInvoker {
+
+    // A single shared Rengine instance. Note that every invoker method below
+    // calls r.end() in its finally block, which shuts the engine down for the
+    // whole JVM; each method is effectively single-use as written.
+    public static Rengine r = new Rengine(new String[]{"--no-save"}, false, null);
+
+
+    private static void loadDataSets(Rengine r, DataSet trainData, DataSet testData) {
+        r.assign("train_ts", trainData.ts);
+        r.assign("train_x", trainData.values);
+        r.eval("train_data <- data.frame(train_ts,train_x)");
+        r.eval("names(train_data) <- c(\"TS\", " + trainData.metricName + ")");
+
+        r.assign("test_ts", testData.ts);
+        r.assign("test_x", testData.values);
+        r.eval("test_data <- data.frame(test_ts,test_x)");
+        r.eval("names(test_data) <- c(\"TS\", " + testData.metricName + ")");
+    }
+
+
+    public static ResultSet tukeys(DataSet trainData, DataSet testData, Map<String, String> configs) {
+        try {
+            r.eval("library(ambarimetricsAD)");
+            r.eval("source('~/dev/AMS/AD/ambarimetricsAD/org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.alerting.R/tukeys.org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.alerting.R', echo=TRUE)");
+
+            int n = Integer.parseInt(configs.get("tukeys.n"));
+            r.eval("n <- " + n);
+
+            loadDataSets(r, trainData, testData);
+
+            r.eval("an <- ams_tukeys(train_data, test_data, n)");
+            REXP exp = r.eval("an");
+            RVector cont = (RVector) exp.getContent();
+            List<double[]> result = new ArrayList<>();
+            for (int i = 0; i < cont.size(); i++) {
+                result.add(cont.at(i).asDoubleArray());
+            }
+            return new ResultSet(result);
+        } catch(Exception e) {
+            e.printStackTrace();
+        } finally {
+            r.end();
+        }
+        return null;
+    }
+
+    public static ResultSet ema_global(DataSet trainData, DataSet testData, Map<String, String> configs) {
+        try {
+            r.eval("library(ambarimetricsAD)");
+            r.eval("source('~/dev/AMS/AD/ambarimetricsAD/org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.alerting.R/ema.org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.alerting.R', echo=TRUE)");
+
+            int n = Integer.parseInt(configs.get("ema.n"));
+            r.eval("n <- " + n);
+
+            double w = Double.parseDouble(configs.get("ema.w"));
+            r.eval("w <- " + w);
+
+            loadDataSets(r, trainData, testData);
+
+            r.eval("an <- ema_global(train_data, test_data, w, n)");
+            REXP exp = r.eval("an");
+            RVector cont = (RVector) exp.getContent();
+            List<double[]> result = new ArrayList<>();
+            for (int i = 0; i < cont.size(); i++) {
+                result.add(cont.at(i).asDoubleArray());
+            }
+            return new ResultSet(result);
+
+        } catch(Exception e) {
+            e.printStackTrace();
+        } finally {
+            r.end();
+        }
+        return null;
+    }
+
+    public static ResultSet ema_daily(DataSet trainData, DataSet testData, Map<String, String> configs) {
+        try {
+            r.eval("library(ambarimetricsAD)");
+            r.eval("source('~/dev/AMS/AD/ambarimetricsAD/org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.alerting.R/ema.org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.alerting.R', echo=TRUE)");
+
+            int n = Integer.parseInt(configs.get("ema.n"));
+            r.eval("n <- " + n);
+
+            double w = Double.parseDouble(configs.get("ema.w"));
+            r.eval("w <- " + w);
+
+            loadDataSets(r, trainData, testData);
+
+            r.eval("an <- ema_daily(train_data, test_data, w, n)");
+            REXP exp = r.eval("an");
+            RVector cont = (RVector) exp.getContent();
+            List<double[]> result = new ArrayList<>();
+            for (int i = 0; i < cont.size(); i++) {
+                result.add(cont.at(i).asDoubleArray());
+            }
+            return new ResultSet(result);
+
+        } catch(Exception e) {
+            e.printStackTrace();
+        } finally {
+            r.end();
+        }
+        return null;
+    }
+
+    public static ResultSet ksTest(DataSet trainData, DataSet testData, Map<String, String> configs) {
+        try {
+            r.eval("library(ambarimetricsAD)");
+            r.eval("source('~/dev/AMS/AD/ambarimetricsAD/org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.alerting.R/kstest.org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.alerting.R', echo=TRUE)");
+
+            double p_value = Double.parseDouble(configs.get("ks.p_value"));
+            r.eval("p_value <- " + p_value);
+
+            loadDataSets(r, trainData, testData);
+
+            r.eval("an <- ams_ks(train_data, test_data, p_value)");
+            REXP exp = r.eval("an");
+            RVector cont = (RVector) exp.getContent();
+            List<double[]> result = new ArrayList<>();
+            for (int i = 0; i < cont.size(); i++) {
+                result.add(cont.at(i).asDoubleArray());
+            }
+            return new ResultSet(result);
+
+        } catch(Exception e) {
+            e.printStackTrace();
+        } finally {
+            r.end();
+        }
+        return null;
+    }
+
+    public static ResultSet hsdev(DataSet trainData, DataSet testData, Map<String, String> configs) {
+        try {
+            r.eval("library(ambarimetricsAD)");
+            r.eval("source('~/dev/AMS/AD/ambarimetricsAD/org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.alerting.R/hsdev.org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.alerting.R', echo=TRUE)");
+
+            int n = Integer.parseInt(configs.get("hsdev.n"));
+            r.eval("n <- " + n);
+
+            int nhp = Integer.parseInt(configs.get("hsdev.nhp"));
+            r.eval("nhp <- " + nhp);
+
+            long interval = Long.parseLong(configs.get("hsdev.interval"));
+            r.eval("interval <- " + interval);
+
+            long period = Long.parseLong(configs.get("hsdev.period"));
+            r.eval("period <- " + period);
+
+            loadDataSets(r, trainData, testData);
+
+            r.eval("an2 <- hsdev_daily(train_data, test_data, n, nhp, interval, period)");
+            REXP exp = r.eval("an2");
+            RVector cont = (RVector) exp.getContent();
+
+            List<double[]> result = new ArrayList<>();
+            for (int i = 0; i < cont.size(); i++) {
+                result.add(cont.at(i).asDoubleArray());
+            }
+            return new ResultSet(result);
+        } catch(Exception e) {
+            e.printStackTrace();
+        } finally {
+            r.end();
+        }
+        return null;
+    }
+}
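+
+// A minimal caller sketch (assumed arrays and config values, mirroring the
+// test driver above; trainTs/trainValues/testTs/testValues are placeholders):
+//
+//   Map<String, String> configs = new HashMap<>();
+//   configs.put("tukeys.n", "3");
+//   DataSet train = new DataSet("metric1", trainTs, trainValues);
+//   DataSet test = new DataSet("metric1", testTs, testValues);
+//   ResultSet rs = RFunctionInvoker.tukeys(train, test, configs);
+//   if (rs != null) {
+//       rs.print();
+//   }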
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/DataSet.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/DataSet.java
new file mode 100644
index 0000000..47bf9b6
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/DataSet.java
@@ -0,0 +1,21 @@
+package org.apache.ambari.metrics.alertservice.common;
+
+import java.util.Arrays;
+
+public class DataSet {
+
+    public String metricName;
+    public double[] ts;
+    public double[] values;
+
+    public DataSet(String metricName, double[] ts, double[] values) {
+        this.metricName = metricName;
+        this.ts = ts;
+        this.values = values;
+    }
+
+    @Override
+    public String toString() {
+        return metricName + Arrays.toString(ts) + Arrays.toString(values);
+    }
+}
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/MethodResult.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/MethodResult.java
new file mode 100644
index 0000000..915da4c
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/MethodResult.java
@@ -0,0 +1,10 @@
+package org.apache.ambari.metrics.alertservice.common;
+
+public abstract class MethodResult {
+    protected String methodType;
+    public abstract String prettyPrint();
+
+    public String getMethodType() {
+        return methodType;
+    }
+}
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/MetricAnomaly.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/MetricAnomaly.java
new file mode 100644
index 0000000..d237bee
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/MetricAnomaly.java
@@ -0,0 +1,52 @@
+package org.apache.ambari.metrics.alertservice.common;
+
+public class MetricAnomaly {
+
+    private String metricKey;
+    private long timestamp;
+    private double metricValue;
+    private MethodResult methodResult;
+
+    public MetricAnomaly(String metricKey, long timestamp, double metricValue, MethodResult methodResult) {
+        this.metricKey = metricKey;
+        this.timestamp = timestamp;
+        this.metricValue = metricValue;
+        this.methodResult = methodResult;
+    }
+
+    public String getMetricKey() {
+        return metricKey;
+    }
+
+    public void setMetricKey(String metricKey) {
+        this.metricKey = metricKey;
+    }
+
+    public long getTimestamp() {
+        return timestamp;
+    }
+
+    public void setTimestamp(long timestamp) {
+        this.timestamp = timestamp;
+    }
+
+    public double getMetricValue() {
+        return metricValue;
+    }
+
+    public void setMetricValue(double metricValue) {
+        this.metricValue = metricValue;
+    }
+
+    public MethodResult getMethodResult() {
+        return methodResult;
+    }
+
+    public void setMethodResult(MethodResult methodResult) {
+        this.methodResult = methodResult;
+    }
+
+    public String getAnomalyAsString() {
+        return metricKey + ":" + timestamp + ":" + metricValue + ":" + methodResult.prettyPrint();
+    }
+}
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/ResultSet.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/ResultSet.java
new file mode 100644
index 0000000..96b74e0
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/ResultSet.java
@@ -0,0 +1,26 @@
+package org.apache.ambari.metrics.alertservice.common;
+
+
+import java.util.ArrayList;
+import java.util.List;
+
+public class ResultSet {
+
+    List<double[]> resultset = new ArrayList<>();
+
+    public ResultSet(List<double[]> resultset) {
+        this.resultset = resultset;
+    }
+
+    public void print() {
+        System.out.println("Result : ");
+        if (!resultset.isEmpty()) {
+            for (int i = 0; i<resultset.get(0).length;i++) {
+                for (double[] entity : resultset) {
+                    System.out.print(entity[i] + " ");
+                }
+                System.out.println();
+            }
+        }
+    }
+}
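+
+// Each double[] is one column returned from R (e.g., anomaly timestamps and
+// values); print() emits one row per index across the columns. Illustration
+// with made-up numbers:
+//
+//   ResultSet rs = new ResultSet(Arrays.asList(
+//       new double[]{1.0, 2.0},       // column 1: timestamps
+//       new double[]{10.5, 11.2}));   // column 2: metric values
+//   rs.print();
+//   // Result :
+//   // 1.0 10.5
+//   // 2.0 11.2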
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/SingleValuedTimelineMetric.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/SingleValuedTimelineMetric.java
new file mode 100644
index 0000000..5118225
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/SingleValuedTimelineMetric.java
@@ -0,0 +1,86 @@
+package org.apache.ambari.metrics.alertservice.common;
+
+
+public class SingleValuedTimelineMetric {
+    private Long timestamp;
+    private Double value;
+    private String metricName;
+    private String appId;
+    private String instanceId;
+    private String hostName;
+    private Long startTime;
+    private String type;
+
+    public void setSingleTimeseriesValue(Long timestamp, Double value) {
+        this.timestamp = timestamp;
+        this.value = value;
+    }
+
+    public SingleValuedTimelineMetric(String metricName, String appId,
+                                      String instanceId, String hostName,
+                                      long timestamp, long startTime, String type) {
+        this.metricName = metricName;
+        this.appId = appId;
+        this.instanceId = instanceId;
+        this.hostName = hostName;
+        this.timestamp = timestamp;
+        this.startTime = startTime;
+        this.type = type;
+    }
+
+    public Long getTimestamp() {
+        return timestamp;
+    }
+
+    public long getStartTime() {
+        return startTime;
+    }
+
+    public String getType() {
+        return type;
+    }
+
+    public Double getValue() {
+        return value;
+    }
+
+    public String getMetricName() {
+        return metricName;
+    }
+
+    public String getAppId() {
+        return appId;
+    }
+
+    public String getInstanceId() {
+        return instanceId;
+    }
+
+    public String getHostName() {
+        return hostName;
+    }
+
+    public boolean equalsExceptTime(TimelineMetric metric) {
+        if (!metricName.equals(metric.getMetricName())) return false;
+        if (hostName != null ? !hostName.equals(metric.getHostName()) : metric.getHostName() != null)
+            return false;
+        if (appId != null ? !appId.equals(metric.getAppId()) : metric.getAppId() != null)
+            return false;
+        if (instanceId != null ? !instanceId.equals(metric.getInstanceId()) : metric.getInstanceId() != null) return false;
+
+        return true;
+    }
+
+    public TimelineMetric getTimelineMetric() {
+        TimelineMetric metric = new TimelineMetric();
+        metric.setMetricName(this.metricName);
+        metric.setAppId(this.appId);
+        metric.setHostName(this.hostName);
+        metric.setType(this.type);
+        metric.setInstanceId(this.instanceId);
+        metric.setStartTime(this.startTime);
+        metric.setTimestamp(this.timestamp);
+        metric.getMetricValues().put(timestamp, value);
+        return metric;
+    }
+}
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/StatisticUtils.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/StatisticUtils.java
new file mode 100644
index 0000000..dff56e6
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/StatisticUtils.java
@@ -0,0 +1,60 @@
+package org.apache.ambari.metrics.alertservice.common;
+
+
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Collections;
+
+public class StatisticUtils {
+
+  public static double mean(Collection<Double> values) {
+    double sum = 0;
+    for (double d : values) {
+      sum += d;
+    }
+    return sum / values.size();
+  }
+
+  // Population variance: the mean of squared deviations from the mean.
+  public static double variance(Collection<Double> values) {
+    double avg = mean(values);
+    double sumOfSquares = 0;
+    for (double d : values) {
+      sumOfSquares += Math.pow(d - avg, 2.0);
+    }
+    return sumOfSquares / values.size();
+  }
+
+  // Standard deviation; with useBesselsCorrection, divides by (n - 1) instead of n.
+  public static double sdev(Collection<Double> values, boolean useBesselsCorrection) {
+    double avg = mean(values);
+    double sumOfSquares = 0;
+    for (double d : values) {
+      sumOfSquares += Math.pow(d - avg, 2.0);
+    }
+    int n = useBesselsCorrection ? values.size() - 1 : values.size();
+    return Math.sqrt(sumOfSquares / n);
+  }
+
+  public static double median(Collection<Double> values) {
+    ArrayList<Double> clonedValues = new ArrayList<Double>(values);
+    Collections.sort(clonedValues);
+    int n = values.size();
+
+    if (n % 2 != 0) {
+      return clonedValues.get((n-1)/2);
+    } else {
+      return ( clonedValues.get((n-1)/2) + clonedValues.get(n/2) ) / 2;
+    }
+  }
+
+  /*
+   * Example: for values {1.0, 2.0, 3.0, 4.0, 5.0}
+   *   mean(values)        -> 3.0
+   *   sdev(values, false) -> ~1.4142 (population standard deviation)
+   *   median(values)      -> 3.0
+   */
+}
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/TimelineMetric.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/TimelineMetric.java
new file mode 100644
index 0000000..2a73855
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/TimelineMetric.java
@@ -0,0 +1,221 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.ambari.metrics.alertservice.common;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+
+import javax.xml.bind.annotation.XmlAccessType;
+import javax.xml.bind.annotation.XmlAccessorType;
+import javax.xml.bind.annotation.XmlElement;
+import javax.xml.bind.annotation.XmlRootElement;
+import java.io.Serializable;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.TreeMap;
+
+@XmlRootElement(name = "metric")
+@XmlAccessorType(XmlAccessType.NONE)
+@InterfaceAudience.Public
+@InterfaceStability.Unstable
+public class TimelineMetric implements Comparable<TimelineMetric>, Serializable {
+
+    private String metricName;
+    private String appId;
+    private String instanceId;
+    private String hostName;
+    private long timestamp;
+    private long startTime;
+    private String type;
+    private String units;
+    private TreeMap<Long, Double> metricValues = new TreeMap<Long, Double>();
+    private Map<String, String> metadata = new HashMap<>();
+
+    // default
+    public TimelineMetric() {
+
+    }
+
+    public TimelineMetric(String metricName, String appId, String hostName, TreeMap<Long,Double> metricValues) {
+        this.metricName = metricName;
+        this.appId = appId;
+        this.hostName = hostName;
+        this.metricValues.putAll(metricValues);
+    }
+
+    // copy constructor
+    public TimelineMetric(TimelineMetric metric) {
+        setMetricName(metric.getMetricName());
+        setType(metric.getType());
+        setUnits(metric.getUnits());
+        setTimestamp(metric.getTimestamp());
+        setAppId(metric.getAppId());
+        setInstanceId(metric.getInstanceId());
+        setHostName(metric.getHostName());
+        setStartTime(metric.getStartTime());
+        setMetricValues(new TreeMap<Long, Double>(metric.getMetricValues()));
+    }
+
+    @XmlElement(name = "metricname")
+    public String getMetricName() {
+        return metricName;
+    }
+
+    public void setMetricName(String metricName) {
+        this.metricName = metricName;
+    }
+
+    @XmlElement(name = "appid")
+    public String getAppId() {
+        return appId;
+    }
+
+    public void setAppId(String appId) {
+        this.appId = appId;
+    }
+
+    @XmlElement(name = "instanceid")
+    public String getInstanceId() {
+        return instanceId;
+    }
+
+    public void setInstanceId(String instanceId) {
+        this.instanceId = instanceId;
+    }
+
+    @XmlElement(name = "hostname")
+    public String getHostName() {
+        return hostName;
+    }
+
+    public void setHostName(String hostName) {
+        this.hostName = hostName;
+    }
+
+    @XmlElement(name = "timestamp")
+    public long getTimestamp() {
+        return timestamp;
+    }
+
+    public void setTimestamp(long timestamp) {
+        this.timestamp = timestamp;
+    }
+
+    @XmlElement(name = "starttime")
+    public long getStartTime() {
+        return startTime;
+    }
+
+    public void setStartTime(long startTime) {
+        this.startTime = startTime;
+    }
+
+    @XmlElement(name = "type", defaultValue = "UNDEFINED")
+    public String getType() {
+        return type;
+    }
+
+    public void setType(String type) {
+        this.type = type;
+    }
+
+    @XmlElement(name = "units")
+    public String getUnits() {
+        return units;
+    }
+
+    public void setUnits(String units) {
+        this.units = units;
+    }
+
+    @XmlElement(name = "metrics")
+    public TreeMap<Long, Double> getMetricValues() {
+        return metricValues;
+    }
+
+    public void setMetricValues(TreeMap<Long, Double> metricValues) {
+        this.metricValues = metricValues;
+    }
+
+    public void addMetricValues(Map<Long, Double> metricValues) {
+        this.metricValues.putAll(metricValues);
+    }
+
+    @XmlElement(name = "metadata")
+    public Map<String, String> getMetadata() {
+        return metadata;
+    }
+
+    public void setMetadata(Map<String, String> metadata) {
+        this.metadata = metadata;
+    }
+
+    @Override
+    public boolean equals(Object o) {
+        if (this == o) return true;
+        if (o == null || getClass() != o.getClass()) return false;
+
+        TimelineMetric metric = (TimelineMetric) o;
+
+        if (!metricName.equals(metric.metricName)) return false;
+        if (hostName != null ? !hostName.equals(metric.hostName) : metric.hostName != null)
+            return false;
+        if (appId != null ? !appId.equals(metric.appId) : metric.appId != null)
+            return false;
+        if (instanceId != null ? !instanceId.equals(metric.instanceId) : metric.instanceId != null)
+            return false;
+        if (timestamp != metric.timestamp) return false;
+        if (startTime != metric.startTime) return false;
+
+        return true;
+    }
+
+    public boolean equalsExceptTime(TimelineMetric metric) {
+        if (!metricName.equals(metric.metricName)) return false;
+        if (hostName != null ? !hostName.equals(metric.hostName) : metric.hostName != null)
+            return false;
+        if (appId != null ? !appId.equals(metric.appId) : metric.appId != null)
+            return false;
+        if (instanceId != null ? !instanceId.equals(metric.instanceId) : metric.instanceId != null)
+            return false;
+
+        return true;
+    }
+
+    @Override
+    public int hashCode() {
+        int result = metricName.hashCode();
+        result = 31 * result + (appId != null ? appId.hashCode() : 0);
+        result = 31 * result + (instanceId != null ? instanceId.hashCode() : 0);
+        result = 31 * result + (hostName != null ? hostName.hashCode() : 0);
+        result = 31 * result + (int) (timestamp ^ (timestamp >>> 32));
+        return result;
+    }
+
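+    // Orders by timestamp descending (newest first), breaking ties by metric name.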
+    @Override
+    public int compareTo(TimelineMetric other) {
+        if (timestamp > other.timestamp) {
+            return -1;
+        } else if (timestamp < other.timestamp) {
+            return 1;
+        } else {
+            return metricName.compareTo(other.metricName);
+        }
+    }
+}
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/TimelineMetrics.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/TimelineMetrics.java
new file mode 100644
index 0000000..500e1e9
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/common/TimelineMetrics.java
@@ -0,0 +1,112 @@
+package org.apache.ambari.metrics.alertservice.common;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+
+import javax.xml.bind.annotation.XmlAccessType;
+import javax.xml.bind.annotation.XmlAccessorType;
+import javax.xml.bind.annotation.XmlElement;
+import javax.xml.bind.annotation.XmlRootElement;
+import java.io.Serializable;
+import java.util.ArrayList;
+import java.util.List;
+
+/**
+ * The class that hosts a list of timeline entities.
+ */
+@XmlRootElement(name = "metrics")
+@XmlAccessorType(XmlAccessType.NONE)
+@InterfaceAudience.Public
+@InterfaceStability.Unstable
+public class TimelineMetrics implements Serializable {
+
+    private List<TimelineMetric> allMetrics = new ArrayList<TimelineMetric>();
+
+    public TimelineMetrics() {}
+
+    @XmlElement(name = "metrics")
+    public List<TimelineMetric> getMetrics() {
+        return allMetrics;
+    }
+
+    public void setMetrics(List<TimelineMetric> allMetrics) {
+        this.allMetrics = allMetrics;
+    }
+
+    private boolean isEqualTimelineMetrics(TimelineMetric metric1,
+                                           TimelineMetric metric2) {
+
+        if (!metric1.getMetricName().equals(metric2.getMetricName())) {
+            return false;
+        }
+
+        if (metric1.getHostName() != null &&
+            !metric1.getHostName().equals(metric2.getHostName())) {
+            return false;
+        }
+
+        return metric1.getAppId() == null ||
+            metric1.getAppId().equals(metric2.getAppId());
+    }
+
+    /**
+     * Merge with existing TimelineMetric if everything except startTime is
+     * the same.
+     * @param metric {@link TimelineMetric}
+     */
+    public void addOrMergeTimelineMetric(TimelineMetric metric) {
+        TimelineMetric metricToMerge = null;
+
+        if (!allMetrics.isEmpty()) {
+            for (TimelineMetric timelineMetric : allMetrics) {
+                if (timelineMetric.equalsExceptTime(metric)) {
+                    metricToMerge = timelineMetric;
+                    break;
+                }
+            }
+        }
+
+        if (metricToMerge != null) {
+            metricToMerge.addMetricValues(metric.getMetricValues());
+            if (metricToMerge.getTimestamp() > metric.getTimestamp()) {
+                metricToMerge.setTimestamp(metric.getTimestamp());
+            }
+            if (metricToMerge.getStartTime() > metric.getStartTime()) {
+                metricToMerge.setStartTime(metric.getStartTime());
+            }
+        } else {
+            allMetrics.add(metric);
+        }
+    }
+
+    // Optimization that addresses too many TreeMaps from getting created.
+    public void addOrMergeTimelineMetric(SingleValuedTimelineMetric metric) {
+        TimelineMetric metricToMerge = null;
+
+        if (!allMetrics.isEmpty()) {
+            for (TimelineMetric timelineMetric : allMetrics) {
+                if (metric.equalsExceptTime(timelineMetric)) {
+                    metricToMerge = timelineMetric;
+                    break;
+                }
+            }
+        }
+
+        if (metricToMerge != null) {
+            metricToMerge.getMetricValues().put(metric.getTimestamp(), metric.getValue());
+            if (metricToMerge.getTimestamp() > metric.getTimestamp()) {
+                metricToMerge.setTimestamp(metric.getTimestamp());
+            }
+            if (metricToMerge.getStartTime() > metric.getStartTime()) {
+                metricToMerge.setStartTime(metric.getStartTime());
+            }
+        } else {
+            allMetrics.add(metric.getTimelineMetric());
+        }
+    }
+}
+
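+// A short sketch of the merge behavior (hypothetical values): two metrics that
+// differ only in time are collapsed into a single series.
+//
+//   TimelineMetrics metrics = new TimelineMetrics();
+//   TreeMap<Long, Double> v1 = new TreeMap<>();
+//   v1.put(1000L, 1.0);
+//   TreeMap<Long, Double> v2 = new TreeMap<>();
+//   v2.put(2000L, 2.0);
+//   metrics.addOrMergeTimelineMetric(new TimelineMetric("m1", "HOST", "h1", v1));
+//   metrics.addOrMergeTimelineMetric(new TimelineMetric("m1", "HOST", "h1", v2));
+//   // metrics.getMetrics().size() == 1, holding both points in one series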
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/methods/MetricAnomalyModel.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/methods/MetricAnomalyModel.java
new file mode 100644
index 0000000..7ae91a3
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/methods/MetricAnomalyModel.java
@@ -0,0 +1,12 @@
+package org.apache.ambari.metrics.alertservice.methods;
+
+import org.apache.ambari.metrics.alertservice.common.MetricAnomaly;
+import org.apache.ambari.metrics.alertservice.common.TimelineMetric;
+
+import java.util.List;
+
+public interface MetricAnomalyModel {
+
+    public List<MetricAnomaly> onNewMetric(TimelineMetric metric);
+    public List<MetricAnomaly> test(TimelineMetric metric);
+}
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/methods/ema/EmaDS.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/methods/ema/EmaDS.java
new file mode 100644
index 0000000..ec548c8
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/methods/ema/EmaDS.java
@@ -0,0 +1,56 @@
+package org.apache.ambari.metrics.alertservice.methods.ema;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+import javax.xml.bind.annotation.XmlRootElement;
+import java.io.Serializable;
+
+@XmlRootElement
+public class EmaDS implements Serializable {
+
+    String metricName;
+    String appId;
+    String hostname;
+    double ema;
+    double ems;
+    double weight;
+    int timessdev;
+    private static final Log LOG = LogFactory.getLog(EmaDS.class);
+
+    public EmaDS(String metricName, String appId, String hostname, double weight, int timessdev) {
+        this.metricName = metricName;
+        this.appId = appId;
+        this.hostname = hostname;
+        this.weight = weight;
+        this.timessdev = timessdev;
+        this.ema = 0.0;
+        this.ems = 0.0;
+    }
+
+
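+    // Tests the value against the current estimate first, then folds it into
+    // the running average (ema) and deviation (ems).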
+    public EmaResult testAndUpdate(double metricValue) {
+
+        double diff  = Math.abs(ema - metricValue) - (timessdev * ems);
+
+        ema = weight * ema + (1 - weight) * metricValue;
+        ems = Math.sqrt(weight * Math.pow(ems, 2.0) + (1 - weight) * Math.pow(metricValue - ema, 2.0));
+
+        System.out.println(ema + ", " + ems);
+        LOG.info(ema + ", " + ems);
+        return diff > 0 ? new EmaResult(diff) : null;
+    }
+
+    public void update(double metricValue) {
+        ema = weight * ema + (1 - weight) * metricValue;
+        ems = Math.sqrt(weight * Math.pow(ems, 2.0) + (1 - weight) * Math.pow(metricValue - ema, 2.0));
+        System.out.println(ema + ", " + ems);
+        LOG.info(ema + ", " + ems);
+    }
+
+    public EmaResult test(double metricValue) {
+        double diff  = Math.abs(ema - metricValue) - (timessdev * ems);
+        return diff > 0 ? new EmaResult(diff) : null;
+    }
+
+}
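+
+// To make the recurrence concrete: with weight w, each update computes
+//   ema' = w * ema + (1 - w) * x
+//   ems' = sqrt(w * ems^2 + (1 - w) * (x - ema')^2)
+// and a value is flagged when |ema - x| > timessdev * ems, using the estimates
+// from before the update. A sketch with assumed numbers (note ema/ems start at
+// 0.0, so the estimate needs a few updates to warm up):
+//
+//   EmaDS ds = new EmaDS("m1", "HOST", "h1", 0.8, 3);
+//   for (double x : new double[]{10.1, 9.9, 10.0, 10.2}) {
+//       ds.update(x);                     // warm up ema/ems
+//   }
+//   EmaResult r = ds.testAndUpdate(25.0); // outside the ema +/- 3*ems band
+//   // r != null; r.prettyPrint() reports how far outside the band it fell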
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/methods/ema/EmaModel.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/methods/ema/EmaModel.java
new file mode 100644
index 0000000..4aae543
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/methods/ema/EmaModel.java
@@ -0,0 +1,114 @@
+package org.apache.ambari.metrics.alertservice.methods.ema;
+
+import com.google.gson.Gson;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.ambari.metrics.alertservice.common.MethodResult;
+import org.apache.ambari.metrics.alertservice.common.MetricAnomaly;
+import org.apache.ambari.metrics.alertservice.common.TimelineMetric;
+import org.apache.ambari.metrics.alertservice.methods.MetricAnomalyModel;
+import org.apache.spark.SparkContext;
+import org.apache.spark.mllib.util.Saveable;
+
+import javax.xml.bind.annotation.XmlElement;
+import javax.xml.bind.annotation.XmlRootElement;
+import java.io.*;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+@XmlRootElement
+public class EmaModel implements MetricAnomalyModel, Saveable, Serializable {
+
+    @XmlElement(name = "trackedEmas")
+    private Map<String, EmaDS> trackedEmas = new HashMap<>();
+    private static final Log LOG = LogFactory.getLog(EmaModel.class);
+
+    public List<MetricAnomaly> onNewMetric(TimelineMetric metric) {
+
+        String metricName = metric.getMetricName();
+        String appId = metric.getAppId();
+        String hostname = metric.getHostName();
+        String key = metricName + "_" + appId + "_" + hostname;
+        List<MetricAnomaly> anomalies = new ArrayList<>();
+
+        if (!trackedEmas.containsKey(key)) {
+            trackedEmas.put(key, new EmaDS(metricName, appId, hostname, 0.8, 3));
+        }
+
+        EmaDS emaDS = trackedEmas.get(key);
+        for (Long timestamp : metric.getMetricValues().keySet()) {
+            double metricValue = metric.getMetricValues().get(timestamp);
+            MethodResult result = emaDS.testAndUpdate(metricValue);
+            if (result != null) {
+                MetricAnomaly metricAnomaly = new MetricAnomaly(key,timestamp, metricValue, result);
+                anomalies.add(metricAnomaly);
+            }
+        }
+        return anomalies;
+    }
+
+    public EmaDS train(TimelineMetric metric, double weight, int timessdev) {
+
+        String metricName = metric.getMetricName();
+        String appId = metric.getAppId();
+        String hostname = metric.getHostName();
+        String key = metricName + "_" + appId + "_" + hostname;
+
+        EmaDS emaDS = new EmaDS(metric.getMetricName(), metric.getAppId(), metric.getHostName(), weight, timessdev);
+        LOG.info("In EMA Train step");
+        for (Long timestamp : metric.getMetricValues().keySet()) {
+            System.out.println(timestamp + " : " + metric.getMetricValues().get(timestamp));
+            LOG.info(timestamp + " : " + metric.getMetricValues().get(timestamp));
+            emaDS.update(metric.getMetricValues().get(timestamp));
+        }
+        trackedEmas.put(key, emaDS);
+        return emaDS;
+    }
+
+    public List<MetricAnomaly> test(TimelineMetric metric) {
+        String metricName = metric.getMetricName();
+        String appId = metric.getAppId();
+        String hostname = metric.getHostName();
+        String key = metricName + "_" + appId + "_" + hostname;
+
+        EmaDS emaDS = trackedEmas.get(key);
+
+        if (emaDS == null) {
+            return new ArrayList<>();
+        }
+
+        List<MetricAnomaly> anomalies = new ArrayList<>();
+
+        for (Long timestamp : metric.getMetricValues().keySet()) {
+            double metricValue = metric.getMetricValues().get(timestamp);
+            MethodResult result = emaDS.test(metricValue);
+            if (result != null) {
+                MetricAnomaly metricAnomaly = new MetricAnomaly(key,timestamp, metricValue, result);
+                anomalies.add(metricAnomaly);
+            }
+        }
+        return anomalies;
+    }
+
+    @Override
+    public void save(SparkContext sc, String path) {
+        Gson gson = new Gson();
+        try {
+            String json = gson.toJson(this);
+            try (Writer writer = new BufferedWriter(new OutputStreamWriter(
+                    new FileOutputStream(path), "utf-8"))) {
+                writer.write(json);
+            }
+        } catch (IOException e) {
+            LOG.error(e);
+        }
+    }
+
+    @Override
+    public String formatVersion() {
+        return "1.0";
+    }
+
+}
+
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/methods/ema/EmaModelLoader.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/methods/ema/EmaModelLoader.java
new file mode 100644
index 0000000..f0ef340
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/methods/ema/EmaModelLoader.java
@@ -0,0 +1,29 @@
+package org.apache.ambari.metrics.alertservice.methods.ema;
+
+import com.google.gson.Gson;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.spark.SparkContext;
+import org.apache.spark.mllib.util.Loader;
+
+import java.io.IOException;
+import java.nio.charset.StandardCharsets;
+import java.nio.file.Files;
+import java.nio.file.Paths;
+
+public class EmaModelLoader implements Loader<EmaModel> {
+    private static final Log LOG = LogFactory.getLog(EmaModelLoader.class);
+
+    @Override
+    public EmaModel load(SparkContext sc, String path) {
+        Gson gson = new Gson();
+        try {
+            String fileString = new String(Files.readAllBytes(Paths.get(path)), StandardCharsets.UTF_8);
+            return gson.fromJson(fileString, EmaModel.class);
+        } catch (IOException e) {
+            LOG.error(e);
+        }
+        return null;
+    }
+}
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/methods/ema/EmaResult.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/methods/ema/EmaResult.java
new file mode 100644
index 0000000..23f1793
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/methods/ema/EmaResult.java
@@ -0,0 +1,19 @@
+package org.apache.ambari.metrics.alertservice.methods.ema;
+
+import org.apache.ambari.metrics.alertservice.common.MethodResult;
+
+public class EmaResult extends MethodResult{
+
+    double diff;
+
+    public EmaResult(double diff) {
+        this.methodType = "EMA";
+        this.diff = diff;
+    }
+
+
+    @Override
+    public String prettyPrint() {
+        return methodType + "(` = " + diff + ")";
+    }
+}
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/methods/ema/TestEmaModel.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/methods/ema/TestEmaModel.java
new file mode 100644
index 0000000..a090786
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/methods/ema/TestEmaModel.java
@@ -0,0 +1,51 @@
+package org.apache.ambari.metrics.alertservice.methods.ema;
+
+import org.apache.ambari.metrics.alertservice.common.TimelineMetric;
+
+import java.io.IOException;
+import java.util.TreeMap;
+
+public class TestEmaModel {
+
+    public static void main(String[] args) throws IOException {
+
+        long now = System.currentTimeMillis();
+        TimelineMetric metric1 = new TimelineMetric();
+        metric1.setMetricName("dummy_metric");
+        metric1.setHostName("dummy_host");
+        metric1.setTimestamp(now);
+        metric1.setStartTime(now - 1000);
+        metric1.setAppId("HOST");
+        metric1.setType("Integer");
+
+        TreeMap<Long, Double> metricValues = new TreeMap<Long, Double>();
+
+        for (int i = 0; i < 20; i++) {
+            double metric = 9 + Math.random();
+            metricValues.put(now - i*100, metric);
+        }
+        metric1.setMetricValues(metricValues);
+
+        EmaModel emaModel = new EmaModel();
+
+        emaModel.train(metric1, 0.8, 3);
+    }
+
+}
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/spark/AmsKafkaProducer.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/spark/AmsKafkaProducer.java
new file mode 100644
index 0000000..de56825
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/spark/AmsKafkaProducer.java
@@ -0,0 +1,75 @@
+package org.apache.ambari.metrics.alertservice.spark;
+
+import com.fasterxml.jackson.databind.JsonNode;
+import com.fasterxml.jackson.databind.ObjectMapper;
+import org.apache.ambari.metrics.alertservice.common.TimelineMetric;
+import org.apache.ambari.metrics.alertservice.common.TimelineMetrics;
+import org.apache.kafka.clients.producer.*;
+
+import java.util.Properties;
+import java.util.TreeMap;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.Future;
+
+public class AmsKafkaProducer {
+
+    private final Producer<String, JsonNode> producer;
+    private static final String topicName = "ambari-metrics-topic";
+
+    public AmsKafkaProducer(String kafkaServers) {
+        Properties configProperties = new Properties();
+        configProperties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaServers); //"avijayan-ams-2.openstacklocal:6667"
+        configProperties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,"org.apache.kafka.common.serialization.ByteArraySerializer");
+        configProperties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,"org.apache.kafka.connect.json.JsonSerializer");
+        producer = new KafkaProducer<>(configProperties);
+    }
+
+    public void sendMetrics(TimelineMetrics timelineMetrics) throws InterruptedException, ExecutionException {
+
+        ObjectMapper objectMapper = new ObjectMapper();
+        JsonNode jsonNode = objectMapper.valueToTree(timelineMetrics);
+        ProducerRecord<String, JsonNode> rec = new ProducerRecord<>(topicName, jsonNode);
+        Future<RecordMetadata> kafkaFuture =  producer.send(rec);
+
+        System.out.println(kafkaFuture.isDone());
+        System.out.println(kafkaFuture.get().topic());
+    }
+
+    public static void main(String[] args) throws ExecutionException, InterruptedException {
+        final long now = System.currentTimeMillis();
+
+        TimelineMetrics timelineMetrics = new TimelineMetrics();
+        TimelineMetric metric1 = new TimelineMetric();
+        metric1.setMetricName("mem_free");
+        metric1.setHostName("avijayan-ams-3.openstacklocal");
+        metric1.setTimestamp(now);
+        metric1.setStartTime(now - 1000);
+        metric1.setAppId("HOST");
+        metric1.setType("Integer");
+
+        TreeMap<Long, Double> metricValues = new TreeMap<Long, Double>();
+
+        for (int i = 0; i<20;i++) {
+            double metric = 20000 + Math.random();
+            metricValues.put(now - i*100, metric);
+        }
+
+        metric1.setMetricValues(metricValues);
+
+//        metric1.setMetricValues(new TreeMap<Long, Double>() {{
+//            put(now - 100, 1.20);
+//            put(now - 200, 11.25);
+//            put(now - 300, 1.30);
+//            put(now - 400, 4.50);
+//            put(now - 500, 16.35);
+//            put(now - 400, 5.50);
+//        }});
+
+        timelineMetrics.getMetrics().add(metric1);
+
+        for (int i = 0; i<1; i++) {
+            new AmsKafkaProducer("avijayan-ams-2.openstacklocal:6667").sendMetrics(timelineMetrics);
+            Thread.sleep(1000);
+        }
+    }
+}
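+
+// The payload on the topic is the Jackson tree form of TimelineMetrics, so a
+// consumer can rebuild the object symmetrically (sketch; assumes compatible
+// Jackson versions on both sides):
+//
+//   ObjectMapper mapper = new ObjectMapper();
+//   String json = mapper.writeValueAsString(timelineMetrics); // what lands on the topic
+//   TimelineMetrics parsed = mapper.readValue(json, TimelineMetrics.class);
+//   // mirrors the mapper.readValue(...) call in MetricAnomalyDetector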
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/spark/AnomalyMetricPublisher.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/spark/AnomalyMetricPublisher.java
new file mode 100644
index 0000000..5a6bb61
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/spark/AnomalyMetricPublisher.java
@@ -0,0 +1,181 @@
+package org.apache.ambari.metrics.alertservice.spark;
+
+import org.apache.ambari.metrics.alertservice.common.MetricAnomaly;
+import org.apache.ambari.metrics.alertservice.common.TimelineMetric;
+import org.apache.ambari.metrics.alertservice.common.TimelineMetrics;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.codehaus.jackson.map.AnnotationIntrospector;
+import org.codehaus.jackson.map.ObjectMapper;
+import org.codehaus.jackson.map.annotate.JsonSerialize;
+import org.codehaus.jackson.xc.JaxbAnnotationIntrospector;
+
+import java.io.IOException;
+import java.io.OutputStream;
+import java.io.Serializable;
+import java.net.HttpURLConnection;
+import java.net.InetAddress;
+import java.net.URL;
+import java.net.UnknownHostException;
+import java.util.*;
+
+public class AnomalyMetricPublisher implements Serializable {
+
+    private String hostName = "UNKNOWN.example.com";
+    private String instanceId = null;
+    private String serviceName = "anomaly-engine";
+    private String collectorHost;
+    private String protocol;
+    private String port;
+    private static final String WS_V1_TIMELINE_METRICS = "/ws/v1/timeline/metrics";
+    private static final Log LOG = LogFactory.getLog(AnomalyMetricPublisher.class);
+    private static ObjectMapper mapper;
+
+    static {
+        mapper = new ObjectMapper();
+        AnnotationIntrospector introspector = new JaxbAnnotationIntrospector();
+        mapper.setAnnotationIntrospector(introspector);
+        mapper.setSerializationInclusion(JsonSerialize.Inclusion.NON_NULL);
+    }
+
+    public AnomalyMetricPublisher(String collectorHost, String protocol, String port) {
+        this.collectorHost = collectorHost;
+        this.protocol = protocol;
+        this.port = port;
+        this.hostName = getDefaultLocalHostName();
+    }
+
+    private String getDefaultLocalHostName() {
+        try {
+            return InetAddress.getLocalHost().getCanonicalHostName();
+        } catch (UnknownHostException e) {
+            LOG.info("Error getting host address");
+        }
+        return null;
+    }
+
+    public void publish(List<MetricAnomaly> metricAnomalies) {
+        LOG.info("Sending metric anomalies of size : " + metricAnomalies.size());
+        List<TimelineMetric> metricList = getTimelineMetricList(metricAnomalies);
+        LOG.info("Sending TimelineMetric list of size : " + metricList.size());
+        if (!metricList.isEmpty()) {
+            TimelineMetrics timelineMetrics = new TimelineMetrics();
+            timelineMetrics.setMetrics(metricList);
+            emitMetrics(timelineMetrics);
+        }
+    }
+
+    private List<TimelineMetric> getTimelineMetricList(List<MetricAnomaly> metricAnomalies) {
+        List<TimelineMetric> metrics = new ArrayList<>();
+
+        if (metricAnomalies.isEmpty()) {
+            return metrics;
+        }
+
+        long currentTime = System.currentTimeMillis();
+        MetricAnomaly prevAnomaly = metricAnomalies.get(0);
+
+        TimelineMetric timelineMetric = new TimelineMetric();
+        timelineMetric.setMetricName(prevAnomaly.getMetricKey() + "_" + prevAnomaly.getMethodResult().getMethodType());
+        timelineMetric.setAppId(serviceName);
+        timelineMetric.setInstanceId(instanceId);
+        timelineMetric.setHostName(hostName);
+        timelineMetric.setStartTime(currentTime);
+
+        TreeMap<Long,Double> metricValues = new TreeMap<>();
+        metricValues.put(prevAnomaly.getTimestamp(), prevAnomaly.getMetricValue());
+        MetricAnomaly currentAnomaly;
+
+        for (int i = 1; i < metricAnomalies.size(); i++) {
+            currentAnomaly = metricAnomalies.get(i);
+            if (currentAnomaly.getMetricKey().equals(prevAnomaly.getMetricKey())) {
+                metricValues.put(currentAnomaly.getTimestamp(), currentAnomaly.getMetricValue());
+            } else {
+                timelineMetric.setMetricValues(metricValues);
+                metrics.add(timelineMetric);
+
+                timelineMetric = new TimelineMetric();
+                timelineMetric.setMetricName(currentAnomaly.getMetricKey() + "_" + currentAnomaly.getMethodResult().getMethodType());
+                timelineMetric.setAppId(serviceName);
+                timelineMetric.setInstanceId(instanceId);
+                timelineMetric.setHostName(hostName);
+                timelineMetric.setStartTime(currentTime);
+                metricValues = new TreeMap<>();
+                metricValues.put(currentAnomaly.getTimestamp(), currentAnomaly.getMetricValue());
+                prevAnomaly = currentAnomaly;
+            }
+        }
+
+        timelineMetric.setMetricValues(metricValues);
+        metrics.add(timelineMetric);
+        return metrics;
+    }
+
+    private boolean emitMetrics(TimelineMetrics metrics) {
+        String connectUrl = constructTimelineMetricUri();
+        String jsonData = null;
+        LOG.info("EmitMetrics connectUrl = "  + connectUrl);
+        try {
+            jsonData = mapper.writeValueAsString(metrics);
+        } catch (IOException e) {
+            LOG.error("Unable to parse metrics", e);
+        }
+        if (jsonData != null) {
+            return emitMetricsJson(connectUrl, jsonData);
+        }
+        return false;
+    }
+
+    private HttpURLConnection getConnection(String spec) throws IOException {
+        return (HttpURLConnection) new URL(spec).openConnection();
+    }
+
+    private boolean emitMetricsJson(String connectUrl, String jsonData) {
+        LOG.info("Metrics Data : " + jsonData);
+        int timeout = 10000;
+        HttpURLConnection connection = null;
+        try {
+            if (connectUrl == null) {
+                throw new IOException("Unknown URL. Unable to connect to metrics collector.");
+            }
+            connection = getConnection(connectUrl);
+
+            connection.setRequestMethod("POST");
+            connection.setRequestProperty("Content-Type", "application/json");
+            connection.setRequestProperty("Connection", "Keep-Alive");
+            connection.setConnectTimeout(timeout);
+            connection.setReadTimeout(timeout);
+            connection.setDoOutput(true);
+
+            if (jsonData != null) {
+                try (OutputStream os = connection.getOutputStream()) {
+                    os.write(jsonData.getBytes("UTF-8"));
+                }
+            }
+
+            int statusCode = connection.getResponseCode();
+
+            if (statusCode != 200) {
+                LOG.info("Unable to POST metrics to collector, " + connectUrl + ", " +
+                        "statusCode = " + statusCode);
+                return false;
+            }
+            LOG.info("Metrics posted to Collector " + connectUrl);
+            return true;
+        } catch (IOException ioe) {
+            LOG.error(ioe.getMessage());
+        }
+        return false;
+    }
+
+    private String constructTimelineMetricUri() {
+        StringBuilder sb = new StringBuilder(protocol);
+        sb.append("://");
+        sb.append(collectorHost);
+        sb.append(":");
+        sb.append(port);
+        sb.append(WS_V1_TIMELINE_METRICS);
+        return sb.toString();
+    }
+}
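+
+// constructTimelineMetricUri() yields protocol://host:port/ws/v1/timeline/metrics.
+// Usage sketch (hostname and port are assumptions; 6188 is the usual AMS
+// collector port):
+//
+//   AnomalyMetricPublisher publisher =
+//       new AnomalyMetricPublisher("c6401.ambari.apache.org", "http", "6188");
+//   publisher.publish(anomalies); // no-op when the anomaly list is empty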
diff --git a/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/spark/MetricAnomalyDetector.java b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/spark/MetricAnomalyDetector.java
new file mode 100644
index 0000000..ab87a95
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-alertservice/src/main/java/org/apache/ambari/metrics/alertservice/spark/MetricAnomalyDetector.java
@@ -0,0 +1,134 @@
+package org.apache.ambari.metrics.alertservice.spark;
+
+import com.fasterxml.jackson.databind.ObjectMapper;
+import org.apache.ambari.metrics.alertservice.common.MetricAnomaly;
+import org.apache.ambari.metrics.alertservice.common.TimelineMetric;
+import org.apache.ambari.metrics.alertservice.common.TimelineMetrics;
+import org.apache.ambari.metrics.alertservice.methods.ema.EmaModel;
+import org.apache.ambari.metrics.alertservice.methods.MetricAnomalyModel;
+import org.apache.ambari.metrics.alertservice.methods.ema.EmaModelLoader;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.spark.SparkConf;
+import org.apache.spark.api.java.function.Function;
+import org.apache.spark.streaming.Duration;
+import org.apache.spark.streaming.api.java.JavaDStream;
+import org.apache.spark.streaming.api.java.JavaPairDStream;
+import org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream;
+import org.apache.spark.streaming.api.java.JavaStreamingContext;
+import org.apache.spark.streaming.kafka.KafkaUtils;
+import scala.Tuple2;
+
+import java.util.*;
+
+public class MetricAnomalyDetector {
+
+    private static final Log LOG = LogFactory.getLog(MetricAnomalyDetector.class);
+    private static String groupId = "ambari-metrics-group";
+    private static String topicName = "ambari-metrics-topic";
+    private static int numThreads = 1;
+
+    //private static String zkQuorum = "avijayan-ams-1.openstacklocal:2181,avijayan-ams-2.openstacklocal:2181,avijayan-ams-3.openstacklocal:2181";
+    //private static Map<String, String> kafkaParams = new HashMap<>();
+    //static {
+    //    kafkaParams.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "avijayan-ams-2.openstacklocal:6667");
+    //    kafkaParams.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.ByteArraySerializer");
+    //    kafkaParams.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.connect.json.JsonSerializer");
+    //    kafkaParams.put("metadata.broker.list", "avijayan-ams-2.openstacklocal:6667");
+    //}
+
+    public MetricAnomalyDetector() {
+    }
+
+    public static void main(String[] args) throws InterruptedException {
+
+
+        if (args.length < 6) {
+            System.err.println("Usage: MetricAnomalyDetector <method1,method2> <appid1,appid2> <collector_host> <port> <protocol> <zkQuorum>");
+            System.exit(1);
+        }
+
+        List<String> appIds = Arrays.asList(args[1].split(","));
+        String collectorHost = args[2];
+        String collectorPort = args[3];
+        String collectorProtocol = args[4];
+        String zkQuorum = args[5];
+
+        List<MetricAnomalyModel> anomalyDetectionModels = new ArrayList<>();
+        AnomalyMetricPublisher anomalyMetricPublisher = new AnomalyMetricPublisher(collectorHost, collectorProtocol, collectorPort);
+
+        SparkConf sparkConf = new SparkConf().setAppName("AmbariMetricsAnomalyDetector");
+
+        JavaStreamingContext jssc = new JavaStreamingContext(sparkConf, new Duration(10000));
+
+        for (String method : args[0].split(",")) {
+            if (method.equals("ema")) {
+                LOG.info("Model EMA requested.");
+                EmaModel emaModel = new EmaModelLoader().load(jssc.sparkContext().sc(), "/tmp/model/ema");
+                anomalyDetectionModels.add(emaModel);
+            }
+        }
+
+        JavaPairReceiverInputDStream<String, String> messages =
+                KafkaUtils.createStream(jssc, zkQuorum, groupId, Collections.singletonMap(topicName, numThreads));
+
+        //Convert JSON string to TimelineMetrics.
+        JavaDStream<TimelineMetrics> timelineMetricsStream = messages.map(new Function<Tuple2<String, String>, TimelineMetrics>() {
+            @Override
+            public TimelineMetrics call(Tuple2<String, String> message) throws Exception {
+                LOG.info(message._2());
+                ObjectMapper mapper = new ObjectMapper();
+                TimelineMetrics metrics = mapper.readValue(message._2, TimelineMetrics.class);
+                return metrics;
+            }
+        });
+
+        //Group TimelineMetric by AppId.
+        JavaPairDStream<String, TimelineMetrics> appMetricStream = timelineMetricsStream.mapToPair(
+                timelineMetrics -> new Tuple2<String, TimelineMetrics>(timelineMetrics.getMetrics().get(0).getAppId(), timelineMetrics)
+        );
+
+        appMetricStream.print();
+
+        //Filter AppIds that are not needed.
+        JavaPairDStream<String, TimelineMetrics> filteredAppMetricStream = appMetricStream.filter(new Function<Tuple2<String, TimelineMetrics>, Boolean>() {
+            @Override
+            public Boolean call(Tuple2<String, TimelineMetrics> appMetricTuple) throws Exception {
+                return appIds.contains(appMetricTuple._1);
+            }
+        });
+
+        filteredAppMetricStream.print();
+
+        filteredAppMetricStream.foreachRDD(rdd -> {
+            rdd.foreach(
+                    tuple2 -> {
+                        TimelineMetrics metrics = tuple2._2();
+                        LOG.info("Received Metric : " + metrics.getMetrics().get(0).getMetricName());
+                        for (TimelineMetric metric : metrics.getMetrics()) {
+
+                            TimelineMetric timelineMetric =
+                                    new TimelineMetric(metric.getMetricName(), metric.getAppId(), metric.getHostName(), metric.getMetricValues());
+                            LOG.info("Models size : " + anomalyDetectionModels.size());
+
+                            for (MetricAnomalyModel model : anomalyDetectionModels) {
+                                LOG.info("Testing against Model : " + model.getClass().getCanonicalName());
+                                List<MetricAnomaly> anomalies = model.test(timelineMetric);
+                                anomalyMetricPublisher.publish(anomalies);
+                                for (MetricAnomaly anomaly : anomalies) {
+                                    LOG.info(anomaly.getAnomalyAsString());
+                                }
+
+                            }
+                        }
+                    });
+        });
+
+        jssc.start();
+        jssc.awaitTermination();
+    }
+}
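
EmaModel's implementation is outside this hunk. As a rough, self-contained sketch of the k-sigma EMA test such a model typically performs (the class name, update rule, and thresholds below are illustrative assumptions, not the committed code):

    import java.util.Map;
    import java.util.TreeMap;

    public class EmaSketch {
      private double ema = Double.NaN; // exponentially weighted mean
      private double ems = 0.0;        // exponentially weighted squared error
      private final double weight;     // smoothing factor, e.g. 0.9
      private final double timessdev;  // band width in standard deviations

      public EmaSketch(double weight, double timessdev) {
        this.weight = weight;
        this.timessdev = timessdev;
      }

      // Flags a value that falls outside ema +/- timessdev * stddev,
      // then folds the value into the running estimates.
      public boolean testAndUpdate(double value) {
        if (Double.isNaN(ema)) { // first sample seeds the average
          ema = value;
          return false;
        }
        double stddev = Math.sqrt(ems);
        boolean anomaly = stddev > 0 && Math.abs(value - ema) > timessdev * stddev;
        double diff = value - ema;
        ema = weight * ema + (1 - weight) * value;
        ems = weight * ems + (1 - weight) * diff * diff;
        return anomaly;
      }

      public static void main(String[] args) {
        EmaSketch sketch = new EmaSketch(0.9, 3);
        Map<Long, Double> series = new TreeMap<>();
        for (long t = 0; t < 20; t++) {
          series.put(t, 10.0 + (t % 2) * 0.1); // steady signal
        }
        series.put(20L, 50.0); // one spike
        for (Map.Entry<Long, Double> e : series.entrySet()) {
          if (sketch.testAndUpdate(e.getValue())) {
            System.out.println("Anomaly at t=" + e.getKey() + ", value=" + e.getValue());
          }
        }
      }
    }
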
diff --git a/ambari-metrics/ambari-metrics-spark/pom.xml b/ambari-metrics/ambari-metrics-spark/pom.xml
new file mode 100644
index 0000000..33b4257
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-spark/pom.xml
@@ -0,0 +1,133 @@
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
+    <parent>
+        <artifactId>ambari-metrics</artifactId>
+        <groupId>org.apache.ambari</groupId>
+        <version>2.5.1.0.0</version>
+    </parent>
+    <modelVersion>4.0.0</modelVersion>
+    <artifactId>ambari-metrics-spark</artifactId>
+    <version>2.5.1.0.0</version>
+    <properties>
+        <scala.version>2.10.4</scala.version>
+    </properties>
+
+    <repositories>
+        <repository>
+            <id>scala-tools.org</id>
+            <name>Scala-Tools Maven2 Repository</name>
+            <url>http://scala-tools.org/repo-releases</url>
+        </repository>
+    </repositories>
+
+    <pluginRepositories>
+        <pluginRepository>
+            <id>scala-tools.org</id>
+            <name>Scala-Tools Maven2 Repository</name>
+            <url>http://scala-tools.org/repo-releases</url>
+        </pluginRepository>
+    </pluginRepositories>
+
+    <dependencies>
+        <dependency>
+            <groupId>org.scala-lang</groupId>
+            <artifactId>scala-library</artifactId>
+            <version>${scala.version}</version>
+        </dependency>
+        <dependency>
+            <groupId>junit</groupId>
+            <artifactId>junit</artifactId>
+            <version>4.4</version>
+            <scope>test</scope>
+        </dependency>
+        <dependency>
+            <groupId>org.specs</groupId>
+            <artifactId>specs</artifactId>
+            <version>1.2.5</version>
+            <scope>test</scope>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.spark</groupId>
+            <artifactId>spark-core_2.10</artifactId>
+            <version>1.6.3</version>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.spark</groupId>
+            <artifactId>spark-sql_2.10</artifactId>
+            <version>1.6.3</version>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.phoenix</groupId>
+            <artifactId>phoenix-spark</artifactId>
+            <version>4.7.0-HBase-1.1</version>
+            <scope>provided</scope>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.ambari</groupId>
+            <artifactId>ambari-metrics-alertservice</artifactId>
+            <version>2.5.1.0.0</version>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.logging.log4j</groupId>
+            <artifactId>log4j-api-scala_2.10</artifactId>
+            <version>2.8.2</version>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.spark</groupId>
+            <artifactId>spark-mllib_2.10</artifactId>
+            <!-- Keep in lockstep with the spark-core/spark-sql 1.6.3 artifacts above;
+                 mixing Spark 1.6 and 2.x jars on one classpath does not work. -->
+            <version>1.6.3</version>
+        </dependency>
+    </dependencies>
+
+    <build>
+        <sourceDirectory>src/main/scala</sourceDirectory>
+        <plugins>
+            <plugin>
+                <groupId>org.scala-tools</groupId>
+                <artifactId>maven-scala-plugin</artifactId>
+                <executions>
+                    <execution>
+                        <goals>
+                            <goal>compile</goal>
+                            <goal>testCompile</goal>
+                        </goals>
+                    </execution>
+                </executions>
+                <configuration>
+                    <scalaVersion>${scala.version}</scalaVersion>
+                    <args>
+                        <arg>-target:jvm-1.5</arg>
+                    </args>
+                </configuration>
+            </plugin>
+            <plugin>
+                <groupId>org.apache.maven.plugins</groupId>
+                <artifactId>maven-eclipse-plugin</artifactId>
+                <configuration>
+                    <downloadSources>true</downloadSources>
+                    <buildcommands>
+                        <buildcommand>ch.epfl.lamp.sdt.core.scalabuilder</buildcommand>
+                    </buildcommands>
+                    <additionalProjectnatures>
+                        <projectnature>ch.epfl.lamp.sdt.core.scalanature</projectnature>
+                    </additionalProjectnatures>
+                    <classpathContainers>
+                        <classpathContainer>org.eclipse.jdt.launching.JRE_CONTAINER</classpathContainer>
+                        <classpathContainer>ch.epfl.lamp.sdt.launching.SCALA_CONTAINER</classpathContainer>
+                    </classpathContainers>
+                </configuration>
+            </plugin>
+        </plugins>
+    </build>
+    <reporting>
+        <plugins>
+            <plugin>
+                <groupId>org.scala-tools</groupId>
+                <artifactId>maven-scala-plugin</artifactId>
+                <configuration>
+                    <scalaVersion>${scala.version}</scalaVersion>
+                </configuration>
+            </plugin>
+        </plugins>
+    </reporting>
+</project>
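
Note: the scala-tools.org repository referenced above has been offline for years; in practice the Scala library and the long-superseded maven-scala-plugin resolve from Maven Central, so the <repositories>/<pluginRepositories> blocks are vestigial here.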
diff --git a/ambari-metrics/ambari-metrics-spark/src/main/scala/org/apache/ambari/metrics/spark/MetricAnomalyDetector.scala b/ambari-metrics/ambari-metrics-spark/src/main/scala/org/apache/ambari/metrics/spark/MetricAnomalyDetector.scala
new file mode 100644
index 0000000..d4ed31a
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-spark/src/main/scala/org/apache/ambari/metrics/spark/MetricAnomalyDetector.scala
@@ -0,0 +1,97 @@
+package org.apache.ambari.metrics.spark
+
+import java.util
+
+import com.fasterxml.jackson.databind.ObjectMapper
+import org.apache.spark.SparkConf
+import org.apache.spark.streaming._
+import org.apache.spark.streaming.kafka._
+import org.apache.ambari.metrics.alertservice.common.{MetricAnomaly, TimelineMetrics}
+import org.apache.ambari.metrics.alertservice.methods.MetricAnomalyModel
+import org.apache.ambari.metrics.alertservice.methods.ema.{EmaModel, EmaModelLoader}
+import org.apache.ambari.metrics.alertservice.spark.AnomalyMetricPublisher
+import org.apache.log4j.Logger
+import org.apache.spark.storage.StorageLevel
+
+import scala.collection.JavaConversions._
+import org.apache.logging.log4j.scala.Logging
+
+object MetricAnomalyDetector extends Logging {
+
+  var zkQuorum = "avijayan-ams-1.openstacklocal:2181,avijayan-ams-2.openstacklocal:2181,avijayan-ams-3.openstacklocal:2181"
+  var groupId = "ambari-metrics-group"
+  var topicName = "ambari-metrics-topic"
+  var numThreads = 1
+  var anomalyDetectionModels: Array[MetricAnomalyModel] = Array[MetricAnomalyModel]()
+
+  def main(args: Array[String]): Unit = {
+
+    @transient
+    lazy val log: Logger = org.apache.log4j.LogManager.getLogger("MetricAnomalyDetectorLogger")
+
+    if (args.length < 5) {
+      System.err.println("Usage: MetricAnomalyDetector <method1,method2> <appid1,appid2> <collector_host> <port> <protocol>")
+      System.exit(1)
+    }
+
+    for (method <- args(0).split(",")) {
+      if (method == "ema") anomalyDetectionModels :+ new EmaModel()
+    }
+
+    // Splat the split array so this is a list of appIds, not a one-element list holding an array.
+    val appIds = util.Arrays.asList(args(1).split(","): _*)
+
+    val collectorHost = args(2)
+    val collectorPort = args(3)
+    val collectorProtocol = args(4)
+
+    val anomalyMetricPublisher: AnomalyMetricPublisher = new AnomalyMetricPublisher(collectorHost, collectorProtocol, collectorPort)
+
+    val sparkConf = new SparkConf().setAppName("AmbariMetricsAnomalyDetector")
+
+    val streamingContext = new StreamingContext(sparkConf, Duration(10000))
+
+    val emaModel = new EmaModelLoader().load(streamingContext.sparkContext, "/tmp/model/ema")
+
+    val kafkaStream = KafkaUtils.createStream(streamingContext, zkQuorum, groupId, Map(topicName -> numThreads), StorageLevel.MEMORY_AND_DISK_SER_2)
+    kafkaStream.print()
+
+    val timelineMetricsStream = kafkaStream.map( message => {
+      val mapper = new ObjectMapper
+      val metrics = mapper.readValue(message._2, classOf[TimelineMetrics])
+      metrics
+    })
+    timelineMetricsStream.print()
+
+    val appMetricStream = timelineMetricsStream.map( timelineMetrics => {
+      (timelineMetrics.getMetrics.get(0).getAppId, timelineMetrics)
+    })
+    appMetricStream.print()
+
+    val filteredAppMetricStream = appMetricStream.filter( appMetricTuple => {
+      appIds.contains(appMetricTuple._1)
+    } )
+    filteredAppMetricStream.print()
+
+    filteredAppMetricStream.foreachRDD( rdd => {
+      rdd.foreach( appMetricTuple => {
+        val timelineMetrics = appMetricTuple._2
+        log.info("Received Metric : " + timelineMetrics.getMetrics.get(0).getMetricName)
+        for (timelineMetric <- timelineMetrics.getMetrics) {
+          val anomalies = emaModel.test(timelineMetric)
+          anomalyMetricPublisher.publish(anomalies)
+          for (anomaly <- anomalies) {
+            log.info(anomaly.getAnomalyAsString)
+          }
+        }
+      })
+    })
+
+    streamingContext.start()
+    streamingContext.awaitTermination()
+  }
+}
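
For reference, this object takes five positional arguments, e.g. "ema HOST <collector_host> 6188 http" (host and port here are placeholders); unlike the Java variant above, the Kafka ZooKeeper quorum is hard-coded in zkQuorum rather than passed on the command line.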
diff --git a/ambari-metrics/ambari-metrics-spark/src/main/scala/org/apache/ambari/metrics/spark/SparkPhoenixReader.scala b/ambari-metrics/ambari-metrics-spark/src/main/scala/org/apache/ambari/metrics/spark/SparkPhoenixReader.scala
new file mode 100644
index 0000000..5ca7b17
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-spark/src/main/scala/org/apache/ambari/metrics/spark/SparkPhoenixReader.scala
@@ -0,0 +1,67 @@
+package org.apache.ambari.metrics.spark
+
+import org.apache.ambari.metrics.alertservice.common.TimelineMetric
+import org.apache.ambari.metrics.alertservice.methods.ema.EmaModel
+import org.apache.spark.mllib.stat.Statistics
+import org.apache.spark.sql.SQLContext
+import org.apache.spark.{SparkConf, SparkContext}
+import org.apache.spark.rdd.RDD
+
+object SparkPhoenixReader {
+
+  def main(args: Array[String]) {
+
+    if (args.length < 7) { // seven positional arguments, including <model_dir>
+      System.err.println("Usage: SparkPhoenixReader <metric_name> <appId> <hostname> <weight> <timessdev> <phoenixConnectionString> <model_dir>")
+      System.exit(1)
+    }
+
+    val metricName = args(0)
+    val appId = args(1)
+    val hostname = args(2)
+    val weight = args(3).toDouble
+    val timessdev = args(4).toInt
+    val phoenixConnectionString = args(5) //avijayan-ams-3.openstacklocal:61181:/ams-hbase-unsecure
+    val modelDir = args(6)
+
+    val conf = new SparkConf()
+    conf.set("spark.app.name", "AMSAnomalyModelBuilder")
+    //conf.set("spark.master", "spark://avijayan-ams-2.openstacklocal:7077")
+
+    val sc = new SparkContext(conf)
+    val sqlContext = new SQLContext(sc)
+
+    val currentTime = System.currentTimeMillis()
+    val oneDayBack = currentTime - 24*60*60*1000
+
+    val df = sqlContext.load("org.apache.phoenix.spark", Map("table" -> "METRIC_RECORD", "zkUrl" -> phoenixConnectionString))
+    df.registerTempTable("METRIC_RECORD")
+    val result = sqlContext.sql("SELECT METRIC_NAME, HOSTNAME, APP_ID, SERVER_TIME, METRIC_SUM, METRIC_COUNT FROM METRIC_RECORD " +
+      "WHERE METRIC_NAME = '" + metricName + "' AND HOSTNAME = '" + hostname + "' AND APP_ID = '" + appId + "' AND SERVER_TIME < " + currentTime + " AND SERVER_TIME > " + oneDayBack)
+
+    val metricValues = new java.util.TreeMap[java.lang.Long, java.lang.Double]
+    result.collect().foreach(
+      t => metricValues.put(t.getLong(3), t.getDouble(4) / t.getInt(5))
+    )
+
+    //val metricName = result.head().getString(0)
+    //val hostname = result.head().getString(1)
+    //val appId = result.head().getString(2)
+
+    val timelineMetric = new TimelineMetric(metricName, appId, hostname, metricValues)
+
+    val emaModel = new EmaModel()
+    emaModel.train(timelineMetric, weight, timessdev)
+    emaModel.save(sc, modelDir)
+
+//    var metricData:Seq[Double] = Seq.empty
+//    result.collect().foreach(
+//      t => metricData :+ t.getDouble(4) / t.getInt(5)
+//    )
+//    val data: RDD[Double] = sc.parallelize(metricData)
+//    val myCDF = Map(0.1 -> 0.2, 0.15 -> 0.6, 0.2 -> 0.05, 0.3 -> 0.05, 0.25 -> 0.1)
+//    val testResult2 = Statistics.kolmogorovSmirnovTest(data, myCDF)
+
+  }
+
+}
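
The weight and timessdev arguments parameterize the model that EmaModel.train fits. In the usual EMA formulation (an assumption here; the model code is outside this patch) the smoothed estimate is s_t = weight * s_(t-1) + (1 - weight) * x_t, and a point is anomalous when |x_t - s_t| exceeds timessdev standard deviations of the recent error. The job seeds those estimates from one day of METRIC_RECORD history before persisting the model to <model_dir>.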
diff --git a/ambari-metrics/ambari-metrics-timelineservice/pom.xml b/ambari-metrics/ambari-metrics-timelineservice/pom.xml
index fc67cb1..67b7f4b 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/pom.xml
+++ b/ambari-metrics/ambari-metrics-timelineservice/pom.xml
@@ -697,6 +697,11 @@
       <version>1.0.0.0-SNAPSHOT</version>
       <scope>test</scope>
     </dependency>
+    <dependency>
+      <groupId>org.apache.ambari</groupId>
+      <artifactId>ambari-metrics-alertservice</artifactId>
+      <version>2.5.1.0.0</version>
+    </dependency>
   </dependencies>
 
   <profiles>
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/HBaseTimelineMetricStore.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/HBaseTimelineMetricsService.java
similarity index 92%
rename from ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/HBaseTimelineMetricStore.java
rename to ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/HBaseTimelineMetricsService.java
index 0836a72..3558f87 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/HBaseTimelineMetricStore.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/HBaseTimelineMetricsService.java
@@ -19,6 +19,7 @@ package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline
 
 import com.google.common.collect.ArrayListMultimap;
 import com.google.common.collect.Multimap;
+import org.apache.ambari.metrics.alertservice.spark.AmsKafkaProducer;
 import org.apache.commons.collections.MapUtils;
 import org.apache.commons.lang.StringUtils;
 import org.apache.commons.logging.Log;
@@ -63,10 +64,7 @@ import java.util.Iterator;
 import java.util.List;
 import java.util.Map;
 import java.util.Set;
-import java.util.concurrent.Executors;
-import java.util.concurrent.ScheduledExecutorService;
-import java.util.concurrent.ThreadFactory;
-import java.util.concurrent.TimeUnit;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.Executors;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.ThreadFactory;
+import java.util.concurrent.TimeUnit;
 
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.TIMELINE_METRICS_HOST_INMEMORY_AGGREGATION;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.USE_GROUPBY_AGGREGATOR_QUERIES;
@@ -85,6 +83,7 @@ public class HBaseTimelineMetricsService extends AbstractService implements Time
   private Integer defaultTopNHostsLimit;
   private MetricCollectorHAController haController;
   private boolean containerMetricsDisabled = false;
+  private AmsKafkaProducer kafkaProducer = new AmsKafkaProducer("104.196.85.21:6667");
 
   /**
    * Construct the service.
@@ -372,11 +371,43 @@ public class HBaseTimelineMetricsService extends AbstractService implements Time
     // Error indicated by the Sql exception
     TimelinePutResponse response = new TimelinePutResponse();
 
+    try {
+      if (!metrics.getMetrics().isEmpty() && metrics.getMetrics().get(0).getAppId().equals("HOST")) {
+        kafkaProducer.sendMetrics(fromTimelineMetrics(metrics));
+      }
+    } catch (InterruptedException | ExecutionException e) {
+      LOG.error(e);
+    }
     hBaseAccessor.insertMetricRecordsWithMetadata(metricMetadataManager, metrics, false);
 
     return response;
   }
 
+
+  private org.apache.ambari.metrics.alertservice.common.TimelineMetrics fromTimelineMetrics(TimelineMetrics timelineMetrics) {
+    org.apache.ambari.metrics.alertservice.common.TimelineMetrics otherMetrics = new org.apache.ambari.metrics.alertservice.common.TimelineMetrics();
+
+    List<org.apache.ambari.metrics.alertservice.common.TimelineMetric> timelineMetricList = new ArrayList<>();
+    for (TimelineMetric timelineMetric : timelineMetrics.getMetrics()) {
+      timelineMetricList.add(fromTimelineMetric(timelineMetric));
+    }
+    otherMetrics.setMetrics(timelineMetricList);
+    return otherMetrics;
+  }
+
+  private org.apache.ambari.metrics.alertservice.common.TimelineMetric fromTimelineMetric(TimelineMetric timelineMetric) {
+
+    org.apache.ambari.metrics.alertservice.common.TimelineMetric otherMetric = new org.apache.ambari.metrics.alertservice.common.TimelineMetric();
+    otherMetric.setMetricValues(timelineMetric.getMetricValues());
+    otherMetric.setStartTime(timelineMetric.getStartTime());
+    otherMetric.setHostName(timelineMetric.getHostName());
+    otherMetric.setInstanceId(timelineMetric.getInstanceId());
+    otherMetric.setAppId(timelineMetric.getAppId());
+    otherMetric.setMetricName(timelineMetric.getMetricName());
+
+    return otherMetric;
+  }
+
   @Override
   public TimelinePutResponse putContainerMetrics(List<ContainerMetric> metrics)
       throws SQLException, IOException {
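
The AmsKafkaProducer above is constructed with a hard-coded test broker (104.196.85.21:6667). A minimal sketch of reading the broker list from the metrics configuration instead; the property name below is hypothetical, not an existing AMS key:

    // Hypothetical key; getMetricsConf() can throw, handling elided for brevity.
    String brokers = configuration.getMetricsConf()
        .get("timeline.metrics.raw.sink.kafka.brokers", "localhost:6667");
    AmsKafkaProducer kafkaProducer = new AmsKafkaProducer(brokers);
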
diff --git a/ambari-metrics/pom.xml b/ambari-metrics/pom.xml
index 47255ea..79ea06f 100644
--- a/ambari-metrics/pom.xml
+++ b/ambari-metrics/pom.xml
@@ -34,6 +34,8 @@
     <module>ambari-metrics-grafana</module>
     <module>ambari-metrics-assembly</module>
     <module>ambari-metrics-host-aggregator</module>
+    <module>ambari-metrics-alertservice</module>
+    <module>ambari-metrics-spark</module>
   </modules>
   <properties>
     <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>


[ambari] 37/39: AMBARI-23225: Ambari Server login fails due to TimelineMetricsCacheSizeOfEngine error in AMS perf branch.

commit 01243e2b639c08a4172b378dbcac96bb2de3fe67
Author: Aravindan Vijayan <av...@hortonworks.com>
AuthorDate: Tue Mar 13 16:23:19 2018 -0700

    AMBARI-23225: Ambari Server login fails due to TimelineMetricsCacheSizeOfEngine error in AMS perf branch.
---
 .../cache/TimelineMetricsCacheSizeOfEngine.java    | 85 ++++++++++++++++++++--
 1 file changed, 77 insertions(+), 8 deletions(-)

diff --git a/ambari-server/src/main/java/org/apache/ambari/server/controller/metrics/timeline/cache/TimelineMetricsCacheSizeOfEngine.java b/ambari-server/src/main/java/org/apache/ambari/server/controller/metrics/timeline/cache/TimelineMetricsCacheSizeOfEngine.java
index 8b54017..baae751 100644
--- a/ambari-server/src/main/java/org/apache/ambari/server/controller/metrics/timeline/cache/TimelineMetricsCacheSizeOfEngine.java
+++ b/ambari-server/src/main/java/org/apache/ambari/server/controller/metrics/timeline/cache/TimelineMetricsCacheSizeOfEngine.java
@@ -17,24 +17,61 @@
  */
 package org.apache.ambari.server.controller.metrics.timeline.cache;
 
-import org.apache.hadoop.metrics2.sink.timeline.cache.TimelineMetricsEhCacheSizeOfEngine;
+import java.util.Map;
+import java.util.TreeMap;
+
+import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
+import org.apache.hadoop.metrics2.sink.timeline.TimelineMetrics;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
+import net.sf.ehcache.pool.Size;
+import net.sf.ehcache.pool.SizeOfEngine;
+import net.sf.ehcache.pool.impl.DefaultSizeOfEngine;
+import net.sf.ehcache.pool.sizeof.ReflectionSizeOf;
+import net.sf.ehcache.pool.sizeof.SizeOf;
+
 /**
  * Cache sizing engine that reduces reflective calls over the Object graph to
  * find total Heap usage.
  */
-public class TimelineMetricsCacheSizeOfEngine extends TimelineMetricsEhCacheSizeOfEngine {
+public class TimelineMetricsCacheSizeOfEngine implements SizeOfEngine {
 
   private final static Logger LOG = LoggerFactory.getLogger(TimelineMetricsCacheSizeOfEngine.class);
+  public static final int DEFAULT_MAX_DEPTH = 1000;
+  public static final boolean DEFAULT_ABORT_WHEN_MAX_DEPTH_EXCEEDED = false;
+
+  private SizeOfEngine underlying = null;
+  SizeOf reflectionSizeOf = new ReflectionSizeOf();
+
+  // Optimizations
+  private volatile long timelineMetricPrimitivesApproximation = 0;
+
+  private long sizeOfMapEntry;
+  private long sizeOfMapEntryOverhead;
+
+  private TimelineMetricsCacheSizeOfEngine(SizeOfEngine underlying) {
+    this.underlying = underlying;
+  }
 
   public TimelineMetricsCacheSizeOfEngine() {
-    // Invoke default constructor in base class
+    this(new DefaultSizeOfEngine(DEFAULT_MAX_DEPTH, DEFAULT_ABORT_WHEN_MAX_DEPTH_EXCEEDED));
+
+    this.sizeOfMapEntry = reflectionSizeOf.sizeOf(Long.valueOf(1L)) +
+      reflectionSizeOf.sizeOf(Double.valueOf(2.0));
+
+    // SizeOfMapEntryOverhead = SizeOfMapWithOneEntry - (SizeOfEmptyMap + SizeOfOneEntry)
+    TreeMap<Long, Double> map = new TreeMap<>();
+    long emptyMapSize = reflectionSizeOf.sizeOf(map);
+    map.put(Long.valueOf(1L), Double.valueOf(2.0));
+    long sizeOfMapOneEntry = reflectionSizeOf.deepSizeOf(DEFAULT_MAX_DEPTH, DEFAULT_ABORT_WHEN_MAX_DEPTH_EXCEEDED, map).getCalculated();
+    this.sizeOfMapEntryOverhead = sizeOfMapOneEntry - (emptyMapSize + this.sizeOfMapEntry);
+
+    LOG.info("Creating custom sizeof engine for TimelineMetrics.");
   }
 
   @Override
-  public long getSizeOfEntry(Object key, Object value) {
+  public Size sizeOf(Object key, Object value, Object container) {
     try {
       LOG.debug("BEGIN - Sizeof, key: {}, value: {}", key, value);
 
@@ -48,7 +85,7 @@ public class TimelineMetricsCacheSizeOfEngine extends TimelineMetricsEhCacheSize
         size += getTimelineMetricCacheValueSize((TimelineMetricsCacheValue) value);
       }
       // Mark size as not being exact
-      return size;
+      return new Size(size, false);
     } finally {
       LOG.debug("END - Sizeof, key: {}", key);
     }
@@ -71,13 +108,45 @@ public class TimelineMetricsCacheSizeOfEngine extends TimelineMetricsEhCacheSize
 
   private long getTimelineMetricCacheValueSize(TimelineMetricsCacheValue value) {
     long size = 16; // startTime + endTime
-
+    TimelineMetrics metrics = value.getTimelineMetrics();
     size += 8; // Object reference
 
-    size += getTimelineMetricsSize(value.getTimelineMetrics()); // TreeMap
+    if (metrics != null) {
+      for (TimelineMetric metric : metrics.getMetrics()) {
+
+        if (timelineMetricPrimitivesApproximation == 0) {
+          timelineMetricPrimitivesApproximation += reflectionSizeOf.sizeOf(metric.getMetricName());
+          timelineMetricPrimitivesApproximation += reflectionSizeOf.sizeOf(metric.getAppId());
+          timelineMetricPrimitivesApproximation += reflectionSizeOf.sizeOf(metric.getHostName());
+          timelineMetricPrimitivesApproximation += reflectionSizeOf.sizeOf(metric.getInstanceId());
+          timelineMetricPrimitivesApproximation += reflectionSizeOf.sizeOf(metric.getStartTime());
+          timelineMetricPrimitivesApproximation += reflectionSizeOf.sizeOf(metric.getType());
+          timelineMetricPrimitivesApproximation += 8; // Object overhead
+
+          LOG.debug("timelineMetricPrimitivesApproximation bytes = {}", timelineMetricPrimitivesApproximation);
+        }
+        size += timelineMetricPrimitivesApproximation;
+
+        Map<Long, Double> metricValues = metric.getMetricValues();
+        if (metricValues != null && !metricValues.isEmpty()) {
+          // Numeric wrapper: 12 bytes + 8 bytes Data type + 4 bytes alignment = 48 (Long, Double)
+          // Tree Map: 12 bytes for header + 20 bytes for 5 object fields : pointers + 1 byte for flag = 40
+          LOG.debug("Size of metric value: {}", (sizeOfMapEntry + sizeOfMapEntryOverhead) * metricValues.size());
+          size += (sizeOfMapEntry + sizeOfMapEntryOverhead) * metricValues.size(); // Treemap size is O(1)
+        }
+      }
+      LOG.debug("Total Size of metric values in cache: {}", size);
+    }
 
     return size;
   }
 
+  @Override
+  public SizeOfEngine copyWith(int maxDepth, boolean abortWhenMaxDepthExceeded) {
+    LOG.debug("Copying tracing sizeof engine, maxdepth: {}, abort: {}",
+      maxDepth, abortWhenMaxDepthExceeded);
 
-}
+    return new TimelineMetricsCacheSizeOfEngine(
+      underlying.copyWith(maxDepth, abortWhenMaxDepthExceeded));
+  }
+}
\ No newline at end of file
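
Using the figures from the comments above (a 48-byte boxed Long/Double pair plus 40 bytes of TreeMap entry overhead, roughly 88 bytes per datapoint), a cached TimelineMetric with, say, 300 values is estimated at about 26,400 bytes, plus the one-time primitives approximation for its name/appId/host strings; the engine deliberately trades exactness for avoiding a deep reflective walk on every cache put.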


[ambari] 01/39: AMBARI-21079. Add ability to sink Raw metrics to external system via Http. (swagle)

commit e60b1068f738934e7e321548f42d8909fbd2f133
Author: Siddharth Wagle <sw...@hortonworks.com>
AuthorDate: Tue May 23 14:01:14 2017 -0700

    AMBARI-21079. Add ability to sink Raw metrics to external system via Http. (swagle)
---
 ambari-metrics/ambari-metrics-common/pom.xml       |  29 ++-
 .../cache/TimelineMetricsEhCacheSizeOfEngine.java  | 108 ++++------
 .../ApplicationHistoryServer.java                  |  13 +-
 .../metrics/timeline/HBaseTimelineMetricStore.java |  30 ++-
 .../metrics/timeline/PhoenixHBaseAccessor.java     | 230 +++++++++++---------
 .../timeline/TimelineMetricConfiguration.java      | 176 +++++++++++-----
 .../TimelineMetricClusterAggregator.java           |   2 +-
 .../discovery/TimelineMetricMetadataManager.java   |  15 +-
 .../timeline/sink/DefaultFSSinkProvider.java       | 153 ++++++++++++++
 .../metrics/timeline/sink/ExternalMetricsSink.java |  48 +++++
 .../timeline/sink/ExternalSinkProvider.java        |  35 ++++
 .../metrics/timeline/sink/HttpSinkProvider.java    | 231 +++++++++++++++++++++
 .../DefaultInternalMetricsSourceProvider.java      |  42 ++++
 .../timeline/source/InternalMetricsSource.java     |  30 +++
 .../timeline/source/InternalSourceProvider.java    |  39 ++++
 .../metrics/timeline/source/RawMetricsSource.java  |  93 +++++++++
 .../source/cache/InternalMetricCacheKey.java       | 109 ++++++++++
 .../source/cache/InternalMetricCacheValue.java     |  37 ++++
 .../source/cache/InternalMetricsCache.java         | 231 +++++++++++++++++++++
 .../source/cache/InternalMetricsCacheProvider.java |  48 +++++
 .../cache/InternalMetricsCacheSizeOfEngine.java    |  66 ++++++
 .../TestApplicationHistoryServer.java              |   4 +-
 .../timeline/AbstractMiniHBaseClusterTest.java     |  49 ++---
 .../timeline/HBaseTimelineMetricStoreTest.java     |   8 +-
 .../metrics/timeline/ITPhoenixHBaseAccessor.java   | 110 +++++-----
 .../metrics/timeline/PhoenixHBaseAccessorTest.java | 167 ++++++---------
 .../timeline/TimelineMetricStoreWatcherTest.java   |   4 +-
 .../timeline/aggregators/ITClusterAggregator.java  |  72 +++----
 .../timeline/discovery/TestMetadataManager.java    |   2 +-
 .../timeline/discovery/TestMetadataSync.java       |   6 +-
 .../timeline/source/RawMetricsSourceTest.java      | 142 +++++++++++++
 .../cache/TimelineMetricsCacheSizeOfEngine.java    |  71 +------
 pom.xml                                            |  14 +-
 33 files changed, 1865 insertions(+), 549 deletions(-)

diff --git a/ambari-metrics/ambari-metrics-common/pom.xml b/ambari-metrics/ambari-metrics-common/pom.xml
index a6b28e3..bd94ad1 100644
--- a/ambari-metrics/ambari-metrics-common/pom.xml
+++ b/ambari-metrics/ambari-metrics-common/pom.xml
@@ -70,43 +70,47 @@
               <relocations>
                 <relocation>
                   <pattern>com.google</pattern>
-                  <shadedPattern>org.apache.hadoop.metrics2.sink.relocated.google</shadedPattern>
+                  <shadedPattern>org.apache.ambari.metrics.sink.relocated.google</shadedPattern>
                 </relocation>
                 <relocation>
                   <pattern>org.apache.commons.io</pattern>
-                  <shadedPattern>org.apache.hadoop.metrics2.sink.relocated.commons.io</shadedPattern>StormTimelineMetricsReporter
+                  <shadedPattern>org.apache.ambari.metrics.sink.relocated.commons.io</shadedPattern>
                 </relocation>
                 <relocation>
                   <pattern>org.apache.commons.lang</pattern>
-                  <shadedPattern>org.apache.hadoop.metrics2.sink.relocated.commons.lang</shadedPattern>
+                  <shadedPattern>org.apache.ambari.metrics.sink.relocated.commons.lang</shadedPattern>
                 </relocation>
                 <relocation>
                   <pattern>org.apache.curator</pattern>
-                  <shadedPattern>org.apache.hadoop.metrics2.sink.relocated.curator</shadedPattern>
+                  <shadedPattern>org.apache.ambari.metrics.sink.relocated.curator</shadedPattern>
                 </relocation>
                 <relocation>
                   <pattern>org.apache.jute</pattern>
-                  <shadedPattern>org.apache.hadoop.metrics2.sink.relocated.jute</shadedPattern>
+                  <shadedPattern>org.apache.ambari.metrics.sink.relocated.jute</shadedPattern>
                 </relocation>
                 <relocation>
                   <pattern>org.apache.zookeeper</pattern>
-                  <shadedPattern>org.apache.hadoop.metrics2.sink.relocated.zookeeper</shadedPattern>
+                  <shadedPattern>org.apache.ambari.metrics.sink.relocated.zookeeper</shadedPattern>
                 </relocation>
                 <relocation>
                   <pattern>org.slf4j</pattern>
-                  <shadedPattern>org.apache.hadoop.metrics2.sink.relocated.slf4j</shadedPattern>
+                  <shadedPattern>org.apache.ambari.metrics.sink.relocated.slf4j</shadedPattern>
                 </relocation>
                 <relocation>
                   <pattern>org.apache.log4j</pattern>
-                  <shadedPattern>org.apache.hadoop.metrics2.sink.relocated.log4j</shadedPattern>
+                  <shadedPattern>org.apache.ambari.metrics.sink.relocated.log4j</shadedPattern>
                 </relocation>
                 <relocation>
                   <pattern>jline</pattern>
-                  <shadedPattern>org.apache.hadoop.metrics2.sink.relocated.jline</shadedPattern>
+                  <shadedPattern>org.apache.ambari.metrics.sink.relocated.jline</shadedPattern>
                 </relocation>
                 <relocation>
                   <pattern>org.jboss</pattern>
-                  <shadedPattern>org.apache.hadoop.metrics2.sink.relocated.jboss</shadedPattern>
+                  <shadedPattern>org.apache.ambari.metrics.sink.relocated.jboss</shadedPattern>
+                </relocation>
+                <relocation>
+                  <pattern>net.sf.ehcache</pattern>
+                  <shadedPattern>org.apache.ambari.metrics.sink.relocated.ehcache</shadedPattern>
                 </relocation>
                 <relocation>
                   <pattern>org.apache.http</pattern>
@@ -122,6 +126,11 @@
 
   <dependencies>
     <dependency>
+      <groupId>net.sf.ehcache</groupId>
+      <artifactId>ehcache</artifactId>
+      <version>2.10.0</version>
+    </dependency>
+    <dependency>
       <groupId>commons-logging</groupId>
       <artifactId>commons-logging</artifactId>
       <version>1.1.1</version>
diff --git a/ambari-server/src/main/java/org/apache/ambari/server/controller/metrics/timeline/cache/TimelineMetricsCacheSizeOfEngine.java b/ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/cache/TimelineMetricsEhCacheSizeOfEngine.java
similarity index 52%
copy from ambari-server/src/main/java/org/apache/ambari/server/controller/metrics/timeline/cache/TimelineMetricsCacheSizeOfEngine.java
copy to ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/cache/TimelineMetricsEhCacheSizeOfEngine.java
index 2401d75..ea694b7 100644
--- a/ambari-server/src/main/java/org/apache/ambari/server/controller/metrics/timeline/cache/TimelineMetricsCacheSizeOfEngine.java
+++ b/ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/cache/TimelineMetricsEhCacheSizeOfEngine.java
@@ -1,4 +1,4 @@
-/*
+/**
  * Licensed to the Apache Software Foundation (ASF) under one
  * or more contributor license agreements.  See the NOTICE file
  * distributed with this work for additional information
@@ -6,16 +6,16 @@
  * to you under the Apache License, Version 2.0 (the
  * "License"); you may not use this file except in compliance
  * with the License.  You may obtain a copy of the License at
- * <p/>
+ * <p>
  * http://www.apache.org/licenses/LICENSE-2.0
- * <p/>
+ * <p>
  * Unless required by applicable law or agreed to in writing, software
  * distributed under the License is distributed on an "AS IS" BASIS,
  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.ambari.server.controller.metrics.timeline.cache;
+package org.apache.hadoop.metrics2.sink.timeline.cache;
 
 import java.util.Map;
 import java.util.TreeMap;
@@ -24,8 +24,6 @@ import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
 import org.apache.hadoop.metrics2.sink.timeline.TimelineMetrics;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
-
-import net.sf.ehcache.pool.Size;
 import net.sf.ehcache.pool.SizeOfEngine;
 import net.sf.ehcache.pool.impl.DefaultSizeOfEngine;
 import net.sf.ehcache.pool.sizeof.ReflectionSizeOf;
@@ -33,28 +31,32 @@ import net.sf.ehcache.pool.sizeof.SizeOf;
 
 /**
  * Cache sizing engine that reduces reflective calls over the Object graph to
- * find total Heap usage.
+ * find total Heap usage. Used for ehcache based on available memory.
  */
-public class TimelineMetricsCacheSizeOfEngine implements SizeOfEngine {
+public abstract class TimelineMetricsEhCacheSizeOfEngine implements SizeOfEngine {
+  private final static Logger LOG = LoggerFactory.getLogger(TimelineMetricsEhCacheSizeOfEngine.class);
+
+  private static int DEFAULT_MAX_DEPTH = 1000;
+  private static boolean DEFAULT_ABORT_WHEN_MAX_DEPTH_EXCEEDED = false;
 
-  private final static Logger LOG = LoggerFactory.getLogger(TimelineMetricsCacheSizeOfEngine.class);
-  public static int DEFAULT_MAX_DEPTH = 1000;
-  public static boolean DEFAULT_ABORT_WHEN_MAX_DEPTH_EXCEEDED = false;
+  // Base Engine
+  protected SizeOfEngine underlying = null;
 
-  private SizeOfEngine underlying = null;
-  SizeOf reflectionSizeOf = new ReflectionSizeOf();
+  // Counter
+  protected SizeOf reflectionSizeOf = new ReflectionSizeOf();
 
   // Optimizations
   private volatile long timelineMetricPrimitivesApproximation = 0;
 
+  // Map entry sizing
   private long sizeOfMapEntry;
   private long sizeOfMapEntryOverhead;
 
-  private TimelineMetricsCacheSizeOfEngine(SizeOfEngine underlying) {
+  protected TimelineMetricsEhCacheSizeOfEngine(SizeOfEngine underlying) {
     this.underlying = underlying;
   }
 
-  public TimelineMetricsCacheSizeOfEngine() {
+  public TimelineMetricsEhCacheSizeOfEngine() {
     this(new DefaultSizeOfEngine(DEFAULT_MAX_DEPTH, DEFAULT_ABORT_WHEN_MAX_DEPTH_EXCEEDED));
 
     this.sizeOfMapEntry = reflectionSizeOf.sizeOf(new Long(1)) +
@@ -70,46 +72,12 @@ public class TimelineMetricsCacheSizeOfEngine implements SizeOfEngine {
     LOG.info("Creating custom sizeof engine for TimelineMetrics.");
   }
 
-  @Override
-  public Size sizeOf(Object key, Object value, Object container) {
-    try {
-      LOG.debug("BEGIN - Sizeof, key: {}, value: {}", key, value);
-
-      long size = 0;
-
-      if (key instanceof TimelineAppMetricCacheKey) {
-        size += getTimelineMetricCacheKeySize((TimelineAppMetricCacheKey) key);
-      }
-
-      if (value instanceof TimelineMetricsCacheValue) {
-        size += getTimelineMetricCacheValueSize((TimelineMetricsCacheValue) value);
-      }
-      // Mark size as not being exact
-      return new Size(size, false);
-    } finally {
-      LOG.debug("END - Sizeof, key: {}", key);
-    }
-  }
-
-  private long getTimelineMetricCacheKeySize(TimelineAppMetricCacheKey key) {
-    long size = reflectionSizeOf.sizeOf(key.getAppId());
-    size += key.getMetricNames() != null && !key.getMetricNames().isEmpty() ?
-      reflectionSizeOf.deepSizeOf(1000, false, key.getMetricNames()).getCalculated() : 0;
-    size += key.getSpec() != null ?
-      reflectionSizeOf.deepSizeOf(1000, false, key.getSpec()).getCalculated() : 0;
-    size += key.getHostNames() != null ?
-      reflectionSizeOf.deepSizeOf(1000, false, key.getHostNames()).getCalculated() : 0;
-    // 4 fixed longs of @TemporalInfo + reference
-    size += 40;
-    size += 8; // Object overhead
-
-    return size;
-  }
-
-  private long getTimelineMetricCacheValueSize(TimelineMetricsCacheValue value) {
-    long size = 16; // startTime + endTime
-    TimelineMetrics metrics = value.getTimelineMetrics();
-    size += 8; // Object reference
+  /**
+   * Return size of the metrics TreeMap in an optimized way.
+   *
+   */
+  protected long getTimelineMetricsSize(TimelineMetrics metrics) {
+    long size = 8; // Object reference
 
     if (metrics != null) {
       for (TimelineMetric metric : metrics.getMetrics()) {
@@ -124,30 +92,24 @@ public class TimelineMetricsCacheSizeOfEngine implements SizeOfEngine {
           timelineMetricPrimitivesApproximation += reflectionSizeOf.sizeOf(metric.getType());
           timelineMetricPrimitivesApproximation += 8; // Object overhead
 
-          LOG.debug("timelineMetricPrimitivesApproximation bytes = {}", timelineMetricPrimitivesApproximation);
+          LOG.debug("timelineMetricPrimitivesApproximation bytes = " + timelineMetricPrimitivesApproximation);
         }
         size += timelineMetricPrimitivesApproximation;
-
-        Map<Long, Double> metricValues = metric.getMetricValues();
-        if (metricValues != null && !metricValues.isEmpty()) {
-          // Numeric wrapper: 12 bytes + 8 bytes Data type + 4 bytes alignment = 48 (Long, Double)
-          // Tree Map: 12 bytes for header + 20 bytes for 5 object fields : pointers + 1 byte for flag = 40
-          LOG.debug("Size of metric value: {}", (sizeOfMapEntry + sizeOfMapEntryOverhead) * metricValues.size());
-          size += (sizeOfMapEntry + sizeOfMapEntryOverhead) * metricValues.size(); // Treemap size is O(1)
-        }
+        size += getValueMapSize(metric.getMetricValues());
       }
-      LOG.debug("Total Size of metric values in cache: {}", size);
+      LOG.debug("Total Size of metric values in cache: " + size);
     }
-
     return size;
   }
 
-  @Override
-  public SizeOfEngine copyWith(int maxDepth, boolean abortWhenMaxDepthExceeded) {
-    LOG.debug("Copying tracing sizeof engine, maxdepth: {}, abort: {}",
-      maxDepth, abortWhenMaxDepthExceeded);
-
-    return new TimelineMetricsCacheSizeOfEngine(
-      underlying.copyWith(maxDepth, abortWhenMaxDepthExceeded));
+  protected long getValueMapSize(Map<Long, Double> metricValues) {
+    long size = 0;
+    if (metricValues != null && !metricValues.isEmpty()) {
+      // Numeric wrapper: 12 bytes + 8 bytes Data type + 4 bytes alignment = 48 (Long, Double)
+      // Tree Map: 12 bytes for header + 20 bytes for 5 object fields : pointers + 1 byte for flag = 40
+      LOG.debug("Size of metric value: " + (sizeOfMapEntry + sizeOfMapEntryOverhead) * metricValues.size());
+      size += (sizeOfMapEntry + sizeOfMapEntryOverhead) * metricValues.size(); // Treemap size is O(1)
+    }
+    return size;
   }
 }
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryServer.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryServer.java
index 1ca9c33..331670d 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryServer.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryServer.java
@@ -34,7 +34,7 @@ import org.apache.hadoop.util.StringUtils;
 import org.apache.hadoop.yarn.YarnUncaughtExceptionHandler;
 import org.apache.hadoop.yarn.conf.YarnConfiguration;
 import org.apache.hadoop.yarn.exceptions.YarnRuntimeException;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.HBaseTimelineMetricStore;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.HBaseTimelineMetricsService;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricStore;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.timeline.MemoryTimelineStore;
@@ -71,7 +71,7 @@ public class ApplicationHistoryServer extends CompositeService {
 
   @Override
   protected void serviceInit(Configuration conf) throws Exception {
-    metricConfiguration = new TimelineMetricConfiguration();
+    metricConfiguration = TimelineMetricConfiguration.getInstance();
     metricConfiguration.initialize();
     historyManager = createApplicationHistory();
     ahsClientService = createApplicationHistoryClientService(historyManager);
@@ -164,11 +164,16 @@ public class ApplicationHistoryServer extends CompositeService {
 
   protected TimelineMetricStore createTimelineMetricStore(Configuration conf) {
     LOG.info("Creating metrics store.");
-    return new HBaseTimelineMetricStore(metricConfiguration);
+    return new HBaseTimelineMetricsService(metricConfiguration);
   }
 
   protected void startWebApp() {
-    String bindAddress = metricConfiguration.getWebappAddress();
+    String bindAddress = null;
+    try {
+      bindAddress = metricConfiguration.getWebappAddress();
+    } catch (Exception e) {
+      throw new ExceptionInInitializerError("Cannot find bind address");
+    }
     LOG.info("Instantiating AHSWebApp at " + bindAddress);
     try {
       Configuration conf = metricConfiguration.getMetricsConf();
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/HBaseTimelineMetricStore.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/HBaseTimelineMetricStore.java
index 2342bd8..0836a72 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/HBaseTimelineMetricStore.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/HBaseTimelineMetricStore.java
@@ -51,6 +51,8 @@ import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.function.TimelineMetricsSeriesAggregateFunctionFactory;
 
 import java.io.IOException;
+import java.net.MalformedURLException;
+import java.net.URISyntaxException;
 import java.net.UnknownHostException;
 import java.sql.SQLException;
 import java.util.ArrayList;
@@ -71,9 +73,9 @@ import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.ti
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.DEFAULT_TOPN_HOSTS_LIMIT;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.availability.AggregationTaskRunner.ACTUAL_AGGREGATOR_NAMES;
 
-public class HBaseTimelineMetricStore extends AbstractService implements TimelineMetricStore {
+public class HBaseTimelineMetricsService extends AbstractService implements TimelineMetricStore {
 
-  static final Log LOG = LogFactory.getLog(HBaseTimelineMetricStore.class);
+  static final Log LOG = LogFactory.getLog(HBaseTimelineMetricsService.class);
   private final TimelineMetricConfiguration configuration;
   private PhoenixHBaseAccessor hBaseAccessor;
   private static volatile boolean isInitialized = false;
@@ -88,25 +90,28 @@ public class HBaseTimelineMetricStore extends AbstractService implements Timelin
    * Construct the service.
    *
    */
-  public HBaseTimelineMetricStore(TimelineMetricConfiguration configuration) {
-    super(HBaseTimelineMetricStore.class.getName());
+  public HBaseTimelineMetricsService(TimelineMetricConfiguration configuration) {
+    super(HBaseTimelineMetricsService.class.getName());
     this.configuration = configuration;
   }
 
   @Override
   protected void serviceInit(Configuration conf) throws Exception {
     super.serviceInit(conf);
-    initializeSubsystem(configuration.getHbaseConf(), configuration.getMetricsConf());
+    initializeSubsystem();
   }
 
-  private synchronized void initializeSubsystem(Configuration hbaseConf,
-                                                Configuration metricsConf) {
+  private synchronized void initializeSubsystem() {
     if (!isInitialized) {
-      hBaseAccessor = new PhoenixHBaseAccessor(hbaseConf, metricsConf);
+      hBaseAccessor = new PhoenixHBaseAccessor(null);
       // Initialize schema
       hBaseAccessor.initMetricSchema();
       // Initialize metadata from store
-      metricMetadataManager = new TimelineMetricMetadataManager(hBaseAccessor, metricsConf);
+      try {
+        metricMetadataManager = new TimelineMetricMetadataManager(hBaseAccessor);
+      } catch (MalformedURLException | URISyntaxException e) {
+        throw new ExceptionInInitializerError("Unable to initialize metadata manager");
+      }
       metricMetadataManager.initializeMetadata();
       // Initialize policies before TTL update
       hBaseAccessor.initPoliciesAndTTL();
@@ -128,6 +133,13 @@ public class HBaseTimelineMetricStore extends AbstractService implements Timelin
       //Initialize whitelisting & blacklisting if needed
       TimelineMetricsFilter.initializeMetricFilter(configuration);
 
+      Configuration metricsConf = null;
+      try {
+        metricsConf = configuration.getMetricsConf();
+      } catch (Exception e) {
+        throw new ExceptionInInitializerError("Cannot initialize configuration.");
+      }
+
       defaultTopNHostsLimit = Integer.parseInt(metricsConf.get(DEFAULT_TOPN_HOSTS_LIMIT, "20"));
       if (Boolean.parseBoolean(metricsConf.get(USE_GROUPBY_AGGREGATOR_QUERIES, "true"))) {
         LOG.info("Using group by aggregators for aggregating host and cluster metrics.");
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/PhoenixHBaseAccessor.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/PhoenixHBaseAccessor.java
index c10cf56..bab6bb2 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/PhoenixHBaseAccessor.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/PhoenixHBaseAccessor.java
@@ -17,88 +17,18 @@
  */
 package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline;
 
-import com.google.common.collect.Maps;
-import com.google.common.collect.Multimap;
-import org.apache.commons.lang.StringUtils;
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.hbase.DoNotRetryIOException;
-import org.apache.hadoop.hbase.HColumnDescriptor;
-import org.apache.hadoop.hbase.HTableDescriptor;
-import org.apache.hadoop.hbase.client.Durability;
-import org.apache.hadoop.hbase.client.HBaseAdmin;
-import org.apache.hadoop.hbase.util.RetryCounter;
-import org.apache.hadoop.hbase.util.RetryCounterFactory;
-import org.apache.hadoop.metrics2.sink.timeline.ContainerMetric;
-import org.apache.hadoop.metrics2.sink.timeline.MetricClusterAggregate;
-import org.apache.hadoop.metrics2.sink.timeline.MetricHostAggregate;
-import org.apache.hadoop.metrics2.sink.timeline.Precision;
-import org.apache.hadoop.metrics2.sink.timeline.SingleValuedTimelineMetric;
-import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
-import org.apache.hadoop.metrics2.sink.timeline.TimelineMetricMetadata;
-import org.apache.hadoop.metrics2.sink.timeline.TimelineMetrics;
-import org.apache.hadoop.util.ReflectionUtils;
-import org.apache.hadoop.yarn.api.records.ContainerId;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.AggregatorUtils;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.Function;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.TimelineClusterMetric;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.TimelineMetricReadHelper;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.discovery.TimelineMetricMetadataKey;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.discovery.TimelineMetricMetadataManager;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.Condition;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.DefaultPhoenixDataSource;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixConnectionProvider;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.SplitByMetricNamesCondition;
-import org.apache.hadoop.yarn.util.timeline.TimelineUtils;
-import org.apache.phoenix.exception.PhoenixIOException;
-import org.codehaus.jackson.map.ObjectMapper;
-import org.codehaus.jackson.type.TypeReference;
-
-import java.io.IOException;
-import java.sql.Connection;
-import java.sql.PreparedStatement;
-import java.sql.ResultSet;
-import java.sql.SQLException;
-import java.sql.Statement;
-import java.sql.Timestamp;
-import java.util.ArrayList;
-import java.util.Arrays;
-import java.util.Collection;
-import java.util.Collections;
-import java.util.HashMap;
-import java.util.HashSet;
-import java.util.Iterator;
-import java.util.List;
-import java.util.Map;
-import java.util.Set;
-import java.util.TreeMap;
-import java.util.concurrent.ArrayBlockingQueue;
-import java.util.concurrent.BlockingQueue;
-import java.util.concurrent.Executors;
-import java.util.concurrent.ScheduledExecutorService;
-import java.util.concurrent.TimeUnit;
-
 import static java.util.concurrent.TimeUnit.SECONDS;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.AGGREGATE_TABLE_SPLIT_POINTS;
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.TIMELINE_METRICS_PRECISION_TABLE_HBASE_BLOCKING_STORE_FILES;
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.TIMELINE_METRICS_AGGREGATE_TABLE_HBASE_BLOCKING_STORE_FILES;
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.TIMELINE_METRICS_HBASE_AGGREGATE_TABLE_COMPACTION_POLICY_CLASS;
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.TIMELINE_METRICS_HBASE_AGGREGATE_TABLE_COMPACTION_POLICY_KEY;
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.TIMELINE_METRICS_HBASE_PRECISION_TABLE_COMPACTION_POLICY_CLASS;
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.TIMELINE_METRICS_HBASE_PRECISION_TABLE_COMPACTION_POLICY_KEY;
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.TIMELINE_METRICS_PRECISION_TABLE_DURABILITY;
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.TIMELINE_METRICS_AGGREGATE_TABLES_DURABILITY;
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.HBASE_BLOCKING_STORE_FILES;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.AGGREGATORS_SKIP_BLOCK_CACHE;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.CLUSTER_DAILY_TABLE_TTL;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.CLUSTER_HOUR_TABLE_TTL;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.CLUSTER_MINUTE_TABLE_TTL;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.CLUSTER_SECOND_TABLE_TTL;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.CONTAINER_METRICS_TTL;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.GLOBAL_MAX_RETRIES;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.GLOBAL_RESULT_LIMIT;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.GLOBAL_RETRY_INTERVAL;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.HBASE_BLOCKING_STORE_FILES;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.HBASE_COMPRESSION_SCHEME;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.HBASE_ENCODING_SCHEME;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.HOST_DAILY_TABLE_TTL;
@@ -107,11 +37,19 @@ import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.ti
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.OUT_OFF_BAND_DATA_TIME_ALLOWANCE;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.PRECISION_TABLE_SPLIT_POINTS;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.PRECISION_TABLE_TTL;
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.CONTAINER_METRICS_TTL;
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.TIMELINE_METRICS_CACHE_SIZE;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.TIMELINE_METRICS_AGGREGATE_TABLES_DURABILITY;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.TIMELINE_METRICS_AGGREGATE_TABLE_HBASE_BLOCKING_STORE_FILES;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.TIMELINE_METRICS_CACHE_COMMIT_INTERVAL;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.TIMELINE_METRICS_CACHE_ENABLED;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.TIMELINE_METRICS_CACHE_SIZE;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.TIMELINE_METRICS_HBASE_AGGREGATE_TABLE_COMPACTION_POLICY_CLASS;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.TIMELINE_METRICS_HBASE_AGGREGATE_TABLE_COMPACTION_POLICY_KEY;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.TIMELINE_METRICS_HBASE_PRECISION_TABLE_COMPACTION_POLICY_CLASS;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.TIMELINE_METRICS_HBASE_PRECISION_TABLE_COMPACTION_POLICY_KEY;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.TIMELINE_METRICS_PRECISION_TABLE_DURABILITY;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.TIMELINE_METRICS_PRECISION_TABLE_HBASE_BLOCKING_STORE_FILES;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.TIMELINE_METRIC_AGGREGATOR_SINK_CLASS;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.ALTER_METRICS_METADATA_TABLE;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.CONTAINER_METRICS_TABLE_NAME;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.CREATE_CONTAINER_METRICS_TABLE_SQL;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.CREATE_HOSTED_APPS_METADATA_TABLE_SQL;
@@ -120,7 +58,6 @@ import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.ti
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.CREATE_METRICS_CLUSTER_AGGREGATE_GROUPED_TABLE_SQL;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.CREATE_METRICS_CLUSTER_AGGREGATE_TABLE_SQL;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.CREATE_METRICS_METADATA_TABLE_SQL;
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.ALTER_METRICS_METADATA_TABLE;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.CREATE_METRICS_TABLE_SQL;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.DEFAULT_ENCODING;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.DEFAULT_TABLE_COMPRESSION;
@@ -139,11 +76,80 @@ import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.ti
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.UPSERT_AGGREGATE_RECORD_SQL;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.UPSERT_CLUSTER_AGGREGATE_SQL;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.UPSERT_CLUSTER_AGGREGATE_TIME_SQL;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.UPSERT_CONTAINER_METRICS_SQL;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.UPSERT_HOSTED_APPS_METADATA_SQL;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.UPSERT_INSTANCE_HOST_METADATA_SQL;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.UPSERT_METADATA_SQL;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.UPSERT_METRICS_SQL;
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.UPSERT_CONTAINER_METRICS_SQL;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.source.InternalSourceProvider.SOURCE_NAME.RAW_METRICS;
+
+import java.io.IOException;
+import java.sql.Connection;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.sql.Statement;
+import java.sql.Timestamp;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.TreeMap;
+import java.util.concurrent.ArrayBlockingQueue;
+import java.util.concurrent.BlockingQueue;
+import java.util.concurrent.Executors;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.commons.lang.StringUtils;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.DoNotRetryIOException;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.client.Durability;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.util.RetryCounter;
+import org.apache.hadoop.hbase.util.RetryCounterFactory;
+import org.apache.hadoop.metrics2.sink.timeline.ContainerMetric;
+import org.apache.hadoop.metrics2.sink.timeline.MetricClusterAggregate;
+import org.apache.hadoop.metrics2.sink.timeline.MetricHostAggregate;
+import org.apache.hadoop.metrics2.sink.timeline.Precision;
+import org.apache.hadoop.metrics2.sink.timeline.SingleValuedTimelineMetric;
+import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
+import org.apache.hadoop.metrics2.sink.timeline.TimelineMetricMetadata;
+import org.apache.hadoop.metrics2.sink.timeline.TimelineMetrics;
+import org.apache.hadoop.util.ReflectionUtils;
+import org.apache.hadoop.yarn.api.records.ContainerId;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.AggregatorUtils;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.Function;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.TimelineClusterMetric;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.TimelineMetricReadHelper;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.discovery.TimelineMetricMetadataKey;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.discovery.TimelineMetricMetadataManager;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.Condition;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.DefaultPhoenixDataSource;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixConnectionProvider;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.SplitByMetricNamesCondition;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.sink.ExternalMetricsSink;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.sink.ExternalSinkProvider;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.source.InternalMetricsSource;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.source.InternalSourceProvider;
+import org.apache.hadoop.yarn.util.timeline.TimelineUtils;
+import org.apache.phoenix.exception.PhoenixIOException;
+import org.codehaus.jackson.map.ObjectMapper;
+import org.codehaus.jackson.type.TypeReference;
+
+import com.google.common.collect.Maps;
+import com.google.common.collect.Multimap;
 
 
 /**
@@ -198,16 +204,29 @@ public class PhoenixHBaseAccessor {
 
   private HashMap<String, String> tableTTL = new HashMap<>();
 
-  public PhoenixHBaseAccessor(Configuration hbaseConf,
-                              Configuration metricsConf){
-    this(hbaseConf, metricsConf, new DefaultPhoenixDataSource(hbaseConf));
+  private final TimelineMetricConfiguration configuration;
+  private InternalMetricsSource rawMetricsSource;
+
+  public PhoenixHBaseAccessor(PhoenixConnectionProvider dataSource) {
+    this(TimelineMetricConfiguration.getInstance(), dataSource);
   }
 
-  PhoenixHBaseAccessor(Configuration hbaseConf,
-                       Configuration metricsConf,
+  // Test-friendly constructor, since mock instrumentation is difficult to
+  // get working with the hadoop mini cluster
+  PhoenixHBaseAccessor(TimelineMetricConfiguration configuration,
                        PhoenixConnectionProvider dataSource) {
-    this.hbaseConf = hbaseConf;
-    this.metricsConf = metricsConf;
+    this.configuration = configuration;
+    try {
+      this.hbaseConf = configuration.getHbaseConf();
+      this.metricsConf = configuration.getMetricsConf();
+    } catch (Exception e) {
+      throw new ExceptionInInitializerError("Cannot initialize configuration. " + e.getMessage());
+    }
+    if (dataSource == null) {
+      dataSource = new DefaultPhoenixDataSource(hbaseConf);
+    }
+    this.dataSource = dataSource;
+
     RESULTSET_LIMIT = metricsConf.getInt(GLOBAL_RESULT_LIMIT, RESULTSET_LIMIT);
     try {
       Class.forName("org.apache.phoenix.jdbc.PhoenixDriver");
@@ -215,7 +234,7 @@ public class PhoenixHBaseAccessor {
       LOG.error("Phoenix client jar not found in the classpath.", e);
       throw new IllegalStateException(e);
     }
-    this.dataSource = dataSource;
+
     this.retryCounterFactory = new RetryCounterFactory(metricsConf.getInt(GLOBAL_MAX_RETRIES, 10),
       (int) SECONDS.toMillis(metricsConf.getInt(GLOBAL_RETRY_INTERVAL, 3)));
     this.outOfBandTimeAllowance = metricsConf.getLong(OUT_OFF_BAND_DATA_TIME_ALLOWANCE,
@@ -249,10 +268,20 @@ public class PhoenixHBaseAccessor {
         metricsConf.getClass(TIMELINE_METRIC_AGGREGATOR_SINK_CLASS, null,
             TimelineMetricsAggregatorSink.class);
     if (metricSinkClass != null) {
-      aggregatorSink =
-          ReflectionUtils.newInstance(metricSinkClass, metricsConf);
+      aggregatorSink = ReflectionUtils.newInstance(metricSinkClass, metricsConf);
       LOG.info("Initialized aggregator sink class " + metricSinkClass);
     }
+
+    ExternalSinkProvider externalSinkProvider = configuration.getExternalSinkProvider();
+    InternalSourceProvider internalSourceProvider = configuration.getInternalSourceProvider();
+    if (externalSinkProvider != null) {
+      ExternalMetricsSink rawMetricsSink = externalSinkProvider.getExternalMetricsSink(RAW_METRICS);
+      int interval = configuration.getExternalSinkInterval(RAW_METRICS);
+      if (interval == -1) {
+        interval = cacheCommitInterval;
+      }
+      rawMetricsSource = internalSourceProvider.getInternalMetricsSource(RAW_METRICS, interval, rawMetricsSink);
+    }
   }
 
   public boolean isInsertCacheEmpty() {
@@ -261,12 +290,15 @@ public class PhoenixHBaseAccessor {
 
   public void commitMetricsFromCache() {
     LOG.debug("Clearing metrics cache");
-    List<TimelineMetrics> metricsArray = new ArrayList<TimelineMetrics>(insertCache.size());
-    while (!insertCache.isEmpty()) {
-      metricsArray.add(insertCache.poll());
+    List<TimelineMetrics> metricsList = new ArrayList<TimelineMetrics>(insertCache.size());
+    if (!insertCache.isEmpty()) {
+      insertCache.drainTo(metricsList); // More performant than poll
     }
-    if (metricsArray.size() > 0) {
-      commitMetrics(metricsArray);
+    if (metricsList.size() > 0) {
+      commitMetrics(metricsList);
+      if (rawMetricsSource != null) {
+        rawMetricsSource.publishTimelineMetrics(metricsList);
+      }
     }
   }
 
@@ -367,7 +399,7 @@ public class PhoenixHBaseAccessor {
   }
 
   @SuppressWarnings("unchecked")
-  public static TreeMap<Long, Double>  readMetricFromJSON(String json) throws IOException {
+  public static TreeMap<Long, Double> readMetricFromJSON(String json) throws IOException {
     return mapper.readValue(json, metricValuesTypeRef);
   }
 
@@ -701,6 +733,9 @@ public class PhoenixHBaseAccessor {
     return "";
   }
 
+  /**
+   * Insert precision YARN container data.
+   */
   public void insertContainerMetrics(List<ContainerMetric> metrics)
       throws SQLException, IOException {
     Connection conn = getConnection();
@@ -766,6 +801,9 @@ public class PhoenixHBaseAccessor {
     }
   }
 
+  /**
+   * Insert precision (raw) metric data.
+   */
   public void insertMetricRecordsWithMetadata(TimelineMetricMetadataManager metadataManager,
                                               TimelineMetrics metrics, boolean skipCache) throws SQLException, IOException {
     List<TimelineMetric> timelineMetrics = metrics.getMetrics();
@@ -1389,9 +1427,7 @@ public class PhoenixHBaseAccessor {
       try {
         aggregatorSink.saveClusterAggregateRecords(records);
       } catch (Exception e) {
-        LOG.warn(
-            "Error writing cluster aggregate records metrics to external sink. "
-                + e);
+        LOG.warn("Error writing cluster aggregate records metrics to external sink. ", e);
       }
     }
   }
@@ -1402,8 +1438,8 @@ public class PhoenixHBaseAccessor {
    *
    * @throws SQLException
    */
-  public void saveClusterTimeAggregateRecords(Map<TimelineClusterMetric, MetricHostAggregate> records,
-                                              String tableName) throws SQLException {
+  public void saveClusterAggregateRecordsSecond(Map<TimelineClusterMetric, MetricHostAggregate> records,
+                                                String tableName) throws SQLException {
     if (records == null || records.isEmpty()) {
       LOG.debug("Empty aggregate records.");
       return;
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricConfiguration.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricConfiguration.java
index 6222cb9..b1ecc51 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricConfiguration.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricConfiguration.java
@@ -17,15 +17,8 @@
  */
 package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline;
 
-import org.apache.commons.lang.StringUtils;
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-import org.apache.hadoop.classification.InterfaceAudience;
-import org.apache.hadoop.classification.InterfaceStability;
-import org.apache.hadoop.conf.Configuration;
-
 import java.io.BufferedReader;
-import java.io.IOException;
+import java.io.File;
 import java.io.InputStream;
 import java.io.InputStreamReader;
 import java.net.InetAddress;
@@ -37,6 +30,21 @@ import java.util.Collections;
 import java.util.HashSet;
 import java.util.Set;
 
+import org.apache.commons.lang.StringUtils;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.util.ReflectionUtils;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.sink.ExternalSinkProvider;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.source.DefaultInternalMetricsSourceProvider;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.source.InternalSourceProvider;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.source.InternalSourceProvider.SOURCE_NAME;
+import org.apache.log4j.Appender;
+import org.apache.log4j.FileAppender;
+import org.apache.log4j.Logger;
+
 /**
  * Configuration class that reads properties from ams-site.xml. All values
  * for time or intervals are given in seconds.
@@ -56,6 +64,12 @@ public class TimelineMetricConfiguration {
   public static final String TIMELINE_METRIC_AGGREGATOR_SINK_CLASS =
     "timeline.metrics.service.aggregator.sink.class";
 
+  public static final String TIMELINE_METRICS_SOURCE_PROVIDER_CLASS =
+    "timeline.metrics.service.source.provider.class";
+
+  public static final String TIMELINE_METRICS_SINK_PROVIDER_CLASS =
+    "timeline.metrics.service.sink.provider.class";
+
   public static final String TIMELINE_METRICS_CACHE_SIZE =
     "timeline.metrics.cache.size";
 
@@ -312,38 +326,63 @@ public class TimelineMetricConfiguration {
   public static final String AMSHBASE_METRICS_WHITESLIST_FILE = "amshbase_metrics_whitelist";
 
   public static final String TIMELINE_METRICS_HOST_INMEMORY_AGGREGATION = "timeline.metrics.host.inmemory.aggregation";
+  public static final String INTERNAL_CACHE_HEAP_PERCENT =
+    "timeline.metrics.service.cache.%s.heap.percent";
+
+  public static final String EXTERNAL_SINK_INTERVAL =
+    "timeline.metrics.service.external.sink.%s.interval";
+
+  public static final String DEFAULT_EXTERNAL_SINK_DIR =
+    "timeline.metrics.service.external.sink.dir";
 
   private Configuration hbaseConf;
   private Configuration metricsConf;
   private Configuration amsEnvConf;
   private volatile boolean isInitialized = false;
 
+  private static TimelineMetricConfiguration instance = new TimelineMetricConfiguration();
+
+  private TimelineMetricConfiguration() {}
+
+  public static TimelineMetricConfiguration getInstance() {
+    return instance;
+  }
+
+  // Test-only constructor: bypasses classpath-based initialization
+  public TimelineMetricConfiguration(Configuration hbaseConf, Configuration metricsConf) {
+    this.hbaseConf = hbaseConf;
+    this.metricsConf = metricsConf;
+    this.isInitialized = true;
+  }
+
   public void initialize() throws URISyntaxException, MalformedURLException {
-    ClassLoader classLoader = Thread.currentThread().getContextClassLoader();
-    if (classLoader == null) {
-      classLoader = getClass().getClassLoader();
-    }
-    URL hbaseResUrl = classLoader.getResource(HBASE_SITE_CONFIGURATION_FILE);
-    URL amsResUrl = classLoader.getResource(METRICS_SITE_CONFIGURATION_FILE);
-    LOG.info("Found hbase site configuration: " + hbaseResUrl);
-    LOG.info("Found metric service configuration: " + amsResUrl);
-
-    if (hbaseResUrl == null) {
-      throw new IllegalStateException("Unable to initialize the metrics " +
-        "subsystem. No hbase-site present in the classpath.");
-    }
+    if (!isInitialized) {
+      ClassLoader classLoader = Thread.currentThread().getContextClassLoader();
+      if (classLoader == null) {
+        classLoader = getClass().getClassLoader();
+      }
+      URL hbaseResUrl = classLoader.getResource(HBASE_SITE_CONFIGURATION_FILE);
+      URL amsResUrl = classLoader.getResource(METRICS_SITE_CONFIGURATION_FILE);
+      LOG.info("Found hbase site configuration: " + hbaseResUrl);
+      LOG.info("Found metric service configuration: " + amsResUrl);
+
+      if (hbaseResUrl == null) {
+        throw new IllegalStateException("Unable to initialize the metrics " +
+          "subsystem. No hbase-site present in the classpath.");
+      }
 
-    if (amsResUrl == null) {
-      throw new IllegalStateException("Unable to initialize the metrics " +
-        "subsystem. No ams-site present in the classpath.");
-    }
+      if (amsResUrl == null) {
+        throw new IllegalStateException("Unable to initialize the metrics " +
+          "subsystem. No ams-site present in the classpath.");
+      }
 
-    hbaseConf = new Configuration(true);
-    hbaseConf.addResource(hbaseResUrl.toURI().toURL());
-    metricsConf = new Configuration(true);
-    metricsConf.addResource(amsResUrl.toURI().toURL());
+      hbaseConf = new Configuration(true);
+      hbaseConf.addResource(hbaseResUrl.toURI().toURL());
+      metricsConf = new Configuration(true);
+      metricsConf.addResource(amsResUrl.toURI().toURL());
 
-    isInitialized = true;
+      isInitialized = true;
+    }
   }
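
For context, the intended call pattern for the now-singleton configuration (initialize() is idempotent after this change); a minimal sketch:

    TimelineMetricConfiguration conf = TimelineMetricConfiguration.getInstance();
    conf.initialize(); // no-op once isInitialized is set
    Configuration metricsConf = conf.getMetricsConf();
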
 
   public Configuration getHbaseConf() throws URISyntaxException, MalformedURLException {
@@ -361,31 +400,19 @@ public class TimelineMetricConfiguration {
   }
 
   public String getZKClientPort() throws MalformedURLException, URISyntaxException {
-    if (!isInitialized) {
-      initialize();
-    }
-    return hbaseConf.getTrimmed("hbase.zookeeper.property.clientPort", "2181");
+    return getHbaseConf().getTrimmed("hbase.zookeeper.property.clientPort", "2181");
   }
 
   public String getZKQuorum() throws MalformedURLException, URISyntaxException {
-    if (!isInitialized) {
-      initialize();
-    }
-    return hbaseConf.getTrimmed("hbase.zookeeper.quorum");
+    return getHbaseConf().getTrimmed("hbase.zookeeper.quorum");
   }
 
   public String getClusterZKClientPort() throws MalformedURLException, URISyntaxException {
-    if (!isInitialized) {
-      initialize();
-    }
-    return metricsConf.getTrimmed("cluster.zookeeper.property.clientPort", "2181");
+    return getMetricsConf().getTrimmed("cluster.zookeeper.property.clientPort", "2181");
   }
 
   public String getClusterZKQuorum() throws MalformedURLException, URISyntaxException {
-    if (!isInitialized) {
-      initialize();
-    }
-    return metricsConf.getTrimmed("cluster.zookeeper.quorum");
+    return getMetricsConf().getTrimmed("cluster.zookeeper.quorum");
   }
 
   public String getInstanceHostnameFromEnv() throws UnknownHostException {
@@ -405,12 +432,9 @@ public class TimelineMetricConfiguration {
     return DEFAULT_INSTANCE_PORT;
   }
 
-  public String getWebappAddress() {
+  public String getWebappAddress() throws MalformedURLException, URISyntaxException {
     String defaultHttpAddress = "0.0.0.0:6188";
-    if (metricsConf != null) {
-      return metricsConf.get(WEBAPP_HTTP_ADDRESS, defaultHttpAddress);
-    }
-    return defaultHttpAddress;
+    return getMetricsConf().get(WEBAPP_HTTP_ADDRESS, defaultHttpAddress);
   }
 
   public int getTimelineMetricsServiceHandlerThreadCount() {
@@ -472,8 +496,8 @@ public class TimelineMetricConfiguration {
 
   public boolean isDistributedCollectorModeDisabled() {
     try {
-      if (metricsConf != null) {
-        return Boolean.parseBoolean(metricsConf.get("timeline.metrics.service.distributed.collector.mode.disabled", "false"));
+      if (getMetricsConf() != null) {
+        return Boolean.parseBoolean(getMetricsConf().get("timeline.metrics.service.distributed.collector.mode.disabled", "false"));
       }
       return false;
     } catch (Exception e) {
@@ -534,4 +558,50 @@ public class TimelineMetricConfiguration {
     }
     return false;
   }
+
+  public int getExternalSinkInterval(SOURCE_NAME sourceName) {
+    return Integer.parseInt(metricsConf.get(String.format(EXTERNAL_SINK_INTERVAL, sourceName), "-1"));
+  }
+
+  public InternalSourceProvider getInternalSourceProvider() {
+    Class<? extends InternalSourceProvider> providerClass =
+      metricsConf.getClass(TIMELINE_METRICS_SOURCE_PROVIDER_CLASS,
+        DefaultInternalMetricsSourceProvider.class, InternalSourceProvider.class);
+    return ReflectionUtils.newInstance(providerClass, metricsConf);
+  }
+
+  public ExternalSinkProvider getExternalSinkProvider() {
+    Class<?> providerClass = metricsConf.getClassByNameOrNull(TIMELINE_METRICS_SINK_PROVIDER_CLASS);
+    if (providerClass != null) {
+      return (ExternalSinkProvider) ReflectionUtils.newInstance(providerClass, metricsConf);
+    }
+    return null;
+  }
+
+  public String getInternalCacheHeapPercent(String instanceName) {
+    String heapPercent = metricsConf.get(String.format(INTERNAL_CACHE_HEAP_PERCENT, instanceName));
+    if (StringUtils.isEmpty(heapPercent)) {
+      return "5%";
+    } else {
+      return heapPercent.endsWith("%") ? heapPercent : heapPercent + "%";
+    }
+  }
+
+  public String getDefaultMetricsSinkDir() {
+    String dirPath = metricsConf.get(DEFAULT_EXTERNAL_SINK_DIR);
+    if (dirPath == null) {
+      // Only one appender is configured at the time of writing
+      Appender appender = (Appender) Logger.getRootLogger().getAllAppenders().nextElement();
+      if (appender instanceof FileAppender) {
+        File f = new File(((FileAppender) appender).getFile());
+        dirPath = f.exists() ? f.getParent() : "/tmp";
+      } else {
+        // No file appender to borrow a directory from; avoid returning null
+        dirPath = "/tmp";
+      }
+    }
+
+    return dirPath;
+  }
 }
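
To illustrate how the new provider hooks are wired, a hedged sketch using the property keys defined above (the values here are illustrative, not defaults):

    Configuration metricsConf = new Configuration();
    // Ship raw metrics over HTTP via the sink provider added in this patch
    metricsConf.set("timeline.metrics.service.sink.provider.class",
        HttpSinkProvider.class.getName());
    // Per-source sink interval; falls back to the cache commit interval when unset
    metricsConf.set("timeline.metrics.service.external.sink.RAW_METRICS.interval", "120");
    // Directory used by the file-system sink when no log appender dir is found
    metricsConf.set("timeline.metrics.service.external.sink.dir",
        "/var/log/ambari-metrics-collector");
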
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricClusterAggregator.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricClusterAggregator.java
index ba16b43..74d4013 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricClusterAggregator.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricClusterAggregator.java
@@ -83,7 +83,7 @@ public class TimelineMetricClusterAggregator extends AbstractTimelineAggregator
     Map<TimelineClusterMetric, MetricHostAggregate> hostAggregateMap = aggregateMetricsFromResultSet(rs, endTime);
 
     LOG.info("Saving " + hostAggregateMap.size() + " metric aggregates.");
-    hBaseAccessor.saveClusterTimeAggregateRecords(hostAggregateMap, outputTableName);
+    hBaseAccessor.saveClusterAggregateRecordsSecond(hostAggregateMap, outputTableName);
   }
 
   private Map<TimelineClusterMetric, MetricHostAggregate> aggregateMetricsFromResultSet(ResultSet rs, long endTime)
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TimelineMetricMetadataManager.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TimelineMetricMetadataManager.java
index f904ebe..8a71756 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TimelineMetricMetadataManager.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TimelineMetricMetadataManager.java
@@ -27,8 +27,11 @@ import org.apache.hadoop.metrics2.sink.timeline.MetadataException;
 import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
 import org.apache.hadoop.metrics2.sink.timeline.TimelineMetricMetadata;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.PhoenixHBaseAccessor;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.TimelineClusterMetric;
 
+import java.net.MalformedURLException;
+import java.net.URISyntaxException;
 import java.sql.SQLException;
 import java.util.ArrayList;
 import java.util.Arrays;
@@ -69,17 +72,21 @@ public class TimelineMetricMetadataManager {
   // Filter metrics names matching given patterns, from metadata
   final List<String> metricNameFilters = new ArrayList<>();
 
-  public TimelineMetricMetadataManager(PhoenixHBaseAccessor hBaseAccessor,
-                                       Configuration metricsConf) {
-    this.hBaseAccessor = hBaseAccessor;
+  // Test-friendly constructor, since mock instrumentation is difficult to
+  // get working with the hadoop mini cluster
+  public TimelineMetricMetadataManager(Configuration metricsConf, PhoenixHBaseAccessor hBaseAccessor) {
     this.metricsConf = metricsConf;
-
+    this.hBaseAccessor = hBaseAccessor;
     String patternStrings = metricsConf.get(TIMELINE_METRIC_METADATA_FILTERS);
     if (!StringUtils.isEmpty(patternStrings)) {
       metricNameFilters.addAll(Arrays.asList(patternStrings.split(",")));
     }
   }
 
+  public TimelineMetricMetadataManager(PhoenixHBaseAccessor hBaseAccessor) throws MalformedURLException, URISyntaxException {
+    this(TimelineMetricConfiguration.getInstance().getMetricsConf(), hBaseAccessor);
+  }
+
   /**
    * Initialize Metadata from the store
    */
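
A short sketch of the two construction paths after this change, assuming an existing PhoenixHBaseAccessor instance named hBaseAccessor:

    // Production path: metricsConf is read from TimelineMetricConfiguration.getInstance()
    TimelineMetricMetadataManager manager =
        new TimelineMetricMetadataManager(hBaseAccessor);

    // Test path: inject a Configuration directly, bypassing classpath discovery
    TimelineMetricMetadataManager testManager =
        new TimelineMetricMetadataManager(new Configuration(), hBaseAccessor);
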
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/sink/DefaultFSSinkProvider.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/sink/DefaultFSSinkProvider.java
new file mode 100644
index 0000000..6ec6cf9
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/sink/DefaultFSSinkProvider.java
@@ -0,0 +1,153 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.sink;
+
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.TIMELINE_METRICS_CACHE_COMMIT_INTERVAL;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.Collection;
+import java.util.Date;
+
+import org.apache.commons.io.FileUtils;
+import org.apache.commons.lang.StringUtils;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
+import org.apache.hadoop.metrics2.sink.timeline.TimelineMetrics;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.source.InternalSourceProvider;
+
+public class DefaultFSSinkProvider implements ExternalSinkProvider {
+  private static final Log LOG = LogFactory.getLog(DefaultFSSinkProvider.class);
+  TimelineMetricConfiguration conf = TimelineMetricConfiguration.getInstance();
+  private final DefaultExternalMetricsSink sink = new DefaultExternalMetricsSink();
+  private long FIXED_FILE_SIZE;
+  private final String SINK_FILE_NAME = "external-metrics-sink.dat";
+  private final String SEPARATOR = ", ";
+  private final String LINE_SEP = System.lineSeparator();
+  private final String HEADERS = "METRIC, APP_ID, INSTANCE_ID, HOSTNAME, START_TIME, DATA";
+
+  public DefaultFSSinkProvider() {
+    try {
+      FIXED_FILE_SIZE = conf.getMetricsConf().getLong("timeline.metrics.service.external.fs.sink.filesize", FileUtils.ONE_MB * 100);
+    } catch (Exception ignored) {
+      FIXED_FILE_SIZE = FileUtils.ONE_MB * 100;
+    }
+  }
+
+  @Override
+  public ExternalMetricsSink getExternalMetricsSink(InternalSourceProvider.SOURCE_NAME sourceName) {
+    return sink;
+  }
+
+  class DefaultExternalMetricsSink implements ExternalMetricsSink {
+
+    @Override
+    public int getSinkTimeOutSeconds() {
+      return 10;
+    }
+
+    @Override
+    public int getFlushSeconds() {
+      try {
+        return conf.getMetricsConf().getInt(TIMELINE_METRICS_CACHE_COMMIT_INTERVAL, 3);
+      } catch (Exception e) {
+        LOG.warn("Cannot read cache commit interval.");
+      }
+      return 3;
+    }
+
+    private boolean createFile(File f) {
+      boolean created = false;
+      if (!f.exists()) {
+        try {
+          created = f.createNewFile();
+          FileUtils.writeStringToFile(f, HEADERS + LINE_SEP);
+        } catch (IOException e) {
+          LOG.error("Cannot create " + SINK_FILE_NAME + " at " + f.getPath());
+          return false;
+        }
+      }
+
+      return created;
+    }
+
+    private boolean shouldReCreate(File f) {
+      if (!f.exists()) {
+        return true;
+      }
+      if (FileUtils.sizeOf(f) > FIXED_FILE_SIZE) {
+        return true;
+      }
+      return false;
+    }
+
+    @Override
+    public void sinkMetricData(Collection<TimelineMetrics> metrics) {
+      String dirPath = TimelineMetricConfiguration.getInstance().getDefaultMetricsSinkDir();
+      File dir = new File(dirPath);
+      if (!dir.exists()) {
+        LOG.error("Cannot sink data to file system, incorrect dir path " + dirPath);
+        return;
+      }
+
+      File f = FileUtils.getFile(dirPath, SINK_FILE_NAME);
+      if (shouldReCreate(f)) {
+        if (f.exists() && !f.delete()) {
+          LOG.warn("Unable to delete external sink file.");
+          return;
+        }
+        createFile(f);
+      }
+
+      if (metrics != null) {
+        for (TimelineMetrics timelineMetrics : metrics) {
+          for (TimelineMetric metric : timelineMetrics.getMetrics()) {
+            StringBuilder sb = new StringBuilder();
+            sb.append(metric.getMetricName());
+            sb.append(SEPARATOR);
+            sb.append(metric.getAppId());
+            sb.append(SEPARATOR);
+            if (StringUtils.isEmpty(metric.getInstanceId())) {
+              sb.append(SEPARATOR);
+            } else {
+              sb.append(metric.getInstanceId());
+              sb.append(SEPARATOR);
+            }
+            if (StringUtils.isEmpty(metric.getHostName())) {
+              sb.append(SEPARATOR);
+            } else {
+              sb.append(metric.getHostName());
+              sb.append(SEPARATOR);
+            }
+            sb.append(new Date(metric.getStartTime()));
+            sb.append(SEPARATOR);
+            sb.append(metric.getMetricValues().toString());
+            sb.append(LINE_SEP);
+            try {
+              FileUtils.writeStringToFile(f, sb.toString(), true); // append, do not overwrite
+            } catch (IOException e) {
+              LOG.warn("Unable to sink data to file " + f.getPath());
+            }
+          }
+        }
+      }
+    }
+  }
+}
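
A hedged sketch of enabling this file-system sink (the provider and filesize keys are the ones read above; the size value is illustrative). Rows are appended under the HEADERS line to external-metrics-sink.dat in the resolved sink directory:

    Configuration metricsConf = new Configuration();
    metricsConf.set("timeline.metrics.service.sink.provider.class",
        DefaultFSSinkProvider.class.getName());
    // Recreate the sink file once it grows past this size (default: 100 MB)
    metricsConf.setLong("timeline.metrics.service.external.fs.sink.filesize",
        FileUtils.ONE_MB * 256);
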
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/sink/ExternalMetricsSink.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/sink/ExternalMetricsSink.java
new file mode 100644
index 0000000..ff06307
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/sink/ExternalMetricsSink.java
@@ -0,0 +1,48 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.sink;
+
+import java.util.Collection;
+
+import org.apache.hadoop.metrics2.sink.timeline.TimelineMetrics;
+
+public interface ExternalMetricsSink {
+  /**
+   * How many seconds to wait on sink before dropping metrics.
+   * Note: Care should be taken that this timeout does not bottleneck the
+   * sink thread.
+   */
+  int getSinkTimeOutSeconds();
+
+  /**
+   * How frequently to flush data to the external system.
+   * The default should be between 60 and 120 seconds, consistent with the
+   * default sink interval of AMS.
+   */
+  int getFlushSeconds();
+
+  /**
+   * Raw data stream to process / store on the external system.
+   * The data is held in an in-memory cache and flushed at the flush interval
+   * or when the cache size limit is exceeded; if the write fails, the
+   * flushed data is dropped.
+   *
+   * @param metrics collection of {@link TimelineMetrics}
+   */
+  void sinkMetricData(Collection<TimelineMetrics> metrics);
+}
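
As an illustration of the contract, a minimal logging implementation (class name and constant values hypothetical):

    import java.util.Collection;

    import org.apache.commons.logging.Log;
    import org.apache.commons.logging.LogFactory;
    import org.apache.hadoop.metrics2.sink.timeline.TimelineMetrics;

    public class LoggingMetricsSink implements ExternalMetricsSink {
      private static final Log LOG = LogFactory.getLog(LoggingMetricsSink.class);

      @Override
      public int getSinkTimeOutSeconds() {
        return 10; // generous; writes only go to the local log
      }

      @Override
      public int getFlushSeconds() {
        return 60; // lower bound of the suggested 60-120s window
      }

      @Override
      public void sinkMetricData(Collection<TimelineMetrics> metrics) {
        // Count what would have been shipped downstream
        int count = 0;
        for (TimelineMetrics tms : metrics) {
          count += tms.getMetrics().size();
        }
        LOG.info("Would sink " + count + " timeline metrics to the external system");
      }
    }
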
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/sink/ExternalSinkProvider.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/sink/ExternalSinkProvider.java
new file mode 100644
index 0000000..48887d9
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/sink/ExternalSinkProvider.java
@@ -0,0 +1,35 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.sink;
+
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.source.InternalSourceProvider;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.source.InternalSourceProvider.SOURCE_NAME;
+
+
+/**
+ * Configurable provider for sink classes that match the metrics sources.
+ * The provider can return the same sink or different sinks for each source.
+ */
+public interface ExternalSinkProvider {
+
+  /**
+   * Return an instance of the metrics sink for the given source.
+   * @return {@link ExternalMetricsSink}
+   */
+  ExternalMetricsSink getExternalMetricsSink(SOURCE_NAME sourceName);
+}
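
A matching provider sketch, reusing the hypothetical LoggingMetricsSink above and handing the same instance to every source (per the javadoc, per-source sinks are equally valid):

    public class LoggingSinkProvider implements ExternalSinkProvider {
      private final LoggingMetricsSink sink = new LoggingMetricsSink();

      @Override
      public ExternalMetricsSink getExternalMetricsSink(SOURCE_NAME sourceName) {
        return sink; // same sink for RAW_METRICS and all aggregate sources
      }
    }
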
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/sink/HttpSinkProvider.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/sink/HttpSinkProvider.java
new file mode 100644
index 0000000..bb84c8a
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/sink/HttpSinkProvider.java
@@ -0,0 +1,231 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.sink;
+
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.TIMELINE_METRICS_CACHE_COMMIT_INTERVAL;
+
+import java.io.BufferedReader;
+import java.io.File;
+import java.io.FileInputStream;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.InputStreamReader;
+import java.io.OutputStream;
+import java.net.HttpURLConnection;
+import java.net.URL;
+import java.security.KeyStore;
+import java.util.Collection;
+
+import javax.net.ssl.HttpsURLConnection;
+import javax.net.ssl.SSLContext;
+import javax.net.ssl.SSLSocketFactory;
+import javax.net.ssl.TrustManagerFactory;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.metrics2.sink.timeline.TimelineMetrics;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.source.InternalSourceProvider;
+import org.apache.http.client.utils.URIBuilder;
+import org.codehaus.jackson.map.AnnotationIntrospector;
+import org.codehaus.jackson.map.ObjectMapper;
+import org.codehaus.jackson.map.annotate.JsonSerialize;
+import org.codehaus.jackson.xc.JaxbAnnotationIntrospector;
+
+public class HttpSinkProvider implements ExternalSinkProvider {
+  private static final Log LOG = LogFactory.getLog(HttpSinkProvider.class);
+  TimelineMetricConfiguration conf = TimelineMetricConfiguration.getInstance();
+
+  private String connectUrl;
+  private SSLSocketFactory sslSocketFactory;
+  protected static ObjectMapper mapper;
+
+  static {
+    mapper = new ObjectMapper();
+    AnnotationIntrospector introspector = new JaxbAnnotationIntrospector();
+    mapper.setAnnotationIntrospector(introspector);
+    // withSerializationInclusion returns a copy; mutate the mapper instead
+    mapper.setSerializationInclusion(JsonSerialize.Inclusion.NON_NULL);
+  }
+
+  public HttpSinkProvider() {
+    Configuration config;
+    try {
+      config = conf.getMetricsConf();
+    } catch (Exception e) {
+      throw new ExceptionInInitializerError("Unable to read configuration for sink.");
+    }
+    String protocol = config.get("timeline.metrics.service.external.http.sink.protocol", "http");
+    String host = config.get("timeline.metrics.service.external.http.sink.host", "localhost");
+    String port = config.get("timeline.metrics.service.external.http.sink.port", "6189");
+
+    if (protocol.contains("https")) {
+      loadTruststore(
+        config.getTrimmed("timeline.metrics.service.external.http.sink.truststore.path"),
+        config.getTrimmed("timeline.metrics.service.external.http.sink.truststore.type"),
+        config.getTrimmed("timeline.metrics.service.external.http.sink.truststore.password")
+      );
+    }
+
+    URIBuilder uriBuilder = new URIBuilder();
+    uriBuilder.setScheme(protocol);
+    uriBuilder.setHost(host);
+    uriBuilder.setPort(Integer.parseInt(port));
+    connectUrl = uriBuilder.toString();
+  }
+
+  @Override
+  public ExternalMetricsSink getExternalMetricsSink(InternalSourceProvider.SOURCE_NAME sourceName) {
+    return new DefaultHttpMetricsSink();
+  }
+
+  protected HttpURLConnection getConnection(String spec) throws IOException {
+    return (HttpURLConnection) new URL(spec).openConnection();
+  }
+
+  // Get an ssl connection
+  protected HttpsURLConnection getSSLConnection(String spec)
+    throws IOException, IllegalStateException {
+
+    HttpsURLConnection connection = (HttpsURLConnection) (new URL(spec).openConnection());
+    connection.setSSLSocketFactory(sslSocketFactory);
+    return connection;
+  }
+
+  protected void loadTruststore(String trustStorePath, String trustStoreType,
+                                String trustStorePassword) {
+    if (sslSocketFactory == null) {
+      if (trustStorePath == null || trustStorePassword == null) {
+        String msg = "Can't load TrustStore. Truststore path or password is not set.";
+        LOG.error(msg);
+        throw new IllegalStateException(msg);
+      }
+      FileInputStream in = null;
+      try {
+        in = new FileInputStream(new File(trustStorePath));
+        KeyStore store = KeyStore.getInstance(trustStoreType == null ?
+          KeyStore.getDefaultType() : trustStoreType);
+        store.load(in, trustStorePassword.toCharArray());
+        TrustManagerFactory tmf = TrustManagerFactory
+          .getInstance(TrustManagerFactory.getDefaultAlgorithm());
+        tmf.init(store);
+        SSLContext context = SSLContext.getInstance("TLS");
+        context.init(null, tmf.getTrustManagers(), null);
+        sslSocketFactory = context.getSocketFactory();
+      } catch (Exception e) {
+        LOG.error("Unable to load TrustStore", e);
+      } finally {
+        if (in != null) {
+          try {
+            in.close();
+          } catch (IOException e) {
+            LOG.error("Unable to load TrustStore", e);
+          }
+        }
+      }
+    }
+  }
+
+  class DefaultHttpMetricsSink implements ExternalMetricsSink {
+
+    @Override
+    public int getSinkTimeOutSeconds() {
+      try {
+        return conf.getMetricsConf().getInt("timeline.metrics.service.external.http.sink.timeout.seconds", 10);
+      } catch (Exception e) {
+        return 10;
+      }
+    }
+
+    @Override
+    public int getFlushSeconds() {
+      try {
+        return conf.getMetricsConf().getInt(TIMELINE_METRICS_CACHE_COMMIT_INTERVAL, 3);
+      } catch (Exception e) {
+        LOG.warn("Cannot read cache commit interval.");
+      }
+      return 3;
+    }
+
+    /**
+     * Cleans up and closes an input stream
+     * see http://docs.oracle.com/javase/6/docs/technotes/guides/net/http-keepalive.html
+     * @param is the InputStream to clean up
+     * @return string read from the InputStream
+     * @throws IOException
+     */
+    protected String cleanupInputStream(InputStream is) throws IOException {
+      StringBuilder sb = new StringBuilder();
+      if (is != null) {
+        try (
+          InputStreamReader isr = new InputStreamReader(is);
+          BufferedReader br = new BufferedReader(isr)
+        ) {
+          // read the response body
+          String line;
+          while ((line = br.readLine()) != null) {
+            if (LOG.isDebugEnabled()) {
+              sb.append(line);
+            }
+          }
+        } finally {
+          is.close();
+        }
+      }
+      return sb.toString();
+    }
+
+    @Override
+    public void sinkMetricData(Collection<TimelineMetrics> metrics) {
+      HttpURLConnection connection = null;
+      try {
+        connection = connectUrl.startsWith("https") ? getSSLConnection(connectUrl) : getConnection(connectUrl);
+
+        connection.setRequestMethod("POST");
+        connection.setRequestProperty("Content-Type", "application/json");
+        connection.setRequestProperty("Connection", "Keep-Alive");
+        // setConnectTimeout/setReadTimeout expect milliseconds
+        connection.setConnectTimeout(getSinkTimeOutSeconds() * 1000);
+        connection.setReadTimeout(getSinkTimeOutSeconds() * 1000);
+        connection.setDoOutput(true);
+
+        if (metrics != null) {
+          String jsonData = mapper.writeValueAsString(metrics);
+          try (OutputStream os = connection.getOutputStream()) {
+            os.write(jsonData.getBytes("UTF-8"));
+          }
+        }
+
+        int statusCode = connection.getResponseCode();
+
+        if (statusCode != 200) {
+          LOG.warn("Unable to POST metrics to external sink, " + connectUrl +
+            ", statusCode = " + statusCode);
+        } else {
+          if (LOG.isDebugEnabled()) {
+            LOG.debug("Metrics posted to external sink " + connectUrl);
+          }
+        }
+        cleanupInputStream(connection.getInputStream());
+
+      } catch (IOException io) {
+        LOG.warn("Unable to sink data to external system.", io);
+      }
+    }
+  }
+}
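
A hedged example of pointing this provider at an HTTPS endpoint; the keys are the ones read in the constructor above, while host, port and truststore values are illustrative:

    Configuration metricsConf = new Configuration();
    metricsConf.set("timeline.metrics.service.sink.provider.class",
        HttpSinkProvider.class.getName());
    metricsConf.set("timeline.metrics.service.external.http.sink.protocol", "https");
    metricsConf.set("timeline.metrics.service.external.http.sink.host", "metrics-gw.example.com");
    metricsConf.set("timeline.metrics.service.external.http.sink.port", "6189");
    metricsConf.set("timeline.metrics.service.external.http.sink.truststore.path",
        "/etc/security/clientKeys/all.jks");
    metricsConf.set("timeline.metrics.service.external.http.sink.truststore.type", "jks");
    metricsConf.set("timeline.metrics.service.external.http.sink.truststore.password", "changeit");
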
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/source/DefaultInternalMetricsSourceProvider.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/source/DefaultInternalMetricsSourceProvider.java
new file mode 100644
index 0000000..b97c39f
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/source/DefaultInternalMetricsSourceProvider.java
@@ -0,0 +1,42 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.source;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.sink.ExternalMetricsSink;
+
+public class DefaultInternalMetricsSourceProvider implements InternalSourceProvider {
+  private static final Log LOG = LogFactory.getLog(DefaultInternalMetricsSourceProvider.class);
+
+  // TODO: Implement read based sources for higher level data
+  @Override
+  public InternalMetricsSource getInternalMetricsSource(SOURCE_NAME sourceName, int sinkIntervalSeconds, ExternalMetricsSink sink) {
+    if (sink == null) {
+      LOG.warn("No external sink configured for source " + sourceName);
+      return null;
+    }
+
+    switch (sourceName) {
+      case RAW_METRICS:
+        return new RawMetricsSource(sinkIntervalSeconds, sink);
+      default:
+        throw new UnsupportedOperationException("Unimplemented source type " + sourceName);
+    }
+  }
+}
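
Since the provider class is pluggable via timeline.metrics.service.source.provider.class, a hypothetical extension could delegate the raw case and guard the rest; a sketch:

    public class CustomInternalSourceProvider extends DefaultInternalMetricsSourceProvider {
      @Override
      public InternalMetricsSource getInternalMetricsSource(SOURCE_NAME sourceName,
          int sinkIntervalSeconds, ExternalMetricsSink sink) {
        // RAW_METRICS is handled by the default RawMetricsSource; aggregate
        // sources are still unimplemented upstream, so avoid them here
        if (sourceName == SOURCE_NAME.RAW_METRICS) {
          return super.getInternalMetricsSource(sourceName, sinkIntervalSeconds, sink);
        }
        return null;
      }
    }
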
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/source/InternalMetricsSource.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/source/InternalMetricsSource.java
new file mode 100644
index 0000000..a6e1092
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/source/InternalMetricsSource.java
@@ -0,0 +1,30 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.source;
+
+import java.util.Collection;
+
+import org.apache.hadoop.metrics2.sink.timeline.TimelineMetrics;
+
+public interface InternalMetricsSource {
+  /**
+   * Write metrics to the external sink.
+   * Allows the consumer to pre-process and cache data before publishing.
+   */
+  void publishTimelineMetrics(Collection<TimelineMetrics> metrics);
+}
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/source/InternalSourceProvider.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/source/InternalSourceProvider.java
new file mode 100644
index 0000000..9d8ca36
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/source/InternalSourceProvider.java
@@ -0,0 +1,39 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.source;
+
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.sink.ExternalMetricsSink;
+
+public interface InternalSourceProvider {
+
+  enum SOURCE_NAME {
+    RAW_METRICS,
+    MINUTE_HOST_AGGREGATE_METRICS,
+    HOURLY_HOST_AGGREGATE_METRICS,
+    DAILY_HOST_AGGREGATE_METRICS,
+    MINUTE_CLUSTER_AGGREGATE_METRICS,
+    HOURLY_CLUSTER_AGGREGATE_METRICS,
+    DAILY_CLUSTER_AGGREGATE_METRICS,
+  }
+
+  /**
+   * Provide a source for metrics data, wired to the given external sink.
+   * @return {@link InternalMetricsSource}
+   */
+  InternalMetricsSource getInternalMetricsSource(SOURCE_NAME sourceName, int sinkIntervalSeconds, ExternalMetricsSink sink);
+}
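
Taken together with the default provider above, wiring a raw-metrics source is short. A sketch, reusing the hypothetical NoOpMetricsSink from earlier (the 3-second cache interval and the example class name are placeholders):

import java.util.Collections;

import org.apache.hadoop.metrics2.sink.timeline.TimelineMetrics;
import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.sink.ExternalMetricsSink;

public class RawSourceWiringExample {
  public static void main(String[] args) {
    ExternalMetricsSink sink = new NoOpMetricsSink(); // hypothetical sink from the sketch above
    InternalSourceProvider provider = new DefaultInternalMetricsSourceProvider();
    InternalMetricsSource rawSource = provider.getInternalMetricsSource(
        InternalSourceProvider.SOURCE_NAME.RAW_METRICS, 3, sink);

    // Any source name other than RAW_METRICS currently throws UnsupportedOperationException.
    rawSource.publishTimelineMetrics(Collections.singletonList(new TimelineMetrics()));
  }
}
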
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/source/RawMetricsSource.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/source/RawMetricsSource.java
new file mode 100644
index 0000000..967d819
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/source/RawMetricsSource.java
@@ -0,0 +1,94 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.source;
+
+import java.util.Collection;
+import java.util.concurrent.Callable;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.Executors;
+import java.util.concurrent.Future;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.TimeoutException;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.metrics2.sink.timeline.TimelineMetrics;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.sink.ExternalMetricsSink;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.source.cache.InternalMetricsCache;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.source.cache.InternalMetricsCacheProvider;
+
+public class RawMetricsSource implements InternalMetricsSource {
+  private static final Log LOG = LogFactory.getLog(RawMetricsSource.class);
+  private final int internalCacheInterval;
+  private final ExternalMetricsSink rawMetricsSink;
+  private final ScheduledExecutorService executorService = Executors.newSingleThreadScheduledExecutor();
+  private final InternalMetricsCache cache;
+  static final String RAW_METRICS_CACHE = "RAW_METRICS_CACHE_INSTANCE";
+
+  public RawMetricsSource(int internalCacheInterval, ExternalMetricsSink rawMetricsSink) {
+    this.internalCacheInterval = internalCacheInterval;
+    this.rawMetricsSink = rawMetricsSink;
+    this.cache = InternalMetricsCacheProvider.getInstance().getCacheInstance(RAW_METRICS_CACHE);
+    if (rawMetricsSink.getFlushSeconds() > internalCacheInterval) {
+      initializeFixedRateScheduler();
+    }
+  }
+
+  @Override
+  public void publishTimelineMetrics(Collection<TimelineMetrics> metrics) {
+    // TODO: Adjust the default flush interval to a reasonable value (> 3 seconds)
+    if (rawMetricsSink.getFlushSeconds() > internalCacheInterval) {
+      // Cache only when the external sink cannot keep up, i.e. its flush
+      // interval is longer than the internal (HBase) flush interval
+      cache.putAll(metrics); // Scheduler was already initialized to flush the cache
+    } else {
+      submitDataWithTimeout(metrics);
+    }
+  }
+
+  private void initializeFixedRateScheduler() {
+    executorService.scheduleAtFixedRate(new Runnable() {
+      @Override
+      public void run() {
+        rawMetricsSink.sinkMetricData(cache.evictAll());
+      }
+    }, rawMetricsSink.getFlushSeconds(), rawMetricsSink.getFlushSeconds(), TimeUnit.SECONDS);
+  }
+
+  private void submitDataWithTimeout(final Collection<TimelineMetrics> metrics) {
+    Future<?> f = executorService.submit(new Callable<Object>() {
+      @Override
+      public Object call() throws Exception {
+        rawMetricsSink.sinkMetricData(metrics);
+        return null;
+      }
+    });
+    try {
+      f.get(rawMetricsSink.getSinkTimeOutSeconds(), TimeUnit.SECONDS);
+    } catch (InterruptedException e) {
+      Thread.currentThread().interrupt(); // restore the interrupt status
+      LOG.warn("Raw metrics sink interrupted.");
+    } catch (ExecutionException e) {
+      LOG.warn("Exception on sinking metrics", e);
+    } catch (TimeoutException e) {
+      LOG.warn("Timeout exception on sinking metrics", e);
+    }
+  }
+
+}
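
publishTimelineMetrics above branches on a single comparison: when the sink's flush interval exceeds the internal cache interval, batches are parked in ehcache and drained on the fixed-rate schedule; otherwise each batch is written synchronously, bounded by the sink timeout. A sketch of the two behaviors, assuming the hypothetical NoOpMetricsSink above:

// getFlushSeconds() == 10 > internalCacheInterval == 3, so the constructor starts
// the scheduler and publishTimelineMetrics() only feeds the internal cache; the
// scheduler then calls sink.sinkMetricData(cache.evictAll()) every 10 seconds.
RawMetricsSource buffered = new RawMetricsSource(3, new NoOpMetricsSink());

// Had getFlushSeconds() been <= 3, the cache would be bypassed: each batch is
// submitted to the executor and the caller waits up to getSinkTimeOutSeconds()
// for the write to complete before logging a timeout warning.
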
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/source/cache/InternalMetricCacheKey.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/source/cache/InternalMetricCacheKey.java
new file mode 100644
index 0000000..28d457d
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/source/cache/InternalMetricCacheKey.java
@@ -0,0 +1,109 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.source.cache;
+
+public class InternalMetricCacheKey {
+  private String metricName;
+  private String appId;
+  private String instanceId;
+  private String hostname;
+  private long startTime; // Useful for debugging; not part of equals/hashCode
+
+  public InternalMetricCacheKey(String metricName, String appId, String instanceId, String hostname, long startTime) {
+    this.metricName = metricName;
+    this.appId = appId;
+    this.instanceId = instanceId;
+    this.hostname = hostname;
+    this.startTime = startTime;
+  }
+
+  public String getMetricName() {
+    return metricName;
+  }
+
+  public void setMetricName(String metricName) {
+    this.metricName = metricName;
+  }
+
+  public String getAppId() {
+    return appId;
+  }
+
+  public void setAppId(String appId) {
+    this.appId = appId;
+  }
+
+  public String getInstanceId() {
+    return instanceId;
+  }
+
+  public void setInstanceId(String instanceId) {
+    this.instanceId = instanceId;
+  }
+
+  public String getHostname() {
+    return hostname;
+  }
+
+  public void setHostname(String hostname) {
+    this.hostname = hostname;
+  }
+
+  public long getStartTime() {
+    return startTime;
+  }
+
+  public void setStartTime(long startTime) {
+    this.startTime = startTime;
+  }
+
+  @Override
+  public boolean equals(Object o) {
+    if (this == o) return true;
+    if (o == null || getClass() != o.getClass()) return false;
+
+    InternalMetricCacheKey that = (InternalMetricCacheKey) o;
+
+    if (!getMetricName().equals(that.getMetricName())) return false;
+    if (!getAppId().equals(that.getAppId())) return false;
+    if (getInstanceId() != null ? !getInstanceId().equals(that.getInstanceId()) : that.getInstanceId() != null)
+      return false;
+    return getHostname() != null ? getHostname().equals(that.getHostname()) : that.getHostname() == null;
+
+  }
+
+  @Override
+  public int hashCode() {
+    int result = getMetricName().hashCode();
+    result = 31 * result + getAppId().hashCode();
+    result = 31 * result + (getInstanceId() != null ? getInstanceId().hashCode() : 0);
+    result = 31 * result + (getHostname() != null ? getHostname().hashCode() : 0);
+    return result;
+  }
+
+  @Override
+  public String toString() {
+    return "InternalMetricCacheKey{" +
+      "metricName='" + metricName + '\'' +
+      ", appId='" + appId + '\'' +
+      ", instanceId='" + instanceId + '\'' +
+      ", hostname='" + hostname + '\'' +
+      ", startTime=" + startTime +
+      '}';
+  }
+}
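
Note that equals() and hashCode() above skip startTime: a key identifies a metric series (name, app, instance, host), while startTime rides along for debugging and eviction logging. Two keys for the same series therefore collide in the cache, which is what lets successive batches merge into a single value map. A small demonstration:

InternalMetricCacheKey first = new InternalMetricCacheKey("disk_free", "HOST", null, "local1", 1000L);
InternalMetricCacheKey second = new InternalMetricCacheKey("disk_free", "HOST", null, "local1", 2000L);

// Same series, different start times: the keys are equal, so successive batches
// for a series land in (and merge into) one cache entry.
assert first.equals(second) && first.hashCode() == second.hashCode();
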
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/source/cache/InternalMetricCacheValue.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/source/cache/InternalMetricCacheValue.java
new file mode 100644
index 0000000..a4dabe7
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/source/cache/InternalMetricCacheValue.java
@@ -0,0 +1,37 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.source.cache;
+
+import java.util.Map;
+import java.util.TreeMap;
+
+public class InternalMetricCacheValue {
+  private TreeMap<Long, Double> metricValues = new TreeMap<Long, Double>();
+
+  public TreeMap<Long, Double> getMetricValues() {
+    return metricValues;
+  }
+
+  public void setMetricValues(TreeMap<Long, Double> metricValues) {
+    this.metricValues = metricValues;
+  }
+
+  public void addMetricValues(Map<Long, Double> metricValues) {
+    this.metricValues.putAll(metricValues);
+  }
+}
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/source/cache/InternalMetricsCache.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/source/cache/InternalMetricsCache.java
new file mode 100644
index 0000000..a4ed9bc
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/source/cache/InternalMetricsCache.java
@@ -0,0 +1,233 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.source.cache;
+
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.Date;
+import java.util.List;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.locks.Lock;
+import java.util.concurrent.locks.ReentrantLock;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
+import org.apache.hadoop.metrics2.sink.timeline.TimelineMetrics;
+
+import net.sf.ehcache.Cache;
+import net.sf.ehcache.CacheException;
+import net.sf.ehcache.CacheManager;
+import net.sf.ehcache.Ehcache;
+import net.sf.ehcache.Element;
+import net.sf.ehcache.config.CacheConfiguration;
+import net.sf.ehcache.config.PersistenceConfiguration;
+import net.sf.ehcache.config.SizeOfPolicyConfiguration;
+import net.sf.ehcache.event.CacheEventListener;
+import net.sf.ehcache.store.MemoryStoreEvictionPolicy;
+
+public class InternalMetricsCache {
+  private static final Log LOG = LogFactory.getLog(InternalMetricsCache.class);
+  private final String instanceName;
+  private final String maxHeapPercent;
+  private volatile boolean isCacheInitialized = false;
+  private Cache cache;
+  static final String TIMELINE_METRIC_CACHE_MANAGER_NAME = "internalMetricsCacheManager";
+  private final Lock lock = new ReentrantLock();
+  private static final int LOCK_TIMEOUT_SECONDS = 2;
+
+  public InternalMetricsCache(String instanceName, String maxHeapPercent) {
+    this.instanceName = instanceName;
+    this.maxHeapPercent = maxHeapPercent;
+    initialize();
+  }
+
+  private void initialize() {
+    // Check in case of contention to avoid ObjectExistsException
+    if (isCacheInitialized) {
+      throw new RuntimeException("Cannot initialize internal cache twice");
+    }
+
+    System.setProperty("net.sf.ehcache.skipUpdateCheck", "true");
+    System.setProperty("net.sf.ehcache.sizeofengine." + TIMELINE_METRIC_CACHE_MANAGER_NAME,
+      "org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.source.cache.InternalMetricsCacheSizeOfEngine");
+
+    net.sf.ehcache.config.Configuration managerConfig =
+      new net.sf.ehcache.config.Configuration();
+    managerConfig.setName(TIMELINE_METRIC_CACHE_MANAGER_NAME);
+
+    // Set max heap available to the cache manager
+    managerConfig.setMaxBytesLocalHeap(maxHeapPercent);
+
+    //Create a singleton CacheManager using defaults
+    CacheManager manager = CacheManager.create(managerConfig);
+
+    LOG.info("Creating Metrics Cache with maxHeapPercent => " + maxHeapPercent);
+
+    // Create a Cache specifying its configuration.
+    CacheConfiguration cacheConfiguration = new CacheConfiguration()
+      .name(instanceName)
+      .memoryStoreEvictionPolicy(MemoryStoreEvictionPolicy.LRU)
+      .sizeOfPolicy(new SizeOfPolicyConfiguration() // Set sizeOf policy to continue on max depth reached - avoid OOM
+        .maxDepth(10000)
+        .maxDepthExceededBehavior(SizeOfPolicyConfiguration.MaxDepthExceededBehavior.CONTINUE))
+      .eternal(true) // infinite time until eviction
+      .persistence(new PersistenceConfiguration()
+        .strategy(PersistenceConfiguration.Strategy.NONE.name()));
+
+    cache = new Cache(cacheConfiguration);
+    cache.getCacheEventNotificationService().registerListener(new InternalCacheEvictionListener());
+
+    LOG.info("Registering internal metrics cache with provider: name = " +
+      cache.getName() + ", guid: " + cache.getGuid());
+
+    manager.addCache(cache);
+
+    isCacheInitialized = true;
+  }
+
+  public InternalMetricCacheValue getInternalMetricCacheValue(InternalMetricCacheKey key) {
+    Element ele = cache.get(key);
+    if (ele != null) {
+      return (InternalMetricCacheValue) ele.getObjectValue();
+    }
+    return null;
+  }
+
+  public Collection<TimelineMetrics> evictAll() {
+    TimelineMetrics metrics = new TimelineMetrics();
+    try {
+      if (lock.tryLock(LOCK_TIMEOUT_SECONDS, TimeUnit.SECONDS)) {
+        try {
+          List keys = cache.getKeys();
+          for (Object obj : keys) {
+            TimelineMetric metric = new TimelineMetric();
+            InternalMetricCacheKey key = (InternalMetricCacheKey) obj;
+            metric.setMetricName(key.getMetricName());
+            metric.setAppId(key.getAppId());
+            metric.setInstanceId(key.getInstanceId());
+            metric.setHostName(key.getHostname());
+            metric.setStartTime(key.getStartTime());
+            metric.setTimestamp(key.getStartTime());
+            Element ele = cache.get(key);
+            metric.setMetricValues(((InternalMetricCacheValue) ele.getObjectValue()).getMetricValues());
+            metrics.getMetrics().add(metric);
+          }
+          cache.removeAll();
+        } finally {
+          lock.unlock();
+        }
+      } else {
+        LOG.warn("evictAll: Unable to acquire lock on the cache instance. " +
+          "Giving up after " + LOCK_TIMEOUT_SECONDS + " seconds.");
+      }
+    } catch (InterruptedException e) {
+      Thread.currentThread().interrupt(); // restore the interrupt status
+      LOG.warn("Interrupted while waiting to acquire lock");
+    }
+
+    return Collections.singletonList(metrics);
+  }
+
+  public void putAll(Collection<TimelineMetrics> metrics) {
+    try {
+      if (lock.tryLock(LOCK_TIMEOUT_SECONDS, TimeUnit.SECONDS)) {
+        try {
+          if (metrics != null) {
+            for (TimelineMetrics timelineMetrics : metrics) {
+              for (TimelineMetric timelineMetric : timelineMetrics.getMetrics()) {
+                InternalMetricCacheKey key = new InternalMetricCacheKey(
+                  timelineMetric.getMetricName(),
+                  timelineMetric.getAppId(),
+                  timelineMetric.getInstanceId(),
+                  timelineMetric.getHostName(),
+                  timelineMetric.getStartTime()
+                );
+
+                Element ele = cache.get(key);
+                if (ele != null) {
+                  InternalMetricCacheValue value = (InternalMetricCacheValue) ele.getObjectValue();
+                  value.addMetricValues(timelineMetric.getMetricValues());
+                } else {
+                  InternalMetricCacheValue value = new InternalMetricCacheValue();
+                  value.setMetricValues(timelineMetric.getMetricValues());
+                  cache.put(new Element(key, value));
+                }
+              }
+            }
+          }
+        } finally {
+          lock.unlock();
+        }
+      } else {
+        LOG.warn("putAll: Unable to acquire lock on the cache instance. " +
+          "Giving up after " + LOCK_TIMEOUT_SECONDS + " seconds.");
+      }
+    } catch (InterruptedException e) {
+      Thread.currentThread().interrupt(); // restore the interrupt status
+      LOG.warn("Interrupted while waiting to acquire lock");
+    }
+  }
+
+  class InternalCacheEvictionListener implements CacheEventListener {
+
+    @Override
+    public void notifyElementRemoved(Ehcache cache, Element element) throws CacheException {
+      // expected
+    }
+
+    @Override
+    public void notifyElementPut(Ehcache cache, Element element) throws CacheException {
+      // do nothing
+    }
+
+    @Override
+    public void notifyElementUpdated(Ehcache cache, Element element) throws CacheException {
+      // do nothing
+    }
+
+    @Override
+    public void notifyElementExpired(Ehcache cache, Element element) {
+      // do nothing
+    }
+
+    @Override
+    public void notifyElementEvicted(Ehcache cache, Element element) {
+      // Bad: the remote endpoint cannot keep up, so the cache is filling and evicting data
+      InternalMetricCacheKey key = (InternalMetricCacheKey) element.getObjectKey();
+      LOG.warn("Evicting element from internal metrics cache, metric => " + key
+        .getMetricName() + ", startTime = " + new Date(key.getStartTime()));
+    }
+
+    @Override
+    public void notifyRemoveAll(Ehcache cache) {
+      // expected
+    }
+
+    @Override
+    public Object clone() throws CloneNotSupportedException {
+      return null;
+    }
+
+    @Override
+    public void dispose() {
+      // do nothing
+    }
+  }
+}
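
A round trip through the cache, under illustrative values (the instance name matches RAW_METRICS_CACHE above; the "10%" heap share is a placeholder): putAll() merges datapoints into one entry per series, and evictAll() drains everything into a single TimelineMetrics envelope while clearing the cache.

InternalMetricsCache cache = new InternalMetricsCache("RAW_METRICS_CACHE_INSTANCE", "10%");

TimelineMetric metric = new TimelineMetric();
metric.setMetricName("disk_free");
metric.setAppId("HOST");
metric.setHostName("local1");
metric.setStartTime(System.currentTimeMillis());
metric.setMetricValues(new TreeMap<>(Collections.singletonMap(metric.getStartTime(), 1.0d)));

TimelineMetrics batch = new TimelineMetrics();
batch.getMetrics().add(metric);

cache.putAll(Collections.singletonList(batch));
Collection<TimelineMetrics> drained = cache.evictAll(); // one envelope holding every series
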
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/source/cache/InternalMetricsCacheProvider.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/source/cache/InternalMetricsCacheProvider.java
new file mode 100644
index 0000000..3e0dc1b
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/source/cache/InternalMetricsCacheProvider.java
@@ -0,0 +1,48 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.source.cache;
+
+import java.util.Map;
+import java.util.concurrent.ConcurrentHashMap;
+
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration;
+
+public class InternalMetricsCacheProvider {
+  private Map<String, InternalMetricsCache> metricsCacheMap = new ConcurrentHashMap<>();
+  private static final InternalMetricsCacheProvider instance = new InternalMetricsCacheProvider();
+
+  private InternalMetricsCacheProvider() {
+  }
+
+  public static InternalMetricsCacheProvider getInstance() {
+    return instance;
+  }
+
+  public InternalMetricsCache getCacheInstance(String instanceName) {
+    if (metricsCacheMap.containsKey(instanceName)) {
+      return metricsCacheMap.get(instanceName);
+    } else {
+      TimelineMetricConfiguration conf = TimelineMetricConfiguration.getInstance();
+      InternalMetricsCache cache = new InternalMetricsCache(instanceName,
+        conf.getInternalCacheHeapPercent(instanceName));
+
+      metricsCacheMap.put(instanceName, cache);
+      return cache;
+    }
+  }
+}
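
One caveat: the containsKey/put pair above is not atomic, so two threads racing on the same instance name could each construct an InternalMetricsCache, and the loser would then hit the duplicate-registration path in the ehcache CacheManager (the ObjectExistsException that InternalMetricsCache's initialize comment alludes to). Since the map is already a ConcurrentHashMap, a computeIfAbsent variant would close that window; a possible shape, assuming a Java 8 toolchain:

public InternalMetricsCache getCacheInstance(String instanceName) {
  // Atomic get-or-create: the factory runs at most once per instance name.
  return metricsCacheMap.computeIfAbsent(instanceName, name -> {
    TimelineMetricConfiguration conf = TimelineMetricConfiguration.getInstance();
    return new InternalMetricsCache(name, conf.getInternalCacheHeapPercent(name));
  });
}
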
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/source/cache/InternalMetricsCacheSizeOfEngine.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/source/cache/InternalMetricsCacheSizeOfEngine.java
new file mode 100644
index 0000000..d1a1a89
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/source/cache/InternalMetricsCacheSizeOfEngine.java
@@ -0,0 +1,66 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.source.cache;
+
+import org.apache.hadoop.metrics2.sink.timeline.cache.TimelineMetricsEhCacheSizeOfEngine;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import net.sf.ehcache.pool.Size;
+import net.sf.ehcache.pool.SizeOfEngine;
+
+public class InternalMetricsCacheSizeOfEngine extends TimelineMetricsEhCacheSizeOfEngine {
+  private final static Logger LOG = LoggerFactory.getLogger(InternalMetricsCacheSizeOfEngine.class);
+
+  private InternalMetricsCacheSizeOfEngine(SizeOfEngine underlying) {
+    super(underlying);
+  }
+
+  public InternalMetricsCacheSizeOfEngine() {
+    // Invoke default constructor in base class
+  }
+
+  @Override
+  public Size sizeOf(Object key, Object value, Object container) {
+    try {
+      LOG.debug("BEGIN - Sizeof, key: {}, value: {}", key, value);
+      long size = 0;
+      if (key instanceof InternalMetricCacheKey) {
+        InternalMetricCacheKey metricCacheKey = (InternalMetricCacheKey) key;
+        size += reflectionSizeOf.sizeOf(metricCacheKey.getMetricName());
+        size += reflectionSizeOf.sizeOf(metricCacheKey.getAppId());
+        size += reflectionSizeOf.sizeOf(metricCacheKey.getInstanceId()); // null safe
+        size += reflectionSizeOf.sizeOf(metricCacheKey.getHostname());
+      }
+      if (value instanceof InternalMetricCacheValue) {
+        size += getValueMapSize(((InternalMetricCacheValue) value).getMetricValues());
+      }
+      // Mark size as not being exact
+      return new Size(size, false);
+    } finally {
+      LOG.debug("END - Sizeof, key: {}", key);
+    }
+  }
+
+  @Override
+  public SizeOfEngine copyWith(int maxDepth, boolean abortWhenMaxDepthExceeded) {
+    LOG.debug("Copying tracing sizeof engine, maxdepth: {}, abort: {}",
+      maxDepth, abortWhenMaxDepthExceeded);
+
+    return new InternalMetricsCacheSizeOfEngine(underlying.copyWith(maxDepth, abortWhenMaxDepthExceeded));
+  }
+}
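
This engine is not wired up through any ehcache configuration file; it is picked up because InternalMetricsCache#initialize binds it to the cache manager via a system property, and the sizes it reports are deliberately approximate (the Size is constructed with exact = false). The registration mirrors this one line:

// Mirrors InternalMetricsCache#initialize: bind the custom size-of engine to the
// "internalMetricsCacheManager" cache manager by system property.
System.setProperty("net.sf.ehcache.sizeofengine.internalMetricsCacheManager",
    InternalMetricsCacheSizeOfEngine.class.getName());
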
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestApplicationHistoryServer.java b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestApplicationHistoryServer.java
index 3688630..41ddef5 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestApplicationHistoryServer.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestApplicationHistoryServer.java
@@ -25,7 +25,7 @@ import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hadoop.service.Service.STATE;
 import org.apache.hadoop.util.ExitUtil;
 import org.apache.hadoop.yarn.conf.YarnConfiguration;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.HBaseTimelineMetricStore;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.HBaseTimelineMetricsService;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.PhoenixHBaseAccessor;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.availability.MetricCollectorHAController;
@@ -73,7 +73,7 @@ import static org.powermock.api.support.membermodification.MemberMatcher.method;
 import static org.powermock.api.support.membermodification.MemberModifier.suppress;
 
 @RunWith(PowerMockRunner.class)
-@PrepareForTest({ PhoenixHBaseAccessor.class, HBaseTimelineMetricStore.class, UserGroupInformation.class,
+@PrepareForTest({ PhoenixHBaseAccessor.class, HBaseTimelineMetricsService.class, UserGroupInformation.class,
   ClientCnxn.class, DefaultPhoenixDataSource.class, ConnectionFactory.class,
   TimelineMetricConfiguration.class, ApplicationHistoryServer.class })
 @PowerMockIgnore( {"javax.management.*"})
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/AbstractMiniHBaseClusterTest.java b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/AbstractMiniHBaseClusterTest.java
index 611d82e..fbf7b09 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/AbstractMiniHBaseClusterTest.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/AbstractMiniHBaseClusterTest.java
@@ -17,10 +17,32 @@
  */
 package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline;
 
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.OUT_OFF_BAND_DATA_TIME_ALLOWANCE;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.METRICS_RECORD_TABLE_NAME;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.UPSERT_METRICS_SQL;
+import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
+import static org.assertj.core.api.Assertions.assertThat;
+import static org.easymock.EasyMock.expect;
+import static org.easymock.EasyMock.replay;
+import static org.powermock.api.easymock.PowerMock.mockStatic;
+import static org.powermock.api.easymock.PowerMock.replayAll;
+
+import java.io.IOException;
+import java.lang.reflect.Field;
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.sql.Statement;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Properties;
+
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.IntegrationTestingUtility;
 import org.apache.hadoop.hbase.client.HBaseAdmin;
 import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
@@ -41,24 +63,6 @@ import org.junit.Before;
 import org.junit.BeforeClass;
 import org.junit.Test;
 
-import java.io.IOException;
-import java.sql.Connection;
-import java.sql.DriverManager;
-import java.sql.PreparedStatement;
-import java.sql.ResultSet;
-import java.sql.SQLException;
-import java.sql.Statement;
-import java.util.HashMap;
-import java.util.List;
-import java.util.Map;
-import java.util.Properties;
-
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.OUT_OFF_BAND_DATA_TIME_ALLOWANCE;
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.METRICS_RECORD_TABLE_NAME;
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.UPSERT_METRICS_SQL;
-import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
-import static org.assertj.core.api.Assertions.assertThat;
-
 public abstract class AbstractMiniHBaseClusterTest extends BaseTest {
 
   protected static final long BATCH_SIZE = 3;
@@ -200,11 +204,8 @@ public abstract class AbstractMiniHBaseClusterTest extends BaseTest {
     metricsConf.setLong(OUT_OFF_BAND_DATA_TIME_ALLOWANCE, 600000);
 
     return
-      new PhoenixHBaseAccessor(
-        new Configuration(),
-        metricsConf,
+      new PhoenixHBaseAccessor(new TimelineMetricConfiguration(new Configuration(), metricsConf),
         new PhoenixConnectionProvider() {
-
           @Override
           public HBaseAdmin getHBaseAdmin() throws IOException {
             try {
@@ -229,7 +230,7 @@ public abstract class AbstractMiniHBaseClusterTest extends BaseTest {
   }
 
   protected void insertMetricRecords(Connection conn, TimelineMetrics metrics, long currentTime)
-                                    throws SQLException, IOException {
+    throws SQLException, IOException {
 
     List<TimelineMetric> timelineMetrics = metrics.getMetrics();
     if (timelineMetrics == null || timelineMetrics.isEmpty()) {
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/HBaseTimelineMetricStoreTest.java b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/HBaseTimelineMetricsServiceTest.java
rename from ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/HBaseTimelineMetricStoreTest.java
rename to ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/HBaseTimelineMetricsServiceTest.java
index 70dd583..e06033d 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/HBaseTimelineMetricStoreTest.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/HBaseTimelineMetricsServiceTest.java
@@ -33,7 +33,7 @@ import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.ti
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.Function.PostProcessingFunction.RATE;
 import static org.assertj.core.api.Assertions.assertThat;
 
-public class HBaseTimelineMetricStoreTest {
+public class HBaseTimelineMetricsServiceTest {
 
   public static final String MEM_METRIC = "mem";
   public static final String BYTES_IN_METRIC = "bytes_in";
@@ -51,7 +51,7 @@ public class HBaseTimelineMetricStoreTest {
 
     //when
     Multimap<String, List<Function>> multimap =
-      HBaseTimelineMetricStore.parseMetricNamesToAggregationFunctions(metricNames);
+      HBaseTimelineMetricsService.parseMetricNamesToAggregationFunctions(metricNames);
 
     //then
     Assert.assertEquals(multimap.keySet().size(), 3);
@@ -103,7 +103,7 @@ public class HBaseTimelineMetricStoreTest {
     metricValues.put(1454000005000L, 7.0);
 
     // Calculate rate
-    Map<Long, Double> rates = HBaseTimelineMetricStore.updateValuesAsRate(new TreeMap<>(metricValues), false);
+    Map<Long, Double> rates = HBaseTimelineMetricsService.updateValuesAsRate(new TreeMap<>(metricValues), false);
 
     // Make sure rate is zero
     Assert.assertTrue(rates.size() == 4);
@@ -126,7 +126,7 @@ public class HBaseTimelineMetricStoreTest {
     metricValues.put(1454016548371L, 1015.25);
     metricValues.put(1454016608371L, 1020.25);
 
-    Map<Long, Double> rates = HBaseTimelineMetricStore.updateValuesAsRate(new TreeMap<>(metricValues), true);
+    Map<Long, Double> rates = HBaseTimelineMetricsService.updateValuesAsRate(new TreeMap<>(metricValues), true);
 
     Assert.assertTrue(rates.size() == 3);
     Assert.assertTrue(rates.containsValue(2.0));
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/ITPhoenixHBaseAccessor.java b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/ITPhoenixHBaseAccessor.java
index d5baaef..f6d69f6 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/ITPhoenixHBaseAccessor.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/ITPhoenixHBaseAccessor.java
@@ -17,9 +17,33 @@
  */
 package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline;
 
-import com.google.common.collect.ArrayListMultimap;
-import com.google.common.collect.Multimap;
-import junit.framework.Assert;
+import static junit.framework.Assert.assertEquals;
+import static junit.framework.Assert.assertTrue;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.MetricTestHelper.createEmptyTimelineClusterMetric;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.MetricTestHelper.createEmptyTimelineMetric;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.MetricTestHelper.createMetricHostAggregate;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.MetricTestHelper.prepareSingleTimelineMetric;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.PhoenixHBaseAccessor.DATE_TIERED_COMPACTION_POLICY;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.PhoenixHBaseAccessor.FIFO_COMPACTION_POLICY_CLASS;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.PhoenixHBaseAccessor.HSTORE_COMPACTION_CLASS_KEY;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.PhoenixHBaseAccessor.HSTORE_ENGINE_CLASS;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.METRICS_AGGREGATE_MINUTE_TABLE_NAME;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.METRICS_RECORD_TABLE_NAME;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.PHOENIX_TABLES;
+
+import java.io.IOException;
+import java.lang.reflect.Field;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.HColumnDescriptor;
 import org.apache.hadoop.hbase.HTableDescriptor;
@@ -40,35 +64,10 @@ import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.DefaultCondition;
 import org.junit.Test;
 
-import java.io.IOException;
-import java.lang.reflect.Array;
-import java.lang.reflect.Field;
-import java.sql.SQLException;
-import java.sql.Connection;
-import java.sql.PreparedStatement;
-import java.sql.ResultSet;
-import java.sql.SQLException;
-import java.sql.Statement;
-import java.util.Arrays;
-import java.util.Collections;
-import java.util.HashMap;
-import java.util.Iterator;
-import java.util.List;
-import java.util.Map;
+import com.google.common.collect.ArrayListMultimap;
+import com.google.common.collect.Multimap;
 
-import static junit.framework.Assert.assertEquals;
-import static junit.framework.Assert.assertTrue;
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.MetricTestHelper.createEmptyTimelineClusterMetric;
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.MetricTestHelper.createEmptyTimelineMetric;
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.MetricTestHelper.createMetricHostAggregate;
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.MetricTestHelper.prepareSingleTimelineMetric;
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.PhoenixHBaseAccessor.DATE_TIERED_COMPACTION_POLICY;
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.PhoenixHBaseAccessor.FIFO_COMPACTION_POLICY_CLASS;
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.PhoenixHBaseAccessor.HSTORE_COMPACTION_CLASS_KEY;
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.PhoenixHBaseAccessor.HSTORE_ENGINE_CLASS;
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.METRICS_AGGREGATE_MINUTE_TABLE_NAME;
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.METRICS_RECORD_TABLE_NAME;
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.PHOENIX_TABLES;
+import junit.framework.Assert;
 
 
 
@@ -93,7 +92,8 @@ public class ITPhoenixHBaseAccessor extends AbstractMiniHBaseClusterTest {
     // WHEN
     long endTime = ctime + minute;
     Condition condition = new DefaultCondition(
-      Collections.singletonList("disk_free"), Collections.singletonList("local1"),
+      new ArrayList<String>() {{ add("disk_free"); }},
+      Collections.singletonList("local1"),
       null, null, startTime, endTime, Precision.SECONDS, null, true);
     TimelineMetrics timelineMetrics = hdb.getMetricRecords(condition,
       singletonValueFunctionMap("disk_free"));
@@ -117,18 +117,19 @@ public class ITPhoenixHBaseAccessor extends AbstractMiniHBaseClusterTest {
     long ctime = startTime;
     long minute = 60 * 1000;
     hdb.insertMetricRecords(prepareSingleTimelineMetric(ctime, "local1",
-        "disk_free", 1));
+      "disk_free", 1));
     hdb.insertMetricRecords(prepareSingleTimelineMetric(ctime + minute, "local1",
-        "disk_free", 2));
+      "disk_free", 2));
     hdb.insertMetricRecords(prepareSingleTimelineMetric(ctime, "local2",
-        "disk_free", 2));
+      "disk_free", 2));
     long endTime = ctime + minute;
     boolean success = aggregatorMinute.doWork(startTime, endTime);
     assertTrue(success);
 
     // WHEN
     Condition condition = new DefaultCondition(
-      Collections.singletonList("disk_free"), Collections.singletonList("local1"),
+      new ArrayList<String>() {{ add("disk_free"); }},
+      Collections.singletonList("local1"),
       null, null, startTime, endTime, Precision.MINUTES, null, false);
     TimelineMetrics timelineMetrics = hdb.getMetricRecords(condition,
       singletonValueFunctionMap("disk_free"));
@@ -151,10 +152,10 @@ public class ITPhoenixHBaseAccessor extends AbstractMiniHBaseClusterTest {
       TimelineMetricAggregatorFactory.createTimelineMetricAggregatorHourly(hdb, new Configuration(), null);
 
     MetricHostAggregate expectedAggregate =
-        createMetricHostAggregate(2.0, 0.0, 20, 15.0);
+      createMetricHostAggregate(2.0, 0.0, 20, 15.0);
     Map<TimelineMetric, MetricHostAggregate>
-        aggMap = new HashMap<TimelineMetric,
-        MetricHostAggregate>();
+      aggMap = new HashMap<TimelineMetric,
+      MetricHostAggregate>();
 
     long startTime = System.currentTimeMillis();
     int min_5 = 5 * 60 * 1000;
@@ -179,7 +180,8 @@ public class ITPhoenixHBaseAccessor extends AbstractMiniHBaseClusterTest {
 
     // WHEN
     Condition condition = new DefaultCondition(
-      Collections.singletonList("disk_used"), Collections.singletonList("test_host"),
+      new ArrayList<String>() {{ add("disk_used"); }},
+      Collections.singletonList("test_host"),
       "test_app", null, startTime, endTime, Precision.HOURS, null, true);
     TimelineMetrics timelineMetrics = hdb.getMetricRecords(condition,
       singletonValueFunctionMap("disk_used"));
@@ -200,20 +202,20 @@ public class ITPhoenixHBaseAccessor extends AbstractMiniHBaseClusterTest {
     // GIVEN
     TimelineMetricAggregator agg =
       TimelineMetricAggregatorFactory.createTimelineClusterAggregatorSecond(
-        hdb, new Configuration(), new TimelineMetricMetadataManager(hdb, new Configuration()), null);
+        hdb, new Configuration(), new TimelineMetricMetadataManager(new Configuration(), hdb), null);
 
     long startTime = System.currentTimeMillis();
     long ctime = startTime + 1;
     long minute = 60 * 1000;
     hdb.insertMetricRecords(prepareSingleTimelineMetric(ctime, "local1",
-        "disk_free", 1));
+      "disk_free", 1));
     hdb.insertMetricRecords(prepareSingleTimelineMetric(ctime, "local2",
-        "disk_free", 2));
+      "disk_free", 2));
     ctime += minute;
     hdb.insertMetricRecords(prepareSingleTimelineMetric(ctime, "local1",
-        "disk_free", 2));
+      "disk_free", 2));
     hdb.insertMetricRecords(prepareSingleTimelineMetric(ctime, "local2",
-        "disk_free", 1));
+      "disk_free", 1));
 
     long endTime = ctime + minute + 1;
     boolean success = agg.doWork(startTime, endTime);
@@ -221,8 +223,8 @@ public class ITPhoenixHBaseAccessor extends AbstractMiniHBaseClusterTest {
 
     // WHEN
     Condition condition = new DefaultCondition(
-        Collections.singletonList("disk_free"), null, null, null,
-        startTime, endTime, Precision.SECONDS, null, true);
+      new ArrayList<String>() {{ add("disk_free"); }},
+      null, null, null, startTime, endTime, Precision.SECONDS, null, true);
     TimelineMetrics timelineMetrics = hdb.getAggregateMetricRecords(condition,
       singletonValueFunctionMap("disk_free"));
 
@@ -240,7 +242,7 @@ public class ITPhoenixHBaseAccessor extends AbstractMiniHBaseClusterTest {
     // GIVEN
     TimelineMetricAggregator agg =
       TimelineMetricAggregatorFactory.createTimelineClusterAggregatorSecond(hdb,
-        new Configuration(), new TimelineMetricMetadataManager(hdb, new Configuration()), null);
+        new Configuration(), new TimelineMetricMetadataManager(new Configuration(), hdb), null);
 
     long startTime = System.currentTimeMillis();
     long ctime = startTime + 1;
@@ -261,8 +263,8 @@ public class ITPhoenixHBaseAccessor extends AbstractMiniHBaseClusterTest {
 
     // WHEN
     Condition condition = new DefaultCondition(
-      Collections.singletonList("disk_free"), null, null, null,
-      null, null, Precision.SECONDS, null, true);
+      new ArrayList<String>() {{ add("disk_free"); }},
+      null, null, null, null, null, Precision.SECONDS, null, true);
 
     Multimap<String, List<Function>> mmap = ArrayListMultimap.create();
     mmap.put("disk_free", Collections.singletonList(new Function(Function.ReadFunction.SUM, null)));
@@ -288,14 +290,14 @@ public class ITPhoenixHBaseAccessor extends AbstractMiniHBaseClusterTest {
     long minute = 60 * 1000;
 
     Map<TimelineClusterMetric, MetricClusterAggregate> records =
-        new HashMap<TimelineClusterMetric, MetricClusterAggregate>();
+      new HashMap<TimelineClusterMetric, MetricClusterAggregate>();
 
     records.put(createEmptyTimelineClusterMetric(ctime),
       new MetricClusterAggregate(4.0, 2, 0.0, 4.0, 0.0));
     records.put(createEmptyTimelineClusterMetric(ctime += minute),
       new MetricClusterAggregate(4.0, 2, 0.0, 4.0, 0.0));
     records.put(createEmptyTimelineClusterMetric(ctime += minute),
-        new MetricClusterAggregate(4.0, 2, 0.0, 4.0, 0.0));
+      new MetricClusterAggregate(4.0, 2, 0.0, 4.0, 0.0));
     records.put(createEmptyTimelineClusterMetric(ctime += minute),
       new MetricClusterAggregate(4.0, 2, 0.0, 4.0, 0.0));
 
@@ -305,8 +307,8 @@ public class ITPhoenixHBaseAccessor extends AbstractMiniHBaseClusterTest {
 
     // WHEN
     Condition condition = new DefaultCondition(
-        Collections.singletonList("disk_used"), null, null, null,
-        startTime, ctime + minute, Precision.HOURS, null, true);
+      new ArrayList<String>() {{ add("disk_used"); }},
+      null, null, null, startTime, ctime + minute, Precision.HOURS, null, true);
     TimelineMetrics timelineMetrics = hdb.getAggregateMetricRecords(condition,
       singletonValueFunctionMap("disk_used"));
 
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/PhoenixHBaseAccessorTest.java b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/PhoenixHBaseAccessorTest.java
index d668178..bf9246d 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/PhoenixHBaseAccessorTest.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/PhoenixHBaseAccessorTest.java
@@ -35,6 +35,7 @@ import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL;
 import org.apache.phoenix.exception.PhoenixIOException;
 import org.easymock.EasyMock;
+import org.junit.Before;
 import org.junit.Test;
 import org.junit.runner.RunWith;
 import org.powermock.api.easymock.PowerMock;
@@ -42,6 +43,7 @@ import org.powermock.core.classloader.annotations.PrepareForTest;
 import org.powermock.modules.junit4.PowerMockRunner;
 
 import java.io.IOException;
+import java.lang.reflect.Field;
 import java.sql.Connection;
 import java.sql.PreparedStatement;
 import java.sql.ResultSet;
@@ -53,22 +55,36 @@ import java.util.LinkedList;
 import java.util.List;
 import java.util.Map;
 
+import static org.easymock.EasyMock.expect;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.fail;
+import static org.powermock.api.easymock.PowerMock.*;
 
 @RunWith(PowerMockRunner.class)
-@PrepareForTest(PhoenixTransactSQL.class)
+@PrepareForTest({PhoenixTransactSQL.class, TimelineMetricConfiguration.class})
 public class PhoenixHBaseAccessorTest {
   private static final String ZOOKEEPER_QUORUM = "hbase.zookeeper.quorum";
 
-  @Test
-  public void testGetMetricRecords() throws SQLException, IOException {
+  PhoenixConnectionProvider connectionProvider;
+  PhoenixHBaseAccessor accessor;
 
+  @Before
+  public void setupConf() throws Exception {
     Configuration hbaseConf = new Configuration();
     hbaseConf.setStrings(ZOOKEEPER_QUORUM, "quorum");
     Configuration metricsConf = new Configuration();
+    metricsConf.setStrings(TimelineMetricConfiguration.TIMELINE_METRICS_CACHE_SIZE, "1");
+    metricsConf.setStrings(TimelineMetricConfiguration.TIMELINE_METRICS_CACHE_COMMIT_INTERVAL, "100");
+    metricsConf.setStrings(
+      TimelineMetricConfiguration.TIMELINE_METRIC_AGGREGATOR_SINK_CLASS,
+      "org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricsAggregatorMemorySink");
+
+    TimelineMetricConfiguration conf = new TimelineMetricConfiguration(hbaseConf, metricsConf);
+    mockStatic(TimelineMetricConfiguration.class);
+    expect(TimelineMetricConfiguration.getInstance()).andReturn(conf).anyTimes();
+    replayAll();
 
-    PhoenixConnectionProvider connectionProvider = new PhoenixConnectionProvider() {
+    connectionProvider = new PhoenixConnectionProvider() {
       @Override
       public HBaseAdmin getHBaseAdmin() throws IOException {
         return null;
@@ -80,21 +96,24 @@ public class PhoenixHBaseAccessorTest {
       }
     };
 
-    PhoenixHBaseAccessor accessor = new PhoenixHBaseAccessor(hbaseConf, metricsConf, connectionProvider);
+    accessor = new PhoenixHBaseAccessor(connectionProvider);
+  }
 
+  @Test
+  public void testGetMetricRecords() throws SQLException, IOException {
     List<String> metricNames = new LinkedList<>();
     List<String> hostnames = new LinkedList<>();
     Multimap<String, List<Function>> metricFunctions = ArrayListMultimap.create();
 
-    PowerMock.mockStatic(PhoenixTransactSQL.class);
+    mockStatic(PhoenixTransactSQL.class);
     PreparedStatement preparedStatementMock = EasyMock.createNiceMock(PreparedStatement.class);
     Condition condition = new DefaultCondition(metricNames, hostnames, "appid", "instanceid", 123L, 234L, Precision.SECONDS, 10, true);
-    EasyMock.expect(PhoenixTransactSQL.prepareGetMetricsSqlStmt(null, condition)).andReturn(preparedStatementMock).once();
+    expect(PhoenixTransactSQL.prepareGetMetricsSqlStmt(null, condition)).andReturn(preparedStatementMock).once();
     ResultSet rsMock = EasyMock.createNiceMock(ResultSet.class);
-    EasyMock.expect(preparedStatementMock.executeQuery()).andReturn(rsMock);
+    expect(preparedStatementMock.executeQuery()).andReturn(rsMock);
 
 
-    PowerMock.replayAll();
+    replayAll();
     EasyMock.replay(preparedStatementMock, rsMock);
 
     // Check when startTime < endTime
@@ -105,104 +124,64 @@ public class PhoenixHBaseAccessorTest {
     TimelineMetrics tml2 = accessor.getMetricRecords(condition2, metricFunctions);
     assertEquals(0, tml2.getMetrics().size());
 
-    PowerMock.verifyAll();
+    verifyAll();
     EasyMock.verify(preparedStatementMock, rsMock);
   }
 
   @Test
-  public void testGetMetricRecordsIOException()
-    throws SQLException, IOException {
-
-    Configuration hbaseConf = new Configuration();
-    hbaseConf.setStrings(ZOOKEEPER_QUORUM, "quorum");
-    Configuration metricsConf = new Configuration();
-
-    PhoenixConnectionProvider connectionProvider = new PhoenixConnectionProvider() {
-      @Override
-      public HBaseAdmin getHBaseAdmin() throws IOException {
-        return null;
-      }
-
-      @Override
-      public Connection getConnection() throws SQLException {
-        return null;
-      }
-    };
-
-    PhoenixHBaseAccessor accessor = new PhoenixHBaseAccessor(hbaseConf, metricsConf, connectionProvider);
-
+  public void testGetMetricRecordsIOException() throws SQLException, IOException {
     List<String> metricNames = new LinkedList<>();
     List<String> hostnames = new LinkedList<>();
     Multimap<String, List<Function>> metricFunctions = ArrayListMultimap.create();
 
-    PowerMock.mockStatic(PhoenixTransactSQL.class);
+    mockStatic(PhoenixTransactSQL.class);
     PreparedStatement preparedStatementMock = EasyMock.createNiceMock(PreparedStatement.class);
     Condition condition = new DefaultCondition(metricNames, hostnames, "appid", "instanceid", 123L, 234L, Precision.SECONDS, 10, true);
-    EasyMock.expect(PhoenixTransactSQL.prepareGetMetricsSqlStmt(null, condition)).andReturn(preparedStatementMock).once();
+    expect(PhoenixTransactSQL.prepareGetMetricsSqlStmt(null, condition)).andReturn(preparedStatementMock).once();
     ResultSet rsMock = EasyMock.createNiceMock(ResultSet.class);
     RuntimeException runtimeException = EasyMock.createNiceMock(RuntimeException.class);
     IOException io = EasyMock.createNiceMock(IOException.class);
-    EasyMock.expect(preparedStatementMock.executeQuery()).andThrow(runtimeException);
-    EasyMock.expect(runtimeException.getCause()).andReturn(io).atLeastOnce();
+    expect(preparedStatementMock.executeQuery()).andThrow(runtimeException);
+    expect(runtimeException.getCause()).andReturn(io).atLeastOnce();
     StackTraceElement stackTrace[] = new StackTraceElement[]{new StackTraceElement("TimeRange","method","file",1)};
-    EasyMock.expect(io.getStackTrace()).andReturn(stackTrace).atLeastOnce();
+    expect(io.getStackTrace()).andReturn(stackTrace).atLeastOnce();
 
 
-    PowerMock.replayAll();
+    replayAll();
     EasyMock.replay(preparedStatementMock, rsMock, io, runtimeException);
 
     TimelineMetrics tml = accessor.getMetricRecords(condition, metricFunctions);
 
     assertEquals(0, tml.getMetrics().size());
 
-    PowerMock.verifyAll();
+    verifyAll();
     EasyMock.verify(preparedStatementMock, rsMock, io, runtimeException);
   }
 
   @Test
-  public void testGetMetricRecordsPhoenixIOExceptionDoNotRetryException()
-    throws SQLException, IOException {
-
-    Configuration hbaseConf = new Configuration();
-    hbaseConf.setStrings(ZOOKEEPER_QUORUM, "quorum");
-    Configuration metricsConf = new Configuration();
-
-    PhoenixConnectionProvider connectionProvider = new PhoenixConnectionProvider() {
-      @Override
-      public HBaseAdmin getHBaseAdmin() throws IOException {
-        return null;
-      }
-
-      @Override
-      public Connection getConnection() throws SQLException {
-        return null;
-      }
-    };
-
-    PhoenixHBaseAccessor accessor = new PhoenixHBaseAccessor(hbaseConf, metricsConf, connectionProvider);
-
+  public void testGetMetricRecordsPhoenixIOExceptionDoNotRetryException() throws SQLException, IOException {
     List<String> metricNames = new LinkedList<>();
     List<String> hostnames = new LinkedList<>();
     Multimap<String, List<Function>> metricFunctions = ArrayListMultimap.create();
 
-    PowerMock.mockStatic(PhoenixTransactSQL.class);
+    mockStatic(PhoenixTransactSQL.class);
     PreparedStatement preparedStatementMock = EasyMock.createNiceMock(PreparedStatement.class);
     Condition condition = new DefaultCondition(metricNames, hostnames, "appid", "instanceid", null, null, Precision.SECONDS, 10, true);
-    EasyMock.expect(PhoenixTransactSQL.prepareGetLatestMetricSqlStmt(null, condition)).andReturn(preparedStatementMock).once();
+    expect(PhoenixTransactSQL.prepareGetLatestMetricSqlStmt(null, condition)).andReturn(preparedStatementMock).once();
     PhoenixTransactSQL.setSortMergeJoinEnabled(true);
     EasyMock.expectLastCall();
     ResultSet rsMock = EasyMock.createNiceMock(ResultSet.class);
     PhoenixIOException pioe1 = EasyMock.createNiceMock(PhoenixIOException.class);
     PhoenixIOException pioe2 = EasyMock.createNiceMock(PhoenixIOException.class);
     DoNotRetryIOException dnrioe = EasyMock.createNiceMock(DoNotRetryIOException.class);
-    EasyMock.expect(preparedStatementMock.executeQuery()).andThrow(pioe1);
-    EasyMock.expect(pioe1.getCause()).andReturn(pioe2).atLeastOnce();
-    EasyMock.expect(pioe2.getCause()).andReturn(dnrioe).atLeastOnce();
+    expect(preparedStatementMock.executeQuery()).andThrow(pioe1);
+    expect(pioe1.getCause()).andReturn(pioe2).atLeastOnce();
+    expect(pioe2.getCause()).andReturn(dnrioe).atLeastOnce();
     StackTraceElement stackTrace[] = new StackTraceElement[]{new StackTraceElement("HashJoinRegionScanner","method","file",1)};
-    EasyMock.expect(dnrioe.getStackTrace()).andReturn(stackTrace).atLeastOnce();
+    expect(dnrioe.getStackTrace()).andReturn(stackTrace).atLeastOnce();
 
 
-    PowerMock.replayAll();
+    replayAll();
     EasyMock.replay(preparedStatementMock, rsMock, pioe1, pioe2, dnrioe);
     try {
       accessor.getMetricRecords(condition, metricFunctions);
@@ -210,20 +189,17 @@ public class PhoenixHBaseAccessorTest {
     } catch (Exception e) {
       //NOP
     }
-    PowerMock.verifyAll();
+    verifyAll();
   }
 
   @Test
   public void testMetricsCacheCommittingWhenFull() throws IOException, SQLException {
     Configuration hbaseConf = new Configuration();
     hbaseConf.setStrings(ZOOKEEPER_QUORUM, "quorum");
-    Configuration metricsConf = new Configuration();
-    metricsConf.setStrings(TimelineMetricConfiguration.TIMELINE_METRICS_CACHE_SIZE, "1");
-    metricsConf.setStrings(TimelineMetricConfiguration.TIMELINE_METRICS_CACHE_COMMIT_INTERVAL, "100");
-    final Connection connection = EasyMock.createNiceMock(Connection.class);
 
+    final Connection connection = EasyMock.createNiceMock(Connection.class);
 
-    PhoenixHBaseAccessor accessor = new PhoenixHBaseAccessor(hbaseConf, metricsConf) {
+    accessor = new PhoenixHBaseAccessor(connectionProvider) {
       @Override
       public void commitMetrics(Collection<TimelineMetrics> timelineMetricsCollection) {
         try {
@@ -235,7 +211,7 @@ public class PhoenixHBaseAccessorTest {
     };
 
     TimelineMetrics timelineMetrics = EasyMock.createNiceMock(TimelineMetrics.class);
-    EasyMock.expect(timelineMetrics.getMetrics()).andReturn(Collections.singletonList(new TimelineMetric())).anyTimes();
+    expect(timelineMetrics.getMetrics()).andReturn(Collections.singletonList(new TimelineMetric())).anyTimes();
     connection.commit();
     EasyMock.expectLastCall().once();
 
@@ -250,44 +226,33 @@ public class PhoenixHBaseAccessorTest {
 
   @Test
   public void testMetricsAggregatorSink() throws IOException, SQLException {
-    Configuration hbaseConf = new Configuration();
-    hbaseConf.setStrings(ZOOKEEPER_QUORUM, "quorum");
-    Configuration metricsConf = new Configuration();
     Map<TimelineClusterMetric, MetricClusterAggregate> clusterAggregateMap =
         new HashMap<>();
     Map<TimelineClusterMetric, MetricHostAggregate> clusterTimeAggregateMap =
         new HashMap<>();
     Map<TimelineMetric, MetricHostAggregate> hostAggregateMap = new HashMap<>();
 
-    metricsConf.setStrings(
-        TimelineMetricConfiguration.TIMELINE_METRICS_CACHE_SIZE, "1");
-    metricsConf.setStrings(
-        TimelineMetricConfiguration.TIMELINE_METRICS_CACHE_COMMIT_INTERVAL,
-        "100");
-    metricsConf.setStrings(
-        TimelineMetricConfiguration.TIMELINE_METRIC_AGGREGATOR_SINK_CLASS,
-        "org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricsAggregatorMemorySink");
 
     final Connection connection = EasyMock.createNiceMock(Connection.class);
-    final PreparedStatement statement =
-        EasyMock.createNiceMock(PreparedStatement.class);
-    EasyMock.expect(connection.prepareStatement(EasyMock.anyString()))
-        .andReturn(statement).anyTimes();
+    final PreparedStatement statement = EasyMock.createNiceMock(PreparedStatement.class);
+    expect(connection.prepareStatement(EasyMock.anyString())).andReturn(statement).anyTimes();
     EasyMock.replay(statement);
     EasyMock.replay(connection);
 
-    PhoenixConnectionProvider connectionProvider =
-        new PhoenixConnectionProvider() {
-          @Override
-          public HBaseAdmin getHBaseAdmin() throws IOException {
-            return null;
-          }
+    connectionProvider = new PhoenixConnectionProvider() {
+
+      @Override
+      public HBaseAdmin getHBaseAdmin() throws IOException {
+        return null;
+      }
+
+      @Override
+      public Connection getConnection() throws SQLException {
+        return connection;
+      }
+    };
 
-          @Override
-          public Connection getConnection() throws SQLException {
-            return connection;
-          }
-        };
+    accessor = new PhoenixHBaseAccessor(connectionProvider);
 
     TimelineClusterMetric clusterMetric =
         new TimelineClusterMetric("metricName", "appId", "instanceId",
@@ -303,12 +268,10 @@ public class PhoenixHBaseAccessorTest {
     clusterTimeAggregateMap.put(clusterMetric, new MetricHostAggregate());
     hostAggregateMap.put(timelineMetric, new MetricHostAggregate());
 
-    PhoenixHBaseAccessor accessor =
-        new PhoenixHBaseAccessor(hbaseConf, metricsConf, connectionProvider);
     accessor.saveClusterAggregateRecords(clusterAggregateMap);
     accessor.saveHostAggregateRecords(hostAggregateMap,
         PhoenixTransactSQL.METRICS_AGGREGATE_MINUTE_TABLE_NAME);
-    accessor.saveClusterTimeAggregateRecords(clusterTimeAggregateMap,
+    accessor.saveClusterAggregateRecordsSecond(clusterTimeAggregateMap,
         PhoenixTransactSQL.METRICS_CLUSTER_AGGREGATE_HOURLY_TABLE_NAME);
 
     TimelineMetricsAggregatorMemorySink memorySink =
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricStoreWatcherTest.java b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricStoreWatcherTest.java
index 54b8442..dd0378d 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricStoreWatcherTest.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricStoreWatcherTest.java
@@ -66,7 +66,7 @@ public class TimelineMetricStoreWatcherTest {
     replay(metricStore);
 
     TimelineMetricStoreWatcher timelineMetricStoreWatcher =
-      new TimelineMetricStoreWatcher(metricStore, new TimelineMetricConfiguration());
+      new TimelineMetricStoreWatcher(metricStore, TimelineMetricConfiguration.getInstance());
     timelineMetricStoreWatcher.run();
     timelineMetricStoreWatcher.run();
     timelineMetricStoreWatcher.run();
@@ -97,7 +97,7 @@ public class TimelineMetricStoreWatcherTest {
     replayAll();
 
     TimelineMetricStoreWatcher timelineMetricStoreWatcher =
-      new TimelineMetricStoreWatcher(metricStore, new TimelineMetricConfiguration());
+      new TimelineMetricStoreWatcher(metricStore, TimelineMetricConfiguration.getInstance());
     timelineMetricStoreWatcher.run();
     timelineMetricStoreWatcher.run();
     timelineMetricStoreWatcher.run();
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/ITClusterAggregator.java b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/ITClusterAggregator.java
index 07fd85d..86c9b40 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/ITClusterAggregator.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/ITClusterAggregator.java
@@ -18,7 +18,29 @@
 package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators;
 
 
-import junit.framework.Assert;
+import static junit.framework.Assert.assertEquals;
+import static junit.framework.Assert.assertNotNull;
+import static junit.framework.Assert.assertTrue;
+import static junit.framework.Assert.fail;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.MetricTestHelper.createEmptyTimelineClusterMetric;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.MetricTestHelper.prepareSingleTimelineMetric;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.CLUSTER_AGGREGATOR_APP_IDS;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.GET_CLUSTER_AGGREGATE_SQL;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.METRICS_CLUSTER_AGGREGATE_HOURLY_TABLE_NAME;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.METRICS_CLUSTER_AGGREGATE_TABLE_NAME;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.NATIVE_TIME_RANGE_DELTA;
+
+import java.sql.Connection;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.sql.Statement;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.TreeMap;
+
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.metrics2.sink.timeline.MetricClusterAggregate;
 import org.apache.hadoop.metrics2.sink.timeline.MetricHostAggregate;
@@ -26,42 +48,13 @@ import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
 import org.apache.hadoop.metrics2.sink.timeline.TimelineMetrics;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.AbstractMiniHBaseClusterTest;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.MetricTestHelper;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.PhoenixHBaseAccessor;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.TimelineClusterMetric;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.TimelineMetricAggregator;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.TimelineMetricAggregatorFactory;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.TimelineMetricReadHelper;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.discovery.TimelineMetricMetadataManager;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.Condition;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.DefaultCondition;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL;
-import org.apache.log4j.Level;
-import org.apache.log4j.Logger;
-import org.junit.After;
-import org.junit.Before;
 import org.junit.Test;
 
-import java.sql.Connection;
-import java.sql.PreparedStatement;
-import java.sql.ResultSet;
-import java.sql.SQLException;
-import java.sql.Statement;
-import java.util.Collections;
-import java.util.HashMap;
-import java.util.Map;
-import java.util.TreeMap;
-
-import static junit.framework.Assert.assertEquals;
-import static junit.framework.Assert.assertNotNull;
-import static junit.framework.Assert.assertTrue;
-import static junit.framework.Assert.fail;
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.MetricTestHelper.createEmptyTimelineClusterMetric;
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.MetricTestHelper.prepareSingleTimelineMetric;
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.CLUSTER_AGGREGATOR_APP_IDS;
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.GET_CLUSTER_AGGREGATE_SQL;
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.METRICS_CLUSTER_AGGREGATE_HOURLY_TABLE_NAME;
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.METRICS_CLUSTER_AGGREGATE_TABLE_NAME;
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.NATIVE_TIME_RANGE_DELTA;
+import junit.framework.Assert;
 
 public class ITClusterAggregator extends AbstractMiniHBaseClusterTest {
   private final TimelineMetricReadHelper metricReader = new TimelineMetricReadHelper(false);
@@ -77,7 +70,7 @@ public class ITClusterAggregator extends AbstractMiniHBaseClusterTest {
     // GIVEN
     TimelineMetricAggregator agg =
       TimelineMetricAggregatorFactory.createTimelineClusterAggregatorSecond(hdb,
-        getConfigurationForTest(false), new TimelineMetricMetadataManager(hdb, new Configuration()), null);
+        getConfigurationForTest(false), new TimelineMetricMetadataManager(new Configuration(), hdb), null);
     TimelineMetricReadHelper readHelper = new TimelineMetricReadHelper(false);
 
     long startTime = System.currentTimeMillis();
@@ -130,7 +123,7 @@ public class ITClusterAggregator extends AbstractMiniHBaseClusterTest {
     // GIVEN
     TimelineMetricAggregator agg =
       TimelineMetricAggregatorFactory.createTimelineClusterAggregatorSecond(hdb,
-        getConfigurationForTest(false), new TimelineMetricMetadataManager(hdb, new Configuration()), null);
+        getConfigurationForTest(false), new TimelineMetricMetadataManager(new Configuration(), hdb), null);
     TimelineMetricReadHelper readHelper = new TimelineMetricReadHelper(false);
 
     long startTime = System.currentTimeMillis();
@@ -206,7 +199,7 @@ public class ITClusterAggregator extends AbstractMiniHBaseClusterTest {
     // GIVEN
     TimelineMetricAggregator agg =
       TimelineMetricAggregatorFactory.createTimelineClusterAggregatorSecond(hdb,
-        getConfigurationForTest(false), new TimelineMetricMetadataManager(hdb, new Configuration()), null);
+        getConfigurationForTest(false), new TimelineMetricMetadataManager(new Configuration(), hdb), null);
     TimelineMetricReadHelper readHelper = new TimelineMetricReadHelper(false);
 
     // here we put some metrics that will be aggregated
@@ -290,7 +283,7 @@ public class ITClusterAggregator extends AbstractMiniHBaseClusterTest {
       MetricTestHelper.createMetricHostAggregate(4.0, 0.0, 2, 4.0));
 
 
-    hdb.saveClusterTimeAggregateRecords(records, METRICS_CLUSTER_AGGREGATE_HOURLY_TABLE_NAME);
+    hdb.saveClusterAggregateRecordsSecond(records, METRICS_CLUSTER_AGGREGATE_HOURLY_TABLE_NAME);
 
     // WHEN
     agg.doWork(startTime, ctime + hour + 1000);
@@ -490,7 +483,7 @@ public class ITClusterAggregator extends AbstractMiniHBaseClusterTest {
     conf.set(CLUSTER_AGGREGATOR_APP_IDS, "app1");
     TimelineMetricAggregator agg =
       TimelineMetricAggregatorFactory.createTimelineClusterAggregatorSecond(hdb,
-        conf, new TimelineMetricMetadataManager(hdb, new Configuration()), null);
+        conf, new TimelineMetricMetadataManager(new Configuration(), hdb), null);
     TimelineMetricReadHelper readHelper = new TimelineMetricReadHelper(false);
 
     long startTime = System.currentTimeMillis();
@@ -511,14 +504,13 @@ public class ITClusterAggregator extends AbstractMiniHBaseClusterTest {
 
     //THEN
     Condition condition = new DefaultCondition(
-      Collections.singletonList("cpu_user"), null, "app1", null,
+      new ArrayList<String>() {{ add("cpu_user"); }}, null, "app1", null,
       startTime, endTime, null, null, true);
     condition.setStatement(String.format(GET_CLUSTER_AGGREGATE_SQL,
       PhoenixTransactSQL.getNaiveTimeRangeHint(startTime, NATIVE_TIME_RANGE_DELTA),
       METRICS_CLUSTER_AGGREGATE_TABLE_NAME));
 
-    PreparedStatement pstmt = PhoenixTransactSQL.prepareGetMetricsSqlStmt
-      (conn, condition);
+    PreparedStatement pstmt = PhoenixTransactSQL.prepareGetMetricsSqlStmt(conn, condition);
     ResultSet rs = pstmt.executeQuery();
 
     int recordCount = 0;
@@ -542,7 +534,7 @@ public class ITClusterAggregator extends AbstractMiniHBaseClusterTest {
   public void testClusterAggregateMetricNormalization() throws Exception {
     TimelineMetricAggregator agg =
       TimelineMetricAggregatorFactory.createTimelineClusterAggregatorSecond(hdb,
-        getConfigurationForTest(false), new TimelineMetricMetadataManager(hdb, new Configuration()), null);
+        getConfigurationForTest(false), new TimelineMetricMetadataManager(new Configuration(), hdb), null);
     TimelineMetricReadHelper readHelper = new TimelineMetricReadHelper(false);
 
     // Sample data
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TestMetadataManager.java b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TestMetadataManager.java
index c62fd34..3adf770 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TestMetadataManager.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TestMetadataManager.java
@@ -45,7 +45,7 @@ public class TestMetadataManager extends AbstractMiniHBaseClusterTest {
   @Before
   public void insertDummyRecords() throws IOException, SQLException, URISyntaxException {
     // Initialize new manager
-    metadataManager = new TimelineMetricMetadataManager(hdb, new Configuration());
+    metadataManager = new TimelineMetricMetadataManager(new Configuration(), hdb);
     final long now = System.currentTimeMillis();
 
     TimelineMetrics timelineMetrics = new TimelineMetrics();
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TestMetadataSync.java b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TestMetadataSync.java
index 181abca..a524b13 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TestMetadataSync.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TestMetadataSync.java
@@ -68,8 +68,7 @@ public class TestMetadataSync {
 
     replay(configuration, hBaseAccessor);
 
-    TimelineMetricMetadataManager metadataManager = new
-      TimelineMetricMetadataManager(hBaseAccessor, configuration);
+    TimelineMetricMetadataManager metadataManager = new TimelineMetricMetadataManager(new Configuration(), hBaseAccessor);
 
     metadataManager.metricMetadataSync = new TimelineMetricMetadataSync(metadataManager);
 
@@ -110,8 +109,7 @@ public class TestMetadataSync {
 
     replay(configuration, hBaseAccessor);
 
-    TimelineMetricMetadataManager metadataManager = new
-      TimelineMetricMetadataManager(hBaseAccessor, configuration);
+    TimelineMetricMetadataManager metadataManager = new TimelineMetricMetadataManager(configuration, hBaseAccessor);
 
     metadataManager.putIfModifiedTimelineMetricMetadata(metadata1);
     metadataManager.putIfModifiedTimelineMetricMetadata(metadata2);
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/source/RawMetricsSourceTest.java b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/source/RawMetricsSourceTest.java
new file mode 100644
index 0000000..5d3aacb
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/source/RawMetricsSourceTest.java
@@ -0,0 +1,142 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.source;
+
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.source.InternalSourceProvider.SOURCE_NAME.RAW_METRICS;
+import static org.easymock.EasyMock.capture;
+import static org.easymock.EasyMock.createNiceMock;
+import static org.easymock.EasyMock.expect;
+import static org.easymock.EasyMock.expectLastCall;
+import static org.easymock.EasyMock.replay;
+import static org.easymock.EasyMock.verify;
+import static org.powermock.api.easymock.PowerMock.mockStatic;
+import static org.powermock.api.easymock.PowerMock.replayAll;
+
+import java.util.Collection;
+import java.util.Collections;
+import java.util.TreeMap;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
+import org.apache.hadoop.metrics2.sink.timeline.TimelineMetrics;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.sink.ExternalMetricsSink;
+import org.easymock.Capture;
+import org.junit.Before;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.powermock.core.classloader.annotations.PrepareForTest;
+import org.powermock.modules.junit4.PowerMockRunner;
+
+import junit.framework.Assert;
+
+@RunWith(PowerMockRunner.class)
+@PrepareForTest(TimelineMetricConfiguration.class)
+public class RawMetricsSourceTest {
+
+  @Before
+  public void setupConf() throws Exception {
+    TimelineMetricConfiguration conf = new TimelineMetricConfiguration(new
+      Configuration(), new Configuration());
+    mockStatic(TimelineMetricConfiguration.class);
+    expect(TimelineMetricConfiguration.getInstance()).andReturn(conf).anyTimes();
+    replayAll();
+  }
+
+  @Test
+  public void testRawMetricsSourcedAtFlushInterval() throws Exception {
+    InternalSourceProvider internalSourceProvider = new DefaultInternalMetricsSourceProvider();
+    ExternalMetricsSink rawMetricsSink = createNiceMock(ExternalMetricsSink.class);
+    expect(rawMetricsSink.getFlushSeconds()).andReturn(1);
+    expect(rawMetricsSink.getSinkTimeOutSeconds()).andReturn(1);
+    Capture<Collection<TimelineMetrics>> metricsCapture = new Capture<>();
+    rawMetricsSink.sinkMetricData(capture(metricsCapture));
+    expectLastCall();
+    replay(rawMetricsSink);
+
+    InternalMetricsSource rawMetricsSource = internalSourceProvider.getInternalMetricsSource(RAW_METRICS, 1, rawMetricsSink);
+    TimelineMetrics timelineMetrics = new TimelineMetrics();
+
+    final long now = System.currentTimeMillis();
+    TimelineMetric metric1 = new TimelineMetric();
+    metric1.setMetricName("m1");
+    metric1.setAppId("a1");
+    metric1.setInstanceId("i1");
+    metric1.setHostName("h1");
+    metric1.setStartTime(now - 200);
+    metric1.setMetricValues(new TreeMap<Long, Double>() {{
+      put(now - 100, 1.0);
+      put(now - 200, 2.0);
+    }});
+    timelineMetrics.getMetrics().add(metric1);
+
+    rawMetricsSource.publishTimelineMetrics(Collections.singletonList(timelineMetrics));
+
+    verify(rawMetricsSink);
+  }
+
+  @Test(timeout = 10000)
+  public void testRawMetricsCachedAndSourced() throws Exception {
+    ExternalMetricsSink rawMetricsSink = createNiceMock(ExternalMetricsSink.class);
+    expect(rawMetricsSink.getFlushSeconds()).andReturn(2).anyTimes();
+    expect(rawMetricsSink.getSinkTimeOutSeconds()).andReturn(1).anyTimes();
+
+    class CaptureOnce<T> extends Capture<T> {
+      @Override
+      public void setValue(T value) {
+        if (!hasCaptured()) {
+          super.setValue(value);
+        }
+      }
+    }
+    Capture<Collection<TimelineMetrics>> metricsCapture = new CaptureOnce<>();
+
+    rawMetricsSink.sinkMetricData(capture(metricsCapture));
+    expectLastCall();
+    replay(rawMetricsSink);
+
+    InternalSourceProvider internalSourceProvider = new DefaultInternalMetricsSourceProvider();
+    InternalMetricsSource rawMetricsSource = internalSourceProvider.getInternalMetricsSource(RAW_METRICS, 1, rawMetricsSink);
+    TimelineMetrics timelineMetrics = new TimelineMetrics();
+
+    final long now = System.currentTimeMillis();
+    TimelineMetric metric1 = new TimelineMetric();
+    metric1.setMetricName("m1");
+    metric1.setAppId("a1");
+    metric1.setInstanceId("i1");
+    metric1.setHostName("h1");
+    metric1.setStartTime(now - 200);
+    metric1.setTimestamp(now - 200);
+    metric1.setMetricValues(new TreeMap<Long, Double>() {{
+      put(now - 100, 1.0);
+      put(now - 200, 2.0);
+    }});
+    timelineMetrics.getMetrics().add(metric1);
+
+    rawMetricsSource.publishTimelineMetrics(Collections.singletonList(timelineMetrics));
+
+    // Wait on eviction
+    Thread.sleep(5000);
+
+    verify(rawMetricsSink);
+
+    Assert.assertTrue(metricsCapture.hasCaptured());
+    Assert.assertTrue(metricsCapture.getValue().iterator().next().getMetrics().iterator().next().equals(metric1));
+  }
+
+}
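
The CaptureOnce helper in the new test above is worth a note: EasyMock's Capture normally
records the last value a mocked method received, which makes assertions racy when a
background flush can fire more than once before verification. Overriding setValue to keep
only the first value pins the assertion to the initial flush. The same idea reduced to a
self-contained sketch (the MetricsSink interface is a stand-in invented for illustration):

    import static org.easymock.EasyMock.capture;
    import static org.easymock.EasyMock.createNiceMock;
    import static org.easymock.EasyMock.expectLastCall;
    import static org.easymock.EasyMock.replay;
    import static org.easymock.EasyMock.verify;

    import org.easymock.Capture;

    public class CaptureOnceSketch {

      // Stand-in for the real ExternalMetricsSink; invented for this sketch.
      public interface MetricsSink {
        void sink(String payload);
      }

      // Keeps the first captured value even when the mock is invoked again.
      static class CaptureOnce<T> extends Capture<T> {
        @Override
        public void setValue(T value) {
          if (!hasCaptured()) {
            super.setValue(value);
          }
        }
      }

      public static void main(String[] args) {
        MetricsSink sink = createNiceMock(MetricsSink.class);
        Capture<String> first = new CaptureOnce<>();
        sink.sink(capture(first));
        expectLastCall().anyTimes();
        replay(sink);

        sink.sink("first flush");
        sink.sink("second flush"); // matched again, but the capture ignores it

        verify(sink);
        System.out.println(first.getValue()); // -> first flush
      }
    }
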
diff --git a/ambari-server/src/main/java/org/apache/ambari/server/controller/metrics/timeline/cache/TimelineMetricsCacheSizeOfEngine.java b/ambari-server/src/main/java/org/apache/ambari/server/controller/metrics/timeline/cache/TimelineMetricsCacheSizeOfEngine.java
index 2401d75..a3fd5f3 100644
--- a/ambari-server/src/main/java/org/apache/ambari/server/controller/metrics/timeline/cache/TimelineMetricsCacheSizeOfEngine.java
+++ b/ambari-server/src/main/java/org/apache/ambari/server/controller/metrics/timeline/cache/TimelineMetricsCacheSizeOfEngine.java
@@ -17,57 +17,27 @@
  */
 package org.apache.ambari.server.controller.metrics.timeline.cache;
 
-import java.util.Map;
-import java.util.TreeMap;
-
-import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
-import org.apache.hadoop.metrics2.sink.timeline.TimelineMetrics;
+import org.apache.hadoop.metrics2.sink.timeline.cache.TimelineMetricsEhCacheSizeOfEngine;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 import net.sf.ehcache.pool.Size;
 import net.sf.ehcache.pool.SizeOfEngine;
-import net.sf.ehcache.pool.impl.DefaultSizeOfEngine;
-import net.sf.ehcache.pool.sizeof.ReflectionSizeOf;
-import net.sf.ehcache.pool.sizeof.SizeOf;
 
 /**
  * Cache sizing engine that reduces reflective calls over the Object graph to
  * find total Heap usage.
  */
-public class TimelineMetricsCacheSizeOfEngine implements SizeOfEngine {
+public class TimelineMetricsCacheSizeOfEngine extends TimelineMetricsEhCacheSizeOfEngine {
 
   private final static Logger LOG = LoggerFactory.getLogger(TimelineMetricsCacheSizeOfEngine.class);
-  public static int DEFAULT_MAX_DEPTH = 1000;
-  public static boolean DEFAULT_ABORT_WHEN_MAX_DEPTH_EXCEEDED = false;
-
-  private SizeOfEngine underlying = null;
-  SizeOf reflectionSizeOf = new ReflectionSizeOf();
-
-  // Optimizations
-  private volatile long timelineMetricPrimitivesApproximation = 0;
-
-  private long sizeOfMapEntry;
-  private long sizeOfMapEntryOverhead;
 
   private TimelineMetricsCacheSizeOfEngine(SizeOfEngine underlying) {
-    this.underlying = underlying;
+    super(underlying);
   }
 
   public TimelineMetricsCacheSizeOfEngine() {
-    this(new DefaultSizeOfEngine(DEFAULT_MAX_DEPTH, DEFAULT_ABORT_WHEN_MAX_DEPTH_EXCEEDED));
-
-    this.sizeOfMapEntry = reflectionSizeOf.sizeOf(new Long(1)) +
-      reflectionSizeOf.sizeOf(new Double(2.0));
-
-    //SizeOfMapEntryOverhead = SizeOfMapWithOneEntry - (SizeOfEmptyMap + SizeOfOneEntry)
-    TreeMap<Long, Double> map = new TreeMap<>();
-    long emptyMapSize = reflectionSizeOf.sizeOf(map);
-    map.put(new Long(1), new Double(2.0));
-    long sizeOfMapOneEntry = reflectionSizeOf.deepSizeOf(DEFAULT_MAX_DEPTH, DEFAULT_ABORT_WHEN_MAX_DEPTH_EXCEEDED, map).getCalculated();
-    this.sizeOfMapEntryOverhead =  sizeOfMapOneEntry - (emptyMapSize + this.sizeOfMapEntry);
-
-    LOG.info("Creating custom sizeof engine for TimelineMetrics.");
+    // Invoke default constructor in base class
   }
 
   @Override
@@ -108,36 +78,10 @@ public class TimelineMetricsCacheSizeOfEngine implements SizeOfEngine {
 
   private long getTimelineMetricCacheValueSize(TimelineMetricsCacheValue value) {
     long size = 16; // startTime + endTime
-    TimelineMetrics metrics = value.getTimelineMetrics();
+
     size += 8; // Object reference
 
-    if (metrics != null) {
-      for (TimelineMetric metric : metrics.getMetrics()) {
-
-        if (timelineMetricPrimitivesApproximation == 0) {
-          timelineMetricPrimitivesApproximation += reflectionSizeOf.sizeOf(metric.getMetricName());
-          timelineMetricPrimitivesApproximation += reflectionSizeOf.sizeOf(metric.getAppId());
-          timelineMetricPrimitivesApproximation += reflectionSizeOf.sizeOf(metric.getHostName());
-          timelineMetricPrimitivesApproximation += reflectionSizeOf.sizeOf(metric.getInstanceId());
-          timelineMetricPrimitivesApproximation += reflectionSizeOf.sizeOf(metric.getTimestamp());
-          timelineMetricPrimitivesApproximation += reflectionSizeOf.sizeOf(metric.getStartTime());
-          timelineMetricPrimitivesApproximation += reflectionSizeOf.sizeOf(metric.getType());
-          timelineMetricPrimitivesApproximation += 8; // Object overhead
-
-          LOG.debug("timelineMetricPrimitivesApproximation bytes = {}", timelineMetricPrimitivesApproximation);
-        }
-        size += timelineMetricPrimitivesApproximation;
-
-        Map<Long, Double> metricValues = metric.getMetricValues();
-        if (metricValues != null && !metricValues.isEmpty()) {
-          // Numeric wrapper: 12 bytes + 8 bytes Data type + 4 bytes alignment = 48 (Long, Double)
-          // Tree Map: 12 bytes for header + 20 bytes for 5 object fields : pointers + 1 byte for flag = 40
-          LOG.debug("Size of metric value: {}", (sizeOfMapEntry + sizeOfMapEntryOverhead) * metricValues.size());
-          size += (sizeOfMapEntry + sizeOfMapEntryOverhead) * metricValues.size(); // Treemap size is O(1)
-        }
-      }
-      LOG.debug("Total Size of metric values in cache: {}", size);
-    }
+    size += getTimelineMetricsSize(value.getTimelineMetrics()); // TreeMap
 
     return size;
   }
@@ -147,7 +91,6 @@ public class TimelineMetricsCacheSizeOfEngine implements SizeOfEngine {
     LOG.debug("Copying tracing sizeof engine, maxdepth: {}, abort: {}",
       maxDepth, abortWhenMaxDepthExceeded);
 
-    return new TimelineMetricsCacheSizeOfEngine(
-      underlying.copyWith(maxDepth, abortWhenMaxDepthExceeded));
+    return new TimelineMetricsCacheSizeOfEngine(underlying.copyWith(maxDepth, abortWhenMaxDepthExceeded));
   }
 }
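
The engine above now delegates its sizing heuristics (the per-metric primitive
approximation and the TreeMap entry cost) to the shared TimelineMetricsEhCacheSizeOfEngine
base class in the sink module, so the Ambari server cache and the collector estimate
TimelineMetrics heap usage the same way. The general shape of such a delegating Ehcache
SizeOfEngine, as a minimal sketch (the class name and the long[] stand-in value type are
invented; the real base class carries the metric-specific arithmetic):

    import net.sf.ehcache.pool.Size;
    import net.sf.ehcache.pool.SizeOfEngine;
    import net.sf.ehcache.pool.impl.DefaultSizeOfEngine;

    public class DelegatingSizeOfEngine implements SizeOfEngine {
      private final SizeOfEngine underlying;

      public DelegatingSizeOfEngine() {
        // Same defaults the old engine used: depth 1000, do not abort past it.
        this(new DefaultSizeOfEngine(1000, false));
      }

      private DelegatingSizeOfEngine(SizeOfEngine underlying) {
        this.underlying = underlying;
      }

      @Override
      public Size sizeOf(Object key, Object value, Object container) {
        if (value instanceof long[]) {
          // Cheap arithmetic estimate for a known value type; 'false' marks the
          // result as approximate rather than an exact reflective walk.
          return new Size(16 + 8L * ((long[]) value).length, false);
        }
        return underlying.sizeOf(key, value, container);
      }

      @Override
      public SizeOfEngine copyWith(int maxDepth, boolean abortWhenMaxDepthExceeded) {
        return new DelegatingSizeOfEngine(underlying.copyWith(maxDepth, abortWhenMaxDepthExceeded));
      }
    }
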
diff --git a/pom.xml b/pom.xml
index 4a231e3..b2469c4 100644
--- a/pom.xml
+++ b/pom.xml
@@ -386,7 +386,19 @@
 
             <!--Velocity log -->
             <exclude>**/velocity.log*</exclude>
-
+            <!-- Metrics module -->
+            <!-- grafana -->
+            <exclude>ambari-metrics/ambari-metrics-grafana/conf/unix/ams-grafana.ini</exclude>
+            <!-- psutil : external lib, Apache 2.0 license included as a source file -->
+            <exclude>ambari-metrics/target/**</exclude>
+            <exclude>ambari-metrics/ambari-metrics-host-monitoring/src/main/python/psutil/**</exclude>
+            <exclude>ambari-metrics/target/rpm/ambari-metrics/SPECS/ambari-metrics.spec</exclude>
+            <exclude>ambari-metrics/ambari-metrics-timelineservice/src/test/resources/lib/org/apache/phoenix/phoenix-core-tests/4.2.0/phoenix-core-tests-4.2.0.pom</exclude>
+            <exclude>ambari-metrics/ambari-metrics-timelineservice/src/test/resources/lib/org/apache/phoenix/phoenix-core-tests/maven-metadata-local.xml</exclude>
+            <exclude>ambari-metrics/ambari-metrics-alertservice/*.iml</exclude>
+            <exclude>ambari-metrics/*/target/**</exclude>
+            <!-- ignore .settings and .project  -->
+            <exclude>ambari-metrics/**/.*/**</exclude>
             <!-- generated DDL-->
             <exclude>**/createDDL.jdbc</exclude>
             <exclude>**/yarn.lock</exclude>
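
Taken together, the PhoenixHBaseAccessorTest changes in this patch standardize on
statically imported PowerMock and EasyMock helpers (mockStatic, expect, replayAll,
verifyAll) plus shared connectionProvider/accessor fixtures, instead of fully qualified
PowerMock.* calls and per-test setup. The bare pattern, reduced to a self-contained
example (StaticLookup is invented for illustration; in the real tests the prepared class
is PhoenixTransactSQL):

    import static org.easymock.EasyMock.expect;
    import static org.junit.Assert.assertEquals;
    import static org.powermock.api.easymock.PowerMock.mockStatic;
    import static org.powermock.api.easymock.PowerMock.replayAll;
    import static org.powermock.api.easymock.PowerMock.verifyAll;

    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.powermock.core.classloader.annotations.PrepareForTest;
    import org.powermock.modules.junit4.PowerMockRunner;

    @RunWith(PowerMockRunner.class)
    @PrepareForTest(PowerMockPatternTest.StaticLookup.class)
    public class PowerMockPatternTest {

      // Invented stand-in for a class with static methods, e.g. PhoenixTransactSQL.
      public static class StaticLookup {
        public static String lookup(String key) {
          return "real:" + key;
        }
      }

      @Test
      public void staticCallIsStubbed() {
        mockStatic(StaticLookup.class);
        expect(StaticLookup.lookup("metric")).andReturn("stubbed").once();
        replayAll();

        assertEquals("stubbed", StaticLookup.lookup("metric"));

        verifyAll();
      }
    }
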


[ambari] 34/39: AMBARI-22744. Fix issues with webapp deployment with new Hadoop common changes. Addendum. (swagle)


avijayan pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/ambari.git

commit a0b8d064e7981a8118818d7e5fbec5f49c4fee6a
Author: Siddharth Wagle <sw...@hortonworks.com>
AuthorDate: Mon Jan 8 14:08:22 2018 -0800

    AMBARI-22744. Fix issues with webapp deployment with new Hadoop common changes. Addendum. (swagle)
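
    The one-line change below swaps the "hadoop.http.max.threads" string literal
    for the HttpServer2 constant, so a mistyped key fails at compile time instead
    of silently leaving the web server's thread pool at its default. A minimal
    sketch of the same idiom (the value 24 is an arbitrary example, not the
    collector's default):

        import static org.apache.hadoop.http.HttpServer2.HTTP_MAX_THREADS_KEY;

        import org.apache.hadoop.conf.Configuration;

        public class HttpThreadsExample {
          public static void main(String[] args) {
            Configuration conf = new Configuration();
            // HTTP_MAX_THREADS_KEY resolves to "hadoop.http.max.threads".
            conf.set(HTTP_MAX_THREADS_KEY, String.valueOf(24));
            System.out.println(conf.get(HTTP_MAX_THREADS_KEY));
          }
        }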
---
 .../yarn/server/applicationhistoryservice/AMSApplicationServer.java   | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/AMSApplicationServer.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/AMSApplicationServer.java
index 38d46ef..db889bf 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/AMSApplicationServer.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/AMSApplicationServer.java
@@ -18,6 +18,8 @@
 
 package org.apache.hadoop.yarn.server.applicationhistoryservice;
 
+import static org.apache.hadoop.http.HttpServer2.HTTP_MAX_THREADS_KEY;
+
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
@@ -120,7 +122,7 @@ public class AMSApplicationServer extends CompositeService {
     LOG.info("Instantiating metrics collector at " + bindAddress);
     try {
       Configuration conf = metricConfiguration.getMetricsConf();
-      conf.set("hadoop.http.max.threads", String.valueOf(metricConfiguration
+      conf.set(HTTP_MAX_THREADS_KEY, String.valueOf(metricConfiguration
         .getTimelineMetricsServiceHandlerThreadCount()));
       HttpConfig.Policy policy = HttpConfig.Policy.valueOf(
         conf.get(TimelineMetricConfiguration.TIMELINE_SERVICE_HTTP_POLICY,


[ambari] 33/39: AMBARI-22744. Fix issues with webapp deployment with new Hadoop common changes. (swagle)


avijayan pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/ambari.git

commit 84f939d9f9448aab3e9250f6953996fb3dcb67e4
Author: Siddharth Wagle <sw...@hortonworks.com>
AuthorDate: Fri Jan 5 13:34:57 2018 -0800

    AMBARI-22744. Fix issues with webapp deployment with new Hadoop common changes. (swagle)
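
    Apart from the import cleanup across the module, the functional core of this
    change is the webapp registration in AMSApplicationServer: the launcher is
    renamed to launchAMSApplicationServer and the application is deployed under
    the "timeline" prefix with the "ws" web-service root, so the REST endpoints
    keep resolving against the new Hadoop common web stack. The registration
    call, as it reads after this patch (conf, policy, bindAddress and
    timelineMetricStore come from the surrounding service code):

        webApp = WebApps
            .$for("timeline", null, null, "ws")  // app name and web-service root
            .withHttpPolicy(conf, policy)        // HTTP vs HTTPS from config
            .at(bindAddress)
            .start(new AMSWebApp(timelineMetricStore));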
---
 .../AMSApplicationServer.java                      | 10 ++--
 .../metrics/loadsimulator/LoadRunner.java          | 40 +++++++--------
 .../loadsimulator/MetricsLoadSimulator.java        |  6 +--
 .../metrics/loadsimulator/MetricsSenderWorker.java | 21 +++-----
 .../loadsimulator/data/HostMetricsGenerator.java   |  8 ++-
 .../data/MetricsGeneratorConfigurer.java           | 12 ++---
 .../loadsimulator/net/RestMetricsSender.java       |  9 ++--
 .../metrics/loadsimulator/util/Json.java           |  4 +-
 .../timeline/HBaseTimelineMetricsService.java      |  1 -
 .../metrics/timeline/PhoenixHBaseAccessor.java     |  1 -
 .../timeline/TimelineMetricConfiguration.java      |  2 -
 .../timeline/TimelineMetricDistributedCache.java   |  6 +--
 .../metrics/timeline/TimelineMetricStore.java      | 12 ++---
 .../timeline/TimelineMetricStoreWatcher.java       | 14 +++---
 .../timeline/TimelineMetricsIgniteCache.java       | 57 +++++++++++-----------
 .../aggregators/AbstractTimelineAggregator.java    | 35 ++++++-------
 .../timeline/aggregators/DownSamplerUtils.java     | 10 ++--
 .../TimelineMetricAggregatorFactory.java           | 20 ++++----
 .../aggregators/TimelineMetricAppAggregator.java   | 20 ++++----
 .../TimelineMetricClusterAggregator.java           | 20 ++++----
 ...tricClusterAggregatorSecondWithCacheSource.java | 14 +++---
 .../TimelineMetricFilteringHostAggregator.java     | 14 +++---
 .../aggregators/TimelineMetricHostAggregator.java  | 16 +++---
 .../aggregators/TimelineMetricReadHelper.java      | 10 ++--
 .../timeline/aggregators/TopNDownSampler.java      | 14 +++---
 .../v2/TimelineMetricClusterAggregator.java        | 16 +++---
 .../v2/TimelineMetricFilteringHostAggregator.java  | 14 +++---
 .../v2/TimelineMetricHostAggregator.java           | 14 +++---
 .../availability/AggregationTaskRunner.java        | 24 ++++-----
 .../timeline/availability/CheckpointManager.java   |  4 +-
 .../availability/MetricCollectorHAController.java  | 26 +++++-----
 .../OnlineOfflineStateModelFactory.java            |  4 +-
 .../discovery/TimelineMetricMetadataKey.java       |  4 +-
 .../discovery/TimelineMetricMetadataManager.java   | 43 ++++++++--------
 .../discovery/TimelineMetricMetadataSync.java      |  7 +--
 ...ractTimelineMetricsSeriesAggregateFunction.java |  9 ++--
 .../metrics/timeline/query/Condition.java          |  4 +-
 .../metrics/timeline/query/ConditionBuilder.java   |  6 +--
 .../metrics/timeline/query/ConnectionProvider.java |  2 -
 .../metrics/timeline/query/DefaultCondition.java   | 13 ++---
 .../timeline/query/DefaultPhoenixDataSource.java   | 12 ++---
 .../metrics/timeline/query/EmptyCondition.java     |  4 +-
 .../timeline/query/PhoenixConnectionProvider.java  |  5 +-
 .../metrics/timeline/query/PhoenixTransactSQL.java | 15 +++---
 .../query/SplitByMetricNamesCondition.java         |  4 +-
 .../metrics/timeline/query/TopNCondition.java      |  3 +-
 .../metrics/timeline/source/RawMetricsSource.java  |  1 -
 .../timeline/uuid/HashBasedUuidGenStrategy.java    |  8 +--
 .../timeline/uuid/MetricUuidGenStrategy.java       |  1 -
 .../timeline/uuid/RandomUuidGenStrategy.java       |  6 +--
 .../timeline/TimelineWriter.java                   |  4 +-
 .../webapp/TimelineWebServices.java                |  3 --
 ambari-metrics/pom.xml                             |  2 +-
 53 files changed, 306 insertions(+), 328 deletions(-)

diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/AMSApplicationServer.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/AMSApplicationServer.java
index f576362..38d46ef 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/AMSApplicationServer.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/AMSApplicationServer.java
@@ -20,7 +20,6 @@ package org.apache.hadoop.yarn.server.applicationhistoryservice;
 
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
-import org.apache.hadoop.classification.InterfaceAudience.Private;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.http.HttpConfig;
 import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
@@ -35,13 +34,10 @@ import org.apache.hadoop.yarn.exceptions.YarnRuntimeException;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.HBaseTimelineMetricsService;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricStore;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.timeline.TimelineStore;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.webapp.AMSWebApp;
 import org.apache.hadoop.yarn.webapp.WebApp;
 import org.apache.hadoop.yarn.webapp.WebApps;
 
-import com.google.common.annotations.VisibleForTesting;
-
 /**
  * Metrics collector web server
  */
@@ -86,7 +82,7 @@ public class AMSApplicationServer extends CompositeService {
     super.serviceStop();
   }
   
-  static AMSApplicationServer launchAppHistoryServer(String[] args) {
+  static AMSApplicationServer launchAMSApplicationServer(String[] args) {
     Thread.setDefaultUncaughtExceptionHandler(new YarnUncaughtExceptionHandler());
     StringUtils.startupShutdownMessage(AMSApplicationServer.class, args, LOG);
     AMSApplicationServer amsApplicationServer = null;
@@ -106,7 +102,7 @@ public class AMSApplicationServer extends CompositeService {
   }
 
   public static void main(String[] args) {
-    launchAppHistoryServer(args);
+    launchAMSApplicationServer(args);
   }
 
   protected TimelineMetricStore createTimelineMetricStore(Configuration conf) {
@@ -131,7 +127,7 @@ public class AMSApplicationServer extends CompositeService {
           HttpConfig.Policy.HTTP_ONLY.name()));
       webApp =
           WebApps
-            .$for("ambarimetrics", null, null, "ws")
+            .$for("timeline", null, null, "ws")
             .withHttpPolicy(conf, policy)
             .at(bindAddress)
             .start(new AMSWebApp(timelineMetricStore));
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/loadsimulator/LoadRunner.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/loadsimulator/LoadRunner.java
index cd1dd1b..a58ebd2 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/loadsimulator/LoadRunner.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/loadsimulator/LoadRunner.java
@@ -18,33 +18,29 @@
 package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics
   .loadsimulator;
 
-import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics
-  .loadsimulator.data.AppID;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics
-  .loadsimulator.data.ApplicationInstance;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics
-  .loadsimulator.data.HostMetricsGenerator;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics
-  .loadsimulator.data.MetricsGeneratorConfigurer;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics
-  .loadsimulator.net.MetricsSender;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics
-  .loadsimulator.net.RestMetricsSender;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics
-  .loadsimulator.util.TimeStampProvider;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.loadsimulator.data.AppID.MASTER_APPS;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.loadsimulator.data.AppID.SLAVE_APPS;
 
 import java.util.ArrayList;
 import java.util.Collection;
 import java.util.Date;
 import java.util.List;
-import java.util.concurrent.*;
-
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics
-  .loadsimulator.data.AppID.MASTER_APPS;
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics
-  .loadsimulator.data.AppID.SLAVE_APPS;
+import java.util.concurrent.Callable;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.Future;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.loadsimulator.data.AppID;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.loadsimulator.data.ApplicationInstance;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.loadsimulator.data.HostMetricsGenerator;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.loadsimulator.data.MetricsGeneratorConfigurer;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.loadsimulator.net.MetricsSender;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.loadsimulator.net.RestMetricsSender;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.loadsimulator.util.TimeStampProvider;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 /**
  *
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/loadsimulator/MetricsLoadSimulator.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/loadsimulator/MetricsLoadSimulator.java
index b64f542..e85c7a5 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/loadsimulator/MetricsLoadSimulator.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/loadsimulator/MetricsLoadSimulator.java
@@ -18,13 +18,13 @@
 package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics
   .loadsimulator;
 
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
 import java.io.IOException;
 import java.util.HashMap;
 import java.util.Map;
 
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
 /**
  * Sample Usage:
  * <pre>
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/loadsimulator/MetricsSenderWorker.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/loadsimulator/MetricsSenderWorker.java
index d111eb6..71f2bc5 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/loadsimulator/MetricsSenderWorker.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/loadsimulator/MetricsSenderWorker.java
@@ -18,22 +18,17 @@
 package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.loadsimulator;
 
 
-import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics
-  .loadsimulator.data.AppMetrics;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics
-  .loadsimulator.data.HostMetricsGenerator;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics
-  .loadsimulator.net.MetricsSender;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics
-  .loadsimulator.net.RestMetricsSender;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics
-  .loadsimulator.util.Json;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
 import java.io.IOException;
 import java.util.concurrent.Callable;
 
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.loadsimulator.data.AppMetrics;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.loadsimulator.data.HostMetricsGenerator;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.loadsimulator.net.MetricsSender;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.loadsimulator.net.RestMetricsSender;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.loadsimulator.util.Json;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
 /**
  */
 public class MetricsSenderWorker implements Callable<String> {
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/loadsimulator/data/HostMetricsGenerator.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/loadsimulator/data/HostMetricsGenerator.java
index 61a6624..f628f2c 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/loadsimulator/data/HostMetricsGenerator.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/loadsimulator/data/HostMetricsGenerator.java
@@ -18,14 +18,12 @@
 package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.loadsimulator.data;
 
 
-import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics
-  .loadsimulator.util.RandomMetricsProvider;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics
-  .loadsimulator.util.TimeStampProvider;
-
 import java.util.HashMap;
 import java.util.Map;
 
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.loadsimulator.util.RandomMetricsProvider;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.loadsimulator.util.TimeStampProvider;
+
 /**
  */
 public class HostMetricsGenerator {
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/loadsimulator/data/MetricsGeneratorConfigurer.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/loadsimulator/data/MetricsGeneratorConfigurer.java
index b3401b2..b315541 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/loadsimulator/data/MetricsGeneratorConfigurer.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/loadsimulator/data/MetricsGeneratorConfigurer.java
@@ -18,13 +18,6 @@
 package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.loadsimulator.data;
 
 
-import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics
-  .loadsimulator.util.RandomMetricsProvider;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics
-  .loadsimulator.util.TimeStampProvider;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
 import java.io.BufferedReader;
 import java.io.IOException;
 import java.io.InputStream;
@@ -32,6 +25,11 @@ import java.io.InputStreamReader;
 import java.util.HashMap;
 import java.util.Map;
 
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.loadsimulator.util.RandomMetricsProvider;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.loadsimulator.util.TimeStampProvider;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
 /**
  * MetricsGeneratorConfigurer is a factory that reads metrics definition from a file,
  * and returns a single HostMetricsGenerator. Check createMetricsForHost method
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/loadsimulator/net/RestMetricsSender.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/loadsimulator/net/RestMetricsSender.java
index 32af851..8eb3fec 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/loadsimulator/net/RestMetricsSender.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/loadsimulator/net/RestMetricsSender.java
@@ -17,14 +17,13 @@
  */
 package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.loadsimulator.net;
 
-import com.google.common.base.Stopwatch;
+import java.io.IOException;
+import java.util.concurrent.TimeUnit;
+
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-import java.io.IOException;
-import java.net.MalformedURLException;
-import java.net.ProtocolException;
-import java.util.concurrent.TimeUnit;
+import com.google.common.base.Stopwatch;
 
 /**
  * Implements MetricsSender and provides a way of pushing metrics to application metrics history service using REST
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/loadsimulator/util/Json.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/loadsimulator/util/Json.java
index 61a3903..982f48c 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/loadsimulator/util/Json.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/loadsimulator/util/Json.java
@@ -18,13 +18,13 @@
 package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics
   .loadsimulator.util;
 
+import java.io.IOException;
+
 import org.codehaus.jackson.annotate.JsonAutoDetect;
 import org.codehaus.jackson.annotate.JsonMethod;
 import org.codehaus.jackson.map.ObjectMapper;
 import org.codehaus.jackson.map.SerializationConfig;
 
-import java.io.IOException;
-
 /**
  * Small wrapper that configures the ObjectMapper with some defaults.
  */
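
(A minimal sketch of such a wrapper with plausible defaults -- field-level visibility and optional pretty-printing. These particular defaults are assumptions; the actual ones configured by Json may differ.)

    import java.io.IOException;

    import org.codehaus.jackson.annotate.JsonAutoDetect;
    import org.codehaus.jackson.annotate.JsonMethod;
    import org.codehaus.jackson.map.ObjectMapper;
    import org.codehaus.jackson.map.SerializationConfig;

    public class JsonWrapperSketch {
      private final ObjectMapper mapper = new ObjectMapper();

      public JsonWrapperSketch(boolean pretty) {
        // Serialize from fields directly, without requiring getters.
        mapper.setVisibility(JsonMethod.FIELD, JsonAutoDetect.Visibility.ANY);
        if (pretty) {
          mapper.configure(SerializationConfig.Feature.INDENT_OUTPUT, true);
        }
      }

      public String serialize(Object o) throws IOException {
        return mapper.writeValueAsString(o);
      }
    }
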
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/HBaseTimelineMetricsService.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/HBaseTimelineMetricsService.java
index b09f876..2746119 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/HBaseTimelineMetricsService.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/HBaseTimelineMetricsService.java
@@ -44,7 +44,6 @@ import java.util.concurrent.TimeUnit;
 import java.util.regex.Matcher;
 import java.util.regex.Pattern;
 
-import org.apache.commons.collections.CollectionUtils;
 import org.apache.commons.collections.MapUtils;
 import org.apache.commons.lang.StringUtils;
 import org.apache.commons.logging.Log;
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/PhoenixHBaseAccessor.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/PhoenixHBaseAccessor.java
index fc59063..7cb317b 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/PhoenixHBaseAccessor.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/PhoenixHBaseAccessor.java
@@ -117,7 +117,6 @@ import org.apache.hadoop.hbase.client.Admin;
 import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
 import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
 import org.apache.hadoop.hbase.client.Durability;
-import org.apache.hadoop.hbase.client.HBaseAdmin;
 import org.apache.hadoop.hbase.client.TableDescriptor;
 import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
 import org.apache.hadoop.hbase.util.RetryCounter;
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricConfiguration.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricConfiguration.java
index 7c6f62b..90a5db6 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricConfiguration.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricConfiguration.java
@@ -36,8 +36,6 @@ import java.util.Set;
 import org.apache.commons.lang.StringUtils;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
-import org.apache.hadoop.classification.InterfaceAudience;
-import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.util.ReflectionUtils;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.sink.ExternalSinkProvider;
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricDistributedCache.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricDistributedCache.java
index 3480545..abedc7b 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricDistributedCache.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricDistributedCache.java
@@ -17,14 +17,14 @@
  */
 package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline;
 
+import java.util.Collection;
+import java.util.Map;
+
 import org.apache.hadoop.metrics2.sink.timeline.MetricClusterAggregate;
 import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.TimelineClusterMetric;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.discovery.TimelineMetricMetadataManager;
 
-import java.util.Collection;
-import java.util.Map;
-
 public interface TimelineMetricDistributedCache {
   Map<TimelineClusterMetric, MetricClusterAggregate> evictMetricAggregates(Long startTime, Long endTime);
   void putMetrics(Collection<TimelineMetric> elements, TimelineMetricMetadataManager metricMetadataManager);
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricStore.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricStore.java
index b2cd1c2..1a9c725 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricStore.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricStore.java
@@ -17,6 +17,12 @@
  */
 package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline;
 
+import java.io.IOException;
+import java.sql.SQLException;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
 import org.apache.hadoop.metrics2.sink.timeline.AggregationResult;
 import org.apache.hadoop.metrics2.sink.timeline.ContainerMetric;
 import org.apache.hadoop.metrics2.sink.timeline.Precision;
@@ -26,12 +32,6 @@ import org.apache.hadoop.metrics2.sink.timeline.TimelineMetrics;
 import org.apache.hadoop.metrics2.sink.timeline.TopNConfig;
 import org.apache.hadoop.yarn.api.records.timeline.TimelinePutResponse;
 
-import java.io.IOException;
-import java.sql.SQLException;
-import java.util.List;
-import java.util.Map;
-import java.util.Set;
-
 public interface TimelineMetricStore {
   /**
    * This method retrieves metrics stored by the Timeline store.
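
(Because this store backs the collector's read path, an equivalent query can be sketched as a plain HTTP GET against the collector. The query parameters shown -- metricNames, hostname, startTime, endTime -- follow the collector's conventional GET interface and should be treated as assumptions.)

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class MetricQuerySketch {
      // GETs one metric's series for a host over [start, end] from the collector.
      public static String fetch(String collectorUrl, String metric, String host,
                                 long start, long end) throws IOException {
        String query = String.format("%s?metricNames=%s&hostname=%s&startTime=%d&endTime=%d",
            collectorUrl, metric, host, start, end);
        HttpURLConnection conn = (HttpURLConnection) new URL(query).openConnection();
        StringBuilder body = new StringBuilder();
        try (BufferedReader in =
                 new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
          String line;
          while ((line = in.readLine()) != null) {
            body.append(line);
          }
        }
        return body.toString();
      }
    }
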
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricStoreWatcher.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricStoreWatcher.java
index 517d3a4..a1ba5d7 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricStoreWatcher.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricStoreWatcher.java
@@ -18,13 +18,6 @@
 
 package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline;
 
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-import org.apache.hadoop.metrics2.sink.timeline.Precision;
-import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
-import org.apache.hadoop.metrics2.sink.timeline.TimelineMetrics;
-import org.apache.hadoop.util.ExitUtil;
-
 import java.util.Collections;
 import java.util.concurrent.Callable;
 import java.util.concurrent.ExecutorService;
@@ -32,6 +25,13 @@ import java.util.concurrent.Executors;
 import java.util.concurrent.Future;
 import java.util.concurrent.TimeUnit;
 
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.metrics2.sink.timeline.Precision;
+import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
+import org.apache.hadoop.metrics2.sink.timeline.TimelineMetrics;
+import org.apache.hadoop.util.ExitUtil;
+
 /**
  * Acts as the single TimelineMetricStore Watcher.
  */
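
(A self-contained sketch of the watchdog pattern the watcher uses: run a self-check under a hard timeout and escalate after repeated failures. The failure threshold and the exception thrown in place of the real watcher's ExitUtil call are assumptions.)

    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;
    import java.util.concurrent.TimeUnit;

    public class StoreWatcherSketch {
      private static final int MAX_FAILURES = 3; // assumed threshold
      private int failures = 0;

      // Runs one self-check with a hard timeout; escalates after repeated failures.
      public void check(Callable<Boolean> selfCheck, long timeoutSeconds) {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        Future<Boolean> result = executor.submit(selfCheck);
        try {
          if (Boolean.TRUE.equals(result.get(timeoutSeconds, TimeUnit.SECONDS))) {
            failures = 0;
            return;
          }
          failures++;
        } catch (Exception e) {
          failures++;
        } finally {
          executor.shutdownNow();
        }
        if (failures >= MAX_FAILURES) {
          throw new IllegalStateException("Metric store failed " + failures + " checks");
        }
      }
    }
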
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricsIgniteCache.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricsIgniteCache.java
index 6441c9c..24fc938 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricsIgniteCache.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricsIgniteCache.java
@@ -17,6 +17,35 @@
  */
 package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline;
 
+import static java.util.concurrent.TimeUnit.SECONDS;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.CLUSTER_AGGREGATOR_SECOND_SLEEP_INTERVAL;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.CLUSTER_AGGREGATOR_TIMESLICE_INTERVAL;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.HOST_APP_ID;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.TIMELINE_METRICS_CLUSTER_AGGREGATOR_INTERPOLATION_ENABLED;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.TIMELINE_METRICS_COLLECTOR_IGNITE_BACKUPS;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.TIMELINE_METRICS_COLLECTOR_IGNITE_NODES;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.TIMELINE_METRIC_AGGREGATION_SQL_FILTERS;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.TIMELINE_SERVICE_HTTP_POLICY;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.AggregatorUtils.getRoundedCheckPointTimeMillis;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.AggregatorUtils.getTimeSlices;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.AggregatorUtils.sliceFromTimelineMetric;
+
+import java.net.MalformedURLException;
+import java.net.URISyntaxException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.locks.Lock;
+
+import javax.cache.Cache;
+import javax.cache.expiry.CreatedExpiryPolicy;
+import javax.cache.expiry.Duration;
+
 import org.apache.commons.lang.StringUtils;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
@@ -42,34 +71,6 @@ import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
 import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
 import org.apache.ignite.ssl.SslContextFactory;
 
-import javax.cache.Cache;
-import javax.cache.expiry.CreatedExpiryPolicy;
-import javax.cache.expiry.Duration;
-import java.net.MalformedURLException;
-import java.net.URISyntaxException;
-import java.util.ArrayList;
-import java.util.Arrays;
-import java.util.Collection;
-import java.util.HashMap;
-import java.util.List;
-import java.util.Map;
-import java.util.concurrent.ConcurrentHashMap;
-import java.util.concurrent.TimeUnit;
-import java.util.concurrent.locks.Lock;
-
-import static java.util.concurrent.TimeUnit.SECONDS;
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.CLUSTER_AGGREGATOR_SECOND_SLEEP_INTERVAL;
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.CLUSTER_AGGREGATOR_TIMESLICE_INTERVAL;
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.HOST_APP_ID;
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.TIMELINE_METRICS_CLUSTER_AGGREGATOR_INTERPOLATION_ENABLED;
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.TIMELINE_METRICS_COLLECTOR_IGNITE_BACKUPS;
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.TIMELINE_METRICS_COLLECTOR_IGNITE_NODES;
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.TIMELINE_METRIC_AGGREGATION_SQL_FILTERS;
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.TIMELINE_SERVICE_HTTP_POLICY;
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.AggregatorUtils.getRoundedCheckPointTimeMillis;
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.AggregatorUtils.getTimeSlices;
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.AggregatorUtils.sliceFromTimelineMetric;
-
 public class TimelineMetricsIgniteCache implements TimelineMetricDistributedCache {
   private static final Log LOG =
       LogFactory.getLog(TimelineMetricsIgniteCache.class);
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/AbstractTimelineAggregator.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/AbstractTimelineAggregator.java
index 89428c0..0203f88 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/AbstractTimelineAggregator.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/AbstractTimelineAggregator.java
@@ -17,6 +17,23 @@
  */
 package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators;
 
+import static java.util.concurrent.TimeUnit.SECONDS;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.AGGREGATOR_CHECKPOINT_DELAY;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.RESULTSET_FETCH_SIZE;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.AggregatorUtils.getRoundedAggregateTimeMillis;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.AggregatorUtils.getRoundedCheckPointTimeMillis;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.availability.AggregationTaskRunner.ACTUAL_AGGREGATOR_NAMES;
+
+import java.io.File;
+import java.io.IOException;
+import java.sql.Connection;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.util.Date;
+import java.util.Iterator;
+import java.util.List;
+
 import org.apache.commons.collections.CollectionUtils;
 import org.apache.commons.io.FileUtils;
 import org.apache.commons.lang.StringUtils;
@@ -28,24 +45,8 @@ import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.Condition;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.EmptyCondition;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL;
-import org.slf4j.LoggerFactory;
 import org.slf4j.Logger;
-import java.io.File;
-import java.io.IOException;
-import java.sql.Connection;
-import java.sql.PreparedStatement;
-import java.sql.ResultSet;
-import java.sql.SQLException;
-import java.util.Date;
-import java.util.Iterator;
-import java.util.List;
-
-import static java.util.concurrent.TimeUnit.SECONDS;
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.AGGREGATOR_CHECKPOINT_DELAY;
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.RESULTSET_FETCH_SIZE;
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.AggregatorUtils.getRoundedAggregateTimeMillis;
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.AggregatorUtils.getRoundedCheckPointTimeMillis;
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.availability.AggregationTaskRunner.ACTUAL_AGGREGATOR_NAMES;
+import org.slf4j.LoggerFactory;
 
 /**
  * Base class for all runnable aggregators. Provides common functions like
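
(A minimal sketch of the checkpoint-then-aggregate cycle such a runnable aggregator performs. The plain-text checkpoint file and the single-shot run() are simplifications of the real scheduling and Phoenix SQL plumbing.)

    import java.io.File;
    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;

    public abstract class AggregatorLoopSketch implements Runnable {
      private final File checkpoint;
      private final long intervalMillis;

      protected AggregatorLoopSketch(File checkpoint, long intervalMillis) {
        this.checkpoint = checkpoint;
        this.intervalMillis = intervalMillis;
      }

      // Subclasses aggregate one [start, end) time window.
      protected abstract void aggregate(long startTime, long endTime);

      @Override
      public void run() {
        long start = readCheckpoint();
        long end = start + intervalMillis;
        aggregate(start, end);
        writeCheckpoint(end);
      }

      private long readCheckpoint() {
        try {
          return Long.parseLong(new String(
              Files.readAllBytes(checkpoint.toPath()), StandardCharsets.UTF_8).trim());
        } catch (IOException | NumberFormatException e) {
          return System.currentTimeMillis() - intervalMillis; // no checkpoint yet
        }
      }

      private void writeCheckpoint(long time) {
        try {
          Files.write(checkpoint.toPath(),
              Long.toString(time).getBytes(StandardCharsets.UTF_8));
        } catch (IOException ignored) {
        }
      }
    }
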
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/DownSamplerUtils.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/DownSamplerUtils.java
index 649ecee..8dece1d 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/DownSamplerUtils.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/DownSamplerUtils.java
@@ -18,16 +18,16 @@
 
 package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators;
 
-import org.apache.commons.lang.StringUtils;
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-import org.apache.hadoop.conf.Configuration;
-
 import java.util.ArrayList;
 import java.util.List;
 import java.util.Map;
 import java.util.Set;
 
+import org.apache.commons.lang.StringUtils;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+
 /**
  * DownSampler utility class. Responsible for fetching downsampler configs from Metrics config, and determining
  * whether any downsamplers are configured.
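
(A small sketch of that config-scanning idea, with java.util.Properties standing in for the Hadoop Configuration object; the "<name>.downsampler." key shape is illustrative.)

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Properties;

    public class DownSamplerConfigSketch {
      // Collects the distinct downsampler names referenced by config keys.
      public static List<String> configuredDownSamplers(Properties config) {
        List<String> names = new ArrayList<>();
        for (String key : config.stringPropertyNames()) {
          int idx = key.indexOf(".downsampler.");
          if (idx > 0) {
            String name = key.substring(0, idx);
            if (!names.contains(name)) {
              names.add(name);
            }
          }
        }
        return names;
      }
    }
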
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricAggregatorFactory.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricAggregatorFactory.java
index 9e493ea..3728d19 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricAggregatorFactory.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricAggregatorFactory.java
@@ -17,16 +17,6 @@
  */
 package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators;
 
-import org.apache.commons.io.FilenameUtils;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.PhoenixHBaseAccessor;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricDistributedCache;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.availability.MetricCollectorHAController;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.discovery.TimelineMetricMetadataManager;
-
-import java.util.concurrent.ConcurrentHashMap;
-
 import static java.util.concurrent.TimeUnit.SECONDS;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.CLUSTER_AGGREGATOR_DAILY_CHECKPOINT_CUTOFF_MULTIPLIER;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.CLUSTER_AGGREGATOR_DAILY_DISABLED;
@@ -69,6 +59,16 @@ import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.ti
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.METRICS_CLUSTER_AGGREGATE_TABLE_NAME;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.METRICS_RECORD_TABLE_NAME;
 
+import java.util.concurrent.ConcurrentHashMap;
+
+import org.apache.commons.io.FilenameUtils;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.PhoenixHBaseAccessor;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricDistributedCache;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.availability.MetricCollectorHAController;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.discovery.TimelineMetricMetadataManager;
+
 /**
  * Factory class that knows how to create an aggregator instance using
  * TimelineMetricConfiguration
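
(A minimal sketch of the factory idea -- selecting an aggregator variant from configuration. The flag name is illustrative; the real factory also wires in the Phoenix accessor, checkpoint locations and HA state seen in the imports above.)

    import java.util.Properties;

    public class AggregatorFactorySketch {
      public interface Aggregator {
        void runOnce();
      }

      // Picks between host- and cluster-level variants based on a config flag.
      public static Aggregator create(Properties conf, Aggregator hostAggregator,
                                      Aggregator clusterAggregator) {
        boolean clusterLevel = Boolean.parseBoolean(
            conf.getProperty("aggregator.cluster.enabled", "false")); // assumed flag
        return clusterLevel ? clusterAggregator : hostAggregator;
      }
    }
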
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricAppAggregator.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricAppAggregator.java
index 09fbe81..b06b147 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricAppAggregator.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricAppAggregator.java
@@ -17,6 +17,16 @@
  */
 package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators;
 
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.CLUSTER_AGGREGATOR_APP_IDS;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.HOST_APP_ID;
+
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.ConcurrentHashMap;
+
 import org.apache.commons.lang.StringUtils;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
@@ -28,16 +38,6 @@ import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.discovery.TimelineMetricMetadataKey;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.discovery.TimelineMetricMetadataManager;
 
-import java.util.Arrays;
-import java.util.Collections;
-import java.util.HashMap;
-import java.util.List;
-import java.util.Map;
-import java.util.concurrent.ConcurrentHashMap;
-
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.CLUSTER_AGGREGATOR_APP_IDS;
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.HOST_APP_ID;
-
 /**
  * Aggregator responsible for providing app level host aggregates. This task
  * is accomplished without doing a round trip to storage, rather
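
(A self-contained sketch of an app-level rollup done purely in memory, which is the essence of avoiding the storage round trip. The host-to-apps mapping and the plain double sums are simplifications of the real metadata-driven aggregation.)

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class AppAggregationSketch {
      // Rolls each host-level metric value up to every app hosted on that host.
      public static Map<String, Double> aggregate(Map<String, Double> valueByHost,
                                                  Map<String, List<String>> appsByHost) {
        Map<String, Double> sumByApp = new HashMap<>();
        for (Map.Entry<String, Double> entry : valueByHost.entrySet()) {
          List<String> apps = appsByHost.get(entry.getKey());
          if (apps == null) {
            continue;
          }
          for (String app : apps) {
            Double current = sumByApp.get(app);
            sumByApp.put(app, current == null ? entry.getValue()
                                              : current + entry.getValue());
          }
        }
        return sumByApp;
      }
    }
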
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricClusterAggregator.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricClusterAggregator.java
index 7368bfb..5c370f4 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricClusterAggregator.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricClusterAggregator.java
@@ -17,6 +17,16 @@
  */
 package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators;
 
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.GET_CLUSTER_AGGREGATE_SQL;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.GET_CLUSTER_AGGREGATE_TIME_SQL;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.METRICS_CLUSTER_AGGREGATE_TABLE_NAME;
+
+import java.io.IOException;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.util.HashMap;
+import java.util.Map;
+
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.metrics2.sink.timeline.MetricClusterAggregate;
 import org.apache.hadoop.metrics2.sink.timeline.MetricHostAggregate;
@@ -27,16 +37,6 @@ import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.Condition;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.DefaultCondition;
 
-import java.io.IOException;
-import java.sql.ResultSet;
-import java.sql.SQLException;
-import java.util.HashMap;
-import java.util.Map;
-
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.GET_CLUSTER_AGGREGATE_SQL;
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.GET_CLUSTER_AGGREGATE_TIME_SQL;
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.METRICS_CLUSTER_AGGREGATE_TABLE_NAME;
-
 public class TimelineMetricClusterAggregator extends AbstractTimelineAggregator {
   private final TimelineMetricReadHelper readHelper;
   private final boolean isClusterPrecisionInputTable;
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricClusterAggregatorSecondWithCacheSource.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricClusterAggregatorSecondWithCacheSource.java
index 888044a..dc31086 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricClusterAggregatorSecondWithCacheSource.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricClusterAggregatorSecondWithCacheSource.java
@@ -17,6 +17,13 @@
  */
 package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators;
 
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.AggregatorUtils.getTimeSlices;
+
+import java.util.Date;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
 import org.apache.commons.lang.mutable.MutableInt;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.metrics2.sink.timeline.MetricClusterAggregate;
@@ -26,13 +33,6 @@ import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.availability.MetricCollectorHAController;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.discovery.TimelineMetricMetadataManager;
 
-import java.util.Date;
-import java.util.HashMap;
-import java.util.List;
-import java.util.Map;
-
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.AggregatorUtils.getTimeSlices;
-
 public class TimelineMetricClusterAggregatorSecondWithCacheSource extends TimelineMetricClusterAggregatorSecond {
   private TimelineMetricDistributedCache distributedCache;
   public TimelineMetricClusterAggregatorSecondWithCacheSource(AggregationTaskRunner.AGGREGATOR_NAME metricAggregateSecond, TimelineMetricMetadataManager metricMetadataManager, PhoenixHBaseAccessor hBaseAccessor, Configuration metricsConf, String checkpointLocation, long sleepIntervalMillis, int checkpointCutOffMultiplier, String aggregatorDisabledParam, String inputTableName, String outputTableName,
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricFilteringHostAggregator.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricFilteringHostAggregator.java
index 6e766e9..a75d2c4 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricFilteringHostAggregator.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricFilteringHostAggregator.java
@@ -17,6 +17,13 @@
  */
 package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators;
 
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.GET_METRIC_AGGREGATE_ONLY_SQL;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.ConcurrentHashMap;
+
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
@@ -27,13 +34,6 @@ import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.Condition;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.DefaultCondition;
 
-import java.util.ArrayList;
-import java.util.List;
-import java.util.Map;
-import java.util.concurrent.ConcurrentHashMap;
-
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.GET_METRIC_AGGREGATE_ONLY_SQL;
-
 public class TimelineMetricFilteringHostAggregator extends TimelineMetricHostAggregator {
   private static final Log LOG = LogFactory.getLog(TimelineMetricFilteringHostAggregator.class);
   private TimelineMetricMetadataManager metricMetadataManager;
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricHostAggregator.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricHostAggregator.java
index f9f92db..6a11599 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricHostAggregator.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricHostAggregator.java
@@ -17,6 +17,14 @@
  */
 package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators;
 
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.GET_METRIC_AGGREGATE_ONLY_SQL;
+
+import java.io.IOException;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.util.HashMap;
+import java.util.Map;
+
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
@@ -29,14 +37,6 @@ import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.Condition;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.DefaultCondition;
 
-import java.io.IOException;
-import java.sql.ResultSet;
-import java.sql.SQLException;
-import java.util.HashMap;
-import java.util.Map;
-
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.GET_METRIC_AGGREGATE_ONLY_SQL;
-
 public class TimelineMetricHostAggregator extends AbstractTimelineAggregator {
   private static final Log LOG = LogFactory.getLog(TimelineMetricHostAggregator.class);
   TimelineMetricReadHelper readHelper;
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricReadHelper.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricReadHelper.java
index 539190b..e4633fa 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricReadHelper.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricReadHelper.java
@@ -18,6 +18,11 @@
 package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators;
 
 
+import java.io.IOException;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.util.TreeMap;
+
 import org.apache.hadoop.metrics2.sink.timeline.MetricClusterAggregate;
 import org.apache.hadoop.metrics2.sink.timeline.MetricHostAggregate;
 import org.apache.hadoop.metrics2.sink.timeline.SingleValuedTimelineMetric;
@@ -25,11 +30,6 @@ import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.PhoenixHBaseAccessor;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.discovery.TimelineMetricMetadataManager;
 
-import java.io.IOException;
-import java.sql.ResultSet;
-import java.sql.SQLException;
-import java.util.TreeMap;
-
 public class TimelineMetricReadHelper {
 
   private boolean ignoreInstance = false;
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TopNDownSampler.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TopNDownSampler.java
index 520da0a..d0dc0fd 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TopNDownSampler.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TopNDownSampler.java
@@ -18,19 +18,19 @@
 
 package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators;
 
-import org.apache.commons.lang3.StringUtils;
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-import org.apache.hadoop.metrics2.sink.timeline.TopNConfig;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.TopNCondition;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.TOPN_DOWNSAMPLER_CLUSTER_METRIC_SELECT_SQL;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.TOPN_DOWNSAMPLER_HOST_METRIC_SELECT_SQL;
 
 import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.List;
 import java.util.Map;
 
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.TOPN_DOWNSAMPLER_CLUSTER_METRIC_SELECT_SQL;
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.TOPN_DOWNSAMPLER_HOST_METRIC_SELECT_SQL;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.metrics2.sink.timeline.TopNConfig;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.TopNCondition;
 
 public class TopNDownSampler implements CustomDownSampler {
 
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/v2/TimelineMetricClusterAggregator.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/v2/TimelineMetricClusterAggregator.java
index e6d0b32..06552a6 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/v2/TimelineMetricClusterAggregator.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/v2/TimelineMetricClusterAggregator.java
@@ -17,6 +17,14 @@
  */
 package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.v2;
 
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.GET_AGGREGATED_APP_METRIC_GROUPBY_SQL;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.METRICS_CLUSTER_AGGREGATE_TABLE_NAME;
+
+import java.io.IOException;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.util.Date;
+
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.PhoenixHBaseAccessor;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.AbstractTimelineAggregator;
@@ -25,14 +33,6 @@ import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.Condition;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.EmptyCondition;
 
-import java.io.IOException;
-import java.sql.ResultSet;
-import java.sql.SQLException;
-import java.util.Date;
-
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.GET_AGGREGATED_APP_METRIC_GROUPBY_SQL;
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.METRICS_CLUSTER_AGGREGATE_TABLE_NAME;
-
 public class TimelineMetricClusterAggregator extends AbstractTimelineAggregator {
   private final String aggregateColumnName;
 
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/v2/TimelineMetricFilteringHostAggregator.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/v2/TimelineMetricFilteringHostAggregator.java
index f6e0d8f..a15ab2e 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/v2/TimelineMetricFilteringHostAggregator.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/v2/TimelineMetricFilteringHostAggregator.java
@@ -17,6 +17,13 @@
  */
 package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.v2;
 
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.GET_AGGREGATED_HOST_METRIC_GROUPBY_SQL;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.ConcurrentHashMap;
+
 import org.apache.commons.collections.CollectionUtils;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.PhoenixHBaseAccessor;
@@ -26,13 +33,6 @@ import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.Condition;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.EmptyCondition;
 
-import java.util.ArrayList;
-import java.util.List;
-import java.util.Map;
-import java.util.concurrent.ConcurrentHashMap;
-
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.GET_AGGREGATED_HOST_METRIC_GROUPBY_SQL;
-
 public class TimelineMetricFilteringHostAggregator extends TimelineMetricHostAggregator {
   private TimelineMetricMetadataManager metricMetadataManager;
   private ConcurrentHashMap<String, Long> postedAggregatedMap;
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/v2/TimelineMetricHostAggregator.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/v2/TimelineMetricHostAggregator.java
index 5cec65d..3eb2be3 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/v2/TimelineMetricHostAggregator.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/v2/TimelineMetricHostAggregator.java
@@ -17,6 +17,13 @@
  */
 package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.v2;
 
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.GET_AGGREGATED_HOST_METRIC_GROUPBY_SQL;
+
+import java.io.IOException;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.util.Date;
+
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.PhoenixHBaseAccessor;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.AbstractTimelineAggregator;
@@ -25,13 +32,6 @@ import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.Condition;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.EmptyCondition;
 
-import java.io.IOException;
-import java.sql.ResultSet;
-import java.sql.SQLException;
-import java.util.Date;
-
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL.GET_AGGREGATED_HOST_METRIC_GROUPBY_SQL;
-
 public class TimelineMetricHostAggregator extends AbstractTimelineAggregator {
 
   public TimelineMetricHostAggregator(AGGREGATOR_NAME aggregatorName,
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/availability/AggregationTaskRunner.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/availability/AggregationTaskRunner.java
index 9a92d41..fef9dc9 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/availability/AggregationTaskRunner.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/availability/AggregationTaskRunner.java
@@ -17,18 +17,6 @@
  */
 package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.availability;
 
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.TimelineMetricAggregator.AGGREGATOR_TYPE;
-import org.apache.helix.HelixManager;
-import org.apache.helix.HelixManagerFactory;
-import org.apache.helix.InstanceType;
-import org.apache.helix.participant.StateMachineEngine;
-
-import java.util.HashMap;
-import java.util.Map;
-import java.util.concurrent.atomic.AtomicBoolean;
-
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.TimelineMetricAggregator.AGGREGATOR_TYPE.CLUSTER;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.TimelineMetricAggregator.AGGREGATOR_TYPE.HOST;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.availability.AggregationTaskRunner.AGGREGATOR_NAME.METRIC_AGGREGATE_DAILY;
@@ -41,6 +29,18 @@ import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.ti
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.availability.MetricCollectorHAController.DEFAULT_STATE_MODEL;
 import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.availability.MetricCollectorHAController.METRIC_AGGREGATORS;
 
+import java.util.HashMap;
+import java.util.Map;
+import java.util.concurrent.atomic.AtomicBoolean;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.TimelineMetricAggregator.AGGREGATOR_TYPE;
+import org.apache.helix.HelixManager;
+import org.apache.helix.HelixManagerFactory;
+import org.apache.helix.InstanceType;
+import org.apache.helix.participant.StateMachineEngine;
+
 public class AggregationTaskRunner {
   private final String instanceName;
   private final String zkAddress;
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/availability/CheckpointManager.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/availability/CheckpointManager.java
index 439102f..3293ead 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/availability/CheckpointManager.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/availability/CheckpointManager.java
@@ -17,6 +17,8 @@
  */
 package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.availability;
 
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.availability.AggregationTaskRunner.ACTUAL_AGGREGATOR_NAMES;
+
 import org.I0Itec.zkclient.DataUpdater;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
@@ -26,8 +28,6 @@ import org.apache.helix.ZNRecord;
 import org.apache.helix.store.zk.ZkHelixPropertyStore;
 import org.apache.zookeeper.data.Stat;
 
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.availability.AggregationTaskRunner.ACTUAL_AGGREGATOR_NAMES;
-
 public class CheckpointManager {
   private final ZkHelixPropertyStore<ZNRecord> propertyStore;
   private static final Log LOG = LogFactory.getLog(CheckpointManager.class);
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/availability/MetricCollectorHAController.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/availability/MetricCollectorHAController.java
index d387394..d74f253 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/availability/MetricCollectorHAController.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/availability/MetricCollectorHAController.java
@@ -17,7 +17,16 @@
  */
 package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.availability;
 
-import com.google.common.base.Joiner;
+import static org.apache.helix.model.IdealState.RebalanceMode.FULL_AUTO;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+import java.util.TreeSet;
+import java.util.concurrent.Executors;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.TimeUnit;
+
 import org.I0Itec.zkclient.exception.ZkNoNodeException;
 import org.apache.commons.collections.CollectionUtils;
 import org.apache.commons.lang.StringUtils;
@@ -38,16 +47,11 @@ import org.apache.helix.model.InstanceConfig;
 import org.apache.helix.model.LiveInstance;
 import org.apache.helix.model.OnlineOfflineSMD;
 import org.apache.helix.model.StateModelDefinition;
-import org.apache.helix.tools.StateModelConfigGenerator;;
-import java.util.ArrayList;
-import java.util.Collections;
-import java.util.List;
-import java.util.Map;
-import java.util.TreeSet;
-import java.util.concurrent.Executors;
-import java.util.concurrent.ScheduledExecutorService;
-import java.util.concurrent.TimeUnit;
-import static org.apache.helix.model.IdealState.RebalanceMode.FULL_AUTO;
+import org.apache.helix.tools.StateModelConfigGenerator;
+
+import com.google.common.base.Joiner;
 
 public class MetricCollectorHAController {
   private static final Log LOG = LogFactory.getLog(MetricCollectorHAController.class);
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/availability/OnlineOfflineStateModelFactory.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/availability/OnlineOfflineStateModelFactory.java
index 3486c4d..a53dc3b 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/availability/OnlineOfflineStateModelFactory.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/availability/OnlineOfflineStateModelFactory.java
@@ -17,6 +17,8 @@
  */
 package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.availability;
 
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.availability.AggregationTaskRunner.PARTITION_AGGREGATION_TYPES;
+
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.TimelineMetricAggregator.AGGREGATOR_TYPE;
@@ -25,8 +27,6 @@ import org.apache.helix.model.Message;
 import org.apache.helix.participant.statemachine.StateModel;
 import org.apache.helix.participant.statemachine.StateModelFactory;
 
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.availability.AggregationTaskRunner.PARTITION_AGGREGATION_TYPES;
-
 public class OnlineOfflineStateModelFactory extends StateModelFactory<StateModel> {
   private static final Log LOG = LogFactory.getLog(OnlineOfflineStateModelFactory.class);
   private final String instanceName;
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TimelineMetricMetadataKey.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TimelineMetricMetadataKey.java
index 6aeb2dd..0c0ee5b 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TimelineMetricMetadataKey.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TimelineMetricMetadataKey.java
@@ -17,10 +17,10 @@
  */
 package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.discovery;
 
-import org.apache.commons.lang3.StringUtils;
-
 import javax.xml.bind.annotation.XmlRootElement;
 
+import org.apache.commons.lang3.StringUtils;
+
 @XmlRootElement
 public class TimelineMetricMetadataKey {
   String metricName;
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TimelineMetricMetadataManager.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TimelineMetricMetadataManager.java
index 6b926ac..beac866 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TimelineMetricMetadataManager.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TimelineMetricMetadataManager.java
@@ -17,22 +17,11 @@
  */
 package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.discovery;
 
-import org.apache.commons.collections.CollectionUtils;
-import org.apache.commons.collections.MapUtils;
-import org.apache.commons.lang.ArrayUtils;
-import org.apache.commons.lang.StringUtils;
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.metrics2.sink.timeline.MetadataException;
-import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
-import org.apache.hadoop.metrics2.sink.timeline.TimelineMetricMetadata;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.PhoenixHBaseAccessor;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.TimelineClusterMetric;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.uuid.HashBasedUuidGenStrategy;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.uuid.MetricUuidGenStrategy;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.uuid.RandomUuidGenStrategy;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.DISABLE_METRIC_METADATA_MGMT;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.METRICS_METADATA_SYNC_INIT_DELAY;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.METRICS_METADATA_SYNC_SCHEDULE_DELAY;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.TIMELINE_METRICS_UUID_GEN_STRATEGY;
+import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.TIMELINE_METRIC_METADATA_FILTERS;
 
 import java.net.MalformedURLException;
 import java.net.URISyntaxException;
@@ -49,11 +38,23 @@ import java.util.concurrent.Executors;
 import java.util.concurrent.ScheduledExecutorService;
 import java.util.concurrent.TimeUnit;
 import java.util.concurrent.atomic.AtomicBoolean;
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.DISABLE_METRIC_METADATA_MGMT;
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.METRICS_METADATA_SYNC_INIT_DELAY;
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.METRICS_METADATA_SYNC_SCHEDULE_DELAY;
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.TIMELINE_METRICS_UUID_GEN_STRATEGY;
-import static org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration.TIMELINE_METRIC_METADATA_FILTERS;
+
+import org.apache.commons.collections.CollectionUtils;
+import org.apache.commons.collections.MapUtils;
+import org.apache.commons.lang.ArrayUtils;
+import org.apache.commons.lang.StringUtils;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.metrics2.sink.timeline.MetadataException;
+import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
+import org.apache.hadoop.metrics2.sink.timeline.TimelineMetricMetadata;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.PhoenixHBaseAccessor;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.TimelineClusterMetric;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.uuid.HashBasedUuidGenStrategy;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.uuid.MetricUuidGenStrategy;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.uuid.RandomUuidGenStrategy;
 
 public class TimelineMetricMetadataManager {
   private static final Log LOG = LogFactory.getLog(TimelineMetricMetadataManager.class);
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TimelineMetricMetadataSync.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TimelineMetricMetadataSync.java
index 96af877..fa5f55a 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TimelineMetricMetadataSync.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TimelineMetricMetadataSync.java
@@ -17,9 +17,6 @@
  */
 package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.discovery;
 
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-import org.apache.hadoop.metrics2.sink.timeline.TimelineMetricMetadata;
 import java.sql.SQLException;
 import java.util.ArrayList;
 import java.util.HashMap;
@@ -27,6 +24,10 @@ import java.util.List;
 import java.util.Map;
 import java.util.Set;
 
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.metrics2.sink.timeline.TimelineMetricMetadata;
+
 /**
  * Sync metadata info with the store
  */
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/function/AbstractTimelineMetricsSeriesAggregateFunction.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/function/AbstractTimelineMetricsSeriesAggregateFunction.java
index 634e51d..5a5dde4 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/function/AbstractTimelineMetricsSeriesAggregateFunction.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/function/AbstractTimelineMetricsSeriesAggregateFunction.java
@@ -17,10 +17,6 @@
  */
 package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.function;
 
-import com.google.common.base.Joiner;
-import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
-import org.apache.hadoop.metrics2.sink.timeline.TimelineMetrics;
-
 import java.util.Iterator;
 import java.util.LinkedList;
 import java.util.List;
@@ -29,6 +25,11 @@ import java.util.Set;
 import java.util.TreeMap;
 import java.util.TreeSet;
 
+import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
+import org.apache.hadoop.metrics2.sink.timeline.TimelineMetrics;
+
+import com.google.common.base.Joiner;
+
 public abstract class AbstractTimelineMetricsSeriesAggregateFunction
     implements TimelineMetricsSeriesAggregateFunction {
 
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/Condition.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/Condition.java
index 4e04e6c..8d8cca3 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/Condition.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/Condition.java
@@ -1,9 +1,9 @@
 package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query;
 
-import org.apache.hadoop.metrics2.sink.timeline.Precision;
-
 import java.util.List;
 
+import org.apache.hadoop.metrics2.sink.timeline.Precision;
+
 /**
  * Licensed to the Apache Software Foundation (ASF) under one
  * or more contributor license agreements.  See the NOTICE file
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/ConditionBuilder.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/ConditionBuilder.java
index f395c3e..f330b60 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/ConditionBuilder.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/ConditionBuilder.java
@@ -17,13 +17,13 @@
  */
 package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query;
 
-import org.apache.hadoop.metrics2.sink.timeline.Precision;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.Function;
-
 import java.util.LinkedHashSet;
 import java.util.List;
 import java.util.Set;
 
+import org.apache.hadoop.metrics2.sink.timeline.Precision;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.Function;
+
 public class ConditionBuilder {
 
   private List<String> metricNames;
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/ConnectionProvider.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/ConnectionProvider.java
index 391af27..24239a0 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/ConnectionProvider.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/ConnectionProvider.java
@@ -18,8 +18,6 @@
 package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query;
 
 
-import org.apache.hadoop.hbase.util.RetryCounterFactory;
-
 import java.sql.Connection;
 import java.sql.SQLException;
 
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/DefaultCondition.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/DefaultCondition.java
index 763e4c7..a88f44e 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/DefaultCondition.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/DefaultCondition.java
@@ -16,18 +16,19 @@
  * limitations under the License.
  */
 package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query;
-import org.apache.commons.collections.CollectionUtils;
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.PhoenixHBaseAccessor;
-import org.apache.hadoop.metrics2.sink.timeline.Precision;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.discovery.TimelineMetricMetadataManager;
 
 import java.util.ArrayList;
 import java.util.LinkedHashSet;
 import java.util.List;
 import java.util.Set;
 
+import org.apache.commons.collections.CollectionUtils;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.metrics2.sink.timeline.Precision;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.PhoenixHBaseAccessor;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.discovery.TimelineMetricMetadataManager;
+
 public class DefaultCondition implements Condition {
   List<String> metricNames;
   List<String> hostnames;
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/DefaultPhoenixDataSource.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/DefaultPhoenixDataSource.java
index 67afe6b..78fad62 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/DefaultPhoenixDataSource.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/DefaultPhoenixDataSource.java
@@ -18,18 +18,16 @@
 package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query;
 
 
+import java.io.IOException;
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.SQLException;
+
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.client.ConnectionFactory;
 import org.apache.hadoop.hbase.client.HBaseAdmin;
-import org.apache.hadoop.hbase.util.RetryCounter;
-import org.apache.hadoop.hbase.util.RetryCounterFactory;
-
-import java.io.IOException;
-import java.sql.Connection;
-import java.sql.DriverManager;
-import java.sql.SQLException;
 
 public class DefaultPhoenixDataSource implements PhoenixConnectionProvider {
 
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/EmptyCondition.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/EmptyCondition.java
index 6d43179..5d1e244 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/EmptyCondition.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/EmptyCondition.java
@@ -17,11 +17,11 @@
  */
 package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query;
 
+import java.util.List;
+
 import org.apache.commons.lang.NotImplementedException;
 import org.apache.hadoop.metrics2.sink.timeline.Precision;
 
-import java.util.List;
-
 /**
  * Encapsulate a Condition with pre-formatted and pre-parsed query string.
  */
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/PhoenixConnectionProvider.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/PhoenixConnectionProvider.java
index a7a20fd..cdb3b4e 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/PhoenixConnectionProvider.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/PhoenixConnectionProvider.java
@@ -1,10 +1,9 @@
 package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query;
 
-import org.apache.hadoop.hbase.client.Admin;
-import org.apache.hadoop.hbase.client.HBaseAdmin;
-
 import java.io.IOException;
 
+import org.apache.hadoop.hbase.client.Admin;
+
 /**
  * Licensed to the Apache Software Foundation (ASF) under one
  * or more contributor license agreements.  See the NOTICE file
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/PhoenixTransactSQL.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/PhoenixTransactSQL.java
index 9077ac6..beaff69 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/PhoenixTransactSQL.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/PhoenixTransactSQL.java
@@ -17,21 +17,20 @@
  */
 package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query;
 
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-import org.apache.hadoop.metrics2.sink.timeline.Precision;
-import org.apache.hadoop.metrics2.sink.timeline.PrecisionLimitExceededException;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.PhoenixHBaseAccessor;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.discovery.TimelineMetricMetadataManager;
-
 import java.sql.Connection;
 import java.sql.PreparedStatement;
 import java.sql.SQLException;
-import java.sql.Statement;
 import java.util.List;
 import java.util.concurrent.TimeUnit;
 import java.util.regex.Pattern;
 
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.metrics2.sink.timeline.Precision;
+import org.apache.hadoop.metrics2.sink.timeline.PrecisionLimitExceededException;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.PhoenixHBaseAccessor;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.discovery.TimelineMetricMetadataManager;
+
 /**
  * Encapsulate all metrics related SQL queries.
  */
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/SplitByMetricNamesCondition.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/SplitByMetricNamesCondition.java
index 554d2e8..6eadcea 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/SplitByMetricNamesCondition.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/SplitByMetricNamesCondition.java
@@ -17,10 +17,10 @@
  */
 package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query;
 
-import org.apache.hadoop.metrics2.sink.timeline.Precision;
-
 import java.util.Collections;
 import java.util.List;
+
+import org.apache.hadoop.metrics2.sink.timeline.Precision;
 // TODO get rid of this class
 public class SplitByMetricNamesCondition implements Condition {
   private final Condition adaptee;
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/TopNCondition.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/TopNCondition.java
index 38d0c6f..4a5491f 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/TopNCondition.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/TopNCondition.java
@@ -17,12 +17,13 @@
  */
 package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query;
 
+import java.util.List;
+
 import org.apache.commons.collections.CollectionUtils;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.metrics2.sink.timeline.Precision;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.Function;
-import java.util.List;
 
 public class TopNCondition extends DefaultCondition{
 
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/source/RawMetricsSource.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/source/RawMetricsSource.java
index 879577a..6475536 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/source/RawMetricsSource.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/source/RawMetricsSource.java
@@ -18,7 +18,6 @@
 package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.source;
 
 import java.util.Collection;
-import java.util.concurrent.Callable;
 import java.util.concurrent.ExecutionException;
 import java.util.concurrent.Executors;
 import java.util.concurrent.Future;
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/uuid/HashBasedUuidGenStrategy.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/uuid/HashBasedUuidGenStrategy.java
index 10e9c61..3acf656 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/uuid/HashBasedUuidGenStrategy.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/uuid/HashBasedUuidGenStrategy.java
@@ -18,15 +18,15 @@
 
 package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.uuid;
 
-import org.apache.commons.lang.ArrayUtils;
-import org.apache.commons.lang3.StringUtils;
-import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.TimelineClusterMetric;
-
 import java.nio.ByteBuffer;
 import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.List;
 
+import org.apache.commons.lang.ArrayUtils;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.TimelineClusterMetric;
+
 public class HashBasedUuidGenStrategy implements MetricUuidGenStrategy {
 
   /**
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/uuid/MetricUuidGenStrategy.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/uuid/MetricUuidGenStrategy.java
index 9aab96a..b6a1890 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/uuid/MetricUuidGenStrategy.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/uuid/MetricUuidGenStrategy.java
@@ -17,7 +17,6 @@
  */
 package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.uuid;
 
-import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.TimelineClusterMetric;
 
 public interface MetricUuidGenStrategy {
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/uuid/RandomUuidGenStrategy.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/uuid/RandomUuidGenStrategy.java
index 39d9549..1443067 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/uuid/RandomUuidGenStrategy.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/uuid/RandomUuidGenStrategy.java
@@ -18,11 +18,11 @@
 
 package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.uuid;
 
-import com.google.common.primitives.Longs;
-import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
+import java.security.SecureRandom;
+
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.TimelineClusterMetric;
 
-import java.security.SecureRandom;
+import com.google.common.primitives.Longs;
 
 public class RandomUuidGenStrategy implements MetricUuidGenStrategy {
   private static SecureRandom randomGenerator;
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/timeline/TimelineWriter.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/timeline/TimelineWriter.java
index 8f28d82..bc8aada 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/timeline/TimelineWriter.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/timeline/TimelineWriter.java
@@ -18,13 +18,13 @@
 
 package org.apache.hadoop.yarn.server.applicationhistoryservice.timeline;
 
+import java.io.IOException;
+
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.yarn.api.records.timeline.TimelineEntities;
 import org.apache.hadoop.yarn.api.records.timeline.TimelinePutResponse;
 
-import java.io.IOException;
-
 /**
  * This interface is for storing timeline information.
  */
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/TimelineWebServices.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/TimelineWebServices.java
index c09900d..d3da500 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/TimelineWebServices.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/TimelineWebServices.java
@@ -21,9 +21,7 @@ package org.apache.hadoop.yarn.server.applicationhistoryservice.webapp;
 import java.io.IOException;
 import java.sql.SQLException;
 import java.util.ArrayList;
-import java.util.Arrays;
 import java.util.Collection;
-import java.util.Collections;
 import java.util.EnumSet;
 import java.util.HashSet;
 import java.util.List;
@@ -49,7 +47,6 @@ import javax.xml.bind.annotation.XmlAccessorType;
 import javax.xml.bind.annotation.XmlElement;
 import javax.xml.bind.annotation.XmlRootElement;
 
-import org.apache.commons.lang.StringUtils;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.classification.InterfaceAudience.Public;
diff --git a/ambari-metrics/pom.xml b/ambari-metrics/pom.xml
index 32dfad7..d52f93d 100644
--- a/ambari-metrics/pom.xml
+++ b/ambari-metrics/pom.xml
@@ -41,7 +41,7 @@
     <deb.python.ver>python (&gt;= 2.6)</deb.python.ver>
     <!--TODO change to HDP URL-->
     <hbase.tar>http://dev.hortonworks.com.s3.amazonaws.com/HDP/centos7/3.x/BUILDS/3.0.0.0-623/tars/hbase/hbase-2.0.0.3.0.0.0-623-bin.tar.gz</hbase.tar>
-    <hbase.folder>hbase-1.1.2.2.6.0.3-8</hbase.folder>
+    <hbase.folder>hbase-2.0.0.3.0.0.0-623</hbase.folder>
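+    <!-- Folder name must match the hbase.tar artifact above. -->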
     <hadoop.tar>http://dev.hortonworks.com.s3.amazonaws.com/HDP/centos7/3.x/BUILDS/3.0.0.0-623/tars/hadoop/hadoop-3.0.0.3.0.0.0-623.tar.gz</hadoop.tar>
     <hadoop.folder>hadoop-3.0.0.3.0.0.0-623</hadoop.folder>
     <grafana.folder>grafana-2.6.0</grafana.folder>

-- 
To stop receiving notification emails like this one, please contact
avijayan@apache.org.

[ambari] 29/39: AMBARI-22470 : Refine Metric Definition Service and AD Query service. (avijayan)

Posted by av...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

avijayan pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/ambari.git

commit 8357de8ad1c56abb0daff44c4905301cd69ff1a3
Author: Aravindan Vijayan <av...@hortonworks.com>
AuthorDate: Mon Nov 20 10:46:13 2017 -0800

    AMBARI-22470 : Refine Metric Definition Service and AD Query service. (avijayan)
---
 .../pom.xml                                        |  63 +++--
 .../src/main/resources/config.yml                  |   2 +-
 .../src/main/resources/hbase-site.xml              | 286 +++++++++++++++++++++
 .../adservice/app/ADServiceScalaModule.scala       |  50 ++++
 .../adservice/app/AnomalyDetectionApp.scala        |  10 +-
 .../adservice/app/AnomalyDetectionAppConfig.scala  |   4 +-
 .../adservice/app/AnomalyDetectionAppModule.scala  |   5 +-
 .../configuration/HBaseConfiguration.scala         |   3 +
 .../AdAnomalyStoreAccessor.scala}                  |  19 +-
 .../adservice/db/AdMetadataStoreAccessorImpl.scala |  96 +++++++
 .../metrics/adservice/db/ConnectionProvider.scala  |  45 ++++
 .../adservice/db/DefaultPhoenixDataSource.scala    |  79 ++++++
 .../adservice/db/LevelDbStoreAccessor.scala        |  56 ----
 .../metrics/adservice/db/MetadataDatasource.scala  |   6 +
 .../adservice/db/PhoenixAnomalyStoreAccessor.scala |  75 ++++--
 .../adservice/db/PhoenixConnectionProvider.scala   |  66 +++++
 .../adservice/db/PhoenixQueryConstants.scala       |  12 +-
 .../adservice/leveldb/LevelDBDatasource.scala      |  17 +-
 .../adservice/metadata/ADMetadataProvider.scala    |  86 ++++---
 .../metadata/InputMetricDefinitionParser.scala     |  24 +-
 .../adservice/metadata/MetricDefinition.scala      |   2 +
 .../metadata/MetricDefinitionService.scala         |  16 +-
 .../metadata/MetricDefinitionServiceImpl.scala     |  73 ++++--
 .../metrics/adservice/metadata/MetricKey.scala     |   3 +
 .../metadata/MetricMetadataProvider.scala          |   2 +-
 ...yInstance.scala => MetricAnomalyInstance.scala} |   7 +-
 .../adservice/resource/AnomalyResource.scala       |  55 +++-
 .../resource/MetricDefinitionResource.scala        |  77 +++++-
 .../metrics/adservice/resource/RootResource.scala  |   5 +-
 .../metrics/adservice/service/ADQueryService.scala |   6 +-
 .../adservice/service/ADQueryServiceImpl.scala     |  25 +-
 .../adservice/service/AbstractADService.scala      |  44 ++++
 .../pointintime/PointInTimeAnomalyInstance.scala   |   4 +-
 .../subsystem/trend/TrendAnomalyInstance.scala     |   4 +-
 .../adservice/app/DefaultADResourceSpecTest.scala  |   5 +-
 .../metadata/AMSMetadataProviderTest.scala         |  16 +-
 .../metadata/MetricSourceDefinitionTest.scala      |  16 +-
 ambari-metrics/ambari-metrics-common/pom.xml       |  45 ----
 .../metrics2/sink/timeline/TimelineMetricKey.java  |  59 -----
 .../timeline/HBaseTimelineMetricsService.java      |  36 ++-
 .../metrics/timeline/PhoenixHBaseAccessor.java     |  26 +-
 .../metrics/timeline/TimelineMetricStore.java      |   3 +-
 .../timeline/query/ConnectionProvider.java         |   3 +-
 .../timeline/query/DefaultPhoenixDataSource.java   |  18 +-
 .../timeline/query/PhoenixConnectionProvider.java  |   2 +-
 .../webapp/TimelineWebServices.java                |  12 +-
 .../TestApplicationHistoryServer.java              |   2 +-
 .../timeline/AbstractMiniHBaseClusterTest.java     |  13 +-
 .../metrics/timeline/PhoenixHBaseAccessorTest.java |  11 +-
 .../metrics/timeline/TestTimelineMetricStore.java  |   3 +-
 50 files changed, 1223 insertions(+), 374 deletions(-)

diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/pom.xml b/ambari-metrics/ambari-metrics-anomaly-detection-service/pom.xml
index 142f02f..c6927dd 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/pom.xml
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/pom.xml
@@ -34,10 +34,12 @@
   <properties>
     <scala.version>2.12.3</scala.version>
     <scala.binary.version>2.11</scala.binary.version>
-    <hadoop.version>2.7.3.2.6.0.3-8</hadoop.version>
     <jackson.version>2.9.1</jackson.version>
     <dropwizard.version>1.2.0</dropwizard.version>
     <spark.version>2.2.0</spark.version>
+    <hadoop.version>2.7.3.2.6.0.3-8</hadoop.version>
+    <hbase.version>1.1.2.2.6.0.3-8</hbase.version>
+    <phoenix.version>4.7.0.2.6.0.3-8</phoenix.version>
   </properties>
   
   <repositories>
@@ -64,6 +66,7 @@
         <directory>src/main/resources</directory>
         <includes>
           <include>**/*.yml</include>
+          <include>**/*.xml</include>
           <include>**/*.txt</include>
         </includes>
       </resource>
@@ -145,6 +148,28 @@
                 <exclude>META-INF/*.RSA</exclude>
               </excludes>
             </filter>
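+            <!-- Keep phoenix-core's bundled Joda-Time, Codahale Metrics, Guava
+                 collections and Jersey 1 classes out of the shaded jar to avoid
+                 classpath clashes with the versions used by the service. -->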
+            <filter>
+              <artifact>org.apache.phoenix:phoenix-core</artifact>
+              <excludes>
+                <exclude>org/joda/time/**</exclude>
+                <exclude>com/codahale/metrics/**</exclude>
+                <exclude>com/google/common/collect/**</exclude>
+              </excludes>
+            </filter>
+            <filter>
+              <artifact>*:*</artifact>
+              <excludes>
+                <exclude>com/sun/jersey/**</exclude>
+              </excludes>
+            </filter>
           </filters>
         </configuration>
         <executions>
@@ -245,33 +270,25 @@
     </dependency>
     <dependency>
       <groupId>org.apache.phoenix</groupId>
-      <artifactId>phoenix-spark</artifactId>
-      <version>4.10.0-HBase-1.1</version>
+      <artifactId>phoenix-core</artifactId>
+      <version>${phoenix.version}</version>
       <exclusions>
         <exclusion>
-          <artifactId>jersey-server</artifactId>
-          <groupId>com.sun.jersey</groupId>
+          <groupId>org.apache.hadoop</groupId>
+          <artifactId>hadoop-common</artifactId>
         </exclusion>
         <exclusion>
-          <artifactId>jersey-core</artifactId>
-          <groupId>com.sun.jersey</groupId>
+          <groupId>org.apache.hadoop</groupId>
+          <artifactId>hadoop-annotations</artifactId>
         </exclusion>
         <exclusion>
-          <artifactId>jersey-client</artifactId>
+          <artifactId>jersey-core</artifactId>
           <groupId>com.sun.jersey</groupId>
         </exclusion>
         <exclusion>
-          <artifactId>jersey-guice</artifactId>
-          <groupId>com.sun.jersey.contribs</groupId>
-        </exclusion>
-        <exclusion>
-          <artifactId>jersey-json</artifactId>
+          <artifactId>jersey-server</artifactId>
           <groupId>com.sun.jersey</groupId>
         </exclusion>
-        <exclusion>
-          <groupId>com.fasterxml.jackson.core</groupId>
-          <artifactId>jackson-databind</artifactId>
-        </exclusion>
       </exclusions>
     </dependency>
     <dependency>
@@ -379,6 +396,10 @@
           <groupId>org.slf4j</groupId>
           <artifactId>log4j-over-slf4j</artifactId>
         </exclusion>
+        <exclusion>
+          <artifactId>jersey-server</artifactId>
+          <groupId>org.glassfish.jersey.core</groupId>
+        </exclusion>
       </exclusions>
     </dependency>
     <dependency>
@@ -444,6 +465,12 @@
       <artifactId>leveldb</artifactId>
       <version>0.9</version>
     </dependency>
+    <!-- https://mvnrepository.com/artifact/org.scalaj/scalaj-http -->
+    <dependency>
+      <groupId>org.scalaj</groupId>
+      <artifactId>scalaj-http_2.12</artifactId>
+      <version>2.3.0</version>
+    </dependency>
 
     <dependency>
       <groupId>junit</groupId>
@@ -454,7 +481,7 @@
     <dependency>
       <groupId>com.google.guava</groupId>
       <artifactId>guava</artifactId>
-      <version>21.0</version>
+      <version>18.0</version>
     </dependency>
     <dependency>
       <groupId>io.dropwizard.metrics</groupId>
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/resources/config.yml b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/resources/config.yml
index 9402f6e..7de06b4 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/resources/config.yml
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/resources/config.yml
@@ -27,7 +27,7 @@ metricsCollector:
   hosts: host1,host2
   port: 6188
   protocol: http
-  metadataEndpoint: /v1/timeline/metrics/metadata/keys
+  metadataEndpoint: /ws/v1/timeline/metrics/metadata/key
 
 adQueryService:
   anomalyDataTtl: 604800
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/resources/hbase-site.xml b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/resources/hbase-site.xml
new file mode 100644
index 0000000..66f0454
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/resources/hbase-site.xml
@@ -0,0 +1,286 @@
+<?xml version="1.0"?>
+<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
+<!--
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+-->
+<configuration>
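+    <!-- Standalone (non-distributed) AMS HBase settings; data, spool and
+         ZooKeeper directories point at the embedded ambari-metrics-collector
+         instance. -->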
+    
+    <property>
+      <name>dfs.client.read.shortcircuit</name>
+      <value>true</value>
+    </property>
+    
+    <property>
+      <name>hbase.client.scanner.caching</name>
+      <value>10000</value>
+    </property>
+    
+    <property>
+      <name>hbase.client.scanner.timeout.period</name>
+      <value>300000</value>
+    </property>
+    
+    <property>
+      <name>hbase.cluster.distributed</name>
+      <value>false</value>
+    </property>
+    
+    <property>
+      <name>hbase.hregion.majorcompaction</name>
+      <value>0</value>
+    </property>
+    
+    <property>
+      <name>hbase.hregion.max.filesize</name>
+      <value>4294967296</value>
+    </property>
+    
+    <property>
+      <name>hbase.hregion.memstore.block.multiplier</name>
+      <value>4</value>
+    </property>
+    
+    <property>
+      <name>hbase.hregion.memstore.flush.size</name>
+      <value>134217728</value>
+    </property>
+    
+    <property>
+      <name>hbase.hstore.blockingStoreFiles</name>
+      <value>200</value>
+    </property>
+    
+    <property>
+      <name>hbase.hstore.flusher.count</name>
+      <value>2</value>
+    </property>
+    
+    <property>
+      <name>hbase.local.dir</name>
+      <value>${hbase.tmp.dir}/local</value>
+    </property>
+    
+    <property>
+      <name>hbase.master.info.bindAddress</name>
+      <value>0.0.0.0</value>
+    </property>
+    
+    <property>
+      <name>hbase.master.info.port</name>
+      <value>61310</value>
+    </property>
+    
+    <property>
+      <name>hbase.master.normalizer.class</name>
+      <value>org.apache.hadoop.hbase.master.normalizer.SimpleRegionNormalizer</value>
+    </property>
+    
+    <property>
+      <name>hbase.master.port</name>
+      <value>61300</value>
+    </property>
+    
+    <property>
+      <name>hbase.master.wait.on.regionservers.mintostart</name>
+      <value>1</value>
+    </property>
+    
+    <property>
+      <name>hbase.normalizer.enabled</name>
+      <value>false</value>
+    </property>
+    
+    <property>
+      <name>hbase.normalizer.period</name>
+      <value>600000</value>
+    </property>
+    
+    <property>
+      <name>hbase.regionserver.global.memstore.lowerLimit</name>
+      <value>0.3</value>
+    </property>
+    
+    <property>
+      <name>hbase.regionserver.global.memstore.upperLimit</name>
+      <value>0.35</value>
+    </property>
+    
+    <property>
+      <name>hbase.regionserver.info.port</name>
+      <value>61330</value>
+    </property>
+    
+    <property>
+      <name>hbase.regionserver.port</name>
+      <value>61320</value>
+    </property>
+    
+    <property>
+      <name>hbase.regionserver.thread.compaction.large</name>
+      <value>2</value>
+    </property>
+    
+    <property>
+      <name>hbase.regionserver.thread.compaction.small</name>
+      <value>3</value>
+    </property>
+    
+    <property>
+      <name>hbase.replication</name>
+      <value>false</value>
+    </property>
+    
+    <property>
+      <name>hbase.rootdir</name>
+      <value>file:///var/lib/ambari-metrics-collector/hbase</value>
+    </property>
+    
+    <property>
+      <name>hbase.rpc.timeout</name>
+      <value>300000</value>
+    </property>
+    
+    <property>
+      <name>hbase.snapshot.enabled</name>
+      <value>false</value>
+    </property>
+    
+    <property>
+      <name>hbase.superuser</name>
+      <value>activity_explorer,activity_analyzer</value>
+    </property>
+    
+    <property>
+      <name>hbase.tmp.dir</name>
+      <value>/var/lib/ambari-metrics-collector/hbase-tmp</value>
+    </property>
+    
+    <property>
+      <name>hbase.zookeeper.leaderport</name>
+      <value>61388</value>
+    </property>
+    
+    <property>
+      <name>hbase.zookeeper.peerport</name>
+      <value>61288</value>
+    </property>
+    
+    <property>
+      <name>hbase.zookeeper.property.clientPort</name>
+      <value>61181</value>
+    </property>
+    
+    <property>
+      <name>hbase.zookeeper.property.dataDir</name>
+      <value>${hbase.tmp.dir}/zookeeper</value>
+    </property>
+    
+    <property>
+      <name>hbase.zookeeper.property.tickTime</name>
+      <value>6000</value>
+    </property>
+    
+    <property>
+      <name>hbase.zookeeper.quorum</name>
+      <value>c6401.ambari.apache.org</value>
+      <final>true</final>
+    </property>
+    
+    <property>
+      <name>hfile.block.cache.size</name>
+      <value>0.3</value>
+    </property>
+    
+    <property>
+      <name>phoenix.coprocessor.maxMetaDataCacheSize</name>
+      <value>20480000</value>
+    </property>
+    
+    <property>
+      <name>phoenix.coprocessor.maxServerCacheTimeToLiveMs</name>
+      <value>60000</value>
+    </property>
+    
+    <property>
+      <name>phoenix.groupby.maxCacheSize</name>
+      <value>307200000</value>
+    </property>
+    
+    <property>
+      <name>phoenix.mutate.batchSize</name>
+      <value>10000</value>
+    </property>
+    
+    <property>
+      <name>phoenix.query.keepAliveMs</name>
+      <value>300000</value>
+    </property>
+    
+    <property>
+      <name>phoenix.query.maxGlobalMemoryPercentage</name>
+      <value>15</value>
+    </property>
+    
+    <property>
+      <name>phoenix.query.rowKeyOrderSaltedTable</name>
+      <value>true</value>
+    </property>
+    
+    <property>
+      <name>phoenix.query.spoolThresholdBytes</name>
+      <value>20971520</value>
+    </property>
+    
+    <property>
+      <name>phoenix.query.timeoutMs</name>
+      <value>300000</value>
+    </property>
+    
+    <property>
+      <name>phoenix.sequence.saltBuckets</name>
+      <value>2</value>
+    </property>
+    
+    <property>
+      <name>phoenix.spool.directory</name>
+      <value>${hbase.tmp.dir}/phoenix-spool</value>
+    </property>
+    
+    <property>
+      <name>zookeeper.session.timeout</name>
+      <value>120000</value>
+    </property>
+    
+    <property>
+      <name>zookeeper.session.timeout.localHBaseCluster</name>
+      <value>120000</value>
+    </property>
+    
+    <property>
+      <name>zookeeper.znode.parent</name>
+      <value>/ams-hbase-unsecure</value>
+    </property>
+
+    <property>
+      <name>hbase.use.dynamic.jars</name>
+      <value>false</value>
+    </property>
+
+  </configuration>
\ No newline at end of file
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/ADServiceScalaModule.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/ADServiceScalaModule.scala
new file mode 100644
index 0000000..8578a80
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/ADServiceScalaModule.scala
@@ -0,0 +1,50 @@
+/**
+  * Licensed to the Apache Software Foundation (ASF) under one
+  * or more contributor license agreements.  See the NOTICE file
+  * distributed with this work for additional information
+  * regarding copyright ownership.  The ASF licenses this file
+  * to you under the Apache License, Version 2.0 (the
+  * "License"); you may not use this file except in compliance
+  * with the License.  You may obtain a copy of the License at
+  *
+  * http://www.apache.org/licenses/LICENSE-2.0
+  *
+  * Unless required by applicable law or agreed to in writing, software
+  * distributed under the License is distributed on an "AS IS" BASIS,
+  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  * See the License for the specific language governing permissions and
+  * limitations under the License.
+  */
+
+package org.apache.ambari.metrics.adservice.app
+
+import com.fasterxml.jackson.module.scala._
+import com.fasterxml.jackson.module.scala.deser.{ScalaNumberDeserializersModule, UntypedObjectDeserializerModule}
+import com.fasterxml.jackson.module.scala.introspect.{ScalaAnnotationIntrospector, ScalaAnnotationIntrospectorModule}
+
+/**
+  * Extended Jackson Module that fixes the Scala-Jackson BytecodeReadingParanamer issue.
+  */
+class ADServiceScalaModule extends JacksonModule
+  with IteratorModule
+  with EnumerationModule
+  with OptionModule
+  with SeqModule
+  with IterableModule
+  with TupleModule
+  with MapModule
+  with SetModule
+  with FixedScalaAnnotationIntrospectorModule
+  with UntypedObjectDeserializerModule
+  with EitherModule {
+
+  override def getModuleName = "ADServiceScalaModule"
+
+  object ADServiceScalaModule extends ADServiceScalaModule
+
+}
+
+
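+/**
+  * Registers the ScalaAnnotationIntrospector with the mapper; part of the
+  * BytecodeReadingParanamer workaround described in the class comment above.
+  */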
+trait FixedScalaAnnotationIntrospectorModule extends JacksonModule {
+  this += { _.appendAnnotationIntrospector(ScalaAnnotationIntrospector) }
+}
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionApp.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionApp.scala
index 8b3a829..2d0dbdf 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionApp.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionApp.scala
@@ -21,6 +21,9 @@ import javax.ws.rs.Path
 import javax.ws.rs.container.{ContainerRequestFilter, ContainerResponseFilter}
 
 import org.apache.ambari.metrics.adservice.app.GuiceInjector.{withInjector, wrap}
+import org.apache.ambari.metrics.adservice.db.{AdAnomalyStoreAccessor, MetadataDatasource}
+import org.apache.ambari.metrics.adservice.metadata.MetricDefinitionService
+import org.apache.ambari.metrics.adservice.service.ADQueryService
 import org.glassfish.jersey.filter.LoggingFilter
 
 import com.codahale.metrics.health.HealthCheck
@@ -45,6 +48,11 @@ class AnomalyDetectionApp extends Application[AnomalyDetectionAppConfig] {
       injector.instancesOfType(classOf[HealthCheck]).foreach { h => env.healthChecks.register(h.getClass.getName, h) }
       injector.instancesOfType(classOf[ContainerRequestFilter]).foreach { f => env.jersey().register(f) }
       injector.instancesOfType(classOf[ContainerResponseFilter]).foreach { f => env.jersey().register(f) }
+
+      // Initialize backing services: metadata store, metric definition service, AD query service.
+      injector.getInstance(classOf[MetadataDatasource]).initialize
+      injector.getInstance(classOf[MetricDefinitionService]).initialize
+      injector.getInstance(classOf[ADQueryService]).initialize
     }
     env.jersey.register(jacksonJaxbJsonProvider)
     env.jersey.register(new LoggingFilter)
@@ -53,7 +61,7 @@ class AnomalyDetectionApp extends Application[AnomalyDetectionAppConfig] {
   private def jacksonJaxbJsonProvider: JacksonJaxbJsonProvider = {
     val provider = new JacksonJaxbJsonProvider()
     val objectMapper = new ObjectMapper()
-    objectMapper.registerModule(DefaultScalaModule)
+    objectMapper.registerModule(new ADServiceScalaModule)
     objectMapper.registerModule(new JodaModule)
     objectMapper.configure(SerializationFeature.WRAP_ROOT_VALUE, false)
     objectMapper.configure(SerializationFeature.FAIL_ON_EMPTY_BEANS, false)
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionAppConfig.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionAppConfig.scala
index 93f6b28..f9ed4b2 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionAppConfig.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionAppConfig.scala
@@ -42,7 +42,7 @@ class AnomalyDetectionAppConfig extends Configuration {
   private val metricCollectorConfiguration = new MetricCollectorConfiguration
 
   /*
-   Anomaly Service configuration
+   Anomaly Query Service configuration
     */
   @Valid
   private val adServiceConfiguration = new AdServiceConfiguration
@@ -54,7 +54,7 @@ class AnomalyDetectionAppConfig extends Configuration {
   private val metricDefinitionDBConfiguration = new MetricDefinitionDBConfiguration
 
   /*
-   HBase Conf
+   AMS HBase Conf
     */
   @JsonIgnore
   def getHBaseConf : org.apache.hadoop.conf.Configuration = {
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionAppModule.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionAppModule.scala
index a896563..68e9df9 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionAppModule.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionAppModule.scala
@@ -17,7 +17,7 @@
   */
 package org.apache.ambari.metrics.adservice.app
 
-import org.apache.ambari.metrics.adservice.db.{AdMetadataStoreAccessor, LevelDbStoreAccessor, MetadataDatasource}
+import org.apache.ambari.metrics.adservice.db._
 import org.apache.ambari.metrics.adservice.leveldb.LevelDBDataSource
 import org.apache.ambari.metrics.adservice.metadata.{MetricDefinitionService, MetricDefinitionServiceImpl}
 import org.apache.ambari.metrics.adservice.resource.{AnomalyResource, MetricDefinitionResource, RootResource}
@@ -38,9 +38,10 @@ class AnomalyDetectionAppModule(config: AnomalyDetectionAppConfig, env: Environm
     bind(classOf[AnomalyResource])
     bind(classOf[MetricDefinitionResource])
     bind(classOf[RootResource])
-    bind(classOf[AdMetadataStoreAccessor]).to(classOf[LevelDbStoreAccessor])
+    bind(classOf[AdMetadataStoreAccessor]).to(classOf[AdMetadataStoreAccessorImpl])
     bind(classOf[ADQueryService]).to(classOf[ADQueryServiceImpl])
     bind(classOf[MetricDefinitionService]).to(classOf[MetricDefinitionServiceImpl])
     bind(classOf[MetadataDatasource]).to(classOf[LevelDBDataSource])
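+    // Anomaly instances are now read from the Phoenix/HBase store instead of LevelDB.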
+    bind(classOf[AdAnomalyStoreAccessor]).to(classOf[PhoenixAnomalyStoreAccessor])
   }
 }
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/configuration/HBaseConfiguration.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/configuration/HBaseConfiguration.scala
index a51a959..a95ff15 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/configuration/HBaseConfiguration.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/configuration/HBaseConfiguration.scala
@@ -28,6 +28,9 @@ object HBaseConfiguration {
   var isInitialized: Boolean = false
   val LOG : Logger = LoggerFactory.getLogger("HBaseConfiguration")
 
+  /**
+    * Initialize the HBase configuration from the hbase-site.xml found on the classpath.
+    */
   def initConfigs(): Unit = {
     if (!isInitialized) {
       var classLoader: ClassLoader = Thread.currentThread.getContextClassLoader
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/model/SingleMetricAnomalyInstance.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/AdAnomalyStoreAccessor.scala
similarity index 67%
copy from ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/model/SingleMetricAnomalyInstance.scala
copy to ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/AdAnomalyStoreAccessor.scala
index 981a893..676b09a 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/model/SingleMetricAnomalyInstance.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/AdAnomalyStoreAccessor.scala
@@ -16,14 +16,21 @@
   * limitations under the License.
   */
 
-package org.apache.ambari.metrics.adservice.model
+package org.apache.ambari.metrics.adservice.db
 
-import org.apache.ambari.metrics.adservice.metadata.MetricKey
 import org.apache.ambari.metrics.adservice.model.AnomalyType.AnomalyType
+import org.apache.ambari.metrics.adservice.model.MetricAnomalyInstance
 
-abstract class SingleMetricAnomalyInstance {
+/**
+  * Trait for the anomaly store accessor (backed by Phoenix).
+  */
+trait AdAnomalyStoreAccessor {
+
+  def initialize(): Unit
 
-  val metricKey: MetricKey
-  val anomalyType: AnomalyType
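+  /** Fetch up to 'limit' stored anomalies of the given type in the given time window. */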
+  def getMetricAnomalies(anomalyType: AnomalyType,
+                         startTime: Long,
+                         endTime: Long,
+                         limit: Int) : List[MetricAnomalyInstance]
 
-}
+  }
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/AdMetadataStoreAccessorImpl.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/AdMetadataStoreAccessorImpl.scala
new file mode 100644
index 0000000..7405459
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/AdMetadataStoreAccessorImpl.scala
@@ -0,0 +1,96 @@
+/**
+  * Licensed to the Apache Software Foundation (ASF) under one
+  * or more contributor license agreements.  See the NOTICE file
+  * distributed with this work for additional information
+  * regarding copyright ownership.  The ASF licenses this file
+  * to you under the Apache License, Version 2.0 (the
+  * "License"); you may not use this file except in compliance
+  * with the License.  You may obtain a copy of the License at
+  *
+  * http://www.apache.org/licenses/LICENSE-2.0
+  *
+  * Unless required by applicable law or agreed to in writing, software
+  * distributed under the License is distributed on an "AS IS" BASIS,
+  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  * See the License for the specific language governing permissions and
+  * limitations under the License.
+  */
+
+package org.apache.ambari.metrics.adservice.db
+
+import org.apache.ambari.metrics.adservice.metadata.MetricSourceDefinition
+import org.apache.commons.lang.SerializationUtils
+
+import com.google.inject.Inject
+
+/**
+  * Implementation of the AdMetadataStoreAccessor.
+  * Serves as the adaptor between metric definition service and LevelDB worlds.
+  */
+class AdMetadataStoreAccessorImpl extends AdMetadataStoreAccessor {
+
+  @Inject
+  var metadataDataSource: MetadataDatasource = _
+
+  @Inject
+  def this(metadataDataSource: MetadataDatasource) = {
+    this()
+    this.metadataDataSource = metadataDataSource
+  }
+
+  /**
+    * Return all saved component definitions from DB.
+    *
+    * @return
+    */
+  override def getSavedInputDefinitions: List[MetricSourceDefinition] = {
+    val valuesFromStore : List[MetadataDatasource#Value] = metadataDataSource.getAll
+    val definitions = scala.collection.mutable.MutableList.empty[MetricSourceDefinition]
+
+    for (value : Array[Byte] <- valuesFromStore) {
+      val definition : MetricSourceDefinition = SerializationUtils.deserialize(value).asInstanceOf[MetricSourceDefinition]
+      if (definition != null) {
+        definitions.+=(definition)
+      }
+    }
+    definitions.toList
+  }
+
+  /**
+    * Save a set of component definitions
+    *
+    * @param metricSourceDefinitions Set of component definitions
+    * @return Success / Failure
+    */
+  override def saveInputDefinitions(metricSourceDefinitions: List[MetricSourceDefinition]): Boolean = {
+    for (definition <- metricSourceDefinitions) {
+      saveInputDefinition(definition)
+    }
+    true
+  }
+
+  /**
+    * Save a component definition
+    *
+    * @param metricSourceDefinition component definition
+    * @return Success / Failure
+    */
+  override def saveInputDefinition(metricSourceDefinition: MetricSourceDefinition): Boolean = {
+    val storeValue : MetadataDatasource#Value = SerializationUtils.serialize(metricSourceDefinition)
+    val storeKey : MetadataDatasource#Key = metricSourceDefinition.definitionName.getBytes()
+    metadataDataSource.put(storeKey, storeValue)
+    true
+  }
+
+  /**
+    * Delete a component definition
+    *
+    * @param definitionName component definition
+    * @return
+    */
+  override def removeInputDefinition(definitionName: String): Boolean = {
+    val storeKey : MetadataDatasource#Key = definitionName.getBytes()
+    metadataDataSource.delete(storeKey)
+    true
+  }
+}
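
Note for reviewers: the accessor above persists each definition as a commons-lang serialized byte array keyed by its definition name. A minimal, self-contained sketch of that round-trip (DefinitionStub is an illustrative stand-in for MetricSourceDefinition, not part of the patch):

    import org.apache.commons.lang.SerializationUtils

    // Illustrative stand-in for MetricSourceDefinition; any Serializable value works.
    case class DefinitionStub(definitionName: String, appId: String)

    object SerializationRoundTripSketch extends App {
      val definition = DefinitionStub("host_memory", "HOST")

      // Same calls the accessor uses for save/get.
      val key: Array[Byte] = definition.definitionName.getBytes()
      val value: Array[Byte] = SerializationUtils.serialize(definition)

      val restored = SerializationUtils.deserialize(value).asInstanceOf[DefinitionStub]
      assert(restored == definition)
      println(s"key=${key.length} bytes, value=${value.length} bytes, restored=$restored")
    }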
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/ConnectionProvider.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/ConnectionProvider.scala
new file mode 100644
index 0000000..cc02ed4
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/ConnectionProvider.scala
@@ -0,0 +1,29 @@
+/**
+  * Licensed to the Apache Software Foundation (ASF) under one
+  * or more contributor license agreements.  See the NOTICE file
+  * distributed with this work for additional information
+  * regarding copyright ownership.  The ASF licenses this file
+  * to you under the Apache License, Version 2.0 (the
+  * "License"); you may not use this file except in compliance
+  * with the License.  You may obtain a copy of the License at
+  *
+  * http://www.apache.org/licenses/LICENSE-2.0
+  *
+  * Unless required by applicable law or agreed to in writing, software
+  * distributed under the License is distributed on an "AS IS" BASIS,
+  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  * See the License for the specific language governing permissions and
+  * limitations under the License.
+  */
+package org.apache.ambari.metrics.adservice.db
+
+import java.sql.Connection
+import java.sql.SQLException
+
+/**
+  * Provides a connection to the anomaly store.
+  */
+trait ConnectionProvider {
+  @throws[SQLException]
+  def getConnection: Connection
+}
\ No newline at end of file
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/DefaultPhoenixDataSource.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/DefaultPhoenixDataSource.scala
new file mode 100644
index 0000000..d9396de
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/DefaultPhoenixDataSource.scala
@@ -0,0 +1,79 @@
+/**
+  * Licensed to the Apache Software Foundation (ASF) under one
+  * or more contributor license agreements.  See the NOTICE file
+  * distributed with this work for additional information
+  * regarding copyright ownership.  The ASF licenses this file
+  * to you under the Apache License, Version 2.0 (the
+  * "License"); you may not use this file except in compliance
+  * with the License.  You may obtain a copy of the License at
+  *
+  * http://www.apache.org/licenses/LICENSE-2.0
+  *
+  * Unless required by applicable law or agreed to in writing, software
+  * distributed under the License is distributed on an "AS IS" BASIS,
+  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  * See the License for the specific language governing permissions and
+  * limitations under the License.
+  */
+package org.apache.ambari.metrics.adservice.db
+
+import org.apache.commons.logging.LogFactory
+import org.apache.hadoop.conf.Configuration
+import org.apache.hadoop.hbase.client.ConnectionFactory
+import org.apache.hadoop.hbase.client.HBaseAdmin
+import java.io.IOException
+import java.sql.Connection
+import java.sql.DriverManager
+import java.sql.SQLException
+
+object DefaultPhoenixDataSource {
+  private[db] val LOG = LogFactory.getLog(classOf[DefaultPhoenixDataSource])
+  private val ZOOKEEPER_CLIENT_PORT = "hbase.zookeeper.property.clientPort"
+  private val ZOOKEEPER_QUORUM = "hbase.zookeeper.quorum"
+  private val ZNODE_PARENT = "zookeeper.znode.parent"
+  private val connectionUrl = "jdbc:phoenix:%s:%s:%s"
+}
+
+class DefaultPhoenixDataSource(var hbaseConf: Configuration) extends PhoenixConnectionProvider {
+
+  val zookeeperClientPort: String = hbaseConf.getTrimmed(DefaultPhoenixDataSource.ZOOKEEPER_CLIENT_PORT, "2181")
+  val zookeeperQuorum: String = hbaseConf.getTrimmed(DefaultPhoenixDataSource.ZOOKEEPER_QUORUM)
+  val znodeParent: String = hbaseConf.getTrimmed(DefaultPhoenixDataSource.ZNODE_PARENT, "/ams-hbase-unsecure")
+  final private var url : String = _
+
+  if (zookeeperQuorum == null || zookeeperQuorum.isEmpty) {
+    throw new IllegalStateException("Unable to find Zookeeper quorum to access HBase store using Phoenix.")
+  }
+  url = String.format(DefaultPhoenixDataSource.connectionUrl, zookeeperQuorum, zookeeperClientPort, znodeParent)
+
+
+  /**
+    * Get HBaseAdmin for table ops.
+    *
+    * @return HBaseAdmin
+    * @throws IOException
+    */
+  @throws[IOException]
+  override def getHBaseAdmin: HBaseAdmin = ConnectionFactory.createConnection(hbaseConf).getAdmin.asInstanceOf[HBaseAdmin]
+
+  /**
+    * Get JDBC connection to HBase store. Assumption is that the hbase
+    * configuration is present on the classpath and loaded by the caller into
+    * the Configuration object.
+    * Phoenix already caches the HConnection between the client and HBase
+    * cluster.
+    *
+    * @return java.sql.Connection
+    */
+  @throws[SQLException]
+  override def getConnection: Connection = {
+    DefaultPhoenixDataSource.LOG.debug("Metric store connection url: " + url)
+    try DriverManager.getConnection(url)
+    catch {
+      case e: SQLException =>
+        DefaultPhoenixDataSource.LOG.warn("Unable to connect to HBase store using Phoenix.", e)
+        throw e
+    }
+  }
+
+}
\ No newline at end of file
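
For reference, the connection URL template above expands as jdbc:phoenix:<quorum>:<clientPort>:<znodeParent>. A small sketch with placeholder quorum and znode values (a real connection additionally needs phoenix-client on the classpath and a live cluster):

    import java.sql.DriverManager

    object PhoenixUrlSketch extends App {
      val template = "jdbc:phoenix:%s:%s:%s"
      val url = String.format(template, "zk1.example.com,zk2.example.com", "2181", "/ams-hbase-unsecure")
      println(url) // jdbc:phoenix:zk1.example.com,zk2.example.com:2181:/ams-hbase-unsecure

      // Only meaningful against a live cluster:
      // val conn = DriverManager.getConnection(url)
    }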
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/LevelDbStoreAccessor.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/LevelDbStoreAccessor.scala
deleted file mode 100644
index baad57d..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/LevelDbStoreAccessor.scala
+++ /dev/null
@@ -1,56 +0,0 @@
-package org.apache.ambari.metrics.adservice.db
-
-import org.apache.ambari.metrics.adservice.metadata.MetricSourceDefinition
-
-import com.google.inject.Inject
-
-class LevelDbStoreAccessor extends AdMetadataStoreAccessor{
-
-  @Inject
-  var levelDbDataSource : MetadataDatasource = _
-
-  @Inject
-  def this(levelDbDataSource: MetadataDatasource) = {
-    this
-    this.levelDbDataSource = levelDbDataSource
-  }
-
-  /**
-    * Return all saved component definitions from DB.
-    *
-    * @return
-    */
-  override def getSavedInputDefinitions: List[MetricSourceDefinition] = {
-    List.empty[MetricSourceDefinition]
-  }
-
-  /**
-    * Save a set of component definitions
-    *
-    * @param metricSourceDefinitions Set of component definitions
-    * @return Success / Failure
-    */
-override def saveInputDefinitions(metricSourceDefinitions: List[MetricSourceDefinition]): Boolean = {
-  true
-}
-
-  /**
-    * Save a component definition
-    *
-    * @param metricSourceDefinition component definition
-    * @return Success / Failure
-    */
-  override def saveInputDefinition(metricSourceDefinition: MetricSourceDefinition): Boolean = {
-    true
-  }
-
-  /**
-    * Delete a component definition
-    *
-    * @param definitionName component definition
-    * @return
-    */
-  override def removeInputDefinition(definitionName: String): Boolean = {
-    true
-  }
-}
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/MetadataDatasource.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/MetadataDatasource.scala
index aa6694a..7b223a2 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/MetadataDatasource.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/MetadataDatasource.scala
@@ -44,6 +44,12 @@ trait MetadataDatasource {
     */
   def get(key: Key): Option[Value]
 
+  /**
+    * This function obtains all the values
+    *
+    * @return the list of values
+    */
+  def getAll: List[Value]
 
   /**
     * This function associates a key to a value, overwriting if necessary
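
The new getAll rounds out the get/put/delete contract on MetadataDatasource. A toy in-memory equivalent (illustrative only, not the LevelDB-backed implementation) showing the intended semantics:

    import scala.collection.mutable

    // Keys and values are byte arrays, mirroring MetadataDatasource#Key / #Value.
    class InMemoryMetadataStore {
      private val store = mutable.LinkedHashMap.empty[String, Array[Byte]]

      def put(key: Array[Byte], value: Array[Byte]): Unit =
        store(new String(key)) = value

      def get(key: Array[Byte]): Option[Array[Byte]] =
        store.get(new String(key))

      // getAll returns every stored value; insertion order here, storage order in LevelDB.
      def getAll: List[Array[Byte]] = store.values.toList
    }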
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/PhoenixAnomalyStoreAccessor.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/PhoenixAnomalyStoreAccessor.scala
index 36aea21..147d1f7 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/PhoenixAnomalyStoreAccessor.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/PhoenixAnomalyStoreAccessor.scala
@@ -23,48 +23,60 @@ import java.util.concurrent.TimeUnit.SECONDS
 import org.apache.ambari.metrics.adservice.app.AnomalyDetectionAppConfig
 import org.apache.ambari.metrics.adservice.common._
 import org.apache.ambari.metrics.adservice.configuration.HBaseConfiguration
-import org.apache.ambari.metrics.adservice.metadata.MetricKey
+import org.apache.ambari.metrics.adservice.metadata.{MetricDefinitionService, MetricKey}
 import org.apache.ambari.metrics.adservice.model.AnomalyDetectionMethod.AnomalyDetectionMethod
 import org.apache.ambari.metrics.adservice.model.AnomalyType.AnomalyType
-import org.apache.ambari.metrics.adservice.model.{AnomalyDetectionMethod, AnomalyType, SingleMetricAnomalyInstance}
+import org.apache.ambari.metrics.adservice.model.{AnomalyDetectionMethod, AnomalyType, MetricAnomalyInstance}
 import org.apache.ambari.metrics.adservice.subsystem.pointintime.PointInTimeAnomalyInstance
 import org.apache.ambari.metrics.adservice.subsystem.trend.TrendAnomalyInstance
 import org.apache.hadoop.hbase.util.RetryCounterFactory
-import org.apache.hadoop.metrics2.sink.timeline.query.{DefaultPhoenixDataSource, PhoenixConnectionProvider}
+import org.slf4j.{Logger, LoggerFactory}
 
 import com.google.inject.Inject
 
-object PhoenixAnomalyStoreAccessor  {
+/**
+  * Phoenix query handler class.
+  */
+class PhoenixAnomalyStoreAccessor extends AdAnomalyStoreAccessor {
 
   @Inject
   var configuration: AnomalyDetectionAppConfig = _
 
+  @Inject
+  var metricDefinitionService: MetricDefinitionService = _
+
   var datasource: PhoenixConnectionProvider = _
+  val LOG : Logger = LoggerFactory.getLogger(classOf[PhoenixAnomalyStoreAccessor])
 
-  def initAnomalyMetricSchema(): Unit = {
+  @Override
+  def initialize(): Unit = {
 
-    val datasource: PhoenixConnectionProvider = new DefaultPhoenixDataSource(HBaseConfiguration.getHBaseConf)
+    datasource = new DefaultPhoenixDataSource(HBaseConfiguration.getHBaseConf)
     val retryCounterFactory = new RetryCounterFactory(10, SECONDS.toMillis(3).toInt)
 
     val ttl = configuration.getAdServiceConfiguration.getAnomalyDataTtl
     try {
-      var conn = datasource.getConnectionRetryingOnException(retryCounterFactory)
+      var conn : Connection = getConnectionRetryingOnException(retryCounterFactory)
       var stmt = conn.createStatement
 
+      //Create Method parameters table.
       val methodParametersSql = String.format(PhoenixQueryConstants.CREATE_METHOD_PARAMETERS_TABLE,
         PhoenixQueryConstants.METHOD_PARAMETERS_TABLE_NAME)
       stmt.executeUpdate(methodParametersSql)
 
+      //Create Point in Time anomaly table
       val pointInTimeAnomalySql = String.format(PhoenixQueryConstants.CREATE_PIT_ANOMALY_METRICS_TABLE_SQL,
         PhoenixQueryConstants.PIT_ANOMALY_METRICS_TABLE_NAME,
         ttl.asInstanceOf[Object])
       stmt.executeUpdate(pointInTimeAnomalySql)
 
+      //Create Trend Anomaly table
       val trendAnomalySql = String.format(PhoenixQueryConstants.CREATE_TREND_ANOMALY_METRICS_TABLE_SQL,
         PhoenixQueryConstants.TREND_ANOMALY_METRICS_TABLE_NAME,
         ttl.asInstanceOf[Object])
       stmt.executeUpdate(trendAnomalySql)
 
+      //Create model snapshot table.
       val snapshotSql = String.format(PhoenixQueryConstants.CREATE_MODEL_SNAPSHOT_TABLE,
         PhoenixQueryConstants.MODEL_SNAPSHOT)
       stmt.executeUpdate(snapshotSql)
@@ -75,11 +87,9 @@ object PhoenixAnomalyStoreAccessor  {
     }
   }
 
-  @throws[SQLException]
-  def getConnection: Connection = datasource.getConnection
-
-  def getSingleMetricAnomalies(anomalyType: AnomalyType, startTime: Long, endTime: Long, limit: Int) : scala.collection.mutable.MutableList[SingleMetricAnomalyInstance] = {
-    val anomalies = scala.collection.mutable.MutableList.empty[SingleMetricAnomalyInstance]
+  @Override
+  def getMetricAnomalies(anomalyType: AnomalyType, startTime: Long, endTime: Long, limit: Int) : List[MetricAnomalyInstance] = {
+    val anomalies = scala.collection.mutable.MutableList.empty[MetricAnomalyInstance]
     val conn : Connection = getConnection
     var stmt : PreparedStatement = null
     var rs : ResultSet = null
@@ -98,8 +108,8 @@ object PhoenixAnomalyStoreAccessor  {
           val anomalyScore: Double = rs.getDouble("ANOMALY_SCORE")
           val modelSnapshot: String = rs.getString("MODEL_PARAMETERS")
 
-          val metricKey: MetricKey = null //MetricManager.getMetricKeyFromUuid(uuid) //TODO
-          val anomalyInstance: SingleMetricAnomalyInstance = new PointInTimeAnomalyInstance(metricKey, timestamp,
+          val metricKey: MetricKey = metricDefinitionService.getMetricKeyFromUuid(uuid)
+          val anomalyInstance: MetricAnomalyInstance = new PointInTimeAnomalyInstance(metricKey, timestamp,
             metricValue, methodType, anomalyScore, season, modelSnapshot)
           anomalies.+=(anomalyInstance)
         }
@@ -115,8 +125,8 @@ object PhoenixAnomalyStoreAccessor  {
           val anomalyScore: Double = rs.getDouble("ANOMALY_SCORE")
           val modelSnapshot: String = rs.getString("MODEL_PARAMETERS")
 
-          val metricKey: MetricKey = null //MetricManager.getMetricKeyFromUuid(uuid) //TODO
-          val anomalyInstance: SingleMetricAnomalyInstance = TrendAnomalyInstance(metricKey,
+          val metricKey: MetricKey = metricDefinitionService.getMetricKeyFromUuid(uuid)
+          val anomalyInstance: MetricAnomalyInstance = TrendAnomalyInstance(metricKey,
             TimeRange(anomalyStart, anomalyEnd),
             TimeRange(referenceStart, referenceEnd),
             methodType, anomalyScore, season, modelSnapshot)
@@ -127,11 +137,11 @@ object PhoenixAnomalyStoreAccessor  {
       case e: SQLException => throw e
     }
 
-    anomalies
+    anomalies.toList
   }
 
   @throws[SQLException]
-  def prepareAnomalyMetricsGetSqlStatement(connection: Connection, anomalyType: AnomalyType, startTime: Long, endTime: Long, limit: Int): PreparedStatement = {
+  private def prepareAnomalyMetricsGetSqlStatement(connection: Connection, anomalyType: AnomalyType, startTime: Long, endTime: Long, limit: Int): PreparedStatement = {
 
     val sb = new StringBuilder
 
@@ -145,11 +155,11 @@ object PhoenixAnomalyStoreAccessor  {
     var stmt: java.sql.PreparedStatement = null
     try {
       stmt = connection.prepareStatement(sb.toString)
-      var pos = 1
 
-      pos += 1
+      var pos = 1
       stmt.setLong(pos, startTime)
 
+      pos += 1
       stmt.setLong(pos, endTime)
 
       stmt.setFetchSize(limit)
@@ -157,9 +167,32 @@ object PhoenixAnomalyStoreAccessor  {
     } catch {
       case e: SQLException =>
         if (stmt != null)
-          stmt
+          return stmt
         throw e
     }
     stmt
   }
+
+  @throws[SQLException]
+  private def getConnection: Connection = datasource.getConnection
+
+  @throws[SQLException]
+  @throws[InterruptedException]
+  private def getConnectionRetryingOnException (retryCounterFactory : RetryCounterFactory) : Connection = {
+    val retryCounter = retryCounterFactory.create
+    while(true) {
+      try
+        return getConnection
+      catch {
+        case e: SQLException =>
+          if (!retryCounter.shouldRetry) {
+            LOG.error("HBaseAccessor getConnection failed after " + retryCounter.getMaxAttempts + " attempts")
+            throw e
+          }
+      }
+      retryCounter.sleepUntilNextRetry()
+    }
+    null
+  }
+
 }
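
getConnectionRetryingOnException above bounds connection attempts with an HBase RetryCounter. The same pattern, sketched standalone without the HBase dependency (attempt count and sleep interval are illustrative):

    import java.sql.SQLException

    object RetrySketch {
      // Run an action, retrying on SQLException up to maxAttempts times.
      def retrying[T](maxAttempts: Int, sleepMillis: Long)(action: => T): T = {
        var attempt = 1
        while (true) {
          try {
            return action
          } catch {
            case e: SQLException =>
              if (attempt >= maxAttempts) throw e
              attempt += 1
              Thread.sleep(sleepMillis)
          }
        }
        throw new IllegalStateException("unreachable")
      }
      // e.g. RetrySketch.retrying(10, 3000) { DriverManager.getConnection(url) }
    }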
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/PhoenixConnectionProvider.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/PhoenixConnectionProvider.scala
new file mode 100644
index 0000000..1faf1ba
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/PhoenixConnectionProvider.scala
@@ -0,0 +1,37 @@
+/**
+  * Licensed to the Apache Software Foundation (ASF) under one
+  * or more contributor license agreements.  See the NOTICE file
+  * distributed with this work for additional information
+  * regarding copyright ownership.  The ASF licenses this file
+  * to you under the Apache License, Version 2.0 (the
+  * "License"); you may not use this file except in compliance
+  * with the License.  You may obtain a copy of the License at
+  *
+  * http://www.apache.org/licenses/LICENSE-2.0
+  *
+  * Unless required by applicable law or agreed to in writing, software
+  * distributed under the License is distributed on an "AS IS" BASIS,
+  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  * See the License for the specific language governing permissions and
+  * limitations under the License.
+  */
+
+package org.apache.ambari.metrics.adservice.db
+
+import org.apache.hadoop.hbase.client.HBaseAdmin
+import java.io.IOException
+
+/**
+  * Connection provider for the Phoenix-backed anomaly store, which can
+  * also hand out an HBaseAdmin for table operations.
+  */
+trait PhoenixConnectionProvider extends ConnectionProvider {
+  /**
+    * Get HBaseAdmin for the Phoenix connection
+    *
+    * @return
+    * @throws IOException
+    */
+    @throws[IOException]
+    def getHBaseAdmin: HBaseAdmin
+}
\ No newline at end of file
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/PhoenixQueryConstants.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/PhoenixQueryConstants.scala
index 5379c91..d9774e0 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/PhoenixQueryConstants.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/PhoenixQueryConstants.scala
@@ -54,25 +54,25 @@ object PhoenixQueryConstants {
 
   val CREATE_TREND_ANOMALY_METRICS_TABLE_SQL: String = "CREATE TABLE IF NOT EXISTS %s (" +
     "METRIC_UUID BINARY(20) NOT NULL, " +
+    "METHOD_NAME VARCHAR, " +
     "ANOMALY_PERIOD_START UNSIGNED_LONG NOT NULL, " +
     "ANOMALY_PERIOD_END UNSIGNED_LONG NOT NULL, " +
     "TEST_PERIOD_START UNSIGNED_LONG NOT NULL, " +
     "TEST_PERIOD_END UNSIGNED_LONG NOT NULL, " +
-    "METHOD_NAME VARCHAR, " +
     "SEASONAL_INFO VARCHAR, " +
     "ANOMALY_SCORE DOUBLE, " +
     "MODEL_PARAMETERS VARCHAR, " +
     "DETECTION_TIME UNSIGNED_LONG " +
     "CONSTRAINT pk PRIMARY KEY (METRIC_UUID, METHOD_NAME, ANOMALY_PERIOD_START, ANOMALY_PERIOD_END, TEST_PERIOD_START, TEST_PERIOD_END)) " +
-    "DATA_BLOCK_ENCODING='FAST_DIFF' IMMUTABLE_ROWS=true, TTL=%s, COMPRESSION='SNAPPY'"
+    "DATA_BLOCK_ENCODING='FAST_DIFF', IMMUTABLE_ROWS=true, TTL=%s, COMPRESSION='SNAPPY'"
 
   val CREATE_MODEL_SNAPSHOT_TABLE: String = "CREATE TABLE IF NOT EXISTS %s (" +
-    "METRIC_UUID BINARY(20), " +
+    "METRIC_UUID BINARY(20) NOT NULL, " +
     "METHOD_NAME VARCHAR, " +
     "METHOD_TYPE VARCHAR, " +
-    "PARAMETERS VARCHAR " +
-    "SNAPSHOT_TIME UNSIGNED LONG NOT NULL " +
-    "CONSTRAINT pk PRIMARY KEY (METRIC_UUID, METHOD_NAME)) " +
+    "PARAMETERS VARCHAR, " +
+    "SNAPSHOT_TIME UNSIGNED_LONG NOT NULL " +
+    "CONSTRAINT pk PRIMARY KEY (METRIC_UUID, METHOD_NAME, SNAPSHOT_TIME)) " +
     "DATA_BLOCK_ENCODING='FAST_DIFF', IMMUTABLE_ROWS=true, COMPRESSION='SNAPPY'"
 
   //////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
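
These DDL strings are templates expanded with String.format at schema-creation time; note the TTL argument is passed as an Object. A quick expansion sketch with example values (table name and TTL are placeholders, and the column list is elided):

    object DdlTemplateSketch extends App {
      val createTableTemplate = "CREATE TABLE IF NOT EXISTS %s (...) " +
        "DATA_BLOCK_ENCODING='FAST_DIFF', IMMUTABLE_ROWS=true, TTL=%s, COMPRESSION='SNAPPY'"

      val ttl: java.lang.Integer = 2592000 // e.g. 30 days, illustrative only
      println(String.format(createTableTemplate, "TREND_ANOMALY_METRICS", ttl))
    }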
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/leveldb/LevelDBDatasource.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/leveldb/LevelDBDatasource.scala
index a34a60a..49ef272 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/leveldb/LevelDBDatasource.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/leveldb/LevelDBDatasource.scala
@@ -42,7 +42,6 @@ class LevelDBDataSource() extends MetadataDatasource {
   def this(appConfig: AnomalyDetectionAppConfig) = {
     this
     this.appConfig = appConfig
-    initialize()
   }
 
   override def initialize(): Unit = {
@@ -83,6 +82,27 @@
   override def get(key: Key): Option[Value] = Option(db.get(key))
 
   /**
+    * This function obtains all the values
+    *
+    * @return the list of values
+    */
+  def getAll: List[Value] = {
+    val values = scala.collection.mutable.MutableList.empty[Value]
+    val iterator = db.iterator()
+    try {
+      iterator.seekToFirst()
+      while (iterator.hasNext) {
+        val entry: java.util.Map.Entry[Key, Value] = iterator.next()
+        values.+=(entry.getValue)
+      }
+    } finally {
+      // DBIterator holds native resources and must be closed.
+      iterator.close()
+    }
+    values.toList
+  }
+
+  /**
     * This function updates the DataSource by deleting, updating and inserting new (key-value) pairs.
     *
     * @param toRemove which includes all the keys to be removed from the DataSource.
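
For context on getAll above: iterating a LevelDB handle with the iq80/leveldbjni API looks roughly like this (a sketch assuming leveldbjni-all on the classpath; the temp-directory setup is illustrative):

    import java.io.File
    import org.fusesource.leveldbjni.JniDBFactory.factory
    import org.iq80.leveldb.Options

    object LevelDbIterationSketch extends App {
      val dir = new File(System.getProperty("java.io.tmpdir"), "leveldb-sketch")
      val db = factory.open(dir, new Options().createIfMissing(true))
      try {
        db.put("k1".getBytes(), "v1".getBytes())
        val iterator = db.iterator()
        try {
          iterator.seekToFirst()
          while (iterator.hasNext) {
            val entry = iterator.next()
            println(new String(entry.getKey) + " -> " + new String(entry.getValue))
          }
        } finally iterator.close() // same close discipline as getAll above
      } finally db.close()
    }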
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/ADMetadataProvider.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/ADMetadataProvider.scala
index 95b1b63..c277221 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/ADMetadataProvider.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/ADMetadataProvider.scala
@@ -17,15 +17,17 @@
 
 package org.apache.ambari.metrics.adservice.metadata
 
-import java.net.{HttpURLConnection, URL}
+import javax.ws.rs.core.Response
 
 import org.apache.ambari.metrics.adservice.configuration.MetricCollectorConfiguration
 import org.apache.commons.lang.StringUtils
-import org.apache.hadoop.metrics2.sink.timeline.TimelineMetricKey
+import org.slf4j.{Logger, LoggerFactory}
 
 import com.fasterxml.jackson.databind.ObjectMapper
 import com.fasterxml.jackson.module.scala.experimental.ScalaObjectMapper
 
+import scalaj.http.{Http, HttpRequest, HttpResponse}
+
 /**
   * Class to invoke Metrics Collector metadata API.
   * TODO : Instantiate a sync thread that regularly updates the internal maps by reading off AMS metadata.
@@ -36,6 +38,7 @@ class ADMetadataProvider extends MetricMetadataProvider {
   var metricCollectorPort: String = _
   var metricCollectorProtocol: String = _
   var metricMetadataPath: String = "/v1/timeline/metrics/metadata/keys"
+  val LOG : Logger = LoggerFactory.getLogger(classOf[ADMetadataProvider])
 
   val connectTimeout: Int = 10000
   val readTimeout: Int = 10000
@@ -52,10 +55,8 @@ class ADMetadataProvider extends MetricMetadataProvider {
     metricMetadataPath = configuration.getMetadataEndpoint
   }
 
-  override def getMetricKeysForDefinitions(metricSourceDefinition: MetricSourceDefinition): (Map[MetricDefinition,
-    Set[MetricKey]], Set[MetricKey]) = {
+  override def getMetricKeysForDefinitions(metricSourceDefinition: MetricSourceDefinition): Set[MetricKey] = {
 
-    val keysMap = scala.collection.mutable.Map[MetricDefinition, Set[MetricKey]]()
     val numDefinitions: Int = metricSourceDefinition.metricDefinitions.size
     val metricKeySet: scala.collection.mutable.Set[MetricKey] = scala.collection.mutable.Set.empty[MetricKey]
 
@@ -64,52 +65,82 @@
         for (host <- metricCollectorHosts) {
           val metricKeys: Set[MetricKey] = getKeysFromMetricsCollector(metricCollectorProtocol, host, metricCollectorPort, metricMetadataPath, metricDef)
           if (metricKeys != null) {
-            keysMap += (metricDef -> metricKeys)
-            metricKeySet.++(metricKeys)
+            metricKeySet.++=(metricKeys)
           }
         }
       }
     }
-    (keysMap.toMap, metricKeySet.toSet)
+    metricKeySet.toSet
   }
 
   /**
-    * Make Metrics Collector REST API call to fetch keys.
+    * Make a Metrics Collector REST API call to fetch metric keys.
     *
-    * @param url
+    * @param protocol
+    * @param host
+    * @param port
+    * @param path
     * @param metricDefinition
     * @return
     */
   def getKeysFromMetricsCollector(protocol: String, host: String, port: String, path: String, metricDefinition: MetricDefinition): Set[MetricKey] = {
 
-    val url: String = protocol + "://" + host + port + "/" + path
+    val url: String = protocol + "://" + host + ":" + port + path
     val mapper = new ObjectMapper() with ScalaObjectMapper
+
+    if (metricDefinition.hosts == null || metricDefinition.hosts.isEmpty) {
+      val request: HttpRequest = Http(url)
+        .param("metricName", metricDefinition.metricName)
+        .param("appId", metricDefinition.appId)
+        .timeout(connectTimeout, readTimeout)
+      makeHttpGetCall(request, mapper)
+    } else {
+      val metricKeySet: scala.collection.mutable.Set[MetricKey] = scala.collection.mutable.Set.empty[MetricKey]
+
+      for (h <- metricDefinition.hosts) {
+        val request: HttpRequest = Http(url)
+          .param("metricName", metricDefinition.metricName)
+          .param("appId", metricDefinition.appId)
+          .param("hostname", h)
+          .timeout(connectTimeout, readTimeout)
+
+        val metricKeys = makeHttpGetCall(request, mapper)
+        metricKeySet.++=(metricKeys)
+      }
+      metricKeySet.toSet
+    }
+  }
+
+  private def makeHttpGetCall(request: HttpRequest, mapper: ObjectMapper): Set[MetricKey] = {
+
     try {
-      val connection = new URL(url).openConnection.asInstanceOf[HttpURLConnection]
-      connection.setConnectTimeout(connectTimeout)
-      connection.setReadTimeout(readTimeout)
-      connection.setRequestMethod("GET")
-      val inputStream = connection.getInputStream
-      val content = scala.io.Source.fromInputStream(inputStream).mkString
-      if (inputStream != null) inputStream.close()
-      val metricKeySet: Set[MetricKey] = fromTimelineMetricKey(mapper.readValue[java.util.Set[TimelineMetricKey]](content))
-      return metricKeySet
+      val result: HttpResponse[String] = request.asString
+      if (result.code == Response.Status.OK.getStatusCode) {
+        LOG.info("Successfully fetched metric keys from metrics collector")
+        val metricKeySet: java.util.Set[java.util.Map[String, String]] = mapper.readValue(result.body,
+          classOf[java.util.Set[java.util.Map[String, String]]])
+        return getMetricKeys(metricKeySet)
+      } else {
+        LOG.error("Got an error when trying to fetch metric key from metrics collector. Code = " + result.code + ", Message = " + result.body)
+      }
     } catch {
-      case _: java.io.IOException | _: java.net.SocketTimeoutException => // handle this
+      case _: java.io.IOException | _: java.net.SocketTimeoutException => LOG.error("Unable to fetch metric keys from Metrics collector for : " + request.toString)
     }
-    null
+    Set.empty[MetricKey]
   }
 
-  def fromTimelineMetricKey(timelineMetricKeys: java.util.Set[TimelineMetricKey]): Set[MetricKey] = {
+
+  def getMetricKeys(timelineMetricKeys: java.util.Set[java.util.Map[String, String]]): Set[MetricKey] = {
     val metricKeySet: scala.collection.mutable.Set[MetricKey] = scala.collection.mutable.Set.empty[MetricKey]
     val iter = timelineMetricKeys.iterator()
     while (iter.hasNext) {
-      val timelineMetricKey: TimelineMetricKey = iter.next()
-      val metricKey: MetricKey = MetricKey(timelineMetricKey.metricName,
-        timelineMetricKey.appId,
-        timelineMetricKey.instanceId,
-        timelineMetricKey.hostName,
-        timelineMetricKey.uuid)
+      val timelineMetricKey: java.util.Map[String, String] = iter.next()
+      val metricKey: MetricKey = MetricKey(
+        timelineMetricKey.get("metricName"),
+        timelineMetricKey.get("appId"),
+        timelineMetricKey.get("instanceId"),
+        timelineMetricKey.get("hostname"),
+        timelineMetricKey.get("uuid").getBytes())
 
       metricKeySet.add(metricKey)
     }
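
The provider now goes through scalaj-http for the metadata call. A minimal sketch of the request shape (host, port, and metric parameters are placeholders for a live Metrics Collector; the path matches metricMetadataPath above):

    import scalaj.http.{Http, HttpRequest, HttpResponse}

    object MetadataCallSketch extends App {
      val url = "http://collector.example.com:6188/v1/timeline/metrics/metadata/keys"

      val request: HttpRequest = Http(url)
        .param("metricName", "mem_free")
        .param("appId", "HOST")

      val response: HttpResponse[String] = request.asString
      if (response.code == 200) println(response.body)
      else println("request failed, code=" + response.code)
    }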
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/InputMetricDefinitionParser.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/InputMetricDefinitionParser.scala
index cc66c90..3c8ea84 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/InputMetricDefinitionParser.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/InputMetricDefinitionParser.scala
@@ -19,6 +19,8 @@ package org.apache.ambari.metrics.adservice.metadata
 
 import java.io.File
 
+import org.apache.ambari.metrics.adservice.app.ADServiceScalaModule
+
 import com.fasterxml.jackson.databind.ObjectMapper
 import com.fasterxml.jackson.module.scala.experimental.ScalaObjectMapper
 
@@ -30,15 +32,19 @@ object InputMetricDefinitionParser {
       return List.empty[MetricSourceDefinition]
     }
     val mapper = new ObjectMapper() with ScalaObjectMapper
-
-    def metricSourceDefinitions: List[MetricSourceDefinition] =
-      for {
-        file <- getFilesInDirectory(directory)
-        definition: MetricSourceDefinition = mapper.readValue[MetricSourceDefinition](file)
-        if definition != null
-      } yield definition
-
-    metricSourceDefinitions
+    mapper.registerModule(new ADServiceScalaModule)
+    val metricSourceDefinitions: scala.collection.mutable.MutableList[MetricSourceDefinition] =
+      scala.collection.mutable.MutableList.empty[MetricSourceDefinition]
+
+    for (file <- getFilesInDirectory(directory)) {
+      val source = scala.io.Source.fromFile(file)
+      val lines = try source.mkString finally source.close()
+      val definition: MetricSourceDefinition = mapper.readValue[MetricSourceDefinition](lines)
+      if (definition != null) {
+        metricSourceDefinitions.+=(definition)
+      }
+    }
+    metricSourceDefinitions.toList
   }
 
   private def getFilesInDirectory(directory: String): List[File] = {
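
The parser now slurps each definition file into a String and deserializes it with a Scala-aware ObjectMapper. The same pattern, sketched with the stock DefaultScalaModule (the service registers its own ADServiceScalaModule instead) and an inline JSON string:

    import com.fasterxml.jackson.databind.ObjectMapper
    import com.fasterxml.jackson.module.scala.DefaultScalaModule
    import com.fasterxml.jackson.module.scala.experimental.ScalaObjectMapper

    object JsonReadSketch extends App {
      val mapper = new ObjectMapper() with ScalaObjectMapper
      mapper.registerModule(DefaultScalaModule)

      val json = """{"definition-name": "host_memory", "app-id": "HOST"}"""
      // readValue with a Scala target type, as in the parser.
      val parsed = mapper.readValue[Map[String, String]](json)
      println(parsed("definition-name"))
    }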
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricDefinition.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricDefinition.scala
index 036867b..c668dfa 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricDefinition.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricDefinition.scala
@@ -19,6 +19,8 @@
 package org.apache.ambari.metrics.adservice.metadata
 
 import org.apache.commons.lang3.StringUtils
+
+import com.fasterxml.jackson.annotation.JsonIgnore
 /*
    {
        "metric-name": "mem_free",
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricDefinitionService.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricDefinitionService.scala
index 635dc60..52ce39e 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricDefinitionService.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricDefinitionService.scala
@@ -17,7 +17,9 @@
 
 package org.apache.ambari.metrics.adservice.metadata
 
-trait MetricDefinitionService {
+import org.apache.ambari.metrics.adservice.service.AbstractADService
+
+trait MetricDefinitionService extends AbstractADService {
 
   /**
     * Given a 'UUID', return the metric key associated with it.
@@ -27,6 +29,12 @@ trait MetricDefinitionService {
   def getMetricKeyFromUuid(uuid: Array[Byte]) : MetricKey
 
   /**
+    * Return all the definitions being tracked.
+    * @return Map of Metric Source Definition name to Metric Source Definition.
+    */
+  def getDefinitions: List[MetricSourceDefinition]
+
+  /**
     * Given a component definition name, return the definition associated with it.
     * @param name component definition name
     * @return
@@ -61,4 +69,10 @@ trait MetricDefinitionService {
     */
   def getDefinitionByAppId(appId: String) : List[MetricSourceDefinition]
 
+  /**
+    * Return the mapping between definition name to set of metric keys.
+    * @return Map of Metric Source Definition to set of metric keys associated with it.
+    */
+  def getMetricKeys: Map[String, Set[MetricKey]]
+
 }
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricDefinitionServiceImpl.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricDefinitionServiceImpl.scala
index c34d2dd..b9b4a7c 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricDefinitionServiceImpl.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricDefinitionServiceImpl.scala
@@ -32,31 +32,24 @@ class MetricDefinitionServiceImpl extends MetricDefinitionService {
   var configuration: AnomalyDetectionAppConfig = _
   var metricMetadataProvider: MetricMetadataProvider = _
 
-  var metricSourceDefinitionMap: Map[String, MetricSourceDefinition] = Map()
-  var metricKeys: Set[MetricKey] = Set.empty[MetricKey]
-  var metricDefinitionMetricKeyMap: Map[MetricDefinition, Set[MetricKey]] = Map()
+  val metricSourceDefinitionMap: scala.collection.mutable.Map[String, MetricSourceDefinition] = scala.collection.mutable.Map()
+  val metricDefinitionMetricKeyMap: scala.collection.mutable.Map[MetricSourceDefinition, Set[MetricKey]] = scala.collection.mutable.Map()
+  val metricKeys: scala.collection.mutable.Set[MetricKey] = scala.collection.mutable.Set.empty[MetricKey]
 
   @Inject
   def this (anomalyDetectionAppConfig: AnomalyDetectionAppConfig, metadataStoreAccessor: AdMetadataStoreAccessor) = {
     this ()
     adMetadataStoreAccessor = metadataStoreAccessor
     configuration = anomalyDetectionAppConfig
-    initializeService()
   }
 
-  def initializeService() : Unit = {
-
-    //Create AD Metadata Schema
-    //TODO Make sure AD Metadata DB is initialized here.
+  @Override
+  def initialize() : Unit = {
+    LOG.info("Initializing Metric Definition Service...")
 
     //Initialize Metric Metadata Provider
     metricMetadataProvider = new ADMetadataProvider(configuration.getMetricCollectorConfiguration)
 
-    loadMetricSourceDefinitions()
-  }
-
-  def loadMetricSourceDefinitions() : Unit = {
-
     //Load definitions from metadata store
     val definitionsFromStore: List[MetricSourceDefinition] = adMetadataStoreAccessor.getSavedInputDefinitions
     for (definition <- definitionsFromStore) {
@@ -71,14 +64,16 @@ class MetricDefinitionServiceImpl extends MetricDefinitionService {
 
     //Union the 2 sources, with DB taking precedence.
     //Save new definition list to DB.
-    metricSourceDefinitionMap = metricSourceDefinitionMap.++(combineDefinitionSources(definitionsFromConfig, definitionsFromStore))
+    metricSourceDefinitionMap.++=(combineDefinitionSources(definitionsFromConfig, definitionsFromStore))
 
-    //Reach out to AMS Metadata and get Metric Keys. Pass in List<CD> and get back (Map<MD,Set<MK>>, Set<MK>)
+    //Reach out to AMS Metadata and get Metric Keys. Pass in MSD and get back Set<MK>
     for (definition <- metricSourceDefinitionMap.values) {
-      val (definitionKeyMap: Map[MetricDefinition, Set[MetricKey]], keys: Set[MetricKey])= metricMetadataProvider.getMetricKeysForDefinitions(definition)
-      metricDefinitionMetricKeyMap = metricDefinitionMetricKeyMap.++(definitionKeyMap)
-      metricKeys = metricKeys.++(keys)
+      val keys: Set[MetricKey] = metricMetadataProvider.getMetricKeysForDefinitions(definition)
+      metricDefinitionMetricKeyMap(definition) = keys
+      metricKeys.++=(keys)
     }
+
+    LOG.info("Successfully initialized Metric Definition Service.")
   }
 
   def getMetricKeyFromUuid(uuid: Array[Byte]): MetricKey = {
@@ -92,16 +87,24 @@ class MetricDefinitionServiceImpl extends MetricDefinitionService {
   }
 
   @Override
+  def getDefinitions: List[MetricSourceDefinition] = {
+    metricSourceDefinitionMap.values.toList
+  }
+
+  @Override
   def getDefinitionByName(name: String): MetricSourceDefinition = {
     if (!metricSourceDefinitionMap.contains(name)) {
       LOG.warn("Metric Source Definition with name " + name + " not found")
+      null
+    } else {
+      metricSourceDefinitionMap.apply(name)
     }
-    metricSourceDefinitionMap.apply(name)
   }
 
   @Override
   def addDefinition(definition: MetricSourceDefinition): Boolean = {
     if (metricSourceDefinitionMap.contains(definition.definitionName)) {
+      LOG.info("Definition with name " + definition.definitionName + " already present.")
       return false
     }
     definition.definitionSource = MetricSourceDefinitionType.API
@@ -109,6 +112,10 @@ class MetricDefinitionServiceImpl extends MetricDefinitionService {
     val success: Boolean = adMetadataStoreAccessor.saveInputDefinition(definition)
     if (success) {
       metricSourceDefinitionMap += definition.definitionName -> definition
+      val keys: Set[MetricKey] = metricMetadataProvider.getMetricKeysForDefinitions(definition)
+      metricDefinitionMetricKeyMap(definition) = keys
+      metricKeys.++=(keys)
+      LOG.info("Successfully created metric source definition : " + definition.definitionName)
     }
     success
   }
@@ -116,16 +123,22 @@ class MetricDefinitionServiceImpl extends MetricDefinitionService {
   @Override
   def updateDefinition(definition: MetricSourceDefinition): Boolean = {
     if (!metricSourceDefinitionMap.contains(definition.definitionName)) {
+      LOG.warn("Metric Source Definition with name " + definition.definitionName + " not found")
       return false
     }
 
     if (metricSourceDefinitionMap.apply(definition.definitionName).definitionSource != MetricSourceDefinitionType.API) {
       return false
     }
+    definition.definitionSource = MetricSourceDefinitionType.API
 
     val success: Boolean = adMetadataStoreAccessor.saveInputDefinition(definition)
     if (success) {
       metricSourceDefinitionMap += definition.definitionName -> definition
+      val keys: Set[MetricKey] = metricMetadataProvider.getMetricKeysForDefinitions(definition)
+      metricDefinitionMetricKeyMap(definition) = keys
+      metricKeys.++=(keys)
+      LOG.info("Successfully updated metric source definition : " + definition.definitionName)
     }
     success
   }
@@ -133,17 +146,22 @@ class MetricDefinitionServiceImpl extends MetricDefinitionService {
   @Override
   def deleteDefinitionByName(name: String): Boolean = {
     if (!metricSourceDefinitionMap.contains(name)) {
+      LOG.warn("Metric Source Definition with name " + name + " not found")
       return false
     }
 
     val definition : MetricSourceDefinition = metricSourceDefinitionMap.apply(name)
     if (definition.definitionSource != MetricSourceDefinitionType.API) {
+      LOG.warn("Cannot delete metric source definition which was not created through API.")
       return false
     }
 
     val success: Boolean = adMetadataStoreAccessor.removeInputDefinition(name)
     if (success) {
-      metricSourceDefinitionMap += definition.definitionName -> definition
+      metricSourceDefinitionMap -= definition.definitionName
+      metricKeys.--=(metricDefinitionMetricKeyMap.apply(definition))
+      metricDefinitionMetricKeyMap -= definition
+      LOG.info("Successfully deleted metric source definition : " + name)
     }
     success
   }
@@ -183,7 +201,6 @@ class MetricDefinitionServiceImpl extends MetricDefinitionService {
     this.adMetadataStoreAccessor = adMetadataStoreAccessor
   }
 
-
   /**
     * Look into the Metric Definitions inside a Metric Source definition, and push down source level appId &
     * hosts to Metric definition if they do not have an override.
@@ -202,7 +219,7 @@ class MetricDefinitionServiceImpl extends MetricDefinitionService {
         }
       }
 
-      if (metricDef.isValid && metricDef.hosts.isEmpty) {
+      if (metricDef.isValid && (metricDef.hosts == null || metricDef.hosts.isEmpty)) {
         if (sourceLevelHostList != null && sourceLevelHostList.nonEmpty) {
           metricDef.hosts = sourceLevelHostList
         }
@@ -210,4 +227,16 @@ class MetricDefinitionServiceImpl extends MetricDefinitionService {
     }
   }
 
+  /**
+    * Return the mapping between definition name to set of metric keys.
+    *
+    * @return Map of Metric Source Definition to set of metric keys associated with it.
+    */
+  override def getMetricKeys: Map[String, Set[MetricKey]] = {
+    val metricKeyMap: scala.collection.mutable.Map[String, Set[MetricKey]] = scala.collection.mutable.Map()
+    for (definition <- metricSourceDefinitionMap.values) {
+      metricKeyMap(definition.definitionName) = metricDefinitionMetricKeyMap.apply(definition)
+    }
+    metricKeyMap.toMap
+  }
 }
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricKey.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricKey.scala
index afad617..65c496e 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricKey.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricKey.scala
@@ -18,6 +18,9 @@
 
 package org.apache.ambari.metrics.adservice.metadata
 
+import javax.xml.bind.annotation.XmlRootElement
+
+@XmlRootElement
 case class MetricKey (metricName: String, appId: String, instanceId: String, hostname: String, uuid: Array[Byte]) {
 
   @Override
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricMetadataProvider.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricMetadataProvider.scala
index 5f9c0a0..b5ba15e 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricMetadataProvider.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricMetadataProvider.scala
@@ -27,5 +27,5 @@ trait MetricMetadataProvider {
     * @param metricSourceDefinition component definition
     * @return
     */
-  def getMetricKeysForDefinitions(metricSourceDefinition: MetricSourceDefinition): (Map[MetricDefinition, Set[MetricKey]], Set[MetricKey])
+  def getMetricKeysForDefinitions(metricSourceDefinition: MetricSourceDefinition): Set[MetricKey]
 }
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/model/SingleMetricAnomalyInstance.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/model/MetricAnomalyInstance.scala
similarity index 91%
rename from ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/model/SingleMetricAnomalyInstance.scala
rename to ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/model/MetricAnomalyInstance.scala
index 981a893..248a380 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/model/SingleMetricAnomalyInstance.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/model/MetricAnomalyInstance.scala
@@ -18,12 +18,15 @@
 
 package org.apache.ambari.metrics.adservice.model
 
+import javax.xml.bind.annotation.XmlRootElement
+
 import org.apache.ambari.metrics.adservice.metadata.MetricKey
 import org.apache.ambari.metrics.adservice.model.AnomalyType.AnomalyType
 
-abstract class SingleMetricAnomalyInstance {
+@XmlRootElement
+abstract class MetricAnomalyInstance {
 
   val metricKey: MetricKey
   val anomalyType: AnomalyType
 
-}
+}
\ No newline at end of file
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/resource/AnomalyResource.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/resource/AnomalyResource.scala
index 98ce0c4..db12307 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/resource/AnomalyResource.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/resource/AnomalyResource.scala
@@ -19,17 +19,62 @@ package org.apache.ambari.metrics.adservice.resource
 
 import javax.ws.rs.core.MediaType.APPLICATION_JSON
 import javax.ws.rs.core.Response
-import javax.ws.rs.{GET, Path, Produces}
+import javax.ws.rs.{GET, Path, Produces, QueryParam}
 
-import org.joda.time.DateTime
+import org.apache.ambari.metrics.adservice.model.{AnomalyType, MetricAnomalyInstance}
+import org.apache.ambari.metrics.adservice.model.AnomalyType.AnomalyType
+import org.apache.ambari.metrics.adservice.service.ADQueryService
+import org.apache.commons.lang.StringUtils
+
+import com.google.inject.Inject
 
 @Path("/anomaly")
 class AnomalyResource {
 
+  @Inject
+  var aDQueryService: ADQueryService = _
+
   @GET
   @Produces(Array(APPLICATION_JSON))
-  def default: Response = {
-    Response.ok.entity(Map("message" -> "Anomaly Detection Service!",
-      "today" -> DateTime.now.toString("MM-dd-yyyy hh:mm"))).build()
+  def getTopNAnomalies(@QueryParam("type") anType: String,
+                       @QueryParam("startTime") startTime: Long,
+                       @QueryParam("endTime") endTime: Long,
+                       @QueryParam("top") limit: Int): Response = {
+
+    val anomalies: List[MetricAnomalyInstance] = aDQueryService.getTopNAnomaliesByType(
+      parseAnomalyType(anType),
+      parseStartTime(startTime),
+      parseEndTime(endTime),
+      parseTop(limit))
+
+    Response.ok.entity(anomalies).build()
+  }
+
+  private def parseAnomalyType(anomalyType: String) : AnomalyType = {
+    if (StringUtils.isEmpty(anomalyType)) {
+      return AnomalyType.POINT_IN_TIME
+    }
+    AnomalyType.withName(anomalyType.toUpperCase)
+  }
+
+  private def parseStartTime(startTime: Long) : Long = {
+    if (startTime > 0L) {
+      return startTime
+    }
+    System.currentTimeMillis() - 60*60*1000
+  }
+
+  private def parseEndTime(endTime: Long) : Long = {
+    if (endTime > 0L) {
+      return endTime
+    }
+    System.currentTimeMillis()
+  }
+
+  private def parseTop(limit: Int) : Int = {
+    if (limit > 0) {
+      return limit
+    }
+    5
   }
 }
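
With the defaulting logic above, GET /anomaly falls back to POINT_IN_TIME anomalies over the last hour, top 5. A client-side sketch using scalaj-http (the base URL and port are placeholders):

    import scalaj.http.Http

    object AnomalyQuerySketch extends App {
      val now = System.currentTimeMillis()
      val response = Http("http://adservice.example.com:9999/anomaly") // placeholder
        .param("type", "TREND")
        .param("startTime", (now - 60 * 60 * 1000).toString)
        .param("endTime", now.toString)
        .param("top", "10")
        .asString
      println(response.code + ": " + response.body)
    }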
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/resource/MetricDefinitionResource.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/resource/MetricDefinitionResource.scala
index 16125fa..442bf46 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/resource/MetricDefinitionResource.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/resource/MetricDefinitionResource.scala
@@ -17,10 +17,11 @@
 
 package org.apache.ambari.metrics.adservice.resource
 
-import javax.ws.rs.{GET, Path, Produces}
+import javax.ws.rs._
 import javax.ws.rs.core.MediaType.APPLICATION_JSON
+import javax.ws.rs.core.Response
 
-import org.apache.ambari.metrics.adservice.metadata.{MetricDefinitionService, MetricSourceDefinition}
+import org.apache.ambari.metrics.adservice.metadata.{MetricDefinitionService, MetricKey, MetricSourceDefinition}
 import org.apache.commons.lang.StringUtils
 
 import com.google.inject.Inject
@@ -33,8 +34,76 @@ class MetricDefinitionResource {
 
   @GET
   @Produces(Array(APPLICATION_JSON))
-  def getMetricDefinition (definitionName: String) : MetricSourceDefinition = {
-    null
+  @Path("/{name}")
+  def defaultGet(@PathParam("name") definitionName: String): Response  = {
+
+    if (StringUtils.isEmpty(definitionName)) {
+      return Response.ok.entity(Map("message" -> "Definition name cannot be empty. Use path parameter 'name'")).build()
+    }
+    val metricSourceDefinition = metricDefinitionService.getDefinitionByName(definitionName)
+    if (metricSourceDefinition != null) {
+      Response.ok.entity(metricSourceDefinition).build()
+    } else {
+      Response.ok.entity(Map("message" -> "Definition not found")).build()
+    }
+  }
+
+  @GET
+  @Produces(Array(APPLICATION_JSON))
+  def getAllMetricDefinitions: Response  = {
+    val metricSourceDefinitionMap: List[MetricSourceDefinition] = metricDefinitionService.getDefinitions
+    Response.ok.entity(metricSourceDefinitionMap).build()
+  }
+
+  @GET
+  @Path("/keys")
+  @Produces(Array(APPLICATION_JSON))
+  def getMetricKeys: Response  = {
+    val metricKeyMap: Map[String, Set[MetricKey]] = metricDefinitionService.getMetricKeys
+    Response.ok.entity(metricKeyMap).build()
   }
 
+  @POST
+  @Produces(Array(APPLICATION_JSON))
+  def defaultPost(definition: MetricSourceDefinition) : Response = {
+    if (definition == null) {
+      return Response.ok.entity(Map("message" -> "Definition content cannot be empty.")).build()
+    }
+    val success : Boolean = metricDefinitionService.addDefinition(definition)
+    if (success) {
+      Response.ok.entity(Map("message" -> "Definition saved")).build()
+    } else {
+      Response.ok.entity(Map("message" -> "Definition could not be saved")).build()
+    }
+  }
+
+  @PUT
+  @Produces(Array(APPLICATION_JSON))
+  def defaultPut(definition: MetricSourceDefinition) : Response = {
+    if (definition == null) {
+      return Response.ok.entity(Map("message" -> "Definition content cannot be empty.")).build()
+    }
+    val success : Boolean = metricDefinitionService.updateDefinition(definition)
+    if (success) {
+      Response.ok.entity(Map("message" -> "Definition updated")).build()
+    } else {
+      Response.ok.entity(Map("message" -> "Definition could not be updated")).build()
+    }
+  }
+
+  @DELETE
+  @Produces(Array(APPLICATION_JSON))
+  @Path("/{name}")
+  def defaultDelete(@PathParam("name") definitionName: String): Response  = {
+
+    if (StringUtils.isEmpty(definitionName)) {
+      return Response.ok.entity(Map("message" -> "Definition name cannot be empty. Use path parameter 'name'")).build()
+    }
+    val success: Boolean = metricDefinitionService.deleteDefinitionByName(definitionName)
+    if (success) {
+      Response.ok.entity(Map("message" -> "Definition deleted")).build()
+    } else {
+      Response.ok.entity(Map("message" -> "Definition could not be deleted")).build()
+    }
+  }
 }
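
For illustration, a minimal JAX-RS client sketch against these new CRUD endpoints could look like the following (Scala, using the same javax.ws.rs.client API the tests use; the base URL, resource path, and JSON field names are assumptions, since the class-level @Path is not part of this hunk):

    import javax.ws.rs.client.{ClientBuilder, Entity}
    import javax.ws.rs.core.MediaType.APPLICATION_JSON

    val client = ClientBuilder.newClient()
    // Base URL and resource path are illustrative assumptions.
    val base = "http://localhost:9999/metric-definition"

    // GET /{name}: fetch one definition via the path parameter.
    val one = client.target(s"$base/testDefinition")
      .request().accept(APPLICATION_JSON).buildGet().invoke(classOf[String])

    // GET with no path parameter: list all definitions.
    val all = client.target(base)
      .request().accept(APPLICATION_JSON).buildGet().invoke(classOf[String])

    // POST: save a new definition; the JSON body is deserialized into a
    // MetricSourceDefinition (field names here are guesses for illustration).
    val saved = client.target(base).request()
      .buildPost(Entity.json("""{"definitionName":"testDefinition","appId":"A1"}"""))
      .invoke(classOf[String])

    // DELETE /{name}: remove a definition by name.
    val deleted = client.target(s"$base/testDefinition")
      .request().buildDelete().invoke(classOf[String])
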
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/resource/RootResource.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/resource/RootResource.scala
index 22fe0ac..fd55b64 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/resource/RootResource.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/resource/RootResource.scala
@@ -17,6 +17,8 @@
   */
 package org.apache.ambari.metrics.adservice.resource
 
+import java.time.LocalDateTime
+
 import javax.ws.rs.core.MediaType.APPLICATION_JSON
 import javax.ws.rs.core.Response
 import javax.ws.rs.{GET, Path, Produces}
@@ -29,7 +31,8 @@ class RootResource {
   @Produces(Array(APPLICATION_JSON))
   @GET
   def default: Response = {
-    Response.ok.entity(Map("name" -> "anomaly-detection-service", "today" -> DateTime.now.toString("MM-dd-yyyy hh:mm"))).build()
+    val dtf = java.time.format.DateTimeFormatter.ofPattern("yyyy/MM/dd HH:mm")
+    Response.ok.entity(Map("name" -> "anomaly-detection-service", "today" -> LocalDateTime.now.format(dtf))).build()
   }
 
 }
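
With the formatter actually applied (see the fix above; as committed, dtf was declared but unused), the root endpoint returns a payload along the lines of {"name":"anomaly-detection-service","today":"2017/11/30 16:07"}; the timestamp value here is illustrative.
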
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/service/ADQueryService.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/service/ADQueryService.scala
index 8e6f511..2cfa30f 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/service/ADQueryService.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/service/ADQueryService.scala
@@ -18,9 +18,9 @@
 package org.apache.ambari.metrics.adservice.service
 
 import org.apache.ambari.metrics.adservice.model.AnomalyType.AnomalyType
-import org.apache.ambari.metrics.adservice.model.SingleMetricAnomalyInstance
+import org.apache.ambari.metrics.adservice.model.MetricAnomalyInstance
 
-trait ADQueryService {
+trait ADQueryService extends AbstractADService {
 
   /**
     * API to return list of single metric anomalies satisfying a set of conditions from the anomaly store.
@@ -30,5 +30,5 @@ trait ADQueryService {
    * @param limit Maximum number of anomaly metrics that need to be returned based on anomaly score.
     * @return
     */
-  def getTopNAnomaliesByType(anomalyType: AnomalyType, startTime: Long, endTime: Long, limit: Int): List[SingleMetricAnomalyInstance]
+  def getTopNAnomaliesByType(anomalyType: AnomalyType, startTime: Long, endTime: Long, limit: Int): List[MetricAnomalyInstance]
 }
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/service/ADQueryServiceImpl.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/service/ADQueryServiceImpl.scala
index e5efa44..3b49208 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/service/ADQueryServiceImpl.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/service/ADQueryServiceImpl.scala
@@ -16,11 +16,30 @@
   * limitations under the License.
   */
 package org.apache.ambari.metrics.adservice.service
+import org.apache.ambari.metrics.adservice.db.AdAnomalyStoreAccessor
 import org.apache.ambari.metrics.adservice.model.AnomalyType.AnomalyType
-import org.apache.ambari.metrics.adservice.model.SingleMetricAnomalyInstance
+import org.apache.ambari.metrics.adservice.model.MetricAnomalyInstance
+import org.slf4j.{Logger, LoggerFactory}
 
+import com.google.inject.{Inject, Singleton}
+
+@Singleton
 class ADQueryServiceImpl extends ADQueryService {
 
+  val LOG : Logger = LoggerFactory.getLogger(classOf[ADQueryServiceImpl])
+
+  @Inject
+  var adAnomalyStoreAccessor: AdAnomalyStoreAccessor = _
+
+  /**
+    * Initialize Service
+    */
+  override def initialize(): Unit = {
+    LOG.info("Initializing AD Query Service...")
+    adAnomalyStoreAccessor.initialize()
+    LOG.info("Successfully initialized AD Query Service.")
+  }
+
   /**
     * Implementation to return list of anomalies satisfying a set of conditions from the anomaly store.
     *
@@ -30,8 +49,8 @@ class ADQueryServiceImpl extends ADQueryService {
    * @param limit       Maximum number of anomaly metrics that need to be returned based on anomaly score.
     * @return
     */
-  override def getTopNAnomaliesByType(anomalyType: AnomalyType, startTime: Long, endTime: Long, limit: Int): List[SingleMetricAnomalyInstance] = {
-    val anomalies = List.empty[SingleMetricAnomalyInstance]
+  override def getTopNAnomaliesByType(anomalyType: AnomalyType, startTime: Long, endTime: Long, limit: Int): List[MetricAnomalyInstance] = {
+    val anomalies = adAnomalyStoreAccessor.getMetricAnomalies(anomalyType, startTime, endTime, limit)
     anomalies
   }
 }
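
A minimal usage sketch of the query service, assuming the Guice wiring used elsewhere in this module (the injector value is hypothetical):

    import org.apache.ambari.metrics.adservice.model.AnomalyType

    val service: ADQueryService = injector.getInstance(classOf[ADQueryServiceImpl])
    service.initialize() // sets up the underlying AdAnomalyStoreAccessor

    val end = System.currentTimeMillis()
    val start = end - 60 * 60 * 1000 // last hour
    val top5 = service.getTopNAnomaliesByType(AnomalyType.POINT_IN_TIME, start, end, 5)
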
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/service/AbstractADService.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/service/AbstractADService.scala
new file mode 100644
index 0000000..56bb999
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/service/AbstractADService.scala
@@ -0,0 +1,28 @@
+/**
+  * Licensed to the Apache Software Foundation (ASF) under one
+  * or more contributor license agreements.  See the NOTICE file
+  * distributed with this work for additional information
+  * regarding copyright ownership.  The ASF licenses this file
+  * to you under the Apache License, Version 2.0 (the
+  * "License"); you may not use this file except in compliance
+  * with the License.  You may obtain a copy of the License at
+  *
+  * http://www.apache.org/licenses/LICENSE-2.0
+  *
+  * Unless required by applicable law or agreed to in writing, software
+  * distributed under the License is distributed on an "AS IS" BASIS,
+  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  * See the License for the specific language governing permissions and
+  * limitations under the License.
+  */
+
+package org.apache.ambari.metrics.adservice.service
+
+trait AbstractADService {
+
+  /**
+    * Initialize Service
+    */
+  def initialize(): Unit
+
+}
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/subsystem/pointintime/PointInTimeAnomalyInstance.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/subsystem/pointintime/PointInTimeAnomalyInstance.scala
index 63cf8c7..56ca2c1 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/subsystem/pointintime/PointInTimeAnomalyInstance.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/subsystem/pointintime/PointInTimeAnomalyInstance.scala
@@ -23,7 +23,7 @@ import org.apache.ambari.metrics.adservice.common.Season
 import org.apache.ambari.metrics.adservice.metadata.MetricKey
 import org.apache.ambari.metrics.adservice.model.AnomalyDetectionMethod.AnomalyDetectionMethod
 import org.apache.ambari.metrics.adservice.model.AnomalyType.AnomalyType
-import org.apache.ambari.metrics.adservice.model.{AnomalyType, SingleMetricAnomalyInstance}
+import org.apache.ambari.metrics.adservice.model.{AnomalyType, MetricAnomalyInstance}
 
 class PointInTimeAnomalyInstance(val metricKey: MetricKey,
                                  val timestamp: Long,
@@ -31,7 +31,7 @@ class PointInTimeAnomalyInstance(val metricKey: MetricKey,
                                  val methodType: AnomalyDetectionMethod,
                                  val anomalyScore: Double,
                                  val anomalousSeason: Season,
-                                 val modelParameters: String) extends SingleMetricAnomalyInstance {
+                                 val modelParameters: String) extends MetricAnomalyInstance {
 
   override val anomalyType: AnomalyType = AnomalyType.POINT_IN_TIME
 
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/subsystem/trend/TrendAnomalyInstance.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/subsystem/trend/TrendAnomalyInstance.scala
index 3fc0d6f..7392d59 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/subsystem/trend/TrendAnomalyInstance.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/subsystem/trend/TrendAnomalyInstance.scala
@@ -21,7 +21,7 @@ import org.apache.ambari.metrics.adservice.common.{Season, TimeRange}
 import org.apache.ambari.metrics.adservice.metadata.MetricKey
 import org.apache.ambari.metrics.adservice.model.AnomalyDetectionMethod.AnomalyDetectionMethod
 import org.apache.ambari.metrics.adservice.model.AnomalyType.AnomalyType
-import org.apache.ambari.metrics.adservice.model.{AnomalyType, SingleMetricAnomalyInstance}
+import org.apache.ambari.metrics.adservice.model.{AnomalyType, MetricAnomalyInstance}
 
 case class TrendAnomalyInstance (metricKey: MetricKey,
                                  anomalousPeriod: TimeRange,
@@ -29,7 +29,7 @@ case class TrendAnomalyInstance (metricKey: MetricKey,
                                  methodType: AnomalyDetectionMethod,
                                  anomalyScore: Double,
                                  seasonInfo: Season,
-                                 modelParameters: String) extends SingleMetricAnomalyInstance {
+                                 modelParameters: String) extends MetricAnomalyInstance {
 
  override val anomalyType: AnomalyType = AnomalyType.TREND
 
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/app/DefaultADResourceSpecTest.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/app/DefaultADResourceSpecTest.scala
index 2a4999c..e38ea40 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/app/DefaultADResourceSpecTest.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/app/DefaultADResourceSpecTest.scala
@@ -17,6 +17,8 @@
   */
 package org.apache.ambari.metrics.adservice.app
 
+import java.time.LocalDateTime
+
 import javax.ws.rs.client.Client
 import javax.ws.rs.core.MediaType.APPLICATION_JSON
 
@@ -37,7 +39,8 @@ class DefaultADResourceSpecTest extends FunSpec with Matchers {
       withAppRunning(classOf[AnomalyDetectionApp], Resources.getResource("config.yml").getPath) { rule =>
         val json = client.target(s"http://localhost:${rule.getLocalPort}/anomaly")
           .request().accept(APPLICATION_JSON).buildGet().invoke(classOf[String])
-        val now = DateTime.now.toString("MM-dd-yyyy hh:mm")
+        val dtf = java.time.format.DateTimeFormatter.ofPattern("yyyy/MM/dd HH:mm")
+        val now = LocalDateTime.now.format(dtf)
         assert(json == "{\"message\":\"Anomaly Detection Service!\"," + "\"today\":\"" + now + "\"}")
       }
     }
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/metadata/AMSMetadataProviderTest.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/metadata/AMSMetadataProviderTest.scala
index bd38e9a..79366b1 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/metadata/AMSMetadataProviderTest.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/metadata/AMSMetadataProviderTest.scala
@@ -18,26 +18,32 @@
 
 package org.apache.ambari.metrics.adservice.metadata
 
+import java.util
+
 import org.apache.ambari.metrics.adservice.configuration.MetricCollectorConfiguration
-import org.apache.hadoop.metrics2.sink.timeline.TimelineMetricKey
 import org.scalatest.FunSuite
 
 class AMSMetadataProviderTest extends FunSuite {
 
   test("testFromTimelineMetricKey") {
-    val timelineMetricKeys: java.util.Set[TimelineMetricKey] = new java.util.HashSet[TimelineMetricKey]()
+    val timelineMetricKeys: java.util.Set[java.util.Map[String, String]] = new java.util.HashSet[java.util.Map[String, String]]()
 
     val uuid: Array[Byte] = Array.empty[Byte]
 
     for (i <- 1 to 3) {
-      val key: TimelineMetricKey = new TimelineMetricKey("M" + i, "App", null, "H", uuid)
-      timelineMetricKeys.add(key)
+      val keyMap: java.util.Map[String, String] = new util.HashMap[String, String]()
+      keyMap.put("metricName", "M" + i)
+      keyMap.put("appId", "App")
+      keyMap.put("hostname", "H")
+      keyMap.put("uuid", new String(uuid))
+      timelineMetricKeys.add(keyMap)
     }
 
     val aMSMetadataProvider : ADMetadataProvider = new ADMetadataProvider(new MetricCollectorConfiguration)
 
-    val metricKeys : Set[MetricKey] = aMSMetadataProvider.fromTimelineMetricKey(timelineMetricKeys)
+    val metricKeys : Set[MetricKey] = aMSMetadataProvider.getMetricKeys(timelineMetricKeys)
     assert(metricKeys.size == 3)
   }
 
+
 }
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/metadata/MetricSourceDefinitionTest.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/metadata/MetricSourceDefinitionTest.scala
index 0149673..c4d4dbc 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/metadata/MetricSourceDefinitionTest.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/metadata/MetricSourceDefinitionTest.scala
@@ -20,6 +20,10 @@ package org.apache.ambari.metrics.adservice.metadata
 import org.apache.commons.lang.SerializationUtils
 import org.scalatest.FunSuite
 
+import com.fasterxml.jackson.databind.ObjectMapper
+import com.fasterxml.jackson.module.scala.DefaultScalaModule
+import org.apache.ambari.metrics.adservice.app.ADServiceScalaModule
+
 class MetricSourceDefinitionTest extends FunSuite {
 
   test("createNewMetricSourceDefinition") {
@@ -65,7 +69,12 @@ class MetricSourceDefinitionTest extends FunSuite {
   }
 
   test("serializeDeserialize") {
-    val msd : MetricSourceDefinition = new MetricSourceDefinition("testDefinition", "testAppId", MetricSourceDefinitionType.API)
+
+    val msd : MetricSourceDefinition = new MetricSourceDefinition("testDefinition", "A1", MetricSourceDefinitionType.API)
+    msd.hosts = List("h1")
+    msd.addMetricDefinition(MetricDefinition("M1", null, List("h2")))
+    msd.addMetricDefinition(MetricDefinition("M1", "A2", null))
+
     val msdByteArray: Array[Byte] = SerializationUtils.serialize(msd)
     assert(msdByteArray.nonEmpty)
 
@@ -73,5 +82,10 @@ class MetricSourceDefinitionTest extends FunSuite {
     assert(msd2 != null)
     assert(msd == msd2)
 
+    val mapper : ObjectMapper = new ObjectMapper()
+    mapper.registerModule(new ADServiceScalaModule)
+
+    assert(mapper.writeValueAsString(msd).nonEmpty)
+
   }
 }
diff --git a/ambari-metrics/ambari-metrics-common/pom.xml b/ambari-metrics/ambari-metrics-common/pom.xml
index 34bf5cb..af68ed9 100644
--- a/ambari-metrics/ambari-metrics-common/pom.xml
+++ b/ambari-metrics/ambari-metrics-common/pom.xml
@@ -27,12 +27,6 @@
   <artifactId>ambari-metrics-common</artifactId>
   <name>Ambari Metrics Common</name>
 
-  <properties>
-    <hadoop.version>2.7.3.2.6.0.3-8</hadoop.version>
-    <hbase.version>1.1.2.2.6.0.3-8</hbase.version>
-    <phoenix.version>4.7.0.2.6.0.3-8</phoenix.version>
-  </properties>
-
   <build>
     <plugins>
       <plugin>
@@ -143,45 +137,6 @@
 
   <dependencies>
     <dependency>
-      <groupId>org.apache.phoenix</groupId>
-      <artifactId>phoenix-core</artifactId>
-      <version>${phoenix.version}</version>
-      <exclusions>
-        <exclusion>
-          <groupId>org.apache.hadoop</groupId>
-          <artifactId>hadoop-common</artifactId>
-        </exclusion>
-        <exclusion>
-          <groupId>org.apache.hadoop</groupId>
-          <artifactId>hadoop-annotations</artifactId>
-        </exclusion>
-      </exclusions>
-    </dependency>
-    <dependency>
-      <groupId>org.apache.hadoop</groupId>
-      <artifactId>hadoop-common</artifactId>
-      <version>${hadoop.version}</version>
-      <scope>provided</scope>
-      <exclusions>
-        <exclusion>
-          <groupId>commons-el</groupId>
-          <artifactId>commons-el</artifactId>
-        </exclusion>
-        <exclusion>
-          <groupId>tomcat</groupId>
-          <artifactId>jasper-runtime</artifactId>
-        </exclusion>
-        <exclusion>
-          <groupId>tomcat</groupId>
-          <artifactId>jasper-compiler</artifactId>
-        </exclusion>
-        <exclusion>
-          <groupId>org.mortbay.jetty</groupId>
-          <artifactId>jsp-2.1-jetty</artifactId>
-        </exclusion>
-      </exclusions>
-    </dependency>
-    <dependency>
       <groupId>net.sf.ehcache</groupId>
       <artifactId>ehcache</artifactId>
       <version>2.10.0</version>
diff --git a/ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/TimelineMetricKey.java b/ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/TimelineMetricKey.java
deleted file mode 100644
index 7619811..0000000
--- a/ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/TimelineMetricKey.java
+++ /dev/null
@@ -1,59 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.metrics2.sink.timeline;
-
-import org.apache.commons.lang.StringUtils;
-
-public class TimelineMetricKey {
-  public String metricName;
-  public String appId;
-  public String instanceId = null;
-  public String hostName;
-  public byte[] uuid;
-
-  public TimelineMetricKey(String metricName, String appId, String instanceId, String hostName, byte[] uuid) {
-    this.metricName = metricName;
-    this.appId = appId;
-    this.instanceId = instanceId;
-    this.hostName = hostName;
-    this.uuid = uuid;
-  }
-
-  @Override
-  public boolean equals(Object o) {
-    if (this == o) return true;
-    if (o == null || getClass() != o.getClass()) return false;
-
-    TimelineMetricKey that = (TimelineMetricKey) o;
-
-    if (!metricName.equals(that.metricName)) return false;
-    if (!appId.equals(that.appId)) return false;
-    if (!hostName.equals(that.hostName)) return false;
-    return (StringUtils.isNotEmpty(instanceId) ? instanceId.equals(that.instanceId) : StringUtils.isEmpty(that.instanceId));
-  }
-
-  @Override
-  public int hashCode() {
-    int result = metricName.hashCode();
-    result = 31 * result + (appId != null ? appId.hashCode() : 0);
-    result = 31 * result + (instanceId != null ? instanceId.hashCode() : 0);
-    result = 31 * result + (hostName != null ? hostName.hashCode() : 0);
-    return result;
-  }
-
-}
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/HBaseTimelineMetricsService.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/HBaseTimelineMetricsService.java
index a96be30..c2e9448 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/HBaseTimelineMetricsService.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/HBaseTimelineMetricsService.java
@@ -44,6 +44,7 @@ import java.util.concurrent.TimeUnit;
 import java.util.regex.Matcher;
 import java.util.regex.Pattern;
 
+import org.apache.commons.collections.CollectionUtils;
 import org.apache.commons.collections.MapUtils;
 import org.apache.commons.lang.StringUtils;
 import org.apache.commons.logging.Log;
@@ -54,7 +55,6 @@ import org.apache.hadoop.metrics2.sink.timeline.ContainerMetric;
 import org.apache.hadoop.metrics2.sink.timeline.MetricHostAggregate;
 import org.apache.hadoop.metrics2.sink.timeline.Precision;
 import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
-import org.apache.hadoop.metrics2.sink.timeline.TimelineMetricKey;
 import org.apache.hadoop.metrics2.sink.timeline.TimelineMetricMetadata;
 import org.apache.hadoop.metrics2.sink.timeline.TimelineMetricWithAggregatedValues;
 import org.apache.hadoop.metrics2.sink.timeline.TimelineMetrics;
@@ -496,30 +496,44 @@ public class HBaseTimelineMetricsService extends AbstractService implements Time
    * @param metricName
    * @param appId
    * @param instanceId
-   * @param hostname
+   * @param hosts
    * @return
    * @throws SQLException
    * @throws IOException
    */
   @Override
-  public Set<TimelineMetricKey> getTimelineMetricKey(String metricName, String appId, String instanceId, String hostname) throws SQLException, IOException {
+  public Set<Map<String, String>> getTimelineMetricKeys(String metricName, String appId, String instanceId, List<String> hosts)
+    throws SQLException, IOException {
+    Set<Map<String, String>> timelineMetricKeys = new HashSet<>();
 
-    if (StringUtils.isEmpty(hostname)) {
-      Set<String> hosts = new HashSet<>();
+    if (CollectionUtils.isEmpty(hosts)) {
+      Set<String> hostsFromMetadata = new HashSet<>();
       for (String host : metricMetadataManager.getHostedAppsCache().keySet()) {
         if (metricMetadataManager.getHostedAppsCache().get(host).getHostedApps().contains(appId)) {
-          hosts.add(host);
+          hostsFromMetadata.add(host);
         }
       }
-      Set<TimelineMetricKey> timelineMetricKeys = new HashSet<>();
-      for (String host : hosts) {
+      for (String host : hostsFromMetadata) {
         byte[] uuid = metricMetadataManager.getUuid(metricName, appId, instanceId, host);
-        timelineMetricKeys.add(new TimelineMetricKey(metricName, appId, instanceId, host, uuid));
+        Map<String, String> keyMap = new HashMap<>();
+        keyMap.put("metricName", metricName);
+        keyMap.put("appId", appId);
+        keyMap.put("hostname", host);
+        keyMap.put("uuid", new String(uuid));
+        timelineMetricKeys.add(keyMap);
       }
       return timelineMetricKeys;
     } else {
-      byte[] uuid = metricMetadataManager.getUuid(metricName, appId, instanceId, hostname);
-      return Collections.singleton(new TimelineMetricKey(metricName, appId, instanceId, hostname, uuid));
+      for (String host : hosts) {
+        byte[] uuid = metricMetadataManager.getUuid(metricName, appId, instanceId, host);
+        Map<String, String> keyMap = new HashMap<>();
+        keyMap.put("metricName", metricName);
+        keyMap.put("appId", appId);
+        keyMap.put("hostname", host);
+        keyMap.put("uuid", new String(uuid));
+        timelineMetricKeys.add(keyMap);
+      }
+      return timelineMetricKeys;
     }
   }
 
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/PhoenixHBaseAccessor.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/PhoenixHBaseAccessor.java
index 65b4614..0626e8e 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/PhoenixHBaseAccessor.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/PhoenixHBaseAccessor.java
@@ -120,6 +120,7 @@ import org.apache.hadoop.hbase.HColumnDescriptor;
 import org.apache.hadoop.hbase.HTableDescriptor;
 import org.apache.hadoop.hbase.client.Durability;
 import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.util.RetryCounter;
 import org.apache.hadoop.hbase.util.RetryCounterFactory;
 import org.apache.hadoop.metrics2.sink.timeline.ContainerMetric;
 import org.apache.hadoop.metrics2.sink.timeline.MetricClusterAggregate;
@@ -139,8 +140,8 @@ import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.discovery.TimelineMetricMetadataKey;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.discovery.TimelineMetricMetadataManager;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.Condition;
-import org.apache.hadoop.metrics2.sink.timeline.query.DefaultPhoenixDataSource;
-import org.apache.hadoop.metrics2.sink.timeline.query.PhoenixConnectionProvider;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.DefaultPhoenixDataSource;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixConnectionProvider;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.SplitByMetricNamesCondition;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.sink.ExternalMetricsSink;
@@ -210,7 +211,7 @@ public class PhoenixHBaseAccessor {
   private HashMap<String, String> tableTTL = new HashMap<>();
 
   private final TimelineMetricConfiguration configuration;
-  private List<InternalMetricsSource> rawMetricsSources;
+  private List<InternalMetricsSource> rawMetricsSources = new ArrayList<>();
 
   public PhoenixHBaseAccessor(PhoenixConnectionProvider dataSource) {
     this(TimelineMetricConfiguration.getInstance(), dataSource);
@@ -459,6 +460,23 @@ public class PhoenixHBaseAccessor {
     return mapper.readValue(json, metricValuesTypeRef);
   }
 
+  private Connection getConnectionRetryingOnException()
+    throws SQLException, InterruptedException {
+    RetryCounter retryCounter = retryCounterFactory.create();
+    while (true) {
+      try{
+        return getConnection();
+      } catch (SQLException e) {
+        if(!retryCounter.shouldRetry()){
+          LOG.error("HBaseAccessor getConnection failed after "
+            + retryCounter.getMaxAttempts() + " attempts");
+          throw e;
+        }
+      }
+      retryCounter.sleepUntilNextRetry();
+    }
+  }
+
   /**
    * Get JDBC connection to HBase store. Assumption is that the hbase
    * configuration is present on the classpath and loaded by the caller into
@@ -491,7 +509,7 @@ public class PhoenixHBaseAccessor {
 
     try {
       LOG.info("Initializing metrics schema...");
-      conn = dataSource.getConnectionRetryingOnException(retryCounterFactory);
+      conn = getConnectionRetryingOnException();
       stmt = conn.createStatement();
 
       // Metadata
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricStore.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricStore.java
index f00bd91..349ef83 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricStore.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TimelineMetricStore.java
@@ -21,7 +21,6 @@ import org.apache.hadoop.metrics2.sink.timeline.AggregationResult;
 import org.apache.hadoop.metrics2.sink.timeline.ContainerMetric;
 import org.apache.hadoop.metrics2.sink.timeline.Precision;
 import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
-import org.apache.hadoop.metrics2.sink.timeline.TimelineMetricKey;
 import org.apache.hadoop.metrics2.sink.timeline.TimelineMetricMetadata;
 import org.apache.hadoop.metrics2.sink.timeline.TimelineMetrics;
 import org.apache.hadoop.metrics2.sink.timeline.TopNConfig;
@@ -111,6 +110,6 @@ public interface TimelineMetricStore {
 
   TimelineMetrics getAnomalyMetrics(String method, long startTime, long endTime, Integer limit) throws SQLException;
 
-  Set<TimelineMetricKey> getTimelineMetricKey(String metricName, String appId, String instanceId, String hostname) throws SQLException, IOException;
+  Set<Map<String, String>> getTimelineMetricKeys(String metricName, String appId, String instanceId,  List<String> hosts) throws SQLException, IOException;
 
 }
diff --git a/ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/query/ConnectionProvider.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/ConnectionProvider.java
similarity index 84%
rename from ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/query/ConnectionProvider.java
rename to ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/ConnectionProvider.java
index 72e5fb5..391af27 100644
--- a/ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/query/ConnectionProvider.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/ConnectionProvider.java
@@ -15,7 +15,7 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.hadoop.metrics2.sink.timeline.query;
+package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query;
 
 
 import org.apache.hadoop.hbase.util.RetryCounterFactory;
@@ -28,5 +28,4 @@ import java.sql.SQLException;
  */
 public interface ConnectionProvider {
   public Connection getConnection() throws SQLException;
-  public Connection getConnectionRetryingOnException(RetryCounterFactory retryCounterFactory) throws SQLException, InterruptedException;
 }
diff --git a/ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/query/DefaultPhoenixDataSource.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/DefaultPhoenixDataSource.java
similarity index 84%
rename from ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/query/DefaultPhoenixDataSource.java
rename to ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/DefaultPhoenixDataSource.java
index a28a433..67afe6b 100644
--- a/ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/query/DefaultPhoenixDataSource.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/DefaultPhoenixDataSource.java
@@ -15,7 +15,7 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.hadoop.metrics2.sink.timeline.query;
+package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query;
 
 
 import org.apache.commons.logging.Log;
@@ -89,20 +89,4 @@ public class DefaultPhoenixDataSource implements PhoenixConnectionProvider {
     }
   }
 
-  public Connection getConnectionRetryingOnException(RetryCounterFactory retryCounterFactory)
-    throws SQLException, InterruptedException {
-    RetryCounter retryCounter = retryCounterFactory.create();
-    while (true) {
-      try{
-        return getConnection();
-      } catch (SQLException e) {
-        if(!retryCounter.shouldRetry()){
-          LOG.error("HBaseAccessor getConnection failed after "
-            + retryCounter.getMaxAttempts() + " attempts");
-          throw e;
-        }
-      }
-      retryCounter.sleepUntilNextRetry();
-    }
-  }
 }
diff --git a/ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/query/PhoenixConnectionProvider.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/PhoenixConnectionProvider.java
similarity index 92%
rename from ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/query/PhoenixConnectionProvider.java
rename to ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/PhoenixConnectionProvider.java
index 194c769..cacbcfb 100644
--- a/ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/query/PhoenixConnectionProvider.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/PhoenixConnectionProvider.java
@@ -1,4 +1,4 @@
-package org.apache.hadoop.metrics2.sink.timeline.query;
+package org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query;
 
 import org.apache.hadoop.hbase.client.HBaseAdmin;
 
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/TimelineWebServices.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/TimelineWebServices.java
index db35686..dc401e6 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/TimelineWebServices.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/TimelineWebServices.java
@@ -20,6 +20,7 @@ package org.apache.hadoop.yarn.server.applicationhistoryservice.webapp;
 
 import com.google.inject.Inject;
 import com.google.inject.Singleton;
+import org.apache.commons.lang.StringUtils;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.classification.InterfaceAudience.Public;
@@ -27,7 +28,6 @@ import org.apache.hadoop.classification.InterfaceStability.Unstable;
 import org.apache.hadoop.metrics2.sink.timeline.AggregationResult;
 import org.apache.hadoop.metrics2.sink.timeline.ContainerMetric;
 import org.apache.hadoop.metrics2.sink.timeline.PrecisionLimitExceededException;
-import org.apache.hadoop.metrics2.sink.timeline.TimelineMetricKey;
 import org.apache.hadoop.metrics2.sink.timeline.TimelineMetricMetadata;
 import org.apache.hadoop.metrics2.sink.timeline.TopNConfig;
 import org.apache.hadoop.yarn.api.records.timeline.TimelineEntities;
@@ -69,7 +69,9 @@ import javax.xml.bind.annotation.XmlRootElement;
 import java.io.IOException;
 import java.sql.SQLException;
 import java.util.ArrayList;
+import java.util.Arrays;
 import java.util.Collection;
+import java.util.Collections;
 import java.util.EnumSet;
 import java.util.HashSet;
 import java.util.List;
@@ -516,7 +518,7 @@ public class TimelineWebServices {
   @GET
   @Path("/metrics/metadata/key")
   @Produces({ MediaType.APPLICATION_JSON })
-  public Set<TimelineMetricKey> getTimelineMetricKey(
+  public Set<Map<String, String>> getTimelineMetricKey(
     @Context HttpServletRequest req,
     @Context HttpServletResponse res,
     @QueryParam("metricName") String metricName,
@@ -527,7 +529,11 @@ public class TimelineWebServices {
     init(res);
 
     try {
-      return timelineMetricStore.getTimelineMetricKey(metricName, appId, instanceId, hostname);
+      if (StringUtils.isEmpty(hostname)) {
+        return timelineMetricStore.getTimelineMetricKeys(metricName, appId, instanceId, Collections.emptyList());
+      } else {
+        return timelineMetricStore.getTimelineMetricKeys(metricName, appId, instanceId, Arrays.asList(StringUtils.split(hostname, ",")));
+      }
     } catch (Exception e) {
       throw new WebApplicationException(e, Response.Status.INTERNAL_SERVER_ERROR);
     }
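
For illustration, the reworked keys endpoint can be exercised with a JAX-RS client along these lines (collector host/port and the /ws/v1/timeline root path are assumptions; the metric and app names are examples):

    import javax.ws.rs.client.ClientBuilder
    import javax.ws.rs.core.MediaType.APPLICATION_JSON

    val keysJson = ClientBuilder.newClient()
      .target("http://localhost:6188/ws/v1/timeline/metrics/metadata/key")
      .queryParam("metricName", "regionserver.Server.readRequestCount")
      .queryParam("appId", "hbase")
      .queryParam("hostname", "h1,h2") // comma-separated; split server-side
      .request().accept(APPLICATION_JSON).buildGet().invoke(classOf[String])
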
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestApplicationHistoryServer.java b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestApplicationHistoryServer.java
index 7b70a80..03205e7 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestApplicationHistoryServer.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestApplicationHistoryServer.java
@@ -29,7 +29,7 @@ import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.PhoenixHBaseAccessor;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricConfiguration;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.availability.MetricCollectorHAController;
-import org.apache.hadoop.metrics2.sink.timeline.query.DefaultPhoenixDataSource;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.DefaultPhoenixDataSource;
 import org.apache.zookeeper.ClientCnxn;
 import org.easymock.EasyMock;
 import org.junit.After;
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/AbstractMiniHBaseClusterTest.java b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/AbstractMiniHBaseClusterTest.java
index 9c55305..741bb3c 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/AbstractMiniHBaseClusterTest.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/AbstractMiniHBaseClusterTest.java
@@ -45,7 +45,7 @@ import org.apache.hadoop.hbase.util.RetryCounterFactory;
 import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
 import org.apache.hadoop.metrics2.sink.timeline.TimelineMetrics;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.aggregators.AggregatorUtils;
-import org.apache.hadoop.metrics2.sink.timeline.query.PhoenixConnectionProvider;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixConnectionProvider;
 import org.apache.hadoop.yarn.util.timeline.TimelineUtils;
 import org.apache.log4j.Level;
 import org.apache.log4j.Logger;
@@ -224,17 +224,6 @@ public abstract class AbstractMiniHBaseClusterTest extends BaseTest {
             return connection;
           }
 
-          @Override
-          public Connection getConnectionRetryingOnException(RetryCounterFactory retryCounterFactory) throws SQLException, InterruptedException {
-            Connection connection = null;
-            try {
-              connection = DriverManager.getConnection(getUrl());
-            } catch (SQLException e) {
-              LOG.warn("Unable to connect to HBase store using Phoenix.", e);
-            }
-            return connection;
-          }
-
         });
   }
 
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/PhoenixHBaseAccessorTest.java b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/PhoenixHBaseAccessorTest.java
index 5d81faa..50ff656 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/PhoenixHBaseAccessorTest.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/PhoenixHBaseAccessorTest.java
@@ -33,7 +33,7 @@ import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.discovery.TimelineMetricMetadataManager;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.Condition;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.DefaultCondition;
-import org.apache.hadoop.metrics2.sink.timeline.query.PhoenixConnectionProvider;
+import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixConnectionProvider;
 import org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.query.PhoenixTransactSQL;
 import org.apache.phoenix.exception.PhoenixIOException;
 import org.easymock.EasyMock;
@@ -96,10 +96,6 @@ public class PhoenixHBaseAccessorTest {
         return null;
       }
 
-      @Override
-      public Connection getConnectionRetryingOnException(RetryCounterFactory retryCounterFactory) throws SQLException, InterruptedException {
-        return null;
-      }
       };
 
     accessor = new PhoenixHBaseAccessor(connectionProvider);
@@ -256,11 +252,6 @@ public class PhoenixHBaseAccessorTest {
       public Connection getConnection() throws SQLException {
         return connection;
       }
-
-      @Override
-      public Connection getConnectionRetryingOnException(RetryCounterFactory retryCounterFactory) throws SQLException, InterruptedException {
-        return connection;
-      }
     };
 
     accessor = new PhoenixHBaseAccessor(connectionProvider);
diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TestTimelineMetricStore.java b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TestTimelineMetricStore.java
index 42175a7..de24c68 100644
--- a/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TestTimelineMetricStore.java
+++ b/ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TestTimelineMetricStore.java
@@ -21,7 +21,6 @@ import org.apache.hadoop.metrics2.sink.timeline.AggregationResult;
 import org.apache.hadoop.metrics2.sink.timeline.ContainerMetric;
 import org.apache.hadoop.metrics2.sink.timeline.Precision;
 import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
-import org.apache.hadoop.metrics2.sink.timeline.TimelineMetricKey;
 import org.apache.hadoop.metrics2.sink.timeline.TimelineMetricMetadata;
 import org.apache.hadoop.metrics2.sink.timeline.TimelineMetrics;
 import org.apache.hadoop.metrics2.sink.timeline.TopNConfig;
@@ -127,7 +126,7 @@ public class TestTimelineMetricStore implements TimelineMetricStore {
   }
 
   @Override
-  public Set<TimelineMetricKey> getTimelineMetricKey(String metricName, String appId, String instanceId, String hostname) throws SQLException, IOException {
+  public Set<Map<String, String>> getTimelineMetricKeys(String metricName, String appId, String instanceId, List<String> hosts) throws SQLException, IOException {
     return Collections.emptySet();
   }
 


[ambari] 30/39: AMBARI-22567 : Integrate Spark lifecycle management into AMS AD Manager. (avijayan)

Posted by av...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

avijayan pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/ambari.git

commit d46abc7ac19c4316463214022b28c95d63058764
Author: Aravindan Vijayan <av...@hortonworks.com>
AuthorDate: Thu Nov 30 16:07:56 2017 -0800

    AMBARI-22567 : Integrate Spark lifecycle management into AMS AD Manager. (avijayan)
---
 ...trics-admanager.sh => ambari-metrics-admanager} |  75 ++++++++---
 .../resources/config.yml => conf/unix/config.yaml} |   9 +-
 .../pom.xml                                        |  44 ++++--
 .../src/main/assemblies/empty.xml                  |  21 +++
 .../adservice/app/AnomalyDetectionAppConfig.scala  |  10 ++
 .../MetricDefinitionServiceConfiguration.scala     |   3 -
 .../configuration/SparkConfiguration.scala         |  39 ++++++
 .../adservice/db/PhoenixAnomalyStoreAccessor.scala |   5 +-
 .../PointInTimeAnomalyInstance.scala               |   4 +-
 .../adservice/{common => model}/Range.scala        |   2 +-
 .../adservice/{common => model}/Season.scala       |   4 +-
 .../adservice/{common => model}/SeasonType.scala   |   2 +-
 .../adservice/{common => model}/TimeRange.scala    |   2 +-
 .../trend => model}/TrendAnomalyInstance.scala     |   4 +-
 .../config.yml => test/resources/config.yaml}      |  17 +--
 .../app/AnomalyDetectionAppConfigTest.scala        |  19 +--
 .../adservice/app/DefaultADResourceSpecTest.scala  |   2 +-
 .../adservice/{common => model}/RangeTest.scala    |   7 +-
 .../adservice/{common => model}/SeasonTest.scala   |  19 +--
 ambari-metrics/ambari-metrics-assembly/pom.xml     | 148 +++++++++++++++++++++
 .../src/main/assembly/anomaly-detection.xml        |  60 +++++++++
 .../package/rpm/anomaly-detection/postinstall.sh   |  27 ++++
 ambari-metrics/pom.xml                             |   4 +-
 .../0.1.0/configuration/ams-admanager-config.xml   |   4 +
 .../0.1.0/configuration/ams-admanager-env.xml      |   6 +-
 .../0.1.0/configuration/ams-admanager-log4j.xml    |   2 +-
 .../configuration/ams-admanager-spark-env.xml      | 129 ++++++++++++++++++
 .../AMBARI_METRICS/0.1.0/metainfo.xml              |   1 +
 .../AMBARI_METRICS/0.1.0/package/scripts/ams.py    |  14 +-
 .../0.1.0/package/scripts/ams_admanager.py         |   4 +-
 .../AMBARI_METRICS/0.1.0/package/scripts/params.py |  22 ++-
 .../0.1.0/package/scripts/status_params.py         |   2 +-
 32 files changed, 613 insertions(+), 98 deletions(-)

diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/conf/unix/ambari-metrics-admanager.sh b/ambari-metrics/ambari-metrics-anomaly-detection-service/conf/unix/ambari-metrics-admanager
similarity index 70%
rename from ambari-metrics/ambari-metrics-anomaly-detection-service/conf/unix/ambari-metrics-admanager.sh
rename to ambari-metrics/ambari-metrics-anomaly-detection-service/conf/unix/ambari-metrics-admanager
index f1a1ae3..98b7606 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/conf/unix/ambari-metrics-admanager.sh
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/conf/unix/ambari-metrics-admanager
@@ -14,13 +14,44 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific
 
-PIDFILE=/var/run//var/run/ambari-metrics-anomaly-detection/ambari-metrics-admanager.pid
+PIDFILE=/var/run/ambari-metrics-anomaly-detection/ambari-metrics-admanager.pid
 OUTFILE=/var/log/ambari-metrics-anomaly-detection/ambari-metrics-admanager.out
 
 CONF_DIR=/etc/ambari-metrics-anomaly-detection/conf
 DAEMON_NAME=ams_admanager
+SPARK_HOME=/usr/lib/ambari-metrics-anomaly-detection/spark
 
-STOP_TIMEOUT=5
+SPARK_MASTER_PID=/var/run/ambari-metrics-anomaly-detection/spark-ams-org.apache.spark.deploy.master.Master.pid
+
+STOP_TIMEOUT=10
+
+function spark_daemon
+{
+    local cmd=$1
+    local pid
+
+    if [[ "${cmd}" == "start" ]]
+      then
+
+        ${SPARK_HOME}/sbin/start-master.sh
+        sleep 2
+        master_pid=$(cat "$SPARK_MASTER_PID")
+        if [ -z "`ps ax | grep -w ${master_pid} | grep org.apache.spark.deploy.master.Master`" ]; then
+          echo "ERROR: Spark Master start failed. For more details, see outfile in log directory."
+          exit -1
+        fi
+
+        ${SPARK_HOME}/sbin/start-slave.sh spark://${SPARK_MASTER_HOST}:${SPARK_MASTER_PORT}
+    elif [[ "${cmd}" == "stop" ]]
+      then
+        ${SPARK_HOME}/sbin/stop-slave.sh
+        ${SPARK_HOME}/sbin/stop-master.sh
+    else
+        pid=${SPARK_MASTER_PID}
+        daemon_status "${pid}"
+    fi
+
+}
 
 function write_pidfile
 {
@@ -55,22 +86,6 @@ function java_setup
 
 function daemon_status()
 {
-  #
-  # LSB 4.1.0 compatible status command (1)
-  #
-  # 0 = program is running
-  # 1 = dead, but still a pid (2)
-  # 2 = (not used by us)
-  # 3 = not running
-  #
-  # 1 - this is not an endorsement of the LSB
-  #
-  # 2 - technically, the specification says /var/run/pid, so
-  #     we should never return this value, but we're giving
-  #     them the benefit of a doubt and returning 1 even if
-  #     our pid is not in in /var/run .
-  #
-
   local pidfile="$1"
   shift
 
@@ -90,6 +105,12 @@ function start()
 {
   java_setup
 
+
+  if [[ "${AMS_AD_STANDALONE_SPARK_ENABLED}" == "true" || "${AMS_AD_STANDALONE_SPARK_ENABLED}" == "True" ]]
+  then
+    spark_daemon "start"
+  fi
+
   daemon_status "${PIDFILE}"
   if [[ $? == 0  ]]; then
     echo "AMS AD Manager is running as process $(cat "${PIDFILE}"). Exiting" | tee -a $STARTUPFILE
@@ -144,9 +165,12 @@ function stop()
       rm -f "${pidfile}" >/dev/null 2>&1
     fi
   fi
+
+  # Always try to stop Spark: if the user has switched the spark mode to 'yarn', the standalone-enabled flag becomes obsolete.
+  spark_daemon "stop"
 }
 
-# execute ams-env.sh
+# execute ams-admanager-env.sh
 if [[ -f "${CONF_DIR}/ams-admanager-env.sh" ]]; then
   . "${CONF_DIR}/ams-admanager-env.sh"
 else
@@ -154,12 +178,21 @@ else
   exit 1
 fi
 
-# set these env variables only if they were not set by ams-env.sh
+if [[ -f "${CONF_DIR}/ams-admanager-spark-env.sh" ]]; then
+  . "${CONF_DIR}/ams-admanager-spark-env.sh"
+else
+  echo "ERROR: Cannot execute ${CONF_DIR}/ams-admanager-spark-env.sh." 2>&1
+  exit 1
+fi
+
+# set these env variables only if they were not set by ams-admanager-env.sh
 : ${AMS_AD_LOG_DIR:=/var/log/ambari-metrics-anomaly-detection}
+: ${AMS_AD_STANDALONE_SPARK_ENABLED:=true}
 
 # set pid dir path
 if [[ -n "${AMS_AD_PID_DIR}" ]]; then
-  PIDFILE=${AMS_AD_PID_DIR}/admanager.pid
+  PIDFILE=${AMS_AD_PID_DIR}/ambari-metrics-admanager.pid
+  SPARK_MASTER_PID=${AMS_AD_PID_DIR}/spark-${USER}-org.apache.spark.deploy.master.Master-1.pid
 fi
 
 # set out file path
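
In operation, the renamed control script acts as a daemon wrapper, presumably invoked as ambari-metrics-admanager start / stop via a dispatch block not shown in this hunk; when AMS_AD_STANDALONE_SPARK_ENABLED is true, start() also brings up the local Spark master and worker via spark_daemon before launching the AD Manager, and stop() always attempts to bring them down.
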
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/resources/config.yml b/ambari-metrics/ambari-metrics-anomaly-detection-service/conf/unix/config.yaml
similarity index 91%
copy from ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/resources/config.yml
copy to ambari-metrics/ambari-metrics-anomaly-detection-service/conf/unix/config.yaml
index 7de06b4..85e4004 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/resources/config.yml
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/conf/unix/config.yaml
@@ -38,9 +38,8 @@ metricDefinitionDB:
   # raise an error as soon as it detects an internal corruption
   performParanoidChecks: false
   # Path to Level DB directory
-  dbDirPath: /var/lib/ambari-metrics-anomaly-detection/
+  dbDirPath: /tmp/ambari-metrics-anomaly-detection/db
 
-#subsystemService:
-#  spark:
-#  pointInTime:
-#  trend:
\ No newline at end of file
+spark:
+  mode: standalone
+  masterHostPort: localhost:7077
\ No newline at end of file
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/pom.xml b/ambari-metrics/ambari-metrics-anomaly-detection-service/pom.xml
index c6927dd..50d7ef6 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/pom.xml
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/pom.xml
@@ -36,7 +36,7 @@
     <scala.binary.version>2.11</scala.binary.version>
     <jackson.version>2.9.1</jackson.version>
     <dropwizard.version>1.2.0</dropwizard.version>
-    <spark.version>2.2.0</spark.version>
+    <spark.version>2.1.1</spark.version>
     <hadoop.version>2.7.3.2.6.0.3-8</hadoop.version>
     <hbase.version>1.1.2.2.6.0.3-8</hbase.version>
     <phoenix.version>4.7.0.2.6.0.3-8</phoenix.version>
@@ -59,7 +59,7 @@
   </pluginRepositories>
 
   <build>
-    <finalName>${project.artifactId}</finalName>
+    <finalName>${project.artifactId}-${project.version}</finalName>
     <resources>
       <resource>
         <filtering>true</filtering>
@@ -157,14 +157,6 @@
               </excludes>
             </filter>
             <filter>
-              <artifact>org.apache.phoenix:phoenix-core</artifact>
-              <excludes>
-                <exclude>org/joda/time/**</exclude>
-                <exclude>com/codahale/metrics/**</exclude>
-                <exclude>com/google/common/collect/**</exclude>
-              </excludes>
-            </filter>
-            <filter>
               <artifact>*:*</artifact>
               <excludes>
                 <exclude>com/sun/jersey/**</exclude>
@@ -191,6 +183,38 @@
           </execution>
         </executions>
       </plugin>
+      <plugin>
+        <groupId>org.apache.maven.plugins</groupId>
+        <artifactId>maven-antrun-plugin</artifactId>
+        <version>1.7</version>
+        <executions>
+          <execution>
+            <phase>generate-resources</phase>
+            <goals>
+              <goal>run</goal>
+            </goals>
+            <configuration>
+              <target name="Download Spark">
+                <mkdir dir="${project.build.directory}/embedded"/>
+                <get
+                        src="${spark.tar}"
+                        dest="${project.build.directory}/embedded/spark.tar.gz"
+                        usetimestamp="true"
+                />
+                <untar
+                        src="${project.build.directory}/embedded/spark.tar.gz"
+                        dest="${project.build.directory}/embedded"
+                        compression="gzip"
+                />
+                <move
+                        todir="${project.build.directory}/embedded/spark" >
+                        <fileset dir="${project.build.directory}/embedded/${spark.folder}" includes="**"/>
+                </move>
+              </target>
+            </configuration>
+          </execution>
+        </executions>
+      </plugin>
     </plugins>
   </build>
 
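The antrun target above pulls the Spark distribution down at build time and unpacks it under target/embedded/spark. A minimal sketch of the same steps in JVM terms, where sparkTarUrl and the unpacked folder name stand in for the ${spark.tar} and ${spark.folder} Maven properties (both placeholders here, not the real build values):

    import java.io.File
    import java.net.URL
    import scala.sys.process._

    val sparkTarUrl = "https://example.invalid/spark-2.1.0-bin.tgz" // stands in for ${spark.tar}
    val embedded = new File("target/embedded")
    embedded.mkdirs()
    // <get usetimestamp="true">: fetch the tarball into target/embedded
    (new URL(sparkTarUrl) #> new File(embedded, "spark.tar.gz")).!
    // <untar compression="gzip">: unpack next to it
    Seq("tar", "-xzf", new File(embedded, "spark.tar.gz").getPath, "-C", embedded.getPath).!
    // <move>: normalize the versioned folder name to a stable "spark" directory
    new File(embedded, "spark-2.1.0-bin").renameTo(new File(embedded, "spark"))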
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/assemblies/empty.xml b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/assemblies/empty.xml
new file mode 100644
index 0000000..35738b1
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/assemblies/empty.xml
@@ -0,0 +1,21 @@
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one
+  or more contributor license agreements.  See the NOTICE file
+  distributed with this work for additional information
+  regarding copyright ownership.  The ASF licenses this file
+  to you under the Apache License, Version 2.0 (the
+  "License"); you may not use this file except in compliance
+  with the License.  You may obtain a copy of the License at
+  
+       http://www.apache.org/licenses/LICENSE-2.0
+  
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<assembly>
+    <id>empty</id>
+    <formats/>
+</assembly>
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionAppConfig.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionAppConfig.scala
index f9ed4b2..58efa97 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionAppConfig.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionAppConfig.scala
@@ -53,6 +53,12 @@ class AnomalyDetectionAppConfig extends Configuration {
   @Valid
   private val metricDefinitionDBConfiguration = new MetricDefinitionDBConfiguration
 
+  /**
+    * Spark configurations
+    */
+  @Valid
+  private val sparkConfiguration = new SparkConfiguration
+
   /*
    AMS HBase Conf
     */
@@ -76,4 +82,8 @@ class AnomalyDetectionAppConfig extends Configuration {
 
   @JsonProperty("metricDefinitionDB")
   def getMetricDefinitionDBConfiguration: MetricDefinitionDBConfiguration = metricDefinitionDBConfiguration
+
+  @JsonProperty("spark")
+  def getSparkConfiguration: SparkConfiguration = sparkConfiguration
+
 }
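The new accessor is what lets the service hand its Spark settings to a driver. A minimal sketch of that wiring, assuming spark-core on the classpath; buildSparkConf is a hypothetical helper, not code from this patch, and it assumes any non-standalone mode means Spark on YARN:

    import org.apache.spark.SparkConf

    def buildSparkConf(config: AnomalyDetectionAppConfig): SparkConf = {
      val spark = config.getSparkConfiguration
      // "standalone" points at the configured master, e.g. spark://localhost:7077
      val master =
        if (spark.getMode == "standalone") s"spark://${spark.getMasterHostPort}"
        else "yarn"
      new SparkConf().setAppName("ambari-metrics-anomaly-detection").setMaster(master)
    }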
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/configuration/MetricDefinitionServiceConfiguration.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/configuration/MetricDefinitionServiceConfiguration.scala
index b560713..a453f03 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/configuration/MetricDefinitionServiceConfiguration.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/configuration/MetricDefinitionServiceConfiguration.scala
@@ -17,8 +17,6 @@
 
 package org.apache.ambari.metrics.adservice.configuration
 
-import javax.validation.constraints.NotNull
-
 import com.fasterxml.jackson.annotation.JsonProperty
 
 /**
@@ -26,7 +24,6 @@ import com.fasterxml.jackson.annotation.JsonProperty
   */
 class MetricDefinitionServiceConfiguration {
 
-  @NotNull
   private val inputDefinitionDirectory: String = ""
 
   @JsonProperty
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/configuration/SparkConfiguration.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/configuration/SparkConfiguration.scala
new file mode 100644
index 0000000..30efdc7
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/configuration/SparkConfiguration.scala
@@ -0,0 +1,39 @@
+/**
+  * Licensed to the Apache Software Foundation (ASF) under one
+  * or more contributor license agreements.  See the NOTICE file
+  * distributed with this work for additional information
+  * regarding copyright ownership.  The ASF licenses this file
+  * to you under the Apache License, Version 2.0 (the
+  * "License"); you may not use this file except in compliance
+  * with the License.  You may obtain a copy of the License at
+  *
+  * http://www.apache.org/licenses/LICENSE-2.0
+  *
+  * Unless required by applicable law or agreed to in writing, software
+  * distributed under the License is distributed on an "AS IS" BASIS,
+  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  * See the License for the specific language governing permissions and
+  * limitations under the License.
+  */
+
+package org.apache.ambari.metrics.adservice.configuration
+
+import javax.validation.constraints.NotNull
+
+import com.fasterxml.jackson.annotation.JsonProperty
+
+class SparkConfiguration {
+
+  @NotNull
+  private var mode: String = _
+
+  @NotNull
+  private var masterHostPort: String = _
+
+  @JsonProperty
+  def getMode: String = mode
+
+  @JsonProperty
+  def getMasterHostPort: String = masterHostPort
+
+}
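The @NotNull annotations only take effect because Dropwizard runs the deserialized configuration through a javax.validation Validator at startup. A minimal sketch of that check in isolation (assumes a validation provider such as hibernate-validator on the classpath, which Dropwizard ships):

    import javax.validation.Validation
    import scala.collection.JavaConverters._

    val validator = Validation.buildDefaultValidatorFactory().getValidator
    // an empty SparkConfiguration violates both @NotNull constraints, so a
    // config.yaml missing the spark: block fails fast instead of failing later
    val violations = validator.validate(new SparkConfiguration)
    violations.asScala.foreach(v => println(s"${v.getPropertyPath}: ${v.getMessage}"))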
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/PhoenixAnomalyStoreAccessor.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/PhoenixAnomalyStoreAccessor.scala
index 147d1f7..53e6dee 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/PhoenixAnomalyStoreAccessor.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/PhoenixAnomalyStoreAccessor.scala
@@ -21,14 +21,11 @@ import java.sql.{Connection, PreparedStatement, ResultSet, SQLException}
 import java.util.concurrent.TimeUnit.SECONDS
 
 import org.apache.ambari.metrics.adservice.app.AnomalyDetectionAppConfig
-import org.apache.ambari.metrics.adservice.common._
 import org.apache.ambari.metrics.adservice.configuration.HBaseConfiguration
 import org.apache.ambari.metrics.adservice.metadata.{MetricDefinitionService, MetricKey}
 import org.apache.ambari.metrics.adservice.model.AnomalyDetectionMethod.AnomalyDetectionMethod
 import org.apache.ambari.metrics.adservice.model.AnomalyType.AnomalyType
-import org.apache.ambari.metrics.adservice.model.{AnomalyDetectionMethod, AnomalyType, MetricAnomalyInstance}
-import org.apache.ambari.metrics.adservice.subsystem.pointintime.PointInTimeAnomalyInstance
-import org.apache.ambari.metrics.adservice.subsystem.trend.TrendAnomalyInstance
+import org.apache.ambari.metrics.adservice.model._
 import org.apache.hadoop.hbase.util.RetryCounterFactory
 import org.slf4j.{Logger, LoggerFactory}
 
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/subsystem/pointintime/PointInTimeAnomalyInstance.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/model/PointInTimeAnomalyInstance.scala
similarity index 90%
rename from ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/subsystem/pointintime/PointInTimeAnomalyInstance.scala
rename to ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/model/PointInTimeAnomalyInstance.scala
index 56ca2c1..470cc2c 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/subsystem/pointintime/PointInTimeAnomalyInstance.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/model/PointInTimeAnomalyInstance.scala
@@ -15,15 +15,13 @@
  * limitations under the License.
  */
 
-package org.apache.ambari.metrics.adservice.subsystem.pointintime
+package org.apache.ambari.metrics.adservice.model
 
 import java.util.Date
 
-import org.apache.ambari.metrics.adservice.common.Season
 import org.apache.ambari.metrics.adservice.metadata.MetricKey
 import org.apache.ambari.metrics.adservice.model.AnomalyDetectionMethod.AnomalyDetectionMethod
 import org.apache.ambari.metrics.adservice.model.AnomalyType.AnomalyType
-import org.apache.ambari.metrics.adservice.model.{AnomalyType, MetricAnomalyInstance}
 
 class PointInTimeAnomalyInstance(val metricKey: MetricKey,
                                  val timestamp: Long,
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/common/Range.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/model/Range.scala
similarity index 96%
rename from ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/common/Range.scala
rename to ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/model/Range.scala
index 003c18f..4ad35e7 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/common/Range.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/model/Range.scala
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 
-package org.apache.ambari.metrics.adservice.common
+package org.apache.ambari.metrics.adservice.model
 
 /**
   * Class to capture a Range in a Season.
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/common/Season.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/model/Season.scala
similarity index 96%
rename from ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/common/Season.scala
rename to ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/model/Season.scala
index f875e3b..84784bc 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/common/Season.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/model/Season.scala
@@ -15,14 +15,14 @@
  * limitations under the License.
  */
 
-package org.apache.ambari.metrics.adservice.common
+package org.apache.ambari.metrics.adservice.model
 
 import java.time.DayOfWeek
 import java.util.Calendar
 
 import javax.xml.bind.annotation.XmlRootElement
 
-import org.apache.ambari.metrics.adservice.common.SeasonType.SeasonType
+import org.apache.ambari.metrics.adservice.model.SeasonType.SeasonType
 
 import com.fasterxml.jackson.databind.ObjectMapper
 import com.fasterxml.jackson.module.scala.DefaultScalaModule
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/common/SeasonType.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/model/SeasonType.scala
similarity index 94%
rename from ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/common/SeasonType.scala
rename to ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/model/SeasonType.scala
index 067972c..b510531 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/common/SeasonType.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/model/SeasonType.scala
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 
-package org.apache.ambari.metrics.adservice.common
+package org.apache.ambari.metrics.adservice.model
 
 object SeasonType extends Enumeration{
 
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/common/TimeRange.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/model/TimeRange.scala
similarity index 96%
rename from ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/common/TimeRange.scala
rename to ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/model/TimeRange.scala
index 50df658..0be2564 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/common/TimeRange.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/model/TimeRange.scala
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 
-package org.apache.ambari.metrics.adservice.common
+package org.apache.ambari.metrics.adservice.model
 
 import java.util.Date
 
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/subsystem/trend/TrendAnomalyInstance.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/model/TrendAnomalyInstance.scala
similarity index 90%
rename from ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/subsystem/trend/TrendAnomalyInstance.scala
rename to ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/model/TrendAnomalyInstance.scala
index 7392d59..d67747c 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/subsystem/trend/TrendAnomalyInstance.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/model/TrendAnomalyInstance.scala
@@ -15,13 +15,11 @@
  * limitations under the License.
  */
 
-package org.apache.ambari.metrics.adservice.subsystem.trend
+package org.apache.ambari.metrics.adservice.model
 
-import org.apache.ambari.metrics.adservice.common.{Season, TimeRange}
 import org.apache.ambari.metrics.adservice.metadata.MetricKey
 import org.apache.ambari.metrics.adservice.model.AnomalyDetectionMethod.AnomalyDetectionMethod
 import org.apache.ambari.metrics.adservice.model.AnomalyType.AnomalyType
-import org.apache.ambari.metrics.adservice.model.{AnomalyType, MetricAnomalyInstance}
 
 case class TrendAnomalyInstance (metricKey: MetricKey,
                                  anomalousPeriod: TimeRange,
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/resources/config.yml b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/resources/config.yaml
similarity index 86%
rename from ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/resources/config.yml
rename to ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/resources/config.yaml
index 7de06b4..6b09499 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/resources/config.yml
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/resources/config.yaml
@@ -10,16 +10,6 @@
 #See the License for the specific language governing permissions and
 #limitations under the License.
 
-server:
-  applicationConnectors:
-   - type: http
-     port: 9999
-  requestLog:
-    type: external
-
-logging:
-  type: external
-
 metricDefinitionService:
   inputDefinitionDirectory: /etc/ambari-metrics-anomaly-detection/conf/definitionDirectory
 
@@ -40,7 +30,6 @@ metricDefinitionDB:
   # Path to Level DB directory
   dbDirPath: /var/lib/ambari-metrics-anomaly-detection/
 
-#subsystemService:
-#  spark:
-#  pointInTime:
-#  trend:
\ No newline at end of file
+spark:
+  mode: standalone
+  masterHostPort: localhost:7077
\ No newline at end of file
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionAppConfigTest.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionAppConfigTest.scala
index 989ba21..76391a0 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionAppConfigTest.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionAppConfigTest.scala
@@ -18,28 +18,31 @@
 package org.apache.ambari.metrics.adservice.app
 
 import java.io.File
+import java.net.URL
 
 import javax.validation.Validator
 
 import org.scalatest.FunSuite
 
 import com.fasterxml.jackson.databind.ObjectMapper
+import com.fasterxml.jackson.datatype.guava.GuavaModule
 
 import io.dropwizard.configuration.YamlConfigurationFactory
-import io.dropwizard.jackson.Jackson
 import io.dropwizard.jersey.validation.Validators
 
 class AnomalyDetectionAppConfigTest extends FunSuite {
 
   test("testConfiguration") {
 
-    val objectMapper: ObjectMapper = Jackson.newObjectMapper()
+    val classLoader = getClass.getClassLoader
+    val url: URL = classLoader.getResource("config.yaml")
+    val file = new File(url.getFile)
+
+    val objectMapper: ObjectMapper = new ObjectMapper()
+    objectMapper.registerModule(new GuavaModule)
     val validator: Validator = Validators.newValidator
     val factory: YamlConfigurationFactory[AnomalyDetectionAppConfig] =
       new YamlConfigurationFactory[AnomalyDetectionAppConfig](classOf[AnomalyDetectionAppConfig], validator, objectMapper, "")
-
-    val classLoader = getClass.getClassLoader
-    val file = new File(classLoader.getResource("config.yml").getFile)
     val config = factory.build(file)
 
     assert(config.isInstanceOf[AnomalyDetectionAppConfig])
@@ -48,17 +51,17 @@ class AnomalyDetectionAppConfigTest extends FunSuite {
       "/etc/ambari-metrics-anomaly-detection/conf/definitionDirectory")
 
     assert(config.getMetricCollectorConfiguration.getHosts == "host1,host2")
-
     assert(config.getMetricCollectorConfiguration.getPort == "6188")
 
     assert(config.getAdServiceConfiguration.getAnomalyDataTtl == 604800)
 
     assert(config.getMetricDefinitionDBConfiguration.getDbDirPath == "/var/lib/ambari-metrics-anomaly-detection/")
-
     assert(config.getMetricDefinitionDBConfiguration.getVerifyChecksums)
-
     assert(!config.getMetricDefinitionDBConfiguration.getPerformParanoidChecks)
 
+    assert(config.getSparkConfiguration.getMode.equals("standalone"))
+    assert(config.getSparkConfiguration.getMasterHostPort.equals("localhost:7077"))
+
   }
 
 }
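The test used to lean on io.dropwizard.jackson.Jackson.newObjectMapper(), which pre-registers GuavaModule among several others; the rewrite builds a bare ObjectMapper and registers only the Guava support this configuration needs. The two setups side by side, as a sketch for comparison rather than test code:

    import com.fasterxml.jackson.databind.ObjectMapper
    import com.fasterxml.jackson.datatype.guava.GuavaModule
    import io.dropwizard.jackson.Jackson

    val viaDropwizard: ObjectMapper = Jackson.newObjectMapper()                 // GuavaModule and friends pre-registered
    val bare: ObjectMapper = new ObjectMapper().registerModule(new GuavaModule) // just what the config binding needs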
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/app/DefaultADResourceSpecTest.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/app/DefaultADResourceSpecTest.scala
index e38ea40..7330ff9 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/app/DefaultADResourceSpecTest.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/app/DefaultADResourceSpecTest.scala
@@ -36,7 +36,7 @@ class DefaultADResourceSpecTest extends FunSpec with Matchers {
 
   describe("/anomaly") {
     it("Must return default message") {
-      withAppRunning(classOf[AnomalyDetectionApp], Resources.getResource("config.yml").getPath) { rule =>
+      withAppRunning(classOf[AnomalyDetectionApp], Resources.getResource("config.yaml").getPath) { rule =>
         val json = client.target(s"http://localhost:${rule.getLocalPort}/anomaly")
           .request().accept(APPLICATION_JSON).buildGet().invoke(classOf[String])
         val dtf = java.time.format.DateTimeFormatter.ofPattern("yyyy/MM/dd HH:mm")
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/common/RangeTest.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/model/RangeTest.scala
similarity index 85%
rename from ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/common/RangeTest.scala
rename to ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/model/RangeTest.scala
index b610b97..16f4951 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/common/RangeTest.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/model/RangeTest.scala
@@ -15,14 +15,15 @@
  * limitations under the License.
  */
 
-package org.apache.ambari.metrics.adservice.common
+package org.apache.ambari.metrics.adservice.model
 
+import org.apache.ambari.metrics.adservice.model
 import org.scalatest.FlatSpec
 
 class RangeTest extends FlatSpec {
 
   "A Range " should " return true for inner and boundary values" in {
-    val range : Range = Range(4,6)
+    val range : model.Range = model.Range(4,6)
     assert(range.withinRange(5))
     assert(range.withinRange(6))
     assert(range.withinRange(4))
@@ -30,7 +31,7 @@ class RangeTest extends FlatSpec {
   }
 
   it should "accept same lower and higher range values" in {
-    val range : Range = Range(4,4)
+    val range : model.Range = model.Range(4,4)
     assert(range.withinRange(4))
   }
 
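Qualifying the type as model.Range is deliberate: now that the class lives in the model package, a bare Range in these tests is easy to confuse with the always-in-scope scala.Range. A small sketch of the two types side by side (assuming the model.Range case class from this patch):

    import org.apache.ambari.metrics.adservice.model

    val seasonal: model.Range = model.Range(4, 6) // the ad-service Range: inclusive bounds within a Season
    val numeric: scala.Range = scala.Range(4, 6)  // the standard-library Range: 4 until 6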
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/common/SeasonTest.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/model/SeasonTest.scala
similarity index 75%
rename from ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/common/SeasonTest.scala
rename to ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/model/SeasonTest.scala
index a823c73..a661c05 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/common/SeasonTest.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/model/SeasonTest.scala
@@ -15,10 +15,11 @@
  * limitations under the License.
  */
 
-package org.apache.ambari.metrics.adservice.common
+package org.apache.ambari.metrics.adservice.model
 
 import java.util.Calendar
 
+import org.apache.ambari.metrics.adservice.model
 import org.scalatest.FunSuite
 
 class SeasonTest extends FunSuite {
@@ -26,7 +27,7 @@ class SeasonTest extends FunSuite {
   test("testBelongsTo") {
 
     //Create Season for weekdays. Mon to Friday and 9AM - 5PM
-    var season : Season = Season(Range(Calendar.MONDAY,Calendar.FRIDAY), Range(9,17))
+    var season : Season = Season(model.Range(Calendar.MONDAY,Calendar.FRIDAY), model.Range(9,17))
 
     //Try with a timestamp on a Monday, @ 9AM.
     val c = Calendar.getInstance
@@ -41,7 +42,7 @@ class SeasonTest extends FunSuite {
     assert(!season.belongsTo(c.getTimeInMillis))
 
     //Create Season for Monday 11AM - 12Noon.
-    season = Season(Range(Calendar.MONDAY,Calendar.MONDAY), Range(11,12))
+    season = Season(model.Range(Calendar.MONDAY,Calendar.MONDAY), model.Range(11,12))
     c.set(2017, Calendar.OCTOBER, 30, 9, 0, 0)
     assert(!season.belongsTo(c.getTimeInMillis))
 
@@ -50,7 +51,7 @@ class SeasonTest extends FunSuite {
 
 
     //Create Season from Friday to Monday and 9AM - 5PM
-    season = Season(Range(Calendar.FRIDAY,Calendar.MONDAY), Range(9,17))
+    season = Season(model.Range(Calendar.FRIDAY,Calendar.MONDAY), model.Range(9,17))
 
     //Try with a timestamp on a Monday, @ 9AM.
     c.set(2017, Calendar.OCTOBER, 30, 9, 0, 0)
@@ -67,23 +68,23 @@ class SeasonTest extends FunSuite {
 
   test("testEquals") {
 
-    var season1: Season =  Season(Range(4,5), Range(2,3))
-    var season2: Season =  Season(Range(4,5), Range(2,3))
+    var season1: Season =  Season(model.Range(4,5), model.Range(2,3))
+    var season2: Season =  Season(model.Range(4,5), model.Range(2,3))
     assert(season1 == season2)
 
-    var season3: Season =  Season(Range(4,4), Range(2,3))
+    var season3: Season =  Season(model.Range(4,4), model.Range(2,3))
     assert(!(season1 == season3))
   }
 
   test("testSerialize") {
-    val season1 : Season = Season(Range(Calendar.MONDAY,Calendar.FRIDAY), Range(9,17))
+    val season1 : Season = Season(model.Range(Calendar.MONDAY,Calendar.FRIDAY), model.Range(9,17))
 
     val seasonString = Season.toJson(season1)
 
     val season2 : Season = Season.fromJson(seasonString)
     assert(season1 == season2)
 
-    val season3 : Season = Season(Range(Calendar.MONDAY,Calendar.THURSDAY), Range(9,17))
+    val season3 : Season = Season(model.Range(Calendar.MONDAY,Calendar.THURSDAY), model.Range(9,17))
     assert(!(season2 == season3))
 
   }
diff --git a/ambari-metrics/ambari-metrics-assembly/pom.xml b/ambari-metrics/ambari-metrics-assembly/pom.xml
index 43ff285..b1a6430 100644
--- a/ambari-metrics/ambari-metrics-assembly/pom.xml
+++ b/ambari-metrics/ambari-metrics-assembly/pom.xml
@@ -42,6 +42,7 @@
     <storm-sink-legacy.dir>${project.basedir}/../ambari-metrics-storm-sink-legacy</storm-sink-legacy.dir>
     <flume-sink.dir>${project.basedir}/../ambari-metrics-flume-sink</flume-sink.dir>
     <kafka-sink.dir>${project.basedir}/../ambari-metrics-kafka-sink</kafka-sink.dir>
+    <anomaly-detection.dir>${project.basedir}/../ambari-metrics-anomaly-detection-service</anomaly-detection.dir>
     <python.ver>python &gt;= 2.6</python.ver>
     <python.devel>python-devel</python.devel>
     <deb.publisher>Apache</deb.publisher>
@@ -56,6 +57,7 @@
     <storm.sink.legacy.jar>ambari-metrics-storm-sink-legacy-with-common-${project.version}.jar</storm.sink.legacy.jar>
     <flume.sink.jar>ambari-metrics-flume-sink-with-common-${project.version}.jar</flume.sink.jar>
     <kafka.sink.jar>ambari-metrics-kafka-sink-with-common-${project.version}.jar</kafka.sink.jar>
+    <anomaly.detection.jar>ambari-metrics-anomaly-detection-service-${project.version}.jar</anomaly.detection.jar>
   </properties>
 
   <build>
@@ -139,6 +141,22 @@
             </configuration>
           </execution>
           <execution>
+            <id>anomaly-detection</id>
+            <phase>prepare-package</phase>
+            <goals>
+              <goal>single</goal>
+            </goals>
+            <configuration>
+              <attach>false</attach>
+              <finalName>ambari-metrics-anomaly-detection-${project.version}</finalName>
+              <appendAssemblyId>false</appendAssemblyId>
+              <descriptors>
+                <descriptor>${assemblydescriptor.anomaly-detection}</descriptor>
+              </descriptors>
+              <tarLongFileMode>gnu</tarLongFileMode>
+            </configuration>
+          </execution>
+          <execution>
             <id>hadoop-sink</id>
             <phase>prepare-package</phase>
             <goals>
@@ -638,6 +656,81 @@
                 </configuration>
               </execution>
 
+              <!--ambari-metrics-anomaly-detection-->
+              <execution>
+                <id>ambari-metrics-anomaly-detection</id>
+                <phase>package</phase>
+                <goals>
+                  <goal>rpm</goal>
+                </goals>
+                <configuration>
+                <name>ambari-metrics-anomaly-detection</name>
+                <copyright>2012, Apache Software Foundation</copyright>
+                <group>Development</group>
+                <description>Maven Recipe: RPM Package.</description>
+                <autoRequires>false</autoRequires>
+
+
+                <defaultFilemode>644</defaultFilemode>
+                <defaultDirmode>755</defaultDirmode>
+                <defaultUsername>root</defaultUsername>
+                <defaultGroupname>root</defaultGroupname>
+
+                <postinstallScriptlet>
+                  <scriptFile>${project.build.directory}/resources/rpm/anomaly-detection/postinstall.sh</scriptFile>
+                  <fileEncoding>utf-8</fileEncoding>
+                </postinstallScriptlet>
+
+                <mappings>
+                  <mapping>
+                    <!--jars-->
+                    <directory>/usr/lib/ambari-metrics-anomaly-detection/</directory>
+                    <sources>
+                      <source>
+                        <location>
+                          ${anomaly-detection.dir}/target/ambari-metrics-anomaly-detection-service-${project.version}.jar
+                        </location>
+                      </source>
+                    </sources>
+                  </mapping>
+                  <mapping>
+                    <directory>/usr/lib/ambari-metrics-anomaly-detection/spark</directory>
+                    <sources>
+                      <source>
+                        <location>
+                          ${anomaly-detection.dir}/target/embedded/spark
+                        </location>
+                      </source>
+                    </sources>
+                  </mapping>
+                  <mapping>
+                    <directory>/usr/sbin</directory>
+                    <filemode>755</filemode>
+                    <username>root</username>
+                    <groupname>root</groupname>
+                    <directoryIncluded>false</directoryIncluded>
+                    <sources>
+                      <source>
+                        <location>${anomaly-detection.dir}/conf/unix/ambari-metrics-admanager</location>
+                        <filter>false</filter>
+                      </source>
+                    </sources>
+                  </mapping>
+                  <mapping>
+                    <directory>/etc/ambari-metrics-anomaly-detection/conf</directory>
+                    <configuration>true</configuration>
+                    <sources>
+                      <source>
+                        <location>${anomaly-detection.dir}/conf/unix/config.yaml</location>
+                      </source>
+                      <source>
+                        <location>${anomaly-detection.dir}/conf/unix/log4j.properties</location>
+                      </source>
+                    </sources>
+                  </mapping>
+                </mappings>
+                </configuration>
+              </execution>
 
             </executions>
           </plugin>
@@ -757,10 +850,13 @@
                     <path>/etc/ambari-metrics-collector/conf</path>
                     <path>/etc/ambari-metrics-grafana/conf</path>
                     <path>/etc/ams-hbase/conf</path>
+                    <path>/etc/ambari-metrics-anomaly-detection/conf</path>
                     <path>/var/run/ams-hbase</path>
                     <path>/var/run/ambari-metrics-grafana</path>
                     <path>/var/log/ambari-metrics-grafana</path>
                     <path>/var/lib/ambari-metrics-collector</path>
+                    <path>/usr/lib/ambari-metrics-anomaly-detection</path>
+                    <path>/var/lib/ambari-metrics-anomaly-detection</path>
                     <path>/var/lib/ambari-metrics-monitor/lib</path>
                     <path>/var/lib/ambari-metrics-grafana</path>
                     <path>/usr/lib/ambari-metrics-hadoop-sink</path>
@@ -979,6 +1075,49 @@
                   </mapper>
                 </data>
 
+                <!-- Anomaly Detection -->
+                <data>
+                  <src>${anomaly-detection.dir}/target/${anomaly.detection.jar}</src>
+                  <type>file</type>
+                  <mapper>
+                    <type>perm</type>
+                    <dirmode>644</dirmode>
+                    <prefix>/usr/lib/ambari-metrics-anomaly-detection</prefix>
+                  </mapper>
+                </data>
+                <data>
+                  <type>link</type>
+                  <linkName>/usr/lib/ambari-metrics-anomaly-detection/ambari-metrics-anomaly-detection-service.jar</linkName>
+                  <linkTarget>/usr/lib/ambari-metrics-anomaly-detection/${anomaly.detection.jar}</linkTarget>
+                  <symlink>true</symlink>
+                </data>
+                <data>
+                  <src>${anomaly-detection.dir}/target/embedded/spark</src>
+                  <type>directory</type>
+                  <mapper>
+                    <type>perm</type>
+                    <prefix>/usr/lib/ambari-metrics-anomaly-detection/spark</prefix>
+                    <filemode>644</filemode>
+                  </mapper>
+                </data>
+                <data>
+                  <src>${anomaly-detection.dir}/conf/unix/config.yaml</src>
+                  <type>file</type>
+                  <mapper>
+                    <type>perm</type>
+                    <filemode>755</filemode>
+                    <prefix>/etc/ambari-metrics-anomaly-detection/conf</prefix>
+                  </mapper>
+                </data>
+                <data>
+                  <src>${anomaly-detection.dir}/conf/unix/log4j.properties</src>
+                  <type>file</type>
+                  <mapper>
+                    <type>perm</type>
+                    <filemode>755</filemode>
+                    <prefix>/etc/ambari-metrics-anomaly-detection/conf</prefix>
+                  </mapper>
+                </data>
                 <!-- hadoop sink -->
 
                 <data>
@@ -1075,6 +1214,8 @@
         <assemblydescriptor.monitor>src/main/assembly/monitor.xml</assemblydescriptor.monitor>
         <assemblydescriptor.sink>src/main/assembly/sink.xml</assemblydescriptor.sink>
         <assemblydescriptor.grafana>src/main/assembly/grafana.xml</assemblydescriptor.grafana>
+        <assemblydescriptor.anomaly-detection>src/main/assembly/anomaly-detection.xml</assemblydescriptor.anomaly-detection>
+
         <packagingFormat>jar</packagingFormat>
       </properties>
       <build>
@@ -1354,6 +1495,13 @@
       <artifactId>ambari-metrics-host-aggregator</artifactId>
       <version>${project.version}</version>
     </dependency>
+    <dependency>
+      <groupId>org.apache.ambari</groupId>
+      <artifactId>ambari-metrics-anomaly-detection-service</artifactId>
+      <version>${project.version}</version>
+      <type>pom</type>
+      <optional>true</optional>
+    </dependency>
   </dependencies>
 
 
diff --git a/ambari-metrics/ambari-metrics-assembly/src/main/assembly/anomaly-detection.xml b/ambari-metrics/ambari-metrics-assembly/src/main/assembly/anomaly-detection.xml
new file mode 100644
index 0000000..b05aaf3
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-assembly/src/main/assembly/anomaly-detection.xml
@@ -0,0 +1,60 @@
+<?xml version="1.0"?>
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~     http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing, software
+  ~ distributed under the License is distributed on an "AS IS" BASIS,
+  ~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  ~ See the License for the specific language governing permissions and
+  ~ limitations under the License.
+  -->
+
+<assembly xmlns="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.1"
+          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+          xsi:schemaLocation="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.1 http://maven.apache.org/xsd/assembly-1.1.1.xsd">
+  <id>anomaly-detection</id>
+  <formats>
+    <format>dir</format>
+    <format>tar.gz</format>
+  </formats>
+
+  <fileSets>
+    <fileSet>
+      <directory>${anomaly-detection.dir}/target/embedded/spark</directory>
+      <outputDirectory>anomaly-detection/spark</outputDirectory>
+    </fileSet>
+    <fileSet>
+      <directory>${anomaly-detection.dir}/conf/unix</directory>
+      <outputDirectory>anomaly-detection/bin</outputDirectory>
+      <includes>
+        <include>ambari-metrics-admanager</include>
+      </includes>
+    </fileSet>
+    <fileSet>
+      <directory>${anomaly-detection.dir}/conf/unix</directory>
+      <outputDirectory>anomaly-detection/conf</outputDirectory>
+      <includes>
+        <include>config.yaml</include>
+        <include>log4j.properties</include>
+      </includes>
+    </fileSet>
+  </fileSets>
+
+  <files>
+    <file>
+      <fileMode>644</fileMode>
+      <source>${anomaly-detection.dir}/target/ambari-metrics-anomaly-detection-service-${project.version}.jar
+      </source>
+      <outputDirectory>anomaly-detection</outputDirectory>
+    </file>
+  </files>
+</assembly>
\ No newline at end of file
diff --git a/ambari-metrics/ambari-metrics-assembly/src/main/package/rpm/anomaly-detection/postinstall.sh b/ambari-metrics/ambari-metrics-assembly/src/main/package/rpm/anomaly-detection/postinstall.sh
new file mode 100644
index 0000000..399c439
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-assembly/src/main/package/rpm/anomaly-detection/postinstall.sh
@@ -0,0 +1,27 @@
+#!/bin/bash
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License
+
+
+ANOMALY_DETECTION_LINK_NAME="/usr/lib/ambari-metrics-anomaly-detection/ambari-metrics-anomaly-detection-service.jar"
+ANOMALY_DETECTION_JAR="/usr/lib/ambari-metrics-anomaly-detection/${anomaly.detection.jar}"
+
+JARS=(${ANOMALY_DETECTION_JAR})
+LINKS=(${ANOMALY_DETECTION_LINK_NAME})
+
+for index in ${!LINKS[*]}
+do
+  rm -f ${LINKS[$index]} ; ln -s ${JARS[$index]} ${LINKS[$index]}
+done
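The ${anomaly.detection.jar} token in the scriptlet is presumably substituted by Maven resource filtering when the script is copied under target/resources, so the installed RPM sees the versioned jar name. What the loop then does at install time, sketched in JVM terms with an illustrative version in the jar name:

    import java.nio.file.{Files, Paths}

    val libDir = Paths.get("/usr/lib/ambari-metrics-anomaly-detection")
    val link   = libDir.resolve("ambari-metrics-anomaly-detection-service.jar")
    // assumption: the filtered ${anomaly.detection.jar} value for this build
    val target = libDir.resolve("ambari-metrics-anomaly-detection-service-2.0.0.0-SNAPSHOT.jar")
    Files.deleteIfExists(link)              // rm -f "$link"
    Files.createSymbolicLink(link, target)  // ln -s "$target" "$link"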
diff --git a/ambari-metrics/pom.xml b/ambari-metrics/pom.xml
index a8c71e6..98559e6 100644
--- a/ambari-metrics/pom.xml
+++ b/ambari-metrics/pom.xml
@@ -32,9 +32,9 @@
     <module>ambari-metrics-timelineservice</module>
     <module>ambari-metrics-host-monitoring</module>
     <module>ambari-metrics-grafana</module>
-    <module>ambari-metrics-assembly</module>
     <module>ambari-metrics-host-aggregator</module>
     <module>ambari-metrics-anomaly-detection-service</module>
+    <module>ambari-metrics-assembly</module>
   </modules>
   <properties>
     <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
@@ -53,6 +53,8 @@
     <grafana.tar>https://grafanarel.s3.amazonaws.com/builds/grafana-2.6.0.linux-x64.tar.gz</grafana.tar>
     <phoenix.tar>https://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.6.4.0/tars/phoenix/phoenix-4.7.0.2.6.4.0-91.tar.gz</phoenix.tar>
     <phoenix.folder>phoenix-4.7.0.2.6.4.0-91</phoenix.folder>
+    <spark.tar>http://dev.hortonworks.com.s3.amazonaws.com/HDP/centos7/3.x/BUILDS/3.0.0.0-439/tars/spark2/spark-2.1.0.3.0.0.0-439-bin-3.0.0.3.0.0.0-439.tgz</spark.tar>
+    <spark.folder>spark-2.1.0.3.0.0.0-439-bin-3.0.0.3.0.0.0-439</spark.folder>
     <resmonitor.install.dir>
       /usr/lib/python2.6/site-packages/resource_monitoring
     </resmonitor.install.dir>
diff --git a/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/configuration/ams-admanager-config.xml b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/configuration/ams-admanager-config.xml
index 2c6bbf7..9862f10 100644
--- a/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/configuration/ams-admanager-config.xml
+++ b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/configuration/ams-admanager-config.xml
@@ -102,6 +102,10 @@
         performParanoidChecks: false
         # Path to Level DB directory
         dbDirPath: {{ams_ad_data_dir}}
+
+      spark:
+        mode: {{admanager_spark_op_mode}}
+        masterHostPort: {{admanager_spark_hostport}}
     </value>
     <value-attributes>
       <type>content</type>
diff --git a/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/configuration/ams-admanager-env.xml b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/configuration/ams-admanager-env.xml
index a79796b..91073ee 100644
--- a/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/configuration/ams-admanager-env.xml
+++ b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/configuration/ams-admanager-env.xml
@@ -83,10 +83,10 @@
       export AMS_AD_HEAPSIZE={{ams_admanager_heapsize}}
 
       # Anomaly Detection Manager data dir
-      export AMS_AD_DATA_DIR={{ams_admanager_data_dir}}
+      export AMS_AD_DATA_DIR={{ams_ad_data_dir}}
 
       # Anomaly Detection Manager options
-      export AMS_AD_OPTS="
+      export AMS_AD_OPTS=$AMS_AD_OPTS
       {% if security_enabled %}
       export AMS_AD_OPTS="$AMS_AD_OPTS -Djava.security.auth.login.config={{ams_ad_jaas_config_file}}"
       {% endif %}
@@ -95,6 +95,8 @@
       export AMS_AD_GC_OPTS="-XX:+UseConcMarkSweepGC -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:{{ams_ad_log_dir}}/admanager-gc.log-`date +'%Y%m%d%H%M'`"
       export AMS_AD_OPTS="$AMS_AD_OPTS $AMS_AD_GC_OPTS"
 
+      # Whether the embedded standalone Spark deployment is enabled
+      export AMS_AD_STANDALONE_SPARK_ENABLED={{ams_ad_standalone_spark_enabled}}
     </value>
     <value-attributes>
       <type>content</type>
diff --git a/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/configuration/ams-admanager-log4j.xml b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/configuration/ams-admanager-log4j.xml
index b1f821e..ad28dcd 100644
--- a/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/configuration/ams-admanager-log4j.xml
+++ b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/configuration/ams-admanager-log4j.xml
@@ -63,7 +63,7 @@
             #
 
             # Define some default values that can be overridden by system properties
-            ams.ad.log.dir=.
+            ams.ad.log.dir={{ams_ad_log_dir}}
             ams.ad.log.file=ambari-metrics-admanager.log
 
             # Root logger option
diff --git a/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/configuration/ams-admanager-spark-env.xml b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/configuration/ams-admanager-spark-env.xml
new file mode 100644
index 0000000..3c2fb89
--- /dev/null
+++ b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/configuration/ams-admanager-spark-env.xml
@@ -0,0 +1,129 @@
+<?xml version="1.0"?>
+<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
+<!--
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+-->
+<configuration supports_adding_forbidden="true">
+  <property>
+    <name>spark_daemon_memory</name>
+    <value>512</value>
+    <description>Memory for Master, Worker and history server (default: 1G)</description>
+    <value-attributes>
+      <type>int</type>
+      <unit>MB</unit>
+    </value-attributes>
+    <on-ambari-upgrade add="true"/>
+  </property>
+  <property>
+    <name>spark_master_port</name>
+    <value>6190</value>
+    <description>Start the master on this port</description>
+    <on-ambari-upgrade add="true"/>
+  </property>
+  <property>
+    <name>spark_master_webui_port</name>
+    <value>6180</value>
+    <description>Port for the master web UI</description>
+    <on-ambari-upgrade add="true"/>
+  </property>
+  <property>
+    <name>spark_worker_cores</name>
+    <value>4</value>
+    <description>Total number of cores to allow Spark applications to use on the machine</description>
+    <on-ambari-upgrade add="true"/>
+  </property>
+  <property>
+    <name>spark_worker_memory</name>
+    <value>2048</value>
+    <description>Total amount of memory to allow Spark applications to use on the machine</description>
+    <on-ambari-upgrade add="true"/>
+  </property>
+  <property>
+    <name>spark_worker_webui_port</name>
+    <value>6181</value>
+    <description>Port for the worker web UI</description>
+    <on-ambari-upgrade add="true"/>
+  </property>
+  <property>
+    <name>content</name>
+    <description>This is the jinja template for spark-env.sh file</description>
+    <value>
+      #!/usr/bin/env bash
+
+      # This file is sourced when running various Spark programs.
+      # Copy it as spark-env.sh and edit that to configure Spark for your site.
+
+      # Generic options for the daemons used in the standalone deploy mode
+
+      export SPARK_MASTER_HOST={{hostname}}
+      export SPARK_MASTER_PORT={{spark_master_port}}
+      export SPARK_MASTER_WEBUI_PORT={{spark_master_webui_port}}
+      export SPARK_WORKER_CORES={{spark_worker_cores}}
+      export SPARK_WORKER_MEMORY={{spark_worker_memory}}m
+      export SPARK_WORKER_WEBUI_PORT={{spark_worker_webui_port}}
+      export SPARK_WORKER_DIR={{ams_ad_log_dir}}
+
+      export SPARK_MASTER_OPTS=$SPARK_MASTER_OPTS
+      export SPARK_WORKER_OPTS=$SPARK_WORKER_OPTS
+
+      # Alternate conf dir. (Default: ${SPARK_HOME}/conf)
+      export SPARK_CONF_DIR={{ams_ad_conf_dir}}
+
+      # Where log files are stored.(Default:${SPARK_HOME}/logs)
+      export SPARK_LOG_DIR={{ams_ad_log_dir}}
+
+      # Where the pid file is stored. (Default: /tmp)
+      export SPARK_PID_DIR={{ams_ad_pid_dir}}
+
+      #Memory for Master, Worker and history server (default: 1024MB)
+      export SPARK_DAEMON_MEMORY={{spark_daemon_memory}}m
+
+      # A string representing this instance of spark.(Default: $USER)
+      SPARK_IDENT_STRING=$USER
+
+      # The scheduling priority for daemons. (Default: 0)
+      SPARK_NICENESS=0
+
+      # Options read in YARN client mode
+      #SPARK_EXECUTOR_INSTANCES="2" #Number of workers to start (Default: 2)
+      #SPARK_EXECUTOR_CORES="1" #Number of cores for the workers (Default: 1).
+      #SPARK_EXECUTOR_MEMORY="1G" #Memory per Worker (e.g. 1000M, 2G) (Default: 1G)
+      #SPARK_DRIVER_MEMORY="512M" #Memory for Master (e.g. 1000M, 2G) (Default: 512 Mb)
+      #SPARK_YARN_APP_NAME="spark" #The name of your application (Default: Spark)
+      #SPARK_YARN_QUEUE="default" #The hadoop queue to use for allocation requests (Default: default)
+      #SPARK_YARN_DIST_FILES="" #Comma separated list of files to be distributed with the job.
+      #SPARK_YARN_DIST_ARCHIVES="" #Comma separated list of archives to be distributed with the job.
+
+      #export HADOOP_HOME=${HADOOP_HOME:-{{hadoop_home}}}
+      #export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-{{hadoop_conf_dir}}}
+
+      # The java implementation to use.
+      export JAVA_HOME={{java_home}}
+
+      #HDP Version
+      export HDP_VERSION=3.0.0
+
+    </value>
+    <value-attributes>
+      <type>content</type>
+    </value-attributes>
+    <on-ambari-upgrade add="false"/>
+  </property>
+</configuration>
diff --git a/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/metainfo.xml b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/metainfo.xml
index 41e278d..bcf6268 100644
--- a/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/metainfo.xml
+++ b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/metainfo.xml
@@ -138,6 +138,7 @@
             <config-type>ams-admanager-config</config-type>
             <config-type>ams-admanager-env</config-type>
             <config-type>ams-admanager-log4j</config-type>
+            <config-type>ams-admanager-spark-env</config-type>
           </configuration-dependencies>
           <logs>
             <log>
diff --git a/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/ams.py b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/ams.py
index 7ab0547..4c6951a 100644
--- a/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/ams.py
+++ b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/ams.py
@@ -555,14 +555,26 @@ def ams(name=None, action=None):
     if (params.ams_ad_log4j_props != None):
       File(os.path.join(params.ams_ad_conf_dir, "log4j.properties"),
          owner=params.ams_user,
-         content=params.ams_ad_log4j_props
+         content=InlineTemplate(params.ams_ad_log4j_props)
          )
 
+    File(format("{ams_ad_conf_dir}/ams-admanager-spark-env.sh"),
+          owner=params.ams_user,
+          group=params.user_group,
+          content=InlineTemplate(params.ams_ad_spark_env_sh_template)
+        )
+
     if action != 'stop':
       for dir in ams_ad_directories:
         Execute(('chown', '-R', params.ams_user, dir),
                 sudo=True
                 )
+      Execute(('chmod', '-R', '755', format("{ams_admanager_lib_dir}/spark/bin")),
+                sudo = True,
+                )
+      Execute(('chmod', '-R', '755', format("{ams_admanager_lib_dir}/spark/sbin")),
+              sudo = True,
+              )
 
 def is_spnego_enabled(params):
   if 'core-site' in params.config['configurations'] \
diff --git a/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/ams_admanager.py b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/ams_admanager.py
index 96c4454..33c8832 100644
--- a/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/ams_admanager.py
+++ b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/ams_admanager.py
@@ -45,7 +45,7 @@ class AmsADManager(Script):
     Execute(start_cmd,
             user=params.ams_user
             )
-    pidfile = format("{ams_ad_pid_dir}/admanager.pid")
+    pidfile = format("{ams_ad_pid_dir}/ambari-metrics-admanager.pid")
     if not sudo.path_exists(pidfile):
       Logger.warning("Pid file doesn't exist after starting of the component.")
     else:
@@ -57,7 +57,7 @@ class AmsADManager(Script):
     env.set_params(params)
     self.configure(env, action = 'stop')
     Execute((format("{ams_admanager_script}"), 'stop'),
-            sudo=True
+            user=params.ams_user
             )
 
   def status(self, env):
diff --git a/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/params.py b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/params.py
index 40d3db6..dd2a686 100644
--- a/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/params.py
+++ b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/params.py
@@ -154,6 +154,7 @@ def get_ambari_version():
     pass
   return ambari_version
 
+hostname = config['agentLevelParams']['hostname']
 
 ams_collector_log_dir = config['configurations']['ams-env']['metrics_collector_log_dir']
 ams_collector_conf_dir = "/etc/ambari-metrics-collector/conf"
@@ -199,6 +200,26 @@ ams_admanager_config_template = config['configurations']['ams-admanager-config']
 ams_admanager_script = "/usr/sbin/ambari-metrics-admanager"
 ams_admanager_port = config['configurations']['ams-admanager-config']['ambari.metrics.admanager.application.port']
 ams_admanager_heapsize = config['configurations']['ams-admanager-env']['ams_admanager_heapsize']
+ams_admanager_lib_dir = "/usr/lib/ambari-metrics-anomaly-detection"
+ams_admanager_jar = format("{ams_admanager_lib_dir}/ambari-metrics-anomaly-detection-service-*.jar")
+ams_ad_log_max_backup_size = default('configurations/ams-admanager-log4j/ams_ad_log_max_backup_size',80)
+ams_ad_log_number_of_backup_files = default('configurations/ams-admanager-log4j/ams_ad_log_number_of_backup_files',60)
+
+admanager_spark_op_mode = config['configurations']['ams-admanager-config']['ambari.metrics.admanager.spark.operation.mode']
+ams_ad_spark_env_sh_template = config['configurations']['ams-admanager-spark-env']['content']
+spark_master_port = default("/configurations/ams-admanager-spark-env/spark_master_port", 6190)
+spark_master_webui_port = default("/configurations/ams-admanager-spark-env/spark_master_webui_port", 6180)
+spark_worker_cores = default("/configurations/ams-admanager-spark-env/spark_worker_cores", 4)
+spark_worker_memory = default("/configurations/ams-admanager-spark-env/spark_worker_memory", 2048)
+spark_worker_webui_port = default("/configurations/ams-admanager-spark-env/spark_worker_webui_port", 6181)
+spark_daemon_memory = default("/configurations/ams-admanager-spark-env/spark_daemon_memory", 1024)
+
+if admanager_spark_op_mode == 'spark-on-yarn':
+  admanager_spark_hostport = hostname + ":" + str(spark_master_port) #TODO : Fix for spark on yarn mode.
+  ams_ad_standalone_spark_enabled = False
+else:
+  admanager_spark_hostport = hostname + ":" + str(spark_master_port)
+  ams_ad_standalone_spark_enabled = True
 
 if (('ams-admanager-log4j' in config['configurations']) and ('content' in config['configurations']['ams-admanager-log4j'])):
   ams_ad_log4j_props = config['configurations']['ams-admanager-log4j']['content']
@@ -289,7 +310,6 @@ else:
   hbase_heapsize = master_heapsize
 
 max_open_files_limit = default("/configurations/ams-hbase-env/max_open_files_limit", "32768")
-hostname = config['agentLevelParams']['hostname']
 
 cluster_zookeeper_quorum_hosts = ",".join(config['clusterHostInfo']['zookeeper_server_hosts'])
 if 'zoo.cfg' in config['configurations'] and 'clientPort' in config['configurations']['zoo.cfg']:
diff --git a/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/status_params.py b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/status_params.py
index 3373592..347c290 100644
--- a/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/status_params.py
+++ b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/status_params.py
@@ -37,7 +37,7 @@ ams_ad_pid_dir = config['configurations']['ams-admanager-env']['ams_ad_pid_dir']
 
 monitor_pid_file = format("{ams_monitor_pid_dir}/ambari-metrics-monitor.pid")
 grafana_pid_file = format("{ams_grafana_pid_dir}/grafana-server.pid")
-ams_ad_pid_file = format("{ams_ad_pid_dir}/admanager.pid")
+ams_ad_pid_file = format("{ams_ad_pid_dir}/ambari-metrics-admanager.pid")
 
 security_enabled = config['configurations']['cluster-env']['security_enabled']
 ams_hbase_conf_dir = format("{hbase_conf_dir}")

[ambari] 16/39: Fixed compile errors from Merge trunk into branch-3.0-ams

Posted by av...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

avijayan pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/ambari.git

commit 6830f00de4de3977e9a57d56378fb1864f1868ca
Author: Siddharth Wagle <sw...@hortonworks.com>
AuthorDate: Tue Sep 26 15:56:28 2017 -0700

    Fixed compile errors from Merge trunk into branch-3.0-ams
---
 .../metrics2/sink/timeline/cache/HandleConnectExceptionTest.java     | 5 +++++
 .../server/controller/metrics/timeline/MetricsRequestHelper.java     | 2 +-
 2 files changed, 6 insertions(+), 1 deletion(-)

diff --git a/ambari-metrics/ambari-metrics-common/src/test/java/org/apache/hadoop/metrics2/sink/timeline/cache/HandleConnectExceptionTest.java b/ambari-metrics/ambari-metrics-common/src/test/java/org/apache/hadoop/metrics2/sink/timeline/cache/HandleConnectExceptionTest.java
index 77aba6b..4bcc2fb 100644
--- a/ambari-metrics/ambari-metrics-common/src/test/java/org/apache/hadoop/metrics2/sink/timeline/cache/HandleConnectExceptionTest.java
+++ b/ambari-metrics/ambari-metrics-common/src/test/java/org/apache/hadoop/metrics2/sink/timeline/cache/HandleConnectExceptionTest.java
@@ -213,6 +213,11 @@ public class HandleConnectExceptionTest {
     }
 
     @Override
+    protected String getHostInMemoryAggregationProtocol() {
+      return "http";
+    }
+
+    @Override
     public boolean emitMetrics(TimelineMetrics metrics) {
       super.init();
       return super.emitMetrics(metrics);
diff --git a/ambari-server/src/main/java/org/apache/ambari/server/controller/metrics/timeline/MetricsRequestHelper.java b/ambari-server/src/main/java/org/apache/ambari/server/controller/metrics/timeline/MetricsRequestHelper.java
index ce0fe6d..062c228 100644
--- a/ambari-server/src/main/java/org/apache/ambari/server/controller/metrics/timeline/MetricsRequestHelper.java
+++ b/ambari-server/src/main/java/org/apache/ambari/server/controller/metrics/timeline/MetricsRequestHelper.java
@@ -87,7 +87,7 @@ public class MetricsRequestHelper {
           uriBuilder.setParameter("precision", higherPrecision);
           String newSpec = uriBuilder.toString();
           connection = streamProvider.processURL(newSpec, HttpMethod.GET, (String) null,
-            Collections.<String, List<String>>emptyMap());
+            Collections.<String, List<String>>  emptyMap());
           if (!checkConnectionForPrecisionException(connection)) {
             throw new IOException("Encountered Precision exception : Higher precision request also failed.");
           }
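
This hunk sits inside Ambari's precision-fallback path: when the collector rejects a query, the same URI is retried with a higher "precision" query parameter. A rough Python sketch of that URI rewrite follows (the host, metric name, and precision value are placeholders):

    from urllib.parse import urlencode, urlparse, parse_qsl, urlunparse

    def with_precision(url, precision):
        # Replace or insert the 'precision' query parameter, keeping the rest of the URI.
        parts = urlparse(url)
        query = dict(parse_qsl(parts.query))
        query['precision'] = precision
        return urlunparse(parts._replace(query=urlencode(query)))

    url = "http://collector.example.com:6188/ws/v1/timeline/metrics?metricNames=cpu_user"
    print(with_precision(url, "HOURS"))
    # http://collector.example.com:6188/ws/v1/timeline/metrics?metricNames=cpu_user&precision=HOURS
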

[ambari] 32/39: AMBARI-22717 : Remove Anomaly Detection code from branch-3.0-ams. (avijayan)

Posted by av...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

avijayan pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/ambari.git

commit 541e2a5eca2e019607923c8a641dbb74bebe0258
Author: Aravindan Vijayan <av...@hortonworks.com>
AuthorDate: Tue Jan 2 11:34:16 2018 -0800

    AMBARI-22717 : Remove Anomaly Detection code from branch-3.0-ams. (avijayan)
---
 .../libraries/functions/package_conditions.py      |   4 -
 .../conf/unix/ambari-metrics-admanager             | 227 ---------
 .../conf/unix/config.yaml                          |  45 --
 .../conf/unix/log4j.properties                     |  31 --
 .../pom.xml                                        | 528 ---------------------
 .../src/main/assemblies/empty.xml                  |  21 -
 .../adservice/prototype/common/DataSeries.java     |  38 --
 .../adservice/prototype/common/ResultSet.java      |  43 --
 .../adservice/prototype/common/StatisticUtils.java |  59 ---
 .../prototype/core/AmbariServerInterface.java      | 119 -----
 .../prototype/core/MetricKafkaProducer.java        |  56 ---
 .../prototype/core/MetricSparkConsumer.java        | 244 ----------
 .../prototype/core/MetricsCollectorInterface.java  | 237 ---------
 .../prototype/core/PointInTimeADSystem.java        | 260 ----------
 .../adservice/prototype/core/RFunctionInvoker.java | 222 ---------
 .../adservice/prototype/core/TrendADSystem.java    | 317 -------------
 .../adservice/prototype/core/TrendMetric.java      |  33 --
 .../methods/AnomalyDetectionTechnique.java         |  30 --
 .../adservice/prototype/methods/MetricAnomaly.java |  84 ----
 .../adservice/prototype/methods/ema/EmaModel.java  | 131 -----
 .../prototype/methods/ema/EmaModelLoader.java      |  40 --
 .../prototype/methods/ema/EmaTechnique.java        | 151 ------
 .../prototype/methods/hsdev/HsdevTechnique.java    |  82 ----
 .../prototype/methods/kstest/KSTechnique.java      | 101 ----
 .../utilities/MetricAnomalyDetectorTestInput.java  | 126 -----
 .../testing/utilities/MetricAnomalyTester.java     | 151 ------
 .../utilities/TestMetricSeriesGenerator.java       |  92 ----
 .../testing/utilities/TestSeriesInputRequest.java  |  88 ----
 .../src/main/resources/R-scripts/ema.R             |  96 ----
 .../src/main/resources/R-scripts/hsdev.r           |  67 ---
 .../src/main/resources/R-scripts/iforest.R         |  52 --
 .../src/main/resources/R-scripts/kstest.r          |  38 --
 .../src/main/resources/R-scripts/test.R            |  85 ----
 .../src/main/resources/R-scripts/tukeys.r          |  51 --
 .../src/main/resources/hbase-site.xml              | 286 -----------
 .../src/main/resources/input-config.properties     |  42 --
 .../adservice/app/ADServiceScalaModule.scala       |  50 --
 .../adservice/app/AnomalyDetectionApp.scala        |  80 ----
 .../adservice/app/AnomalyDetectionAppConfig.scala  |  89 ----
 .../adservice/app/AnomalyDetectionAppModule.scala  |  47 --
 .../metrics/adservice/app/DefaultHealthCheck.scala |  25 -
 .../metrics/adservice/app/GuiceInjector.scala      |  56 ---
 .../configuration/AdServiceConfiguration.scala     |  40 --
 .../configuration/HBaseConfiguration.scala         |  59 ---
 .../MetricCollectorConfiguration.scala             |  54 ---
 .../MetricDefinitionDBConfiguration.scala          |  40 --
 .../MetricDefinitionServiceConfiguration.scala     |  31 --
 .../configuration/SparkConfiguration.scala         |  39 --
 .../adservice/db/AdAnomalyStoreAccessor.scala      |  36 --
 .../adservice/db/AdMetadataStoreAccessor.scala     |  53 ---
 .../adservice/db/AdMetadataStoreAccessorImpl.scala |  96 ----
 .../adservice/db/AdMetadataStoreConstants.scala    |  39 --
 .../metrics/adservice/db/ConnectionProvider.scala  |  45 --
 .../adservice/db/DefaultPhoenixDataSource.scala    |  79 ---
 .../metrics/adservice/db/MetadataDatasource.scala  |  79 ---
 .../adservice/db/PhoenixAnomalyStoreAccessor.scala | 195 --------
 .../adservice/db/PhoenixConnectionProvider.scala   |  66 ---
 .../adservice/db/PhoenixQueryConstants.scala       | 108 -----
 .../adservice/leveldb/LevelDBDatasource.scala      | 128 -----
 .../adservice/metadata/ADMetadataProvider.scala    | 147 ------
 .../metadata/InputMetricDefinitionParser.scala     |  58 ---
 .../adservice/metadata/MetricDefinition.scala      | 105 ----
 .../metadata/MetricDefinitionService.scala         |  78 ---
 .../metadata/MetricDefinitionServiceImpl.scala     | 242 ----------
 .../metrics/adservice/metadata/MetricKey.scala     |  53 ---
 .../metadata/MetricMetadataProvider.scala          |  31 --
 .../metadata/MetricSourceDefinition.scala          |  85 ----
 .../metadata/MetricSourceDefinitionType.scala      |  26 -
 .../adservice/model/AnomalyDetectionMethod.scala   |  23 -
 .../metrics/adservice/model/AnomalyType.scala      |  26 -
 .../adservice/model/MetricAnomalyInstance.scala    |  32 --
 .../model/PointInTimeAnomalyInstance.scala         |  46 --
 .../ambari/metrics/adservice/model/Range.scala     |  44 --
 .../ambari/metrics/adservice/model/Season.scala    | 122 -----
 .../metrics/adservice/model/SeasonType.scala       |  24 -
 .../ambari/metrics/adservice/model/TimeRange.scala |  39 --
 .../adservice/model/TrendAnomalyInstance.scala     |  44 --
 .../adservice/resource/AnomalyResource.scala       |  80 ----
 .../resource/MetricDefinitionResource.scala        | 109 -----
 .../metrics/adservice/resource/RootResource.scala  |  38 --
 .../adservice/resource/SubsystemResource.scala     |  26 -
 .../metrics/adservice/service/ADQueryService.scala |  34 --
 .../adservice/service/ADQueryServiceImpl.scala     |  56 ---
 .../adservice/service/AbstractADService.scala      |  44 --
 .../spark/prototype/MetricAnomalyDetector.scala    | 110 -----
 .../spark/prototype/SparkPhoenixReader.scala       |  73 ---
 .../adservice/prototype/TestEmaTechnique.java      | 106 -----
 .../adservice/prototype/TestRFunctionInvoker.java  | 161 -------
 .../metrics/adservice/prototype/TestTukeys.java    | 100 ----
 .../seriesgenerator/AbstractMetricSeries.java      |  25 -
 .../seriesgenerator/DualBandMetricSeries.java      |  88 ----
 .../MetricSeriesGeneratorFactory.java              | 377 ---------------
 .../seriesgenerator/MetricSeriesGeneratorTest.java | 101 ----
 .../seriesgenerator/MonotonicMetricSeries.java     | 101 ----
 .../seriesgenerator/NormalMetricSeries.java        |  81 ----
 .../SteadyWithTurbulenceMetricSeries.java          | 115 -----
 .../seriesgenerator/StepFunctionMetricSeries.java  | 107 -----
 .../seriesgenerator/UniformMetricSeries.java       |  95 ----
 .../src/test/resources/config.yaml                 |  35 --
 .../app/AnomalyDetectionAppConfigTest.scala        |  67 ---
 .../adservice/app/DefaultADResourceSpecTest.scala  |  57 ---
 .../adservice/app/DropwizardAppRuleHelper.scala    |  39 --
 .../app/DropwizardResourceTestRuleHelper.scala     |  33 --
 .../db/PhoenixAnomalyStoreAccessorTest.scala       |  26 -
 .../adservice/leveldb/LevelDBDataSourceTest.scala  |  57 ---
 .../metadata/AMSMetadataProviderTest.scala         |  49 --
 .../metadata/MetricDefinitionServiceTest.scala     | 130 -----
 .../metadata/MetricSourceDefinitionTest.scala      |  91 ----
 .../ambari/metrics/adservice/model/RangeTest.scala |  38 --
 .../metrics/adservice/model/SeasonTest.scala       |  92 ----
 ambari-metrics/ambari-metrics-assembly/pom.xml     | 148 ------
 .../src/main/assembly/anomaly-detection.xml        |  60 ---
 .../package/rpm/anomaly-detection/postinstall.sh   |  27 --
 .../ambari-metrics-grafana/src/main/scripted.js    | 118 -----
 .../ambari-metrics-timelineservice/pom.xml         |   2 +-
 .../timeline/HBaseTimelineMetricsService.java      |  65 ---
 .../metrics/timeline/PhoenixHBaseAccessor.java     | 115 -----
 .../metrics/timeline/TimelineMetricStore.java      |   5 -
 .../metrics/timeline/query/PhoenixTransactSQL.java |  93 ----
 .../webapp/TimelineWebServices.java                |  44 --
 .../metrics/timeline/TestTimelineMetricStore.java  |  10 -
 ambari-metrics/pom.xml                             |   3 -
 .../AMBARI_METRICS/0.1.0/alerts.json               |  70 ---
 .../0.1.0/configuration/ams-admanager-config.xml   | 115 -----
 .../0.1.0/configuration/ams-admanager-env.xml      | 107 -----
 .../0.1.0/configuration/ams-admanager-log4j.xml    |  86 ----
 .../configuration/ams-admanager-spark-env.xml      | 129 -----
 .../AMBARI_METRICS/0.1.0/metainfo.xml              |  30 --
 .../alerts/alert_point_in_time_metric_anomalies.py | 185 --------
 .../package/alerts/alert_trend_metric_anomalies.py | 185 --------
 .../AMBARI_METRICS/0.1.0/package/scripts/ams.py    |  64 ---
 .../0.1.0/package/scripts/ams_admanager.py         |  73 ---
 .../AMBARI_METRICS/0.1.0/package/scripts/params.py |  38 --
 .../AMBARI_METRICS/0.1.0/package/scripts/status.py |   2 -
 .../0.1.0/package/scripts/status_params.py         |   2 -
 .../package/templates/admanager_config.yaml.j2     |  44 --
 .../stacks/2.0.6/AMBARI_METRICS/test_admanager.py  | 106 -----
 .../test/python/stacks/2.0.6/configs/default.json  |  13 -
 .../stacks/2.0.6/configs/default_ams_embedded.json |  13 -
 139 files changed, 1 insertion(+), 11728 deletions(-)

diff --git a/ambari-common/src/main/python/resource_management/libraries/functions/package_conditions.py b/ambari-common/src/main/python/resource_management/libraries/functions/package_conditions.py
index 64cda98..ebc1aba 100644
--- a/ambari-common/src/main/python/resource_management/libraries/functions/package_conditions.py
+++ b/ambari-common/src/main/python/resource_management/libraries/functions/package_conditions.py
@@ -50,10 +50,6 @@ def should_install_phoenix():
   has_phoenix = len(phoenix_hosts) > 0
   return phoenix_enabled or has_phoenix
 
-def should_install_ams_admanager():
-  config = Script.get_config()
-  return _has_applicable_local_component(config, ["AD_MANAGER"])
-
 def should_install_ams_collector():
   config = Script.get_config()
   return _has_applicable_local_component(config, ["METRICS_COLLECTOR"])
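
The removed should_install_ams_admanager() followed the same pattern as the surviving functions here: install the package only when one of the named components is mapped to the local host. A hypothetical stand-in for _has_applicable_local_component (the real helper lives elsewhere in resource_management) shows the shape of the check:

    def has_applicable_local_component(config, components):
        # True when any of the given component names is assigned to this host.
        local = config.get('localComponents', [])
        return any(component in local for component in components)

    cfg = {'localComponents': ['METRICS_COLLECTOR', 'METRICS_MONITOR']}
    print(has_applicable_local_component(cfg, ['AD_MANAGER']))  # False -> skip install
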
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/conf/unix/ambari-metrics-admanager b/ambari-metrics/ambari-metrics-anomaly-detection-service/conf/unix/ambari-metrics-admanager
deleted file mode 100644
index 98b7606..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/conf/unix/ambari-metrics-admanager
+++ /dev/null
@@ -1,227 +0,0 @@
-#!/usr/bin/env bash
-
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and limitations under the License.
-
-PIDFILE=/var/run/ambari-metrics-anomaly-detection/ambari-metrics-admanager.pid
-OUTFILE=/var/log/ambari-metrics-anomaly-detection/ambari-metrics-admanager.out
-
-CONF_DIR=/etc/ambari-metrics-anomaly-detection/conf
-DAEMON_NAME=ams_admanager
-SPARK_HOME=/usr/lib/ambari-metrics-anomaly-detection/spark
-
-SPARK_MASTER_PID=/var/run/ambari-metrics-anomaly-detection/spark-ams-org.apache.spark.deploy.master.Master.pid
-
-STOP_TIMEOUT=10
-
-function spark_daemon
-{
-    local cmd=$1
-    local pid
-
-    if [[ "${cmd}" == "start" ]]
-      then
-
-        ${SPARK_HOME}/sbin/start-master.sh
-        sleep 2
-        master_pid=$(cat "$SPARK_MASTER_PID")
-        if [ -z "`ps ax | grep -w ${master_pid} | grep org.apache.spark.deploy.master.Master`" ]; then
-          echo "ERROR: Spark Master start failed. For more details, see outfile in log directory."
-          exit -1
-        fi
-
-        ${SPARK_HOME}/sbin/start-slave.sh spark://${SPARK_MASTER_HOST}:${SPARK_MASTER_PORT}
-    elif [[ "${cmd}" == "stop" ]]
-      then
-        ${SPARK_HOME}/sbin/stop-slave.sh
-        ${SPARK_HOME}/sbin/stop-master.sh
-    else
-        pid=${SPARK_MASTER_PID}
-        daemon_status "${pid}"
-    fi
-
-}
-
-function write_pidfile
-{
-    local pidfile="$1"
-    echo $! > "${pidfile}" 2>/dev/null
-    if [[ $? -gt 0 ]]; then
-      echo "ERROR:  Cannot write pid ${pidfile}." | tee -a $STARTUPFILE
-      exit 1;
-    fi
-}
-
-function java_setup
-{
-  # Bail if we did not detect it
-  if [[ -z "${JAVA_HOME}" ]]; then
-    echo "ERROR: JAVA_HOME is not set and could not be found."
-    exit 1
-  fi
-
-  if [[ ! -d "${JAVA_HOME}" ]]; then
-    echo "ERROR: JAVA_HOME ${JAVA_HOME} does not exist."
-    exit 1
-  fi
-
-  JAVA="${JAVA_HOME}/bin/java"
-
-  if [[ ! -x "$JAVA" ]]; then
-    echo "ERROR: $JAVA is not executable."
-    exit 1
-  fi
-}
-
-function daemon_status()
-{
-  local pidfile="$1"
-  shift
-
-  local pid
-
-  if [[ -f "${pidfile}" ]]; then
-    pid=$(cat "${pidfile}")
-    if ps -p "${pid}" > /dev/null 2>&1; then
-      return 0
-    fi
-    return 1
-  fi
-  return 3
-}
-
-function start()
-{
-  java_setup
-
-
-  if [[ "${AMS_AD_STANDALONE_SPARK_ENABLED}" == "true" || "${AMS_AD_STANDALONE_SPARK_ENABLED}" == "True" ]]
-  then
-    spark_daemon "start"
-  fi
-
-  daemon_status "${PIDFILE}"
-  if [[ $? == 0  ]]; then
-    echo "AMS AD Manager is running as process $(cat "${PIDFILE}"). Exiting" | tee -a $STARTUPFILE
-    exit 0
-  else
-    # stale pid file, so just remove it and continue on
-    rm -f "${PIDFILE}" >/dev/null 2>&1
-  fi
-
-  nohup "${JAVA}" "-Xms$AMS_AD_HEAPSIZE" "-Xmx$AMS_AD_HEAPSIZE" ${AMS_AD_OPTS} "-Dlog4j.configuration=file://$CONF_DIR/log4j.properties" "-jar" "/usr/lib/ambari-metrics-anomaly-detection/ambari-metrics-anomaly-detection-service.jar" "server" "${CONF_DIR}/config.yaml" "$@" > $OUTFILE 2>&1 &
-  PID=$!
-  write_pidfile "${PIDFILE}"
-  sleep 2
-
-  echo "Verifying ${DAEMON_NAME} process status..."
-  if [ -z "`ps ax -o pid | grep ${PID}`" ]; then
-    if [ -s ${OUTFILE} ]; then
-      echo "ERROR: ${DAEMON_NAME} start failed. For more details, see ${OUTFILE}:"
-      echo "===================="
-      tail -n 10 ${OUTFILE}
-      echo "===================="
-    else
-      echo "ERROR: ${DAEMON_NAME} start failed"
-      rm -f ${PIDFILE}
-    fi
-    echo "Anomaly Detection Manager out at: ${OUTFILE}"
-    exit -1
-  fi
-
-  rm -f $STARTUPFILE #Deleting startup file
-  echo "Anomaly Detection Manager successfully started."
-  }
-
-function stop()
-{
-  pidfile=${PIDFILE}
-
-  if [[ -f "${pidfile}" ]]; then
-    pid=$(cat "$pidfile")
-
-    kill "${pid}" >/dev/null 2>&1
-    sleep "${STOP_TIMEOUT}"
-
-    if kill -0 "${pid}" > /dev/null 2>&1; then
-      echo "WARNING: ${DAEMON_NAME} did not stop gracefully after ${STOP_TIMEOUT} seconds: Trying to kill with kill -9"
-      kill -9 "${pid}" >/dev/null 2>&1
-    fi
-
-    if ps -p "${pid}" > /dev/null 2>&1; then
-      echo "ERROR: Unable to kill ${pid}"
-    else
-      rm -f "${pidfile}" >/dev/null 2>&1
-    fi
-  fi
-
-  # Always try to stop Spark: if the user has flipped the Spark mode to 'yarn', the standalone-enabled flag becomes stale.
-  spark_daemon "stop"
-}
-
-# execute ams-admanager-env.sh
-if [[ -f "${CONF_DIR}/ams-admanager-env.sh" ]]; then
-  . "${CONF_DIR}/ams-admanager-env.sh"
-else
-  echo "ERROR: Cannot execute ${CONF_DIR}/ams-admanager-env.sh." 2>&1
-  exit 1
-fi
-
-if [[ -f "${CONF_DIR}/ams-admanager-spark-env.sh" ]]; then
-  . "${CONF_DIR}/ams-admanager-spark-env.sh"
-else
-  echo "ERROR: Cannot execute ${CONF_DIR}/ams-admanager-spark-env.sh." 2>&1
-  exit 1
-fi
-
-# set these env variables only if they were not set by ams-admanager-env.sh
-: ${AMS_AD_LOG_DIR:=/var/log/ambari-metrics-anomaly-detection}
-: ${AMS_AD_STANDALONE_SPARK_ENABLED:=true}
-
-# set pid dir path
-if [[ -n "${AMS_AD_PID_DIR}" ]]; then
-  PIDFILE=${AMS_AD_PID_DIR}/ambari-metrics-admanager.pid
-  SPARK_MASTER_PID=${AMS_AD_PID_DIR}/spark-${USER}-org.apache.spark.deploy.master.Master-1.pid
-fi
-
-# set out file path
-if [[ -n "${AMS_AD_LOG_DIR}" ]]; then
-  OUTFILE=${AMS_AD_LOG_DIR}/ambari-metrics-admanager.out
-fi
-
-#TODO manage 3 hbase daemons for start/stop/status
-case "$1" in
-
-	start)
-    start
-
-    ;;
-	stop)
-    stop
-
-    ;;
-	status)
-	    daemon_status "${PIDFILE}"
-	    if [[ $? == 0  ]]; then
-            echo "AMS AD Manager is running as process $(cat "${PIDFILE}")."
-        else
-            echo "AMS AD Manager is not running."
-        fi
-    ;;
-	restart)
-	  stop
-	  start
-	;;
-
-esac
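
The start path of the deleted wrapper hinges on daemon_status(): exit status 0 means a live process, 1 a stale pidfile (which start removes before continuing), and 3 no pidfile at all. The same check, sketched in Python for clarity (signal 0 probes a PID without delivering anything):

    import os

    def daemon_status(pidfile):
        # 0 = running, 1 = stale pidfile, 3 = no pidfile (mirrors the shell function).
        if not os.path.isfile(pidfile):
            return 3
        pid = int(open(pidfile).read().strip())
        try:
            os.kill(pid, 0)       # probe only; no signal is actually sent
            return 0
        except ProcessLookupError:
            return 1              # the recorded pid is no longer running
        except PermissionError:
            return 0              # process exists but belongs to another user
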
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/conf/unix/config.yaml b/ambari-metrics/ambari-metrics-anomaly-detection-service/conf/unix/config.yaml
deleted file mode 100644
index 85e4004..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/conf/unix/config.yaml
+++ /dev/null
@@ -1,45 +0,0 @@
-#Licensed under the Apache License, Version 2.0 (the "License");
-#you may not use this file except in compliance with the License.
-#You may obtain a copy of the License at
-#
-#    http://www.apache.org/licenses/LICENSE-2.0
-#
-#Unless required by applicable law or agreed to in writing, software
-#distributed under the License is distributed on an "AS IS" BASIS,
-#WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-#See the License for the specific language governing permissions and
-#limitations under the License.
-
-server:
-  applicationConnectors:
-   - type: http
-     port: 9999
-  requestLog:
-    type: external
-
-logging:
-  type: external
-
-metricDefinitionService:
-  inputDefinitionDirectory: /etc/ambari-metrics-anomaly-detection/conf/definitionDirectory
-
-metricsCollector:
-  hosts: host1,host2
-  port: 6188
-  protocol: http
-  metadataEndpoint: /ws/v1/timeline/metrics/metadata/key
-
-adQueryService:
-  anomalyDataTtl: 604800
-
-metricDefinitionDB:
-  # force checksum verification of all data that is read from the file system on behalf of a particular read
-  verifyChecksums: true
-  # raise an error as soon as it detects an internal corruption
-  performParanoidChecks: false
-  # Path to Level DB directory
-  dbDirPath: /tmp/ambari-metrics-anomaly-detection/db
-
-spark:
-  mode: standalone
-  masterHostPort: localhost:7077
\ No newline at end of file
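
Note that metricsCollector.hosts in the removed config is a single comma-separated string rather than a YAML list, so consumers have to split it themselves. A small PyYAML sketch of reading that shape (values are the defaults from the file above):

    import textwrap
    import yaml

    text = textwrap.dedent("""\
        metricsCollector:
          hosts: host1,host2
          port: 6188
          protocol: http
        """)
    mc = yaml.safe_load(text)['metricsCollector']
    urls = ["%s://%s:%d" % (mc['protocol'], h.strip(), mc['port'])
            for h in mc['hosts'].split(',')]
    print(urls)  # ['http://host1:6188', 'http://host2:6188']
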
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/conf/unix/log4j.properties b/ambari-metrics/ambari-metrics-anomaly-detection-service/conf/unix/log4j.properties
deleted file mode 100644
index 9dba1da..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/conf/unix/log4j.properties
+++ /dev/null
@@ -1,31 +0,0 @@
-#
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-#
-
-# Define some default values that can be overridden by system properties
-# Root logger option
-log4j.rootLogger=INFO,file
-
-# Direct log messages to a log file
-log4j.appender.file=org.apache.log4j.RollingFileAppender
-log4j.appender.file.File=/var/log/ambari-metrics-anomaly-detection/ambari-metrics-admanager.log
-log4j.appender.file.MaxFileSize=80MB
-log4j.appender.file.MaxBackupIndex=60
-log4j.appender.file.layout=org.apache.log4j.PatternLayout
-log4j.appender.file.layout.ConversionPattern=%d{ABSOLUTE} %5p [%t] %c{1}:%L - %m%n
-
-
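
The MaxFileSize=80MB / MaxBackupIndex=60 pair above is presumably what the ams_ad_log_max_backup_size (80) and ams_ad_log_number_of_backup_files (60) params introduced earlier fed through the stack's ams-admanager-log4j template. A trivial rendering sketch (the template string here is hypothetical, standing in for the real Jinja2 template):

    template = ("log4j.appender.file.MaxFileSize={size}MB\n"
                "log4j.appender.file.MaxBackupIndex={count}")
    print(template.format(size=80, count=60))
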
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/pom.xml b/ambari-metrics/ambari-metrics-anomaly-detection-service/pom.xml
deleted file mode 100644
index 50d7ef6..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/pom.xml
+++ /dev/null
@@ -1,528 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<!--
-  ~ Licensed to the Apache Software Foundation (ASF) under one
-  ~ or more contributor license agreements.  See the NOTICE file
-  ~ distributed with this work for additional information
-  ~ regarding copyright ownership.  The ASF licenses this file
-  ~ to you under the Apache License, Version 2.0 (the
-  ~ "License"); you may not use this file except in compliance
-  ~ with the License.  You may obtain a copy of the License at
-  ~
-  ~     http://www.apache.org/licenses/LICENSE-2.0
-  ~
-  ~ Unless required by applicable law or agreed to in writing, software
-  ~ distributed under the License is distributed on an "AS IS" BASIS,
-  ~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  ~ See the License for the specific language governing permissions and
-  ~ limitations under the License.
-  -->
-
-<project xmlns="http://maven.apache.org/POM/4.0.0"
-         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-  <parent>
-    <artifactId>ambari-metrics</artifactId>
-    <groupId>org.apache.ambari</groupId>
-    <version>2.0.0.0-SNAPSHOT</version>
-  </parent>
-  <modelVersion>4.0.0</modelVersion>
-  <artifactId>ambari-metrics-anomaly-detection-service</artifactId>
-  <version>2.0.0.0-SNAPSHOT</version>
-  <name>Ambari Metrics Anomaly Detection Service</name>
-  <packaging>jar</packaging>
-
-  <properties>
-    <scala.version>2.12.3</scala.version>
-    <scala.binary.version>2.11</scala.binary.version>
-    <jackson.version>2.9.1</jackson.version>
-    <dropwizard.version>1.2.0</dropwizard.version>
-    <spark.version>2.1.1</spark.version>
-    <hadoop.version>2.7.3.2.6.0.3-8</hadoop.version>
-    <hbase.version>1.1.2.2.6.0.3-8</hbase.version>
-    <phoenix.version>4.7.0.2.6.0.3-8</phoenix.version>
-  </properties>
-  
-  <repositories>
-    <repository>
-      <id>scala-tools.org</id>
-      <name>Scala-Tools Maven2 Repository</name>
-      <url>http://scala-tools.org/repo-releases</url>
-    </repository>
-  </repositories>
-
-  <pluginRepositories>
-    <pluginRepository>
-      <id>scala-tools.org</id>
-      <name>Scala-Tools Maven2 Repository</name>
-      <url>http://scala-tools.org/repo-releases</url>
-    </pluginRepository>
-  </pluginRepositories>
-
-  <build>
-    <finalName>${project.artifactId}-${project.version}</finalName>
-    <resources>
-      <resource>
-        <filtering>true</filtering>
-        <directory>src/main/resources</directory>
-        <includes>
-          <include>**/*.yml</include>
-          <include>**/*.xml</include>
-          <include>**/*.txt</include>
-        </includes>
-      </resource>
-    </resources>
-    <plugins>
-      <plugin>
-        <artifactId>maven-compiler-plugin</artifactId>
-        <configuration>
-          <source>1.8</source>
-          <target>1.8</target>
-        </configuration>
-      </plugin>
-      <plugin>
-        <groupId>net.alchim31.maven</groupId>
-        <artifactId>scala-maven-plugin</artifactId>
-        <version>3.3.1</version>
-        <executions>
-          <execution>
-            <id>scala-compile-first</id>
-            <phase>process-resources</phase>
-            <goals>
-              <goal>add-source</goal>
-              <goal>compile</goal>
-            </goals>
-          </execution>
-          <execution>
-            <id>scala-test-compile</id>
-            <phase>process-test-resources</phase>
-            <goals>
-              <goal>testCompile</goal>
-            </goals>
-          </execution>
-        </executions>
-        <configuration>
-          <jvmArgs>
-            <jvmArg>-Xms512m</jvmArg>
-            <jvmArg>-Xmx2048m</jvmArg>
-          </jvmArgs>
-        </configuration>
-      </plugin>
-      <plugin>
-        <groupId>org.scalatest</groupId>
-        <artifactId>scalatest-maven-plugin</artifactId>
-        <version>1.0</version>
-      </plugin>
-      <plugin>
-        <groupId>org.scala-tools</groupId>
-        <artifactId>maven-scala-plugin</artifactId>
-        <executions>
-          <execution>
-            <goals>
-              <goal>compile</goal>
-              <goal>testCompile</goal>
-            </goals>
-          </execution>
-        </executions>
-        <configuration>
-          <scalaVersion>${scala.version}</scalaVersion>
-        </configuration>
-      </plugin>
-      <plugin>
-        <groupId>org.apache.maven.plugins</groupId>
-        <artifactId>maven-jar-plugin</artifactId>
-        <version>2.5</version>
-      </plugin>
-      <plugin>
-        <groupId>org.apache.maven.plugins</groupId>
-        <artifactId>maven-shade-plugin</artifactId>
-        <version>3.1.0</version>
-        <configuration>
-          <createDependencyReducedPom>false</createDependencyReducedPom>
-          <!--<minimizeJar>true</minimizeJar>-->
-          <filters>
-            <filter>
-              <artifact>*:*</artifact>
-              <excludes>
-                <exclude>META-INF/*.SF</exclude>
-                <exclude>META-INF/*.DSA</exclude>
-                <exclude>META-INF/*.RSA</exclude>
-              </excludes>
-            </filter>
-            <filter>
-              <artifact>org.apache.phoenix:phoenix-core</artifact>
-              <excludes>
-                <exclude>org/joda/time/**</exclude>
-                <exclude>com/codahale/metrics/**</exclude>
-                <exclude>com/google/common/collect/**</exclude>
-              </excludes>
-            </filter>
-            <filter>
-              <artifact>*:*</artifact>
-              <excludes>
-                <exclude>com/sun/jersey/**</exclude>
-              </excludes>
-            </filter>
-          </filters>
-        </configuration>
-        <executions>
-          <execution>
-            <phase>package</phase>
-            <goals>
-              <goal>shade</goal>
-            </goals>
-            <configuration>
-              <transformers>
-                <transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
-                <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
-                  <mainClass>
-                    org.apache.ambari.metrics.adservice.app.AnomalyDetectionApp
-                  </mainClass>
-                </transformer>
-              </transformers>
-            </configuration>
-          </execution>
-        </executions>
-      </plugin>
-      <plugin>
-        <groupId>org.apache.maven.plugins</groupId>
-        <artifactId>maven-antrun-plugin</artifactId>
-        <version>1.7</version>
-        <executions>
-          <execution>
-            <phase>generate-resources</phase>
-            <goals>
-              <goal>run</goal>
-            </goals>
-            <configuration>
-              <target name="Download Spark">
-                <mkdir dir="${project.build.directory}/embedded"/>
-                <get
-                        src="${spark.tar}"
-                        dest="${project.build.directory}/embedded/spark.tar.gz"
-                        usetimestamp="true"
-                />
-                <untar
-                        src="${project.build.directory}/embedded/spark.tar.gz"
-                        dest="${project.build.directory}/embedded"
-                        compression="gzip"
-                />
-                <move
-                        todir="${project.build.directory}/embedded/spark" >
-                        <fileset dir="${project.build.directory}/embedded/${spark.folder}" includes="**"/>
-                </move>
-              </target>
-            </configuration>
-          </execution>
-        </executions>
-      </plugin>
-    </plugins>
-  </build>
-
-  <dependencies>
-    <dependency>
-      <groupId>commons-lang</groupId>
-      <artifactId>commons-lang</artifactId>
-      <version>2.5</version>
-    </dependency>
-    <dependency>
-      <groupId>org.slf4j</groupId>
-      <artifactId>slf4j-api</artifactId>
-      <version>1.7.2</version>
-    </dependency>
-    <dependency>
-      <groupId>com.github.lucarosellini.rJava</groupId>
-      <artifactId>JRI</artifactId>
-      <version>0.9-7</version>
-    </dependency>
-    <dependency>
-      <groupId>org.apache.spark</groupId>
-      <artifactId>spark-streaming_${scala.binary.version}</artifactId>
-      <version>${spark.version}</version>
-    </dependency>
-    <dependency>
-      <groupId>org.apache.kafka</groupId>
-      <artifactId>kafka_2.10</artifactId>
-      <version>0.10.1.0</version>
-      <exclusions>
-        <exclusion>
-          <groupId>com.sun.jdmk</groupId>
-          <artifactId>jmxtools</artifactId>
-        </exclusion>
-        <exclusion>
-          <groupId>com.sun.jmx</groupId>
-          <artifactId>jmxri</artifactId>
-        </exclusion>
-        <exclusion>
-          <groupId>javax.mail</groupId>
-          <artifactId>mail</artifactId>
-        </exclusion>
-        <exclusion>
-          <groupId>javax.jms</groupId>
-          <artifactId>jmx</artifactId>
-        </exclusion>
-        <exclusion>
-          <groupId>javax.jms</groupId>
-          <artifactId>jms</artifactId>
-        </exclusion>
-      </exclusions>
-    </dependency>
-    <dependency>
-      <groupId>org.apache.kafka</groupId>
-      <artifactId>kafka-clients</artifactId>
-      <version>0.10.1.0</version>
-    </dependency>
-    <dependency>
-      <groupId>com.fasterxml.jackson.core</groupId>
-      <artifactId>jackson-databind</artifactId>
-      <version>${jackson.version}</version>
-    </dependency>
-    <dependency>
-      <groupId>org.apache.kafka</groupId>
-      <artifactId>connect-json</artifactId>
-      <version>0.10.1.0</version>
-      <exclusions>
-        <exclusion>
-          <artifactId>jackson-databind</artifactId>
-          <groupId>com.fasterxml.jackson.core</groupId>
-        </exclusion>
-      </exclusions>
-    </dependency>
-    <dependency>
-      <groupId>org.apache.spark</groupId>
-      <artifactId>spark-streaming-kafka_2.10</artifactId>
-      <version>1.6.3</version>
-    </dependency>
-    <dependency>
-      <groupId>org.apache.phoenix</groupId>
-      <artifactId>phoenix-core</artifactId>
-      <version>${phoenix.version}</version>
-      <exclusions>
-        <exclusion>
-          <groupId>org.apache.hadoop</groupId>
-          <artifactId>hadoop-common</artifactId>
-        </exclusion>
-        <exclusion>
-          <groupId>org.apache.hadoop</groupId>
-          <artifactId>hadoop-annotations</artifactId>
-        </exclusion>
-        <exclusion>
-          <artifactId>jersey-core</artifactId>
-          <groupId>com.sun.jersey</groupId>
-        </exclusion>
-        <exclusion>
-          <artifactId>jersey-server</artifactId>
-          <groupId>com.sun.jersey</groupId>
-        </exclusion>
-      </exclusions>
-    </dependency>
-    <dependency>
-      <groupId>org.apache.ambari</groupId>
-      <artifactId>ambari-metrics-common</artifactId>
-      <version>${project.version}</version>
-    </dependency>
-    <dependency>
-      <groupId>org.apache.httpcomponents</groupId>
-      <artifactId>httpclient</artifactId>
-      <version>4.2.5</version>
-    </dependency>
-    <dependency>
-      <groupId>org.scala-lang</groupId>
-      <artifactId>scala-library</artifactId>
-      <version>${scala.version}</version>
-    </dependency>
-    <dependency>
-      <groupId>org.apache.spark</groupId>
-      <artifactId>spark-core_${scala.binary.version}</artifactId>
-      <version>${spark.version}</version>
-      <scope>provided</scope>
-      <exclusions>
-        <exclusion>
-          <groupId>com.fasterxml.jackson.module</groupId>
-          <artifactId>jackson-module-scala_2.11</artifactId>
-        </exclusion>
-      </exclusions>
-    </dependency>
-    <dependency>
-      <groupId>org.apache.spark</groupId>
-      <artifactId>spark-mllib_${scala.binary.version}</artifactId>
-      <version>${spark.version}</version>
-      <scope>provided</scope>
-      <exclusions>
-        <exclusion>
-          <groupId>com.fasterxml.jackson.core</groupId>
-          <artifactId>jackson-databind</artifactId>
-        </exclusion>
-      </exclusions>
-    </dependency>
-    <dependency>
-      <groupId>org.apache.hadoop</groupId>
-      <artifactId>hadoop-common</artifactId>
-      <version>${hadoop.version}</version>
-      <exclusions>
-        <exclusion>
-          <groupId>commons-el</groupId>
-          <artifactId>commons-el</artifactId>
-        </exclusion>
-        <exclusion>
-          <groupId>tomcat</groupId>
-          <artifactId>jasper-runtime</artifactId>
-        </exclusion>
-        <exclusion>
-          <groupId>tomcat</groupId>
-          <artifactId>jasper-compiler</artifactId>
-        </exclusion>
-        <exclusion>
-          <groupId>org.mortbay.jetty</groupId>
-          <artifactId>jsp-2.1-jetty</artifactId>
-        </exclusion>
-        <exclusion>
-          <artifactId>jersey-server</artifactId>
-          <groupId>com.sun.jersey</groupId>
-        </exclusion>
-        <exclusion>
-          <artifactId>jersey-core</artifactId>
-          <groupId>com.sun.jersey</groupId>
-        </exclusion>
-        <exclusion>
-          <artifactId>jersey-json</artifactId>
-          <groupId>com.sun.jersey</groupId>
-        </exclusion>
-      </exclusions>
-    </dependency>
-    <dependency>
-      <groupId>org.scalatest</groupId>
-      <artifactId>scalatest_2.12</artifactId>
-      <version>3.0.1</version>
-      <scope>test</scope>
-    </dependency>
-    <dependency>
-      <groupId>io.dropwizard</groupId>
-      <artifactId>dropwizard-core</artifactId>
-      <version>${dropwizard.version}</version>
-      <exclusions>
-        <exclusion>
-          <groupId>org.glassfish.hk2.external</groupId>
-          <artifactId>javax.inject</artifactId>
-        </exclusion>
-        <exclusion>
-          <groupId>org.glassfish.hk2.external</groupId>
-          <artifactId>aopalliance-repackaged</artifactId>
-        </exclusion>
-        <exclusion>
-          <groupId>ch.qos.logback</groupId>
-          <artifactId>logback-classic</artifactId>
-        </exclusion>
-        <exclusion>
-          <groupId>ch.qos.logback</groupId>
-          <artifactId>logback-access</artifactId>
-        </exclusion>
-        <exclusion>
-          <groupId>org.slf4j</groupId>
-          <artifactId>log4j-over-slf4j</artifactId>
-        </exclusion>
-        <exclusion>
-          <artifactId>jersey-server</artifactId>
-          <groupId>org.glassfish.jersey.core</groupId>
-        </exclusion>
-      </exclusions>
-    </dependency>
-    <dependency>
-      <groupId>log4j</groupId>
-      <artifactId>log4j</artifactId>
-      <version>1.2.17</version>
-    </dependency>
-    <dependency>
-      <groupId>org.slf4j</groupId>
-      <artifactId>slf4j-log4j12</artifactId>
-      <version>1.7.21</version>
-    </dependency>
-    <dependency>
-      <groupId>io.dropwizard</groupId>
-      <artifactId>dropwizard-testing</artifactId>
-      <version>${dropwizard.version}</version>
-      <scope>test</scope>
-      <exclusions>
-        <exclusion>
-          <groupId>org.glassfish.hk2.external</groupId>
-          <artifactId>javax.inject</artifactId>
-        </exclusion>
-      </exclusions>
-    </dependency>
-    <dependency>
-      <groupId>joda-time</groupId>
-      <artifactId>joda-time</artifactId>
-      <version>2.9.4</version>
-    </dependency>
-    <dependency>
-      <groupId>org.joda</groupId>
-      <artifactId>joda-convert</artifactId>
-      <version>1.8.1</version>
-    </dependency>
-    <dependency>
-      <groupId>com.google.inject</groupId>
-      <artifactId>guice</artifactId>
-      <version>4.1.0</version>
-    </dependency>
-    <dependency>
-      <groupId>com.google.inject.extensions</groupId>
-      <artifactId>guice-multibindings</artifactId>
-      <version>4.1.0</version>
-    </dependency>
-    <dependency>
-      <groupId>com.fasterxml.jackson.module</groupId>
-      <artifactId>jackson-module-scala_2.12</artifactId>
-      <version>${jackson.version}</version>
-    </dependency>
-    <dependency>
-      <groupId>com.fasterxml.jackson.datatype</groupId>
-      <artifactId>jackson-datatype-jdk8</artifactId>
-      <version>${jackson.version}</version>
-    </dependency>
-
-    <dependency>
-      <groupId>org.fusesource.leveldbjni</groupId>
-      <artifactId>leveldbjni-all</artifactId>
-      <version>1.8</version>
-    </dependency>
-    <dependency>
-      <groupId>org.iq80.leveldb</groupId>
-      <artifactId>leveldb</artifactId>
-      <version>0.9</version>
-    </dependency>
-    <!-- https://mvnrepository.com/artifact/org.scalaj/scalaj-http -->
-    <dependency>
-      <groupId>org.scalaj</groupId>
-      <artifactId>scalaj-http_2.12</artifactId>
-      <version>2.3.0</version>
-    </dependency>
-
-    <dependency>
-      <groupId>junit</groupId>
-      <artifactId>junit</artifactId>
-      <version>4.12</version>
-      <scope>test</scope>
-    </dependency>
-    <dependency>
-      <groupId>com.google.guava</groupId>
-      <artifactId>guava</artifactId>
-      <version>18.0</version>
-    </dependency>
-    <dependency>
-      <groupId>io.dropwizard.metrics</groupId>
-      <artifactId>metrics-core</artifactId>
-      <version>3.2.5</version>
-    </dependency>
-    <dependency>
-      <groupId>org.easymock</groupId>
-      <artifactId>easymock</artifactId>
-      <version>2.5</version>
-      <scope>test</scope>
-    </dependency>
-    <dependency>
-      <groupId>org.mockito</groupId>
-      <artifactId>mockito-all</artifactId>
-      <version>1.8.4</version>
-      <scope>test</scope>
-    </dependency>
-  </dependencies>
-</project>
\ No newline at end of file
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/assemblies/empty.xml b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/assemblies/empty.xml
deleted file mode 100644
index 35738b1..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/assemblies/empty.xml
+++ /dev/null
@@ -1,21 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one
-  or more contributor license agreements.  See the NOTICE file
-  distributed with this work for additional information
-  regarding copyright ownership.  The ASF licenses this file
-  to you under the Apache License, Version 2.0 (the
-  "License"); you may not use this file except in compliance
-  with the License.  You may obtain a copy of the License at
-  
-       http://www.apache.org/licenses/LICENSE-2.0
-  
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-<assembly>
-    <id>empty</id>
-    <formats/>
-</assembly>
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/common/DataSeries.java b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/common/DataSeries.java
deleted file mode 100644
index 54b402f..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/common/DataSeries.java
+++ /dev/null
@@ -1,38 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.ambari.metrics.adservice.prototype.common;
-
-import java.util.Arrays;
-
-public class DataSeries {
-
-    public String seriesName;
-    public double[] ts;
-    public double[] values;
-
-    public DataSeries(String seriesName, double[] ts, double[] values) {
-        this.seriesName = seriesName;
-        this.ts = ts;
-        this.values = values;
-    }
-
-    @Override
-    public String toString() {
-        return seriesName + Arrays.toString(ts) + Arrays.toString(values);
-    }
-}
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/common/ResultSet.java b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/common/ResultSet.java
deleted file mode 100644
index dd3038f..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/common/ResultSet.java
+++ /dev/null
@@ -1,43 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.ambari.metrics.adservice.prototype.common;
-
-
-import java.util.ArrayList;
-import java.util.List;
-
-public class ResultSet {
-
-    public List<double[]> resultset = new ArrayList<>();
-
-    public ResultSet(List<double[]> resultset) {
-        this.resultset = resultset;
-    }
-
-    public void print() {
-        System.out.println("Result : ");
-        if (!resultset.isEmpty()) {
-            for (int i = 0; i<resultset.get(0).length;i++) {
-                for (double[] entity : resultset) {
-                    System.out.print(entity[i] + " ");
-                }
-                System.out.println();
-            }
-        }
-    }
-}
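
ResultSet.print() above walks the arrays column-major: resultset holds one double[] per series, and iteration i prints the i-th element of every series on one line. Equivalent Python, for clarity:

    resultset = [[1.0, 2.0], [10.0, 20.0]]
    for i in range(len(resultset[0])):
        print(" ".join(str(series[i]) for series in resultset))
    # 1.0 10.0
    # 2.0 20.0
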
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/common/StatisticUtils.java b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/common/StatisticUtils.java
deleted file mode 100644
index 0a22e50..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/common/StatisticUtils.java
+++ /dev/null
@@ -1,59 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.ambari.metrics.adservice.prototype.common;
-
-
-import java.util.Arrays;
-
-public class StatisticUtils {
-
-  public static double mean(double[] values) {
-    double sum = 0;
-    for (double d : values) {
-      sum += d;
-    }
-    return sum / values.length;
-  }
-
-  public static double variance(double[] values) {
-    double avg =  mean(values);
-    double variance = 0;
-    for (double d : values) {
-      variance += Math.pow(d - avg, 2.0);
-    }
-    return variance;
-  }
-
-  public static double sdev(double[]  values, boolean useBesselsCorrection) {
-    double variance = variance(values);
-    int n = (useBesselsCorrection) ? values.length - 1 : values.length;
-    return Math.sqrt(variance / n);
-  }
-
-  public static double median(double[] values) {
-    double[] clonedValues = Arrays.copyOf(values, values.length);
-    Arrays.sort(clonedValues);
-    int n = values.length;
-
-    if (n % 2 != 0) {
-      return clonedValues[(n-1)/2];
-    } else {
-      return ( clonedValues[(n-1)/2] + clonedValues[n/2] ) / 2;
-    }
-  }
-}
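
One subtlety in the deleted class: variance() actually returns the sum of squared deviations; the division by n (or n-1 under Bessel's correction) only happens inside sdev(). A Python sketch of the same logic with that naming made explicit:

    import math

    def mean(values):
        return sum(values) / len(values)

    def sum_sq_dev(values):
        # What the Java variance() computed: un-normalized squared deviations.
        m = mean(values)
        return sum((v - m) ** 2 for v in values)

    def sdev(values, bessel=True):
        n = len(values) - 1 if bessel else len(values)
        return math.sqrt(sum_sq_dev(values) / n)

    def median(values):
        s = sorted(values)
        mid = len(s) // 2
        return s[mid] if len(s) % 2 else (s[mid - 1] + s[mid]) / 2

    print(round(sdev([1.0, 2.0, 3.0, 4.0]), 3))  # 1.291 (sample std dev)
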
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/core/AmbariServerInterface.java b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/core/AmbariServerInterface.java
deleted file mode 100644
index ac50c54..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/core/AmbariServerInterface.java
+++ /dev/null
@@ -1,119 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- * <p>
- * http://www.apache.org/licenses/LICENSE-2.0
- * <p>
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.ambari.metrics.adservice.prototype.core;
-
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-
-import java.io.BufferedReader;
-import java.io.IOException;
-import java.io.InputStreamReader;
-import java.io.Serializable;
-import java.net.HttpURLConnection;
-import java.net.URL;
-import java.nio.charset.StandardCharsets;
-import java.util.Base64;
-
-public class AmbariServerInterface implements Serializable{
-
-  private static final Log LOG = LogFactory.getLog(AmbariServerInterface.class);
-
-  private String ambariServerHost;
-  private String clusterName;
-
-  public AmbariServerInterface(String ambariServerHost, String clusterName) {
-    this.ambariServerHost = ambariServerHost;
-    this.clusterName = clusterName;
-  }
-
-  public int getPointInTimeSensitivity() {
-
-    String url = constructUri("http", ambariServerHost, "8080", "/api/v1/clusters/" + clusterName + "/alert_definitions?fields=*");
-
-    URL obj = null;
-    BufferedReader in = null;
-
-    try {
-      obj = new URL(url);
-      HttpURLConnection con = (HttpURLConnection) obj.openConnection();
-      con.setRequestMethod("GET");
-
-      String encoded = Base64.getEncoder().encodeToString(("admin:admin").getBytes(StandardCharsets.UTF_8));
-      con.setRequestProperty("Authorization", "Basic "+encoded);
-
-      int responseCode = con.getResponseCode();
-      LOG.info("Sending 'GET' request to URL : " + url);
-      LOG.info("Response Code : " + responseCode);
-
-      in = new BufferedReader(
-        new InputStreamReader(con.getInputStream()));
-
-      StringBuilder responseJsonSb = new StringBuilder();
-      String line;
-      while ((line = in.readLine()) != null) {
-        responseJsonSb.append(line);
-      }
-
-//      JSONObject jsonObject = new JSONObject(responseJsonSb.toString());
-//      JSONArray array = jsonObject.getJSONArray("items");
-//      for(int i = 0 ; i < array.length() ; i++){
-//        JSONObject alertDefn = array.getJSONObject(i).getJSONObject("AlertDefinition");
-//        if (alertDefn.get("name") != null && alertDefn.get("name").equals("point_in_time_metrics_anomalies")) {
-//          JSONObject sourceNode = alertDefn.getJSONObject("source");
-//          JSONArray params = sourceNode.getJSONArray("parameters");
-//          for(int j = 0 ; j < params.length() ; j++){
-//            JSONObject param = params.getJSONObject(j);
-//            if (param.get("name").equals("sensitivity")) {
-//              return param.getInt("value");
-//            }
-//          }
-//          break;
-//        }
-//      }
-
-    } catch (Exception e) {
-      LOG.error(e);
-    } finally {
-      if (in != null) {
-        try {
-          in.close();
-        } catch (IOException e) {
-          LOG.warn(e);
-        }
-      }
-    }
-
-    return -1;
-  }
-
-  private String constructUri(String protocol, String host, String port, String path) {
-    StringBuilder sb = new StringBuilder(protocol);
-    sb.append("://");
-    sb.append(host);
-    sb.append(":");
-    sb.append(port);
-    sb.append(path);
-    return sb.toString();
-  }
-
-//  public static void main(String[] args) {
-//    AmbariServerInterface ambariServerInterface = new AmbariServerInterface();
-//    ambariServerInterface.getPointInTimeSensitivity("avijayan-ams-1.openstacklocal","c1");
-//  }
-}
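
The prototype polled Ambari's alert-definition API (with hardcoded admin:admin credentials) and, per the commented-out block, would have read the "sensitivity" parameter of the point_in_time_metrics_anomalies definition. A sketch of the same GET using requests (host and cluster are placeholders, as in the prototype):

    import requests

    def get_alert_definitions(host, cluster, auth=("admin", "admin")):
        # Same endpoint the deleted class constructed by hand.
        url = "http://%s:8080/api/v1/clusters/%s/alert_definitions?fields=*" % (host, cluster)
        resp = requests.get(url, auth=auth, timeout=30)
        resp.raise_for_status()
        return resp.json().get("items", [])
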
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/core/MetricKafkaProducer.java b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/core/MetricKafkaProducer.java
deleted file mode 100644
index 167fbb3..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/core/MetricKafkaProducer.java
+++ /dev/null
@@ -1,56 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.ambari.metrics.adservice.prototype.core;
-
-import com.fasterxml.jackson.databind.JsonNode;
-import com.fasterxml.jackson.databind.ObjectMapper;
-import org.apache.hadoop.metrics2.sink.timeline.TimelineMetrics;
-import org.apache.kafka.clients.producer.KafkaProducer;
-import org.apache.kafka.clients.producer.Producer;
-import org.apache.kafka.clients.producer.ProducerConfig;
-import org.apache.kafka.clients.producer.ProducerRecord;
-import org.apache.kafka.clients.producer.RecordMetadata;
-
-import java.util.Properties;
-import java.util.concurrent.ExecutionException;
-import java.util.concurrent.Future;
-
-public class MetricKafkaProducer {
-
-    Producer producer;
-    private static String topicName = "ambari-metrics-topic";
-
-    public MetricKafkaProducer(String kafkaServers) {
-        Properties configProperties = new Properties();
-        configProperties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaServers); //"avijayan-ams-2.openstacklocal:6667"
-        configProperties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,"org.apache.kafka.common.serialization.ByteArraySerializer");
-        configProperties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,"org.apache.kafka.connect.json.JsonSerializer");
-        producer = new KafkaProducer(configProperties);
-    }
-
-    public void sendMetrics(TimelineMetrics timelineMetrics) throws InterruptedException, ExecutionException {
-
-        ObjectMapper objectMapper = new ObjectMapper();
-        JsonNode jsonNode = objectMapper.valueToTree(timelineMetrics);
-        ProducerRecord<String, JsonNode> rec = new ProducerRecord<>(topicName, jsonNode);
-        Future<RecordMetadata> kafkaFuture = producer.send(rec);
-
-        // Block until the broker acknowledges the record; a failed send
-        // surfaces here as an ExecutionException.
-        RecordMetadata metadata = kafkaFuture.get();
-        LOG.debug("Published metrics to Kafka topic " + metadata.topic());
-    }
-}
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/core/MetricSparkConsumer.java b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/core/MetricSparkConsumer.java
deleted file mode 100644
index addeda7..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/core/MetricSparkConsumer.java
+++ /dev/null
@@ -1,244 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- * <p>
- * http://www.apache.org/licenses/LICENSE-2.0
- * <p>
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.ambari.metrics.adservice.prototype.core;
-
-import com.fasterxml.jackson.databind.ObjectMapper;
-import org.apache.ambari.metrics.adservice.prototype.methods.MetricAnomaly;
-import org.apache.ambari.metrics.adservice.prototype.methods.ema.EmaTechnique;
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
-import org.apache.hadoop.metrics2.sink.timeline.TimelineMetrics;
-import org.apache.spark.SparkConf;
-import org.apache.spark.api.java.function.Function;
-import org.apache.spark.broadcast.Broadcast;
-import org.apache.spark.streaming.Duration;
-import org.apache.spark.streaming.api.java.JavaDStream;
-import org.apache.spark.streaming.api.java.JavaPairDStream;
-import org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream;
-import org.apache.spark.streaming.api.java.JavaStreamingContext;
-import org.apache.spark.streaming.kafka.KafkaUtils;
-import scala.Tuple2;
-
-import java.io.FileInputStream;
-import java.io.IOException;
-import java.io.InputStream;
-import java.util.Arrays;
-import java.util.Collections;
-import java.util.HashSet;
-import java.util.List;
-import java.util.Properties;
-import java.util.Set;
-import java.util.regex.Matcher;
-import java.util.regex.Pattern;
-
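-/**
- * Spark Streaming driver for the anomaly detection prototype: consumes
- * TimelineMetrics JSON from Kafka, runs EMA tests on every batch, and
- * periodically triggers the Tukeys, KS and HSdev subsystems, publishing
- * detected anomalies back to the Metrics Collector.
- */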
-public class MetricSparkConsumer {
-
-  private static final Log LOG = LogFactory.getLog(MetricSparkConsumer.class);
-  private static String groupId = "ambari-metrics-group";
-  private static String topicName = "ambari-metrics-topic";
-  private static int numThreads = 1;
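-  // Window start markers, advanced as each test interval elapses. Note: these
-  // statics are mutated inside foreachRDD, which is only safe when the driver
-  // and executors share a JVM (i.e. local mode).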
-  private static long pitStartTime = System.currentTimeMillis();
-  private static long ksStartTime = pitStartTime;
-  private static long hdevStartTime = ksStartTime;
-  private static Set<Pattern> includeMetricPatterns = new HashSet<>();
-  private static Set<String> includedHosts = new HashSet<>();
-  private static Set<TrendMetric> trendMetrics = new HashSet<>();
-
-  public MetricSparkConsumer() {
-  }
-
-  public static Properties readProperties(String propertiesFile) {
-    Properties properties = new Properties();
-    InputStream inputStream = ClassLoader.getSystemResourceAsStream(propertiesFile);
-    try (InputStream in = inputStream != null ? inputStream : new FileInputStream(propertiesFile)) {
-      properties.load(in);
-      return properties;
-    } catch (IOException ioEx) {
-      LOG.error("Error reading properties file : " + propertiesFile, ioEx);
-      return null;
-    }
-  }
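-
-  /*
-   * Keys read from the input config file: appIds, collectorHost, collectorPort,
-   * collectorProtocol, zkQuorum, emaW, emaN, emaThreshold, tukeysN,
-   * pointInTimeTestInterval, pointInTimeTrainInterval, ksTestInterval,
-   * ksTrainInterval, hsdevNhp, hsdevInterval, ambariServerHost, clusterName,
-   * and optionally includeMetricPatterns and hosts.
-   */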
-
-  public static void main(String[] args) throws InterruptedException {
-
-    if (args.length < 1) {
-      System.err.println("Usage: MetricSparkConsumer <input-config-file>");
-      System.exit(1);
-    }
-
-    Properties properties = readProperties(args[0]);
-    if (properties == null) {
-      System.err.println("Unable to read config file : " + args[0]);
-      System.exit(1);
-    }
-
-    List<String> appIds = Arrays.asList(properties.getProperty("appIds").split(","));
-
-    String collectorHost = properties.getProperty("collectorHost");
-    String collectorPort = properties.getProperty("collectorPort");
-    String collectorProtocol = properties.getProperty("collectorProtocol");
-
-    String zkQuorum = properties.getProperty("zkQuorum");
-
-    double emaW = Double.parseDouble(properties.getProperty("emaW"));
-    double emaN = Double.parseDouble(properties.getProperty("emaN"));
-    int emaThreshold = Integer.parseInt(properties.getProperty("emaThreshold"));
-    double tukeysN = Double.parseDouble(properties.getProperty("tukeysN"));
-
-    long pitTestInterval = Long.parseLong(properties.getProperty("pointInTimeTestInterval"));
-    long pitTrainInterval = Long.parseLong(properties.getProperty("pointInTimeTrainInterval"));
-
-    long ksTestInterval = Long.parseLong(properties.getProperty("ksTestInterval"));
-    long ksTrainInterval = Long.parseLong(properties.getProperty("ksTrainInterval"));
-    int hsdevNhp = Integer.parseInt(properties.getProperty("hsdevNhp"));
-    long hsdevInterval = Long.parseLong(properties.getProperty("hsdevInterval"));
-
-    String ambariServerHost = properties.getProperty("ambariServerHost");
-    String clusterName = properties.getProperty("clusterName");
-
-    String includeMetricPatternStrings = properties.getProperty("includeMetricPatterns");
-    if (includeMetricPatternStrings != null && !includeMetricPatternStrings.isEmpty()) {
-      String[] patterns = includeMetricPatternStrings.split(",");
-      for (String p : patterns) {
-        LOG.info("Included Pattern : " + p);
-        includeMetricPatterns.add(Pattern.compile(p));
-      }
-    }
-
-    String includedHostList = properties.getProperty("hosts");
-    if (includedHostList != null && !includedHostList.isEmpty()) {
-      String[] hosts = includedHostList.split(",");
-      includedHosts.addAll(Arrays.asList(hosts));
-    }
-
-    MetricsCollectorInterface metricsCollectorInterface = new MetricsCollectorInterface(collectorHost, collectorProtocol, collectorPort);
-
-    SparkConf sparkConf = new SparkConf().setAppName("AmbariMetricsAnomalyDetector");
-
-    JavaStreamingContext jssc = new JavaStreamingContext(sparkConf, new Duration(10000));
-
-    EmaTechnique emaTechnique = new EmaTechnique(emaW, emaN, emaThreshold);
-    PointInTimeADSystem pointInTimeADSystem = new PointInTimeADSystem(metricsCollectorInterface,
-      tukeysN,
-      pitTestInterval,
-      pitTrainInterval,
-      ambariServerHost,
-      clusterName);
-
-    TrendADSystem trendADSystem = new TrendADSystem(metricsCollectorInterface,
-      ksTestInterval,
-      ksTrainInterval,
-      hsdevNhp);
-
-    Broadcast<EmaTechnique> emaTechniqueBroadcast = jssc.sparkContext().broadcast(emaTechnique);
-    Broadcast<PointInTimeADSystem> pointInTimeADSystemBroadcast = jssc.sparkContext().broadcast(pointInTimeADSystem);
-    Broadcast<TrendADSystem> trendADSystemBroadcast = jssc.sparkContext().broadcast(trendADSystem);
-    Broadcast<MetricsCollectorInterface> metricsCollectorInterfaceBroadcast = jssc.sparkContext().broadcast(metricsCollectorInterface);
-    Broadcast<Set<Pattern>> includePatternBroadcast = jssc.sparkContext().broadcast(includeMetricPatterns);
-    Broadcast<Set<String>> includedHostBroadcast = jssc.sparkContext().broadcast(includedHosts);
-
-    JavaPairReceiverInputDStream<String, String> messages =
-      KafkaUtils.createStream(jssc, zkQuorum, groupId, Collections.singletonMap(topicName, numThreads));
-
-    //Convert each JSON payload into TimelineMetrics.
-    JavaDStream<TimelineMetrics> timelineMetricsStream = messages.map(message -> {
-      ObjectMapper mapper = new ObjectMapper();
-      return mapper.readValue(message._2, TimelineMetrics.class);
-    });
-
-    timelineMetricsStream.print();
-
-    //Group TimelineMetric by AppId.
-    JavaPairDStream<String, TimelineMetrics> appMetricStream = timelineMetricsStream.mapToPair(
-      timelineMetrics -> timelineMetrics.getMetrics().isEmpty()
-        ? new Tuple2<>("TEST", new TimelineMetrics())
-        : new Tuple2<>(timelineMetrics.getMetrics().get(0).getAppId(), timelineMetrics)
-    );
-
-    appMetricStream.print();
-
-    //Drop tuples whose appId is not in the configured list.
-    JavaPairDStream<String, TimelineMetrics> filteredAppMetricStream =
-      appMetricStream.filter(appMetricTuple -> appIds.contains(appMetricTuple._1));
-
-    filteredAppMetricStream.print();
-
-    filteredAppMetricStream.foreachRDD(rdd -> {
-      rdd.foreach(
-        tuple2 -> {
-          long currentTime = System.currentTimeMillis();
-          EmaTechnique ema = emaTechniqueBroadcast.getValue();
-          if (currentTime > pitStartTime + pitTestInterval) {
-            LOG.info("Running Tukeys....");
-            pointInTimeADSystemBroadcast.getValue().runTukeysAndRefineEma(ema, currentTime);
-            pitStartTime = pitStartTime + pitTestInterval;
-          }
-
-          if (currentTime > ksStartTime + ksTestInterval) {
-            LOG.info("Running KS Test....");
-            trendADSystemBroadcast.getValue().runKSTest(currentTime, trendMetrics);
-            ksStartTime = ksStartTime + ksTestInterval;
-          }
-
-          if (currentTime > hdevStartTime + hsdevInterval) {
-            LOG.info("Running HSdev Test....");
-            trendADSystemBroadcast.getValue().runHsdevMethod();
-            hdevStartTime = hdevStartTime + hsdevInterval;
-          }
-
-          TimelineMetrics metrics = tuple2._2();
-          for (TimelineMetric timelineMetric : metrics.getMetrics()) {
-
-            boolean includeHost = includedHostBroadcast.getValue().contains(timelineMetric.getHostName());
-            boolean includeMetric = false;
-            if (includeHost) {
-              if (includePatternBroadcast.getValue().isEmpty()) {
-                includeMetric = true;
-              }
-              for (Pattern p : includePatternBroadcast.getValue()) {
-                Matcher m = p.matcher(timelineMetric.getMetricName());
-                if (m.find()) {
-                  includeMetric = true;
-                }
-              }
-            }
-
-            if (includeMetric) {
-              trendMetrics.add(new TrendMetric(timelineMetric.getMetricName(), timelineMetric.getAppId(),
-                timelineMetric.getHostName()));
-              List<MetricAnomaly> anomalies = ema.test(timelineMetric);
-              metricsCollectorInterfaceBroadcast.getValue().publish(anomalies);
-            }
-          }
-        });
-    });
-
-    jssc.start();
-    jssc.awaitTermination();
-  }
-}
-
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/core/MetricsCollectorInterface.java b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/core/MetricsCollectorInterface.java
deleted file mode 100644
index da3999a..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/core/MetricsCollectorInterface.java
+++ /dev/null
@@ -1,237 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- * <p>
- * http://www.apache.org/licenses/LICENSE-2.0
- * <p>
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.ambari.metrics.adservice.prototype.core;
-
-import org.apache.ambari.metrics.adservice.prototype.methods.MetricAnomaly;
-import org.apache.commons.collections.CollectionUtils;
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
-import org.apache.hadoop.metrics2.sink.timeline.TimelineMetrics;
-import org.codehaus.jackson.map.AnnotationIntrospector;
-import org.codehaus.jackson.map.ObjectMapper;
-import org.codehaus.jackson.map.ObjectReader;
-import org.codehaus.jackson.map.annotate.JsonSerialize;
-import org.codehaus.jackson.xc.JaxbAnnotationIntrospector;
-
-import java.io.BufferedReader;
-import java.io.IOException;
-import java.io.InputStreamReader;
-import java.io.OutputStream;
-import java.io.Serializable;
-import java.net.HttpURLConnection;
-import java.net.InetAddress;
-import java.net.URL;
-import java.net.UnknownHostException;
-import java.util.ArrayList;
-import java.util.HashMap;
-import java.util.List;
-import java.util.TreeMap;
-
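-/**
- * HTTP client for the Ambari Metrics Collector: publishes detected anomalies
- * as TimelineMetrics and fetches metric series for training/testing over the
- * /ws/v1/timeline/metrics endpoint.
- */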
-public class MetricsCollectorInterface implements Serializable {
-
-  private static String hostName = null;
-  private String instanceId = null;
-  public final static String serviceName = "anomaly-engine";
-  private String collectorHost;
-  private String protocol;
-  private String port;
-  private static final String WS_V1_TIMELINE_METRICS = "/ws/v1/timeline/metrics";
-  private static final Log LOG = LogFactory.getLog(MetricsCollectorInterface.class);
-  private static ObjectMapper mapper;
-  private final static ObjectReader timelineObjectReader;
-
-  static {
-    mapper = new ObjectMapper();
-    AnnotationIntrospector introspector = new JaxbAnnotationIntrospector();
-    mapper.setAnnotationIntrospector(introspector);
-    mapper.setSerializationInclusion(JsonSerialize.Inclusion.NON_NULL);
-    timelineObjectReader = mapper.reader(TimelineMetrics.class);
-  }
-
-  public MetricsCollectorInterface(String collectorHost, String protocol, String port) {
-    this.collectorHost = collectorHost;
-    this.protocol = protocol;
-    this.port = port;
-    hostName = getDefaultLocalHostName();
-  }
-
-  public static String getDefaultLocalHostName() {
-
-    if (hostName != null) {
-      return hostName;
-    }
-
-    try {
-      return InetAddress.getLocalHost().getCanonicalHostName();
-    } catch (UnknownHostException e) {
-      LOG.info("Error getting host address");
-    }
-    return null;
-  }
-
-  public void publish(List<MetricAnomaly> metricAnomalies) {
-    if (CollectionUtils.isNotEmpty(metricAnomalies)) {
-      LOG.info("Sending metric anomalies of size : " + metricAnomalies.size());
-      List<TimelineMetric> metricList = getTimelineMetricList(metricAnomalies);
-      if (!metricList.isEmpty()) {
-        TimelineMetrics timelineMetrics = new TimelineMetrics();
-        timelineMetrics.setMetrics(metricList);
-        emitMetrics(timelineMetrics);
-      }
-    } else {
-      LOG.debug("No anomalies to send.");
-    }
-  }
-
-  private List<TimelineMetric> getTimelineMetricList(List<MetricAnomaly> metricAnomalies) {
-    List<TimelineMetric> metrics = new ArrayList<>();
-
-    if (metricAnomalies.isEmpty()) {
-      return metrics;
-    }
-
-    for (MetricAnomaly anomaly : metricAnomalies) {
-      TimelineMetric timelineMetric = new TimelineMetric();
-      timelineMetric.setMetricName(anomaly.getMetricKey());
-      timelineMetric.setAppId(serviceName + "-" + anomaly.getMethodType());
-      timelineMetric.setInstanceId(null);
-      timelineMetric.setHostName(getDefaultLocalHostName());
-      timelineMetric.setStartTime(anomaly.getTimestamp());
-      HashMap<String, String> metadata = new HashMap<>();
-      metadata.put("method", anomaly.getMethodType());
-      metadata.put("anomaly-score", String.valueOf(anomaly.getAnomalyScore()));
-      timelineMetric.setMetadata(metadata);
-      TreeMap<Long,Double> metricValues = new TreeMap<>();
-      metricValues.put(anomaly.getTimestamp(), anomaly.getMetricValue());
-      timelineMetric.setMetricValues(metricValues);
-
-      metrics.add(timelineMetric);
-    }
-    return metrics;
-  }
-
-  public boolean emitMetrics(TimelineMetrics metrics) {
-    String connectUrl = constructTimelineMetricUri();
-    String jsonData = null;
-    LOG.debug("EmitMetrics connectUrl = " + connectUrl);
-    try {
-      jsonData = mapper.writeValueAsString(metrics);
-      LOG.info(jsonData);
-    } catch (IOException e) {
-      LOG.error("Unable to parse metrics", e);
-    }
-    if (jsonData != null) {
-      return emitMetricsJson(connectUrl, jsonData);
-    }
-    return false;
-  }
-
-  private HttpURLConnection getConnection(String spec) throws IOException {
-    return (HttpURLConnection) new URL(spec).openConnection();
-  }
-
-  private boolean emitMetricsJson(String connectUrl, String jsonData) {
-    int timeout = 10000;
-    HttpURLConnection connection = null;
-    try {
-      if (connectUrl == null) {
-        throw new IOException("Unknown URL. Unable to connect to metrics collector.");
-      }
-      connection = getConnection(connectUrl);
-
-      connection.setRequestMethod("POST");
-      connection.setRequestProperty("Content-Type", "application/json");
-      connection.setRequestProperty("Connection", "Keep-Alive");
-      connection.setConnectTimeout(timeout);
-      connection.setReadTimeout(timeout);
-      connection.setDoOutput(true);
-
-      if (jsonData != null) {
-        try (OutputStream os = connection.getOutputStream()) {
-          os.write(jsonData.getBytes("UTF-8"));
-        }
-      }
-
-      int statusCode = connection.getResponseCode();
-
-      if (statusCode != 200) {
-        LOG.warn("Unable to POST metrics to collector, " + connectUrl + ", " +
-          "statusCode = " + statusCode);
-        return false;
-      }
-      LOG.info("Metrics posted to Collector " + connectUrl);
-      return true;
-    } catch (IOException ioe) {
-      LOG.error(ioe.getMessage());
-    }
-    return false;
-  }
-
-  private String constructTimelineMetricUri() {
-    StringBuilder sb = new StringBuilder(protocol);
-    sb.append("://");
-    sb.append(collectorHost);
-    sb.append(":");
-    sb.append(port);
-    sb.append(WS_V1_TIMELINE_METRICS);
-    return sb.toString();
-  }
-
-  public TimelineMetrics fetchMetrics(String metricName,
-                                      String appId,
-                                      String hostname,
-                                      long startTime,
-                                      long endTime) {
-
-    String url = constructTimelineMetricUri() + "?metricNames=" + metricName + "&appId=" + appId +
-      "&hostname=" + hostname + "&startTime=" + startTime + "&endTime=" + endTime;
-    LOG.debug("Fetch metrics URL : " + url);
-
-    URL obj = null;
-    BufferedReader in = null;
-    TimelineMetrics timelineMetrics = new TimelineMetrics();
-
-    try {
-      obj = new URL(url);
-      HttpURLConnection con = (HttpURLConnection) obj.openConnection();
-      con.setRequestMethod("GET");
-      int responseCode = con.getResponseCode();
-      LOG.debug("Sending 'GET' request to URL : " + url);
-      LOG.debug("Response Code : " + responseCode);
-
-      in = new BufferedReader(
-        new InputStreamReader(con.getInputStream()));
-      timelineMetrics = timelineObjectReader.readValue(in);
-    } catch (Exception e) {
-      LOG.error(e);
-    } finally {
-      if (in != null) {
-        try {
-          in.close();
-        } catch (IOException e) {
-          LOG.warn(e);
-        }
-      }
-    }
-
-    LOG.info("Fetched " + timelineMetrics.getMetrics().size() + " metrics.");
-    return timelineMetrics;
-  }
-}
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/core/PointInTimeADSystem.java b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/core/PointInTimeADSystem.java
deleted file mode 100644
index f379605..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/core/PointInTimeADSystem.java
+++ /dev/null
@@ -1,260 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.ambari.metrics.adservice.prototype.core;
-
-import org.apache.ambari.metrics.adservice.prototype.common.DataSeries;
-import org.apache.ambari.metrics.adservice.prototype.common.ResultSet;
-import org.apache.ambari.metrics.adservice.prototype.methods.ema.EmaModel;
-import org.apache.ambari.metrics.adservice.prototype.methods.ema.EmaTechnique;
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
-import org.apache.hadoop.metrics2.sink.timeline.TimelineMetrics;
-
-import java.io.Serializable;
-import java.util.ArrayList;
-import java.util.Date;
-import java.util.HashMap;
-import java.util.List;
-import java.util.Map;
-import java.util.TreeMap;
-
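-/**
- * Point-in-time anomaly subsystem: periodically runs Tukeys over recent
- * train/test windows, compares the results against EMA anomalies from the
- * same window, and uses the resulting precision/recall to tune both the
- * Tukeys N parameter and the EMA model sensitivity.
- */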
-public class PointInTimeADSystem implements Serializable {
-
-  //private EmaTechnique emaTechnique;
-  private MetricsCollectorInterface metricsCollectorInterface;
-  private Map<String, Double> tukeysNMap;
-  private double defaultTukeysN = 3;
-
-  private long testIntervalMillis = 5 * 60 * 1000; //5mins
-  private long trainIntervalMillis = 15 * 60 * 1000; //15mins
-
-  private static final Log LOG = LogFactory.getLog(PointInTimeADSystem.class);
-
-  private AmbariServerInterface ambariServerInterface;
-  private int sensitivity = 50;
-  private int minSensitivity = 0;
-  private int maxSensitivity = 100;
-
-  public PointInTimeADSystem(MetricsCollectorInterface metricsCollectorInterface, double defaultTukeysN,
-                             long testIntervalMillis, long trainIntervalMillis, String ambariServerHost, String clusterName) {
-    this.metricsCollectorInterface = metricsCollectorInterface;
-    this.defaultTukeysN = defaultTukeysN;
-    this.tukeysNMap = new HashMap<>();
-    this.testIntervalMillis = testIntervalMillis;
-    this.trainIntervalMillis = trainIntervalMillis;
-    this.ambariServerInterface = new AmbariServerInterface(ambariServerHost, clusterName);
-    LOG.info("Starting PointInTimeADSystem...");
-  }
-
-  public void runTukeysAndRefineEma(EmaTechnique emaTechnique, long startTime) {
-    LOG.info("Running Tukeys for test data interval [" + new Date(startTime - testIntervalMillis) + " : " + new Date(startTime) + "], with train data period [" + new Date(startTime  - testIntervalMillis - trainIntervalMillis) + " : " + new Date(startTime - testIntervalMillis) + "]");
-
-    int requiredSensitivity = ambariServerInterface.getPointInTimeSensitivity();
-    if (requiredSensitivity == -1 || requiredSensitivity == sensitivity) {
-      LOG.info("No change in sensitivity needed.");
-    } else {
-      LOG.info("Current Tukeys N value = " + defaultTukeysN);
-      if (requiredSensitivity > sensitivity) {
-        int targetSensitivity = Math.min(maxSensitivity, requiredSensitivity);
-        while (sensitivity < targetSensitivity) {
-          defaultTukeysN = defaultTukeysN + defaultTukeysN * 0.05;
-          sensitivity++;
-        }
-      } else {
-        int targetSensitivity = Math.max(minSensitivity, requiredSensitivity);
-        while (sensitivity > targetSensitivity) {
-          defaultTukeysN = defaultTukeysN - defaultTukeysN * 0.05;
-          sensitivity--;
-        }
-      }
-      LOG.info("New Tukeys N value = " + defaultTukeysN);
-    }
-
-    TimelineMetrics timelineMetrics = new TimelineMetrics();
-    for (String metricKey : emaTechnique.getTrackedEmas().keySet()) {
-      LOG.info("EMA key = " + metricKey);
-      EmaModel emaModel = emaTechnique.getTrackedEmas().get(metricKey);
-      String metricName = emaModel.getMetricName();
-      String appId = emaModel.getAppId();
-      String hostname = emaModel.getHostname();
-
-      TimelineMetrics tukeysData = metricsCollectorInterface.fetchMetrics(metricName, appId, hostname, startTime - (testIntervalMillis + trainIntervalMillis),
-        startTime);
-
-      if (tukeysData.getMetrics().isEmpty()) {
-        LOG.info("No metrics fetched for Tukeys, metricKey = " + metricKey);
-        continue;
-      }
-
-      List<Double> trainTsList = new ArrayList<>();
-      List<Double> trainDataList = new ArrayList<>();
-      List<Double> testTsList = new ArrayList<>();
-      List<Double> testDataList = new ArrayList<>();
-
-      for (TimelineMetric metric : tukeysData.getMetrics()) {
-        for (Long timestamp : metric.getMetricValues().keySet()) {
-          if (timestamp <= (startTime - testIntervalMillis)) {
-            trainDataList.add(metric.getMetricValues().get(timestamp));
-            trainTsList.add((double)timestamp);
-          } else {
-            testDataList.add(metric.getMetricValues().get(timestamp));
-            testTsList.add((double)timestamp);
-          }
-        }
-      }
-
-      if (trainDataList.isEmpty() || testDataList.isEmpty() || trainDataList.size() < testDataList.size()) {
-        LOG.info("Not enough train/test data to perform analysis.");
-        continue;
-      }
-
-      String tukeysTrainSeries = "tukeysTrainSeries";
-      double[] trainTs = new double[trainTsList.size()];
-      double[] trainData = new double[trainTsList.size()];
-      for (int i = 0; i < trainTs.length; i++) {
-        trainTs[i] = trainTsList.get(i);
-        trainData[i] = trainDataList.get(i);
-      }
-
-      String tukeysTestSeries = "tukeysTestSeries";
-      double[] testTs = new double[testTsList.size()];
-      double[] testData = new double[testTsList.size()];
-      for (int i = 0; i < testTs.length; i++) {
-        testTs[i] = testTsList.get(i);
-        testData[i] = testDataList.get(i);
-      }
-
-      LOG.info("Train Size = " + trainTs.length + ", Test Size = " + testTs.length);
-
-      DataSeries tukeysTrainData = new DataSeries(tukeysTrainSeries, trainTs, trainData);
-      DataSeries tukeysTestData = new DataSeries(tukeysTestSeries, testTs, testData);
-
-      if (!tukeysNMap.containsKey(metricKey)) {
-        tukeysNMap.put(metricKey, defaultTukeysN);
-      }
-
-      Map<String, String> configs = new HashMap<>();
-      configs.put("tukeys.n", String.valueOf(tukeysNMap.get(metricKey)));
-
-      ResultSet rs = RFunctionInvoker.tukeys(tukeysTrainData, tukeysTestData, configs);
-
-      List<TimelineMetric> tukeysMetrics = getAsTimelineMetric(rs, metricName, appId, hostname);
-      LOG.info("Tukeys anomalies size : " + tukeysMetrics.size());
-      TreeMap<Long, Double> tukeysMetricValues = new TreeMap<>();
-
-      for (TimelineMetric tukeysMetric : tukeysMetrics) {
-        tukeysMetricValues.putAll(tukeysMetric.getMetricValues());
-        timelineMetrics.addOrMergeTimelineMetric(tukeysMetric);
-      }
-
-      TimelineMetrics emaData = metricsCollectorInterface.fetchMetrics(metricKey, MetricsCollectorInterface.serviceName+"-ema", MetricsCollectorInterface.getDefaultLocalHostName(), startTime - testIntervalMillis, startTime);
-      TreeMap<Long, Double> emaMetricValues = new TreeMap<>();
-      if (!emaData.getMetrics().isEmpty()) {
-        emaMetricValues = emaData.getMetrics().get(0).getMetricValues();
-      }
-
-      LOG.info("Ema anomalies size : " + emaMetricValues.size());
-      int tp = 0;
-      int tn = 0;
-      int fp = 0;
-      int fn = 0;
-
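-      // Treat the Tukeys anomalies as ground truth and score the EMA output
-      // against them: tp/fn where Tukeys flagged a point, fp/tn where it did not.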
-      for (double ts : testTs) {
-        long timestamp = (long) ts;
-        if (tukeysMetricValues.containsKey(timestamp)) {
-          if (emaMetricValues.containsKey(timestamp)) {
-            tp++;
-          } else {
-            fn++;
-          }
-        } else {
-          if (emaMetricValues.containsKey(timestamp)) {
-            fp++;
-          } else {
-            tn++;
-          }
-        }
-      }
-
-      double recall = (tp + fn) > 0 ? (double) tp / (tp + fn) : 0.0;
-      double precision = (tp + fp) > 0 ? (double) tp / (tp + fp) : 0.0;
-      LOG.info("----------------------------");
-      LOG.info("Precision Recall values for " + metricKey);
-      LOG.info("tp=" + tp + ", fp=" + fp + ", tn=" + tn + ", fn=" + fn);
-      LOG.info("precision=" + precision + ", recall=" + recall);
-      LOG.info("----------------------------");
-
-      if (recall < 0.5) {
-        LOG.info("Increasing EMA sensitivity by 5%");
-        emaModel.updateModel(true, 5);
-      } else if (precision < 0.5) {
-        LOG.info("Decreasing EMA sensitivity by 5%");
-        emaModel.updateModel(false, 5);
-      }
-      }
-
-    }
-
-    if (emaTechnique.getTrackedEmas().isEmpty()) {
-      LOG.info("No EMA Technique keys tracked.");
-    }
-
-    if (!timelineMetrics.getMetrics().isEmpty()) {
-      metricsCollectorInterface.emitMetrics(timelineMetrics);
-    }
-  }
-
-  private static List<TimelineMetric> getAsTimelineMetric(ResultSet result, String metricName, String appId, String hostname) {
-
-    List<TimelineMetric> timelineMetrics = new ArrayList<>();
-
-    if (result == null) {
-      LOG.warn("ResultSet from R call is null; returning no anomalies.");
-      return timelineMetrics;
-    }
-
-    if (result.resultset.size() > 0) {
-      double[] ts = result.resultset.get(0);
-      double[] metrics = result.resultset.get(1);
-      double[] anomalyScore = result.resultset.get(2);
-      for (int i = 0; i < ts.length; i++) {
-        TimelineMetric timelineMetric = new TimelineMetric();
-        timelineMetric.setMetricName(metricName + ":" + appId + ":" + hostname);
-        timelineMetric.setHostName(MetricsCollectorInterface.getDefaultLocalHostName());
-        timelineMetric.setAppId(MetricsCollectorInterface.serviceName + "-tukeys");
-        timelineMetric.setInstanceId(null);
-        timelineMetric.setStartTime((long) ts[i]);
-        TreeMap<Long, Double> metricValues = new TreeMap<>();
-        metricValues.put((long) ts[i], metrics[i]);
-
-        HashMap<String, String> metadata = new HashMap<>();
-        metadata.put("method", "tukeys");
-        if (Double.isInfinite(anomalyScore[i])) {
-          LOG.info("Got anomalyScore = infinity for " + metricName + ":" + appId + ":" + hostname);
-        } else {
-          metadata.put("anomaly-score", String.valueOf(anomalyScore[i]));
-        }
-        timelineMetric.setMetadata(metadata);
-
-        timelineMetric.setMetricValues(metricValues);
-        timelineMetrics.add(timelineMetric);
-      }
-    }
-
-    return timelineMetrics;
-  }
-}
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/core/RFunctionInvoker.java b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/core/RFunctionInvoker.java
deleted file mode 100644
index 8f1eba6..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/core/RFunctionInvoker.java
+++ /dev/null
@@ -1,222 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- * <p>
- * http://www.apache.org/licenses/LICENSE-2.0
- * <p>
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.ambari.metrics.adservice.prototype.core;
-
-
-import org.apache.ambari.metrics.adservice.prototype.common.DataSeries;
-import org.apache.ambari.metrics.adservice.prototype.common.ResultSet;
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-import org.rosuda.JRI.REXP;
-import org.rosuda.JRI.RVector;
-import org.rosuda.JRI.Rengine;
-
-import java.util.ArrayList;
-import java.util.List;
-import java.util.Map;
-
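-/**
- * Bridges to R through JRI: loads train/test series into the shared R engine
- * as data frames and invokes the tukeys, ema, kstest and hsdev scripts under
- * rScriptDir, converting each R result vector back into a ResultSet.
- */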
-public class RFunctionInvoker {
-
-  static final Log LOG = LogFactory.getLog(RFunctionInvoker.class);
-  // Shared R engine for all invocations; it must stay alive for the lifetime
-  // of the JVM, so none of the methods below call r.end().
-  public static Rengine r = new Rengine(new String[]{"--no-save"}, false, null);
-  private static String rScriptDir = "/usr/lib/ambari-metrics-collector/R-scripts";
-
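-  // Pushes the train and test series into the R session as data frames named
-  // train_data and test_data, each with a TS column and the series values.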
-  private static void loadDataSets(Rengine r, DataSeries trainData, DataSeries testData) {
-    r.assign("train_ts", trainData.ts);
-    r.assign("train_x", trainData.values);
-    r.eval("train_data <- data.frame(train_ts,train_x)");
-    r.eval("names(train_data) <- c(\"TS\", " + trainData.seriesName + ")");
-
-    r.assign("test_ts", testData.ts);
-    r.assign("test_x", testData.values);
-    r.eval("test_data <- data.frame(test_ts,test_x)");
-    r.eval("names(test_data) <- c(\"TS\", " + testData.seriesName + ")");
-  }
-
-  public static void setScriptsDir(String dir) {
-    rScriptDir = dir;
-  }
-
-  public static ResultSet executeMethod(String methodType, DataSeries trainData, DataSeries testData, Map<String, String> configs) {
-
-    ResultSet result;
-    switch (methodType) {
-      case "tukeys":
-        result = tukeys(trainData, testData, configs);
-        break;
-      case "ema":
-        result = ema_global(trainData, testData, configs);
-        break;
-      case "ks":
-        result = ksTest(trainData, testData, configs);
-        break;
-      case "hsdev":
-        result = hsdev(trainData, testData, configs);
-        break;
-      default:
-        result = tukeys(trainData, testData, configs);
-        break;
-    }
-    return result;
-  }
-
-  public static ResultSet tukeys(DataSeries trainData, DataSeries testData, Map<String, String> configs) {
-    try {
-
-      r.eval("source('" + rScriptDir + "/tukeys.r')");
-
-      double n = Double.parseDouble(configs.get("tukeys.n"));
-      r.eval("n <- " + n);
-
-      loadDataSets(r, trainData, testData);
-
-      r.eval("an <- ams_tukeys(train_data, test_data, n)");
-      REXP exp = r.eval("an");
-      RVector cont = (RVector) exp.getContent();
-      List<double[]> result = new ArrayList<>();
-      for (int i = 0; i < cont.size(); i++) {
-        result.add(cont.at(i).asDoubleArray());
-      }
-      return new ResultSet(result);
-    } catch (Exception e) {
-      LOG.error(e);
-    }
-    return null;
-  }
-
-  public static ResultSet ema_global(DataSeries trainData, DataSeries testData, Map<String, String> configs) {
-    try {
-      r.eval("source('" + rScriptDir + "/ema.r" + "')");
-
-      int n = Integer.parseInt(configs.get("ema.n"));
-      r.eval("n <- " + n);
-
-      double w = Double.parseDouble(configs.get("ema.w"));
-      r.eval("w <- " + w);
-
-      loadDataSets(r, trainData, testData);
-
-      r.eval("an <- ema_global(train_data, test_data, w, n)");
-      REXP exp = r.eval("an");
-      RVector cont = (RVector) exp.getContent();
-      List<double[]> result = new ArrayList<>();
-      for (int i = 0; i < cont.size(); i++) {
-        result.add(cont.at(i).asDoubleArray());
-      }
-      return new ResultSet(result);
-
-    } catch (Exception e) {
-      LOG.error(e);
-    }
-    return null;
-  }
-
-  public static ResultSet ema_daily(DataSeries trainData, DataSeries testData, Map<String, String> configs) {
-    try {
-      r.eval("source('" + rScriptDir + "/ema.r" + "')");
-
-      int n = Integer.parseInt(configs.get("ema.n"));
-      r.eval("n <- " + n);
-
-      double w = Double.parseDouble(configs.get("ema.w"));
-      r.eval("w <- " + w);
-
-      loadDataSets(r, trainData, testData);
-
-      r.eval("an <- ema_daily(train_data, test_data, w, n)");
-      REXP exp = r.eval("an");
-      RVector cont = (RVector) exp.getContent();
-      List<double[]> result = new ArrayList<>();
-      for (int i = 0; i < cont.size(); i++) {
-        result.add(cont.at(i).asDoubleArray());
-      }
-      return new ResultSet(result);
-
-    } catch (Exception e) {
-      LOG.error(e);
-    }
-    return null;
-  }
-
-  public static ResultSet ksTest(DataSeries trainData, DataSeries testData, Map<String, String> configs) {
-    try {
-      r.eval("source('" + rScriptDir + "/kstest.r" + "')");
-
-      double p_value = Double.parseDouble(configs.get("ks.p_value"));
-      r.eval("p_value <- " + p_value);
-
-      loadDataSets(r, trainData, testData);
-
-      r.eval("an <- ams_ks(train_data, test_data, p_value)");
-      REXP exp = r.eval("an");
-      RVector cont = (RVector) exp.getContent();
-      List<double[]> result = new ArrayList<>();
-      for (int i = 0; i < cont.size(); i++) {
-        result.add(cont.at(i).asDoubleArray());
-      }
-      return new ResultSet(result);
-
-    } catch (Exception e) {
-      LOG.error(e);
-    }
-    return null;
-  }
-
-  public static ResultSet hsdev(DataSeries trainData, DataSeries testData, Map<String, String> configs) {
-    try {
-      r.eval("source('" + rScriptDir + "/hsdev.r" + "')");
-
-      int n = Integer.parseInt(configs.get("hsdev.n"));
-      r.eval("n <- " + n);
-
-      int nhp = Integer.parseInt(configs.get("hsdev.nhp"));
-      r.eval("nhp <- " + nhp);
-
-      long interval = Long.parseLong(configs.get("hsdev.interval"));
-      r.eval("interval <- " + interval);
-
-      long period = Long.parseLong(configs.get("hsdev.period"));
-      r.eval("period <- " + period);
-
-      loadDataSets(r, trainData, testData);
-
-      r.eval("an2 <- hsdev_daily(train_data, test_data, n, nhp, interval, period)");
-      REXP exp = r.eval("an2");
-      RVector cont = (RVector) exp.getContent();
-
-      List<double[]> result = new ArrayList<>();
-      for (int i = 0; i < cont.size(); i++) {
-        result.add(cont.at(i).asDoubleArray());
-      }
-      return new ResultSet(result);
-    } catch (Exception e) {
-      LOG.error(e);
-    }
-    return null;
-  }
-}
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/core/TrendADSystem.java b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/core/TrendADSystem.java
deleted file mode 100644
index 80212b3..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/core/TrendADSystem.java
+++ /dev/null
@@ -1,317 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- * <p>
- * http://www.apache.org/licenses/LICENSE-2.0
- * <p>
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.ambari.metrics.adservice.prototype.core;
-
-import org.apache.ambari.metrics.adservice.prototype.common.DataSeries;
-import org.apache.ambari.metrics.adservice.prototype.methods.MetricAnomaly;
-import org.apache.ambari.metrics.adservice.prototype.methods.hsdev.HsdevTechnique;
-import org.apache.ambari.metrics.adservice.prototype.methods.kstest.KSTechnique;
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
-import org.apache.hadoop.metrics2.sink.timeline.TimelineMetrics;
-
-import java.io.BufferedReader;
-import java.io.FileReader;
-import java.io.IOException;
-import java.io.Serializable;
-import java.util.ArrayList;
-import java.util.Date;
-import java.util.HashMap;
-import java.util.List;
-import java.util.Map;
-import java.util.Set;
-import java.util.TreeMap;
-
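-/**
- * Trend anomaly subsystem: runs the KS test over sliding train/test windows
- * for each tracked metric, then uses the HSdev method over historical periods
- * to confirm or reject the KS anomalies it recorded.
- */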
-public class TrendADSystem implements Serializable {
-
-  private MetricsCollectorInterface metricsCollectorInterface;
-  private List<TrendMetric> trendMetrics;
-
-  private long ksTestIntervalMillis = 10 * 60 * 1000;
-  private long ksTrainIntervalMillis = 10 * 60 * 1000;
-  private KSTechnique ksTechnique;
-
-  private HsdevTechnique hsdevTechnique;
-  private int hsdevNumHistoricalPeriods = 3;
-
-  private Map<KsSingleRunKey, MetricAnomaly> trackedKsAnomalies = new HashMap<>();
-  private static final Log LOG = LogFactory.getLog(TrendADSystem.class);
-  private String inputFile = "";
-
-  public TrendADSystem(MetricsCollectorInterface metricsCollectorInterface,
-                       long ksTestIntervalMillis,
-                       long ksTrainIntervalMillis,
-                       int hsdevNumHistoricalPeriods) {
-
-    this.metricsCollectorInterface = metricsCollectorInterface;
-    this.ksTestIntervalMillis = ksTestIntervalMillis;
-    this.ksTrainIntervalMillis = ksTrainIntervalMillis;
-    this.hsdevNumHistoricalPeriods = hsdevNumHistoricalPeriods;
-
-    this.ksTechnique = new KSTechnique();
-    this.hsdevTechnique = new HsdevTechnique();
-
-    trendMetrics = new ArrayList<>();
-  }
-
-  public void runKSTest(long currentEndTime, Set<TrendMetric> trendMetrics) {
-    if (inputFile != null && !inputFile.isEmpty()) {
-      readInputFile(inputFile);
-    }
-
-    long ksTestIntervalStartTime = currentEndTime - ksTestIntervalMillis;
-    LOG.info("Running KS Test for test data interval [" + new Date(ksTestIntervalStartTime) + " : " +
-      new Date(currentEndTime) + "], with train data period [" + new Date(ksTestIntervalStartTime - ksTrainIntervalMillis)
-      + " : " + new Date(ksTestIntervalStartTime) + "]");
-
-    for (TrendMetric metric : trendMetrics) {
-      String metricName = metric.metricName;
-      String appId = metric.appId;
-      String hostname = metric.hostname;
-      String key = metricName + ":" + appId + ":" + hostname;
-
-      TimelineMetrics ksData = metricsCollectorInterface.fetchMetrics(metricName, appId, hostname, ksTestIntervalStartTime - ksTrainIntervalMillis,
-        currentEndTime);
-
-      if (ksData.getMetrics().isEmpty()) {
-        LOG.info("No metrics fetched for KS, metricKey = " + key);
-        continue;
-      }
-
-      List<Double> trainTsList = new ArrayList<>();
-      List<Double> trainDataList = new ArrayList<>();
-      List<Double> testTsList = new ArrayList<>();
-      List<Double> testDataList = new ArrayList<>();
-
-      for (TimelineMetric timelineMetric : ksData.getMetrics()) {
-        for (Long timestamp : timelineMetric.getMetricValues().keySet()) {
-          if (timestamp <= ksTestIntervalStartTime) {
-            trainDataList.add(timelineMetric.getMetricValues().get(timestamp));
-            trainTsList.add((double) timestamp);
-          } else {
-            testDataList.add(timelineMetric.getMetricValues().get(timestamp));
-            testTsList.add((double) timestamp);
-          }
-        }
-      }
-
-      LOG.info("Train Data size : " + trainDataList.size() + ", Test Data Size : " + testDataList.size());
-      if (trainDataList.isEmpty() || testDataList.isEmpty() || trainDataList.size() < testDataList.size()) {
-        LOG.info("Not enough train/test data to perform KS analysis.");
-        continue;
-      }
-
-      String ksTrainSeries = "KSTrainSeries";
-      double[] trainTs = new double[trainTsList.size()];
-      double[] trainData = new double[trainTsList.size()];
-      for (int i = 0; i < trainTs.length; i++) {
-        trainTs[i] = trainTsList.get(i);
-        trainData[i] = trainDataList.get(i);
-      }
-
-      String ksTestSeries = "KSTestSeries";
-      double[] testTs = new double[testTsList.size()];
-      double[] testData = new double[testTsList.size()];
-      for (int i = 0; i < testTs.length; i++) {
-        testTs[i] = testTsList.get(i);
-        testData[i] = testDataList.get(i);
-      }
-
-      LOG.info("Train Size = " + trainTs.length + ", Test Size = " + testTs.length);
-
-      DataSeries ksTrainData = new DataSeries(ksTrainSeries, trainTs, trainData);
-      DataSeries ksTestData = new DataSeries(ksTestSeries, testTs, testData);
-
-      MetricAnomaly metricAnomaly = ksTechnique.runKsTest(key, ksTrainData, ksTestData);
-      if (metricAnomaly == null) {
-        LOG.info("No anomaly from KS test.");
-      } else {
-        LOG.info("Found Anomaly in KS Test. Publishing KS Anomaly metric....");
-        TimelineMetric timelineMetric = getAsTimelineMetric(metricAnomaly,
-          ksTestIntervalStartTime, currentEndTime, ksTestIntervalStartTime - ksTrainIntervalMillis, ksTestIntervalStartTime);
-        TimelineMetrics timelineMetrics = new TimelineMetrics();
-        timelineMetrics.addOrMergeTimelineMetric(timelineMetric);
-        metricsCollectorInterface.emitMetrics(timelineMetrics);
-
-        trackedKsAnomalies.put(new KsSingleRunKey(ksTestIntervalStartTime, currentEndTime, metricName, appId, hostname), metricAnomaly);
-      }
-    }
-
-    if (trendMetrics.isEmpty()) {
-      LOG.info("No Trend metrics tracked!!!!");
-    }
-
-  }
-
-  private TimelineMetric getAsTimelineMetric(MetricAnomaly metricAnomaly,
-                                   long testStart,
-                                   long testEnd,
-                                   long trainStart,
-                                   long trainEnd) {
-
-    TimelineMetric timelineMetric = new TimelineMetric();
-    timelineMetric.setMetricName(metricAnomaly.getMetricKey());
-    timelineMetric.setAppId(MetricsCollectorInterface.serviceName + "-" + metricAnomaly.getMethodType());
-    timelineMetric.setInstanceId(null);
-    timelineMetric.setHostName(MetricsCollectorInterface.getDefaultLocalHostName());
-    timelineMetric.setStartTime(testEnd);
-    HashMap<String, String> metadata = new HashMap<>();
-    metadata.put("method", metricAnomaly.getMethodType());
-    metadata.put("anomaly-score", String.valueOf(metricAnomaly.getAnomalyScore()));
-    metadata.put("test-start-time", String.valueOf(testStart));
-    metadata.put("train-start-time", String.valueOf(trainStart));
-    metadata.put("train-end-time", String.valueOf(trainEnd));
-    timelineMetric.setMetadata(metadata);
-    TreeMap<Long,Double> metricValues = new TreeMap<>();
-    metricValues.put(testEnd, metricAnomaly.getMetricValue());
-    timelineMetric.setMetricValues(metricValues);
-    return timelineMetric;
-
-  }
-
-  public void runHsdevMethod() {
-
-    List<TimelineMetric> hsdevMetricAnomalies = new ArrayList<>();
-
-    for (KsSingleRunKey ksSingleRunKey : trackedKsAnomalies.keySet()) {
-
-      long hsdevTestEnd = ksSingleRunKey.endTime;
-      long hsdevTestStart = ksSingleRunKey.startTime;
-
-      long period = hsdevTestEnd - hsdevTestStart;
-
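-      // Train on hsdevNumHistoricalPeriods windows of the same length
-      // immediately preceding the KS test window.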
-      long hsdevTrainStart = hsdevTestStart - hsdevNumHistoricalPeriods * period;
-      long hsdevTrainEnd = hsdevTestStart;
-
-      LOG.info("Running HSdev Test for test data interval [" + new Date(hsdevTestStart) + " : " +
-        new Date(hsdevTestEnd) + "], with train data period [" + new Date(hsdevTrainStart)
-        + " : " + new Date(hsdevTrainEnd) + "]");
-
-      String metricName = ksSingleRunKey.metricName;
-      String appId = ksSingleRunKey.appId;
-      String hostname = ksSingleRunKey.hostname;
-      String key = metricName + "_" + appId + "_" + hostname;
-
-      TimelineMetrics hsdevData = metricsCollectorInterface.fetchMetrics(
-        metricName,
-        appId,
-        hostname,
-        hsdevTrainStart,
-        hsdevTestEnd);
-
-      if (hsdevData.getMetrics().isEmpty()) {
-        LOG.info("No metrics fetched for HSDev, metricKey = " + key);
-        continue;
-      }
-
-      List<Double> trainTsList = new ArrayList<>();
-      List<Double> trainDataList = new ArrayList<>();
-      List<Double> testTsList = new ArrayList<>();
-      List<Double> testDataList = new ArrayList<>();
-
-      for (TimelineMetric timelineMetric : hsdevData.getMetrics()) {
-        for (Long timestamp : timelineMetric.getMetricValues().keySet()) {
-          if (timestamp <= hsdevTestStart) {
-            trainDataList.add(timelineMetric.getMetricValues().get(timestamp));
-            trainTsList.add((double) timestamp);
-          } else {
-            testDataList.add(timelineMetric.getMetricValues().get(timestamp));
-            testTsList.add((double) timestamp);
-          }
-        }
-      }
-
-      if (trainDataList.isEmpty() || testDataList.isEmpty() || trainDataList.size() < testDataList.size()) {
-        LOG.info("Not enough train/test data to perform Hsdev analysis.");
-        continue;
-      }
-
-      String hsdevTrainSeries = "HsdevTrainSeries";
-      double[] trainTs = new double[trainTsList.size()];
-      double[] trainData = new double[trainTsList.size()];
-      for (int i = 0; i < trainTs.length; i++) {
-        trainTs[i] = trainTsList.get(i);
-        trainData[i] = trainDataList.get(i);
-      }
-
-      String hsdevTestSeries = "HsdevTestSeries";
-      double[] testTs = new double[testTsList.size()];
-      double[] testData = new double[testTsList.size()];
-      for (int i = 0; i < testTs.length; i++) {
-        testTs[i] = testTsList.get(i);
-        testData[i] = testDataList.get(i);
-      }
-
-      LOG.info("Train Size = " + trainTs.length + ", Test Size = " + testTs.length);
-
-      DataSeries hsdevTrainData = new DataSeries(hsdevTrainSeries, trainTs, trainData);
-      DataSeries hsdevTestData = new DataSeries(hsdevTestSeries, testTs, testData);
-
-      MetricAnomaly metricAnomaly = hsdevTechnique.runHsdevTest(key, hsdevTrainData, hsdevTestData);
-      if (metricAnomaly == null) {
-        LOG.info("No anomaly from Hsdev test. Mismatch between KS and HSDev. ");
-        ksTechnique.updateModel(key, false, 10);
-      } else {
-        LOG.info("Found Anomaly in Hsdev Test. This confirms KS anomaly.");
-        hsdevMetricAnomalies.add(getAsTimelineMetric(metricAnomaly,
-          hsdevTestStart, hsdevTestEnd, hsdevTrainStart, hsdevTrainEnd));
-      }
-    }
-    clearTrackedKsRunKeys();
-
-    if (!hsdevMetricAnomalies.isEmpty()) {
-      LOG.info("Publishing Hsdev Anomalies....");
-      TimelineMetrics timelineMetrics = new TimelineMetrics();
-      timelineMetrics.setMetrics(hsdevMetricAnomalies);
-      metricsCollectorInterface.emitMetrics(timelineMetrics);
-    }
-  }
-
-  private void clearTrackedKsRunKeys() {
-    trackedKsAnomalies.clear();
-  }
-
-  private void readInputFile(String fileName) {
-    trendMetrics.clear();
-    try (BufferedReader br = new BufferedReader(new FileReader(fileName))) {
-      for (String line; (line = br.readLine()) != null; ) {
-        String[] splits = line.split(",");
-        if (splits.length < 3) {
-          LOG.warn("Skipping malformed trend metric definition : " + line);
-          continue;
-        }
-        LOG.info("Adding a new metric to track in Trend AD system : " + splits[0]);
-        trendMetrics.add(new TrendMetric(splits[0], splits[1], splits[2]));
-      }
-    } catch (IOException e) {
-      LOG.error("Error reading input file : " + e);
-    }
-  }
-
-  class KsSingleRunKey implements Serializable {
-
-    long startTime;
-    long endTime;
-    String metricName;
-    String appId;
-    String hostname;
-
-    public KsSingleRunKey(long startTime, long endTime, String metricName, String appId, String hostname) {
-      this.startTime = startTime;
-      this.endTime = endTime;
-      this.metricName = metricName;
-      this.appId = appId;
-      this.hostname = hostname;
-    }
-  }
-}
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/core/TrendMetric.java b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/core/TrendMetric.java
deleted file mode 100644
index d4db227..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/core/TrendMetric.java
+++ /dev/null
@@ -1,33 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.ambari.metrics.adservice.prototype.core;
-
-import java.io.Serializable;
-
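-/**
- * Identifies a single metric series (metric name, appId, host) tracked by
- * the trend anomaly detection subsystem.
- */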
-public class TrendMetric implements Serializable {
-
-  String metricName;
-  String appId;
-  String hostname;
-
-  public TrendMetric(String metricName, String appId, String hostname) {
-    this.metricName = metricName;
-    this.appId = appId;
-    this.hostname = hostname;
-  }
-}
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/methods/AnomalyDetectionTechnique.java b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/methods/AnomalyDetectionTechnique.java
deleted file mode 100644
index c19adda..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/methods/AnomalyDetectionTechnique.java
+++ /dev/null
@@ -1,30 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- * <p>
- * http://www.apache.org/licenses/LICENSE-2.0
- * <p>
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.ambari.metrics.adservice.prototype.methods;
-
-import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
-
-import java.util.List;
-
-public abstract class AnomalyDetectionTechnique {
-
-  protected String methodType;
-
-  public abstract List<MetricAnomaly> test(TimelineMetric metric);
-
-}
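
The abstract class above is the prototype's plug-in contract: a technique carries a
method-type tag and turns one TimelineMetric into a list of MetricAnomaly objects.
As an illustrative sketch (not part of this patch; the class name and threshold
parameter are invented), a minimal concrete subclass could look like this:

    package org.apache.ambari.metrics.adservice.prototype.methods;

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Map;

    import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;

    public class ThresholdTechnique extends AnomalyDetectionTechnique {

      private final double threshold;

      public ThresholdTechnique(double threshold) {
        this.methodType = "threshold"; // tag reported on every anomaly
        this.threshold = threshold;
      }

      @Override
      public List<MetricAnomaly> test(TimelineMetric metric) {
        List<MetricAnomaly> anomalies = new ArrayList<>();
        String key = metric.getMetricName() + ":" + metric.getAppId() + ":" + metric.getHostName();
        for (Map.Entry<Long, Double> entry : metric.getMetricValues().entrySet()) {
          if (entry.getValue() > threshold) { // flag any point above the fixed level
            anomalies.add(new MetricAnomaly(key, entry.getKey(), entry.getValue(),
                methodType, entry.getValue() / threshold));
          }
        }
        return anomalies;
      }
    }
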
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/methods/MetricAnomaly.java b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/methods/MetricAnomaly.java
deleted file mode 100644
index 60ff11c..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/methods/MetricAnomaly.java
+++ /dev/null
@@ -1,84 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- * <p>
- * http://www.apache.org/licenses/LICENSE-2.0
- * <p>
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.ambari.metrics.adservice.prototype.methods;
-
-import java.io.Serializable;
-
-public class MetricAnomaly implements Serializable {
-
-  private String methodType;
-  private double anomalyScore;
-  private String metricKey;
-  private long timestamp;
-  private double metricValue;
-
-
-  public MetricAnomaly(String metricKey, long timestamp, double metricValue, String methodType, double anomalyScore) {
-    this.metricKey = metricKey;
-    this.timestamp = timestamp;
-    this.metricValue = metricValue;
-    this.methodType = methodType;
-    this.anomalyScore = anomalyScore;
-
-  }
-
-  public String getMethodType() {
-    return methodType;
-  }
-
-  public void setMethodType(String methodType) {
-    this.methodType = methodType;
-  }
-
-  public double getAnomalyScore() {
-    return anomalyScore;
-  }
-
-  public void setAnomalyScore(double anomalyScore) {
-    this.anomalyScore = anomalyScore;
-  }
-
-  public void setMetricKey(String metricKey) {
-    this.metricKey = metricKey;
-  }
-
-  public String getMetricKey() {
-    return metricKey;
-  }
-
-  public void setMetricName(String metricName) {
-    // Alias for setMetricKey: anomalies are keyed by the metric name/key string.
-    this.metricKey = metricName;
-  }
-
-  public long getTimestamp() {
-    return timestamp;
-  }
-
-  public void setTimestamp(long timestamp) {
-    this.timestamp = timestamp;
-  }
-
-  public double getMetricValue() {
-    return metricValue;
-  }
-
-  public void setMetricValue(double metricValue) {
-    this.metricValue = metricValue;
-  }
-
-}
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/methods/ema/EmaModel.java b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/methods/ema/EmaModel.java
deleted file mode 100644
index 593028e..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/methods/ema/EmaModel.java
+++ /dev/null
@@ -1,131 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- * <p>
- * http://www.apache.org/licenses/LICENSE-2.0
- * <p>
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.ambari.metrics.adservice.prototype.methods.ema;
-
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-
-import javax.xml.bind.annotation.XmlRootElement;
-import java.io.Serializable;
-
-import static org.apache.ambari.metrics.adservice.prototype.methods.ema.EmaTechnique.suppressAnomaliesThreshold;
-
-@XmlRootElement
-public class EmaModel implements Serializable {
-
-  private String metricName;
-  private String hostname;
-  private String appId;
-  private double ema;
-  private double ems;
-  private double weight;
-  private double timessdev;
-
-  private int ctr = 0;
-
-  private static final Log LOG = LogFactory.getLog(EmaModel.class);
-
-  public EmaModel(String name, String hostname, String appId, double weight, double timessdev) {
-    this.metricName = name;
-    this.hostname = hostname;
-    this.appId = appId;
-    this.weight = weight;
-    this.timessdev = timessdev;
-    this.ema = 0.0;
-    this.ems = 0.0;
-  }
-
-  public String getMetricName() {
-    return metricName;
-  }
-
-  public String getHostname() {
-    return hostname;
-  }
-
-  public String getAppId() {
-    return appId;
-  }
-
-  public double testAndUpdate(double metricValue) {
-
-    double anomalyScore = 0.0;
-    LOG.info("Before Update ->" + metricName + ":" + appId + ":" + hostname + " - " + "ema = " + ema + ", ems = " + ems + ", timessdev = " + timessdev);
-    update(metricValue);
-    if (ctr > suppressAnomaliesThreshold) {
-      anomalyScore = test(metricValue);
-      if (anomalyScore > 0.0) {
-        LOG.info("Anomaly ->" + metricName + ":" + appId + ":" + hostname + " - " + "ema = " + ema + ", ems = " + ems +
-          ", timessdev = " + timessdev + ", metricValue = " + metricValue);
-      } else {
-        LOG.info("Not an Anomaly ->" + metricName + ":" + appId + ":" + hostname + " - " + "ema = " + ema + ", ems = " + ems +
-          ", timessdev = " + timessdev + ", metricValue = " + metricValue);
-      }
-    } else {
-      ctr++;
-      if (ctr > suppressAnomaliesThreshold) {
-        LOG.info("Ema Model for " + metricName + ":" + appId + ":" + hostname + " is ready for testing data.");
-      }
-    }
-    return anomalyScore;
-  }
-
-  public void update(double metricValue) {
-    ema = weight * ema + (1 - weight) * metricValue;
-    ems = Math.sqrt(weight * Math.pow(ems, 2.0) + (1 - weight) * Math.pow(metricValue - ema, 2.0));
-    LOG.debug("In update : ema = " + ema + ", ems = " + ems);
-  }
-
-  public double test(double metricValue) {
-    LOG.debug("In test : ema = " + ema + ", ems = " + ems);
-    double diff = Math.abs(ema - metricValue) - (timessdev * ems);
-    LOG.debug("diff = " + diff);
-    if (diff > 0) {
-      return Math.abs((metricValue - ema) / ems); //Z score
-    } else {
-      return 0.0;
-    }
-  }
-
-  public void updateModel(boolean increaseSensitivity, double percent) {
-    LOG.info("Updating model for " + metricName + " with increaseSensitivity = " + increaseSensitivity + ", percent = " + percent);
-    double delta = percent / 100;
-    if (increaseSensitivity) {
-      delta = delta * -1;
-    }
-    this.timessdev = timessdev + delta * timessdev;
-    //this.weight = Math.min(1.0, weight + delta * weight);
-    LOG.info("New model parameters " + metricName + " : timessdev = " + timessdev + ", weight = " + weight);
-  }
-
-  public double getWeight() {
-    return weight;
-  }
-
-  public void setWeight(double weight) {
-    this.weight = weight;
-  }
-
-  public double getTimessdev() {
-    return timessdev;
-  }
-
-  public void setTimessdev(double timessdev) {
-    this.timessdev = timessdev;
-  }
-}
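
In EmaModel above, testAndUpdate() first folds the point into the running estimates
and then scores it: ema tracks an exponentially weighted mean, ems an exponentially
weighted standard deviation, and a point is anomalous when it sits more than
timessdev deviations from the mean, reported as the Z score |x - ema| / ems. The
same arithmetic in a self-contained sketch (names are illustrative, not from the patch):

    // Standalone EMA/EMS recurrence mirroring EmaModel.update()/test() above.
    public final class EmaSketch {

      private double ema = 0.0; // exponentially weighted moving average
      private double ems = 0.0; // exponentially weighted moving std deviation
      private final double weight;    // w: how much history is retained
      private final double timessdev; // n: width of the tolerated band, in sdevs

      public EmaSketch(double weight, double timessdev) {
        this.weight = weight;
        this.timessdev = timessdev;
      }

      // Folds in one point and returns its Z score if anomalous, else 0.0.
      public double testAndUpdate(double x) {
        ema = weight * ema + (1 - weight) * x;
        ems = Math.sqrt(weight * ems * ems + (1 - weight) * (x - ema) * (x - ema));
        double diff = Math.abs(x - ema) - timessdev * ems;
        return (diff > 0 && ems > 0) ? Math.abs((x - ema) / ems) : 0.0; // guard div-by-zero
      }
    }
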
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/methods/ema/EmaModelLoader.java b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/methods/ema/EmaModelLoader.java
deleted file mode 100644
index 7623f27..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/methods/ema/EmaModelLoader.java
+++ /dev/null
@@ -1,40 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.ambari.metrics.adservice.prototype.methods.ema;
-
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-import org.apache.spark.SparkContext;
-import org.apache.spark.mllib.util.Loader;
-
-public class EmaModelLoader implements Loader<EmaTechnique> {
-    private static final Log LOG = LogFactory.getLog(EmaModelLoader.class);
-
-    @Override
-    public EmaTechnique load(SparkContext sc, String path) {
-        // Prototype stub: the JSON deserialization below is commented out, the
-        // path argument is ignored, and a default-configured technique is returned.
-        return new EmaTechnique(0.5, 3);
-//        Gson gson = new Gson();
-//        try {
-//            String fileString = new String(Files.readAllBytes(Paths.get(path)), StandardCharsets.UTF_8);
-//            return gson.fromJson(fileString, EmaTechnique.class);
-//        } catch (IOException e) {
-//            LOG.error(e);
-//        }
-//        return null;
-    }
-}
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/methods/ema/EmaTechnique.java b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/methods/ema/EmaTechnique.java
deleted file mode 100644
index 7ec17d8..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/methods/ema/EmaTechnique.java
+++ /dev/null
@@ -1,151 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- * <p>
- * http://www.apache.org/licenses/LICENSE-2.0
- * <p>
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.ambari.metrics.adservice.prototype.methods.ema;
-
-import com.google.gson.Gson;
-import org.apache.ambari.metrics.adservice.prototype.methods.AnomalyDetectionTechnique;
-import org.apache.ambari.metrics.adservice.prototype.methods.MetricAnomaly;
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
-import org.apache.spark.SparkContext;
-import org.apache.spark.mllib.util.Saveable;
-
-import javax.xml.bind.annotation.XmlElement;
-import javax.xml.bind.annotation.XmlRootElement;
-import java.io.BufferedWriter;
-import java.io.FileOutputStream;
-import java.io.IOException;
-import java.io.OutputStreamWriter;
-import java.io.Serializable;
-import java.io.Writer;
-import java.util.ArrayList;
-import java.util.HashMap;
-import java.util.List;
-import java.util.Map;
-
-@XmlRootElement
-public class EmaTechnique extends AnomalyDetectionTechnique implements Serializable, Saveable {
-
-  @XmlElement(name = "trackedEmas")
-  private Map<String, EmaModel> trackedEmas;
-  private static final Log LOG = LogFactory.getLog(EmaTechnique.class);
-
-  private double startingWeight = 0.5;
-  private double startTimesSdev = 3.0;
-  private String methodType = "ema";
-  public static int suppressAnomaliesThreshold = 100;
-
-  public EmaTechnique(double startingWeight, double startTimesSdev, int suppressAnomaliesThreshold) {
-    trackedEmas = new HashMap<>();
-    this.startingWeight = startingWeight;
-    this.startTimesSdev = startTimesSdev;
-    EmaTechnique.suppressAnomaliesThreshold = suppressAnomaliesThreshold;
-    LOG.info("New EmaTechnique......");
-  }
-
-  public EmaTechnique(double startingWeight, double startTimesSdev) {
-    trackedEmas = new HashMap<>();
-    this.startingWeight = startingWeight;
-    this.startTimesSdev = startTimesSdev;
-    LOG.info("New EmaTechnique......");
-  }
-
-  public List<MetricAnomaly> test(TimelineMetric metric) {
-    String metricName = metric.getMetricName();
-    String appId = metric.getAppId();
-    String hostname = metric.getHostName();
-    String key = metricName + ":" + appId + ":" + hostname;
-
-    EmaModel emaModel = trackedEmas.get(key);
-    if (emaModel == null) {
-      LOG.debug("EmaModel not present for " + key);
-      LOG.debug("Number of tracked Emas : " + trackedEmas.size());
-      emaModel  = new EmaModel(metricName, hostname, appId, startingWeight, startTimesSdev);
-      trackedEmas.put(key, emaModel);
-    } else {
-      LOG.debug("EmaModel already present for " + key);
-    }
-
-    List<MetricAnomaly> anomalies = new ArrayList<>();
-
-    for (Long timestamp : metric.getMetricValues().keySet()) {
-      double metricValue = metric.getMetricValues().get(timestamp);
-      double anomalyScore = emaModel.testAndUpdate(metricValue);
-      if (anomalyScore > 0.0) {
-        LOG.info("Found anomaly for : " + key + ", anomalyScore = " + anomalyScore);
-        MetricAnomaly metricAnomaly = new MetricAnomaly(key, timestamp, metricValue, methodType, anomalyScore);
-        anomalies.add(metricAnomaly);
-      } else {
-        LOG.debug("Discarding non-anomaly for : " + key);
-      }
-    }
-    return anomalies;
-  }
-
-  public boolean updateModel(TimelineMetric timelineMetric, boolean increaseSensitivity, double percent) {
-    String metricName = timelineMetric.getMetricName();
-    String appId = timelineMetric.getAppId();
-    String hostname = timelineMetric.getHostName();
-    // Use the same ":"-separated key format as test(), so model lookups succeed.
-    String key = metricName + ":" + appId + ":" + hostname;
-
-
-    EmaModel emaModel = trackedEmas.get(key);
-
-    if (emaModel == null) {
-      LOG.warn("EMA Model for " + key + " not found");
-      return false;
-    }
-    emaModel.updateModel(increaseSensitivity, percent);
-
-    return true;
-  }
-
-  @Override
-  public void save(SparkContext sc, String path) {
-    Gson gson = new Gson();
-    try {
-      String json = gson.toJson(this);
-      try (Writer writer = new BufferedWriter(new OutputStreamWriter(
-        new FileOutputStream(path), "utf-8"))) {
-        writer.write(json);
-      }
-    } catch (IOException e) {
-      LOG.error(e);
-    }
-  }
-
-  @Override
-  public String formatVersion() {
-    return "1.0";
-  }
-
-  public Map<String, EmaModel> getTrackedEmas() {
-    return trackedEmas;
-  }
-
-  public double getStartingWeight() {
-    return startingWeight;
-  }
-
-  public double getStartTimesSdev() {
-    return startTimesSdev;
-  }
-
-}
-
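
EmaTechnique above keeps one EmaModel per metricName:appId:hostname key, creating
models lazily as new series arrive and suppressing alerts until a model has seen
enough points. A hedged usage sketch (series name, host, and values are invented;
the TimelineMetric setters mirror the test utilities later in this patch):

    import java.util.List;
    import java.util.TreeMap;

    import org.apache.ambari.metrics.adservice.prototype.methods.MetricAnomaly;
    import org.apache.ambari.metrics.adservice.prototype.methods.ema.EmaTechnique;
    import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;

    public class EmaTechniqueDemo {
      public static void main(String[] args) {
        TimelineMetric metric = new TimelineMetric();
        metric.setMetricName("cpu_user");
        metric.setAppId("HOST");
        metric.setHostName("host1.example.com");

        TreeMap<Long, Double> values = new TreeMap<>();
        long start = System.currentTimeMillis();
        for (int i = 0; i < 200; i++) {
          values.put(start + i * 60000L, 10.0 + Math.random()); // steady minute-level series
        }
        values.put(start + 200 * 60000L, 500.0);                // one obvious spike
        metric.setMetricValues(values);

        // weight = 0.9, 3-sdev band, start testing after 100 training points
        EmaTechnique ema = new EmaTechnique(0.9, 3.0, 100);
        List<MetricAnomaly> anomalies = ema.test(metric);
        System.out.println("Anomalies found: " + anomalies.size());
      }
    }

Note the suppression counter: the first suppressAnomaliesThreshold points only train
the model, so a series shorter than that will never report anything.
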
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/methods/hsdev/HsdevTechnique.java b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/methods/hsdev/HsdevTechnique.java
deleted file mode 100644
index 855cc70..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/methods/hsdev/HsdevTechnique.java
+++ /dev/null
@@ -1,82 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.ambari.metrics.adservice.prototype.methods.hsdev;
-
-import org.apache.ambari.metrics.adservice.prototype.common.DataSeries;
-import org.apache.ambari.metrics.adservice.prototype.methods.MetricAnomaly;
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-
-import java.io.Serializable;
-import java.util.Date;
-import java.util.HashMap;
-import java.util.Map;
-
-import static org.apache.ambari.metrics.adservice.prototype.common.StatisticUtils.median;
-import static org.apache.ambari.metrics.adservice.prototype.common.StatisticUtils.sdev;
-
-public class HsdevTechnique implements Serializable {
-
-  private Map<String, Double> hsdevMap;
-  private String methodType = "hsdev";
-  private static final Log LOG = LogFactory.getLog(HsdevTechnique.class);
-
-  public HsdevTechnique() {
-    hsdevMap = new HashMap<>();
-  }
-
-  public MetricAnomaly runHsdevTest(String key, DataSeries trainData, DataSeries testData) {
-    int testLength = testData.values.length;
-    int trainLength = trainData.values.length;
-
-    if (trainLength < testLength) {
-      LOG.info("Not enough train data.");
-      return null;
-    }
-
-    if (!hsdevMap.containsKey(key)) {
-      hsdevMap.put(key, 3.0);
-    }
-
-    double n = hsdevMap.get(key);
-
-    double historicSd = sdev(trainData.values, false);
-    double historicMedian = median(trainData.values);
-    double currentMedian = median(testData.values);
-
-
-    if (historicSd > 0) {
-      double diff = Math.abs(currentMedian - historicMedian);
-
-      if (diff > n * historicSd) {
-        double zScore = diff / historicSd;
-        LOG.info("Found anomaly for metric : " + key + " in the period ending " + new Date((long) testData.ts[testLength - 1]));
-        LOG.info("Current median = " + currentMedian + ", Historic Median = " + historicMedian + ", HistoricSd = " + historicSd);
-        LOG.info("Z Score of current series : " + zScore);
-        return new MetricAnomaly(key,
-          (long) testData.ts[testLength - 1],
-          testData.values[testLength - 1],
-          methodType,
-          zScore);
-      }
-    }
-
-    return null;
-  }
-
-}
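
The hsdev rule above compares medians rather than individual points: the test window
is anomalous when its median drifts more than n historic standard deviations away
from the training median. Reduced to a single method (illustrative only):

    // Hsdev decision rule in isolation: a result > 0 means the window is anomalous
    // and gives the Z score; 0.0 means no anomaly.
    static double hsdevScore(double historicMedian, double historicSd,
                             double currentMedian, double n) {
      if (historicSd <= 0) {
        return 0.0; // a flat history cannot flag anything
      }
      double diff = Math.abs(currentMedian - historicMedian);
      return diff > n * historicSd ? diff / historicSd : 0.0;
    }
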
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/methods/kstest/KSTechnique.java b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/methods/kstest/KSTechnique.java
deleted file mode 100644
index 0dc679e..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/methods/kstest/KSTechnique.java
+++ /dev/null
@@ -1,101 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- * <p>
- * http://www.apache.org/licenses/LICENSE-2.0
- * <p>
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.ambari.metrics.adservice.prototype.methods.kstest;
-
-import org.apache.ambari.metrics.adservice.prototype.common.DataSeries;
-import org.apache.ambari.metrics.adservice.prototype.common.ResultSet;
-import org.apache.ambari.metrics.adservice.prototype.core.RFunctionInvoker;
-import org.apache.ambari.metrics.adservice.prototype.methods.MetricAnomaly;
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-
-import java.io.Serializable;
-import java.util.Collections;
-import java.util.HashMap;
-import java.util.Map;
-
-public class KSTechnique implements Serializable {
-
-  private String methodType = "ks";
-  private Map<String, Double> pValueMap;
-  private static final Log LOG = LogFactory.getLog(KSTechnique.class);
-
-  public KSTechnique() {
-    pValueMap = new HashMap<>();
-  }
-
-  public MetricAnomaly runKsTest(String key, DataSeries trainData, DataSeries testData) {
-
-    int testLength = testData.values.length;
-    int trainLength = trainData.values.length;
-
-    if (trainLength < testLength) {
-      LOG.info("Not enough train data.");
-      return null;
-    }
-
-    if (!pValueMap.containsKey(key)) {
-      pValueMap.put(key, 0.05);
-    }
-    double pValue = pValueMap.get(key);
-
-    ResultSet result = RFunctionInvoker.ksTest(trainData, testData, Collections.singletonMap("ks.p_value", String.valueOf(pValue)));
-    if (result == null) {
-      LOG.error("Resultset is null when invoking KS R function...");
-      return null;
-    }
-
-    if (result.resultset.size() > 0) {
-
-      LOG.info("KS resultset row length = " + result.resultset.get(0).length);
-      LOG.info("p_value = " + result.resultset.get(3)[0]);
-      double dValue = result.resultset.get(2)[0]; // D statistic returned by the R ks.test wrapper
-
-      return new MetricAnomaly(key,
-        (long) testData.ts[testLength - 1],
-        testData.values[testLength - 1],
-        methodType,
-        dValue);
-    }
-
-    return null;
-  }
-
-  public void updateModel(String metricKey, boolean increaseSensitivity, double percent) {
-
-    LOG.info("Updating KS model for " + metricKey + " with increaseSensitivity = " + increaseSensitivity + ", percent = " + percent);
-
-    if (!pValueMap.containsKey(metricKey)) {
-      LOG.error("Unknown metric key : " + metricKey);
-      LOG.info("pValueMap :" + pValueMap.toString());
-      return;
-    }
-
-    double delta = percent / 100;
-    if (!increaseSensitivity) {
-      delta = delta * -1;
-    }
-
-    double pValue = pValueMap.get(metricKey);
-    double newPValue = Math.min(1.0, pValue + delta * pValue);
-    pValueMap.put(metricKey, newPValue);
-    LOG.info("New pValue = " + newPValue);
-  }
-
-}
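
KSTechnique delegates the two-sample Kolmogorov-Smirnov test to R through
RFunctionInvoker and reads the D statistic and p-value back out of the ResultSet.
For reference, the same test also exists on the JVM in commons-math3; this is only
a sketch of that alternative, since commons-math3 is not a dependency of this patch:

    import org.apache.commons.math3.stat.inference.KolmogorovSmirnovTest;

    // Two-sample KS test without the R round-trip (requires commons-math3 >= 3.3).
    static boolean ksAnomalous(double[] trainValues, double[] testValues, double pValueThreshold) {
      KolmogorovSmirnovTest ks = new KolmogorovSmirnovTest();
      double d = ks.kolmogorovSmirnovStatistic(trainValues, testValues); // D statistic
      double p = ks.kolmogorovSmirnovTest(trainValues, testValues);      // approximate p-value
      return p < pValueThreshold && d > 0;
    }
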
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/testing/utilities/MetricAnomalyDetectorTestInput.java b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/testing/utilities/MetricAnomalyDetectorTestInput.java
deleted file mode 100644
index 9a002a1..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/testing/utilities/MetricAnomalyDetectorTestInput.java
+++ /dev/null
@@ -1,126 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.ambari.metrics.adservice.prototype.testing.utilities;
-
-import javax.xml.bind.annotation.XmlRootElement;
-import java.util.List;
-import java.util.Map;
-
-@XmlRootElement
-public class MetricAnomalyDetectorTestInput {
-
-  public MetricAnomalyDetectorTestInput() {
-  }
-
-  //Train data
-  private String trainDataName;
-  private String trainDataType;
-  private Map<String, String> trainDataConfigs;
-  private int trainDataSize;
-
-  //Test data
-  private String testDataName;
-  private String testDataType;
-  private Map<String, String> testDataConfigs;
-  private int testDataSize;
-
-  //Algorithm data
-  private List<String> methods;
-  private Map<String, String> methodConfigs;
-
-  public String getTrainDataName() {
-    return trainDataName;
-  }
-
-  public void setTrainDataName(String trainDataName) {
-    this.trainDataName = trainDataName;
-  }
-
-  public String getTrainDataType() {
-    return trainDataType;
-  }
-
-  public void setTrainDataType(String trainDataType) {
-    this.trainDataType = trainDataType;
-  }
-
-  public Map<String, String> getTrainDataConfigs() {
-    return trainDataConfigs;
-  }
-
-  public void setTrainDataConfigs(Map<String, String> trainDataConfigs) {
-    this.trainDataConfigs = trainDataConfigs;
-  }
-
-  public String getTestDataName() {
-    return testDataName;
-  }
-
-  public void setTestDataName(String testDataName) {
-    this.testDataName = testDataName;
-  }
-
-  public String getTestDataType() {
-    return testDataType;
-  }
-
-  public void setTestDataType(String testDataType) {
-    this.testDataType = testDataType;
-  }
-
-  public Map<String, String> getTestDataConfigs() {
-    return testDataConfigs;
-  }
-
-  public void setTestDataConfigs(Map<String, String> testDataConfigs) {
-    this.testDataConfigs = testDataConfigs;
-  }
-
-  public Map<String, String> getMethodConfigs() {
-    return methodConfigs;
-  }
-
-  public void setMethodConfigs(Map<String, String> methodConfigs) {
-    this.methodConfigs = methodConfigs;
-  }
-
-  public int getTrainDataSize() {
-    return trainDataSize;
-  }
-
-  public void setTrainDataSize(int trainDataSize) {
-    this.trainDataSize = trainDataSize;
-  }
-
-  public int getTestDataSize() {
-    return testDataSize;
-  }
-
-  public void setTestDataSize(int testDataSize) {
-    this.testDataSize = testDataSize;
-  }
-
-  public List<String> getMethods() {
-    return methods;
-  }
-
-  public void setMethods(List<String> methods) {
-    this.methods = methods;
-  }
-}
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/testing/utilities/MetricAnomalyTester.java b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/testing/utilities/MetricAnomalyTester.java
deleted file mode 100644
index 10b3a71..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/testing/utilities/MetricAnomalyTester.java
+++ /dev/null
@@ -1,151 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.ambari.metrics.adservice.prototype.testing.utilities;
-
-/**
- * Class which was originally used to send test series from AMS to Spark through Kafka.
- */
-public class MetricAnomalyTester {
-
-//  public static String appId = MetricsCollectorInterface.serviceName;
-//  static final Log LOG = LogFactory.getLog(MetricAnomalyTester.class);
-//  static Map<String, TimelineMetric> timelineMetricMap = new HashMap<>();
-//
-//  public static TimelineMetrics runTestAnomalyRequest(MetricAnomalyDetectorTestInput input) throws UnknownHostException {
-//
-//    long currentTime = System.currentTimeMillis();
-//    TimelineMetrics timelineMetrics = new TimelineMetrics();
-//    String hostname = InetAddress.getLocalHost().getHostName();
-//
-//    //Train data
-//    TimelineMetric metric1 = new TimelineMetric();
-//    if (StringUtils.isNotEmpty(input.getTrainDataName())) {
-//      metric1 = timelineMetricMap.get(input.getTrainDataName());
-//      if (metric1 == null) {
-//        metric1 = new TimelineMetric();
-//        double[] trainSeries = MetricSeriesGeneratorFactory.generateSeries(input.getTrainDataType(), input.getTrainDataSize(), input.getTrainDataConfigs());
-//        metric1.setMetricName(input.getTrainDataName());
-//        metric1.setAppId(appId);
-//        metric1.setHostName(hostname);
-//        metric1.setStartTime(currentTime);
-//        metric1.setInstanceId(null);
-//        metric1.setMetricValues(getAsTimeSeries(currentTime, trainSeries));
-//        timelineMetricMap.put(input.getTrainDataName(), metric1);
-//      }
-//      timelineMetrics.getMetrics().add(metric1);
-//    } else {
-//      LOG.error("No train data name specified");
-//    }
-//
-//    //Test data
-//    TimelineMetric metric2 = new TimelineMetric();
-//    if (StringUtils.isNotEmpty(input.getTestDataName())) {
-//      metric2 = timelineMetricMap.get(input.getTestDataName());
-//      if (metric2 == null) {
-//        metric2 = new TimelineMetric();
-//        double[] testSeries = MetricSeriesGeneratorFactory.generateSeries(input.getTestDataType(), input.getTestDataSize(), input.getTestDataConfigs());
-//        metric2.setMetricName(input.getTestDataName());
-//        metric2.setAppId(appId);
-//        metric2.setHostName(hostname);
-//        metric2.setStartTime(currentTime);
-//        metric2.setInstanceId(null);
-//        metric2.setMetricValues(getAsTimeSeries(currentTime, testSeries));
-//        timelineMetricMap.put(input.getTestDataName(), metric2);
-//      }
-//      timelineMetrics.getMetrics().add(metric2);
-//    } else {
-//      LOG.warn("No test data name specified");
-//    }
-//
-//    //Invoke method
-//    if (CollectionUtils.isNotEmpty(input.getMethods())) {
-//      RFunctionInvoker.setScriptsDir("/etc/ambari-metrics-collector/conf/R-scripts");
-//      for (String methodType : input.getMethods()) {
-//        ResultSet result = RFunctionInvoker.executeMethod(methodType, getAsDataSeries(metric1), getAsDataSeries(metric2), input.getMethodConfigs());
-//        TimelineMetric timelineMetric = getAsTimelineMetric(result, methodType, input, currentTime, hostname);
-//        if (timelineMetric != null) {
-//          timelineMetrics.getMetrics().add(timelineMetric);
-//        }
-//      }
-//    } else {
-//      LOG.warn("No anomaly method requested");
-//    }
-//
-//    return timelineMetrics;
-//  }
-//
-//
-//  private static TimelineMetric getAsTimelineMetric(ResultSet result, String methodType, MetricAnomalyDetectorTestInput input, long currentTime, String hostname) {
-//
-//    if (result == null) {
-//      return null;
-//    }
-//
-//    TimelineMetric timelineMetric = new TimelineMetric();
-//    if (methodType.equals("tukeys") || methodType.equals("ema")) {
-//      timelineMetric.setMetricName(input.getTrainDataName() + "_" + input.getTestDataName() + "_" + methodType + "_" + currentTime);
-//      timelineMetric.setHostName(hostname);
-//      timelineMetric.setAppId(appId);
-//      timelineMetric.setInstanceId(null);
-//      timelineMetric.setStartTime(currentTime);
-//
-//      TreeMap<Long, Double> metricValues = new TreeMap<>();
-//      if (result.resultset.size() > 0) {
-//        double[] ts = result.resultset.get(0);
-//        double[] metrics = result.resultset.get(1);
-//        for (int i = 0; i < ts.length; i++) {
-//          if (i == 0) {
-//            timelineMetric.setStartTime((long) ts[i]);
-//          }
-//          metricValues.put((long) ts[i], metrics[i]);
-//        }
-//      }
-//      timelineMetric.setMetricValues(metricValues);
-//      return timelineMetric;
-//    }
-//    return null;
-//  }
-//
-//
-//  private static TreeMap<Long, Double> getAsTimeSeries(long currentTime, double[] values) {
-//
-//    long startTime = currentTime - (values.length - 1) * 60 * 1000;
-//    TreeMap<Long, Double> metricValues = new TreeMap<>();
-//
-//    for (int i = 0; i < values.length; i++) {
-//      metricValues.put(startTime, values[i]);
-//      startTime += (60 * 1000);
-//    }
-//    return metricValues;
-//  }
-//
-//  private static DataSeries getAsDataSeries(TimelineMetric timelineMetric) {
-//
-//    TreeMap<Long, Double> metricValues = timelineMetric.getMetricValues();
-//    double[] timestamps = new double[metricValues.size()];
-//    double[] values = new double[metricValues.size()];
-//    int i = 0;
-//
-//    for (Long timestamp : metricValues.keySet()) {
-//      timestamps[i] = timestamp;
-//      values[i++] = metricValues.get(timestamp);
-//    }
-//    return new DataSeries(timelineMetric.getMetricName() + "_" + timelineMetric.getAppId() + "_" + timelineMetric.getHostName(), timestamps, values);
-//  }
-}
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/testing/utilities/TestMetricSeriesGenerator.java b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/testing/utilities/TestMetricSeriesGenerator.java
deleted file mode 100644
index 3b2605b..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/testing/utilities/TestMetricSeriesGenerator.java
+++ /dev/null
@@ -1,92 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.ambari.metrics.adservice.prototype.testing.utilities;
-
-/**
- * Class which was originally used to send test series from AMS to Spark through Kafka.
- */
-
-public class TestMetricSeriesGenerator {
-  //implements Runnable {
-
-//  private Map<TestSeriesInputRequest, AbstractMetricSeries> configuredSeries = new HashMap<>();
-//  private static final Log LOG = LogFactory.getLog(TestMetricSeriesGenerator.class);
-//  private TimelineMetricStore metricStore;
-//  private String hostname;
-//
-//  public TestMetricSeriesGenerator(TimelineMetricStore metricStore) {
-//    this.metricStore = metricStore;
-//    try {
-//      this.hostname = InetAddress.getLocalHost().getHostName();
-//    } catch (UnknownHostException e) {
-//      e.printStackTrace();
-//    }
-//  }
-//
-//  public void addSeries(TestSeriesInputRequest inputRequest) {
-//    if (!configuredSeries.containsKey(inputRequest)) {
-//      AbstractMetricSeries metricSeries = MetricSeriesGeneratorFactory.generateSeries(inputRequest.getSeriesType(), inputRequest.getConfigs());
-//      configuredSeries.put(inputRequest, metricSeries);
-//      LOG.info("Added series " + inputRequest.getSeriesName());
-//    }
-//  }
-//
-//  public void removeSeries(String seriesName) {
-//    boolean isPresent = false;
-//    TestSeriesInputRequest tbd = null;
-//    for (TestSeriesInputRequest inputRequest : configuredSeries.keySet()) {
-//      if (inputRequest.getSeriesName().equals(seriesName)) {
-//        isPresent = true;
-//        tbd = inputRequest;
-//      }
-//    }
-//    if (isPresent) {
-//      LOG.info("Removing series " + seriesName);
-//      configuredSeries.remove(tbd);
-//    } else {
-//      LOG.info("Series not found : " + seriesName);
-//    }
-//  }
-//
-//  @Override
-//  public void run() {
-//    long currentTime = System.currentTimeMillis();
-//    TimelineMetrics timelineMetrics = new TimelineMetrics();
-//
-//    for (TestSeriesInputRequest input : configuredSeries.keySet()) {
-//      AbstractMetricSeries metricSeries = configuredSeries.get(input);
-//      TimelineMetric timelineMetric = new TimelineMetric();
-//      timelineMetric.setMetricName(input.getSeriesName());
-//      timelineMetric.setAppId("anomaly-engine-test-metric");
-//      timelineMetric.setInstanceId(null);
-//      timelineMetric.setStartTime(currentTime);
-//      timelineMetric.setHostName(hostname);
-//      TreeMap<Long, Double> metricValues = new TreeMap();
-//      metricValues.put(currentTime, metricSeries.nextValue());
-//      timelineMetric.setMetricValues(metricValues);
-//      timelineMetrics.addOrMergeTimelineMetric(timelineMetric);
-//      LOG.info("Emitting metric with appId = " + timelineMetric.getAppId());
-//    }
-//    try {
-//      LOG.info("Publishing test metrics for " + timelineMetrics.getMetrics().size() + " series.");
-//      metricStore.putMetrics(timelineMetrics);
-//    } catch (Exception e) {
-//      LOG.error(e);
-//    }
-//  }
-}
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/testing/utilities/TestSeriesInputRequest.java b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/testing/utilities/TestSeriesInputRequest.java
deleted file mode 100644
index d7db9ca..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/testing/utilities/TestSeriesInputRequest.java
+++ /dev/null
@@ -1,88 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.ambari.metrics.adservice.prototype.testing.utilities;
-
-import org.apache.htrace.fasterxml.jackson.core.JsonProcessingException;
-import org.apache.htrace.fasterxml.jackson.databind.ObjectMapper;
-
-import javax.xml.bind.annotation.XmlRootElement;
-import java.util.Collections;
-import java.util.Map;
-
-@XmlRootElement
-public class TestSeriesInputRequest {
-
-  private String seriesName;
-  private String seriesType;
-  private Map<String, String> configs;
-
-  public TestSeriesInputRequest() {
-  }
-
-  public TestSeriesInputRequest(String seriesName, String seriesType, Map<String, String> configs) {
-    this.seriesName = seriesName;
-    this.seriesType = seriesType;
-    this.configs = configs;
-  }
-
-  public String getSeriesName() {
-    return seriesName;
-  }
-
-  public void setSeriesName(String seriesName) {
-    this.seriesName = seriesName;
-  }
-
-  public String getSeriesType() {
-    return seriesType;
-  }
-
-  public void setSeriesType(String seriesType) {
-    this.seriesType = seriesType;
-  }
-
-  public Map<String, String> getConfigs() {
-    return configs;
-  }
-
-  public void setConfigs(Map<String, String> configs) {
-    this.configs = configs;
-  }
-
-  @Override
-  public boolean equals(Object o) {
-    if (this == o) return true;
-    if (!(o instanceof TestSeriesInputRequest)) return false;
-    return ((TestSeriesInputRequest) o).getSeriesName().equals(this.getSeriesName());
-  }
-
-  @Override
-  public int hashCode() {
-    return seriesName.hashCode();
-  }
-
-  public static void main(String[] args) {
-
-    ObjectMapper objectMapper = new ObjectMapper();
-    TestSeriesInputRequest testSeriesInputRequest = new TestSeriesInputRequest("test", "ema", Collections.singletonMap("key","value"));
-    try {
-      System.out.print(objectMapper.writeValueAsString(testSeriesInputRequest));
-    } catch (JsonProcessingException e) {
-      e.printStackTrace();
-    }
-  }
-}
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/resources/R-scripts/ema.R b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/resources/R-scripts/ema.R
deleted file mode 100644
index 0b66095..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/resources/R-scripts/ema.R
+++ /dev/null
@@ -1,96 +0,0 @@
-#
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#    http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-#
-
-#  EMA <- w * EMA + (1 - w) * x
-# EMS <- sqrt( w * EMS^2 + (1 - w) * (x - EMA)^2 )
-# Alarm = abs(x - EMA) > n * EMS
-
-ema_global <- function(train_data, test_data, w, n) {
-  
-#  res <- get_data(url)
-#  data <- data.frame(as.numeric(names(res$metrics[[1]]$metrics)), as.numeric(res$metrics[[1]]$metrics))
-#  names(data) <- c("TS", res$metrics[[1]]$metricname)
-#  train_data <- data[which(data$TS >= train_start & data$TS <= train_end), 2]
-#  test_data <- data[which(data$TS >= test_start & data$TS <= test_end), ]
-  
-  anomalies <- data.frame()
-  ema <- 0
-  ems <- 0
-
-  #Train Step
-  for (x in train_data) {
-    ema <- w*ema + (1-w)*x
-    ems <- sqrt(w* ems^2 + (1 - w)*(x - ema)^2)
-  }
-  
-  for ( i in 1:length(test_data[,1])) {
-    x <- test_data[i,2]
-    if (abs(x - ema) > n*ems) {
-      anomaly <- c(as.numeric(test_data[i,1]), x)
-      # print (anomaly)
-      anomalies <- rbind(anomalies, anomaly)
-    }
-    ema <- w*ema + (1-w)*x
-    ems <- sqrt(w* ems^2 + (1 - w)*(x - ema)^2)
-  }
-  
-  if(length(anomalies) > 0) {
-    names(anomalies) <- c("TS", "Value")
-  }
-  return (anomalies)
-}
-
-ema_daily <- function(train_data, test_data, w, n) {
-  
-#  res <- get_data(url)
-#  data <- data.frame(as.numeric(names(res$metrics[[1]]$metrics)), as.numeric(res$metrics[[1]]$metrics))
-#  names(data) <- c("TS", res$metrics[[1]]$metricname)
-#  train_data <- data[which(data$TS >= train_start & data$TS <= train_end), ]
-#  test_data <- data[which(data$TS >= test_start & data$TS <= test_end), ]
-  
-  anomalies <- data.frame()
-  ema <- vector("numeric", 7)
-  ems <- vector("numeric", 7)
-  
-  #Train Step
-  for ( i in 1:length(train_data[,1])) {
-    x <- train_data[i,2]
-    time <- as.POSIXlt(as.numeric(train_data[i,1])/1000, origin = "1970-01-01" ,tz = "GMT")
-    index <- time$wday
-    ema[index] <- w*ema[index] + (1-w)*x
-    ems[index] <- sqrt(w* ems[index]^2 + (1 - w)*(x - ema[index])^2)
-  }
-  
-  for ( i in 1:length(test_data[,1])) {
-    x <- test_data[i,2]
-    time <- as.POSIXlt(as.numeric(test_data[i,1])/1000, origin = "1970-01-01" ,tz = "GMT")
-    index <- time$wday
-    
-    if (abs(x - ema[index+1]) > n*ems[index+1]) {
-      anomaly <- c(as.numeric(test_data[i,1]), x)
-      # print (anomaly)
-      anomalies <- rbind(anomalies, anomaly)
-    }
-    ema[index+1] <- w*ema[index+1] + (1-w)*x
-    ems[index+1] <- sqrt(w* ems[index+1]^2 + (1 - w)*(x - ema[index+1])^2)
-  }
-  
-  if(length(anomalies) > 0) {
-    names(anomalies) <- c("TS", "Value")
-  }
-  return(anomalies)
-}
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/resources/R-scripts/hsdev.r b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/resources/R-scripts/hsdev.r
deleted file mode 100644
index bca3366..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/resources/R-scripts/hsdev.r
+++ /dev/null
@@ -1,67 +0,0 @@
-#
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#    http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-#
-
-hsdev_daily <- function(train_data, test_data, n, num_historic_periods, interval, period) {
-
-  #res <- get_data(url)
-  #data <- data.frame(as.numeric(names(res$metrics[[1]]$metrics)), as.numeric(res$metrics[[1]]$metrics))
-  #names(data) <- c("TS", res$metrics[[1]]$metricname)
-  anomalies <- data.frame()
-
-  granularity <- train_data[2,1] - train_data[1,1]
-  test_start <- test_data[1,1]
-  test_end <- test_data[length(test_data[,1]),1]
-  train_start <- test_start - num_historic_periods*period
-  # round to start of day
-  train_start <- train_start - (train_start %% interval)
-
-  time <- as.POSIXlt(as.numeric(test_data[1,1])/1000, origin = "1970-01-01" ,tz = "GMT")
-  test_data_day <- time$wday
-
-  h_data <- c()
-  for ( i in length(train_data[,1]):1) {
-    ts <- train_data[i,1]
-    if ( ts < train_start) {
-      break
-    }
-    time <- as.POSIXlt(as.numeric(ts)/1000, origin = "1970-01-01" ,tz = "GMT")
-    if (time$wday == test_data_day) {
-      x <- train_data[i,2]
-      h_data <- c(h_data, x)
-    }
-  }
-
-  if (length(h_data) < 2*length(test_data[,1])) {
-    cat ("\nNot enough training data")
-    return (anomalies)
-  }
-
-  past_median <- median(h_data)
-  past_sd <- sd(h_data)
-  curr_median <- median(test_data[,2])
-
-  if (abs(curr_median - past_median) > n * past_sd) {
-    anomaly <- c(test_start, test_end, curr_median, past_median, past_sd)
-    anomalies <- rbind(anomalies, anomaly)
-  }
-
-  if(length(anomalies) > 0) {
-    names(anomalies) <- c("TS Start", "TS End", "Current Median", "Past Median", "Past SD")
-  }
-
-  return (anomalies)
-}
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/resources/R-scripts/iforest.R b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/resources/R-scripts/iforest.R
deleted file mode 100644
index 8956400..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/resources/R-scripts/iforest.R
+++ /dev/null
@@ -1,52 +0,0 @@
-#
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#    http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-#
-
-ams_iforest <- function(url, train_start, train_end, test_start, test_end, threshold_score) {
-  
-  res <- get_data(url)
-  num_metrics <- length(res$metrics)
-  anomalies <- data.frame()
-  
-  metricname <- res$metrics[[1]]$metricname
-  data <- data.frame(as.numeric(names(res$metrics[[1]]$metrics)), as.numeric(res$metrics[[1]]$metrics))
-  names(data) <- c("TS", res$metrics[[1]]$metricname)
-
-  for (i in 2:num_metrics) {
-    metricname <- res$metrics[[i]]$metricname
-    df <- data.frame(as.numeric(names(res$metrics[[i]]$metrics)), as.numeric(res$metrics[[i]]$metrics))
-    names(df) <- c("TS", res$metrics[[i]]$metricname)
-    data <- merge(data, df)
-  }
-  
-  algo_data <- data[ which(data$TS >= train_start & data$TS <= train_end) , ][c(1:num_metrics+1)]
-  iForest <- IsolationTrees(algo_data)
-  test_data <- data[ which(data$TS >= test_start & data$TS <= test_end) , ]
-  
-  if_res <- AnomalyScore(test_data[c(1:num_metrics+1)], iForest)
-  for (i in 1:length(if_res$outF)) {
-    index <- test_start+i-1
-    if (if_res$outF[i] > threshold_score) {
-      anomaly <- c(test_data[i,1], if_res$outF[i], if_res$pathLength[i])
-      anomalies <- rbind(anomalies, anomaly)
-    } 
-  }
-  
-  if(length(anomalies) > 0) {
-    names(anomalies) <- c("TS", "Anomaly Score", "Path length")
-  }
-  return (anomalies)
-}
\ No newline at end of file
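
As far as can be told from the call sites, IsolationTrees() and AnomalyScore() come
from the third-party IsolationForest R package, and get_data() is a helper defined
alongside these scripts; both must be available in the R environment for
ams_iforest() to run.
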
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/resources/R-scripts/kstest.r b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/resources/R-scripts/kstest.r
deleted file mode 100644
index f22bc15..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/resources/R-scripts/kstest.r
+++ /dev/null
@@ -1,38 +0,0 @@
-#
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#    http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-#
-
-ams_ks <- function(train_data, test_data, p_value) {
-  
-#  res <- get_data(url)
-#  data <- data.frame(as.numeric(names(res$metrics[[1]]$metrics)), as.numeric(res$metrics[[1]]$metrics))
-#  names(data) <- c("TS", res$metrics[[1]]$metricname)
-#  train_data <- data[which(data$TS >= train_start & data$TS <= train_end), 2]
-#  test_data <- data[which(data$TS >= test_start & data$TS <= test_end), 2]
-  
-  anomalies <- data.frame()
-  res <- ks.test(train_data[,2], test_data[,2])
-  
-  if (res$p.value < p_value) {
-    anomaly <- c(test_data[1,1], test_data[length(test_data[,1]),1], res$statistic, res$p.value)
-    anomalies <- rbind(anomalies, anomaly)
-  }
- 
-  if(length(anomalies) > 0) {
-    names(anomalies) <- c("TS Start", "TS End", "D", "p-value")
-  }
-  return (anomalies)
-}
\ No newline at end of file
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/resources/R-scripts/test.R b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/resources/R-scripts/test.R
deleted file mode 100644
index 7650356..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/resources/R-scripts/test.R
+++ /dev/null
@@ -1,85 +0,0 @@
-#
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#    http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-#
-
-
-tukeys_anomalies <- data.frame()
-ema_global_anomalies <- data.frame()
-ema_daily_anomalies <- data.frame()
-ks_anomalies <- data.frame()
-hsdev_anomalies <- data.frame()
-
-init <- function() {
-  tukeys_anomalies <- data.frame()
-  ema_global_anomalies <- data.frame()
-  ema_daily_anomalies <- data.frame()
-  ks_anomalies <- data.frame()
-  hsdev_anomalies <- data.frame()
-}
-
-test_methods <- function(data) {
-
-  init()
-  #res <- get_data(url)
-  #data <- data.frame(as.numeric(names(res$metrics[[1]]$metrics)), as.numeric(res$metrics[[1]]$metrics))
-  #names(data) <- c("TS", res$metrics[[1]]$metricname)
-
-  limit <- data[length(data[,1]),1]
-  step <- data[2,1] - data[1,1]
-
-  train_start <- data[1,1]
-  train_end <- get_next_day_boundary(train_start, step, limit)
-  test_start <- train_end + step
-  test_end <- get_next_day_boundary(test_start, step, limit)
-  i <- 1
-  day <- 24*60*60*1000
-
-  while (test_start < limit) {
-
-    print (i)
-    i <- i + 1
-    train_data <- data[which(data$TS >= train_start & data$TS <= train_end),]
-    test_data <- data[which(data$TS >= test_start & data$TS <= test_end), ]
-
-    #tukeys_anomalies <<- rbind(tukeys_anomalies, ams_tukeys(train_data, test_data, 3))
-    #ema_global_anomalies <<- rbind(ema_global_anomalies, ema_global(train_data, test_data, 0.9, 3))
-    #ema_daily_anomalies <<- rbind(ema_daily_anomalies, ema_daily(train_data, test_data, 0.9, 3))
-    #ks_anomalies <<- rbind(ks_anomalies, ams_ks(train_data, test_data, 0.05))
-    hsdev_train_data <- data[which(data$TS < test_start),]
-    hsdev_anomalies <<- rbind(hsdev_anomalies, hsdev_daily(hsdev_train_data, test_data, 3, 3, day, 7*day))
-
-    train_start <- test_start
-    train_end <- get_next_day_boundary(train_start, step, limit)
-    test_start <- train_end + step
-    test_end <- get_next_day_boundary(test_start, step, limit)
-  }
-  return (hsdev_anomalies)
-}
-
-get_next_day_boundary <- function(start, step, limit) {
-
-  if (start > limit) {
-    return (-1)
-  }
-
-  while (start <= limit) {
-    if (((start %% (24*60*60*1000)) - 28800000) == 0) { # day boundary shifted 28800000 ms (8 h) from UTC midnight
-      return (start)
-    }
-    start <- start + step
-  }
-  return (start)
-}
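
test.R drives each detector over sliding one-day train/test windows, snapping window edges to day boundaries with get_next_day_boundary; the 28800000 ms (8 hour) offset suggests the sample timestamps were aligned to a UTC-8 timezone. A Scala sketch of the same boundary search, with illustrative names:

    object WindowSketch {
      val DayMs: Long = 24L * 60 * 60 * 1000

      // Mirrors the R helper: advance from `start` in `step` increments until
      // the timestamp lands on a day boundary shifted by 8 hours, or `limit`
      // is passed (in which case the overshot timestamp is returned).
      def nextDayBoundary(start: Long, step: Long, limit: Long): Long = {
        if (start > limit) return -1L
        var t = start
        while (t <= limit && (t % DayMs) - 28800000L != 0) t += step
        t
      }

      def main(args: Array[String]): Unit = {
        val step  = 60000L                 // 1-minute metric resolution (illustrative)
        val start = 28800000L + 3 * step   // just past a shifted day boundary
        println(nextDayBoundary(start, step, start + 3 * DayMs)) // 28800000 + DayMs
      }
    }
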
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/resources/R-scripts/tukeys.r b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/resources/R-scripts/tukeys.r
deleted file mode 100644
index 0312226..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/resources/R-scripts/tukeys.r
+++ /dev/null
@@ -1,51 +0,0 @@
-#
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#    http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-
-ams_tukeys <- function(train_data, test_data, n) {
-
-#  res <- get_data(url)
-#  data <- data.frame(as.numeric(names(res$metrics[[1]]$metrics)), as.numeric(res$metrics[[1]]$metrics))
-#  names(data) <- c("TS", res$metrics[[1]]$metricname)
-#  train_data <- data[which(data$TS >= train_start & data$TS <= train_end), 2]
-#  test_data <- data[which(data$TS >= test_start & data$TS <= test_end), ]
-
-  anomalies <- data.frame()
-  quantiles <- quantile(train_data[,2])
-  iqr <- quantiles[4] - quantiles[2]
-  niqr <- 0
-
-  for ( i in 1:length(test_data[,1])) {
-    x <- test_data[i,2]
-    lb <- quantiles[2] - n*iqr
-    ub <- quantiles[4] + n*iqr
-    if ( (x < lb)  || (x > ub) ) {
-      if (iqr != 0) {
-        if (x < lb) {
-          niqr <- (quantiles[2] - x) / iqr
-        } else {
-          niqr <- (x - quantiles[4]) / iqr
-        }
-      }
-      anomaly <- c(test_data[i,1], x, niqr)
-      anomalies <- rbind(anomalies, anomaly)
-    }
-  }
-  if(length(anomalies) > 0) {
-    names(anomalies) <- c("TS", "Value", "niqr")
-  }
-  return (anomalies)
-}
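
ams_tukeys flags a test point as anomalous when it falls outside [Q1 - n*IQR, Q3 + n*IQR] computed from the training window, and grades severity as niqr, the distance from the violated quartile in IQR units. A Scala sketch of the same rule; the nearest-rank quantile used here is a simplification of R's interpolating quantile(), and the names are mine:

    object TukeysSketch {
      // Returns (timestamp, value, niqr) for every test point outside the
      // Tukey fences built from the training values.
      def tukeys(train: Seq[Double], test: Seq[(Long, Double)], n: Double): Seq[(Long, Double, Double)] = {
        val s = train.sorted.toIndexedSeq
        def quantile(q: Double): Double = s((q * (s.size - 1)).round.toInt) // nearest rank
        val (q1, q3) = (quantile(0.25), quantile(0.75))
        val iqr = q3 - q1
        test.collect {
          case (ts, x) if x < q1 - n * iqr || x > q3 + n * iqr =>
            val niqr = if (iqr == 0) 0.0 else if (x < q1) (q1 - x) / iqr else (x - q3) / iqr
            (ts, x, niqr)
        }
      }

      def main(args: Array[String]): Unit = {
        val train = Seq(10.0, 11.0, 10.5, 9.5, 10.2, 10.8)
        val test  = Seq((1L, 10.4), (2L, 55.0))
        tukeys(train, test, 3.0).foreach { case (ts, v, g) => println(s"ts=$ts value=$v niqr=$g") }
      }
    }
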
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/resources/hbase-site.xml b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/resources/hbase-site.xml
deleted file mode 100644
index 66f0454..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/resources/hbase-site.xml
+++ /dev/null
@@ -1,286 +0,0 @@
-<?xml version="1.0"?>
-<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
-<!--
-/**
- *
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
--->
-<configuration>
-    
-    <property>
-      <name>dfs.client.read.shortcircuit</name>
-      <value>true</value>
-    </property>
-    
-    <property>
-      <name>hbase.client.scanner.caching</name>
-      <value>10000</value>
-    </property>
-    
-    <property>
-      <name>hbase.client.scanner.timeout.period</name>
-      <value>300000</value>
-    </property>
-    
-    <property>
-      <name>hbase.cluster.distributed</name>
-      <value>false</value>
-    </property>
-    
-    <property>
-      <name>hbase.hregion.majorcompaction</name>
-      <value>0</value>
-    </property>
-    
-    <property>
-      <name>hbase.hregion.max.filesize</name>
-      <value>4294967296</value>
-    </property>
-    
-    <property>
-      <name>hbase.hregion.memstore.block.multiplier</name>
-      <value>4</value>
-    </property>
-    
-    <property>
-      <name>hbase.hregion.memstore.flush.size</name>
-      <value>134217728</value>
-    </property>
-    
-    <property>
-      <name>hbase.hstore.blockingStoreFiles</name>
-      <value>200</value>
-    </property>
-    
-    <property>
-      <name>hbase.hstore.flusher.count</name>
-      <value>2</value>
-    </property>
-    
-    <property>
-      <name>hbase.local.dir</name>
-      <value>${hbase.tmp.dir}/local</value>
-    </property>
-    
-    <property>
-      <name>hbase.master.info.bindAddress</name>
-      <value>0.0.0.0</value>
-    </property>
-    
-    <property>
-      <name>hbase.master.info.port</name>
-      <value>61310</value>
-    </property>
-    
-    <property>
-      <name>hbase.master.normalizer.class</name>
-      <value>org.apache.hadoop.hbase.master.normalizer.SimpleRegionNormalizer</value>
-    </property>
-    
-    <property>
-      <name>hbase.master.port</name>
-      <value>61300</value>
-    </property>
-    
-    <property>
-      <name>hbase.master.wait.on.regionservers.mintostart</name>
-      <value>1</value>
-    </property>
-    
-    <property>
-      <name>hbase.normalizer.enabled</name>
-      <value>false</value>
-    </property>
-    
-    <property>
-      <name>hbase.normalizer.period</name>
-      <value>600000</value>
-    </property>
-    
-    <property>
-      <name>hbase.regionserver.global.memstore.lowerLimit</name>
-      <value>0.3</value>
-    </property>
-    
-    <property>
-      <name>hbase.regionserver.global.memstore.upperLimit</name>
-      <value>0.35</value>
-    </property>
-    
-    <property>
-      <name>hbase.regionserver.info.port</name>
-      <value>61330</value>
-    </property>
-    
-    <property>
-      <name>hbase.regionserver.port</name>
-      <value>61320</value>
-    </property>
-    
-    <property>
-      <name>hbase.regionserver.thread.compaction.large</name>
-      <value>2</value>
-    </property>
-    
-    <property>
-      <name>hbase.regionserver.thread.compaction.small</name>
-      <value>3</value>
-    </property>
-    
-    <property>
-      <name>hbase.replication</name>
-      <value>false</value>
-    </property>
-    
-    <property>
-      <name>hbase.rootdir</name>
-      <value>file:///var/lib/ambari-metrics-collector/hbase</value>
-    </property>
-    
-    <property>
-      <name>hbase.rpc.timeout</name>
-      <value>300000</value>
-    </property>
-    
-    <property>
-      <name>hbase.snapshot.enabled</name>
-      <value>false</value>
-    </property>
-    
-    <property>
-      <name>hbase.superuser</name>
-      <value>activity_explorer,activity_analyzer</value>
-    </property>
-    
-    <property>
-      <name>hbase.tmp.dir</name>
-      <value>/var/lib/ambari-metrics-collector/hbase-tmp</value>
-    </property>
-    
-    <property>
-      <name>hbase.zookeeper.leaderport</name>
-      <value>61388</value>
-    </property>
-    
-    <property>
-      <name>hbase.zookeeper.peerport</name>
-      <value>61288</value>
-    </property>
-    
-    <property>
-      <name>hbase.zookeeper.property.clientPort</name>
-      <value>61181</value>
-    </property>
-    
-    <property>
-      <name>hbase.zookeeper.property.dataDir</name>
-      <value>${hbase.tmp.dir}/zookeeper</value>
-    </property>
-    
-    <property>
-      <name>hbase.zookeeper.property.tickTime</name>
-      <value>6000</value>
-    </property>
-    
-    <property>
-      <name>hbase.zookeeper.quorum</name>
-      <value>c6401.ambari.apache.org</value>
-      <final>true</final>
-    </property>
-    
-    <property>
-      <name>hfile.block.cache.size</name>
-      <value>0.3</value>
-    </property>
-    
-    <property>
-      <name>phoenix.coprocessor.maxMetaDataCacheSize</name>
-      <value>20480000</value>
-    </property>
-    
-    <property>
-      <name>phoenix.coprocessor.maxServerCacheTimeToLiveMs</name>
-      <value>60000</value>
-    </property>
-    
-    <property>
-      <name>phoenix.groupby.maxCacheSize</name>
-      <value>307200000</value>
-    </property>
-    
-    <property>
-      <name>phoenix.mutate.batchSize</name>
-      <value>10000</value>
-    </property>
-    
-    <property>
-      <name>phoenix.query.keepAliveMs</name>
-      <value>300000</value>
-    </property>
-    
-    <property>
-      <name>phoenix.query.maxGlobalMemoryPercentage</name>
-      <value>15</value>
-    </property>
-    
-    <property>
-      <name>phoenix.query.rowKeyOrderSaltedTable</name>
-      <value>true</value>
-    </property>
-    
-    <property>
-      <name>phoenix.query.spoolThresholdBytes</name>
-      <value>20971520</value>
-    </property>
-    
-    <property>
-      <name>phoenix.query.timeoutMs</name>
-      <value>300000</value>
-    </property>
-    
-    <property>
-      <name>phoenix.sequence.saltBuckets</name>
-      <value>2</value>
-    </property>
-    
-    <property>
-      <name>phoenix.spool.directory</name>
-      <value>${hbase.tmp.dir}/phoenix-spool</value>
-    </property>
-    
-    <property>
-      <name>zookeeper.session.timeout</name>
-      <value>120000</value>
-    </property>
-    
-    <property>
-      <name>zookeeper.session.timeout.localHBaseCluster</name>
-      <value>120000</value>
-    </property>
-    
-    <property>
-      <name>zookeeper.znode.parent</name>
-      <value>/ams-hbase-unsecure</value>
-    </property>
-
-    <property>
-      <name>hbase.use.dynamic.jars</name>
-      <value>false</value>
-    </property>
-
-  </configuration>
\ No newline at end of file
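
This bundled hbase-site.xml is the file that HBaseConfiguration.scala (later in this diff) picks up from the classpath to reach the AMS HBase instance. A short sketch of reading the relevant keys with the Hadoop Configuration API, assembling the same Phoenix JDBC URL that DefaultPhoenixDataSource builds; the object name is illustrative:

    import org.apache.hadoop.conf.Configuration

    object HBaseConfSketch {
      def main(args: Array[String]): Unit = {
        val conf = new Configuration(true)
        conf.addResource("hbase-site.xml")  // resolved from the classpath
        val quorum = conf.getTrimmed("hbase.zookeeper.quorum")
        val port   = conf.getTrimmed("hbase.zookeeper.property.clientPort", "2181")
        val znode  = conf.getTrimmed("zookeeper.znode.parent", "/ams-hbase-unsecure")
        println(s"jdbc:phoenix:$quorum:$port:$znode")
      }
    }
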
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/resources/input-config.properties b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/resources/input-config.properties
deleted file mode 100644
index ab106c4..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/resources/input-config.properties
+++ /dev/null
@@ -1,42 +0,0 @@
-# Copyright 2011 The Apache Software Foundation
-#
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-appIds=HOST
-
-collectorHost=localhost
-collectorPort=6188
-collectorProtocol=http
-
-zkQuorum=localhost:2181
-
-ambariServerHost=localhost
-clusterName=c1
-
-emaW=0.8
-emaN=3
-tukeysN=3
-pointInTimeTestInterval=300000
-pointInTimeTrainInterval=900000
-
-ksTestInterval=600000
-ksTrainInterval=600000
-hsdevNhp=3
-hsdevInterval=1800000
-
-skipMetricPatterns=sdisk*,cpu_sintr*,proc*,disk*,boottime
-hosts=avijayan-ad-1.openstacklocal
\ No newline at end of file
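
input-config.properties feeds the prototype's standalone driver: the collector endpoint, detector parameters (emaW, tukeysN, the train/test intervals), and metric/host filters. A sketch of loading and parsing it with plain java.util.Properties; the file path and the subset of keys read here are illustrative:

    import java.io.FileInputStream
    import java.util.Properties

    object InputConfigSketch {
      def main(args: Array[String]): Unit = {
        val props = new Properties()
        val in = new FileInputStream("input-config.properties") // path is illustrative
        try props.load(in) finally in.close()

        val appIds  = props.getProperty("appIds", "HOST").split(",").map(_.trim)
        val tukeysN = props.getProperty("tukeysN", "3").toInt
        val skip    = props.getProperty("skipMetricPatterns", "").split(",").filter(_.nonEmpty)
        println(s"appIds=${appIds.mkString(",")} tukeysN=$tukeysN skipPatterns=${skip.length}")
      }
    }
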
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/ADServiceScalaModule.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/ADServiceScalaModule.scala
deleted file mode 100644
index 8578a80..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/ADServiceScalaModule.scala
+++ /dev/null
@@ -1,50 +0,0 @@
-/**
-  * Licensed to the Apache Software Foundation (ASF) under one
-  * or more contributor license agreements.  See the NOTICE file
-  * distributed with this work for additional information
-  * regarding copyright ownership.  The ASF licenses this file
-  * to you under the Apache License, Version 2.0 (the
-  * "License"); you may not use this file except in compliance
-  * with the License.  You may obtain a copy of the License at
-  *
-  * http://www.apache.org/licenses/LICENSE-2.0
-  *
-  * Unless required by applicable law or agreed to in writing, software
-  * distributed under the License is distributed on an "AS IS" BASIS,
-  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  * See the License for the specific language governing permissions and
-  * limitations under the License.
-  */
-
-package org.apache.ambari.metrics.adservice.app
-
-import com.fasterxml.jackson.module.scala._
-import com.fasterxml.jackson.module.scala.deser.{ScalaNumberDeserializersModule, UntypedObjectDeserializerModule}
-import com.fasterxml.jackson.module.scala.introspect.{ScalaAnnotationIntrospector, ScalaAnnotationIntrospectorModule}
-
-/**
-  * Extended Jackson Module that fixes the Scala-Jackson BytecodeReadingParanamer issue.
-  */
-class ADServiceScalaModule extends JacksonModule
-  with IteratorModule
-  with EnumerationModule
-  with OptionModule
-  with SeqModule
-  with IterableModule
-  with TupleModule
-  with MapModule
-  with SetModule
-  with FixedScalaAnnotationIntrospectorModule
-  with UntypedObjectDeserializerModule
-  with EitherModule {
-
-  override def getModuleName = "ADServiceScalaModule"
-
-  object ADServiceScalaModule extends ADServiceScalaModule
-
-}
-
-
-trait FixedScalaAnnotationIntrospectorModule extends JacksonModule {
-  this += { _.appendAnnotationIntrospector(ScalaAnnotationIntrospector) }
-}
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionApp.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionApp.scala
deleted file mode 100644
index 2d0dbdf..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionApp.scala
+++ /dev/null
@@ -1,80 +0,0 @@
-/**
-  * Licensed to the Apache Software Foundation (ASF) under one
-  * or more contributor license agreements.  See the NOTICE file
-  * distributed with this work for additional information
-  * regarding copyright ownership.  The ASF licenses this file
-  * to you under the Apache License, Version 2.0 (the
-  * "License"); you may not use this file except in compliance
-  * with the License.  You may obtain a copy of the License at
-  *
-  * http://www.apache.org/licenses/LICENSE-2.0
-  *
-  * Unless required by applicable law or agreed to in writing, software
-  * distributed under the License is distributed on an "AS IS" BASIS,
-  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  * See the License for the specific language governing permissions and
-  * limitations under the License.
-  */
-package org.apache.ambari.metrics.adservice.app
-
-import javax.ws.rs.Path
-import javax.ws.rs.container.{ContainerRequestFilter, ContainerResponseFilter}
-
-import org.apache.ambari.metrics.adservice.app.GuiceInjector.{withInjector, wrap}
-import org.apache.ambari.metrics.adservice.db.{AdAnomalyStoreAccessor, MetadataDatasource}
-import org.apache.ambari.metrics.adservice.metadata.MetricDefinitionService
-import org.apache.ambari.metrics.adservice.service.ADQueryService
-import org.glassfish.jersey.filter.LoggingFilter
-
-import com.codahale.metrics.health.HealthCheck
-import com.fasterxml.jackson.databind.{ObjectMapper, SerializationFeature}
-import com.fasterxml.jackson.datatype.joda.JodaModule
-import com.fasterxml.jackson.jaxrs.json.JacksonJaxbJsonProvider
-import com.fasterxml.jackson.module.scala.DefaultScalaModule
-
-import io.dropwizard.Application
-import io.dropwizard.setup.Environment
-
-class AnomalyDetectionApp extends Application[AnomalyDetectionAppConfig] {
-  override def getName = "anomaly-detection-service"
-
-  override def run(t: AnomalyDetectionAppConfig, env: Environment): Unit = {
-    configure(t, env)
-  }
-
-  def configure(config: AnomalyDetectionAppConfig, env: Environment) {
-    withInjector(new AnomalyDetectionAppModule(config, env)) { injector =>
-      injector.instancesWithAnnotation(classOf[Path]).foreach { r => env.jersey().register(r) }
-      injector.instancesOfType(classOf[HealthCheck]).foreach { h => env.healthChecks.register(h.getClass.getName, h) }
-      injector.instancesOfType(classOf[ContainerRequestFilter]).foreach { f => env.jersey().register(f) }
-      injector.instancesOfType(classOf[ContainerResponseFilter]).foreach { f => env.jersey().register(f) }
-
-      //Initialize Services
-      injector.getInstance(classOf[MetadataDatasource]).initialize
-      injector.getInstance(classOf[MetricDefinitionService]).initialize
-      injector.getInstance(classOf[ADQueryService]).initialize
-    }
-    env.jersey.register(jacksonJaxbJsonProvider)
-    env.jersey.register(new LoggingFilter)
-  }
-
-  private def jacksonJaxbJsonProvider: JacksonJaxbJsonProvider = {
-    val provider = new JacksonJaxbJsonProvider()
-    val objectMapper = new ObjectMapper()
-    objectMapper.registerModule(new ADServiceScalaModule)
-    objectMapper.registerModule(new JodaModule)
-    objectMapper.configure(SerializationFeature.WRAP_ROOT_VALUE, false)
-    objectMapper.configure(SerializationFeature.FAIL_ON_EMPTY_BEANS, false)
-    objectMapper.configure(SerializationFeature.WRITE_DATES_AS_TIMESTAMPS, false)
-    objectMapper.configure(SerializationFeature.WRITE_EMPTY_JSON_ARRAYS, true)
-    provider.setMapper(objectMapper)
-    provider
-  }
-
-  override def bootstrapLogging(): Unit = {}
-}
-
-
-object AnomalyDetectionApp {
-  def main(args: Array[String]): Unit = new AnomalyDetectionApp().run(args: _*)
-}
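
Since AnomalyDetectionApp extends Dropwizard's Application, the service is started with the standard "server <config.yaml>" argument pair, and the YAML maps onto the AnomalyDetectionAppConfig shown next. A hedged launcher sketch; the config path is illustrative, not a path this commit defines:

    object Launcher {
      def main(args: Array[String]): Unit =
        // Equivalent to: java -cp <service-jar> ...AnomalyDetectionApp server <config.yaml>
        new AnomalyDetectionApp().run("server", "/etc/ambari-metrics/ad/config.yaml")
    }
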
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionAppConfig.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionAppConfig.scala
deleted file mode 100644
index 58efa97..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionAppConfig.scala
+++ /dev/null
@@ -1,89 +0,0 @@
-/**
-  * Licensed to the Apache Software Foundation (ASF) under one
-  * or more contributor license agreements.  See the NOTICE file
-  * distributed with this work for additional information
-  * regarding copyright ownership.  The ASF licenses this file
-  * to you under the Apache License, Version 2.0 (the
-  * "License"); you may not use this file except in compliance
-  * with the License.  You may obtain a copy of the License at
-  *
-  * http://www.apache.org/licenses/LICENSE-2.0
-  *
-  * Unless required by applicable law or agreed to in writing, software
-  * distributed under the License is distributed on an "AS IS" BASIS,
-  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  * See the License for the specific language governing permissions and
-  * limitations under the License.
-  */
-
-package org.apache.ambari.metrics.adservice.app
-
-import javax.validation.Valid
-
-import org.apache.ambari.metrics.adservice.configuration.{HBaseConfiguration, _}
-
-import com.fasterxml.jackson.annotation.{JsonIgnore, JsonIgnoreProperties, JsonProperty}
-
-import io.dropwizard.Configuration
-
-/**
-  * Top Level AD System Manager config items.
-  */
-@JsonIgnoreProperties(ignoreUnknown=true)
-class AnomalyDetectionAppConfig extends Configuration {
-
-  /*
-   Metric Definition Service configuration
-    */
-  @Valid
-  private val metricDefinitionServiceConfiguration = new MetricDefinitionServiceConfiguration
-
-  @Valid
-  private val metricCollectorConfiguration = new MetricCollectorConfiguration
-
-  /*
-   Anomaly Query Service configuration
-    */
-  @Valid
-  private val adServiceConfiguration = new AdServiceConfiguration
-
-  /**
-    * LevelDB settings for metrics definitions
-    */
-  @Valid
-  private val metricDefinitionDBConfiguration = new MetricDefinitionDBConfiguration
-
-  /**
-    * Spark configurations
-    */
-  @Valid
-  private val sparkConfiguration = new SparkConfiguration
-
-  /*
-   AMS HBase Conf
-    */
-  @JsonIgnore
-  def getHBaseConf : org.apache.hadoop.conf.Configuration = {
-    HBaseConfiguration.getHBaseConf
-  }
-
-  @JsonProperty("metricDefinitionService")
-  def getMetricDefinitionServiceConfiguration: MetricDefinitionServiceConfiguration = {
-    metricDefinitionServiceConfiguration
-  }
-
-  @JsonProperty("adQueryService")
-  def getAdServiceConfiguration: AdServiceConfiguration = {
-    adServiceConfiguration
-  }
-
-  @JsonProperty("metricsCollector")
-  def getMetricCollectorConfiguration: MetricCollectorConfiguration = metricCollectorConfiguration
-
-  @JsonProperty("metricDefinitionDB")
-  def getMetricDefinitionDBConfiguration: MetricDefinitionDBConfiguration = metricDefinitionDBConfiguration
-
-  @JsonProperty("spark")
-  def getSparkConfiguration: SparkConfiguration = sparkConfiguration
-
-}
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionAppModule.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionAppModule.scala
deleted file mode 100644
index 68e9df9..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionAppModule.scala
+++ /dev/null
@@ -1,47 +0,0 @@
-/**
-  * Licensed to the Apache Software Foundation (ASF) under one
-  * or more contributor license agreements.  See the NOTICE file
-  * distributed with this work for additional information
-  * regarding copyright ownership.  The ASF licenses this file
-  * to you under the Apache License, Version 2.0 (the
-  * "License"); you may not use this file except in compliance
-  * with the License.  You may obtain a copy of the License at
-  *
-  * http://www.apache.org/licenses/LICENSE-2.0
-  *
-  * Unless required by applicable law or agreed to in writing, software
-  * distributed under the License is distributed on an "AS IS" BASIS,
-  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  * See the License for the specific language governing permissions and
-  * limitations under the License.
-  */
-package org.apache.ambari.metrics.adservice.app
-
-import org.apache.ambari.metrics.adservice.db._
-import org.apache.ambari.metrics.adservice.leveldb.LevelDBDataSource
-import org.apache.ambari.metrics.adservice.metadata.{MetricDefinitionService, MetricDefinitionServiceImpl}
-import org.apache.ambari.metrics.adservice.resource.{AnomalyResource, MetricDefinitionResource, RootResource}
-import org.apache.ambari.metrics.adservice.service.{ADQueryService, ADQueryServiceImpl}
-
-import com.codahale.metrics.health.HealthCheck
-import com.google.inject.AbstractModule
-import com.google.inject.multibindings.Multibinder
-
-import io.dropwizard.setup.Environment
-
-class AnomalyDetectionAppModule(config: AnomalyDetectionAppConfig, env: Environment) extends AbstractModule {
-  override def configure() {
-    bind(classOf[AnomalyDetectionAppConfig]).toInstance(config)
-    bind(classOf[Environment]).toInstance(env)
-    val healthCheckBinder = Multibinder.newSetBinder(binder(), classOf[HealthCheck])
-    healthCheckBinder.addBinding().to(classOf[DefaultHealthCheck])
-    bind(classOf[AnomalyResource])
-    bind(classOf[MetricDefinitionResource])
-    bind(classOf[RootResource])
-    bind(classOf[AdMetadataStoreAccessor]).to(classOf[AdMetadataStoreAccessorImpl])
-    bind(classOf[ADQueryService]).to(classOf[ADQueryServiceImpl])
-    bind(classOf[MetricDefinitionService]).to(classOf[MetricDefinitionServiceImpl])
-    bind(classOf[MetadataDatasource]).to(classOf[LevelDBDataSource])
-    bind(classOf[AdAnomalyStoreAccessor]).to(classOf[PhoenixAnomalyStoreAccessor])
-  }
-}
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/DefaultHealthCheck.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/DefaultHealthCheck.scala
deleted file mode 100644
index c36e8d2..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/DefaultHealthCheck.scala
+++ /dev/null
@@ -1,25 +0,0 @@
-/**
-  * Licensed to the Apache Software Foundation (ASF) under one
-  * or more contributor license agreements.  See the NOTICE file
-  * distributed with this work for additional information
-  * regarding copyright ownership.  The ASF licenses this file
-  * to you under the Apache License, Version 2.0 (the
-  * "License"); you may not use this file except in compliance
-  * with the License.  You may obtain a copy of the License at
-  *
-  * http://www.apache.org/licenses/LICENSE-2.0
-  *
-  * Unless required by applicable law or agreed to in writing, software
-  * distributed under the License is distributed on an "AS IS" BASIS,
-  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  * See the License for the specific language governing permissions and
-  * limitations under the License.
-  */
-package org.apache.ambari.metrics.adservice.app
-
-import com.codahale.metrics.health.HealthCheck
-import com.codahale.metrics.health.HealthCheck.Result
-
-class DefaultHealthCheck extends HealthCheck {
-  override def check(): Result = Result.healthy()
-}
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/GuiceInjector.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/GuiceInjector.scala
deleted file mode 100644
index 37da5f9..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/GuiceInjector.scala
+++ /dev/null
@@ -1,56 +0,0 @@
-/**
-  * Licensed to the Apache Software Foundation (ASF) under one
-  * or more contributor license agreements.  See the NOTICE file
-  * distributed with this work for additional information
-  * regarding copyright ownership.  The ASF licenses this file
-  * to you under the Apache License, Version 2.0 (the
-  * "License"); you may not use this file except in compliance
-  * with the License.  You may obtain a copy of the License at
-  *
-  * http://www.apache.org/licenses/LICENSE-2.0
-  *
-  * Unless required by applicable law or agreed to in writing, software
-  * distributed under the License is distributed on an "AS IS" BASIS,
-  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  * See the License for the specific language governing permissions and
-  * limitations under the License.
-  */
-package org.apache.ambari.metrics.adservice.app
-
-import java.lang.annotation.Annotation
-
-import com.google.inject.{Guice, Injector, Module, TypeLiteral}
-
-import scala.collection.JavaConversions._
-import scala.language.implicitConversions
-import scala.reflect._
-
-object GuiceInjector {
-
-  def withInjector(modules: Module*)(fn: (Injector) => Unit) = {
-    val injector = Guice.createInjector(modules.toList: _*)
-    fn(injector)
-  }
-
-  implicit def wrap(injector: Injector): InjectorWrapper = new InjectorWrapper(injector)
-}
-
-class InjectorWrapper(injector: Injector) {
-  def instancesWithAnnotation[T <: Annotation](annotationClass: Class[T]): List[AnyRef] = {
-    injector.getAllBindings.filter { case (k, v) =>
-      !k.getTypeLiteral.getRawType.getAnnotationsByType[T](annotationClass).isEmpty
-    }.map { case (k, v) => injector.getInstance(k).asInstanceOf[AnyRef] }.toList
-  }
-
-  def instancesOfType[T: ClassTag](typeClass: Class[T]): List[T] = {
-    injector.findBindingsByType(TypeLiteral.get(classTag[T].runtimeClass)).map { b =>
-      injector.getInstance(b.getKey).asInstanceOf[T]
-    }.toList
-  }
-
-  def dumpBindings(): Unit = {
-    injector.getBindings.keySet() foreach { k =>
-      println(s"bind key = ${k.toString}")
-    }
-  }
-}
\ No newline at end of file
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/configuration/AdServiceConfiguration.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/configuration/AdServiceConfiguration.scala
deleted file mode 100644
index 11e9f28..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/configuration/AdServiceConfiguration.scala
+++ /dev/null
@@ -1,40 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.ambari.metrics.adservice.configuration
-
-import javax.validation.constraints.NotNull
-
-import com.fasterxml.jackson.annotation.JsonProperty
-
-/**
-  * Class to get Anomaly Service specific configuration.
-  */
-class AdServiceConfiguration {
-
-  @NotNull
-  var anomalyDataTtl: Long = _
-
-  @JsonProperty
-  def getAnomalyDataTtl: Long = anomalyDataTtl
-
-  @JsonProperty
-  def setAnomalyDataTtl(anomalyDataTtl: Long): Unit = {
-    this.anomalyDataTtl = anomalyDataTtl
-  }
-
-}
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/configuration/HBaseConfiguration.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/configuration/HBaseConfiguration.scala
deleted file mode 100644
index a95ff15..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/configuration/HBaseConfiguration.scala
+++ /dev/null
@@ -1,59 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.ambari.metrics.adservice.configuration
-
-import java.net.{MalformedURLException, URISyntaxException}
-
-import org.apache.hadoop.conf.Configuration
-import org.slf4j.{Logger, LoggerFactory}
-
-object HBaseConfiguration {
-
-  val HBASE_SITE_CONFIGURATION_FILE: String = "hbase-site.xml"
-  val hbaseConf: org.apache.hadoop.conf.Configuration = new Configuration(true)
-  var isInitialized: Boolean = false
-  val LOG : Logger = LoggerFactory.getLogger("HBaseConfiguration")
-
-  /**
-    * Initialize the hbase conf from hbase-site present in classpath.
-    */
-  def initConfigs(): Unit = {
-    if (!isInitialized) {
-      var classLoader: ClassLoader = Thread.currentThread.getContextClassLoader
-      if (classLoader == null) classLoader = getClass.getClassLoader
-
-      try {
-        val hbaseResUrl = classLoader.getResource(HBASE_SITE_CONFIGURATION_FILE)
-        if (hbaseResUrl == null) throw new IllegalStateException("Unable to initialize the AD subsystem. No hbase-site present in the classpath.")
-
-        hbaseConf.addResource(hbaseResUrl.toURI.toURL)
-        isInitialized = true
-
-      } catch {
-        case me : MalformedURLException => println("MalformedURLException")
-        case ue : URISyntaxException => println("URISyntaxException")
-      }
-    }
-  }
-
-  def getHBaseConf: org.apache.hadoop.conf.Configuration = {
-    if (!isInitialized) {
-      initConfigs()
-    }
-    hbaseConf
-  }
-}
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/configuration/MetricCollectorConfiguration.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/configuration/MetricCollectorConfiguration.scala
deleted file mode 100644
index 2530730..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/configuration/MetricCollectorConfiguration.scala
+++ /dev/null
@@ -1,54 +0,0 @@
-/**
-  * Licensed to the Apache Software Foundation (ASF) under one
-  * or more contributor license agreements.  See the NOTICE file
-  * distributed with this work for additional information
-  * regarding copyright ownership.  The ASF licenses this file
-  * to you under the Apache License, Version 2.0 (the
-  * "License"); you may not use this file except in compliance
-  * with the License.  You may obtain a copy of the License at
-  *
-  * http://www.apache.org/licenses/LICENSE-2.0
-  *
-  * Unless required by applicable law or agreed to in writing, software
-  * distributed under the License is distributed on an "AS IS" BASIS,
-  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  * See the License for the specific language governing permissions and
-  * limitations under the License.
-  */
-
-package org.apache.ambari.metrics.adservice.configuration
-
-import javax.validation.constraints.NotNull
-
-import com.fasterxml.jackson.annotation.JsonProperty
-
-/**
-  * Class to capture the Metrics Collector related configuration.
-  */
-class MetricCollectorConfiguration {
-
-  @NotNull
-  private var hosts: String = _
-
-  @NotNull
-  private var port: String = _
-
-  @NotNull
-  private var protocol: String = _
-
-  @NotNull
-  private var metadataEndpoint: String = _
-
-  @JsonProperty
-  def getHosts: String = hosts
-
-  @JsonProperty
-  def getPort: String = port
-
-  @JsonProperty
-  def getProtocol: String = protocol
-
-  @JsonProperty
-  def getMetadataEndpoint: String = metadataEndpoint
-
-}
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/configuration/MetricDefinitionDBConfiguration.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/configuration/MetricDefinitionDBConfiguration.scala
deleted file mode 100644
index ef4e00c..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/configuration/MetricDefinitionDBConfiguration.scala
+++ /dev/null
@@ -1,40 +0,0 @@
-/**
-  * Licensed to the Apache Software Foundation (ASF) under one
-  * or more contributor license agreements.  See the NOTICE file
-  * distributed with this work for additional information
-  * regarding copyright ownership.  The ASF licenses this file
-  * to you under the Apache License, Version 2.0 (the
-  * "License"); you may not use this file except in compliance
-  * with the License.  You may obtain a copy of the License at
-  *
-  * http://www.apache.org/licenses/LICENSE-2.0
-  *
-  * Unless required by applicable law or agreed to in writing, software
-  * distributed under the License is distributed on an "AS IS" BASIS,
-  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  * See the License for the specific language governing permissions and
-  * limitations under the License.
-  */
-
-package org.apache.ambari.metrics.adservice.configuration
-
-import javax.validation.constraints.NotNull
-
-import com.fasterxml.jackson.annotation.JsonProperty
-
-class MetricDefinitionDBConfiguration {
-
-  @NotNull
-  private var dbDirPath: String = _
-  private var verifyChecksums: Boolean = true
-  private var performParanoidChecks: Boolean = false
-
-  @JsonProperty("verifyChecksums")
-  def getVerifyChecksums: Boolean = verifyChecksums
-
-  @JsonProperty("performParanoidChecks")
-  def getPerformParanoidChecks: Boolean = performParanoidChecks
-
-  @JsonProperty("dbDirPath")
-  def getDbDirPath: String = dbDirPath
-}
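
These three knobs presumably map onto LevelDB's open-time options inside LevelDBDataSource, which is not part of this hunk. A sketch using the pure-Java iq80 LevelDB factory, which is an assumption about the binding in use; the database path is illustrative:

    import java.io.File
    import org.apache.ambari.metrics.adservice.configuration.MetricDefinitionDBConfiguration
    import org.iq80.leveldb.Options
    import org.iq80.leveldb.impl.Iq80DBFactory.factory

    object LevelDbSketch {
      def main(args: Array[String]): Unit = {
        val cfg = new MetricDefinitionDBConfiguration // dbDirPath would come from YAML in practice
        val options = new Options()
          .createIfMissing(true)
          .verifyChecksums(cfg.getVerifyChecksums)
          .paranoidChecks(cfg.getPerformParanoidChecks)
        val db = factory.open(new File("/tmp/ad-metric-definitions"), options) // path illustrative
        try db.put("key".getBytes, "value".getBytes) finally db.close()
      }
    }
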
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/configuration/MetricDefinitionServiceConfiguration.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/configuration/MetricDefinitionServiceConfiguration.scala
deleted file mode 100644
index a453f03..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/configuration/MetricDefinitionServiceConfiguration.scala
+++ /dev/null
@@ -1,31 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.ambari.metrics.adservice.configuration
-
-import com.fasterxml.jackson.annotation.JsonProperty
-
-/**
-  * Class to capture the Metric Definition Service configuration.
-  */
-class MetricDefinitionServiceConfiguration {
-
-  private val inputDefinitionDirectory: String = ""
-
-  @JsonProperty
-  def getInputDefinitionDirectory: String = inputDefinitionDirectory
-}
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/configuration/SparkConfiguration.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/configuration/SparkConfiguration.scala
deleted file mode 100644
index 30efdc7..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/configuration/SparkConfiguration.scala
+++ /dev/null
@@ -1,39 +0,0 @@
-/**
-  * Licensed to the Apache Software Foundation (ASF) under one
-  * or more contributor license agreements.  See the NOTICE file
-  * distributed with this work for additional information
-  * regarding copyright ownership.  The ASF licenses this file
-  * to you under the Apache License, Version 2.0 (the
-  * "License"); you may not use this file except in compliance
-  * with the License.  You may obtain a copy of the License at
-  *
-  * http://www.apache.org/licenses/LICENSE-2.0
-  *
-  * Unless required by applicable law or agreed to in writing, software
-  * distributed under the License is distributed on an "AS IS" BASIS,
-  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  * See the License for the specific language governing permissions and
-  * limitations under the License.
-  */
-
-package org.apache.ambari.metrics.adservice.configuration
-
-import javax.validation.constraints.NotNull
-
-import com.fasterxml.jackson.annotation.JsonProperty
-
-class SparkConfiguration {
-
-  @NotNull
-  private var mode: String = _
-
-  @NotNull
-  private var masterHostPort: String = _
-
-  @JsonProperty
-  def getMode: String = mode
-
-  @JsonProperty
-  def getMasterHostPort: String = masterHostPort
-
-}
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/AdAnomalyStoreAccessor.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/AdAnomalyStoreAccessor.scala
deleted file mode 100644
index 676b09a..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/AdAnomalyStoreAccessor.scala
+++ /dev/null
@@ -1,36 +0,0 @@
-/**
-  * Licensed to the Apache Software Foundation (ASF) under one
-  * or more contributor license agreements.  See the NOTICE file
-  * distributed with this work for additional information
-  * regarding copyright ownership.  The ASF licenses this file
-  * to you under the Apache License, Version 2.0 (the
-  * "License"); you may not use this file except in compliance
-  * with the License.  You may obtain a copy of the License at
-  *
-  * http://www.apache.org/licenses/LICENSE-2.0
-  *
-  * Unless required by applicable law or agreed to in writing, software
-  * distributed under the License is distributed on an "AS IS" BASIS,
-  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  * See the License for the specific language governing permissions and
-  * limitations under the License.
-  */
-
-package org.apache.ambari.metrics.adservice.db
-
-import org.apache.ambari.metrics.adservice.model.AnomalyType.AnomalyType
-import org.apache.ambari.metrics.adservice.model.MetricAnomalyInstance
-
-/**
-  * Trait for anomaly store accessor. (Phoenix)
-  */
-trait AdAnomalyStoreAccessor {
-
-  def initialize(): Unit
-
-  def getMetricAnomalies(anomalyType: AnomalyType,
-                         startTime: Long,
-                         endTime: Long,
-                         limit: Int) : List[MetricAnomalyInstance]
-
-  }
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/AdMetadataStoreAccessor.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/AdMetadataStoreAccessor.scala
deleted file mode 100644
index bcdb416..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/AdMetadataStoreAccessor.scala
+++ /dev/null
@@ -1,53 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.ambari.metrics.adservice.db
-
-import org.apache.ambari.metrics.adservice.metadata.MetricSourceDefinition
-
-/**
-  * Trait used to talk to the AD Metadata Store.
-  */
-trait AdMetadataStoreAccessor {
-
-  /**
-    * Return all saved component definitions from DB.
-    * @return
-    */
-  def getSavedInputDefinitions: List[MetricSourceDefinition]
-
-  /**
-    * Save a set of component definitions
-    * @param metricSourceDefinitions Set of component definitions
-    * @return Success / Failure
-    */
-  def saveInputDefinitions(metricSourceDefinitions: List[MetricSourceDefinition]) : Boolean
-
-  /**
-    * Save a component definition
-    * @param metricSourceDefinition component definition
-    * @return Success / Failure
-    */
-  def saveInputDefinition(metricSourceDefinition: MetricSourceDefinition) : Boolean
-
-  /**
-    * Delete a component definition
-    * @param definitionName component definition
-    * @return
-    */
-  def removeInputDefinition(definitionName: String) : Boolean
-}
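
Because the accessor is only a trait, the metadata store is pluggable; LevelDB is merely the binding chosen in AnomalyDetectionAppModule. A minimal in-memory implementation, purely illustrative and handy for tests, could look like:

    import org.apache.ambari.metrics.adservice.metadata.MetricSourceDefinition
    import scala.collection.mutable

    class InMemoryAdMetadataStoreAccessor extends AdMetadataStoreAccessor {
      private val store = mutable.Map.empty[String, MetricSourceDefinition]

      override def getSavedInputDefinitions: List[MetricSourceDefinition] = store.values.toList

      override def saveInputDefinitions(defs: List[MetricSourceDefinition]): Boolean = {
        defs.foreach(saveInputDefinition)
        true
      }

      override def saveInputDefinition(d: MetricSourceDefinition): Boolean = {
        store(d.definitionName) = d
        true
      }

      override def removeInputDefinition(name: String): Boolean =
        store.remove(name).isDefined
    }
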
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/AdMetadataStoreAccessorImpl.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/AdMetadataStoreAccessorImpl.scala
deleted file mode 100644
index 7405459..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/AdMetadataStoreAccessorImpl.scala
+++ /dev/null
@@ -1,96 +0,0 @@
-/**
-  * Licensed to the Apache Software Foundation (ASF) under one
-  * or more contributor license agreements.  See the NOTICE file
-  * distributed with this work for additional information
-  * regarding copyright ownership.  The ASF licenses this file
-  * to you under the Apache License, Version 2.0 (the
-  * "License"); you may not use this file except in compliance
-  * with the License.  You may obtain a copy of the License at
-  *
-  * http://www.apache.org/licenses/LICENSE-2.0
-  *
-  * Unless required by applicable law or agreed to in writing, software
-  * distributed under the License is distributed on an "AS IS" BASIS,
-  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  * See the License for the specific language governing permissions and
-  * limitations under the License.
-  */
-
-package org.apache.ambari.metrics.adservice.db
-
-import org.apache.ambari.metrics.adservice.metadata.MetricSourceDefinition
-import org.apache.commons.lang.SerializationUtils
-
-import com.google.inject.Inject
-
-/**
-  * Implementation of the AdMetadataStoreAccessor.
-  * Serves as the adaptor between metric definition service and LevelDB worlds.
-  */
-class AdMetadataStoreAccessorImpl extends AdMetadataStoreAccessor {
-
-  @Inject
-  var metadataDataSource: MetadataDatasource = _
-
-  @Inject
-  def this(metadataDataSource: MetadataDatasource) = {
-    this
-    this.metadataDataSource = metadataDataSource
-  }
-
-  /**
-    * Return all saved component definitions from DB.
-    *
-    * @return
-    */
-  override def getSavedInputDefinitions: List[MetricSourceDefinition] = {
-    val valuesFromStore : List[MetadataDatasource#Value] = metadataDataSource.getAll
-    val definitions = scala.collection.mutable.MutableList.empty[MetricSourceDefinition]
-
-    for (value : Array[Byte] <- valuesFromStore) {
-      val definition : MetricSourceDefinition = SerializationUtils.deserialize(value).asInstanceOf[MetricSourceDefinition]
-      if (definition != null) {
-        definitions.+=(definition)
-      }
-    }
-    definitions.toList
-  }
-
-  /**
-    * Save a set of component definitions
-    *
-    * @param metricSourceDefinitions Set of component definitions
-    * @return Success / Failure
-    */
-  override def saveInputDefinitions(metricSourceDefinitions: List[MetricSourceDefinition]): Boolean = {
-    for (definition <- metricSourceDefinitions) {
-      saveInputDefinition(definition)
-    }
-    true
-  }
-
-  /**
-    * Save a component definition
-    *
-    * @param metricSourceDefinition component definition
-    * @return Success / Failure
-    */
-  override def saveInputDefinition(metricSourceDefinition: MetricSourceDefinition): Boolean = {
-    val storeValue : MetadataDatasource#Value = SerializationUtils.serialize(metricSourceDefinition)
-    val storeKey : MetadataDatasource#Key = metricSourceDefinition.definitionName.getBytes()
-    metadataDataSource.put(storeKey, storeValue)
-    true
-  }
-
-  /**
-    * Delete a component definition
-    *
-    * @param definitionName component definition
-    * @return
-    */
-  override def removeInputDefinition(definitionName: String): Boolean = {
-    val storeKey : MetadataDatasource#Key = definitionName.getBytes()
-    metadataDataSource.delete(storeKey)
-    true
-  }
-}
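
The implementation persists each MetricSourceDefinition as a Java-serialized byte array keyed by definitionName, which requires the definition class to be Serializable. A round-trip sketch with the same commons-lang SerializationUtils calls; the Example case class is a stand-in for the real definition type:

    import org.apache.commons.lang.SerializationUtils

    object SerializationSketch {
      case class Example(name: String) extends Serializable

      def main(args: Array[String]): Unit = {
        val bytes: Array[Byte] = SerializationUtils.serialize(Example("host_metrics"))
        val back = SerializationUtils.deserialize(bytes).asInstanceOf[Example]
        assert(back.name == "host_metrics")
      }
    }
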
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/AdMetadataStoreConstants.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/AdMetadataStoreConstants.scala
deleted file mode 100644
index 3d273a3..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/AdMetadataStoreConstants.scala
+++ /dev/null
@@ -1,39 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.ambari.metrics.adservice.db
-
-object AdMetadataStoreConstants {
-
-  //////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
-  /* Table Name constants */
-  //////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
-
-  val METRIC_PROFILE_TABLE_NAME = "METRIC_DEFINITION"
-
-  //////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
-  /* CREATE statement constants */
-  //////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
-
-  val CREATE_METRIC_DEFINITION_TABLE: String = "CREATE TABLE IF NOT EXISTS %s (" +
-    "DEFINITION_NAME VARCHAR, " +
-    "DEFINITION_JSON VARCHAR, " +
-    "DEFINITION_SOURCE NUMBER, " +
-    "CREATED_TIME TIMESTAMP, " +
-    "UPDATED_TIME TIMESTAMP " +
-    "CONSTRAINT pk PRIMARY KEY (DEFINITION_NAME))"
-}
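
CREATE_METRIC_DEFINITION_TABLE is a format template; callers substitute the table name before executing the DDL. The caller is not part of this hunk, but the usual pattern would be along these lines (connection acquisition is sketched after DefaultPhoenixDataSource below):

    import java.sql.Connection
    import org.apache.ambari.metrics.adservice.db.AdMetadataStoreConstants

    object SchemaSketch {
      // `conn` would come from a PhoenixConnectionProvider such as DefaultPhoenixDataSource.
      def ensureMetadataSchema(conn: Connection): Unit = {
        val ddl = String.format(
          AdMetadataStoreConstants.CREATE_METRIC_DEFINITION_TABLE,
          AdMetadataStoreConstants.METRIC_PROFILE_TABLE_NAME)
        val stmt = conn.createStatement()
        try stmt.executeUpdate(ddl) finally stmt.close()
      }
    }
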
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/ConnectionProvider.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/ConnectionProvider.scala
deleted file mode 100644
index cc02ed4..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/ConnectionProvider.scala
+++ /dev/null
@@ -1,45 +0,0 @@
-/**
-  * Licensed to the Apache Software Foundation (ASF) under one
-  * or more contributor license agreements.  See the NOTICE file
-  * distributed with this work for additional information
-  * regarding copyright ownership.  The ASF licenses this file
-  * to you under the Apache License, Version 2.0 (the
-  * "License"); you may not use this file except in compliance
-  * with the License.  You may obtain a copy of the License at
-  *
-  * http://www.apache.org/licenses/LICENSE-2.0
-  *
-  * Unless required by applicable law or agreed to in writing, software
-  * distributed under the License is distributed on an "AS IS" BASIS,
-  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  * See the License for the specific language governing permissions and
-  * limitations under the License.
-  */
-package org.apache.ambari.metrics.adservice.db
-
-import java.sql.Connection
-import java.sql.SQLException
-
-/**
-  * Provides a connection to the anomaly store.
-  */
-trait ConnectionProvider {
-  @throws[SQLException]
-  def getConnection: Connection
-}
\ No newline at end of file
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/DefaultPhoenixDataSource.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/DefaultPhoenixDataSource.scala
deleted file mode 100644
index d9396de..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/DefaultPhoenixDataSource.scala
+++ /dev/null
@@ -1,79 +0,0 @@
-/**
-  * Licensed to the Apache Software Foundation (ASF) under one
-  * or more contributor license agreements.  See the NOTICE file
-  * distributed with this work for additional information
-  * regarding copyright ownership.  The ASF licenses this file
-  * to you under the Apache License, Version 2.0 (the
-  * "License"); you may not use this file except in compliance
-  * with the License.  You may obtain a copy of the License at
-  *
-  * http://www.apache.org/licenses/LICENSE-2.0
-  *
-  * Unless required by applicable law or agreed to in writing, software
-  * distributed under the License is distributed on an "AS IS" BASIS,
-  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  * See the License for the specific language governing permissions and
-  * limitations under the License.
-  */
-package org.apache.ambari.metrics.adservice.db
-
-import org.apache.commons.logging.LogFactory
-import org.apache.hadoop.conf.Configuration
-import org.apache.hadoop.hbase.client.ConnectionFactory
-import org.apache.hadoop.hbase.client.HBaseAdmin
-import java.io.IOException
-import java.sql.Connection
-import java.sql.DriverManager
-import java.sql.SQLException
-
-object DefaultPhoenixDataSource {
-  private[db] val LOG = LogFactory.getLog(classOf[DefaultPhoenixDataSource])
-  private val ZOOKEEPER_CLIENT_PORT = "hbase.zookeeper.property.clientPort"
-  private val ZOOKEEPER_QUORUM = "hbase.zookeeper.quorum"
-  private val ZNODE_PARENT = "zookeeper.znode.parent"
-  private val connectionUrl = "jdbc:phoenix:%s:%s:%s"
-}
-
-class DefaultPhoenixDataSource(var hbaseConf: Configuration) extends PhoenixConnectionProvider {
-
-  val zookeeperClientPort: String = hbaseConf.getTrimmed(DefaultPhoenixDataSource.ZOOKEEPER_CLIENT_PORT, "2181")
-  val zookeeperQuorum: String = hbaseConf.getTrimmed(DefaultPhoenixDataSource.ZOOKEEPER_QUORUM)
-  val znodeParent: String = hbaseConf.getTrimmed(DefaultPhoenixDataSource.ZNODE_PARENT, "/ams-hbase-unsecure")
-  final private var url : String = _
-
-  if (zookeeperQuorum == null || zookeeperQuorum.isEmpty) {
-    throw new IllegalStateException("Unable to find Zookeeper quorum to access HBase store using Phoenix.")
-  }
-  url = String.format(DefaultPhoenixDataSource.connectionUrl, zookeeperQuorum, zookeeperClientPort, znodeParent)
-
-
-  /**
-    * Get HBaseAdmin for table ops.
-    *
-    * @return @HBaseAdmin
-    * @throws IOException
-    */
-  @throws[IOException]
-  override def getHBaseAdmin: HBaseAdmin = ConnectionFactory.createConnection(hbaseConf).getAdmin.asInstanceOf[HBaseAdmin]
-
-  /**
-    * Get JDBC connection to HBase store. Assumption is that the hbase
-    * configuration is present on the classpath and loaded by the caller into
-    * the Configuration object.
-    * Phoenix already caches the HConnection between the client and HBase
-    * cluster.
-    *
-    * @return @java.sql.Connection
-    */
-  @throws[SQLException]
-  override def getConnection: Connection = {
-    DefaultPhoenixDataSource.LOG.debug("Metric store connection url: " + url)
-    try DriverManager.getConnection(url)
-    catch {
-      case e: SQLException =>
-        DefaultPhoenixDataSource.LOG.warn("Unable to connect to HBase store using Phoenix.", e)
-        throw e
-    }
-  }
-
-}
\ No newline at end of file
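
The data source above only assembles a Phoenix JDBC URL from three
HBase/ZooKeeper settings and then defers to DriverManager. A standalone
sketch of that composition, with hypothetical host names:

    object PhoenixUrlSketch extends App {
      val quorum = "zk1.example.com,zk2.example.com" // hypothetical hbase.zookeeper.quorum
      val clientPort = "2181"                        // hbase.zookeeper.property.clientPort default
      val znodeParent = "/ams-hbase-unsecure"        // zookeeper.znode.parent default
      val url = String.format("jdbc:phoenix:%s:%s:%s", quorum, clientPort, znodeParent)
      println(url) // jdbc:phoenix:zk1.example.com,zk2.example.com:2181:/ams-hbase-unsecure
    }
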
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/MetadataDatasource.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/MetadataDatasource.scala
deleted file mode 100644
index 7b223a2..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/MetadataDatasource.scala
+++ /dev/null
@@ -1,79 +0,0 @@
-/**
-  * Licensed to the Apache Software Foundation (ASF) under one
-  * or more contributor license agreements.  See the NOTICE file
-  * distributed with this work for additional information
-  * regarding copyright ownership.  The ASF licenses this file
-  * to you under the Apache License, Version 2.0 (the
-  * "License"); you may not use this file except in compliance
-  * with the License.  You may obtain a copy of the License at
-  *
-  * http://www.apache.org/licenses/LICENSE-2.0
-  *
-  * Unless required by applicable law or agreed to in writing, software
-  * distributed under the License is distributed on an "AS IS" BASIS,
-  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  * See the License for the specific language governing permissions and
-  * limitations under the License.
-  */
-
-package org.apache.ambari.metrics.adservice.db
-
-trait MetadataDatasource {
-
-  type Key = Array[Byte]
-  type Value = Array[Byte]
-
-  /**
-    *  Idempotent call at the start of the application to initialize db
-    */
-  def initialize(): Unit
-
-  /**
-    * Obtains the value associated with a key. Requires the (key, value) pair to be present in the DataSource.
-    *
-    * @param key
-    * @return the value associated with the passed key.
-    */
-  def apply(key: Key): Value = get(key).get
-
-  /**
-    * Obtains the value associated with a key, if one exists.
-    *
-    * @param key
-    * @return the value associated with the passed key.
-    */
-  def get(key: Key): Option[Value]
-
-  /**
-    * This function obtains all the values
-    *
-    * @return the list of values
-    */
-  def getAll: List[Value]
-
-  /**
-    * This function associates a key to a value, overwriting if necessary
-    */
-  def put(key: Key, value: Value): Unit
-
-  /**
-    * Delete key from the db
-    */
-  def delete(key: Key): Unit
-
-  /**
-    * This function updates the DataSource by deleting, updating and inserting new (key-value) pairs.
-    *
-    * @param toRemove which includes all the keys to be removed from the DataSource.
-    * @param toUpsert which includes all the (key-value) pairs to be inserted into the DataSource.
-    *                 If a key is already in the DataSource its value will be updated.
-    * @return the new DataSource after the removals and insertions were done.
-    */
-  def update(toRemove: Seq[Key], toUpsert: Seq[(Key, Value)]): Unit
-
-  /**
-    * This function closes the DataSource, without deleting the files used by it.
-    */
-  def close(): Unit
-
-}
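
A minimal in-memory implementation sketch of this trait (hypothetical, e.g.
for unit tests; byte-array keys are wrapped in Seq so map lookups use
structural equality rather than array reference equality):

    import org.apache.ambari.metrics.adservice.db.MetadataDatasource

    class InMemoryMetadataDatasource extends MetadataDatasource {
      private val store = scala.collection.mutable.Map.empty[Seq[Byte], Value]

      override def initialize(): Unit = ()
      override def get(key: Key): Option[Value] = store.get(key.toSeq)
      override def getAll: List[Value] = store.values.toList
      override def put(key: Key, value: Value): Unit = store(key.toSeq) = value
      override def delete(key: Key): Unit = store -= key.toSeq
      override def update(toRemove: Seq[Key], toUpsert: Seq[(Key, Value)]): Unit = {
        toRemove.foreach(delete)
        toUpsert.foreach { case (k, v) => put(k, v) }
      }
      override def close(): Unit = ()
    }
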
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/PhoenixAnomalyStoreAccessor.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/PhoenixAnomalyStoreAccessor.scala
deleted file mode 100644
index 53e6dee..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/PhoenixAnomalyStoreAccessor.scala
+++ /dev/null
@@ -1,195 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.ambari.metrics.adservice.db
-
-import java.sql.{Connection, PreparedStatement, ResultSet, SQLException}
-import java.util.concurrent.TimeUnit.SECONDS
-
-import org.apache.ambari.metrics.adservice.app.AnomalyDetectionAppConfig
-import org.apache.ambari.metrics.adservice.configuration.HBaseConfiguration
-import org.apache.ambari.metrics.adservice.metadata.{MetricDefinitionService, MetricKey}
-import org.apache.ambari.metrics.adservice.model.AnomalyDetectionMethod.AnomalyDetectionMethod
-import org.apache.ambari.metrics.adservice.model.AnomalyType.AnomalyType
-import org.apache.ambari.metrics.adservice.model._
-import org.apache.hadoop.hbase.util.RetryCounterFactory
-import org.slf4j.{Logger, LoggerFactory}
-
-import com.google.inject.Inject
-
-/**
-  * Phoenix query handler class.
-  */
-class PhoenixAnomalyStoreAccessor extends AdAnomalyStoreAccessor {
-
-  @Inject
-  var configuration: AnomalyDetectionAppConfig = _
-
-  @Inject
-  var metricDefinitionService: MetricDefinitionService = _
-
-  var datasource: PhoenixConnectionProvider = _
-  val LOG : Logger = LoggerFactory.getLogger(classOf[PhoenixAnomalyStoreAccessor])
-
-  @Override
-  def initialize(): Unit = {
-
-    datasource = new DefaultPhoenixDataSource(HBaseConfiguration.getHBaseConf)
-    val retryCounterFactory = new RetryCounterFactory(10, SECONDS.toMillis(3).toInt)
-
-    val ttl = configuration.getAdServiceConfiguration.getAnomalyDataTtl
-    try {
-      var conn : Connection = getConnectionRetryingOnException(retryCounterFactory)
-      var stmt = conn.createStatement
-
-      //Create Method parameters table.
-      val methodParametersSql = String.format(PhoenixQueryConstants.CREATE_METHOD_PARAMETERS_TABLE,
-        PhoenixQueryConstants.METHOD_PARAMETERS_TABLE_NAME)
-      stmt.executeUpdate(methodParametersSql)
-
-      //Create Point in Time anomaly table
-      val pointInTimeAnomalySql = String.format(PhoenixQueryConstants.CREATE_PIT_ANOMALY_METRICS_TABLE_SQL,
-        PhoenixQueryConstants.PIT_ANOMALY_METRICS_TABLE_NAME,
-        ttl.asInstanceOf[Object])
-      stmt.executeUpdate(pointInTimeAnomalySql)
-
-      //Create Trend Anomaly table
-      val trendAnomalySql = String.format(PhoenixQueryConstants.CREATE_TREND_ANOMALY_METRICS_TABLE_SQL,
-        PhoenixQueryConstants.TREND_ANOMALY_METRICS_TABLE_NAME,
-        ttl.asInstanceOf[Object])
-      stmt.executeUpdate(trendAnomalySql)
-
-      //Create model snapshot table.
-      val snapshotSql = String.format(PhoenixQueryConstants.CREATE_MODEL_SNAPSHOT_TABLE,
-        PhoenixQueryConstants.MODEL_SNAPSHOT)
-      stmt.executeUpdate(snapshotSql)
-
-      conn.commit()
-    } catch {
-      case e: SQLException => throw e
-    }
-  }
-
-  @Override
-  def getMetricAnomalies(anomalyType: AnomalyType, startTime: Long, endTime: Long, limit: Int) : List[MetricAnomalyInstance] = {
-    val anomalies = scala.collection.mutable.MutableList.empty[MetricAnomalyInstance]
-    val conn : Connection = getConnection
-    var stmt : PreparedStatement = null
-    var rs : ResultSet = null
-    val s : Season = Season(Range(-1,-1), SeasonType.DAY)
-
-    try {
-      stmt = prepareAnomalyMetricsGetSqlStatement(conn, anomalyType, startTime, endTime, limit)
-      rs = stmt.executeQuery
-      if (anomalyType.equals(AnomalyType.POINT_IN_TIME)) {
-        while (rs.next()) {
-          val uuid: Array[Byte] = rs.getBytes("METRIC_UUID")
-          val timestamp: Long = rs.getLong("ANOMALY_TIMESTAMP")
-          val metricValue: Double = rs.getDouble("METRIC_VALUE")
-          val methodType: AnomalyDetectionMethod = AnomalyDetectionMethod.withName(rs.getString("METHOD_NAME"))
-          val season: Season = Season.fromJson(rs.getString("SEASONAL_INFO"))
-          val anomalyScore: Double = rs.getDouble("ANOMALY_SCORE")
-          val modelSnapshot: String = rs.getString("MODEL_PARAMETERS")
-
-          val metricKey: MetricKey = metricDefinitionService.getMetricKeyFromUuid(uuid)
-          val anomalyInstance: MetricAnomalyInstance = new PointInTimeAnomalyInstance(metricKey, timestamp,
-            metricValue, methodType, anomalyScore, season, modelSnapshot)
-          anomalies.+=(anomalyInstance)
-        }
-      } else {
-        while (rs.next()) {
-          val uuid: Array[Byte] = rs.getBytes("METRIC_UUID")
-          val anomalyStart: Long = rs.getLong("ANOMALY_PERIOD_START")
-          val anomalyEnd: Long = rs.getLong("ANOMALY_PERIOD_END")
-          val referenceStart: Long = rs.getLong("TEST_PERIOD_START")
-          val referenceEnd: Long = rs.getLong("TEST_PERIOD_END")
-          val methodType: AnomalyDetectionMethod = AnomalyDetectionMethod.withName(rs.getString("METHOD_NAME"))
-          val season: Season = Season.fromJson(rs.getString("SEASONAL_INFO"))
-          val anomalyScore: Double = rs.getDouble("ANOMALY_SCORE")
-          val modelSnapshot: String = rs.getString("MODEL_PARAMETERS")
-
-          val metricKey: MetricKey = metricDefinitionService.getMetricKeyFromUuid(uuid)
-          val anomalyInstance: MetricAnomalyInstance = TrendAnomalyInstance(metricKey,
-            TimeRange(anomalyStart, anomalyEnd),
-            TimeRange(referenceStart, referenceEnd),
-            methodType, anomalyScore, season, modelSnapshot)
-          anomalies.+=(anomalyInstance)
-        }
-      }
-    } catch {
-      case e: SQLException => throw e
-    }
-
-    anomalies.toList
-  }
-
-  @throws[SQLException]
-  private def prepareAnomalyMetricsGetSqlStatement(connection: Connection, anomalyType: AnomalyType, startTime: Long, endTime: Long, limit: Int): PreparedStatement = {
-
-    val sb = new StringBuilder
-
-    if (anomalyType.equals(AnomalyType.POINT_IN_TIME)) {
-      sb.++=(String.format(PhoenixQueryConstants.GET_PIT_ANOMALY_METRIC_SQL, PhoenixQueryConstants.PIT_ANOMALY_METRICS_TABLE_NAME))
-    } else {
-      sb.++=(String.format(PhoenixQueryConstants.GET_TREND_ANOMALY_METRIC_SQL, PhoenixQueryConstants.TREND_ANOMALY_METRICS_TABLE_NAME))
-    }
-
-    sb.append(" LIMIT " + limit)
-    var stmt: java.sql.PreparedStatement = null
-    try {
-      stmt = connection.prepareStatement(sb.toString)
-
-      var pos = 1
-      stmt.setLong(pos, startTime)
-
-      pos += 1
-      stmt.setLong(pos, endTime)
-
-      stmt.setFetchSize(limit)
-
-    } catch {
-      case e: SQLException =>
-        if (stmt != null)
-          stmt.close()
-        throw e
-    }
-    stmt
-  }
-
-  @throws[SQLException]
-  private def getConnection: Connection = datasource.getConnection
-
-  @throws[SQLException]
-  @throws[InterruptedException]
-  private def getConnectionRetryingOnException (retryCounterFactory : RetryCounterFactory) : Connection = {
-    val retryCounter = retryCounterFactory.create
-    while(true) {
-      try
-        return getConnection
-      catch {
-        case e: SQLException =>
-          if (!retryCounter.shouldRetry) {
-            LOG.error("HBaseAccessor getConnection failed after " + retryCounter.getMaxAttempts + " attempts")
-            throw e
-          }
-      }
-      retryCounter.sleepUntilNextRetry()
-    }
-    null
-  }
-
-}
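
The connection bootstrap above retries through HBase's RetryCounterFactory;
a dependency-free sketch of the same pattern (the attempt count and sleep
mirror the 10 attempts x 3 seconds configured above):

    object ConnectionRetrySketch {
      import java.sql.{Connection, SQLException}

      def connectWithRetry(connect: () => Connection,
                           maxAttempts: Int = 10,
                           sleepMillis: Long = 3000): Connection = {
        var attempt = 0
        while (true) {
          attempt += 1
          try {
            return connect()
          } catch {
            case e: SQLException =>
              if (attempt >= maxAttempts) throw e // give up after the last attempt
          }
          Thread.sleep(sleepMillis) // back off before retrying
        }
        throw new IllegalStateException("unreachable")
      }
    }
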
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/PhoenixConnectionProvider.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/PhoenixConnectionProvider.scala
deleted file mode 100644
index 1faf1ba..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/PhoenixConnectionProvider.scala
+++ /dev/null
@@ -1,66 +0,0 @@
-/**
-  * Licensed to the Apache Software Foundation (ASF) under one
-  * or more contributor license agreements.  See the NOTICE file
-  * distributed with this work for additional information
-  * regarding copyright ownership.  The ASF licenses this file
-  * to you under the Apache License, Version 2.0 (the
-  * "License"); you may not use this file except in compliance
-  * with the License.  You may obtain a copy of the License at
-  *
-  * http://www.apache.org/licenses/LICENSE-2.0
-  *
-  * Unless required by applicable law or agreed to in writing, software
-  * distributed under the License is distributed on an "AS IS" BASIS,
-  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  * See the License for the specific language governing permissions and
-  * limitations under the License.
-  */
-
-package org.apache.ambari.metrics.adservice.db
-
-import org.apache.hadoop.hbase.client.HBaseAdmin
-import java.io.IOException
-
-trait PhoenixConnectionProvider extends ConnectionProvider {
-  /**
-    * Get HBaseAdmin for the Phoenix connection
-    *
-    * @return
-    * @throws IOException
-    */
-    @throws[IOException]
-    def getHBaseAdmin: HBaseAdmin
-}
\ No newline at end of file
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/PhoenixQueryConstants.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/PhoenixQueryConstants.scala
deleted file mode 100644
index d9774e0..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/PhoenixQueryConstants.scala
+++ /dev/null
@@ -1,108 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.ambari.metrics.adservice.db
-
-object PhoenixQueryConstants {
-
-  //////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
-  /* Table Name constants */
-  //////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
-
-  val METRIC_PROFILE_TABLE_NAME = "METRIC_PROFILE"
-  val METHOD_PARAMETERS_TABLE_NAME = "METHOD_PARAMETERS"
-  val PIT_ANOMALY_METRICS_TABLE_NAME = "PIT_METRIC_ANOMALIES"
-  val TREND_ANOMALY_METRICS_TABLE_NAME = "TREND_METRIC_ANOMALIES"
-  val MODEL_SNAPSHOT = "MODEL_SNAPSHOT"
-
-  //////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
-  /* CREATE statement constants */
-  //////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
-
-  val CREATE_METHOD_PARAMETERS_TABLE: String = "CREATE TABLE IF NOT EXISTS %s (" +
-    "METHOD_NAME VARCHAR, " +
-    "METHOD_TYPE VARCHAR, " +
-    "PARAMETERS VARCHAR " +
-    "CONSTRAINT pk PRIMARY KEY (METHOD_NAME)) " +
-    "DATA_BLOCK_ENCODING='FAST_DIFF', IMMUTABLE_ROWS=true, COMPRESSION='SNAPPY'"
-
-  val CREATE_PIT_ANOMALY_METRICS_TABLE_SQL: String = "CREATE TABLE IF NOT EXISTS %s (" +
-    "METRIC_UUID BINARY(20) NOT NULL, " +
-    "METHOD_NAME VARCHAR, " +
-    "ANOMALY_TIMESTAMP UNSIGNED_LONG NOT NULL, " +
-    "METRIC_VALUE DOUBLE, " +
-    "SEASONAL_INFO VARCHAR, " +
-    "ANOMALY_SCORE DOUBLE, " +
-    "MODEL_PARAMETERS VARCHAR, " +
-    "DETECTION_TIME UNSIGNED_LONG " +
-    "CONSTRAINT pk PRIMARY KEY (METRIC_UUID, METHOD_NAME, ANOMALY_TIMESTAMP)) " +
-    "DATA_BLOCK_ENCODING='FAST_DIFF', IMMUTABLE_ROWS=true, TTL=%s, COMPRESSION='SNAPPY'"
-
-  val CREATE_TREND_ANOMALY_METRICS_TABLE_SQL: String = "CREATE TABLE IF NOT EXISTS %s (" +
-    "METRIC_UUID BINARY(20) NOT NULL, " +
-    "METHOD_NAME VARCHAR, " +
-    "ANOMALY_PERIOD_START UNSIGNED_LONG NOT NULL, " +
-    "ANOMALY_PERIOD_END UNSIGNED_LONG NOT NULL, " +
-    "TEST_PERIOD_START UNSIGNED_LONG NOT NULL, " +
-    "TEST_PERIOD_END UNSIGNED_LONG NOT NULL, " +
-    "SEASONAL_INFO VARCHAR, " +
-    "ANOMALY_SCORE DOUBLE, " +
-    "MODEL_PARAMETERS VARCHAR, " +
-    "DETECTION_TIME UNSIGNED_LONG " +
-    "CONSTRAINT pk PRIMARY KEY (METRIC_UUID, METHOD_NAME, ANOMALY_PERIOD_START, ANOMALY_PERIOD_END, TEST_PERIOD_START, TEST_PERIOD_END)) " +
-    "DATA_BLOCK_ENCODING='FAST_DIFF', IMMUTABLE_ROWS=true, TTL=%s, COMPRESSION='SNAPPY'"
-
-  val CREATE_MODEL_SNAPSHOT_TABLE: String = "CREATE TABLE IF NOT EXISTS %s (" +
-    "METRIC_UUID BINARY(20) NOT NULL, " +
-    "METHOD_NAME VARCHAR, " +
-    "METHOD_TYPE VARCHAR, " +
-    "PARAMETERS VARCHAR, " +
-    "SNAPSHOT_TIME UNSIGNED_LONG NOT NULL " +
-    "CONSTRAINT pk PRIMARY KEY (METRIC_UUID, METHOD_NAME, SNAPSHOT_TIME)) " +
-    "DATA_BLOCK_ENCODING='FAST_DIFF', IMMUTABLE_ROWS=true, COMPRESSION='SNAPPY'"
-
-  //////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
-  /* UPSERT statement constants */
-  //////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
-
-  val UPSERT_METHOD_PARAMETERS_SQL: String = "UPSERT INTO %s (METHOD_NAME, METHOD_TYPE, PARAMETERS) VALUES (?,?,?)"
-
-  val UPSERT_PIT_ANOMALY_METRICS_SQL: String = "UPSERT INTO %s (METRIC_UUID, ANOMALY_TIMESTAMP, METRIC_VALUE, METHOD_NAME, " +
-    "SEASONAL_INFO, ANOMALY_SCORE, MODEL_PARAMETERS, DETECTION_TIME) VALUES (?, ?, ?, ?, ?, ?, ?, ?)"
-
-  val UPSERT_TREND_ANOMALY_METRICS_SQL: String = "UPSERT INTO %s (METRIC_UUID, ANOMALY_PERIOD_START, ANOMALY_PERIOD_END, " +
-    "TEST_PERIOD_START, TEST_PERIOD_END, METHOD_NAME, ANOMALY_SCORE, MODEL_PARAMETERS, DETECTION_TIME) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)"
-
-  val UPSERT_MODEL_SNAPSHOT_SQL: String = "UPSERT INTO %s (METRIC_UUID, METHOD_NAME, METHOD_TYPE, PARAMETERS) VALUES (?, ?, ?, ?)"
-
-  //////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
-  /* GET statement constants */
-  //////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
-
-  val GET_METHOD_PARAMETERS_SQL: String = "SELECT METHOD_NAME, METHOD_TYPE, PARAMETERS FROM %s WHERE METHOD_NAME = %s"
-
-  val GET_PIT_ANOMALY_METRIC_SQL: String = "SELECT METRIC_UUID, ANOMALY_TIMESTAMP, METRIC_VALUE, METHOD_NAME, SEASONAL_INFO, " +
-    "ANOMALY_SCORE, MODEL_PARAMETERS, DETECTION_TIME FROM %s WHERE ANOMALY_TIMESTAMP > ? AND ANOMALY_TIMESTAMP <= ? " +
-    "ORDER BY ANOMALY_SCORE DESC"
-
-  val GET_TREND_ANOMALY_METRIC_SQL: String = "SELECT METRIC_UUID, ANOMALY_PERIOD_START, ANOMALY_PERIOD_END, TEST_PERIOD_START, " +
-    "TEST_PERIOD_END, METHOD_NAME, SEASONAL_INFO, ANOMALY_SCORE, MODEL_PARAMETERS, DETECTION_TIME FROM %s WHERE ANOMALY_PERIOD_END > ? " +
-    "AND ANOMALY_PERIOD_END <= ? ORDER BY ANOMALY_SCORE DESC"
-
-  val GET_MODEL_SNAPSHOT_SQL: String = "SELECT METRIC_UUID, METHOD_NAME, METHOD_TYPE, PARAMETERS FROM %s WHERE METRIC_UUID = %s AND METHOD_NAME = %s"
-
-}
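
These templates were filled with String.format before being executed as
Phoenix DDL/DML, as in the accessor above. A sketch of the substitution (the
30-day TTL is an example value, not a shipped default):

    import org.apache.ambari.metrics.adservice.db.PhoenixQueryConstants

    object DdlTemplateSketch extends App {
      val ttlSeconds: Int = 30 * 24 * 60 * 60 // example: 30 days
      val sql = String.format(
        PhoenixQueryConstants.CREATE_PIT_ANOMALY_METRICS_TABLE_SQL,
        PhoenixQueryConstants.PIT_ANOMALY_METRICS_TABLE_NAME,
        ttlSeconds.asInstanceOf[Object])
      println(sql) // CREATE TABLE IF NOT EXISTS PIT_METRIC_ANOMALIES (... TTL=2592000 ...)
    }
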
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/leveldb/LevelDBDatasource.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/leveldb/LevelDBDatasource.scala
deleted file mode 100644
index 49ef272..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/leveldb/LevelDBDatasource.scala
+++ /dev/null
@@ -1,128 +0,0 @@
-/**
-  * Licensed to the Apache Software Foundation (ASF) under one
-  * or more contributor license agreements.  See the NOTICE file
-  * distributed with this work for additional information
-  * regarding copyright ownership.  The ASF licenses this file
-  * to you under the Apache License, Version 2.0 (the
-  * "License"); you may not use this file except in compliance
-  * with the License.  You may obtain a copy of the License at
-  *
-  * http://www.apache.org/licenses/LICENSE-2.0
-  *
-  * Unless required by applicable law or agreed to in writing, software
-  * distributed under the License is distributed on an "AS IS" BASIS,
-  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  * See the License for the specific language governing permissions and
-  * limitations under the License.
-  */
-
-package org.apache.ambari.metrics.adservice.leveldb
-
-import java.io.File
-
-import javax.inject.Inject
-
-import org.apache.ambari.metrics.adservice.app.AnomalyDetectionAppConfig
-import org.apache.ambari.metrics.adservice.configuration.MetricDefinitionDBConfiguration
-import org.apache.ambari.metrics.adservice.db.MetadataDatasource
-import org.iq80.leveldb.{DB, Options, WriteOptions}
-import org.iq80.leveldb.impl.Iq80DBFactory
-
-import com.google.inject.Singleton
-
-@Singleton
-class LevelDBDataSource() extends MetadataDatasource {
-
-  private var db: DB = _
-  @volatile var isInitialized: Boolean = false
-
-  var appConfig: AnomalyDetectionAppConfig = _
-
-  @Inject
-  def this(appConfig: AnomalyDetectionAppConfig) = {
-    this
-    this.appConfig = appConfig
-  }
-
-  override def initialize(): Unit = {
-    if (isInitialized) return 
-
-    val configuration: MetricDefinitionDBConfiguration = appConfig.getMetricDefinitionDBConfiguration
-
-    db = createDB(new LevelDbConfig {
-      override val createIfMissing: Boolean = true
-      override val verifyChecksums: Boolean = configuration.getVerifyChecksums
-      override val paranoidChecks: Boolean = configuration.getPerformParanoidChecks
-      override val path: String = configuration.getDbDirPath
-    })
-    isInitialized = true
-  }
-
-  private def createDB(levelDbConfig: LevelDbConfig): DB = {
-    import levelDbConfig._
-
-    val options = new Options()
-      .createIfMissing(createIfMissing)
-      .paranoidChecks(paranoidChecks) // raise an error as soon as it detects an internal corruption
-      .verifyChecksums(verifyChecksums) // force checksum verification of all data that is read from the file system on behalf of a particular read
-
-    Iq80DBFactory.factory.open(new File(path), options)
-  }
-
-  override def close(): Unit = {
-    db.close()
-  }
-
-  /**
-    * Obtains the value associated with a key, if one exists.
-    *
-    * @param key
-    * @return the value associated with the passed key.
-    */
-  override def get(key: Key): Option[Value] = Option(db.get(key))
-
-  /**
-    * This function obtains all the values
-    *
-    * @return the list of values
-    */
-  def getAll: List[Value] = {
-    val values = scala.collection.mutable.MutableList.empty[Value]
-    val iterator = db.iterator()
-    iterator.seekToFirst()
-    while (iterator.hasNext) {
-      val entry: java.util.Map.Entry[Key, Value] = iterator.next()
-      values.+=(entry.getValue)
-    }
-    values.toList
-  }
-
-  /**
-    * This function updates the DataSource by deleting, updating and inserting new (key-value) pairs.
-    *
-    * @param toRemove which includes all the keys to be removed from the DataSource.
-    * @param toUpsert which includes all the (key-value) pairs to be inserted into the DataSource.
-    *                 If a key is already in the DataSource its value will be updated.
-    */
-  override def update(toRemove: Seq[Key], toUpsert: Seq[(Key, Value)]): Unit = {
-    val batch = db.createWriteBatch()
-    toRemove.foreach { key => batch.delete(key) }
-    toUpsert.foreach { item => batch.put(item._1, item._2) }
-    db.write(batch, new WriteOptions())
-  }
-
-  override def put(key: Key, value: Value): Unit = {
-    db.put(key, value)
-  }
-
-  override def delete(key: Key): Unit = {
-    db.delete(key)
-  }
-}
-
-trait LevelDbConfig {
-  val createIfMissing: Boolean
-  val paranoidChecks: Boolean
-  val verifyChecksums: Boolean
-  val path: String
-}
\ No newline at end of file
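
A usage sketch for the store above (the key/value encoding is hypothetical;
the data source is assumed to have been constructed and initialize()d
elsewhere):

    object LevelDbRoundTripSketch {
      import java.nio.charset.StandardCharsets.UTF_8
      import org.apache.ambari.metrics.adservice.db.MetadataDatasource

      def roundTrip(ds: MetadataDatasource): Option[String] = {
        val key = "definition:host-memory".getBytes(UTF_8)      // example key
        val value = """{"definition-name":"host-memory"}""".getBytes(UTF_8)
        ds.put(key, value)
        ds.update(toRemove = Nil, toUpsert = Seq(key -> value)) // batched form of the same write
        ds.get(key).map(new String(_, UTF_8))
      }
    }
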
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/ADMetadataProvider.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/ADMetadataProvider.scala
deleted file mode 100644
index c277221..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/ADMetadataProvider.scala
+++ /dev/null
@@ -1,147 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.ambari.metrics.adservice.metadata
-
-import javax.ws.rs.core.Response
-
-import org.apache.ambari.metrics.adservice.configuration.MetricCollectorConfiguration
-import org.apache.commons.lang.StringUtils
-import org.slf4j.{Logger, LoggerFactory}
-
-import com.fasterxml.jackson.databind.ObjectMapper
-import com.fasterxml.jackson.module.scala.experimental.ScalaObjectMapper
-
-import scalaj.http.{Http, HttpRequest, HttpResponse}
-
-/**
-  * Class to invoke Metrics Collector metadata API.
-  * TODO : Instantiate a sync thread that regularly updates the internal maps by reading off AMS metadata.
-  */
-class ADMetadataProvider extends MetricMetadataProvider {
-
-  var metricCollectorHosts: Array[String] = Array.empty[String]
-  var metricCollectorPort: String = _
-  var metricCollectorProtocol: String = _
-  var metricMetadataPath: String = "/v1/timeline/metrics/metadata/keys"
-  val LOG : Logger = LoggerFactory.getLogger(classOf[ADMetadataProvider])
-
-  val connectTimeout: Int = 10000
-  val readTimeout: Int = 10000
-  //TODO: Add retries for metrics collector GET call.
-  //val retries: Long = 5
-
-  def this(configuration: MetricCollectorConfiguration) {
-    this
-    if (StringUtils.isNotEmpty(configuration.getHosts)) {
-      metricCollectorHosts = configuration.getHosts.split(",")
-    }
-    metricCollectorPort = configuration.getPort
-    metricCollectorProtocol = configuration.getProtocol
-    metricMetadataPath = configuration.getMetadataEndpoint
-  }
-
-  override def getMetricKeysForDefinitions(metricSourceDefinition: MetricSourceDefinition): Set[MetricKey] = {
-
-    val numDefinitions: Int = metricSourceDefinition.metricDefinitions.size
-    val metricKeySet: scala.collection.mutable.Set[MetricKey] = scala.collection.mutable.Set.empty[MetricKey]
-
-    for (metricDef <- metricSourceDefinition.metricDefinitions) {
-      if (metricDef.isValid) { //Skip requesting metric keys for invalid definitions.
-        for (host <- metricCollectorHosts) {
-          val metricKeys: Set[MetricKey] = getKeysFromMetricsCollector(metricCollectorProtocol, host, metricCollectorPort, metricMetadataPath, metricDef)
-          if (metricKeys != null) {
-            metricKeySet.++=(metricKeys)
-          }
-        }
-      }
-    }
-    metricKeySet.toSet
-  }
-
-  /**
-    * Fetch metric keys for a single metric definition from one collector host.
-    *
-    * @param protocol collector protocol (http or https)
-    * @param host collector host name
-    * @param port collector port
-    * @param path metadata endpoint path
-    * @param metricDefinition definition whose keys should be resolved
-    * @return set of metric keys known to the collector
-    */
-  def getKeysFromMetricsCollector(protocol: String, host: String, port: String, path: String, metricDefinition: MetricDefinition): Set[MetricKey] = {
-
-    val url: String = protocol + "://" + host + ":" + port + path
-    val mapper = new ObjectMapper() with ScalaObjectMapper
-
-    if (metricDefinition.hosts == null || metricDefinition.hosts.isEmpty) {
-      val request: HttpRequest = Http(url)
-        .param("metricName", metricDefinition.metricName)
-        .param("appId", metricDefinition.appId)
-      makeHttpGetCall(request, mapper)
-    } else {
-      val metricKeySet: scala.collection.mutable.Set[MetricKey] = scala.collection.mutable.Set.empty[MetricKey]
-
-      for (h <- metricDefinition.hosts) {
-        val request: HttpRequest = Http(url)
-          .param("metricName", metricDefinition.metricName)
-          .param("appId", metricDefinition.appId)
-          .param("hostname", h)
-
-        val metricKeys = makeHttpGetCall(request, mapper)
-        metricKeySet.++=(metricKeys)
-      }
-      metricKeySet.toSet
-    }
-  }
-
-  private def makeHttpGetCall(request: HttpRequest, mapper: ObjectMapper): Set[MetricKey] = {
-
-    try {
-      val result: HttpResponse[String] = request.asString
-      if (result.code == Response.Status.OK.getStatusCode) {
-        LOG.info("Successfully fetched metric keys from metrics collector")
-        val metricKeySet: java.util.Set[java.util.Map[String, String]] = mapper.readValue(result.body,
-          classOf[java.util.Set[java.util.Map[String, String]]])
-        return getMetricKeys(metricKeySet)
-      } else {
-        LOG.error("Got an error when trying to fetch metric keys from metrics collector. Code = " + result.code + ", Message = " + result.body)
-      }
-    } catch {
-      case _: java.io.IOException | _: java.net.SocketTimeoutException => LOG.error("Unable to fetch metric keys from Metrics collector for : " + request.toString)
-    }
-    Set.empty[MetricKey]
-  }
-
-
-  def getMetricKeys(timelineMetricKeys: java.util.Set[java.util.Map[String, String]]): Set[MetricKey] = {
-    val metricKeySet: scala.collection.mutable.Set[MetricKey] = scala.collection.mutable.Set.empty[MetricKey]
-    val iter = timelineMetricKeys.iterator()
-    while (iter.hasNext) {
-      val timelineMetricKey: java.util.Map[String, String] = iter.next()
-      val metricKey: MetricKey = MetricKey(
-        timelineMetricKey.get("metricName"),
-        timelineMetricKey.get("appId"),
-        timelineMetricKey.get("instanceId"),
-        timelineMetricKey.get("hostname"),
-        timelineMetricKey.get("uuid").getBytes())
-
-      metricKeySet.add(metricKey)
-    }
-    metricKeySet.toSet
-  }
-
-}
\ No newline at end of file
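
A sketch of the request the provider builds per collector host, using the
same scalaj-http calls as above (host, port and path are placeholders):

    object MetadataRequestSketch extends App {
      import scalaj.http.{Http, HttpRequest, HttpResponse}

      val url = "http://collector.example.com:6188/v1/timeline/metrics/metadata/keys" // placeholder
      val request: HttpRequest = Http(url)
        .param("metricName", "mem_free") // example metric
        .param("appId", "HOST")
      val response: HttpResponse[String] = request.asString
      println(s"${response.code}: ${response.body.take(200)}")
    }
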
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/InputMetricDefinitionParser.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/InputMetricDefinitionParser.scala
deleted file mode 100644
index 3c8ea84..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/InputMetricDefinitionParser.scala
+++ /dev/null
@@ -1,58 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.ambari.metrics.adservice.metadata
-
-import java.io.File
-
-import org.apache.ambari.metrics.adservice.app.ADServiceScalaModule
-
-import com.fasterxml.jackson.databind.ObjectMapper
-import com.fasterxml.jackson.module.scala.experimental.ScalaObjectMapper
-
-object InputMetricDefinitionParser {
-
-  def parseInputDefinitionsFromDirectory(directory: String): List[MetricSourceDefinition] = {
-
-    if (directory == null) {
-      return List.empty[MetricSourceDefinition]
-    }
-    val mapper = new ObjectMapper() with ScalaObjectMapper
-    mapper.registerModule(new ADServiceScalaModule)
-    val metricSourceDefinitions: scala.collection.mutable.MutableList[MetricSourceDefinition] =
-      scala.collection.mutable.MutableList.empty[MetricSourceDefinition]
-
-    for (file <- getFilesInDirectory(directory)) {
-      val source = scala.io.Source.fromFile(file)
-      val lines = try source.mkString finally source.close()
-      val definition: MetricSourceDefinition = mapper.readValue[MetricSourceDefinition](lines)
-      if (definition != null) {
-        metricSourceDefinitions.+=(definition)
-      }
-    }
-    metricSourceDefinitions.toList
-  }
-
-  private def getFilesInDirectory(directory: String): List[File] = {
-    val dir = new File(directory)
-    if (dir.exists && dir.isDirectory) {
-      dir.listFiles.filter(_.isFile).toList
-    } else {
-      List[File]()
-    }
-  }
-}
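
Usage sketch: point the parser at a directory of MetricSourceDefinition JSON
files (the path is illustrative; the expected JSON layout appears in the
MetricSourceDefinition comment further below):

    object DefinitionParserSketch extends App {
      import org.apache.ambari.metrics.adservice.metadata.InputMetricDefinitionParser

      val dir = "/etc/ambari-metrics-adservice/conf/input-definitions" // hypothetical path
      val definitions = InputMetricDefinitionParser.parseInputDefinitionsFromDirectory(dir)
      definitions.foreach(d => println(d.definitionName))
    }
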
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricDefinition.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricDefinition.scala
deleted file mode 100644
index c668dfa..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricDefinition.scala
+++ /dev/null
@@ -1,105 +0,0 @@
-/**
-  * Licensed to the Apache Software Foundation (ASF) under one
-  * or more contributor license agreements.  See the NOTICE file
-  * distributed with this work for additional information
-  * regarding copyright ownership.  The ASF licenses this file
-  * to you under the Apache License, Version 2.0 (the
-  * "License"); you may not use this file except in compliance
-  * with the License.  You may obtain a copy of the License at
-  *
-  * http://www.apache.org/licenses/LICENSE-2.0
-  *
-  * Unless required by applicable law or agreed to in writing, software
-  * distributed under the License is distributed on an "AS IS" BASIS,
-  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  * See the License for the specific language governing permissions and
-  * limitations under the License.
-  */
-
-package org.apache.ambari.metrics.adservice.metadata
-
-import org.apache.commons.lang3.StringUtils
-
-import com.fasterxml.jackson.annotation.JsonIgnore
-/*
-   {
-       "metric-name": "mem_free",
-       "appId" : "HOST",
-       "hosts" : ["h1","h2"],
-       "metric-description" : "Free memory on a Host.",
-       "troubleshooting-info" : "Sudden drop / hike in free memory on a host.",
-       "static-threshold" : 10,
-       “app-id” : “HOST”
-}
- */
-
-@SerialVersionUID(1002L)
-class MetricDefinition extends Serializable {
-
-  var metricName: String = _
-  var appId: String = _
-  var hosts: List[String] = List.empty[String]
-  var metricDescription: String = ""
-  var troubleshootingInfo: String = ""
-  var staticThreshold: Double = _
-
-  // A metric definition is valid if we can resolve a metricName and appId (defined or inherited) at runtime.
-  private var valid : Boolean = true
-
-  def this(metricName: String,
-           appId: String,
-           hosts: List[String],
-           metricDescription: String,
-           troubleshootingInfo: String,
-           staticThreshold: Double) = {
-    this
-    this.metricName = metricName
-    this.appId = appId
-    this.hosts = hosts
-    this.metricDescription = metricDescription
-    this.troubleshootingInfo = troubleshootingInfo
-    this.staticThreshold = staticThreshold
-  }
-
-  @Override
-  override def equals(obj: scala.Any): Boolean = {
-
-    if (obj == null || (getClass ne obj.getClass))
-      return false
-
-    val that = obj.asInstanceOf[MetricDefinition]
-
-    if (!(metricName == that.metricName))
-      return false
-
-    if (StringUtils.isNotEmpty(appId)) {
-      appId == that.appId
-    }
-    else {
-      StringUtils.isEmpty(that.appId)
-    }
-  }
-
-  def isValid: Boolean = {
-    valid
-  }
-
-  def makeInvalid() : Unit = {
-    valid = false
-  }
-}
-
-object MetricDefinition {
-
-  def apply(metricName: String,
-            appId: String,
-            hosts: List[String],
-            metricDescription: String,
-            troubleshootingInfo: String,
-            staticThreshold: Double): MetricDefinition =
-    new MetricDefinition(metricName, appId, hosts, metricDescription, troubleshootingInfo, staticThreshold)
-
-  def apply(metricName: String, appId: String, hosts: List[String]): MetricDefinition =
-    new MetricDefinition(metricName, appId, hosts, null, null, -1)
-
-}
\ No newline at end of file
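
Construction and equality sketch for the class above (values illustrative).
equals compares only metricName and appId, and hashCode is not overridden to
match, so instances are not safe as hash-map keys:

    object MetricDefinitionEqualitySketch extends App {
      import org.apache.ambari.metrics.adservice.metadata.MetricDefinition

      val brief = MetricDefinition("mem_free", "HOST", List("h1", "h2"))
      val full = MetricDefinition("mem_free", "HOST", List("h1"),
        "Free memory on a Host.", "Sudden drop / hike in free memory on a host.", 10)
      println(brief == full) // true: same metricName and appId; hosts are ignored
    }
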
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricDefinitionService.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricDefinitionService.scala
deleted file mode 100644
index 52ce39e..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricDefinitionService.scala
+++ /dev/null
@@ -1,78 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.ambari.metrics.adservice.metadata
-
-import org.apache.ambari.metrics.adservice.service.AbstractADService
-
-trait MetricDefinitionService extends AbstractADService {
-
-  /**
-    * Given a 'UUID', return the metric key associated with it.
-    * @param uuid UUID
-    * @return
-    */
-  def getMetricKeyFromUuid(uuid: Array[Byte]) : MetricKey
-
-  /**
-    * Return all the definitions being tracked.
-    * @return List of all Metric Source Definitions being tracked.
-    */
-  def getDefinitions: List[MetricSourceDefinition]
-
-  /**
-    * Given a component definition name, return the definition associated with it.
-    * @param name component definition name
-    * @return
-    */
-  def getDefinitionByName(name: String) : MetricSourceDefinition
-
-  /**
-    * Add a new definition.
-    * @param definition component definition JSON
-    * @return
-    */
-  def addDefinition(definition: MetricSourceDefinition) : Boolean
-
-  /**
-    * Update a component definition by name. Only definitions which were added by API can be modified through API.
-    * @param definition component definition name
-    * @return
-    */
-  def updateDefinition(definition: MetricSourceDefinition) : Boolean
-
-  /**
-    * Delete a component definition by name. Only definitions which were added by API can be deleted through API.
-    * @param name component definition name
-    * @return
-    */
-  def deleteDefinitionByName(name: String) : Boolean
-
-  /**
-    * Given an appId, return set of definitions that are tracked for that appId.
-    * @param appId component definition appId
-    * @return
-    */
-  def getDefinitionByAppId(appId: String) : List[MetricSourceDefinition]
-
-  /**
-    * Return the mapping between definition name to set of metric keys.
-    * @return Map of Metric Source Definition to set of metric keys associated with it.
-    */
-  def getMetricKeys:  Map[String, Set[MetricKey]]
-
-}
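
A read-side usage sketch of this trait (the service instance is assumed to
come from Guice wiring, as in the implementation that follows):

    object DefinitionServiceUsageSketch {
      import org.apache.ambari.metrics.adservice.metadata.{MetricDefinitionService, MetricSourceDefinition}

      def report(service: MetricDefinitionService): Unit = {
        val definitions: List[MetricSourceDefinition] = service.getDefinitions
        println(s"Tracking ${definitions.size} metric source definitions")
        service.getMetricKeys.foreach { case (name, keys) =>
          println(s"$name -> ${keys.size} metric keys")
        }
      }
    }
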
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricDefinitionServiceImpl.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricDefinitionServiceImpl.scala
deleted file mode 100644
index b9b4a7c..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricDefinitionServiceImpl.scala
+++ /dev/null
@@ -1,242 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.ambari.metrics.adservice.metadata
-
-import org.apache.ambari.metrics.adservice.app.AnomalyDetectionAppConfig
-import org.apache.ambari.metrics.adservice.db.AdMetadataStoreAccessor
-import org.slf4j.{Logger, LoggerFactory}
-
-import com.google.inject.{Inject, Singleton}
-
-@Singleton
-class MetricDefinitionServiceImpl extends MetricDefinitionService {
-
-  val LOG : Logger = LoggerFactory.getLogger(classOf[MetricDefinitionServiceImpl])
-
-  var adMetadataStoreAccessor: AdMetadataStoreAccessor = _
-  var configuration: AnomalyDetectionAppConfig = _
-  var metricMetadataProvider: MetricMetadataProvider = _
-
-  val metricSourceDefinitionMap: scala.collection.mutable.Map[String, MetricSourceDefinition] = scala.collection.mutable.Map()
-  val metricDefinitionMetricKeyMap: scala.collection.mutable.Map[MetricSourceDefinition, Set[MetricKey]] = scala.collection.mutable.Map()
-  val metricKeys: scala.collection.mutable.Set[MetricKey] = scala.collection.mutable.Set.empty[MetricKey]
-
-  @Inject
-  def this (anomalyDetectionAppConfig: AnomalyDetectionAppConfig, metadataStoreAccessor: AdMetadataStoreAccessor) = {
-    this ()
-    adMetadataStoreAccessor = metadataStoreAccessor
-    configuration = anomalyDetectionAppConfig
-  }
-
-  @Override
-  def initialize() : Unit = {
-    LOG.info("Initializing Metric Definition Service...")
-
-    //Initialize Metric Metadata Provider
-    metricMetadataProvider = new ADMetadataProvider(configuration.getMetricCollectorConfiguration)
-
-    //Load definitions from metadata store
-    val definitionsFromStore: List[MetricSourceDefinition] = adMetadataStoreAccessor.getSavedInputDefinitions
-    for (definition <- definitionsFromStore) {
-      sanitizeMetricSourceDefinition(definition)
-    }
-
-    //Load definitions from configs
-    val definitionsFromConfig: List[MetricSourceDefinition] = getInputDefinitionsFromConfig
-    for (definition <- definitionsFromConfig) {
-      sanitizeMetricSourceDefinition(definition)
-    }
-
-    //Union the 2 sources, with DB taking precedence.
-    //Save new definition list to DB.
-    metricSourceDefinitionMap.++=(combineDefinitionSources(definitionsFromConfig, definitionsFromStore))
-
-    //Reach out to AMS Metadata and get Metric Keys. Pass in MSD and get back Set<MK>
-    for (definition <- metricSourceDefinitionMap.values) {
-      val keys: Set[MetricKey] = metricMetadataProvider.getMetricKeysForDefinitions(definition)
-      metricDefinitionMetricKeyMap(definition) = keys
-      metricKeys.++=(keys)
-    }
-
-    LOG.info("Successfully initialized Metric Definition Service.")
-  }
-
-  def getMetricKeyFromUuid(uuid: Array[Byte]): MetricKey = {
-    var key: MetricKey = null
-    for (metricKey <- metricKeys) {
-      if (metricKey.uuid.sameElements(uuid)) {
-        key = metricKey
-      }
-    }
-    key
-  }
-
-  @Override
-  def getDefinitions: List[MetricSourceDefinition] = {
-    metricSourceDefinitionMap.values.toList
-  }
-
-  @Override
-  def getDefinitionByName(name: String): MetricSourceDefinition = {
-    if (!metricSourceDefinitionMap.contains(name)) {
-      LOG.warn("Metric Source Definition with name " + name + " not found")
-      null
-    } else {
-      metricSourceDefinitionMap.apply(name)
-    }
-  }
-
-  @Override
-  def addDefinition(definition: MetricSourceDefinition): Boolean = {
-    if (metricSourceDefinitionMap.contains(definition.definitionName)) {
-      LOG.info("Definition with name " + definition.definitionName + " already present.")
-      return false
-    }
-    definition.definitionSource = MetricSourceDefinitionType.API
-
-    val success: Boolean = adMetadataStoreAccessor.saveInputDefinition(definition)
-    if (success) {
-      metricSourceDefinitionMap += definition.definitionName -> definition
-      val keys: Set[MetricKey] = metricMetadataProvider.getMetricKeysForDefinitions(definition)
-      metricDefinitionMetricKeyMap(definition) = keys
-      metricKeys.++=(keys)
-      LOG.info("Successfully created metric source definition : " + definition.definitionName)
-    }
-    success
-  }
-
-  @Override
-  def updateDefinition(definition: MetricSourceDefinition): Boolean = {
-    if (!metricSourceDefinitionMap.contains(definition.definitionName)) {
-      LOG.warn("Metric Source Definition with name " + definition.definitionName + " not found")
-      return false
-    }
-
-    if (metricSourceDefinitionMap.apply(definition.definitionName).definitionSource != MetricSourceDefinitionType.API) {
-      return false
-    }
-    definition.definitionSource = MetricSourceDefinitionType.API
-
-    val success: Boolean = adMetadataStoreAccessor.saveInputDefinition(definition)
-    if (success) {
-      metricSourceDefinitionMap += definition.definitionName -> definition
-      val keys: Set[MetricKey] = metricMetadataProvider.getMetricKeysForDefinitions(definition)
-      metricDefinitionMetricKeyMap(definition) = keys
-      metricKeys.++=(keys)
-      LOG.info("Successfully updated metric source definition : " + definition.definitionName)
-    }
-    success
-  }
-
-  @Override
-  def deleteDefinitionByName(name: String): Boolean = {
-    if (!metricSourceDefinitionMap.contains(name)) {
-      LOG.warn("Metric Source Definition with name " + name + " not found")
-      return false
-    }
-
-    val definition : MetricSourceDefinition = metricSourceDefinitionMap.apply(name)
-    if (definition.definitionSource != MetricSourceDefinitionType.API) {
-      LOG.warn("Cannot delete metric source definition which was not created through API.")
-      return false
-    }
-
-    val success: Boolean = adMetadataStoreAccessor.removeInputDefinition(name)
-    if (success) {
-      metricSourceDefinitionMap -= definition.definitionName
-      metricKeys.--=(metricDefinitionMetricKeyMap.apply(definition))
-      metricDefinitionMetricKeyMap -= definition
-      LOG.info("Successfully deleted metric source definition : " + name)
-    }
-    success
-  }
-
-  @Override
-  def getDefinitionByAppId(appId: String): List[MetricSourceDefinition] = {
-
-    val defList : List[MetricSourceDefinition] = metricSourceDefinitionMap.values.toList
-    defList.filter(_.appId == appId)
-  }
-
-  def combineDefinitionSources(configDefinitions: List[MetricSourceDefinition], dbDefinitions: List[MetricSourceDefinition])
-  : Map[String, MetricSourceDefinition] = {
-
-    var combinedDefinitionMap: scala.collection.mutable.Map[String, MetricSourceDefinition] =
-      scala.collection.mutable.Map.empty[String, MetricSourceDefinition]
-
-    for (definitionFromDb <- dbDefinitions) {
-      combinedDefinitionMap(definitionFromDb.definitionName) = definitionFromDb
-    }
-
-    for (definition <- configDefinitions) {
-      if (!dbDefinitions.contains(definition)) {
-        adMetadataStoreAccessor.saveInputDefinition(definition)
-        combinedDefinitionMap(definition.definitionName) = definition
-      }
-    }
-    combinedDefinitionMap.toMap
-  }
-
-  def getInputDefinitionsFromConfig: List[MetricSourceDefinition] = {
-    val configDirectory = configuration.getMetricDefinitionServiceConfiguration.getInputDefinitionDirectory
-    InputMetricDefinitionParser.parseInputDefinitionsFromDirectory(configDirectory)
-  }
-
-  def setAdMetadataStoreAccessor (adMetadataStoreAccessor: AdMetadataStoreAccessor) : Unit = {
-    this.adMetadataStoreAccessor = adMetadataStoreAccessor
-  }
-
-  /**
-    * Look into the Metric Definitions inside a Metric Source definition, and push down source level appId &
-    * hosts to Metric definition if they do not have an override.
-    * @param metricSourceDefinition Input Metric Source Definition
-    */
-  def sanitizeMetricSourceDefinition(metricSourceDefinition: MetricSourceDefinition): Unit = {
-    val sourceLevelAppId: String = metricSourceDefinition.appId
-    val sourceLevelHostList: List[String] = metricSourceDefinition.hosts
-
-    for (metricDef <- metricSourceDefinition.metricDefinitions.toList) {
-      if (metricDef.appId == null) {
-        if (sourceLevelAppId == null || sourceLevelAppId.isEmpty) {
-          metricDef.makeInvalid()
-        } else {
-          metricDef.appId = sourceLevelAppId
-        }
-      }
-
-      if (metricDef.isValid && (metricDef.hosts == null || metricDef.hosts.isEmpty)) {
-        if (sourceLevelHostList != null && sourceLevelHostList.nonEmpty) {
-          metricDef.hosts = sourceLevelHostList
-        }
-      }
-    }
-  }
-
-  /**
-    * Return the mapping between definition name to set of metric keys.
-    *
-    * @return Map of Metric Source Definition to set of metric keys associated with it.
-    */
-  override def getMetricKeys: Map[String, Set[MetricKey]] = {
-    val metricKeyMap: scala.collection.mutable.Map[String, Set[MetricKey]] = scala.collection.mutable.Map()
-    for (definition <- metricSourceDefinitionMap.values) {
-      metricKeyMap(definition.definitionName) = metricDefinitionMetricKeyMap.apply(definition)
-    }
-    metricKeyMap.toMap
-  }
-}
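
The merge in combineDefinitionSources above gives store (DB) definitions
precedence over config-sourced ones; a pure-map sketch of that rule, with
definition names standing in for full objects:

    object DefinitionPrecedenceSketch extends App {
      val fromDb = Map("host-memory" -> "db", "host-cpu" -> "db")
      val fromConfig = Map("host-cpu" -> "config", "host-network" -> "config")
      // Overlay DB entries last: on a name collision the DB definition wins.
      val combined = fromConfig ++ fromDb
      println(combined) // host-memory -> db, host-cpu -> db, host-network -> config
    }
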
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricKey.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricKey.scala
deleted file mode 100644
index 65c496e..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricKey.scala
+++ /dev/null
@@ -1,53 +0,0 @@
-/**
-  * Licensed to the Apache Software Foundation (ASF) under one
-  * or more contributor license agreements.  See the NOTICE file
-  * distributed with this work for additional information
-  * regarding copyright ownership.  The ASF licenses this file
-  * to you under the Apache License, Version 2.0 (the
-  * "License"); you may not use this file except in compliance
-  * with the License.  You may obtain a copy of the License at
-  *
-  * http://www.apache.org/licenses/LICENSE-2.0
-  *
-  * Unless required by applicable law or agreed to in writing, software
-  * distributed under the License is distributed on an "AS IS" BASIS,
-  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  * See the License for the specific language governing permissions and
-  * limitations under the License.
-  */
-
-package org.apache.ambari.metrics.adservice.metadata
-
-import javax.xml.bind.annotation.XmlRootElement
-
-@XmlRootElement
-case class MetricKey (metricName: String, appId: String, instanceId: String, hostname: String, uuid: Array[Byte]) {
-
-  @Override
-  override def toString: String = {
-  "MetricName=" + metricName + ",App=" + appId + ",InstanceId=" + instanceId + ",Host=" + hostname
-  }
-
-  @Override
-  override def equals(obj: scala.Any): Boolean = {
-
-    if (obj == null || (getClass ne obj.getClass))
-      return false
-
-    val that = obj.asInstanceOf[MetricKey]
-
-    if (!(metricName == that.metricName))
-      return false
-
-    if (!(appId == that.appId))
-      return false
-
-    if (!(instanceId == that.instanceId))
-      return false
-
-    if (!(hostname == that.hostname))
-      return false
-
-    true
-  }
-}
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricMetadataProvider.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricMetadataProvider.scala
deleted file mode 100644
index b5ba15e..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricMetadataProvider.scala
+++ /dev/null
@@ -1,31 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.ambari.metrics.adservice.metadata
-
-/**
-  * Metadata provider for maintaining the metric information in the Metric Definition Service.
-  */
-trait MetricMetadataProvider {
-
-  /**
-    * Return the set of Metric Keys for a given component definition.
-    * @param metricSourceDefinition component definition
-    * @return
-    */
-  def getMetricKeysForDefinitions(metricSourceDefinition: MetricSourceDefinition): Set[MetricKey]
-}
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricSourceDefinition.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricSourceDefinition.scala
deleted file mode 100644
index 47b1499..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricSourceDefinition.scala
+++ /dev/null
@@ -1,91 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.ambari.metrics.adservice.metadata
-
-import javax.xml.bind.annotation.XmlRootElement
-
-import org.apache.ambari.metrics.adservice.metadata.MetricSourceDefinitionType.MetricSourceDefinitionType
-import org.apache.ambari.metrics.adservice.model.AnomalyType.AnomalyType
-
-/*
-{
-  "definition-name": "host-memory",
-  "app-id" : "HOST",
-  "hosts" : ["c6401.ambari.apache.org"],
-  "metric-definitions" : [
-    {
-      "metric-name": "mem_free",
-      "metric-description" : "Free memory on a Host.",
-      "troubleshooting-info" : "Sudden drop / hike in free memory on a host.",
-      "static-threshold" : 10,
-      "app-id" : "HOST"
-    }
-  ],
-  "related-definition-names" : ["host-cpu", "host-network"],
-  "anomaly-detection-subsystems" : ["point-in-time", "trend"]
-}
-*/
-
-
-@SerialVersionUID(10001L)
-@XmlRootElement
-class MetricSourceDefinition extends Serializable {
-
-  var definitionName: String = _
-  var appId: String = _
-  var definitionSource: MetricSourceDefinitionType = MetricSourceDefinitionType.CONFIG
-  var hosts: List[String] = List.empty[String]
-  var relatedDefinitions: List[String] = List.empty[String]
-  var associatedAnomalySubsystems: List[AnomalyType] = List.empty[AnomalyType]
-
-  var metricDefinitions: scala.collection.mutable.MutableList[MetricDefinition] =
-    scala.collection.mutable.MutableList.empty[MetricDefinition]
-
-  def this(definitionName: String, appId: String, source: MetricSourceDefinitionType) = {
-    this()
-    this.definitionName = definitionName
-    this.appId = appId
-    this.definitionSource = source
-  }
-
-  def addMetricDefinition(metricDefinition: MetricDefinition): Unit = {
-    if (!metricDefinitions.contains(metricDefinition)) {
-      metricDefinitions += metricDefinition
-    }
-  }
-
-  def removeMetricDefinition(metricDefinition: MetricDefinition): Unit = {
-    metricDefinitions = metricDefinitions.filter(_ != metricDefinition)
-  }
-
-  @Override
-  override def equals(obj: scala.Any): Boolean = {
-
-    if (obj == null || (getClass ne obj.getClass)) {
-      return false
-    }
-    val that = obj.asInstanceOf[MetricSourceDefinition]
-    definitionName.equals(that.definitionName)
-  }
-
-  // equals is based on definitionName alone, and instances are used as map keys
-  // (e.g. in the metric definition service), so keep hashCode consistent with it.
-  override def hashCode(): Int = {
-    if (definitionName == null) 0 else definitionName.hashCode
-  }
-}
\ No newline at end of file
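
A sketch of building the definition from the JSON example in the header comment
above (values taken from that example; the MetricDefinition for "mem_free" is
only referenced, since its construction lives in the MetricDefinition class):

    val source = new MetricSourceDefinition("host-memory", "HOST", MetricSourceDefinitionType.CONFIG)
    source.hosts = List("c6401.ambari.apache.org")
    // source.addMetricDefinition(memFreeDefinition)  // memFreeDefinition: a MetricDefinition for "mem_free"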
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricSourceDefinitionType.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricSourceDefinitionType.scala
deleted file mode 100644
index 04ff95b..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/metadata/MetricSourceDefinitionType.scala
+++ /dev/null
@@ -1,26 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.ambari.metrics.adservice.metadata
-
-import javax.xml.bind.annotation.XmlRootElement
-
-@XmlRootElement
-object MetricSourceDefinitionType extends Enumeration {
-  type MetricSourceDefinitionType = Value
-  val CONFIG, API = Value
-}
\ No newline at end of file
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/model/AnomalyDetectionMethod.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/model/AnomalyDetectionMethod.scala
deleted file mode 100644
index 81a7023..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/model/AnomalyDetectionMethod.scala
+++ /dev/null
@@ -1,23 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.ambari.metrics.adservice.model
-
-object AnomalyDetectionMethod extends Enumeration {
-  type AnomalyDetectionMethod = Value
-  val EMA, TUKEYS, KS, HSDEV, UNKNOWN = Value
-}
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/model/AnomalyType.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/model/AnomalyType.scala
deleted file mode 100644
index 817180e..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/model/AnomalyType.scala
+++ /dev/null
@@ -1,26 +0,0 @@
-/**
-  * Licensed to the Apache Software Foundation (ASF) under one
-  * or more contributor license agreements.  See the NOTICE file
-  * distributed with this work for additional information
-  * regarding copyright ownership.  The ASF licenses this file
-  * to you under the Apache License, Version 2.0 (the
-  * "License"); you may not use this file except in compliance
-  * with the License.  You may obtain a copy of the License at
-  *
-  * http://www.apache.org/licenses/LICENSE-2.0
-  *
-  * Unless required by applicable law or agreed to in writing, software
-  * distributed under the License is distributed on an "AS IS" BASIS,
-  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  * See the License for the specific language governing permissions and
-  * limitations under the License.
-  */
-package org.apache.ambari.metrics.adservice.model
-
-import javax.xml.bind.annotation.XmlRootElement
-
-@XmlRootElement
-object AnomalyType extends Enumeration {
-  type AnomalyType = Value
-  val POINT_IN_TIME, TREND, UNKNOWN = Value
-}
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/model/MetricAnomalyInstance.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/model/MetricAnomalyInstance.scala
deleted file mode 100644
index 248a380..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/model/MetricAnomalyInstance.scala
+++ /dev/null
@@ -1,32 +0,0 @@
-/**
-  * Licensed to the Apache Software Foundation (ASF) under one
-  * or more contributor license agreements.  See the NOTICE file
-  * distributed with this work for additional information
-  * regarding copyright ownership.  The ASF licenses this file
-  * to you under the Apache License, Version 2.0 (the
-  * "License"); you may not use this file except in compliance
-  * with the License.  You may obtain a copy of the License at
-  *
-  * http://www.apache.org/licenses/LICENSE-2.0
-  *
-  * Unless required by applicable law or agreed to in writing, software
-  * distributed under the License is distributed on an "AS IS" BASIS,
-  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  * See the License for the specific language governing permissions and
-  * limitations under the License.
-  */
-
-package org.apache.ambari.metrics.adservice.model
-
-import javax.xml.bind.annotation.XmlRootElement
-
-import org.apache.ambari.metrics.adservice.metadata.MetricKey
-import org.apache.ambari.metrics.adservice.model.AnomalyType.AnomalyType
-
-@XmlRootElement
-abstract class MetricAnomalyInstance {
-
-  val metricKey: MetricKey
-  val anomalyType: AnomalyType
-
-}
\ No newline at end of file
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/model/PointInTimeAnomalyInstance.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/model/PointInTimeAnomalyInstance.scala
deleted file mode 100644
index 470cc2c..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/model/PointInTimeAnomalyInstance.scala
+++ /dev/null
@@ -1,46 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.ambari.metrics.adservice.model
-
-import java.util.Date
-
-import org.apache.ambari.metrics.adservice.metadata.MetricKey
-import org.apache.ambari.metrics.adservice.model.AnomalyDetectionMethod.AnomalyDetectionMethod
-import org.apache.ambari.metrics.adservice.model.AnomalyType.AnomalyType
-
-class PointInTimeAnomalyInstance(val metricKey: MetricKey,
-                                 val timestamp: Long,
-                                 val metricValue: Double,
-                                 val methodType: AnomalyDetectionMethod,
-                                 val anomalyScore: Double,
-                                 val anomalousSeason: Season,
-                                 val modelParameters: String) extends MetricAnomalyInstance {
-
-  override val anomalyType: AnomalyType = AnomalyType.POINT_IN_TIME
-
-  private def anomalyToString : String = {
-    "Method=" + methodType + ", AnomalyScore=" + anomalyScore + ", Season=" + anomalousSeason.toString +
-      ", Model Parameters=" + modelParameters
-  }
-
-  @Override
-  override def toString: String = {
-    "Metric : [" + metricKey.toString + ", Metric Value=" + metricValue + " @ Time = " + new Date(timestamp) +  "], Anomaly : [" + anomalyToString + "]"
-  }
-
-}
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/model/Range.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/model/Range.scala
deleted file mode 100644
index 4ad35e7..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/model/Range.scala
+++ /dev/null
@@ -1,44 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.ambari.metrics.adservice.model
-
-/**
-  * Class to capture a Range in a Season; both ends are inclusive, and the
-  * range may wrap around (lower > higher), e.g. Friday - Monday in a DAY Season.
-  * @param lower lower end
-  * @param higher higher end
-  */
-case class Range (lower: Int, higher: Int) {
-
-  def withinRange(value: Int) : Boolean = {
-    if (lower <= higher) {
-      (value >= lower) && (value <= higher)
-    } else {
-      (value >= lower) || (value <= higher)
-    }
-  }
-
-  @Override
-  override def equals(obj: scala.Any): Boolean = {
-    if (obj == null) {
-      return false
-    }
-    val that : Range = obj.asInstanceOf[Range]
-    (lower == that.lower) && (higher == that.higher)
-  }
-}
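
Both ends of a Range are inclusive, and a range wraps around when lower > higher.
An illustrative sketch using Calendar day-of-week numbering (1 = Sunday .. 7 =
Saturday), with this model's Range rather than scala.Range:

    val weekend = Range(6, 2)        // FRIDAY (6) through MONDAY (2), wrapping
    assert(weekend.withinRange(7))   // SATURDAY
    assert(weekend.withinRange(1))   // SUNDAY
    assert(!weekend.withinRange(4))  // WEDNESDAY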
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/model/Season.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/model/Season.scala
deleted file mode 100644
index 84784bc..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/model/Season.scala
+++ /dev/null
@@ -1,122 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.ambari.metrics.adservice.model
-
-import java.time.DayOfWeek
-import java.util.Calendar
-
-import javax.xml.bind.annotation.XmlRootElement
-
-import org.apache.ambari.metrics.adservice.model.SeasonType.SeasonType
-
-import com.fasterxml.jackson.databind.ObjectMapper
-import com.fasterxml.jackson.module.scala.DefaultScalaModule
-import com.fasterxml.jackson.module.scala.experimental.ScalaObjectMapper
-
-/**
-  * Class to capture a 'Season' for a metric anomaly.
-  * A Season is a combination of DAY Range and HOUR Range.
-  * @param DAY Day Range
-  * @param HOUR Hour Range
-  */
-@XmlRootElement
-case class Season(var DAY: Range, var HOUR: Range) {
-
-  def belongsTo(timestamp : Long) : Boolean = {
-    val c = Calendar.getInstance
-    c.setTimeInMillis(timestamp)
-    val dayOfWeek = c.get(Calendar.DAY_OF_WEEK)
-    val hourOfDay = c.get(Calendar.HOUR_OF_DAY)
-
-    if (DAY.lower != -1 && !DAY.withinRange(dayOfWeek))
-      return false
-    if (HOUR.lower != -1 && !HOUR.withinRange(hourOfDay))
-      return false
-    true
-  }
-
-  @Override
-  override def equals(obj: scala.Any): Boolean = {
-
-    if (obj == null || (getClass ne obj.getClass)) {
-      return false
-    }
-
-    val that : Season = obj.asInstanceOf[Season]
-    DAY.equals(that.DAY) && HOUR.equals(that.HOUR)
-  }
-
-  @Override
-  override def toString: String = {
-
-    var prettyPrintString = ""
-
-    if (DAY != null && DAY.lower != -1) {
-      // Convert Calendar.DAY_OF_WEEK (1 = Sunday .. 7 = Saturday) to
-      // java.time.DayOfWeek (1 = Monday .. 7 = Sunday) before printing.
-      var dLower: Int = DAY.lower - 1
-      if (dLower == 0) {
-        dLower = 7
-      }
-      var dHigher: Int = DAY.higher - 1
-      if (dHigher == 0) {
-        dHigher = 7
-      }
-      prettyPrintString = prettyPrintString.concat("DAY : [" + DayOfWeek.of(dLower) + "," + DayOfWeek.of(dHigher) + "]")
-    }
-
-    if (HOUR != null && HOUR.lower != -1) {
-      prettyPrintString = prettyPrintString.concat(" HOUR : [" + HOUR.lower + "," + HOUR.higher + "]")
-    }
-    prettyPrintString
-  }
-}
-
-object Season {
-
-  def apply(DAY: Range, HOUR: Range): Season = new Season(DAY, HOUR)
-
-  def apply(range: Range, seasonType: SeasonType): Season = {
-    if (seasonType.equals(SeasonType.DAY)) {
-      new Season(range, Range(-1,-1))
-    } else {
-      new Season(Range(-1,-1), range)
-    }
-  }
-
-  val mapper = new ObjectMapper() with ScalaObjectMapper
-  mapper.registerModule(DefaultScalaModule)
-
-  def getSeasons(timestamp: Long, seasons : List[Season]) : List[Season] = {
-    val validSeasons : scala.collection.mutable.MutableList[Season] = scala.collection.mutable.MutableList.empty[Season]
-    for ( season <- seasons ) {
-      if (season.belongsTo(timestamp)) {
-        validSeasons += season
-      }
-    }
-    validSeasons.toList
-  }
-
-  def toJson(season: Season) : String = {
-    mapper.writeValueAsString(season)
-  }
-
-  def fromJson(seasonString: String) : Season = {
-    mapper.readValue[Season](seasonString)
-  }
-}
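
A brief usage sketch (the season chosen is illustrative):

    import java.util.Calendar
    // Business hours: Monday through Friday, 09:00 - 17:00.
    val businessHours = Season(Range(Calendar.MONDAY, Calendar.FRIDAY), Range(9, 17))
    val active = businessHours.belongsTo(System.currentTimeMillis())
    val json = Season.toJson(businessHours)
    assert(Season.fromJson(json) == businessHours)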
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/model/SeasonType.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/model/SeasonType.scala
deleted file mode 100644
index b510531..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/model/SeasonType.scala
+++ /dev/null
@@ -1,24 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.ambari.metrics.adservice.model
-
-object SeasonType extends Enumeration {
-
-  type SeasonType = Value
-  val DAY, HOUR = Value
-}
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/model/TimeRange.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/model/TimeRange.scala
deleted file mode 100644
index 0be2564..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/model/TimeRange.scala
+++ /dev/null
@@ -1,39 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.ambari.metrics.adservice.model
-
-import java.util.Date
-
-/**
-  * A specialized form of the 'Range' class that denotes a time range.
-  */
-case class TimeRange (startTime: Long, endTime: Long) {
-  @Override
-  override def toString: String = {
-    "StartTime=" + new Date(startTime) + ", EndTime=" + new Date(endTime)
-  }
-
-  @Override
-  override def equals(obj: scala.Any): Boolean = {
-    if (obj == null || (getClass ne obj.getClass)) {
-      return false
-    }
-    val that : TimeRange = obj.asInstanceOf[TimeRange]
-    (startTime == that.startTime) && (endTime == that.endTime)
-  }
-}
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/model/TrendAnomalyInstance.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/model/TrendAnomalyInstance.scala
deleted file mode 100644
index d67747c..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/model/TrendAnomalyInstance.scala
+++ /dev/null
@@ -1,44 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.ambari.metrics.adservice.model
-
-import org.apache.ambari.metrics.adservice.metadata.MetricKey
-import org.apache.ambari.metrics.adservice.model.AnomalyDetectionMethod.AnomalyDetectionMethod
-import org.apache.ambari.metrics.adservice.model.AnomalyType.AnomalyType
-
-case class TrendAnomalyInstance (metricKey: MetricKey,
-                                 anomalousPeriod: TimeRange,
-                                 referencePeriod: TimeRange,
-                                 methodType: AnomalyDetectionMethod,
-                                 anomalyScore: Double,
-                                 seasonInfo: Season,
-                                 modelParameters: String) extends MetricAnomalyInstance {
-
-  override val anomalyType: AnomalyType = AnomalyType.TREND
-
-  private def anomalyToString : String = {
-    "Method=" + methodType + ", AnomalyScore=" + anomalyScore + ", Season=" + seasonInfo.toString +
-      ", Model Parameters=" + modelParameters
-  }
-
-  @Override
-  override def toString: String = {
-    "Metric : [" + metricKey.toString + ", AnomalousPeriod=" + anomalousPeriod + ", ReferencePeriod=" + referencePeriod +
-      "], Anomaly : [" + anomalyToString + "]"
-  }
-}
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/resource/AnomalyResource.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/resource/AnomalyResource.scala
deleted file mode 100644
index db12307..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/resource/AnomalyResource.scala
+++ /dev/null
@@ -1,80 +0,0 @@
-/**
-  * Licensed to the Apache Software Foundation (ASF) under one
-  * or more contributor license agreements.  See the NOTICE file
-  * distributed with this work for additional information
-  * regarding copyright ownership.  The ASF licenses this file
-  * to you under the Apache License, Version 2.0 (the
-  * "License"); you may not use this file except in compliance
-  * with the License.  You may obtain a copy of the License at
-  *
-  * http://www.apache.org/licenses/LICENSE-2.0
-  *
-  * Unless required by applicable law or agreed to in writing, software
-  * distributed under the License is distributed on an "AS IS" BASIS,
-  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  * See the License for the specific language governing permissions and
-  * limitations under the License.
-  */
-package org.apache.ambari.metrics.adservice.resource
-
-import javax.ws.rs.core.MediaType.APPLICATION_JSON
-import javax.ws.rs.core.Response
-import javax.ws.rs.{GET, Path, Produces, QueryParam}
-
-import org.apache.ambari.metrics.adservice.model.{AnomalyType, MetricAnomalyInstance}
-import org.apache.ambari.metrics.adservice.model.AnomalyType.AnomalyType
-import org.apache.ambari.metrics.adservice.service.ADQueryService
-import org.apache.commons.lang.StringUtils
-
-import com.google.inject.Inject
-
-@Path("/anomaly")
-class AnomalyResource {
-
-  @Inject
-  var aDQueryService: ADQueryService = _
-
-  @GET
-  @Produces(Array(APPLICATION_JSON))
-  def getTopNAnomalies(@QueryParam("type") anType: String,
-                       @QueryParam("startTime") startTime: Long,
-                       @QueryParam("endTime") endTime: Long,
-                       @QueryParam("top") limit: Int): Response = {
-
-    val anomalies: List[MetricAnomalyInstance] = aDQueryService.getTopNAnomaliesByType(
-      parseAnomalyType(anType),
-      parseStartTime(startTime),
-      parseEndTime(endTime),
-      parseTop(limit))
-
-    Response.ok.entity(anomalies).build()
-  }
-
-  private def parseAnomalyType(anomalyType: String) : AnomalyType = {
-    if (StringUtils.isEmpty(anomalyType)) {
-      return AnomalyType.POINT_IN_TIME
-    }
-    AnomalyType.withName(anomalyType.toUpperCase)
-  }
-
-  private def parseStartTime(startTime: Long) : Long = {
-    if (startTime > 0L) {
-      return startTime
-    }
-    System.currentTimeMillis() - 60*60*1000
-  }
-
-  private def parseEndTime(endTime: Long) : Long = {
-    if (endTime > 0L) {
-      return endTime
-    }
-    System.currentTimeMillis()
-  }
-
-  private def parseTop(limit: Int) : Int = {
-    if (limit > 0) {
-      return limit
-    }
-    5
-  }
-}
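
For illustration, the endpoint above can be exercised with the standard JAX-RS
client (host and port are assumptions):

    import javax.ws.rs.client.ClientBuilder
    import javax.ws.rs.core.MediaType.APPLICATION_JSON

    val response = ClientBuilder.newClient()
      .target("http://localhost:9090/anomaly")
      .queryParam("type", "TREND")
      .queryParam("top", "10")
      .request(APPLICATION_JSON)
      .get()
    // Parameters left out fall back to POINT_IN_TIME, the last hour, and a
    // top-5 limit, per the parse helpers above.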
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/resource/MetricDefinitionResource.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/resource/MetricDefinitionResource.scala
deleted file mode 100644
index 442bf46..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/resource/MetricDefinitionResource.scala
+++ /dev/null
@@ -1,109 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.ambari.metrics.adservice.resource
-
-import javax.ws.rs._
-import javax.ws.rs.core.MediaType.APPLICATION_JSON
-import javax.ws.rs.core.Response
-
-import org.apache.ambari.metrics.adservice.metadata.{MetricDefinitionService, MetricKey, MetricSourceDefinition}
-import org.apache.commons.lang.StringUtils
-
-import com.google.inject.Inject
-
-@Path("/metric-definition")
-class MetricDefinitionResource {
-
-  @Inject
-  var metricDefinitionService: MetricDefinitionService = _
-
-  @GET
-  @Produces(Array(APPLICATION_JSON))
-  @Path("/{name}")
-  def defaultGet(@PathParam("name") definitionName: String): Response  = {
-
-    if (StringUtils.isEmpty(definitionName)) {
-      return Response.ok.entity(Map("message" -> "Definition name cannot be empty. Pass it as the 'name' path parameter.")).build()
-    }
-    val metricSourceDefinition = metricDefinitionService.getDefinitionByName(definitionName)
-    if (metricSourceDefinition != null) {
-      Response.ok.entity(metricSourceDefinition).build()
-    } else {
-      Response.ok.entity(Map("message" -> "Definition not found")).build()
-    }
-  }
-
-  @GET
-  @Produces(Array(APPLICATION_JSON))
-  def getAllMetricDefinitions: Response  = {
-    val metricSourceDefinitionMap: List[MetricSourceDefinition] = metricDefinitionService.getDefinitions
-    Response.ok.entity(metricSourceDefinitionMap).build()
-  }
-
-  @GET
-  @Path("/keys")
-  @Produces(Array(APPLICATION_JSON))
-  def getMetricKeys: Response  = {
-    val metricKeyMap:  Map[String, Set[MetricKey]] = metricDefinitionService.getMetricKeys
-    Response.ok.entity(metricKeyMap).build()
-  }
-
-  @POST
-  @Produces(Array(APPLICATION_JSON))
-  def defaultPost(definition: MetricSourceDefinition) : Response = {
-    if (definition == null) {
-      return Response.ok.entity(Map("message" -> "Definition content cannot be empty.")).build()
-    }
-    val success : Boolean = metricDefinitionService.addDefinition(definition)
-    if (success) {
-      Response.ok.entity(Map("message" -> "Definition saved")).build()
-    } else {
-      Response.ok.entity(Map("message" -> "Definition could not be saved")).build()
-    }
-  }
-
-  @PUT
-  @Produces(Array(APPLICATION_JSON))
-  def defaultPut(definition: MetricSourceDefinition) : Response = {
-    if (definition == null) {
-      return Response.ok.entity(Map("message" -> "Definition content cannot be empty.")).build()
-    }
-    val success : Boolean = metricDefinitionService.updateDefinition(definition)
-    if (success) {
-      Response.ok.entity(Map("message" -> "Definition updated")).build()
-    } else {
-      Response.ok.entity(Map("message" -> "Definition could not be updated")).build()
-    }
-  }
-
-  @DELETE
-  @Produces(Array(APPLICATION_JSON))
-  @Path("/{name}")
-  def defaultDelete(@PathParam("name") definitionName: String): Response  = {
-
-    if (StringUtils.isEmpty(definitionName)) {
-      return Response.ok.entity(Map("message" -> "Definition name cannot be empty. Pass it as the 'name' path parameter.")).build()
-    }
-    val success: Boolean = metricDefinitionService.deleteDefinitionByName(definitionName)
-    if (success) {
-      Response.ok.entity(Map("message" -> "Definition deleted")).build()
-    } else {
-      Response.ok.entity(Map("message" -> "Definition could not be deleted")).build()
-    }
-  }
-}
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/resource/RootResource.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/resource/RootResource.scala
deleted file mode 100644
index fd55b64..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/resource/RootResource.scala
+++ /dev/null
@@ -1,35 +0,0 @@
-/**
-  * Licensed to the Apache Software Foundation (ASF) under one
-  * or more contributor license agreements.  See the NOTICE file
-  * distributed with this work for additional information
-  * regarding copyright ownership.  The ASF licenses this file
-  * to you under the Apache License, Version 2.0 (the
-  * "License"); you may not use this file except in compliance
-  * with the License.  You may obtain a copy of the License at
-  *
-  * http://www.apache.org/licenses/LICENSE-2.0
-  *
-  * Unless required by applicable law or agreed to in writing, software
-  * distributed under the License is distributed on an "AS IS" BASIS,
-  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  * See the License for the specific language governing permissions and
-  * limitations under the License.
-  */
-package org.apache.ambari.metrics.adservice.resource
-
-import java.time.LocalDateTime
-
-import javax.ws.rs.core.MediaType.APPLICATION_JSON
-import javax.ws.rs.core.Response
-import javax.ws.rs.{GET, Path, Produces}
-
-@Path("/")
-class RootResource {
-
-  @Produces(Array(APPLICATION_JSON))
-  @GET
-  def default: Response = {
-    Response.ok.entity(Map("name" -> "anomaly-detection-service", "today" -> LocalDateTime.now)).build()
-  }
-
-}
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/resource/SubsystemResource.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/resource/SubsystemResource.scala
deleted file mode 100644
index e7d7c9a..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/resource/SubsystemResource.scala
+++ /dev/null
@@ -1,26 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.ambari.metrics.adservice.resource
-
-class SubsystemResource {
-
-  /*
-    GET / UPDATE - parameters (which subsystem, parameters)
-    POST - Update sensitivity of a subsystem (which subsystem, increase or decrease, factor)
-   */
-}
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/service/ADQueryService.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/service/ADQueryService.scala
deleted file mode 100644
index 2cfa30f..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/service/ADQueryService.scala
+++ /dev/null
@@ -1,34 +0,0 @@
-/**
-  * Licensed to the Apache Software Foundation (ASF) under one
-  * or more contributor license agreements.  See the NOTICE file
-  * distributed with this work for additional information
-  * regarding copyright ownership.  The ASF licenses this file
-  * to you under the Apache License, Version 2.0 (the
-  * "License"); you may not use this file except in compliance
-  * with the License.  You may obtain a copy of the License at
-  *
-  * http://www.apache.org/licenses/LICENSE-2.0
-  *
-  * Unless required by applicable law or agreed to in writing, software
-  * distributed under the License is distributed on an "AS IS" BASIS,
-  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  * See the License for the specific language governing permissions and
-  * limitations under the License.
-  */
-package org.apache.ambari.metrics.adservice.service
-
-import org.apache.ambari.metrics.adservice.model.AnomalyType.AnomalyType
-import org.apache.ambari.metrics.adservice.model.MetricAnomalyInstance
-
-trait ADQueryService extends AbstractADService{
-
-  /**
-    * Return the list of single-metric anomalies that satisfy a set of conditions, read from the anomaly store.
-    * @param anomalyType Type of the anomaly (Point In Time / Trend)
-    * @param startTime Start of time range
-    * @param endTime End of time range
-    * @param limit Maximum number of anomalous metrics to return, ranked by anomaly score.
-    * @return List of matching anomalies.
-    */
-  def getTopNAnomaliesByType(anomalyType: AnomalyType, startTime: Long, endTime: Long, limit: Int): List[MetricAnomalyInstance]
-}
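
A minimal calling sketch, assuming an ADQueryService implementation has been
injected (as is done via Guice elsewhere in this service):

    val now = System.currentTimeMillis()
    val topTrends: List[MetricAnomalyInstance] =
      adQueryService.getTopNAnomaliesByType(AnomalyType.TREND, now - 60 * 60 * 1000, now, 10)
    topTrends.foreach(println)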
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/service/ADQueryServiceImpl.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/service/ADQueryServiceImpl.scala
deleted file mode 100644
index 3b49208..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/service/ADQueryServiceImpl.scala
+++ /dev/null
@@ -1,56 +0,0 @@
-/**
-  * Licensed to the Apache Software Foundation (ASF) under one
-  * or more contributor license agreements.  See the NOTICE file
-  * distributed with this work for additional information
-  * regarding copyright ownership.  The ASF licenses this file
-  * to you under the Apache License, Version 2.0 (the
-  * "License"); you may not use this file except in compliance
-  * with the License.  You may obtain a copy of the License at
-  *
-  * http://www.apache.org/licenses/LICENSE-2.0
-  *
-  * Unless required by applicable law or agreed to in writing, software
-  * distributed under the License is distributed on an "AS IS" BASIS,
-  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  * See the License for the specific language governing permissions and
-  * limitations under the License.
-  */
-package org.apache.ambari.metrics.adservice.service
-import org.apache.ambari.metrics.adservice.db.AdAnomalyStoreAccessor
-import org.apache.ambari.metrics.adservice.model.AnomalyType.AnomalyType
-import org.apache.ambari.metrics.adservice.model.MetricAnomalyInstance
-import org.slf4j.{Logger, LoggerFactory}
-
-import com.google.inject.{Inject, Singleton}
-
-@Singleton
-class ADQueryServiceImpl extends ADQueryService {
-
-  val LOG : Logger = LoggerFactory.getLogger(classOf[ADQueryServiceImpl])
-
-  @Inject
-  var adAnomalyStoreAccessor: AdAnomalyStoreAccessor = _
-
-  /**
-    * Initialize Service
-    */
-  override def initialize(): Unit = {
-    LOG.info("Initializing AD Query Service...")
-    adAnomalyStoreAccessor.initialize()
-    LOG.info("Successfully initialized AD Query Service.")
-  }
-
-  /**
-    * Implementation to return list of anomalies satisfying a set of conditions from the anomaly store.
-    *
-    * @param anomalyType Type of the anomaly (Point In Time / Trend)
-    * @param startTime   Start of time range
-    * @param endTime     End of time range
-    * @param limit       Maximum number of anomalous metrics to return, ranked by anomaly score.
-    * @return
-    */
-  override def getTopNAnomaliesByType(anomalyType: AnomalyType, startTime: Long, endTime: Long, limit: Int): List[MetricAnomalyInstance] = {
-    val anomalies = adAnomalyStoreAccessor.getMetricAnomalies(anomalyType, startTime, endTime, limit)
-    anomalies
-  }
-}
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/service/AbstractADService.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/service/AbstractADService.scala
deleted file mode 100644
index 56bb999..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/service/AbstractADService.scala
+++ /dev/null
@@ -1,28 +0,0 @@
-/**
-  * Licensed to the Apache Software Foundation (ASF) under one
-  * or more contributor license agreements.  See the NOTICE file
-  * distributed with this work for additional information
-  * regarding copyright ownership.  The ASF licenses this file
-  * to you under the Apache License, Version 2.0 (the
-  * "License"); you may not use this file except in compliance
-  * with the License.  You may obtain a copy of the License at
-  *
-  * http://www.apache.org/licenses/LICENSE-2.0
-  *
-  * Unless required by applicable law or agreed to in writing, software
-  * distributed under the License is distributed on an "AS IS" BASIS,
-  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  * See the License for the specific language governing permissions and
-  * limitations under the License.
-  */
-
-package org.apache.ambari.metrics.adservice.service
-
-trait AbstractADService {
-
-  /**
-    * Initialize Service
-    */
-  def initialize(): Unit
-
-}
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/spark/prototype/MetricAnomalyDetector.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/spark/prototype/MetricAnomalyDetector.scala
deleted file mode 100644
index 90c564e..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/spark/prototype/MetricAnomalyDetector.scala
+++ /dev/null
@@ -1,110 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.ambari.metrics.adservice.spark.prototype
-
-object MetricAnomalyDetector {
-
-  /*
-    Load current EMA model
-    Filter step - Check if anomaly
-    Collect / Write to AMS / Print.
-   */
-
-//  var brokers = "avijayan-ams-1.openstacklocal:2181,avijayan-ams-2.openstacklocal:2181,avijayan-ams-3.openstacklocal:2181"
-//  var groupId = "ambari-metrics-group"
-//  var topicName = "ambari-metrics-topic"
-//  var numThreads = 1
-//  val anomalyDetectionModels: Array[AnomalyDetectionTechnique] = Array[AnomalyDetectionTechnique]()
-//
-//  def readProperties(propertiesFile: String): Properties = try {
-//    val properties = new Properties
-//    var inputStream = ClassLoader.getSystemResourceAsStream(propertiesFile)
-//    if (inputStream == null) inputStream = new FileInputStream(propertiesFile)
-//    properties.load(inputStream)
-//    properties
-//  } catch {
-//    case ioEx: IOException =>
-//      null
-//  }
-//
-//  def main(args: Array[String]): Unit = {
-//
-//    @transient
-//    lazy val log = org.apache.log4j.LogManager.getLogger("MetricAnomalyDetectorLogger")
-//
-//    if (args.length < 1) {
-//      System.err.println("Usage: MetricSparkConsumer <input-config-file>")
-//      System.exit(1)
-//    }
-//
-//    //Read properties
-//    val properties = readProperties(propertiesFile = args(0))
-//
-//    //Load EMA parameters - w, n
-//    val emaW = properties.getProperty("emaW").toDouble
-//    val emaN = properties.getProperty("emaN").toDouble
-//
-//    //collector info
-//    val collectorHost: String = properties.getProperty("collectorHost")
-//    val collectorPort: String = properties.getProperty("collectorPort")
-//    val collectorProtocol: String = properties.getProperty("collectorProtocol")
-//    val anomalyMetricPublisher = new MetricsCollectorInterface(collectorHost, collectorProtocol, collectorPort)
-//
-//    //Instantiate Kafka stream reader
-//    val sparkConf = new SparkConf().setAppName("AmbariMetricsAnomalyDetector")
-//    val streamingContext = new StreamingContext(sparkConf, Duration(10000))
-//
-//    val topicsSet = topicName.toSet
-//    val kafkaParams = Map[String, String]("metadata.broker.list" -> brokers)
-////    val stream = KafkaUtils.createDirectStream()
-//
-//    val kafkaStream = KafkaUtils.createStream(streamingContext, zkQuorum, groupId, Map(topicName -> numThreads), StorageLevel.MEMORY_AND_DISK_SER_2)
-//    kafkaStream.print()
-//
-//    var timelineMetricsStream = kafkaStream.map( message => {
-//      val mapper = new ObjectMapper
-//      val metrics = mapper.readValue(message._2, classOf[TimelineMetrics])
-//      metrics
-//    })
-//    timelineMetricsStream.print()
-//
-//    var appMetricStream = timelineMetricsStream.map( timelineMetrics => {
-//      (timelineMetrics.getMetrics.get(0).getAppId, timelineMetrics)
-//    })
-//    appMetricStream.print()
-//
-//    var filteredAppMetricStream = appMetricStream.filter( appMetricTuple => {
-//      appIds.contains(appMetricTuple._1)
-//    } )
-//    filteredAppMetricStream.print()
-//
-//    filteredAppMetricStream.foreachRDD( rdd => {
-//      rdd.foreach( appMetricTuple => {
-//        val timelineMetrics = appMetricTuple._2
-//        logger.info("Received Metric (1): " + timelineMetrics.getMetrics.get(0).getMetricName)
-//        log.info("Received Metric (2): " + timelineMetrics.getMetrics.get(0).getMetricName)
-//        for (timelineMetric <- timelineMetrics.getMetrics) {
-//          var anomalies = emaModel.test(timelineMetric)
-//          anomalyMetricPublisher.publish(anomalies)
-//        }
-//      })
-//    })
-//
-//    streamingContext.start()
-//    streamingContext.awaitTermination()
-//  }
-  }
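
The per-batch step the commented-out prototype describes - test each incoming
metric against the current EMA model and publish whatever it flags - reduces to
roughly the following sketch (stream wiring omitted; the publish callback
stands in for the MetricsCollectorInterface used above):

    import scala.collection.JavaConverters._
    import org.apache.hadoop.metrics2.sink.timeline.TimelineMetrics
    import org.apache.ambari.metrics.adservice.prototype.methods.ema.EmaTechnique

    def detectAndPublish(metrics: TimelineMetrics, ema: EmaTechnique,
                         publish: java.util.List[_] => Unit): Unit = {
      for (metric <- metrics.getMetrics.asScala) {
        val anomalies = ema.test(metric)     // filter step: EMA flags outliers
        if (!anomalies.isEmpty) publish(anomalies)
      }
    }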
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/spark/prototype/SparkPhoenixReader.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/spark/prototype/SparkPhoenixReader.scala
deleted file mode 100644
index 466225f..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/spark/prototype/SparkPhoenixReader.scala
+++ /dev/null
@@ -1,73 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.ambari.metrics.adservice.spark.prototype
-
-object SparkPhoenixReader {
-
-  def main(args: Array[String]) {
-
-//    if (args.length < 6) {
-//      System.err.println("Usage: SparkPhoenixReader <metric_name> <appId> <hostname> <weight> <timessdev> <phoenixConnectionString> <model_dir>")
-//      System.exit(1)
-//    }
-//
-//    var metricName = args(0)
-//    var appId = args(1)
-//    var hostname = args(2)
-//    var weight = args(3).toDouble
-//    var timessdev = args(4).toInt
-//    var phoenixConnectionString = args(5) //avijayan-ams-3.openstacklocal:61181:/ams-hbase-unsecure
-//    var modelDir = args(6)
-//
-//    val conf = new SparkConf()
-//    conf.set("spark.app.name", "AMSAnomalyModelBuilder")
-//    //conf.set("spark.master", "spark://avijayan-ams-2.openstacklocal:7077")
-//
-//    var sc = new SparkContext(conf)
-//    val sqlContext = new SQLContext(sc)
-//
-//    val currentTime = System.currentTimeMillis()
-//    val oneDayBack = currentTime - 24*60*60*1000
-//
-//    val df = sqlContext.load("org.apache.phoenix.spark", Map("table" -> "METRIC_RECORD", "zkUrl" -> phoenixConnectionString))
-//    df.registerTempTable("METRIC_RECORD")
-//    val result = sqlContext.sql("SELECT METRIC_NAME, HOSTNAME, APP_ID, SERVER_TIME, METRIC_SUM, METRIC_COUNT FROM METRIC_RECORD " +
-//      "WHERE METRIC_NAME = '" + metricName + "' AND HOSTNAME = '" + hostname + "' AND APP_ID = '" + appId + "' AND SERVER_TIME < " + currentTime + " AND SERVER_TIME > " + oneDayBack)
-//
-//    var metricValues = new java.util.TreeMap[java.lang.Long, java.lang.Double]
-//    result.collect().foreach(
-//      t => metricValues.put(t.getLong(3), t.getDouble(4) / t.getInt(5))
-//    )
-//
-//    //val seriesName = result.head().getString(0)
-//    //val hostname = result.head().getString(1)
-//    //val appId = result.head().getString(2)
-//
-//    val timelineMetric = new TimelineMetric()
-//    timelineMetric.setMetricName(metricName)
-//    timelineMetric.setAppId(appId)
-//    timelineMetric.setHostName(hostname)
-//    timelineMetric.setMetricValues(metricValues)
-//
-//    var emaModel = new EmaTechnique(weight, timessdev)
-//    emaModel.test(timelineMetric)
-//    emaModel.save(sc, modelDir)
-
-  }
-
-}
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/java/org/apache/ambari/metrics/adservice/prototype/TestEmaTechnique.java b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/java/org/apache/ambari/metrics/adservice/prototype/TestEmaTechnique.java
deleted file mode 100644
index 76a00a6..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/java/org/apache/ambari/metrics/adservice/prototype/TestEmaTechnique.java
+++ /dev/null
@@ -1,106 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.ambari.metrics.adservice.prototype;
-
-import org.apache.ambari.metrics.adservice.prototype.core.RFunctionInvoker;
-import org.apache.ambari.metrics.adservice.prototype.methods.MetricAnomaly;
-import org.apache.ambari.metrics.adservice.prototype.methods.ema.EmaTechnique;
-import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
-import org.junit.Assert;
-import org.junit.Assume;
-import org.junit.BeforeClass;
-import org.junit.Test;
-
-import java.io.File;
-import java.net.URISyntaxException;
-import java.net.URL;
-import java.util.List;
-import java.util.TreeMap;
-
-import static org.apache.ambari.metrics.adservice.prototype.TestRFunctionInvoker.getTS;
-
-public class TestEmaTechnique {
-
-  private static double[] ts;
-  private static String fullFilePath;
-
-  @BeforeClass
-  public static void init() throws URISyntaxException {
-
-    Assume.assumeTrue(System.getenv("R_HOME") != null);
-    ts = getTS(1000);
-    URL url = ClassLoader.getSystemResource("R-scripts");
-    fullFilePath = new File(url.toURI()).getAbsolutePath();
-    RFunctionInvoker.setScriptsDir(fullFilePath);
-  }
-
-  @Test
-  public void testEmaInitialization() {
-
-    EmaTechnique ema = new EmaTechnique(0.5, 3);
-    Assert.assertTrue(ema.getTrackedEmas().isEmpty());
-    Assert.assertTrue(ema.getStartingWeight() == 0.5);
-    Assert.assertTrue(ema.getStartTimesSdev() == 3);
-  }
-
-  @Test
-  public void testEma() {
-    EmaTechnique ema = new EmaTechnique(0.5, 3);
-
-    long now = System.currentTimeMillis();
-
-    TimelineMetric metric1 = new TimelineMetric();
-    metric1.setMetricName("M1");
-    metric1.setHostName("H1");
-    metric1.setStartTime(now - 1000);
-    metric1.setAppId("A1");
-    metric1.setInstanceId(null);
-    metric1.setType("Integer");
-
-    //Train
-    TreeMap<Long, Double> metricValues = new TreeMap<Long, Double>();
-    for (int i = 0; i < 50; i++) {
-      double metric = 20000 + Math.random();
-      metricValues.put(now - i * 100, metric);
-    }
-    metric1.setMetricValues(metricValues);
-    List<MetricAnomaly> anomalyList = ema.test(metric1);
-//    Assert.assertTrue(anomalyList.isEmpty());
-
-    metricValues = new TreeMap<Long, Double>();
-    for (int i = 0; i < 50; i++) {
-      double metric = 20000 + Math.random();
-      metricValues.put(now - i * 100, metric);
-    }
-    metric1.setMetricValues(metricValues);
-    anomalyList = ema.test(metric1);
-    Assert.assertTrue(!anomalyList.isEmpty());
-    int l1 = anomalyList.size();
-
-    Assert.assertTrue(ema.updateModel(metric1, false, 20));
-    anomalyList = ema.test(metric1);
-    int l2 = anomalyList.size();
-    Assert.assertTrue(l2 < l1);
-
-    Assert.assertTrue(ema.updateModel(metric1, true, 50));
-    anomalyList = ema.test(metric1);
-    int l3 = anomalyList.size();
-    Assert.assertTrue(l3 > l2 && l3 > l1);
-
-  }
-}
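
EmaTechnique itself is not part of this diff, so for context on what the test above exercises, the following is only a minimal sketch, assuming the usual exponentially weighted moving average with a times-sdev threshold; the class shape and names are illustrative, not the actual implementation:

    // Minimal sketch only; the real EmaTechnique is not shown in this diff.
    public class EmaSketch {
      private double ema;        // running estimate of the mean
      private double ems;        // running estimate of the deviation
      private final double weight;
      private final double timesSdev;

      public EmaSketch(double weight, double timesSdev, double firstValue) {
        this.weight = weight;
        this.timesSdev = timesSdev;
        this.ema = firstValue;
      }

      // Flags the value if it deviates by more than timesSdev deviations, then updates state.
      public boolean testAndUpdate(double value) {
        double diff = Math.abs(value - ema);
        boolean anomaly = ems > 0 && diff > timesSdev * ems;
        ema = weight * value + (1 - weight) * ema;  // exponential moving average
        ems = weight * diff + (1 - weight) * ems;   // exponential moving deviation
        return anomaly;
      }
    }

Under this reading, updateModel(metric1, false, 20) in the test widens the effective threshold (fewer anomalies, l2 < l1) and updateModel(metric1, true, 50) tightens it (more anomalies, l3 > l2), which is what the assertions check.
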
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/java/org/apache/ambari/metrics/adservice/prototype/TestRFunctionInvoker.java b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/java/org/apache/ambari/metrics/adservice/prototype/TestRFunctionInvoker.java
deleted file mode 100644
index 98fa050..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/java/org/apache/ambari/metrics/adservice/prototype/TestRFunctionInvoker.java
+++ /dev/null
@@ -1,161 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- * <p>
- * http://www.apache.org/licenses/LICENSE-2.0
- * <p>
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.ambari.metrics.adservice.prototype;
-
-import org.apache.ambari.metrics.adservice.prototype.common.DataSeries;
-import org.apache.ambari.metrics.adservice.prototype.common.ResultSet;
-import org.apache.ambari.metrics.adservice.prototype.core.RFunctionInvoker;
-import org.apache.ambari.metrics.adservice.seriesgenerator.UniformMetricSeries;
-import org.apache.commons.lang.ArrayUtils;
-import org.junit.Assert;
-import org.junit.Assume;
-import org.junit.BeforeClass;
-import org.junit.Test;
-
-import java.io.File;
-import java.net.URISyntaxException;
-import java.net.URL;
-import java.util.HashMap;
-import java.util.Map;
-
-public class TestRFunctionInvoker {
-
-  private static String metricName = "TestMetric";
-  private static double[] ts;
-  private static String fullFilePath;
-
-  @BeforeClass
-  public static void init() throws URISyntaxException {
-
-    Assume.assumeTrue(System.getenv("R_HOME") != null);
-    ts = getTS(1000);
-    URL url = ClassLoader.getSystemResource("R-scripts");
-    fullFilePath = new File(url.toURI()).getAbsolutePath();
-    RFunctionInvoker.setScriptsDir(fullFilePath);
-  }
-
-  @Test
-  public void testTukeys() throws URISyntaxException {
-
-    double[] train_ts = ArrayUtils.subarray(ts, 0, 750);
-    double[] train_x = getRandomData(750);
-    DataSeries trainData = new DataSeries(metricName, train_ts, train_x);
-
-    double[] test_ts = ArrayUtils.subarray(ts, 750, 1000);
-    double[] test_x = getRandomData(250);
-    test_x[50] = 5.5; //Anomaly
-    DataSeries testData = new DataSeries(metricName, test_ts, test_x);
-    Map<String, String> configs = new HashMap<>();
-    configs.put("tukeys.n", "3");
-
-    ResultSet rs = RFunctionInvoker.tukeys(trainData, testData, configs);
-    Assert.assertEquals(2, rs.resultset.size());
-    Assert.assertEquals(5.5, rs.resultset.get(1)[0], 0.1);
-
-  }
-
-  public static void main(String[] args) throws URISyntaxException {
-
-    String metricName = "TestMetric";
-    double[] ts = getTS(1000);
-    URL url = ClassLoader.getSystemResource("R-scripts");
-    String fullFilePath = new File(url.toURI()).getAbsolutePath();
-    RFunctionInvoker.setScriptsDir(fullFilePath);
-
-    double[] train_ts = ArrayUtils.subarray(ts, 0, 750);
-    double[] train_x = getRandomData(750);
-    DataSeries trainData = new DataSeries(metricName, train_ts, train_x);
-
-    double[] test_ts = ArrayUtils.subarray(ts, 750, 1000);
-    double[] test_x = getRandomData(250);
-    test_x[50] = 5.5; //Anomaly
-    DataSeries testData = new DataSeries(metricName, test_ts, test_x);
-    ResultSet rs;
-
-    Map<String, String> configs = new HashMap<>();
-
-    System.out.println("TUKEYS");
-    configs.put("tukeys.n", "3");
-    rs = RFunctionInvoker.tukeys(trainData, testData, configs);
-    rs.print();
-    System.out.println("--------------");
-
-//    System.out.println("EMA Global");
-//    configs.put("ema.n", "3");
-//    configs.put("ema.w", "0.8");
-//    rs = RFunctionInvoker.ema_global(trainData, testData, configs);
-//    rs.print();
-//    System.out.println("--------------");
-//
-//    System.out.println("EMA Daily");
-//    rs = RFunctionInvoker.ema_daily(trainData, testData, configs);
-//    rs.print();
-//    System.out.println("--------------");
-//
-//    configs.put("ks.p_value", "0.00005");
-//    System.out.println("KS Test");
-//    rs = RFunctionInvoker.ksTest(trainData, testData, configs);
-//    rs.print();
-//    System.out.println("--------------");
-//
-    ts = getTS(5000);
-    train_ts = ArrayUtils.subarray(ts, 0, 4800);
-    train_x = getRandomData(4800);
-    trainData = new DataSeries(metricName, train_ts, train_x);
-    test_ts = ArrayUtils.subarray(ts, 4800, 5000);
-    test_x = getRandomData(200);
-    for (int i = 0; i < 200; i++) {
-      test_x[i] = test_x[i] * 5;
-    }
-    testData = new DataSeries(metricName, test_ts, test_x);
-    configs.put("hsdev.n", "3");
-    configs.put("hsdev.nhp", "3");
-    configs.put("hsdev.interval", "86400000");
-    configs.put("hsdev.period", "604800000");
-    System.out.println("HSdev");
-    rs = RFunctionInvoker.hsdev(trainData, testData, configs);
-    rs.print();
-    System.out.println("--------------");
-
-  }
-
-  static double[] getTS(int n) {
-    long currentTime = System.currentTimeMillis();
-    double[] ts = new double[n];
-    currentTime = currentTime - (currentTime % (5 * 60 * 1000));
-
-    for (int i = 0, j = n - 1; i < n; i++, j--) {
-      ts[j] = currentTime;
-      currentTime = currentTime - (5 * 60 * 1000);
-    }
-    return ts;
-  }
-
-  static double[] getRandomData(int n) {
-
-    UniformMetricSeries metricSeries = new UniformMetricSeries(10, 0.1, 0.05, 0.6, 0.8, true);
-    return metricSeries.getSeries(n);
-
-//    double[] metrics = new double[n];
-//    Random random = new Random();
-//    for (int i = 0; i < n; i++) {
-//      metrics[i] = random.nextDouble();
-//    }
-//    return metrics;
-  }
-}
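
The tukeys R script driven by RFunctionInvoker above is not included in this diff; as a rough reference, a standard Tukey's fences check of the kind it presumably implements looks like this sketch (quartiles approximated by sorted-array indexing):

    import java.util.Arrays;

    public class TukeysSketch {

      // Flags test points outside [q1 - n*iqr, q3 + n*iqr] computed from the training data.
      public static boolean[] outliers(double[] train, double[] test, double n) {
        double[] sorted = train.clone();
        Arrays.sort(sorted);
        double q1 = sorted[sorted.length / 4];        // approximate first quartile
        double q3 = sorted[(3 * sorted.length) / 4];  // approximate third quartile
        double iqr = q3 - q1;
        boolean[] flags = new boolean[test.length];
        for (int i = 0; i < test.length; i++) {
          flags[i] = test[i] < q1 - n * iqr || test[i] > q3 + n * iqr;
        }
        return flags;
      }
    }

With tukeys.n = 3 as in testTukeys, the injected 5.5 falls well outside the fences of training data clustered around 10, which is what the result-set assertions verify.
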
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/java/org/apache/ambari/metrics/adservice/prototype/TestTukeys.java b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/java/org/apache/ambari/metrics/adservice/prototype/TestTukeys.java
deleted file mode 100644
index 1077a9c..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/java/org/apache/ambari/metrics/adservice/prototype/TestTukeys.java
+++ /dev/null
@@ -1,100 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.ambari.metrics.adservice.prototype;
-
-import org.apache.ambari.metrics.adservice.prototype.core.MetricsCollectorInterface;
-import org.apache.ambari.metrics.adservice.prototype.core.RFunctionInvoker;
-import org.apache.ambari.metrics.adservice.prototype.methods.MetricAnomaly;
-import org.apache.ambari.metrics.adservice.prototype.methods.ema.EmaTechnique;
-import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
-import org.apache.hadoop.metrics2.sink.timeline.TimelineMetrics;
-import org.junit.Assume;
-import org.junit.BeforeClass;
-import org.junit.Test;
-
-import java.io.File;
-import java.net.URISyntaxException;
-import java.net.URL;
-import java.net.UnknownHostException;
-import java.util.List;
-import java.util.TreeMap;
-
-public class TestTukeys {
-
-  @BeforeClass
-  public static void init() throws URISyntaxException {
-    Assume.assumeTrue(System.getenv("R_HOME") != null);
-  }
-
-  @Test
-  public void testPointInTimeDetectionSystem() throws UnknownHostException, URISyntaxException {
-
-    URL url = ClassLoader.getSystemResource("R-scripts");
-    String fullFilePath = new File(url.toURI()).getAbsolutePath();
-    RFunctionInvoker.setScriptsDir(fullFilePath);
-
-    MetricsCollectorInterface metricsCollectorInterface = new MetricsCollectorInterface("avijayan-ams-1.openstacklocal", "http", "6188");
-
-    EmaTechnique ema = new EmaTechnique(0.5, 3);
-    long now = System.currentTimeMillis();
-
-    TimelineMetric metric1 = new TimelineMetric();
-    metric1.setMetricName("mm9");
-    metric1.setHostName(MetricsCollectorInterface.getDefaultLocalHostName());
-    metric1.setStartTime(now);
-    metric1.setAppId("aa9");
-    metric1.setInstanceId(null);
-    metric1.setType("Integer");
-
-    //Train
-    TreeMap<Long, Double> metricValues = new TreeMap<Long, Double>();
-
-    //2hr data.
-    for (int i = 0; i < 120; i++) {
-      double metric = 20000 + Math.random();
-      metricValues.put(now - i * 60 * 1000, metric);
-    }
-    metric1.setMetricValues(metricValues);
-    TimelineMetrics timelineMetrics = new TimelineMetrics();
-    timelineMetrics.addOrMergeTimelineMetric(metric1);
-
-    metricsCollectorInterface.emitMetrics(timelineMetrics);
-
-    List<MetricAnomaly> anomalyList = ema.test(metric1);
-    metricsCollectorInterface.publish(anomalyList);
-//
-//    PointInTimeADSystem pointInTimeADSystem = new PointInTimeADSystem(ema, metricsCollectorInterface, 3, 5*60*1000, 15*60*1000);
-//    pointInTimeADSystem.runOnce();
-//
-//    List<MetricAnomaly> anomalyList2 = ema.test(metric1);
-//
-//    pointInTimeADSystem.runOnce();
-//    List<MetricAnomaly> anomalyList3 = ema.test(metric1);
-//
-//    pointInTimeADSystem.runOnce();
-//    List<MetricAnomaly> anomalyList4 = ema.test(metric1);
-//
-//    pointInTimeADSystem.runOnce();
-//    List<MetricAnomaly> anomalyList5 = ema.test(metric1);
-//
-//    pointInTimeADSystem.runOnce();
-//    List<MetricAnomaly> anomalyList6 = ema.test(metric1);
-//
-//    Assert.assertTrue(anomalyList6.size() < anomalyList.size());
-  }
-}
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/java/org/apache/ambari/metrics/adservice/seriesgenerator/AbstractMetricSeries.java b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/java/org/apache/ambari/metrics/adservice/seriesgenerator/AbstractMetricSeries.java
deleted file mode 100644
index 635a929..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/java/org/apache/ambari/metrics/adservice/seriesgenerator/AbstractMetricSeries.java
+++ /dev/null
@@ -1,25 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.ambari.metrics.adservice.seriesgenerator;
-
-public interface AbstractMetricSeries {
-
-  public double nextValue();
-  public double[] getSeries(int n);
-
-}
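
The interface is intentionally small; a trivial implementation (purely illustrative, not part of the codebase) shows the minimal shape a new generator needs:

    package org.apache.ambari.metrics.adservice.seriesgenerator;

    import java.util.Arrays;

    // Illustrative only: a series where every point is the same value.
    public class ConstantMetricSeries implements AbstractMetricSeries {

      private final double value;

      public ConstantMetricSeries(double value) {
        this.value = value;
      }

      @Override
      public double nextValue() {
        return value;
      }

      @Override
      public double[] getSeries(int n) {
        double[] series = new double[n];
        Arrays.fill(series, value);
        return series;
      }
    }
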
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/java/org/apache/ambari/metrics/adservice/seriesgenerator/DualBandMetricSeries.java b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/java/org/apache/ambari/metrics/adservice/seriesgenerator/DualBandMetricSeries.java
deleted file mode 100644
index a9e3f30..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/java/org/apache/ambari/metrics/adservice/seriesgenerator/DualBandMetricSeries.java
+++ /dev/null
@@ -1,88 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.ambari.metrics.adservice.seriesgenerator;
-
-import java.util.Random;
-
-public class DualBandMetricSeries implements AbstractMetricSeries {
-
-  double lowBandValue = 0.0;
-  double lowBandDeviationPercentage = 0.0;
-  int lowBandPeriodSize = 10;
-  double highBandValue = 1.0;
-  double highBandDeviationPercentage = 0.0;
-  int highBandPeriodSize = 10;
-
-  Random random = new Random();
-  double lowBandValueLowerLimit, lowBandValueHigherLimit;
-  double highBandLowerLimit, highBandUpperLimit;
-  int l = 0, h = 0;
-
-  public DualBandMetricSeries(double lowBandValue,
-                              double lowBandDeviationPercentage,
-                              int lowBandPeriodSize,
-                              double highBandValue,
-                              double highBandDeviationPercentage,
-                              int highBandPeriodSize) {
-    this.lowBandValue = lowBandValue;
-    this.lowBandDeviationPercentage = lowBandDeviationPercentage;
-    this.lowBandPeriodSize = lowBandPeriodSize;
-    this.highBandValue = highBandValue;
-    this.highBandDeviationPercentage = highBandDeviationPercentage;
-    this.highBandPeriodSize = highBandPeriodSize;
-    init();
-  }
-
-  private void init() {
-    lowBandValueLowerLimit = lowBandValue - lowBandDeviationPercentage * lowBandValue;
-    lowBandValueHigherLimit = lowBandValue + lowBandDeviationPercentage * lowBandValue;
-    highBandLowerLimit = highBandValue - highBandDeviationPercentage * highBandValue;
-    highBandUpperLimit = highBandValue + highBandDeviationPercentage * highBandValue;
-  }
-
-  @Override
-  public double nextValue() {
-
-    double value = 0.0;
-
-    if (l < lowBandPeriodSize) {
-      value = lowBandValueLowerLimit + (lowBandValueHigherLimit - lowBandValueLowerLimit) * random.nextDouble();
-      l++;
-    } else if (h < highBandPeriodSize) {
-      value = highBandLowerLimit + (highBandUpperLimit - highBandLowerLimit) * random.nextDouble();
-      h++;
-    }
-
-    if (l == lowBandPeriodSize && h == highBandPeriodSize) {
-      l = 0;
-      h = 0;
-    }
-
-    return value;
-  }
-
-  @Override
-  public double[] getSeries(int n) {
-    double[] series = new double[n];
-    for (int i = 0; i < n; i++) {
-      series[i] = nextValue();
-    }
-    return series;
-  }
-
-}
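
A usage sketch for the generator above: it emits lowBandPeriodSize points in the low band, then highBandPeriodSize points in the high band, and repeats the cycle.

    // Sketch: 5 points near 10 (within 20%), then 5 points near 100 (within 10%), repeating.
    DualBandMetricSeries series = new DualBandMetricSeries(10, 0.2, 5, 100, 0.1, 5);
    double[] values = series.getSeries(20);  // two full low/high cycles
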
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/java/org/apache/ambari/metrics/adservice/seriesgenerator/MetricSeriesGeneratorFactory.java b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/java/org/apache/ambari/metrics/adservice/seriesgenerator/MetricSeriesGeneratorFactory.java
deleted file mode 100644
index a50b433..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/java/org/apache/ambari/metrics/adservice/seriesgenerator/MetricSeriesGeneratorFactory.java
+++ /dev/null
@@ -1,377 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.ambari.metrics.adservice.seriesgenerator;
-
-import java.util.Map;
-
-public class MetricSeriesGeneratorFactory {
-
-  /**
-   * Returns a uniformly distributed data series with some deviation % and outliers.
-   *
-   * @param n                                size of the data series
-   * @param value                            the value around which the uniform data series is centered
-   * @param deviationPercentage              the allowed deviation % on either side of the value. For example, if value = 10 and deviation % is 0.1, the series values lie between 9 and 11.
-   * @param outlierProbability               the probability of finding an outlier in the series
-   * @param outlierDeviationLowerPercentage  min % an outlier deviates from the value. If value = 10 and this is 30%, outliers lie below 7 or above 13.
-   * @param outlierDeviationHigherPercentage max % an outlier deviates from the value. If value = 10 and this is 60%, outliers lie no lower than 4 and no higher than 16.
-   * @param outliersAboveValue               whether outliers fall above (true) or below (false) the value
-   * @return uniform series
-   */
-  public static double[] createUniformSeries(int n,
-                                             double value,
-                                             double deviationPercentage,
-                                             double outlierProbability,
-                                             double outlierDeviationLowerPercentage,
-                                             double outlierDeviationHigherPercentage,
-                                             boolean outliersAboveValue) {
-
-    UniformMetricSeries metricSeries = new UniformMetricSeries(value,
-      deviationPercentage,
-      outlierProbability,
-      outlierDeviationLowerPercentage,
-      outlierDeviationHigherPercentage,
-      outliersAboveValue);
-
-    return metricSeries.getSeries(n);
-  }
-
-
-  /**
-   * Returns a normally distributed series.
-   *
-   * @param n                             size of the data series
-   * @param mean                          mean of the distribution
-   * @param sd                            standard deviation of the distribution
-   * @param outlierProbability            the probability of finding an outlier in the series
-   * @param outlierDeviationSDTimesLower  lower limit of an outlier, in multiples of sd from the mean
-   * @param outlierDeviationSDTimesHigher higher limit of an outlier, in multiples of sd from the mean
-   * @param outlierOnRightEnd             whether outliers fall on the right (true) or left (false) tail
-   * @return normal series
-   */
-  public static double[] createNormalSeries(int n,
-                                            double mean,
-                                            double sd,
-                                            double outlierProbability,
-                                            double outlierDeviationSDTimesLower,
-                                            double outlierDeviationSDTimesHigher,
-                                            boolean outlierOnRightEnd) {
-
-
-    NormalMetricSeries metricSeries = new NormalMetricSeries(mean,
-      sd,
-      outlierProbability,
-      outlierDeviationSDTimesLower,
-      outlierDeviationSDTimesHigher,
-      outlierOnRightEnd);
-
-    return metricSeries.getSeries(n);
-  }
-
-
-  /**
-   * Returns a monotonically increasing / decreasing series
-   *
-   * @param n                                size of the data series
-   * @param startValue                       Start value of the monotonic sequence
-   * @param slope                            slope of the line; m > 0 for increasing, m < 0 for decreasing
-   * @param deviationPercentage              the allowed deviation % on either side of the current 'y' value. For example, if the current 'y' value is 10 and deviation % is 0.1, the series values lie between 9 and 11.
-   * @param outlierProbability               the probability of finding an outlier in the series
-   * @param outlierDeviationLowerPercentage  min % an outlier deviates from the current 'y' value. If y = 10 and this is 30%, outliers lie below 7 or above 13.
-   * @param outlierDeviationHigherPercentage max % an outlier deviates from the current 'y' value. If y = 10 and this is 60%, outliers lie no lower than 4 and no higher than 16.
-   * @param outliersAboveValue               whether outliers fall above (true) or below (false) the 'y' value
-   * @return monotonic series
-   */
-  public static double[] createMonotonicSeries(int n,
-                                               double startValue,
-                                               double slope,
-                                               double deviationPercentage,
-                                               double outlierProbability,
-                                               double outlierDeviationLowerPercentage,
-                                               double outlierDeviationHigherPercentage,
-                                               boolean outliersAboveValue) {
-
-    MonotonicMetricSeries metricSeries = new MonotonicMetricSeries(startValue,
-      slope,
-      deviationPercentage,
-      outlierProbability,
-      outlierDeviationLowerPercentage,
-      outlierDeviationHigherPercentage,
-      outliersAboveValue);
-
-    return metricSeries.getSeries(n);
-  }
-
-
-  /**
-   * Returns a dual band series (lower and higher)
-   *
-   * @param n                           size of the data series
-   * @param lowBandValue                lower band center value
-   * @param lowBandDeviationPercentage  lower band deviation %
-   * @param lowBandPeriodSize           number of points per lower band period
-   * @param highBandValue               higher band center value
-   * @param highBandDeviationPercentage higher band deviation %
-   * @param highBandPeriodSize          number of points per higher band period
-   * @return dual band series
-   */
-  public static double[] getDualBandSeries(int n,
-                                           double lowBandValue,
-                                           double lowBandDeviationPercentage,
-                                           int lowBandPeriodSize,
-                                           double highBandValue,
-                                           double highBandDeviationPercentage,
-                                           int highBandPeriodSize) {
-
-    DualBandMetricSeries metricSeries  = new DualBandMetricSeries(lowBandValue,
-      lowBandDeviationPercentage,
-      lowBandPeriodSize,
-      highBandValue,
-      highBandDeviationPercentage,
-      highBandPeriodSize);
-
-    return metricSeries.getSeries(n);
-  }
-
-  /**
-   * Returns a step function series.
-   *
-   * @param n                              size of the data series
-   * @param startValue                     start steady value
-   * @param steadyValueDeviationPercentage allowed deviation % in the steady state value
-   * @param steadyPeriodSlope              slope within a steady period; m > 0 for increasing, m < 0 for decreasing, m = 0 for flat
-   * @param steadyPeriodMinSize            min size of a step period
-   * @param steadyPeriodMaxSize            max size of a step period
-   * @param stepChangePercentage           increase / decrease at each step, as a deviation % from the last value
-   * @param upwardStep                     whether each step moves upward (true) or downward (false)
-   * @return step function series
-   */
-  public static double[] getStepFunctionSeries(int n,
-                                               double startValue,
-                                               double steadyValueDeviationPercentage,
-                                               double steadyPeriodSlope,
-                                               int steadyPeriodMinSize,
-                                               int steadyPeriodMaxSize,
-                                               double stepChangePercentage,
-                                               boolean upwardStep) {
-
-    StepFunctionMetricSeries metricSeries = new StepFunctionMetricSeries(startValue,
-      steadyValueDeviationPercentage,
-      steadyPeriodSlope,
-      steadyPeriodMinSize,
-      steadyPeriodMaxSize,
-      stepChangePercentage,
-      upwardStep);
-
-    return metricSeries.getSeries(n);
-  }
-
-  /**
-   * Series with small period of turbulence and then back to steady.
-   *
-   * @param n                                        size of the data series
-   * @param steadyStateValue                         steady state center value
-   * @param steadyStateDeviationPercentage           steady state deviation in percentage
-   * @param turbulentPeriodDeviationLowerPercentage  turbulent state lower limit in terms of percentage from centre value.
-   * @param turbulentPeriodDeviationHigherPercentage turbulent state higher limit in terms of percentage from centre value.
-   * @param turbulentPeriodLength                    turbulent period length (number of points)
-   * @param turbulentStatePosition                   where the turbulent period occurs: 0 - at the beginning, 1 - in the middle (25% - 50% of the series), 2 - at the end of the series
-   * @return steady series with a turbulent period
-   */
-  public static double[] getSteadySeriesWithTurbulentPeriod(int n,
-                                                            double steadyStateValue,
-                                                            double steadyStateDeviationPercentage,
-                                                            double turbulentPeriodDeviationLowerPercentage,
-                                                            double turbulentPeriodDeviationHigherPercentage,
-                                                            int turbulentPeriodLength,
-                                                            int turbulentStatePosition
-  ) {
-
-
-    SteadyWithTurbulenceMetricSeries metricSeries = new SteadyWithTurbulenceMetricSeries(n,
-      steadyStateValue,
-      steadyStateDeviationPercentage,
-      turbulentPeriodDeviationLowerPercentage,
-      turbulentPeriodDeviationHigherPercentage,
-      turbulentPeriodLength,
-      turbulentStatePosition);
-
-    return metricSeries.getSeries(n);
-  }
-
-
-  public static double[] generateSeries(String type, int n, Map<String, String> configs) {
-
-    double[] series;
-    switch (type) {
-
-      case "normal":
-        series = createNormalSeries(n,
-          Double.parseDouble(configs.getOrDefault("mean", "0")),
-          Double.parseDouble(configs.getOrDefault("sd", "1")),
-          Double.parseDouble(configs.getOrDefault("outlierProbability", "0")),
-          Double.parseDouble(configs.getOrDefault("outlierDeviationSDTimesLower", "0")),
-          Double.parseDouble(configs.getOrDefault("outlierDeviationSDTimesHigher", "0")),
-          Boolean.parseBoolean(configs.getOrDefault("outlierOnRightEnd", "true")));
-        break;
-
-      case "uniform":
-        series = createUniformSeries(n,
-          Double.parseDouble(configs.getOrDefault("value", "10")),
-          Double.parseDouble(configs.getOrDefault("deviationPercentage", "0")),
-          Double.parseDouble(configs.getOrDefault("outlierProbability", "0")),
-          Double.parseDouble(configs.getOrDefault("outlierDeviationLowerPercentage", "0")),
-          Double.parseDouble(configs.getOrDefault("outlierDeviationHigherPercentage", "0")),
-          Boolean.parseBoolean(configs.getOrDefault("outliersAboveValue", "true")));
-        break;
-
-      case "monotonic":
-        series = createMonotonicSeries(n,
-          Double.parseDouble(configs.getOrDefault("startValue", "10")),
-          Double.parseDouble(configs.getOrDefault("slope", "0")),
-          Double.parseDouble(configs.getOrDefault("deviationPercentage", "0")),
-          Double.parseDouble(configs.getOrDefault("outlierProbability", "0")),
-          Double.parseDouble(configs.getOrDefault("outlierDeviationLowerPercentage", "0")),
-          Double.parseDouble(configs.getOrDefault("outlierDeviationHigherPercentage", "0")),
-          Boolean.parseBoolean(configs.getOrDefault("outliersAboveValue", "true")));
-        break;
-
-      case "dualband":
-        series = getDualBandSeries(n,
-          Double.parseDouble(configs.getOrDefault("lowBandValue", "10")),
-          Double.parseDouble(configs.getOrDefault("lowBandDeviationPercentage", "0")),
-          Integer.parseInt(configs.getOrDefault("lowBandPeriodSize", "0")),
-          Double.parseDouble(configs.getOrDefault("highBandValue", "10")),
-          Double.parseDouble(configs.getOrDefault("highBandDeviationPercentage", "0")),
-          Integer.parseInt(configs.getOrDefault("highBandPeriodSize", "0")));
-        break;
-
-      case "step":
-        series = getStepFunctionSeries(n,
-          Double.parseDouble(configs.getOrDefault("startValue", "10")),
-          Double.parseDouble(configs.getOrDefault("steadyValueDeviationPercentage", "0")),
-          Double.parseDouble(configs.getOrDefault("steadyPeriodSlope", "0")),
-          Integer.parseInt(configs.getOrDefault("steadyPeriodMinSize", "0")),
-          Integer.parseInt(configs.getOrDefault("steadyPeriodMaxSize", "0")),
-          Double.parseDouble(configs.getOrDefault("stepChangePercentage", "0")),
-          Boolean.parseBoolean(configs.getOrDefault("upwardStep", "true")));
-        break;
-
-      case "turbulence":
-        series = getSteadySeriesWithTurbulentPeriod(n,
-          Double.parseDouble(configs.getOrDefault("steadyStateValue", "10")),
-          Double.parseDouble(configs.getOrDefault("steadyStateDeviationPercentage", "0")),
-          Double.parseDouble(configs.getOrDefault("turbulentPeriodDeviationLowerPercentage", "0")),
-          Double.parseDouble(configs.getOrDefault("turbulentPeriodDeviationHigherPercentage", "10")),
-          Integer.parseInt(configs.getOrDefault("turbulentPeriodLength", "0")),
-          Integer.parseInt(configs.getOrDefault("turbulentStatePosition", "0")));
-        break;
-
-      default:
-        series = createNormalSeries(n,
-          0,
-          1,
-          0,
-          0,
-          0,
-          true);
-    }
-    return series;
-  }
-
-  public static AbstractMetricSeries generateSeries(String type, Map<String, String> configs) {
-
-    AbstractMetricSeries series;
-    switch (type) {
-
-      case "normal":
-        series = new NormalMetricSeries(Double.parseDouble(configs.getOrDefault("mean", "0")),
-          Double.parseDouble(configs.getOrDefault("sd", "1")),
-          Double.parseDouble(configs.getOrDefault("outlierProbability", "0")),
-          Double.parseDouble(configs.getOrDefault("outlierDeviationSDTimesLower", "0")),
-          Double.parseDouble(configs.getOrDefault("outlierDeviationSDTimesHigher", "0")),
-          Boolean.parseBoolean(configs.getOrDefault("outlierOnRightEnd", "true")));
-        break;
-
-      case "uniform":
-        series = new UniformMetricSeries(
-          Double.parseDouble(configs.getOrDefault("value", "10")),
-          Double.parseDouble(configs.getOrDefault("deviationPercentage", "0")),
-          Double.parseDouble(configs.getOrDefault("outlierProbability", "0")),
-          Double.parseDouble(configs.getOrDefault("outlierDeviationLowerPercentage", "0")),
-          Double.parseDouble(configs.getOrDefault("outlierDeviationHigherPercentage", "0")),
-          Boolean.parseBoolean(configs.getOrDefault("outliersAboveValue", "true")));
-        break;
-
-      case "monotonic":
-        series = new MonotonicMetricSeries(
-          Double.parseDouble(configs.getOrDefault("startValue", "10")),
-          Double.parseDouble(configs.getOrDefault("slope", "0")),
-          Double.parseDouble(configs.getOrDefault("deviationPercentage", "0")),
-          Double.parseDouble(configs.getOrDefault("outlierProbability", "0")),
-          Double.parseDouble(configs.getOrDefault("outlierDeviationLowerPercentage", "0")),
-          Double.parseDouble(configs.getOrDefault("outlierDeviationHigherPercentage", "0")),
-          Boolean.parseBoolean(configs.getOrDefault("outliersAboveValue", "true")));
-        break;
-
-      case "dualband":
-        series = new DualBandMetricSeries(
-          Double.parseDouble(configs.getOrDefault("lowBandValue", "10")),
-          Double.parseDouble(configs.getOrDefault("lowBandDeviationPercentage", "0")),
-          Integer.parseInt(configs.getOrDefault("lowBandPeriodSize", "0")),
-          Double.parseDouble(configs.getOrDefault("highBandValue", "10")),
-          Double.parseDouble(configs.getOrDefault("highBandDeviationPercentage", "0")),
-          Integer.parseInt(configs.getOrDefault("highBandPeriodSize", "0")));
-        break;
-
-      case "step":
-        series = new StepFunctionMetricSeries(
-          Double.parseDouble(configs.getOrDefault("startValue", "10")),
-          Double.parseDouble(configs.getOrDefault("steadyValueDeviationPercentage", "0")),
-          Double.parseDouble(configs.getOrDefault("steadyPeriodSlope", "0")),
-          Integer.parseInt(configs.getOrDefault("steadyPeriodMinSize", "0")),
-          Integer.parseInt(configs.getOrDefault("steadyPeriodMaxSize", "0")),
-          Double.parseDouble(configs.getOrDefault("stepChangePercentage", "0")),
-          Boolean.parseBoolean(configs.getOrDefault("upwardStep", "true")));
-        break;
-
-      case "turbulence":
-        series = new SteadyWithTurbulenceMetricSeries(
-          Integer.parseInt(configs.getOrDefault("approxSeriesLength", "100")),
-          Double.parseDouble(configs.getOrDefault("steadyStateValue", "10")),
-          Double.parseDouble(configs.getOrDefault("steadyStateDeviationPercentage", "0")),
-          Double.parseDouble(configs.getOrDefault("turbulentPeriodDeviationLowerPercentage", "0")),
-          Double.parseDouble(configs.getOrDefault("turbulentPeriodDeviationHigherPercentage", "10")),
-          Integer.parseInt(configs.getOrDefault("turbulentPeriodLength", "0")),
-          Integer.parseInt(configs.getOrDefault("turbulentStatePosition", "0")));
-        break;
-
-      default:
-        series = new NormalMetricSeries(0,
-          1,
-          0,
-          0,
-          0,
-          true);
-    }
-    return series;
-  }
-
-}
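
A usage sketch for the config-map entry points above; any key left unset falls back to the default shown in the switch statement. The wrapper class below is illustrative.

    import java.util.HashMap;
    import java.util.Map;

    import org.apache.ambari.metrics.adservice.seriesgenerator.MetricSeriesGeneratorFactory;

    public class FactoryUsageSketch {
      public static void main(String[] args) {
        // A 100-point upward step series: plateaus of exactly 5 points, +50% at each step.
        Map<String, String> configs = new HashMap<>();
        configs.put("startValue", "10");
        configs.put("steadyPeriodMinSize", "5");
        configs.put("steadyPeriodMaxSize", "5");
        configs.put("stepChangePercentage", "0.5");
        double[] series = MetricSeriesGeneratorFactory.generateSeries("step", 100, configs);
        System.out.println(series.length);  // 100
      }
    }
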
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/java/org/apache/ambari/metrics/adservice/seriesgenerator/MetricSeriesGeneratorTest.java b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/java/org/apache/ambari/metrics/adservice/seriesgenerator/MetricSeriesGeneratorTest.java
deleted file mode 100644
index 03537e4..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/java/org/apache/ambari/metrics/adservice/seriesgenerator/MetricSeriesGeneratorTest.java
+++ /dev/null
@@ -1,101 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.ambari.metrics.adservice.seriesgenerator;
-
-import org.junit.Assert;
-import org.junit.Test;
-
-public class MetricSeriesGeneratorTest {
-
-  @Test
-  public void testUniformSeries() {
-
-    UniformMetricSeries metricSeries = new UniformMetricSeries(5, 0.2, 0, 0, 0, true);
-    double value = metricSeries.nextValue();
-    Assert.assertTrue(value >= 4 && value <= 6);  // single draw; two nextValue() calls would test two different samples
-
-    double[] uniformSeries = MetricSeriesGeneratorFactory.createUniformSeries(50, 10, 0.2, 0.1, 0.4, 0.5, true);
-    Assert.assertTrue(uniformSeries.length == 50);
-
-    for (int i = 0; i < uniformSeries.length; i++) {
-      double value = uniformSeries[i];
-
-      if (value > 10 * 1.2) {
-        Assert.assertTrue(value >= 10 * 1.4 && value <= 10 * 1.6);
-      } else {
-        Assert.assertTrue(value >= 10 * 0.8 && value <= 10 * 1.2);
-      }
-    }
-  }
-
-  @Test
-  public void testNormalSeries() {
-    NormalMetricSeries metricSeries = new NormalMetricSeries(0, 1, 0, 0, 0, true);
-    double value = metricSeries.nextValue();
-    Assert.assertTrue(value >= -3 && value <= 3);
-  }
-
-  @Test
-  public void testMonotonicSeries() {
-
-    MonotonicMetricSeries metricSeries = new MonotonicMetricSeries(0, 0.5, 0, 0, 0, 0, true);
-    Assert.assertTrue(metricSeries.nextValue() == 0);
-    Assert.assertTrue(metricSeries.nextValue() == 0.5);
-
-    double[] incSeries = MetricSeriesGeneratorFactory.createMonotonicSeries(20, 0, 0.5, 0, 0, 0, 0, true);
-    Assert.assertTrue(incSeries.length == 20);
-    for (int i = 0; i < incSeries.length; i++) {
-      Assert.assertTrue(incSeries[i] == i * 0.5);
-    }
-  }
-
-  @Test
-  public void testDualBandSeries() {
-    double[] dualBandSeries = MetricSeriesGeneratorFactory.getDualBandSeries(30, 5, 0.2, 5, 15, 0.3, 4);
-    Assert.assertTrue(dualBandSeries[0] >= 4 && dualBandSeries[0] <= 6);
-    Assert.assertTrue(dualBandSeries[4] >= 4 && dualBandSeries[4] <= 6);
-    Assert.assertTrue(dualBandSeries[5] >= 10.5 && dualBandSeries[5] <= 19.5);
-    Assert.assertTrue(dualBandSeries[8] >= 10.5 && dualBandSeries[8] <= 19.5);
-    Assert.assertTrue(dualBandSeries[9] >= 4 && dualBandSeries[9] <= 6);
-  }
-
-  @Test
-  public void testStepSeries() {
-    double[] stepSeries = MetricSeriesGeneratorFactory.getStepFunctionSeries(30, 10, 0, 0, 5, 5, 0.5, true);
-
-    Assert.assertTrue(stepSeries[0] == 10);
-    Assert.assertTrue(stepSeries[4] == 10);
-
-    Assert.assertTrue(stepSeries[5] == 10*1.5);
-    Assert.assertTrue(stepSeries[9] == 10*1.5);
-
-    Assert.assertTrue(stepSeries[10] == 10*1.5*1.5);
-    Assert.assertTrue(stepSeries[14] == 10*1.5*1.5);
-  }
-
-  @Test
-  public void testSteadySeriesWithTurbulence() {
-    double[] steadySeriesWithTurbulence = MetricSeriesGeneratorFactory.getSteadySeriesWithTurbulentPeriod(30, 5, 0, 1, 1, 5, 1);
-
-    int count = 0;
-    for (int i = 0; i < steadySeriesWithTurbulence.length; i++) {
-      if (steadySeriesWithTurbulence[i] == 10) {
-        count++;
-      }
-    }
-    Assert.assertTrue(count == 5);
-  }
-}
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/java/org/apache/ambari/metrics/adservice/seriesgenerator/MonotonicMetricSeries.java b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/java/org/apache/ambari/metrics/adservice/seriesgenerator/MonotonicMetricSeries.java
deleted file mode 100644
index 8bd1a9b..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/java/org/apache/ambari/metrics/adservice/seriesgenerator/MonotonicMetricSeries.java
+++ /dev/null
@@ -1,101 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.ambari.metrics.adservice.seriesgenerator;
-
-import java.util.Random;
-
-public class MonotonicMetricSeries implements AbstractMetricSeries {
-
-  double startValue = 0.0;
-  double slope = 0.5;
-  double deviationPercentage = 0.0;
-  double outlierProbability = 0.0;
-  double outlierDeviationLowerPercentage = 0.0;
-  double outlierDeviationHigherPercentage = 0.0;
-  boolean outliersAboveValue = true;
-
-  Random random = new Random();
-  double nonOutlierProbability;
-
-  // y = mx + c
-  double y;
-  double m;
-  double x;
-  double c;
-
-  public MonotonicMetricSeries(double startValue,
-                               double slope,
-                               double deviationPercentage,
-                               double outlierProbability,
-                               double outlierDeviationLowerPercentage,
-                               double outlierDeviationHigherPercentage,
-                               boolean outliersAboveValue) {
-    this.startValue = startValue;
-    this.slope = slope;
-    this.deviationPercentage = deviationPercentage;
-    this.outlierProbability = outlierProbability;
-    this.outlierDeviationLowerPercentage = outlierDeviationLowerPercentage;
-    this.outlierDeviationHigherPercentage = outlierDeviationHigherPercentage;
-    this.outliersAboveValue = outliersAboveValue;
-    init();
-  }
-
-  private void init() {
-    y = startValue;
-    m = slope;
-    x = 1;
-    c = y - (m * x);
-    nonOutlierProbability = 1.0 - outlierProbability;
-  }
-
-  @Override
-  public double nextValue() {
-
-    double value;
-    double probability = random.nextDouble();
-
-    y = m * x + c;
-    if (probability <= nonOutlierProbability) {
-      double valueDeviationLowerLimit = y - deviationPercentage * y;
-      double valueDeviationHigherLimit = y + deviationPercentage * y;
-      value = valueDeviationLowerLimit + (valueDeviationHigherLimit - valueDeviationLowerLimit) * random.nextDouble();
-    } else {
-      if (outliersAboveValue) {
-        double outlierLowerLimit = y + outlierDeviationLowerPercentage * y;
-        double outlierUpperLimit = y + outlierDeviationHigherPercentage * y;
-        value = outlierLowerLimit + (outlierUpperLimit - outlierLowerLimit) * random.nextDouble();
-      } else {
-        double outlierLowerLimit = y - outlierDeviationLowerPercentage * y;
-        double outlierUpperLimit = y - outlierDeviationHigherPercentage * y;
-        value = outlierUpperLimit + (outlierLowerLimit - outlierUpperLimit) * random.nextDouble();
-      }
-    }
-    x++;
-    return value;
-  }
-
-  @Override
-  public double[] getSeries(int n) {
-    double[] series = new double[n];
-    for (int i = 0; i < n; i++) {
-      series[i] = nextValue();
-    }
-    return series;
-  }
-
-}
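
A worked pass through the recurrence above, with no deviation and no outliers, matches what testMonotonicSeries asserts:

    // startValue = 0, slope = 0.5:
    // init():      y = 0, m = 0.5, x = 1, c = y - m*x = -0.5
    // nextValue(): y = 0.5*1 - 0.5 = 0.0, then x becomes 2
    // nextValue(): y = 0.5*2 - 0.5 = 0.5, then x becomes 3
    // so the i-th value (0-indexed) is i * 0.5.
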
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/java/org/apache/ambari/metrics/adservice/seriesgenerator/NormalMetricSeries.java b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/java/org/apache/ambari/metrics/adservice/seriesgenerator/NormalMetricSeries.java
deleted file mode 100644
index fdedb6e..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/java/org/apache/ambari/metrics/adservice/seriesgenerator/NormalMetricSeries.java
+++ /dev/null
@@ -1,81 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.ambari.metrics.adservice.seriesgenerator;
-
-import java.util.Random;
-
-public class NormalMetricSeries implements AbstractMetricSeries {
-
-  double mean = 0.0;
-  double sd = 1.0;
-  double outlierProbability = 0.0;
-  double outlierDeviationSDTimesLower = 0.0;
-  double outlierDeviationSDTimesHigher = 0.0;
-  boolean outlierOnRightEnd = true;
-
-  Random random = new Random();
-  double nonOutlierProbability;
-
-
-  public NormalMetricSeries(double mean,
-                            double sd,
-                            double outlierProbability,
-                            double outlierDeviationSDTimesLower,
-                            double outlierDeviationSDTimesHigher,
-                            boolean outlierOnRightEnd) {
-    this.mean = mean;
-    this.sd = sd;
-    this.outlierProbability = outlierProbability;
-    this.outlierDeviationSDTimesLower = outlierDeviationSDTimesLower;
-    this.outlierDeviationSDTimesHigher = outlierDeviationSDTimesHigher;
-    this.outlierOnRightEnd = outlierOnRightEnd;
-    init();
-  }
-
-  private void init() {
-    nonOutlierProbability = 1.0 - outlierProbability;
-  }
-
-  @Override
-  public double nextValue() {
-
-    double value;
-    double probability = random.nextDouble();
-
-    if (probability <= nonOutlierProbability) {
-      value = random.nextGaussian() * sd + mean;
-    } else {
-      if (outlierOnRightEnd) {
-        value = mean + (outlierDeviationSDTimesLower + (outlierDeviationSDTimesHigher - outlierDeviationSDTimesLower) * random.nextDouble()) * sd;
-      } else {
-        value = mean - (outlierDeviationSDTimesLower + (outlierDeviationSDTimesHigher - outlierDeviationSDTimesLower) * random.nextDouble()) * sd;
-      }
-    }
-    return value;
-  }
-
-  @Override
-  public double[] getSeries(int n) {
-    double[] series = new double[n];
-    for (int i = 0; i < n; i++) {
-      series[i] = nextValue();
-    }
-    return series;
-  }
-
-}
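
A usage sketch: with the parameters below, roughly 5% of the points are drawn uniformly from 4 to 6 standard deviations above the mean, and the rest from N(0, 1).

    // Sketch: mean 0, sd 1, 5% outliers placed 4 to 6 sd above the mean.
    NormalMetricSeries series = new NormalMetricSeries(0, 1, 0.05, 4, 6, true);
    double[] values = series.getSeries(1000);  // expect on the order of 50 values in [4, 6]
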
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/java/org/apache/ambari/metrics/adservice/seriesgenerator/SteadyWithTurbulenceMetricSeries.java b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/java/org/apache/ambari/metrics/adservice/seriesgenerator/SteadyWithTurbulenceMetricSeries.java
deleted file mode 100644
index 403e599..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/java/org/apache/ambari/metrics/adservice/seriesgenerator/SteadyWithTurbulenceMetricSeries.java
+++ /dev/null
@@ -1,115 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.ambari.metrics.adservice.seriesgenerator;
-
-import java.util.Random;
-
-public class SteadyWithTurbulenceMetricSeries implements AbstractMetricSeries {
-
-  double steadyStateValue = 0.0;
-  double steadyStateDeviationPercentage = 0.0;
-  double turbulentPeriodDeviationLowerPercentage = 0.3;
-  double turbulentPeriodDeviationHigherPercentage = 0.5;
-  int turbulentPeriodLength = 5;
-  int turbulentStatePosition = 1;
-  int approximateSeriesLength = 10;
-
-  Random random = new Random();
-  double valueDeviationLowerLimit;
-  double valueDeviationHigherLimit;
-  double tPeriodLowerLimit;
-  double tPeriodUpperLimit;
-  int tPeriodStartIndex = 0;
-  int index = 0;
-
-  public SteadyWithTurbulenceMetricSeries(int approximateSeriesLength,
-                                          double steadyStateValue,
-                                          double steadyStateDeviationPercentage,
-                                          double turbulentPeriodDeviationLowerPercentage,
-                                          double turbulentPeriodDeviationHigherPercentage,
-                                          int turbulentPeriodLength,
-                                          int turbulentStatePosition) {
-    this.approximateSeriesLength = approximateSeriesLength;
-    this.steadyStateValue = steadyStateValue;
-    this.steadyStateDeviationPercentage = steadyStateDeviationPercentage;
-    this.turbulentPeriodDeviationLowerPercentage = turbulentPeriodDeviationLowerPercentage;
-    this.turbulentPeriodDeviationHigherPercentage = turbulentPeriodDeviationHigherPercentage;
-    this.turbulentPeriodLength = turbulentPeriodLength;
-    this.turbulentStatePosition = turbulentStatePosition;
-    init();
-  }
-
-  private void init() {
-
-    if (turbulentStatePosition == 1) {
-      tPeriodStartIndex = (int) (0.25 * approximateSeriesLength + (0.25 * approximateSeriesLength * random.nextDouble()));
-    } else if (turbulentStatePosition == 2) {
-      tPeriodStartIndex = approximateSeriesLength - turbulentPeriodLength;
-    }
-
-    valueDeviationLowerLimit = steadyStateValue - steadyStateDeviationPercentage * steadyStateValue;
-    valueDeviationHigherLimit = steadyStateValue + steadyStateDeviationPercentage * steadyStateValue;
-
-    tPeriodLowerLimit = steadyStateValue + turbulentPeriodDeviationLowerPercentage * steadyStateValue;
-    tPeriodUpperLimit = steadyStateValue + turbulentPeriodDeviationHigherPercentage * steadyStateValue;
-  }
-
-  @Override
-  public double nextValue() {
-
-    double value;
-
-    if (index >= tPeriodStartIndex && index < (tPeriodStartIndex + turbulentPeriodLength)) {
-      value = tPeriodLowerLimit + (tPeriodUpperLimit - tPeriodLowerLimit) * random.nextDouble();
-    } else {
-      value = valueDeviationLowerLimit + (valueDeviationHigherLimit - valueDeviationLowerLimit) * random.nextDouble();
-    }
-    index++;
-    return value;
-  }
-
-  @Override
-  public double[] getSeries(int n) {
-
-    double[] series = new double[n];
-    int turbulentPeriodStartIndex = 0;
-
-    if (turbulentStatePosition == 1) {
-      turbulentPeriodStartIndex = (int) (0.25 * n + (0.25 * n * random.nextDouble()));
-    } else if (turbulentStatePosition == 2) {
-      turbulentPeriodStartIndex = n - turbulentPeriodLength;
-    }
-
-    double valueDevLowerLimit = steadyStateValue - steadyStateDeviationPercentage * steadyStateValue;
-    double valueDevHigherLimit = steadyStateValue + steadyStateDeviationPercentage * steadyStateValue;
-
-    double turbulentPeriodLowerLimit = steadyStateValue + turbulentPeriodDeviationLowerPercentage * steadyStateValue;
-    double turbulentPeriodUpperLimit = steadyStateValue + turbulentPeriodDeviationHigherPercentage * steadyStateValue;
-
-    for (int i = 0; i < n; i++) {
-      if (i >= turbulentPeriodStartIndex && i < (turbulentPeriodStartIndex + turbulentPeriodLength)) {
-        series[i] = turbulentPeriodLowerLimit + (turbulentPeriodUpperLimit - turbulentPeriodLowerLimit) * random.nextDouble();
-      } else {
-        series[i] = valueDevLowerLimit + (valueDevHigherLimit - valueDevLowerLimit) * random.nextDouble();
-      }
-    }
-
-    return series;
-  }
-
-}
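
A usage sketch for the generator above. Note that getSeries(n) positions the turbulent window relative to the requested n, while nextValue() positions it relative to the approximate length given to the constructor, so the two paths can disagree when n differs from that estimate.

    // Sketch: 100 steady points around 10 (within 5%) with a 10-point burst 30-50% above,
    // placed somewhere in the 25%-50% region of the series (turbulentStatePosition = 1).
    SteadyWithTurbulenceMetricSeries series =
        new SteadyWithTurbulenceMetricSeries(100, 10, 0.05, 0.3, 0.5, 10, 1);
    double[] values = series.getSeries(100);
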
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/java/org/apache/ambari/metrics/adservice/seriesgenerator/StepFunctionMetricSeries.java b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/java/org/apache/ambari/metrics/adservice/seriesgenerator/StepFunctionMetricSeries.java
deleted file mode 100644
index c91eac9..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/java/org/apache/ambari/metrics/adservice/seriesgenerator/StepFunctionMetricSeries.java
+++ /dev/null
@@ -1,107 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.ambari.metrics.adservice.seriesgenerator;
-
-import java.util.Random;
-
-public class StepFunctionMetricSeries implements AbstractMetricSeries {
-
-  double startValue = 0.0;
-  double steadyValueDeviationPercentage = 0.0;
-  double steadyPeriodSlope = 0.5;
-  int steadyPeriodMinSize = 10;
-  int steadyPeriodMaxSize = 20;
-  double stepChangePercentage = 0.0;
-  boolean upwardStep = true;
-
-  Random random = new Random();
-
-  // y = mx + c
-  double y;
-  double m;
-  double x;
-  double c;
-  int currentStepSize;
-  int currentIndex;
-
-  public StepFunctionMetricSeries(double startValue,
-                                  double steadyValueDeviationPercentage,
-                                  double steadyPeriodSlope,
-                                  int steadyPeriodMinSize,
-                                  int steadyPeriodMaxSize,
-                                  double stepChangePercentage,
-                                  boolean upwardStep) {
-    this.startValue = startValue;
-    this.steadyValueDeviationPercentage = steadyValueDeviationPercentage;
-    this.steadyPeriodSlope = steadyPeriodSlope;
-    this.steadyPeriodMinSize = steadyPeriodMinSize;
-    this.steadyPeriodMaxSize = steadyPeriodMaxSize;
-    this.stepChangePercentage = stepChangePercentage;
-    this.upwardStep = upwardStep;
-    init();
-  }
-
-  private void init() {
-    y = startValue;
-    m = steadyPeriodSlope;
-    x = 1;
-    c = y - (m * x);
-
-    currentStepSize = (int) (steadyPeriodMinSize + (steadyPeriodMaxSize - steadyPeriodMinSize) * random.nextDouble());
-    currentIndex = 0;
-  }
-
-  @Override
-  public double nextValue() {
-
-    double value = 0.0;
-
-    if (currentIndex < currentStepSize) {
-      y = m * x + c;
-      double valueDeviationLowerLimit = y - steadyValueDeviationPercentage * y;
-      double valueDeviationHigherLimit = y + steadyValueDeviationPercentage * y;
-      value = valueDeviationLowerLimit + (valueDeviationHigherLimit - valueDeviationLowerLimit) * random.nextDouble();
-      x++;
-      currentIndex++;
-    }
-
-    if (currentIndex == currentStepSize) {
-      currentIndex = 0;
-      currentStepSize = (int) (steadyPeriodMinSize + (steadyPeriodMaxSize - steadyPeriodMinSize) * random.nextDouble());
-      if (upwardStep) {
-        y = y + stepChangePercentage * y;
-      } else {
-        y = y - stepChangePercentage * y;
-      }
-      x = 1;
-      c = y - (m * x);
-    }
-
-    return value;
-  }
-
-  @Override
-  public double[] getSeries(int n) {
-    double[] series = new double[n];
-    for (int i = 0; i < n; i++) {
-      series[i] = nextValue();
-    }
-    return series;
-  }
-
-}
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/java/org/apache/ambari/metrics/adservice/seriesgenerator/UniformMetricSeries.java b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/java/org/apache/ambari/metrics/adservice/seriesgenerator/UniformMetricSeries.java
deleted file mode 100644
index 6122f82..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/java/org/apache/ambari/metrics/adservice/seriesgenerator/UniformMetricSeries.java
+++ /dev/null
@@ -1,95 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.ambari.metrics.adservice.seriesgenerator;
-
-import java.util.Random;
-
-public class UniformMetricSeries implements AbstractMetricSeries {
-
-  double value = 0.0;
-  double deviationPercentage = 0.0;
-  double outlierProbability = 0.0;
-  double outlierDeviationLowerPercentage = 0.0;
-  double outlierDeviationHigherPercentage = 0.0;
-  boolean outliersAboveValue= true;
-
-  Random random = new Random();
-  double valueDeviationLowerLimit;
-  double valueDeviationHigherLimit;
-  double outlierLeftLowerLimit;
-  double outlierLeftHigherLimit;
-  double outlierRightLowerLimit;
-  double outlierRightUpperLimit;
-  double nonOutlierProbability;
-
-
-  public UniformMetricSeries(double value,
-                             double deviationPercentage,
-                             double outlierProbability,
-                             double outlierDeviationLowerPercentage,
-                             double outlierDeviationHigherPercentage,
-                             boolean outliersAboveValue) {
-    this.value = value;
-    this.deviationPercentage = deviationPercentage;
-    this.outlierProbability = outlierProbability;
-    this.outlierDeviationLowerPercentage = outlierDeviationLowerPercentage;
-    this.outlierDeviationHigherPercentage = outlierDeviationHigherPercentage;
-    this.outliersAboveValue = outliersAboveValue;
-    init();
-  }
-
-  private void init() {
-    valueDeviationLowerLimit = value - deviationPercentage * value;
-    valueDeviationHigherLimit = value + deviationPercentage * value;
-
-    outlierLeftLowerLimit = value - outlierDeviationHigherPercentage * value;
-    outlierLeftHigherLimit = value - outlierDeviationLowerPercentage * value;
-    outlierRightLowerLimit = value + outlierDeviationLowerPercentage * value;
-    outlierRightUpperLimit = value + outlierDeviationHigherPercentage * value;
-
-    nonOutlierProbability = 1.0 - outlierProbability;
-  }
-
-  @Override
-  public double nextValue() {
-
-    double value;
-    double probability = random.nextDouble();
-
-    if (probability <= nonOutlierProbability) {
-      value = valueDeviationLowerLimit + (valueDeviationHigherLimit - valueDeviationLowerLimit) * random.nextDouble();
-    } else {
-      if (!outliersAboveValue) {
-        value = outlierLeftLowerLimit + (outlierLeftHigherLimit - outlierLeftLowerLimit) * random.nextDouble();
-      } else {
-        value = outlierRightLowerLimit + (outlierRightUpperLimit - outlierRightLowerLimit) * random.nextDouble();
-      }
-    }
-    return value;
-  }
-
-  @Override
-  public double[] getSeries(int n) {
-    double[] series = new double[n];
-    for (int i = 0; i < n; i++) {
-      series[i] = nextValue();
-    }
-    return series;
-  }
-
-}
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/resources/config.yaml b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/resources/config.yaml
deleted file mode 100644
index 6b09499..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/resources/config.yaml
+++ /dev/null
@@ -1,35 +0,0 @@
-#Licensed under the Apache License, Version 2.0 (the "License");
-#you may not use this file except in compliance with the License.
-#You may obtain a copy of the License at
-#
-#    http://www.apache.org/licenses/LICENSE-2.0
-#
-#Unless required by applicable law or agreed to in writing, software
-#distributed under the License is distributed on an "AS IS" BASIS,
-#WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-#See the License for the specific language governing permissions and
-#limitations under the License.
-
-metricDefinitionService:
-  inputDefinitionDirectory: /etc/ambari-metrics-anomaly-detection/conf/definitionDirectory
-
-metricsCollector:
-  hosts: host1,host2
-  port: 6188
-  protocol: http
-  metadataEndpoint: /ws/v1/timeline/metrics/metadata/key
-
-adQueryService:
-  anomalyDataTtl: 604800
-
-metricDefinitionDB:
-  # force checksum verification of all data that is read from the file system on behalf of a particular read
-  verifyChecksums: true
-  # raise an error as soon as it detects an internal corruption
-  performParanoidChecks: false
-  # Path to Level DB directory
-  dbDirPath: /var/lib/ambari-metrics-anomaly-detection/
-
-spark:
-  mode: standalone
-  masterHostPort: localhost:7077
\ No newline at end of file
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionAppConfigTest.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionAppConfigTest.scala
deleted file mode 100644
index 76391a0..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionAppConfigTest.scala
+++ /dev/null
@@ -1,67 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.ambari.metrics.adservice.app
-
-import java.io.File
-import java.net.URL
-
-import javax.validation.Validator
-
-import org.scalatest.FunSuite
-
-import com.fasterxml.jackson.databind.ObjectMapper
-import com.fasterxml.jackson.datatype.guava.GuavaModule
-
-import io.dropwizard.configuration.YamlConfigurationFactory
-import io.dropwizard.jersey.validation.Validators
-
-class AnomalyDetectionAppConfigTest extends FunSuite {
-
-  test("testConfiguration") {
-
-    val classLoader = getClass.getClassLoader
-    val url: URL = classLoader.getResource("config.yaml")
-    val file = new File(url.getFile)
-
-    val objectMapper: ObjectMapper = new ObjectMapper()
-    objectMapper.registerModule(new GuavaModule)
-    val validator: Validator = Validators.newValidator
-    val factory: YamlConfigurationFactory[AnomalyDetectionAppConfig] =
-      new YamlConfigurationFactory[AnomalyDetectionAppConfig](classOf[AnomalyDetectionAppConfig], validator, objectMapper, "")
-    val config = factory.build(file)
-
-    assert(config.isInstanceOf[AnomalyDetectionAppConfig])
-
-    assert(config.getMetricDefinitionServiceConfiguration.getInputDefinitionDirectory ==
-      "/etc/ambari-metrics-anomaly-detection/conf/definitionDirectory")
-
-    assert(config.getMetricCollectorConfiguration.getHosts == "host1,host2")
-    assert(config.getMetricCollectorConfiguration.getPort == "6188")
-
-    assert(config.getAdServiceConfiguration.getAnomalyDataTtl == 604800)
-
-    assert(config.getMetricDefinitionDBConfiguration.getDbDirPath == "/var/lib/ambari-metrics-anomaly-detection/")
-    assert(config.getMetricDefinitionDBConfiguration.getVerifyChecksums)
-    assert(!config.getMetricDefinitionDBConfiguration.getPerformParanoidChecks)
-
-    assert(config.getSparkConfiguration.getMode.equals("standalone"))
-    assert(config.getSparkConfiguration.getMasterHostPort.equals("localhost:7077"))
-
-  }
-
-}
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/app/DefaultADResourceSpecTest.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/app/DefaultADResourceSpecTest.scala
deleted file mode 100644
index 7330ff9..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/app/DefaultADResourceSpecTest.scala
+++ /dev/null
@@ -1,57 +0,0 @@
-/**
-  * Licensed to the Apache Software Foundation (ASF) under one
-  * or more contributor license agreements.  See the NOTICE file
-  * distributed with this work for additional information
-  * regarding copyright ownership.  The ASF licenses this file
-  * to you under the Apache License, Version 2.0 (the
-  * "License"); you may not use this file except in compliance
-  * with the License.  You may obtain a copy of the License at
-  *
-  * http://www.apache.org/licenses/LICENSE-2.0
-  *
-  * Unless required by applicable law or agreed to in writing, software
-  * distributed under the License is distributed on an "AS IS" BASIS,
-  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  * See the License for the specific language governing permissions and
-  * limitations under the License.
-  */
-package org.apache.ambari.metrics.adservice.app
-
-import java.time.LocalDateTime
-
-import javax.ws.rs.client.Client
-import javax.ws.rs.core.MediaType.APPLICATION_JSON
-
-import org.apache.ambari.metrics.adservice.app.DropwizardAppRuleHelper.withAppRunning
-import org.glassfish.jersey.client.ClientProperties.{CONNECT_TIMEOUT, READ_TIMEOUT}
-import org.glassfish.jersey.client.{ClientConfig, JerseyClientBuilder}
-import org.glassfish.jersey.filter.LoggingFilter
-import org.glassfish.jersey.jaxb.internal.XmlJaxbElementProvider
-import org.joda.time.DateTime
-import org.scalatest.{FunSpec, Matchers}
-
-import com.google.common.io.Resources
-
-class DefaultADResourceSpecTest extends FunSpec with Matchers {
-
-  describe("/anomaly") {
-    it("Must return default message") {
-      withAppRunning(classOf[AnomalyDetectionApp], Resources.getResource("config.yaml").getPath) { rule =>
-        val json = client.target(s"http://localhost:${rule.getLocalPort}/anomaly")
-          .request().accept(APPLICATION_JSON).buildGet().invoke(classOf[String])
-        val dtf = java.time.format.DateTimeFormatter.ofPattern("yyyy/MM/dd HH:mm")
-        val now = LocalDateTime.now
-        assert(json == "{\"message\":\"Anomaly Detection Service!\"," + "\"today\":\"" + now + "\"}")
-      }
-    }
-  }
-
-  def client: Client = {
-    val config = new ClientConfig()
-    config.register(classOf[LoggingFilter])
-    config.register(classOf[XmlJaxbElementProvider.App])
-    config.property(CONNECT_TIMEOUT, 5000)
-    config.property(READ_TIMEOUT, 10000)
-    JerseyClientBuilder.createClient(config)
-  }
-}
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/app/DropwizardAppRuleHelper.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/app/DropwizardAppRuleHelper.scala
deleted file mode 100644
index 6017bb4..0000000
... 2994 lines suppressed ...


[ambari] 22/39: AMBARI-22192. Setup an application server for hosting the AD System Manager.


commit 0c0c6279426a24ce411769268fff493a3e3b4fb0
Author: Siddharth Wagle <sw...@hortonworks.com>
AuthorDate: Thu Oct 12 12:09:49 2017 -0700

    AMBARI-22192. Setup an application server for hosting the AD System Manager.
---
 .../pom.xml                                        | 569 +++++++++++++--------
 .../src/main/resources/config.yml                  |  12 +
 .../adservice/app/AnomalyDetectionApp.scala        |  69 +++
 .../adservice/app/AnomalyDetectionAppConfig.scala  |  24 +
 .../adservice/app/AnomalyDetectionAppModule.scala  |  38 ++
 .../metrics/adservice/app/DefaultHealthCheck.scala |  25 +
 .../metrics/adservice/app/GuiceInjector.scala      |  56 ++
 .../adservice/common/PhoenixQueryConstants.scala   |   2 +-
 .../adservice/resource/AnomalyResource.scala       |  35 ++
 .../metrics/adservice/resource/RootResource.scala  |  35 ++
 .../metrics/adservice/service/ADQueryService.scala |  22 +
 .../adservice/service/ADQueryServiceImpl.scala     |  22 +
 .../adservice/app/DefaultADResourceSpecTest.scala  |  54 ++
 .../adservice/app/DropwizardAppRuleHelper.scala    |  39 ++
 .../app/DropwizardResourceTestRuleHelper.scala     |  33 ++
 .../common/ADManagerConfigurationTest.scala        |  17 +
 ambari-metrics/ambari-metrics-common/pom.xml       |  12 +-
 ambari-metrics/pom.xml                             |   1 +
 ambari-utility/pom.xml                             |   4 +-
 19 files changed, 857 insertions(+), 212 deletions(-)
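
The commit wires the service up as a plain Dropwizard application, so the shaded jar built by the maven-shade-plugin below starts with the usual "server <config>" arguments. A minimal launcher sketch (the config path is illustrative, not part of this commit):

    // Hypothetical local launcher; Dropwizard's Application.run accepts
    // ("server", <path-to-yaml>) exactly like the command line.
    object RunLocally {
      def main(args: Array[String]): Unit =
        new AnomalyDetectionApp().run("server", "src/main/resources/config.yml")
    }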

diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/pom.xml b/ambari-metrics/ambari-metrics-anomaly-detection-service/pom.xml
index 6f8f8c1..c9bb7b7 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/pom.xml
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/pom.xml
@@ -20,217 +20,370 @@
 <project xmlns="http://maven.apache.org/POM/4.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-    <parent>
-        <artifactId>ambari-metrics</artifactId>
-        <groupId>org.apache.ambari</groupId>
-        <version>2.0.0.0-SNAPSHOT</version>
-    </parent>
-    <modelVersion>4.0.0</modelVersion>
-    <artifactId>ambari-metrics-anomaly-detection-service</artifactId>
+  <parent>
+    <artifactId>ambari-metrics</artifactId>
+    <groupId>org.apache.ambari</groupId>
     <version>2.0.0.0-SNAPSHOT</version>
-    <properties>
-        <scala.version>2.11.1</scala.version>
-        <scala.binary.version>2.11</scala.binary.version>
-        <hadoop.version>2.7.3.2.6.0.3-8</hadoop.version>
-    </properties>
+  </parent>
+  <modelVersion>4.0.0</modelVersion>
+  <artifactId>ambari-metrics-anomaly-detection-service</artifactId>
+  <version>2.0.0.0-SNAPSHOT</version>
+  <name>Ambari Metrics Anomaly Detection Service</name>
+  <packaging>jar</packaging>
 
-    <repositories>
-        <repository>
-            <id>scala-tools.org</id>
-            <name>Scala-Tools Maven2 Repository</name>
-            <url>http://scala-tools.org/repo-releases</url>
-        </repository>
-    </repositories>
+  <properties>
+    <scala.version>2.12.3</scala.version>
+    <scala.binary.version>2.11</scala.binary.version>
+    <hadoop.version>2.7.3.2.6.0.3-8</hadoop.version>
+    <jackson.version>2.8.9</jackson.version>
+    <dropwizard.version>1.2.0</dropwizard.version>
+    <spark.version>2.2.0</spark.version>
+  </properties>
+  
+  <repositories>
+    <repository>
+      <id>scala-tools.org</id>
+      <name>Scala-Tools Maven2 Repository</name>
+      <url>http://scala-tools.org/repo-releases</url>
+    </repository>
+  </repositories>
 
-    <pluginRepositories>
-        <pluginRepository>
-            <id>scala-tools.org</id>
-            <name>Scala-Tools Maven2 Repository</name>
-            <url>http://scala-tools.org/repo-releases</url>
-        </pluginRepository>
-    </pluginRepositories>
+  <pluginRepositories>
+    <pluginRepository>
+      <id>scala-tools.org</id>
+      <name>Scala-Tools Maven2 Repository</name>
+      <url>http://scala-tools.org/repo-releases</url>
+    </pluginRepository>
+  </pluginRepositories>
 
-    <build>
-        <plugins>
-            <plugin>
-                <artifactId>maven-compiler-plugin</artifactId>
-                <configuration>
-                    <source>1.8</source>
-                    <target>1.8</target>
-                </configuration>
-            </plugin>
-            <plugin>
-                <groupId>org.scala-tools</groupId>
-                <artifactId>maven-scala-plugin</artifactId>
-                <executions>
-                    <execution>
-                        <goals>
-                            <goal>compile</goal>
-                            <goal>testCompile</goal>
-                        </goals>
-                    </execution>
-                </executions>
-                <configuration>
-                    <scalaVersion>${scala.version}</scalaVersion>
-                    <args>
-                        <arg>-target:jvm-1.5</arg>
-                    </args>
-                </configuration>
-            </plugin>
-        </plugins>
-    </build>
-    <name>Ambari Metrics Anomaly Detection Service</name>
-    <packaging>jar</packaging>
+  <build>
+    <finalName>${project.artifactId}</finalName>
+    <resources>
+      <resource>
+        <filtering>true</filtering>
+        <directory>src/main/resources</directory>
+        <includes>
+          <include>**/*.yml</include>
+          <include>**/*.txt</include>
+        </includes>
+      </resource>
+    </resources>
+    <plugins>
+      <plugin>
+        <artifactId>maven-compiler-plugin</artifactId>
+        <configuration>
+          <source>1.8</source>
+          <target>1.8</target>
+        </configuration>
+      </plugin>
+      <plugin>
+        <groupId>net.alchim31.maven</groupId>
+        <artifactId>scala-maven-plugin</artifactId>
+        <version>3.3.1</version>
+        <executions>
+          <execution>
+            <id>scala-compile-first</id>
+            <phase>process-resources</phase>
+            <goals>
+              <goal>add-source</goal>
+              <goal>compile</goal>
+            </goals>
+          </execution>
+          <execution>
+            <id>scala-test-compile</id>
+            <phase>process-test-resources</phase>
+            <goals>
+              <goal>testCompile</goal>
+            </goals>
+          </execution>
+        </executions>
+        <configuration>
+          <jvmArgs>
+            <jvmArg>-Xms512m</jvmArg>
+            <jvmArg>-Xmx2048m</jvmArg>
+          </jvmArgs>
+        </configuration>
+      </plugin>
+      <plugin>
+        <groupId>org.scalatest</groupId>
+        <artifactId>scalatest-maven-plugin</artifactId>
+        <version>1.0</version>
+      </plugin>
+      <plugin>
+        <groupId>org.scala-tools</groupId>
+        <artifactId>maven-scala-plugin</artifactId>
+        <executions>
+          <execution>
+            <goals>
+              <goal>compile</goal>
+              <goal>testCompile</goal>
+            </goals>
+          </execution>
+        </executions>
+        <configuration>
+          <scalaVersion>${scala.version}</scalaVersion>
+        </configuration>
+      </plugin>
+      <plugin>
+        <groupId>org.apache.maven.plugins</groupId>
+        <artifactId>maven-jar-plugin</artifactId>
+        <version>2.5</version>
+      </plugin>
+      <plugin>
+        <groupId>org.apache.maven.plugins</groupId>
+        <artifactId>maven-shade-plugin</artifactId>
+        <version>3.1.0</version>
+        <configuration>
+          <createDependencyReducedPom>false</createDependencyReducedPom>
+          <minimizeJar>true</minimizeJar>
+          <filters>
+            <filter>
+              <artifact>*:*</artifact>
+              <excludes>
+                <exclude>META-INF/*.SF</exclude>
+                <exclude>META-INF/*.DSA</exclude>
+                <exclude>META-INF/*.RSA</exclude>
+              </excludes>
+            </filter>
+          </filters>
+        </configuration>
+        <executions>
+          <execution>
+            <phase>package</phase>
+            <goals>
+              <goal>shade</goal>
+            </goals>
+            <configuration>
+              <transformers>
+                <transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
+                <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
+                  <mainClass>
+                    org.apache.ambari.metrics.adservice.app.AnomalyDetectionApp
+                  </mainClass>
+                </transformer>
+              </transformers>
+            </configuration>
+          </execution>
+        </executions>
+      </plugin>
+    </plugins>
+  </build>
 
-    <dependencies>
-
-        <dependency>
-            <groupId>commons-lang</groupId>
-            <artifactId>commons-lang</artifactId>
-            <version>2.5</version>
-        </dependency>
-
-        <dependency>
-            <groupId>org.slf4j</groupId>
-            <artifactId>slf4j-api</artifactId>
-            <version>1.7.2</version>
-        </dependency>
-
-        <dependency>
-            <groupId>org.slf4j</groupId>
-            <artifactId>slf4j-log4j12</artifactId>
-            <version>1.7.2</version>
-        </dependency>
-
-        <dependency>
-            <groupId>com.github.lucarosellini.rJava</groupId>
-            <artifactId>JRI</artifactId>
-            <version>0.9-7</version>
-        </dependency>
-        <dependency>
-            <groupId>org.apache.spark</groupId>
-            <artifactId>spark-streaming_2.11</artifactId>
-            <version>2.1.1</version>
-        </dependency>
-        <dependency>
-            <groupId>org.apache.kafka</groupId>
-            <artifactId>kafka_2.10</artifactId>
-            <version>0.10.1.0</version>
-            <exclusions>
-                <exclusion>
-                    <groupId>com.sun.jdmk</groupId>
-                    <artifactId>jmxtools</artifactId>
-                </exclusion>
-                <exclusion>
-                    <groupId>com.sun.jmx</groupId>
-                    <artifactId>jmxri</artifactId>
-                </exclusion>
-                <exclusion>
-                    <groupId>javax.mail</groupId>
-                    <artifactId>mail</artifactId>
-                </exclusion>
-                <exclusion>
-                    <groupId>javax.jms</groupId>
-                    <artifactId>jmx</artifactId>
-                </exclusion>
-                <exclusion>
-                    <groupId>javax.jms</groupId>
-                    <artifactId>jms</artifactId>
-                </exclusion>
-            </exclusions>
-        </dependency>
-        <dependency>
-            <groupId>org.apache.kafka</groupId>
-            <artifactId>kafka-clients</artifactId>
-            <version>0.10.1.0</version>
-        </dependency>
-        <dependency>
-            <groupId>org.apache.kafka</groupId>
-            <artifactId>connect-json</artifactId>
-            <version>0.10.1.0</version>
-        </dependency>
-        <dependency>
-            <groupId>org.apache.spark</groupId>
-            <artifactId>spark-streaming-kafka_2.10</artifactId>
-            <version>1.6.3</version>
-        </dependency>
-        <dependency>
-            <groupId>org.apache.spark</groupId>
-            <artifactId>spark-sql_2.10</artifactId>
-            <version>1.6.3</version>
-        </dependency>
-        <dependency>
-            <groupId>org.apache.phoenix</groupId>
-            <artifactId>phoenix-spark</artifactId>
-            <version>4.10.0-HBase-1.1</version>
-        </dependency>
-        <dependency>
-            <groupId>org.apache.spark</groupId>
-            <artifactId>spark-mllib_2.10</artifactId>
-            <version>1.3.0</version>
-        </dependency>
-        <dependency>
-            <groupId>junit</groupId>
-            <artifactId>junit</artifactId>
-            <scope>test</scope>
-            <version>4.10</version>
-        </dependency>
-        <dependency>
-            <groupId>org.apache.ambari</groupId>
-            <artifactId>ambari-metrics-common</artifactId>
-            <version>${project.version}</version>
-        </dependency>
-        <dependency>
-            <groupId>org.apache.httpcomponents</groupId>
-            <artifactId>httpclient</artifactId>
-            <version>4.2.5</version>
-        </dependency>
-        <dependency>
-            <groupId>org.scala-lang</groupId>
-            <artifactId>scala-library</artifactId>
-            <version>${scala.version}</version>
-        </dependency>
-        <dependency>
-            <groupId>org.apache.spark</groupId>
-            <artifactId>spark-core_${scala.binary.version}</artifactId>
-            <version>2.1.1</version>
-            <scope>provided</scope>
-        </dependency>
-        <dependency>
-            <groupId>org.apache.spark</groupId>
-            <artifactId>spark-mllib_${scala.binary.version}</artifactId>
-            <version>2.1.1</version>
-            <scope>provided</scope>
-        </dependency>
-        <dependency>
-            <groupId>org.apache.hadoop</groupId>
-            <artifactId>hadoop-common</artifactId>
-            <version>${hadoop.version}</version>
-            <scope>provided</scope>
-            <exclusions>
-                <exclusion>
-                    <groupId>commons-el</groupId>
-                    <artifactId>commons-el</artifactId>
-                </exclusion>
-                <exclusion>
-                    <groupId>tomcat</groupId>
-                    <artifactId>jasper-runtime</artifactId>
-                </exclusion>
-                <exclusion>
-                    <groupId>tomcat</groupId>
-                    <artifactId>jasper-compiler</artifactId>
-                </exclusion>
-                <exclusion>
-                    <groupId>org.mortbay.jetty</groupId>
-                    <artifactId>jsp-2.1-jetty</artifactId>
-                </exclusion>
-            </exclusions>
-        </dependency>
-        <dependency>
-            <groupId>org.scalatest</groupId>
-            <artifactId>scalatest_2.11</artifactId>
-            <version>3.0.1</version>
-            <scope>test</scope>
-        </dependency>
-    </dependencies>
+  <dependencies>
+    <dependency>
+      <groupId>commons-lang</groupId>
+      <artifactId>commons-lang</artifactId>
+      <version>2.5</version>
+    </dependency>
+    <dependency>
+      <groupId>org.slf4j</groupId>
+      <artifactId>slf4j-api</artifactId>
+      <version>1.7.2</version>
+    </dependency>
+    <dependency>
+      <groupId>com.github.lucarosellini.rJava</groupId>
+      <artifactId>JRI</artifactId>
+      <version>0.9-7</version>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.spark</groupId>
+      <artifactId>spark-streaming_${scala.binary.version}</artifactId>
+      <version>${spark.version}</version>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.kafka</groupId>
+      <artifactId>kafka_2.10</artifactId>
+      <version>0.10.1.0</version>
+      <exclusions>
+        <exclusion>
+          <groupId>com.sun.jdmk</groupId>
+          <artifactId>jmxtools</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>com.sun.jmx</groupId>
+          <artifactId>jmxri</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>javax.mail</groupId>
+          <artifactId>mail</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>javax.jms</groupId>
+          <artifactId>jmx</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>javax.jms</groupId>
+          <artifactId>jms</artifactId>
+        </exclusion>
+      </exclusions>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.kafka</groupId>
+      <artifactId>kafka-clients</artifactId>
+      <version>0.10.1.0</version>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.kafka</groupId>
+      <artifactId>connect-json</artifactId>
+      <version>0.10.1.0</version>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.spark</groupId>
+      <artifactId>spark-streaming-kafka_2.10</artifactId>
+      <version>1.6.3</version>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.phoenix</groupId>
+      <artifactId>phoenix-spark</artifactId>
+      <version>4.10.0-HBase-1.1</version>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.ambari</groupId>
+      <artifactId>ambari-metrics-common</artifactId>
+      <version>${project.version}</version>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.httpcomponents</groupId>
+      <artifactId>httpclient</artifactId>
+      <version>4.2.5</version>
+    </dependency>
+    <dependency>
+      <groupId>org.scala-lang</groupId>
+      <artifactId>scala-library</artifactId>
+      <version>${scala.version}</version>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.spark</groupId>
+      <artifactId>spark-core_${scala.binary.version}</artifactId>
+      <version>${spark.version}</version>
+      <scope>provided</scope>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.spark</groupId>
+      <artifactId>spark-mllib_${scala.binary.version}</artifactId>
+      <version>${spark.version}</version>
+      <scope>provided</scope>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.hadoop</groupId>
+      <artifactId>hadoop-common</artifactId>
+      <version>${hadoop.version}</version>
+      <scope>provided</scope>
+      <exclusions>
+        <exclusion>
+          <groupId>commons-el</groupId>
+          <artifactId>commons-el</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>tomcat</groupId>
+          <artifactId>jasper-runtime</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>tomcat</groupId>
+          <artifactId>jasper-compiler</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>org.mortbay.jetty</groupId>
+          <artifactId>jsp-2.1-jetty</artifactId>
+        </exclusion>
+      </exclusions>
+    </dependency>
+    <dependency>
+      <groupId>org.scalatest</groupId>
+      <artifactId>scalatest_2.12</artifactId>
+      <version>3.0.1</version>
+      <scope>test</scope>
+    </dependency>
+    <dependency>
+      <groupId>io.dropwizard</groupId>
+      <artifactId>dropwizard-core</artifactId>
+      <version>${dropwizard.version}</version>
+      <exclusions>
+        <exclusion>
+          <groupId>org.glassfish.hk2.external</groupId>
+          <artifactId>javax.inject</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>org.glassfish.hk2.external</groupId>
+          <artifactId>aopalliance-repackaged</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>ch.qos.logback</groupId>
+          <artifactId>logback-classic</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>ch.qos.logback</groupId>
+          <artifactId>logback-access</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>org.slf4j</groupId>
+          <artifactId>log4j-over-slf4j</artifactId>
+        </exclusion>
+      </exclusions>
+    </dependency>
+    <dependency>
+      <groupId>log4j</groupId>
+      <artifactId>log4j</artifactId>
+      <version>1.2.17</version>
+    </dependency>
+    <dependency>
+      <groupId>org.slf4j</groupId>
+      <artifactId>slf4j-log4j12</artifactId>
+      <version>1.7.21</version>
+    </dependency>
+    <dependency>
+      <groupId>io.dropwizard</groupId>
+      <artifactId>dropwizard-testing</artifactId>
+      <version>${dropwizard.version}</version>
+      <scope>test</scope>
+      <exclusions>
+        <exclusion>
+          <groupId>org.glassfish.hk2.external</groupId>
+          <artifactId>javax.inject</artifactId>
+        </exclusion>
+      </exclusions>
+    </dependency>
+    <dependency>
+      <groupId>joda-time</groupId>
+      <artifactId>joda-time</artifactId>
+      <version>2.9.4</version>
+    </dependency>
+    <dependency>
+      <groupId>org.joda</groupId>
+      <artifactId>joda-convert</artifactId>
+      <version>1.8.1</version>
+    </dependency>
+    <dependency>
+      <groupId>com.google.inject</groupId>
+      <artifactId>guice</artifactId>
+      <version>4.1.0</version>
+    </dependency>
+    <dependency>
+      <groupId>com.google.inject.extensions</groupId>
+      <artifactId>guice-multibindings</artifactId>
+      <version>4.1.0</version>
+    </dependency>
+    <dependency>
+      <groupId>com.fasterxml.jackson.module</groupId>
+      <artifactId>jackson-module-scala_2.12</artifactId>
+      <version>${jackson.version}</version>
+    </dependency>
+    <dependency>
+      <groupId>com.fasterxml.jackson.datatype</groupId>
+      <artifactId>jackson-datatype-jdk8</artifactId>
+      <version>${jackson.version}</version>
+    </dependency>
+    <dependency>
+      <groupId>junit</groupId>
+      <artifactId>junit</artifactId>
+      <version>4.12</version>
+      <scope>test</scope>
+    </dependency>
+    <dependency>
+      <groupId>com.google.guava</groupId>
+      <artifactId>guava</artifactId>
+      <version>21.0</version>
+      <scope>test</scope>
+    </dependency>
+  </dependencies>
 </project>
\ No newline at end of file
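
One thing to watch in the updated properties: scala.version moves to 2.12.3 while scala.binary.version stays 2.11, and the new scalatest_2.12 and jackson-module-scala_2.12 artifacts are pinned to the 2.12 binary family even though the Spark dependencies still resolve through ${scala.binary.version} (2.11). Scala 2.11 and 2.12 artifacts are not binary compatible, so the two properties should agree with whichever family is intended. A standard-library-only probe shows which Scala runtime actually wins on the classpath:

    // Prints the Scala library version in use, e.g. "2.12.3".
    object ScalaVersionCheck {
      def main(args: Array[String]): Unit =
        println(scala.util.Properties.versionNumberString)
    }
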
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/resources/config.yml b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/resources/config.yml
new file mode 100644
index 0000000..9ca9e95
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/resources/config.yml
@@ -0,0 +1,12 @@
+server:
+  applicationConnectors:
+   - type: http
+     port: 9999
+  adminConnectors:
+    - type: http
+      port: 9990
+  requestLog:
+    type: external
+
+logging:
+  type: external
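
With these connectors the application serves on port 9999 and the Dropwizard admin endpoints (healthchecks, metrics) on 9990. A smoke test against a locally running instance, using the plain JAX-RS client (the ports come from the config above; everything else is hypothetical):

    import javax.ws.rs.client.ClientBuilder
    import javax.ws.rs.core.MediaType.APPLICATION_JSON

    val client = ClientBuilder.newClient()
    val body = client.target("http://localhost:9999/topNAnomalies")
      .request(APPLICATION_JSON).get(classOf[String])
    println(body)  // {"message":"Anomaly Detection Service!","today":"..."}
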
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionApp.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionApp.scala
new file mode 100644
index 0000000..2cf0fc5
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionApp.scala
@@ -0,0 +1,69 @@
+/**
+  * Licensed to the Apache Software Foundation (ASF) under one
+  * or more contributor license agreements.  See the NOTICE file
+  * distributed with this work for additional information
+  * regarding copyright ownership.  The ASF licenses this file
+  * to you under the Apache License, Version 2.0 (the
+  * "License"); you may not use this file except in compliance
+  * with the License.  You may obtain a copy of the License at
+  *
+  * http://www.apache.org/licenses/LICENSE-2.0
+  *
+  * Unless required by applicable law or agreed to in writing, software
+  * distributed under the License is distributed on an "AS IS" BASIS,
+  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  * See the License for the specific language governing permissions and
+  * limitations under the License.
+  */
+package org.apache.ambari.metrics.adservice.app
+
+import javax.ws.rs.Path
+import javax.ws.rs.container.{ContainerRequestFilter, ContainerResponseFilter}
+
+import org.apache.ambari.metrics.adservice.app.GuiceInjector.{withInjector, wrap}
+import org.glassfish.jersey.filter.LoggingFilter
+
+import com.codahale.metrics.health.HealthCheck
+import com.fasterxml.jackson.databind.{ObjectMapper, SerializationFeature}
+import com.fasterxml.jackson.datatype.joda.JodaModule
+import com.fasterxml.jackson.jaxrs.json.JacksonJaxbJsonProvider
+import com.fasterxml.jackson.module.scala.DefaultScalaModule
+import io.dropwizard.Application
+import io.dropwizard.setup.Environment
+
+class AnomalyDetectionApp extends Application[AnomalyDetectionAppConfig] {
+  override def getName = "anomaly-detection-service"
+
+  override def run(t: AnomalyDetectionAppConfig, env: Environment): Unit = {
+    configure(t, env)
+  }
+
+  def configure(config: AnomalyDetectionAppConfig, env: Environment) {
+    withInjector(new AnomalyDetectionAppModule(config, env)) { injector =>
+      injector.instancesWithAnnotation(classOf[Path]).foreach { r => env.jersey().register(r) }
+      injector.instancesOfType(classOf[HealthCheck]).foreach { h => env.healthChecks.register(h.getClass.getName, h) }
+      injector.instancesOfType(classOf[ContainerRequestFilter]).foreach { f => env.jersey().register(f) }
+      injector.instancesOfType(classOf[ContainerResponseFilter]).foreach { f => env.jersey().register(f) }
+    }
+    env.jersey.register(jacksonJaxbJsonProvider)
+    env.jersey.register(new LoggingFilter)
+  }
+
+  private def jacksonJaxbJsonProvider: JacksonJaxbJsonProvider = {
+    val provider = new JacksonJaxbJsonProvider()
+    val objectMapper = new ObjectMapper()
+    objectMapper.registerModule(DefaultScalaModule)
+    objectMapper.registerModule(new JodaModule)
+    objectMapper.configure(SerializationFeature.WRAP_ROOT_VALUE, false)
+    objectMapper.configure(SerializationFeature.FAIL_ON_EMPTY_BEANS, false)
+    objectMapper.configure(SerializationFeature.WRITE_DATES_AS_TIMESTAMPS, false)
+    objectMapper.configure(SerializationFeature.WRITE_EMPTY_JSON_ARRAYS, true)
+    provider.setMapper(objectMapper)
+    provider
+  }
+}
+
+
+object AnomalyDetectionApp {
+  def main(args: Array[String]): Unit = new AnomalyDetectionApp().run(args: _*)
+}
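
Because configure() registers every Guice binding annotated with @Path, new endpoints need no explicit Jersey wiring. A hypothetical extra resource, shown only to illustrate the pattern:

    import javax.ws.rs.{GET, Path, Produces}
    import javax.ws.rs.core.MediaType.APPLICATION_JSON

    @Path("/ping")
    class PingResource {
      @GET
      @Produces(Array(APPLICATION_JSON))
      def ping: String = "pong"
    }

plus a bind(classOf[PingResource]) line in AnomalyDetectionAppModule.
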
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionAppConfig.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionAppConfig.scala
new file mode 100644
index 0000000..9e6cc6d
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionAppConfig.scala
@@ -0,0 +1,24 @@
+package org.apache.ambari.metrics.adservice.app
+
+import io.dropwizard.Configuration
+
+/**
+  * Licensed to the Apache Software Foundation (ASF) under one
+  * or more contributor license agreements.  See the NOTICE file
+  * distributed with this work for additional information
+  * regarding copyright ownership.  The ASF licenses this file
+  * to you under the Apache License, Version 2.0 (the
+  * "License"); you may not use this file except in compliance
+  * with the License.  You may obtain a copy of the License at
+  *
+  * http://www.apache.org/licenses/LICENSE-2.0
+  *
+  * Unless required by applicable law or agreed to in writing, software
+  * distributed under the License is distributed on an "AS IS" BASIS,
+  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  * See the License for the specific language governing permissions and
+  * limitations under the License.
+  */
+class AnomalyDetectionAppConfig extends Configuration {
+
+}
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionAppModule.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionAppModule.scala
new file mode 100644
index 0000000..338c97b
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionAppModule.scala
@@ -0,0 +1,38 @@
+/**
+  * Licensed to the Apache Software Foundation (ASF) under one
+  * or more contributor license agreements.  See the NOTICE file
+  * distributed with this work for additional information
+  * regarding copyright ownership.  The ASF licenses this file
+  * to you under the Apache License, Version 2.0 (the
+  * "License"); you may not use this file except in compliance
+  * with the License.  You may obtain a copy of the License at
+  *
+  * http://www.apache.org/licenses/LICENSE-2.0
+  *
+  * Unless required by applicable law or agreed to in writing, software
+  * distributed under the License is distributed on an "AS IS" BASIS,
+  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  * See the License for the specific language governing permissions and
+  * limitations under the License.
+  */
+package org.apache.ambari.metrics.adservice.app
+
+import org.apache.ambari.metrics.adservice.resource.{AnomalyResource, RootResource}
+import org.apache.ambari.metrics.adservice.service.{ADQueryService, ADQueryServiceImpl}
+
+import com.codahale.metrics.health.HealthCheck
+import com.google.inject.AbstractModule
+import com.google.inject.multibindings.Multibinder
+import io.dropwizard.setup.Environment
+
+class AnomalyDetectionAppModule(config: AnomalyDetectionAppConfig, env: Environment) extends AbstractModule {
+  override def configure() {
+    bind(classOf[AnomalyDetectionAppConfig]).toInstance(config)
+    bind(classOf[Environment]).toInstance(env)
+    val healthCheckBinder = Multibinder.newSetBinder(binder(), classOf[HealthCheck])
+    healthCheckBinder.addBinding().to(classOf[DefaultHealthCheck])
+    bind(classOf[AnomalyResource])
+    bind(classOf[RootResource])
+    bind(classOf[ADQueryService]).to(classOf[ADQueryServiceImpl])
+  }
+}
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/DefaultHealthCheck.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/DefaultHealthCheck.scala
new file mode 100644
index 0000000..c36e8d2
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/DefaultHealthCheck.scala
@@ -0,0 +1,25 @@
+/**
+  * Licensed to the Apache Software Foundation (ASF) under one
+  * or more contributor license agreements.  See the NOTICE file
+  * distributed with this work for additional information
+  * regarding copyright ownership.  The ASF licenses this file
+  * to you under the Apache License, Version 2.0 (the
+  * "License"); you may not use this file except in compliance
+  * with the License.  You may obtain a copy of the License at
+  *
+  * http://www.apache.org/licenses/LICENSE-2.0
+  *
+  * Unless required by applicable law or agreed to in writing, software
+  * distributed under the License is distributed on an "AS IS" BASIS,
+  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  * See the License for the specific language governing permissions and
+  * limitations under the License.
+  */
+package org.apache.ambari.metrics.adservice.app
+
+import com.codahale.metrics.health.HealthCheck
+import com.codahale.metrics.health.HealthCheck.Result
+
+class DefaultHealthCheck extends HealthCheck {
+  override def check(): Result = Result.healthy()
+}
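
The default check always reports healthy; it mainly keeps the HealthCheck Multibinder set in AnomalyDetectionAppModule non-empty. A more useful check would follow the same contract, for example (the reachability probe is a placeholder, not part of this commit):

    // Same HealthCheck/Result imports as DefaultHealthCheck above.
    class CollectorReachableCheck extends HealthCheck {
      override def check(): Result = {
        val reachable = true // placeholder: probe the metrics collector here
        if (reachable) Result.healthy() else Result.unhealthy("collector unreachable")
      }
    }
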
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/GuiceInjector.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/GuiceInjector.scala
new file mode 100644
index 0000000..37da5f9
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/GuiceInjector.scala
@@ -0,0 +1,56 @@
+/**
+  * Licensed to the Apache Software Foundation (ASF) under one
+  * or more contributor license agreements.  See the NOTICE file
+  * distributed with this work for additional information
+  * regarding copyright ownership.  The ASF licenses this file
+  * to you under the Apache License, Version 2.0 (the
+  * "License"); you may not use this file except in compliance
+  * with the License.  You may obtain a copy of the License at
+  *
+  * http://www.apache.org/licenses/LICENSE-2.0
+  *
+  * Unless required by applicable law or agreed to in writing, software
+  * distributed under the License is distributed on an "AS IS" BASIS,
+  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  * See the License for the specific language governing permissions and
+  * limitations under the License.
+  */
+package org.apache.ambari.metrics.adservice.app
+
+import java.lang.annotation.Annotation
+
+import com.google.inject.{Guice, Injector, Module, TypeLiteral}
+
+import scala.collection.JavaConversions._
+import scala.language.implicitConversions
+import scala.reflect._
+
+object GuiceInjector {
+
+  def withInjector(modules: Module*)(fn: (Injector) => Unit) = {
+    val injector = Guice.createInjector(modules.toList: _*)
+    fn(injector)
+  }
+
+  implicit def wrap(injector: Injector): InjectorWrapper = new InjectorWrapper(injector)
+}
+
+class InjectorWrapper(injector: Injector) {
+  def instancesWithAnnotation[T <: Annotation](annotationClass: Class[T]): List[AnyRef] = {
+    injector.getAllBindings.filter { case (k, v) =>
+      !k.getTypeLiteral.getRawType.getAnnotationsByType[T](annotationClass).isEmpty
+    }.map { case (k, v) => injector.getInstance(k).asInstanceOf[AnyRef] }.toList
+  }
+
+  def instancesOfType[T: ClassTag](typeClass: Class[T]): List[T] = {
+    injector.findBindingsByType(TypeLiteral.get(classTag[T].runtimeClass)).map { b =>
+      injector.getInstance(b.getKey).asInstanceOf[T]
+    }.toList
+  }
+
+  def dumpBindings(): Unit = {
+    injector.getBindings.keySet() foreach { k =>
+      println(s"bind key = ${k.toString}")
+    }
+  }
+}
\ No newline at end of file
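
Outside the app bootstrap the same helpers work anywhere an injector is wanted, e.g. (assuming a config and env in scope, as in AnomalyDetectionApp.configure):

    import org.apache.ambari.metrics.adservice.app.GuiceInjector.{withInjector, wrap}

    withInjector(new AnomalyDetectionAppModule(config, env)) { injector =>
      injector.instancesOfType(classOf[com.codahale.metrics.health.HealthCheck])
        .foreach(h => println(h.getClass.getName))
      injector.dumpBindings()
    }
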
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/common/PhoenixQueryConstants.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/common/PhoenixQueryConstants.scala
index 5e90d2b..17173ec 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/common/PhoenixQueryConstants.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/common/PhoenixQueryConstants.scala
@@ -72,7 +72,7 @@ object PhoenixQueryConstants {
     "METHOD_NAME VARCHAR, " +
     "METHOD_TYPE VARCHAR, " +
     "PARAMETERS VARCHAR " +
-    "SNAPSHOT_TIME UNSIGNED LONG NOT NULL "
+    "SNAPSHOT_TIME UNSIGNED LONG NOT NULL " +
     "CONSTRAINT pk PRIMARY KEY (METRIC_UUID, METHOD_NAME)) " +
     "DATA_BLOCK_ENCODING='FAST_DIFF', IMMUTABLE_ROWS=true, COMPRESSION='SNAPPY'"
 
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/resource/AnomalyResource.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/resource/AnomalyResource.scala
new file mode 100644
index 0000000..fb9921a
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/resource/AnomalyResource.scala
@@ -0,0 +1,35 @@
+/**
+  * Licensed to the Apache Software Foundation (ASF) under one
+  * or more contributor license agreements.  See the NOTICE file
+  * distributed with this work for additional information
+  * regarding copyright ownership.  The ASF licenses this file
+  * to you under the Apache License, Version 2.0 (the
+  * "License"); you may not use this file except in compliance
+  * with the License.  You may obtain a copy of the License at
+  *
+  * http://www.apache.org/licenses/LICENSE-2.0
+  *
+  * Unless required by applicable law or agreed to in writing, software
+  * distributed under the License is distributed on an "AS IS" BASIS,
+  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  * See the License for the specific language governing permissions and
+  * limitations under the License.
+  */
+package org.apache.ambari.metrics.adservice.resource
+
+import javax.ws.rs.{GET, Path, Produces}
+import javax.ws.rs.core.Response
+import javax.ws.rs.core.MediaType.APPLICATION_JSON
+
+import org.joda.time.DateTime
+
+@Path("/topNAnomalies")
+class AnomalyResource {
+
+  @GET
+  @Produces(Array(APPLICATION_JSON))
+  def default: Response = {
+    Response.ok.entity(Map("message" -> "Anomaly Detection Service!",
+      "today" -> DateTime.now.toString("MM-dd-yyyy hh:mm"))).build()
+  }
+}
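
Given the Jackson setup in AnomalyDetectionApp (Scala module registered, WRAP_ROOT_VALUE off), the Map above serializes flat, so a response looks like this (timestamp illustrative):

    {"message":"Anomaly Detection Service!","today":"10-12-2017 12:09"}

which is the exact shape DefaultADResourceSpecTest asserts against below.
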
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/resource/RootResource.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/resource/RootResource.scala
new file mode 100644
index 0000000..b92a145
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/resource/RootResource.scala
@@ -0,0 +1,35 @@
+/**
+  * Licensed to the Apache Software Foundation (ASF) under one
+  * or more contributor license agreements.  See the NOTICE file
+  * distributed with this work for additional information
+  * regarding copyright ownership.  The ASF licenses this file
+  * to you under the Apache License, Version 2.0 (the
+  * "License"); you may not use this file except in compliance
+  * with the License.  You may obtain a copy of the License at
+  *
+  * http://www.apache.org/licenses/LICENSE-2.0
+  *
+  * Unless required by applicable law or agreed to in writing, software
+  * distributed under the License is distributed on an "AS IS" BASIS,
+  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  * See the License for the specific language governing permissions and
+  * limitations under the License.
+  */
+package org.apache.ambari.metrics.adservice.resource
+
+import javax.ws.rs.{GET, Path, Produces}
+import javax.ws.rs.core.Response
+import javax.ws.rs.core.MediaType.APPLICATION_JSON
+
+import org.joda.time.DateTime
+
+@Path("/")
+class RootResource {
+
+  @Produces(Array(APPLICATION_JSON))
+  @GET
+  def default: Response = {
+    Response.ok.entity(Map("name" -> "anomaly-detection-service", "today" -> DateTime.now.toString("MM-dd-yyyy hh:mm"))).build()
+  }
+
+}
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/service/ADQueryService.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/service/ADQueryService.scala
new file mode 100644
index 0000000..0161166
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/service/ADQueryService.scala
@@ -0,0 +1,22 @@
+/**
+  * Licensed to the Apache Software Foundation (ASF) under one
+  * or more contributor license agreements.  See the NOTICE file
+  * distributed with this work for additional information
+  * regarding copyright ownership.  The ASF licenses this file
+  * to you under the Apache License, Version 2.0 (the
+  * "License"); you may not use this file except in compliance
+  * with the License.  You may obtain a copy of the License at
+  *
+  * http://www.apache.org/licenses/LICENSE-2.0
+  *
+  * Unless required by applicable law or agreed to in writing, software
+  * distributed under the License is distributed on an "AS IS" BASIS,
+  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  * See the License for the specific language governing permissions and
+  * limitations under the License.
+  */
+package org.apache.ambari.metrics.adservice.service
+
+trait ADQueryService {
+
+}
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/service/ADQueryServiceImpl.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/service/ADQueryServiceImpl.scala
new file mode 100644
index 0000000..fe00f58
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/service/ADQueryServiceImpl.scala
@@ -0,0 +1,22 @@
+/**
+  * Licensed to the Apache Software Foundation (ASF) under one
+  * or more contributor license agreements.  See the NOTICE file
+  * distributed with this work for additional information
+  * regarding copyright ownership.  The ASF licenses this file
+  * to you under the Apache License, Version 2.0 (the
+  * "License"); you may not use this file except in compliance
+  * with the License.  You may obtain a copy of the License at
+  *
+  * http://www.apache.org/licenses/LICENSE-2.0
+  *
+  * Unless required by applicable law or agreed to in writing, software
+  * distributed under the License is distributed on an "AS IS" BASIS,
+  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  * See the License for the specific language governing permissions and
+  * limitations under the License.
+  */
+package org.apache.ambari.metrics.adservice.service
+
+class ADQueryServiceImpl extends ADQueryService {
+
+}
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/app/DefaultADResourceSpecTest.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/app/DefaultADResourceSpecTest.scala
new file mode 100644
index 0000000..c088855
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/app/DefaultADResourceSpecTest.scala
@@ -0,0 +1,54 @@
+/**
+  * Licensed to the Apache Software Foundation (ASF) under one
+  * or more contributor license agreements.  See the NOTICE file
+  * distributed with this work for additional information
+  * regarding copyright ownership.  The ASF licenses this file
+  * to you under the Apache License, Version 2.0 (the
+  * "License"); you may not use this file except in compliance
+  * with the License.  You may obtain a copy of the License at
+  *
+  * http://www.apache.org/licenses/LICENSE-2.0
+  *
+  * Unless required by applicable law or agreed to in writing, software
+  * distributed under the License is distributed on an "AS IS" BASIS,
+  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  * See the License for the specific language governing permissions and
+  * limitations under the License.
+  */
+package org.apache.ambari.metrics.adservice.app
+
+import javax.ws.rs.client.Client
+import javax.ws.rs.core.MediaType.APPLICATION_JSON
+
+import org.apache.ambari.metrics.adservice.app.DropwizardAppRuleHelper.withAppRunning
+import org.glassfish.jersey.client.{ClientConfig, JerseyClientBuilder}
+import org.glassfish.jersey.client.ClientProperties.{CONNECT_TIMEOUT, READ_TIMEOUT}
+import org.glassfish.jersey.filter.LoggingFilter
+import org.glassfish.jersey.jaxb.internal.XmlJaxbElementProvider
+import org.joda.time.DateTime
+import org.scalatest.{FunSpec, Matchers}
+
+import com.google.common.io.Resources
+
+class DefaultADResourceSpecTest extends FunSpec with Matchers {
+
+  describe("/topNAnomalies") {
+    it("Must return default message") {
+      withAppRunning(classOf[AnomalyDetectionApp], Resources.getResource("config.yml").getPath) { rule =>
+        val json = client.target(s"http://localhost:${rule.getLocalPort}/topNAnomalies")
+          .request().accept(APPLICATION_JSON).buildGet().invoke(classOf[String])
+        val now = DateTime.now.toString("MM-dd-yyyy hh:mm")
+        assert(json == "{\"message\":\"Anomaly Detection Service!\"," + "\"today\":\"" + now + "\"}")
+      }
+    }
+  }
+
+  def client: Client = {
+    val config = new ClientConfig()
+    config.register(classOf[LoggingFilter])
+    config.register(classOf[XmlJaxbElementProvider.App])
+    config.property(CONNECT_TIMEOUT, 5000)
+    config.property(READ_TIMEOUT, 10000)
+    JerseyClientBuilder.createClient(config)
+  }
+}
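
One caveat on the assertion in the spec above: formatting DateTime.now down to minute granularity makes the check racy when the HTTP call and the assertion straddle a minute boundary. A more tolerant variant might read as follows (a sketch only, assuming the same response shape and the ScalaTest Matchers already mixed in):

    // Accept either the current or the previous minute to avoid boundary flakiness.
    val now = DateTime.now
    json should include("\"message\":\"Anomaly Detection Service!\"")
    json should (include(now.toString("MM-dd-yyyy hh:mm")) or
      include(now.minusMinutes(1).toString("MM-dd-yyyy hh:mm")))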
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/app/DropwizardAppRuleHelper.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/app/DropwizardAppRuleHelper.scala
new file mode 100644
index 0000000..6017bb4
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/app/DropwizardAppRuleHelper.scala
@@ -0,0 +1,39 @@
+/**
+  * Licensed to the Apache Software Foundation (ASF) under one
+  * or more contributor license agreements.  See the NOTICE file
+  * distributed with this work for additional information
+  * regarding copyright ownership.  The ASF licenses this file
+  * to you under the Apache License, Version 2.0 (the
+  * "License"); you may not use this file except in compliance
+  * with the License.  You may obtain a copy of the License at
+  *
+  * http://www.apache.org/licenses/LICENSE-2.0
+  *
+  * Unless required by applicable law or agreed to in writing, software
+  * distributed under the License is distributed on an "AS IS" BASIS,
+  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  * See the License for the specific language governing permissions and
+  * limitations under the License.
+  */
+package org.apache.ambari.metrics.adservice.app
+
+import org.junit.runner.Description
+
+import io.dropwizard.Configuration
+import io.dropwizard.testing.ConfigOverride
+import io.dropwizard.testing.junit.DropwizardAppRule
+
+import scala.collection.mutable
+
+object DropwizardAppRuleHelper {
+
+  def withAppRunning[C <: Configuration](serviceClass: Class[_ <: io.dropwizard.Application[C]],
+                                         configPath: String, configOverrides: ConfigOverride*)
+                                        (fn: (DropwizardAppRule[C]) => Unit) {
+    val overrides = new mutable.ListBuffer[ConfigOverride]
+    configOverrides.foreach { o => overrides += o }
+    val rule = new DropwizardAppRule(serviceClass, configPath, overrides.toList: _*)
+    rule.apply(() => fn(rule), Description.EMPTY).evaluate()
+  }
+
+}
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/app/DropwizardResourceTestRuleHelper.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/app/DropwizardResourceTestRuleHelper.scala
new file mode 100644
index 0000000..f896db4
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/app/DropwizardResourceTestRuleHelper.scala
@@ -0,0 +1,33 @@
+/**
+  * Licensed to the Apache Software Foundation (ASF) under one
+  * or more contributor license agreements.  See the NOTICE file
+  * distributed with this work for additional information
+  * regarding copyright ownership.  The ASF licenses this file
+  * to you under the Apache License, Version 2.0 (the
+  * "License"); you may not use this file except in compliance
+  * with the License.  You may obtain a copy of the License at
+  *
+  * http://www.apache.org/licenses/LICENSE-2.0
+  *
+  * Unless required by applicable law or agreed to in writing, software
+  * distributed under the License is distributed on an "AS IS" BASIS,
+  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  * See the License for the specific language governing permissions and
+  * limitations under the License.
+  */
+package org.apache.ambari.metrics.adservice.app
+
+import org.junit.runner.Description
+
+import io.dropwizard.testing.junit.ResourceTestRule
+
+object DropwizardResourceTestRuleHelper {
+  def withResourceTestRule(configBlock: (ResourceTestRule.Builder) => Unit)(testBlock: (ResourceTestRule) => Unit) {
+    val builder = new ResourceTestRule.Builder()
+    configBlock(builder)
+    val rule = builder.build()
+    rule.apply(() => {
+      testBlock(rule)
+    }, Description.EMPTY).evaluate()
+  }
+}
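
For reference, a hypothetical usage of the helper above (the no-arg AnomalyResource construction and the /anomaly path are illustrative assumptions, not part of this patch):

    import org.apache.ambari.metrics.adservice.app.DropwizardResourceTestRuleHelper.withResourceTestRule
    import org.apache.ambari.metrics.adservice.resource.AnomalyResource

    // Exercise a single JAX-RS resource without booting the full Dropwizard app.
    withResourceTestRule(_.addResource(new AnomalyResource)) { rule =>
      val response = rule.client().target("/anomaly").request().get()
      assert(response.getStatus == 200)
    }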
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/common/ADManagerConfigurationTest.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/common/ADManagerConfigurationTest.scala
index 535dc9e..40b9d6a 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/common/ADManagerConfigurationTest.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/common/ADManagerConfigurationTest.scala
@@ -1,3 +1,20 @@
+/**
+  * Licensed to the Apache Software Foundation (ASF) under one
+  * or more contributor license agreements.  See the NOTICE file
+  * distributed with this work for additional information
+  * regarding copyright ownership.  The ASF licenses this file
+  * to you under the Apache License, Version 2.0 (the
+  * "License"); you may not use this file except in compliance
+  * with the License.  You may obtain a copy of the License at
+  *
+  * http://www.apache.org/licenses/LICENSE-2.0
+  *
+  * Unless required by applicable law or agreed to in writing, software
+  * distributed under the License is distributed on an "AS IS" BASIS,
+  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  * See the License for the specific language governing permissions and
+  * limitations under the License.
+  */
 package org.apache.ambari.metrics.adservice.common
 
 import org.scalatest.FlatSpec
diff --git a/ambari-metrics/ambari-metrics-common/pom.xml b/ambari-metrics/ambari-metrics-common/pom.xml
index 5477270..34bf5cb 100644
--- a/ambari-metrics/ambari-metrics-common/pom.xml
+++ b/ambari-metrics/ambari-metrics-common/pom.xml
@@ -63,7 +63,7 @@
       <plugin>
         <groupId>org.apache.maven.plugins</groupId>
         <artifactId>maven-shade-plugin</artifactId>
-        <version>2.3</version>
+        <version>3.1.0</version>
         <executions>
           <!-- Run shade goal on package phase -->
           <execution>
@@ -124,6 +124,16 @@
                   <shadedPattern>org.apache.hadoop.metrics2.sink.relocated.apache.http</shadedPattern>
                 </relocation>
               </relocations>
+              <filters>
+                <filter>
+                  <artifact>*:*</artifact>
+                  <excludes>
+                    <exclude>META-INF/*.SF</exclude>
+                    <exclude>META-INF/*.DSA</exclude>
+                    <exclude>META-INF/*.RSA</exclude>
+                  </excludes>
+                </filter>
+              </filters>
             </configuration>
           </execution>
         </executions>
diff --git a/ambari-metrics/pom.xml b/ambari-metrics/pom.xml
index f2acb4f..a8c71e6 100644
--- a/ambari-metrics/pom.xml
+++ b/ambari-metrics/pom.xml
@@ -280,6 +280,7 @@
             <exclude>derby.log</exclude>
             <exclude>**/*.nuspec</exclude>
             <exclude>**/*.json</exclude>
+            <exclude>**/out</exclude>
           </excludes>
         </configuration>
         <executions>
diff --git a/ambari-utility/pom.xml b/ambari-utility/pom.xml
index fed63c6..bc98350 100644
--- a/ambari-utility/pom.xml
+++ b/ambari-utility/pom.xml
@@ -114,8 +114,8 @@
         <artifactId>maven-compiler-plugin</artifactId>
         <version>3.2</version>
         <configuration>
-          <source>1.7</source>
-          <target>1.7</target>
+          <source>1.8</source>
+          <target>1.8</target>
           <useIncrementalCompilation>false</useIncrementalCompilation>
         </configuration>
       </plugin>

[ambari] 27/39: AMBARI-22365. Add storage support for storing metric definitions using LevelDB. (swagle)

avijayan pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/ambari.git

commit cc510d67ea7d6e1ba25cd229807528fd8d13c8a6
Author: Siddharth Wagle <sw...@hortonworks.com>
AuthorDate: Fri Nov 3 10:06:12 2017 -0700

    AMBARI-22365. Add storage support for storing metric definitions using LevelDB. (swagle)
---
 .../pom.xml                                        |  19 +++-
 .../src/main/resources/config.yml                  |   8 ++
 .../adservice/app/AnomalyDetectionAppConfig.scala  |  11 ++-
 .../adservice/app/AnomalyDetectionAppModule.scala  |   4 +-
 .../MetricDefinitionDBConfiguration.scala          |  38 ++++++++
 .../metrics/adservice/db/MetadataDatasource.scala  |  73 +++++++++++++++
 .../adservice/leveldb/LevelDBDatasource.scala      | 102 +++++++++++++++++++++
 .../adservice/leveldb/LevelDBDataSourceTest.scala  |  57 ++++++++++++
 8 files changed, 306 insertions(+), 6 deletions(-)

diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/pom.xml b/ambari-metrics/ambari-metrics-anomaly-detection-service/pom.xml
index 44bdc1f..cfa8124 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/pom.xml
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/pom.xml
@@ -424,11 +424,18 @@
       <artifactId>jackson-datatype-jdk8</artifactId>
       <version>${jackson.version}</version>
     </dependency>
+
     <dependency>
-      <groupId>com.fasterxml.jackson.core</groupId>
-      <artifactId>jackson-databind</artifactId>
-      <version>${jackson.version}</version>
+      <groupId>org.fusesource.leveldbjni</groupId>
+      <artifactId>leveldbjni-all</artifactId>
+      <version>1.8</version>
+    </dependency>
+    <dependency>
+      <groupId>org.iq80.leveldb</groupId>
+      <artifactId>leveldb</artifactId>
+      <version>0.9</version>
     </dependency>
+
     <dependency>
       <groupId>junit</groupId>
       <artifactId>junit</artifactId>
@@ -452,5 +459,11 @@
       <version>2.5</version>
       <scope>test</scope>
     </dependency>
+    <dependency>
+      <groupId>org.mockito</groupId>
+      <artifactId>mockito-all</artifactId>
+      <version>1.8.4</version>
+      <scope>test</scope>
+    </dependency>
   </dependencies>
 </project>
\ No newline at end of file
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/resources/config.yml b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/resources/config.yml
index 920c50c..299a472 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/resources/config.yml
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/resources/config.yml
@@ -30,6 +30,14 @@ metricsCollector:
 adQueryService:
   anomalyDataTtl: 604800
 
+metricDefinitionDB:
+  # force checksum verification of all data that is read from the file system on behalf of a particular read
+  verifyChecksums: true
+  # raise an error as soon as it detects an internal corruption
+  performParanoidChecks: false
+  # Path to Level DB directory
+  dbDirPath: /var/lib/ambari-metrics-anomaly-detection/
+
 #subsystemService:
 #  spark:
 #  pointInTime:
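
The new metricDefinitionDB block is picked up by Dropwizard's Jackson-backed configuration loader. As a standalone illustration of the binding (assuming jackson-dataformat-yaml is on the classpath), the same block can be read like this:

    import java.io.File
    import com.fasterxml.jackson.databind.ObjectMapper
    import com.fasterxml.jackson.dataformat.yaml.YAMLFactory

    // Read just the metricDefinitionDB section of config.yml.
    val yaml = new ObjectMapper(new YAMLFactory())
    val dbConf = yaml.readTree(new File("config.yml")).path("metricDefinitionDB")
    val dbDirPath = dbConf.path("dbDirPath").asText()                // "/var/lib/ambari-metrics-anomaly-detection/"
    val verifyChecksums = dbConf.path("verifyChecksums").asBoolean() // true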
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionAppConfig.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionAppConfig.scala
index c1ef0d1..aa20223 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionAppConfig.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionAppConfig.scala
@@ -20,10 +20,9 @@ package org.apache.ambari.metrics.adservice.app
 
 import javax.validation.Valid
 
-import org.apache.ambari.metrics.adservice.configuration.{AdServiceConfiguration, HBaseConfiguration, MetricCollectorConfiguration, MetricDefinitionServiceConfiguration}
+import org.apache.ambari.metrics.adservice.configuration._
 
 import com.fasterxml.jackson.annotation.JsonProperty
-
 import io.dropwizard.Configuration
 
 /**
@@ -46,6 +45,12 @@ class AnomalyDetectionAppConfig extends Configuration {
   @Valid
   private val adServiceConfiguration = new AdServiceConfiguration
 
+  /**
+    * LevelDB settings for metrics definitions
+    */
+  @Valid
+  private val metricDefinitionDBConfiguration = new MetricDefinitionDBConfiguration
+
   /*
    HBase Conf
     */
@@ -66,4 +71,6 @@ class AnomalyDetectionAppConfig extends Configuration {
   @JsonProperty("metricsCollector")
   def getMetricCollectorConfiguration: MetricCollectorConfiguration = metricCollectorConfiguration
 
+  @JsonProperty("metricDefinitionDB")
+  def getMetricDefinitionDBConfiguration: MetricDefinitionDBConfiguration = metricDefinitionDBConfiguration
 }
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionAppModule.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionAppModule.scala
index 7425a7e..28b2880 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionAppModule.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/app/AnomalyDetectionAppModule.scala
@@ -17,13 +17,14 @@
   */
 package org.apache.ambari.metrics.adservice.app
 
+import org.apache.ambari.metrics.adservice.db.MetadataDatasource
+import org.apache.ambari.metrics.adservice.leveldb.LevelDBDataSource
 import org.apache.ambari.metrics.adservice.resource.{AnomalyResource, RootResource}
 import org.apache.ambari.metrics.adservice.service.{ADQueryService, ADQueryServiceImpl}
 
 import com.codahale.metrics.health.HealthCheck
 import com.google.inject.AbstractModule
 import com.google.inject.multibindings.Multibinder
-
 import io.dropwizard.setup.Environment
 
 class AnomalyDetectionAppModule(config: AnomalyDetectionAppConfig, env: Environment) extends AbstractModule {
@@ -35,5 +36,6 @@ class AnomalyDetectionAppModule(config: AnomalyDetectionAppConfig, env: Environm
     bind(classOf[AnomalyResource])
     bind(classOf[RootResource])
     bind(classOf[ADQueryService]).to(classOf[ADQueryServiceImpl])
+    bind(classOf[MetadataDatasource]).to(classOf[LevelDBDataSource])
   }
 }
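
With this binding, every injection point that asks for a MetadataDatasource receives the LevelDB-backed singleton. Functionally that is equivalent to constructing it by hand (a sketch; appConfig stands in for the loaded AnomalyDetectionAppConfig):

    import org.apache.ambari.metrics.adservice.db.MetadataDatasource
    import org.apache.ambari.metrics.adservice.leveldb.LevelDBDataSource

    val datasource: MetadataDatasource = new LevelDBDataSource(appConfig)
    datasource.initialize() // opens (or creates) the LevelDB store under dbDirPath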
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/configuration/MetricDefinitionDBConfiguration.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/configuration/MetricDefinitionDBConfiguration.scala
new file mode 100644
index 0000000..79a350c
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/configuration/MetricDefinitionDBConfiguration.scala
@@ -0,0 +1,38 @@
+/**
+  * Licensed to the Apache Software Foundation (ASF) under one
+  * or more contributor license agreements.  See the NOTICE file
+  * distributed with this work for additional information
+  * regarding copyright ownership.  The ASF licenses this file
+  * to you under the Apache License, Version 2.0 (the
+  * "License"); you may not use this file except in compliance
+  * with the License.  You may obtain a copy of the License at
+  *
+  * http://www.apache.org/licenses/LICENSE-2.0
+  *
+  * Unless required by applicable law or agreed to in writing, software
+  * distributed under the License is distributed on an "AS IS" BASIS,
+  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  * See the License for the specific language governing permissions and
+  * limitations under the License.
+  */
+
+package org.apache.ambari.metrics.adservice.configuration
+
+import javax.validation.constraints.NotNull
+
+import com.fasterxml.jackson.annotation.JsonProperty
+
+class MetricDefinitionDBConfiguration {
+
+  @NotNull
+  private var dbDirPath: String = _
+
+  @JsonProperty("verifyChecksums")
+  def verifyChecksums: Boolean = true
+
+  @JsonProperty("performParanoidChecks")
+  def performParanoidChecks: Boolean = false
+
+  @JsonProperty("dbDirPath")
+  def getDbDirPath: String = dbDirPath
+}
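
Note that verifyChecksums and performParanoidChecks above are parameterless defs returning constants, so the values set in config.yml are never actually injected; only dbDirPath is backed by a field. A variant that lets Jackson bind all three properties (a sketch, not what was committed) pairs each annotated getter with an annotated setter:

    class MetricDefinitionDBConfiguration {

      @NotNull
      private var dbDirPath: String = _
      private var verifyChecksums: Boolean = true        // default if the key is absent
      private var performParanoidChecks: Boolean = false // default if the key is absent

      @JsonProperty("dbDirPath")
      def getDbDirPath: String = dbDirPath

      @JsonProperty("dbDirPath")
      def setDbDirPath(path: String): Unit = dbDirPath = path

      @JsonProperty("verifyChecksums")
      def isVerifyChecksums: Boolean = verifyChecksums

      @JsonProperty("verifyChecksums")
      def setVerifyChecksums(flag: Boolean): Unit = verifyChecksums = flag

      @JsonProperty("performParanoidChecks")
      def isPerformParanoidChecks: Boolean = performParanoidChecks

      @JsonProperty("performParanoidChecks")
      def setPerformParanoidChecks(flag: Boolean): Unit = performParanoidChecks = flag
    }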
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/MetadataDatasource.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/MetadataDatasource.scala
new file mode 100644
index 0000000..aa6694a
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/db/MetadataDatasource.scala
@@ -0,0 +1,73 @@
+/**
+  * Licensed to the Apache Software Foundation (ASF) under one
+  * or more contributor license agreements.  See the NOTICE file
+  * distributed with this work for additional information
+  * regarding copyright ownership.  The ASF licenses this file
+  * to you under the Apache License, Version 2.0 (the
+  * "License"); you may not use this file except in compliance
+  * with the License.  You may obtain a copy of the License at
+  *
+  * http://www.apache.org/licenses/LICENSE-2.0
+  *
+  * Unless required by applicable law or agreed to in writing, software
+  * distributed under the License is distributed on an "AS IS" BASIS,
+  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  * See the License for the specific language governing permissions and
+  * limitations under the License.
+  */
+
+package org.apache.ambari.metrics.adservice.db
+
+trait MetadataDatasource {
+
+  type Key = Array[Byte]
+  type Value = Array[Byte]
+
+  /**
+    *  Idempotent call at the start of the application to initialize db
+    */
+  def initialize(): Unit
+
+  /**
+    * Obtains the value associated with a key. The (key, value) pair must already be present in the DataSource.
+    *
+    * @param key the key to look up
+    * @return the value associated with the given key.
+    */
+  def apply(key: Key): Value = get(key).get
+
+  /**
+    * Obtains the value associated with a key, if one exists.
+    *
+    * @param key the key to look up
+    * @return Some(value) if the key is present, None otherwise.
+    */
+  def get(key: Key): Option[Value]
+
+
+  /**
+    * This function associates a key to a value, overwriting if necessary
+    */
+  def put(key: Key, value: Value): Unit
+
+  /**
+    * Delete key from the db
+    */
+  def delete(key: Key): Unit
+
+  /**
+    * This function updates the DataSource by deleting, updating and inserting new (key-value) pairs.
+    *
+    * @param toRemove which includes all the keys to be removed from the DataSource.
+    * @param toUpsert which includes all the (key-value) pairs to be inserted into the DataSource.
+    *                 If a key is already in the DataSource its value will be updated.
+    * Note: the DataSource is updated in place; this method returns Unit.
+    */
+  def update(toRemove: Seq[Key], toUpsert: Seq[(Key, Value)]): Unit
+
+  /**
+    * This function closes the DataSource, without deleting the files used by it.
+    */
+  def close(): Unit
+
+}
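
To make the trait's contract concrete, here is an illustrative in-memory implementation (purely hypothetical, e.g. for tests that do not need LevelDB on disk). Byte-array keys are wrapped in a Seq so that lookups compare by content rather than by reference:

    import scala.collection.concurrent.TrieMap

    class InMemoryMetadataDatasource extends MetadataDatasource {

      // Array[Byte] uses reference equality, so keys are stored as Seq[Byte].
      private val store = TrieMap.empty[Seq[Byte], Value]

      override def initialize(): Unit = ()
      override def get(key: Key): Option[Value] = store.get(key.toSeq)
      override def put(key: Key, value: Value): Unit = store.put(key.toSeq, value)
      override def delete(key: Key): Unit = store.remove(key.toSeq)

      override def update(toRemove: Seq[Key], toUpsert: Seq[(Key, Value)]): Unit = {
        toRemove.foreach(delete)
        toUpsert.foreach { case (k, v) => put(k, v) }
      }

      override def close(): Unit = store.clear()
    }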
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/leveldb/LevelDBDatasource.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/leveldb/LevelDBDatasource.scala
new file mode 100644
index 0000000..6d185bf
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/leveldb/LevelDBDatasource.scala
@@ -0,0 +1,102 @@
+/**
+  * Licensed to the Apache Software Foundation (ASF) under one
+  * or more contributor license agreements.  See the NOTICE file
+  * distributed with this work for additional information
+  * regarding copyright ownership.  The ASF licenses this file
+  * to you under the Apache License, Version 2.0 (the
+  * "License"); you may not use this file except in compliance
+  * with the License.  You may obtain a copy of the License at
+  *
+  * http://www.apache.org/licenses/LICENSE-2.0
+  *
+  * Unless required by applicable law or agreed to in writing, software
+  * distributed under the License is distributed on an "AS IS" BASIS,
+  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  * See the License for the specific language governing permissions and
+  * limitations under the License.
+  */
+
+package org.apache.ambari.metrics.adservice.leveldb
+
+import java.io.File
+
+import org.apache.ambari.metrics.adservice.app.AnomalyDetectionAppConfig
+import org.apache.ambari.metrics.adservice.configuration.MetricDefinitionDBConfiguration
+import org.apache.ambari.metrics.adservice.db.MetadataDatasource
+import org.iq80.leveldb.{DB, Options, WriteOptions}
+import org.iq80.leveldb.impl.Iq80DBFactory
+
+import com.google.inject.Singleton
+
+@Singleton
+class LevelDBDataSource(appConfig: AnomalyDetectionAppConfig) extends MetadataDatasource {
+
+  private var db: DB = _
+  @volatile var isInitialized: Boolean = false
+
+  override def initialize(): Unit = {
+    if (isInitialized) return 
+
+    val configuration: MetricDefinitionDBConfiguration = appConfig.getMetricDefinitionDBConfiguration
+
+    db = createDB(new LevelDbConfig {
+      override val createIfMissing: Boolean = true
+      override val verifyChecksums: Boolean = configuration.verifyChecksums
+      override val paranoidChecks: Boolean = configuration.performParanoidChecks
+      override val path: String = configuration.getDbDirPath
+    })
+    isInitialized = true
+  }
+
+  private def createDB(levelDbConfig: LevelDbConfig): DB = {
+    import levelDbConfig._
+
+    val options = new Options()
+      .createIfMissing(createIfMissing)
+      .paranoidChecks(paranoidChecks) // raise an error as soon as it detects an internal corruption
+      .verifyChecksums(verifyChecksums) // force checksum verification of all data that is read from the file system on behalf of a particular read
+
+    Iq80DBFactory.factory.open(new File(path), options)
+  }
+
+  override def close(): Unit = {
+    db.close()
+  }
+
+  /**
+    * Obtains the value associated with a key, if one exists.
+    *
+    * @param key the key to look up
+    * @return Some(value) if the key is present, None otherwise.
+    */
+  override def get(key: Key): Option[Value] = Option(db.get(key))
+
+  /**
+    * This function updates the DataSource by deleting, updating and inserting new (key-value) pairs.
+    *
+    * @param toRemove which includes all the keys to be removed from the DataSource.
+    * @param toUpsert which includes all the (key-value) pairs to be inserted into the DataSource.
+    *                 If a key is already in the DataSource its value will be updated.
+    */
+  override def update(toRemove: Seq[Key], toUpsert: Seq[(Key, Value)]): Unit = {
+    val batch = db.createWriteBatch()
+    toRemove.foreach { key => batch.delete(key) }
+    toUpsert.foreach { item => batch.put(item._1, item._2) }
+    db.write(batch, new WriteOptions())
+  }
+
+  override def put(key: Key, value: Value): Unit = {
+    db.put(key, value)
+  }
+
+  override def delete(key: Key): Unit = {
+    db.delete(key)
+  }
+}
+
+trait LevelDbConfig {
+  val createIfMissing: Boolean
+  val paranoidChecks: Boolean
+  val verifyChecksums: Boolean
+  val path: String
+}
\ No newline at end of file
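
A small caveat in update() above: the WriteBatch returned by db.createWriteBatch() implements Closeable but is never closed. The pure-Java Iq80 factory tolerates this, but closing the batch keeps the code correct for JNI-backed factories too. A sketch of the same method with the batch closed after the write:

    override def update(toRemove: Seq[Key], toUpsert: Seq[(Key, Value)]): Unit = {
      val batch = db.createWriteBatch()
      try {
        toRemove.foreach(key => batch.delete(key))
        toUpsert.foreach { case (k, v) => batch.put(k, v) }
        db.write(batch, new WriteOptions())
      } finally {
        batch.close() // release any resources held by the batch
      }
    }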
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/leveldb/LevelDBDataSourceTest.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/leveldb/LevelDBDataSourceTest.scala
new file mode 100644
index 0000000..2ddb7b8
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/scala/org/apache/ambari/metrics/adservice/leveldb/LevelDBDataSourceTest.scala
@@ -0,0 +1,57 @@
+/**
+  * Licensed to the Apache Software Foundation (ASF) under one
+  * or more contributor license agreements.  See the NOTICE file
+  * distributed with this work for additional information
+  * regarding copyright ownership.  The ASF licenses this file
+  * to you under the Apache License, Version 2.0 (the
+  * "License"); you may not use this file except in compliance
+  * with the License.  You may obtain a copy of the License at
+  *
+  * http://www.apache.org/licenses/LICENSE-2.0
+  *
+  * Unless required by applicable law or agreed to in writing, software
+  * distributed under the License is distributed on an "AS IS" BASIS,
+  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  * See the License for the specific language governing permissions and
+  * limitations under the License.
+  */
+package org.apache.ambari.metrics.adservice.leveldb
+
+import java.io.File
+
+import org.apache.ambari.metrics.adservice.app.AnomalyDetectionAppConfig
+import org.apache.ambari.metrics.adservice.configuration.MetricDefinitionDBConfiguration
+import org.iq80.leveldb.util.FileUtils
+import org.mockito.Mockito.when
+import org.scalatest.{BeforeAndAfter, FunSuite, Matchers}
+import org.scalatest.mockito.MockitoSugar
+
+class LevelDBDataSourceTest extends FunSuite with BeforeAndAfter with Matchers with MockitoSugar {
+
+  var db: LevelDBDataSource = _
+  var file : File = FileUtils.createTempDir("adservice-leveldb-test")
+
+  before {
+    val appConfig: AnomalyDetectionAppConfig = mock[AnomalyDetectionAppConfig]
+    val mdConfig : MetricDefinitionDBConfiguration = mock[MetricDefinitionDBConfiguration]
+
+    when(appConfig.getMetricDefinitionDBConfiguration).thenReturn(mdConfig)
+    when(mdConfig.verifyChecksums).thenReturn(true)
+    when(mdConfig.performParanoidChecks).thenReturn(false)
+    when(mdConfig.getDbDirPath).thenReturn(file.getAbsolutePath)
+
+    db = new LevelDBDataSource(appConfig)
+    db.initialize()
+  }
+
+  test("testOperations") {
+    db.put("Hello".getBytes(), "World".getBytes())
+    assert(db.get("Hello".getBytes()).get.sameElements("World".getBytes()))
+    db.update(Seq("Hello".getBytes()), Seq(("Hello".getBytes(), "Mars".getBytes())))
+    assert(db.get("Hello".getBytes()).get.sameElements("Mars".getBytes()))
+  }
+
+  after {
+    FileUtils.deleteRecursively(file)
+  }
+}

[ambari] 19/39: AMBARI-22077 : Create maven module and package structure for the anomaly detection engine. (Commit 2) (avijayan)

avijayan pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/ambari.git

commit fb75e61f60bd9925a127510ef53f92834631fc74
Author: Aravindan Vijayan <av...@hortonworks.com>
AuthorDate: Wed Sep 27 15:02:56 2017 -0700

    AMBARI-22077 : Create maven module and package structure for the anomaly detection engine. (Commit 2) (avijayan)
---
 .../pom.xml                                        |   4 +-
 .../adservice}/prototype/common/DataSeries.java    |   2 +-
 .../adservice}/prototype/common/ResultSet.java     |   2 +-
 .../prototype/common/StatisticUtils.java           |   2 +-
 .../prototype/core/AmbariServerInterface.java      |   2 +-
 .../prototype/core/MetricKafkaProducer.java        |   2 +-
 .../prototype/core/MetricSparkConsumer.java        |   6 +-
 .../prototype/core/MetricsCollectorInterface.java  |   4 +-
 .../prototype/core/PointInTimeADSystem.java        |  10 +-
 .../prototype/core/RFunctionInvoker.java           |   6 +-
 .../adservice}/prototype/core/TrendADSystem.java   |  10 +-
 .../adservice}/prototype/core/TrendMetric.java     |   2 +-
 .../methods/AnomalyDetectionTechnique.java         |   4 +-
 .../prototype/methods/MetricAnomaly.java           |   2 +-
 .../adservice}/prototype/methods/ema/EmaModel.java |   4 +-
 .../prototype/methods/ema/EmaModelLoader.java      |   8 +-
 .../prototype/methods/ema/EmaTechnique.java        |   6 +-
 .../prototype/methods/hsdev/HsdevTechnique.java    |  10 +-
 .../prototype/methods/kstest/KSTechnique.java      |  10 +-
 .../utilities/MetricAnomalyDetectorTestInput.java  |   2 +-
 .../testing/utilities/MetricAnomalyTester.java     | 168 +++++++++++++++++++++
 .../utilities/TestMetricSeriesGenerator.java       |   2 +-
 .../testing/utilities/TestSeriesInputRequest.java  |   2 +-
 .../src/main/resources/R-scripts/ema.R             |   0
 .../src/main/resources/R-scripts/hsdev.r           |   0
 .../src/main/resources/R-scripts/iforest.R         |   0
 .../src/main/resources/R-scripts/kstest.r          |   0
 .../src/main/resources/R-scripts/test.R            |   0
 .../src/main/resources/R-scripts/tukeys.r          |   0
 .../src/main/resources/input-config.properties     |   0
 .../spark/prototype}/MetricAnomalyDetector.scala   |   9 +-
 .../spark/prototype}/SparkPhoenixReader.scala      |   4 +-
 .../adservice}/prototype/TestEmaTechnique.java     |  10 +-
 .../adservice}/prototype/TestRFunctionInvoker.java |  10 +-
 .../metrics/adservice}/prototype/TestTukeys.java   |  10 +-
 .../seriesgenerator/AbstractMetricSeries.java      |   2 +-
 .../seriesgenerator/DualBandMetricSeries.java      |   2 +-
 .../MetricSeriesGeneratorFactory.java              |   4 +-
 .../seriesgenerator/MetricSeriesGeneratorTest.java |   2 +-
 .../seriesgenerator/MonotonicMetricSeries.java     |   2 +-
 .../seriesgenerator/NormalMetricSeries.java        |   2 +-
 .../SteadyWithTurbulenceMetricSeries.java          |   2 +-
 .../seriesgenerator/StepFunctionMetricSeries.java  |   2 +-
 .../seriesgenerator/UniformMetricSeries.java       |   2 +-
 .../testing/utilities/MetricAnomalyTester.java     | 166 --------------------
 ambari-metrics/pom.xml                             |   2 +-
 46 files changed, 246 insertions(+), 255 deletions(-)

diff --git a/ambari-metrics/ambari-metrics-anomaly-detector/pom.xml b/ambari-metrics/ambari-metrics-anomaly-detection-service/pom.xml
similarity index 98%
rename from ambari-metrics/ambari-metrics-anomaly-detector/pom.xml
rename to ambari-metrics/ambari-metrics-anomaly-detection-service/pom.xml
index e6e12f2..1a10f86 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detector/pom.xml
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/pom.xml
@@ -26,7 +26,7 @@
         <version>2.0.0.0-SNAPSHOT</version>
     </parent>
     <modelVersion>4.0.0</modelVersion>
-    <artifactId>ambari-metrics-anomaly-detector</artifactId>
+    <artifactId>ambari-metrics-anomaly-detection-service</artifactId>
     <version>2.0.0.0-SNAPSHOT</version>
     <properties>
         <scala.version>2.10.4</scala.version>
@@ -78,7 +78,7 @@
             </plugin>
         </plugins>
     </build>
-    <name>Ambari Metrics Anomaly Detector</name>
+    <name>Ambari Metrics Anomaly Detection Service</name>
     <packaging>jar</packaging>
 
     <dependencies>
diff --git a/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/common/DataSeries.java b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/common/DataSeries.java
similarity index 95%
rename from ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/common/DataSeries.java
rename to ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/common/DataSeries.java
index eb19857..54b402f 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/common/DataSeries.java
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/common/DataSeries.java
@@ -15,7 +15,7 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.ambari.metrics.alertservice.prototype.common;
+package org.apache.ambari.metrics.adservice.prototype.common;
 
 import java.util.Arrays;
 
diff --git a/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/common/ResultSet.java b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/common/ResultSet.java
similarity index 95%
rename from ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/common/ResultSet.java
rename to ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/common/ResultSet.java
index 101b0e9..dd3038f 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/common/ResultSet.java
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/common/ResultSet.java
@@ -15,7 +15,7 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.ambari.metrics.alertservice.prototype.common;
+package org.apache.ambari.metrics.adservice.prototype.common;
 
 
 import java.util.ArrayList;
diff --git a/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/common/StatisticUtils.java b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/common/StatisticUtils.java
similarity index 96%
rename from ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/common/StatisticUtils.java
rename to ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/common/StatisticUtils.java
index 4ea4ac5..7f0aed3 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/common/StatisticUtils.java
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/common/StatisticUtils.java
@@ -15,7 +15,7 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.ambari.metrics.alertservice.prototype.common;
+package org.apache.ambari.metrics.adservice.prototype.common;
 
 
 import java.util.ArrayList;
diff --git a/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/core/AmbariServerInterface.java b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/core/AmbariServerInterface.java
similarity index 98%
rename from ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/core/AmbariServerInterface.java
rename to ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/core/AmbariServerInterface.java
index b6b1bf5..920d758 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/core/AmbariServerInterface.java
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/core/AmbariServerInterface.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 
-package org.apache.ambari.metrics.alertservice.prototype.core;
+package org.apache.ambari.metrics.adservice.prototype.core;
 
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
diff --git a/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/core/MetricKafkaProducer.java b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/core/MetricKafkaProducer.java
similarity index 97%
rename from ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/core/MetricKafkaProducer.java
rename to ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/core/MetricKafkaProducer.java
index 2287ee3..167fbb3 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/core/MetricKafkaProducer.java
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/core/MetricKafkaProducer.java
@@ -15,7 +15,7 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.ambari.metrics.alertservice.prototype.core;
+package org.apache.ambari.metrics.adservice.prototype.core;
 
 import com.fasterxml.jackson.databind.JsonNode;
 import com.fasterxml.jackson.databind.ObjectMapper;
diff --git a/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/core/MetricSparkConsumer.java b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/core/MetricSparkConsumer.java
similarity index 97%
rename from ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/core/MetricSparkConsumer.java
rename to ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/core/MetricSparkConsumer.java
index 706c69f..e8257e5 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/core/MetricSparkConsumer.java
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/core/MetricSparkConsumer.java
@@ -15,11 +15,11 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.ambari.metrics.alertservice.prototype.core;
+package org.apache.ambari.metrics.adservice.prototype.core;
 
 import com.fasterxml.jackson.databind.ObjectMapper;
-import org.apache.ambari.metrics.alertservice.prototype.methods.MetricAnomaly;
-import org.apache.ambari.metrics.alertservice.prototype.methods.ema.EmaTechnique;
+import org.apache.ambari.metrics.adservice.prototype.methods.MetricAnomaly;
+import org.apache.ambari.metrics.adservice.prototype.methods.ema.EmaTechnique;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
diff --git a/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/core/MetricsCollectorInterface.java b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/core/MetricsCollectorInterface.java
similarity index 98%
rename from ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/core/MetricsCollectorInterface.java
rename to ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/core/MetricsCollectorInterface.java
index 246565d..da3999a 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/core/MetricsCollectorInterface.java
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/core/MetricsCollectorInterface.java
@@ -15,9 +15,9 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.ambari.metrics.alertservice.prototype.core;
+package org.apache.ambari.metrics.adservice.prototype.core;
 
-import org.apache.ambari.metrics.alertservice.prototype.methods.MetricAnomaly;
+import org.apache.ambari.metrics.adservice.prototype.methods.MetricAnomaly;
 import org.apache.commons.collections.CollectionUtils;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
diff --git a/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/core/PointInTimeADSystem.java b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/core/PointInTimeADSystem.java
similarity index 96%
rename from ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/core/PointInTimeADSystem.java
rename to ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/core/PointInTimeADSystem.java
index c579515..0a2271a 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/core/PointInTimeADSystem.java
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/core/PointInTimeADSystem.java
@@ -15,12 +15,12 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.ambari.metrics.alertservice.prototype.core;
+package org.apache.ambari.metrics.adservice.prototype.core;
 
-import org.apache.ambari.metrics.alertservice.prototype.common.DataSeries;
-import org.apache.ambari.metrics.alertservice.prototype.common.ResultSet;
-import org.apache.ambari.metrics.alertservice.prototype.methods.ema.EmaModel;
-import org.apache.ambari.metrics.alertservice.prototype.methods.ema.EmaTechnique;
+import org.apache.ambari.metrics.adservice.prototype.common.ResultSet;
+import org.apache.ambari.metrics.adservice.prototype.common.DataSeries;
+import org.apache.ambari.metrics.adservice.prototype.methods.ema.EmaModel;
+import org.apache.ambari.metrics.adservice.prototype.methods.ema.EmaTechnique;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
diff --git a/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/core/RFunctionInvoker.java b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/core/RFunctionInvoker.java
similarity index 96%
rename from ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/core/RFunctionInvoker.java
rename to ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/core/RFunctionInvoker.java
index 4538f0b..8f1eba6 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/core/RFunctionInvoker.java
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/core/RFunctionInvoker.java
@@ -15,11 +15,11 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.ambari.metrics.alertservice.prototype.core;
+package org.apache.ambari.metrics.adservice.prototype.core;
 
 
-import org.apache.ambari.metrics.alertservice.prototype.common.ResultSet;
-import org.apache.ambari.metrics.alertservice.prototype.common.DataSeries;
+import org.apache.ambari.metrics.adservice.prototype.common.DataSeries;
+import org.apache.ambari.metrics.adservice.prototype.common.ResultSet;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.rosuda.JRI.REXP;
diff --git a/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/core/TrendADSystem.java b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/core/TrendADSystem.java
similarity index 96%
rename from ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/core/TrendADSystem.java
rename to ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/core/TrendADSystem.java
index 2a205d1..f5ec83a 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/core/TrendADSystem.java
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/core/TrendADSystem.java
@@ -15,12 +15,12 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.ambari.metrics.alertservice.prototype.core;
+package org.apache.ambari.metrics.adservice.prototype.core;
 
-import org.apache.ambari.metrics.alertservice.prototype.common.DataSeries;
-import org.apache.ambari.metrics.alertservice.prototype.methods.MetricAnomaly;
-import org.apache.ambari.metrics.alertservice.prototype.methods.hsdev.HsdevTechnique;
-import org.apache.ambari.metrics.alertservice.prototype.methods.kstest.KSTechnique;
+import org.apache.ambari.metrics.adservice.prototype.methods.MetricAnomaly;
+import org.apache.ambari.metrics.adservice.prototype.methods.hsdev.HsdevTechnique;
+import org.apache.ambari.metrics.adservice.prototype.common.DataSeries;
+import org.apache.ambari.metrics.adservice.prototype.methods.kstest.KSTechnique;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
diff --git a/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/core/TrendMetric.java b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/core/TrendMetric.java
similarity index 94%
rename from ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/core/TrendMetric.java
rename to ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/core/TrendMetric.java
index 0640142..d4db227 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/core/TrendMetric.java
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/core/TrendMetric.java
@@ -15,7 +15,7 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.ambari.metrics.alertservice.prototype.core;
+package org.apache.ambari.metrics.adservice.prototype.core;
 
 import java.io.Serializable;
 
diff --git a/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/AnomalyDetectionTechnique.java b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/methods/AnomalyDetectionTechnique.java
similarity index 90%
rename from ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/AnomalyDetectionTechnique.java
rename to ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/methods/AnomalyDetectionTechnique.java
index 0b10b4b..c19adda 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/AnomalyDetectionTechnique.java
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/methods/AnomalyDetectionTechnique.java
@@ -15,13 +15,11 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.ambari.metrics.alertservice.prototype.methods;
+package org.apache.ambari.metrics.adservice.prototype.methods;
 
 import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
 
-import java.sql.Time;
 import java.util.List;
-import java.util.Map;
 
 public abstract class AnomalyDetectionTechnique {
 
diff --git a/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/MetricAnomaly.java b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/methods/MetricAnomaly.java
similarity index 97%
rename from ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/MetricAnomaly.java
rename to ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/methods/MetricAnomaly.java
index da4f030..251603b 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/MetricAnomaly.java
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/methods/MetricAnomaly.java
@@ -15,7 +15,7 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.ambari.metrics.alertservice.prototype.methods;
+package org.apache.ambari.metrics.adservice.prototype.methods;
 
 import java.io.Serializable;
 import java.util.HashMap;
diff --git a/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/ema/EmaModel.java b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/methods/ema/EmaModel.java
similarity index 95%
rename from ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/ema/EmaModel.java
rename to ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/methods/ema/EmaModel.java
index a31410d..593028e 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/ema/EmaModel.java
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/methods/ema/EmaModel.java
@@ -15,7 +15,7 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.ambari.metrics.alertservice.prototype.methods.ema;
+package org.apache.ambari.metrics.adservice.prototype.methods.ema;
 
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
@@ -23,7 +23,7 @@ import org.apache.commons.logging.LogFactory;
 import javax.xml.bind.annotation.XmlRootElement;
 import java.io.Serializable;
 
-import static org.apache.ambari.metrics.alertservice.prototype.methods.ema.EmaTechnique.suppressAnomaliesTheshold;
+import static org.apache.ambari.metrics.adservice.prototype.methods.ema.EmaTechnique.suppressAnomaliesTheshold;
 
 @XmlRootElement
 public class EmaModel implements Serializable {
diff --git a/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/ema/EmaModelLoader.java b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/methods/ema/EmaModelLoader.java
similarity index 87%
rename from ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/ema/EmaModelLoader.java
rename to ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/methods/ema/EmaModelLoader.java
index 62749c1..7623f27 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/ema/EmaModelLoader.java
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/methods/ema/EmaModelLoader.java
@@ -15,19 +15,13 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.ambari.metrics.alertservice.prototype.methods.ema;
+package org.apache.ambari.metrics.adservice.prototype.methods.ema;
 
-import com.google.gson.Gson;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.spark.SparkContext;
 import org.apache.spark.mllib.util.Loader;
 
-import java.io.IOException;
-import java.nio.charset.StandardCharsets;
-import java.nio.file.Files;
-import java.nio.file.Paths;
-
 public class EmaModelLoader implements Loader<EmaTechnique> {
     private static final Log LOG = LogFactory.getLog(EmaModelLoader.class);
 
diff --git a/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/ema/EmaTechnique.java b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/methods/ema/EmaTechnique.java
similarity index 95%
rename from ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/ema/EmaTechnique.java
rename to ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/methods/ema/EmaTechnique.java
index 52c6cf3..7ec17d8 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/ema/EmaTechnique.java
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/methods/ema/EmaTechnique.java
@@ -15,13 +15,13 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.ambari.metrics.alertservice.prototype.methods.ema;
+package org.apache.ambari.metrics.adservice.prototype.methods.ema;
 
 import com.google.gson.Gson;
+import org.apache.ambari.metrics.adservice.prototype.methods.AnomalyDetectionTechnique;
+import org.apache.ambari.metrics.adservice.prototype.methods.MetricAnomaly;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
-import org.apache.ambari.metrics.alertservice.prototype.methods.MetricAnomaly;
-import org.apache.ambari.metrics.alertservice.prototype.methods.AnomalyDetectionTechnique;
 import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
 import org.apache.spark.SparkContext;
 import org.apache.spark.mllib.util.Saveable;
diff --git a/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/hsdev/HsdevTechnique.java b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/methods/hsdev/HsdevTechnique.java
similarity index 86%
rename from ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/hsdev/HsdevTechnique.java
rename to ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/methods/hsdev/HsdevTechnique.java
index 04f4a73..6facc99 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/hsdev/HsdevTechnique.java
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/methods/hsdev/HsdevTechnique.java
@@ -15,14 +15,14 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.ambari.metrics.alertservice.prototype.methods.hsdev;
+package org.apache.ambari.metrics.adservice.prototype.methods.hsdev;
 
-import org.apache.ambari.metrics.alertservice.prototype.common.DataSeries;
-import org.apache.ambari.metrics.alertservice.prototype.methods.MetricAnomaly;
+import org.apache.ambari.metrics.adservice.prototype.common.DataSeries;
+import org.apache.ambari.metrics.adservice.prototype.methods.MetricAnomaly;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
-import static org.apache.ambari.metrics.alertservice.prototype.common.StatisticUtils.median;
-import static org.apache.ambari.metrics.alertservice.prototype.common.StatisticUtils.sdev;
+import static org.apache.ambari.metrics.adservice.prototype.common.StatisticUtils.median;
+import static org.apache.ambari.metrics.adservice.prototype.common.StatisticUtils.sdev;
 
 import java.io.Serializable;
 import java.util.Date;
diff --git a/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/kstest/KSTechnique.java b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/methods/kstest/KSTechnique.java
similarity index 88%
rename from ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/kstest/KSTechnique.java
rename to ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/methods/kstest/KSTechnique.java
index a9360d3..4727c6f 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/methods/kstest/KSTechnique.java
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/methods/kstest/KSTechnique.java
@@ -16,12 +16,12 @@
  * limitations under the License.
  */
 
-package org.apache.ambari.metrics.alertservice.prototype.methods.kstest;
+package org.apache.ambari.metrics.adservice.prototype.methods.kstest;
 
-import org.apache.ambari.metrics.alertservice.prototype.core.RFunctionInvoker;
-import org.apache.ambari.metrics.alertservice.prototype.common.DataSeries;
-import org.apache.ambari.metrics.alertservice.prototype.common.ResultSet;
-import org.apache.ambari.metrics.alertservice.prototype.methods.MetricAnomaly;
+import org.apache.ambari.metrics.adservice.prototype.common.DataSeries;
+import org.apache.ambari.metrics.adservice.prototype.common.ResultSet;
+import org.apache.ambari.metrics.adservice.prototype.methods.MetricAnomaly;
+import org.apache.ambari.metrics.adservice.prototype.core.RFunctionInvoker;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 
diff --git a/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/testing/utilities/MetricAnomalyDetectorTestInput.java b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/testing/utilities/MetricAnomalyDetectorTestInput.java
similarity index 97%
rename from ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/testing/utilities/MetricAnomalyDetectorTestInput.java
rename to ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/testing/utilities/MetricAnomalyDetectorTestInput.java
index 268cd15..9a002a1 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/testing/utilities/MetricAnomalyDetectorTestInput.java
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/testing/utilities/MetricAnomalyDetectorTestInput.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 
-package org.apache.ambari.metrics.alertservice.prototype.testing.utilities;
+package org.apache.ambari.metrics.adservice.prototype.testing.utilities;
 
 import javax.xml.bind.annotation.XmlRootElement;
 import java.util.List;
diff --git a/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/testing/utilities/MetricAnomalyTester.java b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/testing/utilities/MetricAnomalyTester.java
new file mode 100644
index 0000000..d079e66
--- /dev/null
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/testing/utilities/MetricAnomalyTester.java
@@ -0,0 +1,168 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.ambari.metrics.adservice.prototype.testing.utilities;
+
+import org.apache.ambari.metrics.adservice.prototype.common.DataSeries;
+import org.apache.ambari.metrics.adservice.prototype.common.ResultSet;
+import org.apache.ambari.metrics.adservice.prototype.core.MetricsCollectorInterface;
+import org.apache.ambari.metrics.adservice.prototype.core.RFunctionInvoker;
+import org.apache.commons.collections.CollectionUtils;
+import org.apache.commons.lang.StringUtils;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
+import org.apache.hadoop.metrics2.sink.timeline.TimelineMetrics;
+
+import java.net.InetAddress;
+import java.net.UnknownHostException;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.TreeMap;
+
+/**
+ * Class which was originally used to send test series from AMS to Spark through Kafka.
+ */
+public class MetricAnomalyTester {
+
+//  public static String appId = MetricsCollectorInterface.serviceName;
+//  static final Log LOG = LogFactory.getLog(MetricAnomalyTester.class);
+//  static Map<String, TimelineMetric> timelineMetricMap = new HashMap<>();
+//
+//  public static TimelineMetrics runTestAnomalyRequest(MetricAnomalyDetectorTestInput input) throws UnknownHostException {
+//
+//    long currentTime = System.currentTimeMillis();
+//    TimelineMetrics timelineMetrics = new TimelineMetrics();
+//    String hostname = InetAddress.getLocalHost().getHostName();
+//
+//    //Train data
+//    TimelineMetric metric1 = new TimelineMetric();
+//    if (StringUtils.isNotEmpty(input.getTrainDataName())) {
+//      metric1 = timelineMetricMap.get(input.getTrainDataName());
+//      if (metric1 == null) {
+//        metric1 = new TimelineMetric();
+//        double[] trainSeries = MetricSeriesGeneratorFactory.generateSeries(input.getTrainDataType(), input.getTrainDataSize(), input.getTrainDataConfigs());
+//        metric1.setMetricName(input.getTrainDataName());
+//        metric1.setAppId(appId);
+//        metric1.setHostName(hostname);
+//        metric1.setStartTime(currentTime);
+//        metric1.setInstanceId(null);
+//        metric1.setMetricValues(getAsTimeSeries(currentTime, trainSeries));
+//        timelineMetricMap.put(input.getTrainDataName(), metric1);
+//      }
+//      timelineMetrics.getMetrics().add(metric1);
+//    } else {
+//      LOG.error("No train data name specified");
+//    }
+//
+//    //Test data
+//    TimelineMetric metric2 = new TimelineMetric();
+//    if (StringUtils.isNotEmpty(input.getTestDataName())) {
+//      metric2 = timelineMetricMap.get(input.getTestDataName());
+//      if (metric2 == null) {
+//        metric2 = new TimelineMetric();
+//        double[] testSeries = MetricSeriesGeneratorFactory.generateSeries(input.getTestDataType(), input.getTestDataSize(), input.getTestDataConfigs());
+//        metric2.setMetricName(input.getTestDataName());
+//        metric2.setAppId(appId);
+//        metric2.setHostName(hostname);
+//        metric2.setStartTime(currentTime);
+//        metric2.setInstanceId(null);
+//        metric2.setMetricValues(getAsTimeSeries(currentTime, testSeries));
+//        timelineMetricMap.put(input.getTestDataName(), metric2);
+//      }
+//      timelineMetrics.getMetrics().add(metric2);
+//    } else {
+//      LOG.warn("No test data name specified");
+//    }
+//
+//    //Invoke method
+//    if (CollectionUtils.isNotEmpty(input.getMethods())) {
+//      RFunctionInvoker.setScriptsDir("/etc/ambari-metrics-collector/conf/R-scripts");
+//      for (String methodType : input.getMethods()) {
+//        ResultSet result = RFunctionInvoker.executeMethod(methodType, getAsDataSeries(metric1), getAsDataSeries(metric2), input.getMethodConfigs());
+//        TimelineMetric timelineMetric = getAsTimelineMetric(result, methodType, input, currentTime, hostname);
+//        if (timelineMetric != null) {
+//          timelineMetrics.getMetrics().add(timelineMetric);
+//        }
+//      }
+//    } else {
+//      LOG.warn("No anomaly method requested");
+//    }
+//
+//    return timelineMetrics;
+//  }
+//
+//
+//  private static TimelineMetric getAsTimelineMetric(ResultSet result, String methodType, MetricAnomalyDetectorTestInput input, long currentTime, String hostname) {
+//
+//    if (result == null) {
+//      return null;
+//    }
+//
+//    TimelineMetric timelineMetric = new TimelineMetric();
+//    if (methodType.equals("tukeys") || methodType.equals("ema")) {
+//      timelineMetric.setMetricName(input.getTrainDataName() + "_" + input.getTestDataName() + "_" + methodType + "_" + currentTime);
+//      timelineMetric.setHostName(hostname);
+//      timelineMetric.setAppId(appId);
+//      timelineMetric.setInstanceId(null);
+//      timelineMetric.setStartTime(currentTime);
+//
+//      TreeMap<Long, Double> metricValues = new TreeMap<>();
+//      if (result.resultset.size() > 0) {
+//        double[] ts = result.resultset.get(0);
+//        double[] metrics = result.resultset.get(1);
+//        for (int i = 0; i < ts.length; i++) {
+//          if (i == 0) {
+//            timelineMetric.setStartTime((long) ts[i]);
+//          }
+//          metricValues.put((long) ts[i], metrics[i]);
+//        }
+//      }
+//      timelineMetric.setMetricValues(metricValues);
+//      return timelineMetric;
+//    }
+//    return null;
+//  }
+//
+//
+//  private static TreeMap<Long, Double> getAsTimeSeries(long currentTime, double[] values) {
+//
+//    long startTime = currentTime - (values.length - 1) * 60 * 1000;
+//    TreeMap<Long, Double> metricValues = new TreeMap<>();
+//
+//    for (int i = 0; i < values.length; i++) {
+//      metricValues.put(startTime, values[i]);
+//      startTime += (60 * 1000);
+//    }
+//    return metricValues;
+//  }
+//
+//  private static DataSeries getAsDataSeries(TimelineMetric timelineMetric) {
+//
+//    TreeMap<Long, Double> metricValues = timelineMetric.getMetricValues();
+//    double[] timestamps = new double[metricValues.size()];
+//    double[] values = new double[metricValues.size()];
+//    int i = 0;
+//
+//    for (Long timestamp : metricValues.keySet()) {
+//      timestamps[i] = timestamp;
+//      values[i++] = metricValues.get(timestamp);
+//    }
+//    return new DataSeries(timelineMetric.getMetricName() + "_" + timelineMetric.getAppId() + "_" + timelineMetric.getHostName(), timestamps, values);
+//  }
+}
diff --git a/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/testing/utilities/TestMetricSeriesGenerator.java b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/testing/utilities/TestMetricSeriesGenerator.java
similarity index 97%
rename from ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/testing/utilities/TestMetricSeriesGenerator.java
rename to ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/testing/utilities/TestMetricSeriesGenerator.java
index b817f3e..3b2605b 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/testing/utilities/TestMetricSeriesGenerator.java
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/testing/utilities/TestMetricSeriesGenerator.java
@@ -15,7 +15,7 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.ambari.metrics.alertservice.prototype.testing.utilities;
+package org.apache.ambari.metrics.adservice.prototype.testing.utilities;
 
 /**
  * Class which was originally used to send test series from AMS to Spark through Kafka.
diff --git a/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/testing/utilities/TestSeriesInputRequest.java b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/testing/utilities/TestSeriesInputRequest.java
similarity index 97%
rename from ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/testing/utilities/TestSeriesInputRequest.java
rename to ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/testing/utilities/TestSeriesInputRequest.java
index a424f8e..d7db9ca 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/testing/utilities/TestSeriesInputRequest.java
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/java/org/apache/ambari/metrics/adservice/prototype/testing/utilities/TestSeriesInputRequest.java
@@ -15,7 +15,7 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.ambari.metrics.alertservice.prototype.testing.utilities;
+package org.apache.ambari.metrics.adservice.prototype.testing.utilities;
 
 import org.apache.htrace.fasterxml.jackson.core.JsonProcessingException;
 import org.apache.htrace.fasterxml.jackson.databind.ObjectMapper;
diff --git a/ambari-metrics/ambari-metrics-anomaly-detector/src/main/resources/R-scripts/ema.R b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/resources/R-scripts/ema.R
similarity index 100%
rename from ambari-metrics/ambari-metrics-anomaly-detector/src/main/resources/R-scripts/ema.R
rename to ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/resources/R-scripts/ema.R
diff --git a/ambari-metrics/ambari-metrics-anomaly-detector/src/main/resources/R-scripts/hsdev.r b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/resources/R-scripts/hsdev.r
similarity index 100%
rename from ambari-metrics/ambari-metrics-anomaly-detector/src/main/resources/R-scripts/hsdev.r
rename to ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/resources/R-scripts/hsdev.r
diff --git a/ambari-metrics/ambari-metrics-anomaly-detector/src/main/resources/R-scripts/iforest.R b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/resources/R-scripts/iforest.R
similarity index 100%
rename from ambari-metrics/ambari-metrics-anomaly-detector/src/main/resources/R-scripts/iforest.R
rename to ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/resources/R-scripts/iforest.R
diff --git a/ambari-metrics/ambari-metrics-anomaly-detector/src/main/resources/R-scripts/kstest.r b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/resources/R-scripts/kstest.r
similarity index 100%
rename from ambari-metrics/ambari-metrics-anomaly-detector/src/main/resources/R-scripts/kstest.r
rename to ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/resources/R-scripts/kstest.r
diff --git a/ambari-metrics/ambari-metrics-anomaly-detector/src/main/resources/R-scripts/test.R b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/resources/R-scripts/test.R
similarity index 100%
rename from ambari-metrics/ambari-metrics-anomaly-detector/src/main/resources/R-scripts/test.R
rename to ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/resources/R-scripts/test.R
diff --git a/ambari-metrics/ambari-metrics-anomaly-detector/src/main/resources/R-scripts/tukeys.r b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/resources/R-scripts/tukeys.r
similarity index 100%
rename from ambari-metrics/ambari-metrics-anomaly-detector/src/main/resources/R-scripts/tukeys.r
rename to ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/resources/R-scripts/tukeys.r
diff --git a/ambari-metrics/ambari-metrics-anomaly-detector/src/main/resources/input-config.properties b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/resources/input-config.properties
similarity index 100%
rename from ambari-metrics/ambari-metrics-anomaly-detector/src/main/resources/input-config.properties
rename to ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/resources/input-config.properties
diff --git a/ambari-metrics/ambari-metrics-anomaly-detector/src/main/scala/org/apache/ambari/metrics/spark/MetricAnomalyDetector.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/spark/prototype/MetricAnomalyDetector.scala
similarity index 93%
rename from ambari-metrics/ambari-metrics-anomaly-detector/src/main/scala/org/apache/ambari/metrics/spark/MetricAnomalyDetector.scala
rename to ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/spark/prototype/MetricAnomalyDetector.scala
index 324058b..6122f5e 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detector/src/main/scala/org/apache/ambari/metrics/spark/MetricAnomalyDetector.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/spark/prototype/MetricAnomalyDetector.scala
@@ -14,8 +14,7 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.ambari.metrics.spark
-
+package org.apache.ambari.metrics.adservice.spark.prototype
 
 import java.io.{FileInputStream, IOException, InputStream}
 import java.util
@@ -23,12 +22,12 @@ import java.util.Properties
 import java.util.logging.LogManager
 
 import com.fasterxml.jackson.databind.ObjectMapper
-import org.apache.ambari.metrics.alertservice.prototype.core.MetricsCollectorInterface
+import org.apache.ambari.metrics.adservice.prototype.core.MetricsCollectorInterface
 import org.apache.spark.SparkConf
 import org.apache.spark.streaming._
 import org.apache.spark.streaming.kafka._
-import org.apache.ambari.metrics.alertservice.prototype.methods.{AnomalyDetectionTechnique, MetricAnomaly}
-import org.apache.ambari.metrics.alertservice.prototype.methods.ema.{EmaModelLoader, EmaTechnique}
+import org.apache.ambari.metrics.adservice.prototype.methods.{AnomalyDetectionTechnique, MetricAnomaly}
+import org.apache.ambari.metrics.adservice.prototype.methods.ema.{EmaModelLoader, EmaTechnique}
 import org.apache.hadoop.metrics2.sink.timeline.TimelineMetrics
 import org.apache.log4j.Logger
 import org.apache.spark.storage.StorageLevel
diff --git a/ambari-metrics/ambari-metrics-anomaly-detector/src/main/scala/org/apache/ambari/metrics/spark/SparkPhoenixReader.scala b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/spark/prototype/SparkPhoenixReader.scala
similarity index 95%
rename from ambari-metrics/ambari-metrics-anomaly-detector/src/main/scala/org/apache/ambari/metrics/spark/SparkPhoenixReader.scala
rename to ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/spark/prototype/SparkPhoenixReader.scala
index ccded6b..6e1ae07 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detector/src/main/scala/org/apache/ambari/metrics/spark/SparkPhoenixReader.scala
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/main/scala/org/apache/ambari/metrics/adservice/spark/prototype/SparkPhoenixReader.scala
@@ -15,9 +15,9 @@
  * limitations under the License.
  */
 
-package org.apache.ambari.metrics.spark
+package org.apache.ambari.metrics.adservice.spark.prototype
 
-import org.apache.ambari.metrics.alertservice.prototype.methods.ema.EmaTechnique
+import org.apache.ambari.metrics.adservice.prototype.methods.ema.EmaTechnique
 import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric
 import org.apache.spark.sql.SQLContext
 import org.apache.spark.{SparkConf, SparkContext}
diff --git a/ambari-metrics/ambari-metrics-anomaly-detector/src/test/java/org/apache/ambari/metrics/alertservice/prototype/TestEmaTechnique.java b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/java/org/apache/ambari/metrics/adservice/prototype/TestEmaTechnique.java
similarity index 89%
rename from ambari-metrics/ambari-metrics-anomaly-detector/src/test/java/org/apache/ambari/metrics/alertservice/prototype/TestEmaTechnique.java
rename to ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/java/org/apache/ambari/metrics/adservice/prototype/TestEmaTechnique.java
index a0b06e6..76a00a6 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detector/src/test/java/org/apache/ambari/metrics/alertservice/prototype/TestEmaTechnique.java
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/java/org/apache/ambari/metrics/adservice/prototype/TestEmaTechnique.java
@@ -15,11 +15,11 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.ambari.metrics.alertservice.prototype;
+package org.apache.ambari.metrics.adservice.prototype;
 
-import org.apache.ambari.metrics.alertservice.prototype.core.RFunctionInvoker;
-import org.apache.ambari.metrics.alertservice.prototype.methods.MetricAnomaly;
-import org.apache.ambari.metrics.alertservice.prototype.methods.ema.EmaTechnique;
+import org.apache.ambari.metrics.adservice.prototype.core.RFunctionInvoker;
+import org.apache.ambari.metrics.adservice.prototype.methods.MetricAnomaly;
+import org.apache.ambari.metrics.adservice.prototype.methods.ema.EmaTechnique;
 import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
 import org.junit.Assert;
 import org.junit.Assume;
@@ -32,7 +32,7 @@ import java.net.URL;
 import java.util.List;
 import java.util.TreeMap;
 
-import static org.apache.ambari.metrics.alertservice.prototype.TestRFunctionInvoker.getTS;
+import static org.apache.ambari.metrics.adservice.prototype.TestRFunctionInvoker.getTS;
 
 public class TestEmaTechnique {
 
diff --git a/ambari-metrics/ambari-metrics-anomaly-detector/src/test/java/org/apache/ambari/metrics/alertservice/prototype/TestRFunctionInvoker.java b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/java/org/apache/ambari/metrics/adservice/prototype/TestRFunctionInvoker.java
similarity index 93%
rename from ambari-metrics/ambari-metrics-anomaly-detector/src/test/java/org/apache/ambari/metrics/alertservice/prototype/TestRFunctionInvoker.java
rename to ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/java/org/apache/ambari/metrics/adservice/prototype/TestRFunctionInvoker.java
index d98ef0c..98fa050 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detector/src/test/java/org/apache/ambari/metrics/alertservice/prototype/TestRFunctionInvoker.java
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/java/org/apache/ambari/metrics/adservice/prototype/TestRFunctionInvoker.java
@@ -15,12 +15,12 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.ambari.metrics.alertservice.prototype;
+package org.apache.ambari.metrics.adservice.prototype;
 
-import org.apache.ambari.metrics.alertservice.prototype.common.ResultSet;
-import org.apache.ambari.metrics.alertservice.prototype.common.DataSeries;
-import org.apache.ambari.metrics.alertservice.prototype.core.RFunctionInvoker;
-import org.apache.ambari.metrics.alertservice.seriesgenerator.UniformMetricSeries;
+import org.apache.ambari.metrics.adservice.prototype.common.DataSeries;
+import org.apache.ambari.metrics.adservice.prototype.common.ResultSet;
+import org.apache.ambari.metrics.adservice.prototype.core.RFunctionInvoker;
+import org.apache.ambari.metrics.adservice.seriesgenerator.UniformMetricSeries;
 import org.apache.commons.lang.ArrayUtils;
 import org.junit.Assert;
 import org.junit.Assume;
diff --git a/ambari-metrics/ambari-metrics-anomaly-detector/src/test/java/org/apache/ambari/metrics/alertservice/prototype/TestTukeys.java b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/java/org/apache/ambari/metrics/adservice/prototype/TestTukeys.java
similarity index 89%
rename from ambari-metrics/ambari-metrics-anomaly-detector/src/test/java/org/apache/ambari/metrics/alertservice/prototype/TestTukeys.java
rename to ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/java/org/apache/ambari/metrics/adservice/prototype/TestTukeys.java
index 86590bd..57a6f34 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detector/src/test/java/org/apache/ambari/metrics/alertservice/prototype/TestTukeys.java
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/java/org/apache/ambari/metrics/adservice/prototype/TestTukeys.java
@@ -15,12 +15,12 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.ambari.metrics.alertservice.prototype;
+package org.apache.ambari.metrics.adservice.prototype;
 
-import org.apache.ambari.metrics.alertservice.prototype.core.MetricsCollectorInterface;
-import org.apache.ambari.metrics.alertservice.prototype.core.RFunctionInvoker;
-import org.apache.ambari.metrics.alertservice.prototype.methods.MetricAnomaly;
-import org.apache.ambari.metrics.alertservice.prototype.methods.ema.EmaTechnique;
+import org.apache.ambari.metrics.adservice.prototype.methods.MetricAnomaly;
+import org.apache.ambari.metrics.adservice.prototype.core.MetricsCollectorInterface;
+import org.apache.ambari.metrics.adservice.prototype.core.RFunctionInvoker;
+import org.apache.ambari.metrics.adservice.prototype.methods.ema.EmaTechnique;
 import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
 import org.apache.hadoop.metrics2.sink.timeline.TimelineMetrics;
 import org.junit.Assume;
diff --git a/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/AbstractMetricSeries.java b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/java/org/apache/ambari/metrics/adservice/seriesgenerator/AbstractMetricSeries.java
similarity index 93%
rename from ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/AbstractMetricSeries.java
rename to ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/java/org/apache/ambari/metrics/adservice/seriesgenerator/AbstractMetricSeries.java
index a8e31bf..635a929 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/AbstractMetricSeries.java
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/java/org/apache/ambari/metrics/adservice/seriesgenerator/AbstractMetricSeries.java
@@ -15,7 +15,7 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.ambari.metrics.alertservice.seriesgenerator;
+package org.apache.ambari.metrics.adservice.seriesgenerator;
 
 public interface AbstractMetricSeries {
 
diff --git a/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/DualBandMetricSeries.java b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/java/org/apache/ambari/metrics/adservice/seriesgenerator/DualBandMetricSeries.java
similarity index 97%
rename from ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/DualBandMetricSeries.java
rename to ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/java/org/apache/ambari/metrics/adservice/seriesgenerator/DualBandMetricSeries.java
index 4158ff4..a9e3f30 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/DualBandMetricSeries.java
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/java/org/apache/ambari/metrics/adservice/seriesgenerator/DualBandMetricSeries.java
@@ -15,7 +15,7 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.ambari.metrics.alertservice.seriesgenerator;
+package org.apache.ambari.metrics.adservice.seriesgenerator;
 
 import java.util.Random;
 
diff --git a/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/MetricSeriesGeneratorFactory.java b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/java/org/apache/ambari/metrics/adservice/seriesgenerator/MetricSeriesGeneratorFactory.java
similarity index 99%
rename from ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/MetricSeriesGeneratorFactory.java
rename to ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/java/org/apache/ambari/metrics/adservice/seriesgenerator/MetricSeriesGeneratorFactory.java
index 1e37ff3..a50b433 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/MetricSeriesGeneratorFactory.java
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/java/org/apache/ambari/metrics/adservice/seriesgenerator/MetricSeriesGeneratorFactory.java
@@ -16,11 +16,9 @@
  * limitations under the License.
  */
 
-package org.apache.ambari.metrics.alertservice.seriesgenerator;
+package org.apache.ambari.metrics.adservice.seriesgenerator;
 
-import java.util.Arrays;
 import java.util.Map;
-import java.util.Random;
 
 public class MetricSeriesGeneratorFactory {
 
diff --git a/ambari-metrics/ambari-metrics-anomaly-detector/src/test/java/org/apache/ambari/metrics/alertservice/seriesgenerator/MetricSeriesGeneratorTest.java b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/java/org/apache/ambari/metrics/adservice/seriesgenerator/MetricSeriesGeneratorTest.java
similarity index 98%
rename from ambari-metrics/ambari-metrics-anomaly-detector/src/test/java/org/apache/ambari/metrics/alertservice/seriesgenerator/MetricSeriesGeneratorTest.java
rename to ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/java/org/apache/ambari/metrics/adservice/seriesgenerator/MetricSeriesGeneratorTest.java
index fe7dba9..03537e4 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detector/src/test/java/org/apache/ambari/metrics/alertservice/seriesgenerator/MetricSeriesGeneratorTest.java
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/java/org/apache/ambari/metrics/adservice/seriesgenerator/MetricSeriesGeneratorTest.java
@@ -15,7 +15,7 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.ambari.metrics.alertservice.seriesgenerator;
+package org.apache.ambari.metrics.adservice.seriesgenerator;
 
 import org.junit.Assert;
 import org.junit.Test;
diff --git a/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/MonotonicMetricSeries.java b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/java/org/apache/ambari/metrics/adservice/seriesgenerator/MonotonicMetricSeries.java
similarity index 98%
rename from ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/MonotonicMetricSeries.java
rename to ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/java/org/apache/ambari/metrics/adservice/seriesgenerator/MonotonicMetricSeries.java
index a883d08..8bd1a9b 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/MonotonicMetricSeries.java
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/java/org/apache/ambari/metrics/adservice/seriesgenerator/MonotonicMetricSeries.java
@@ -15,7 +15,7 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.ambari.metrics.alertservice.seriesgenerator;
+package org.apache.ambari.metrics.adservice.seriesgenerator;
 
 import java.util.Random;
 
diff --git a/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/NormalMetricSeries.java b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/java/org/apache/ambari/metrics/adservice/seriesgenerator/NormalMetricSeries.java
similarity index 97%
rename from ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/NormalMetricSeries.java
rename to ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/java/org/apache/ambari/metrics/adservice/seriesgenerator/NormalMetricSeries.java
index cc83d2c..fdedb6e 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/NormalMetricSeries.java
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/java/org/apache/ambari/metrics/adservice/seriesgenerator/NormalMetricSeries.java
@@ -15,7 +15,7 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.ambari.metrics.alertservice.seriesgenerator;
+package org.apache.ambari.metrics.adservice.seriesgenerator;
 
 import java.util.Random;
 
diff --git a/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/SteadyWithTurbulenceMetricSeries.java b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/java/org/apache/ambari/metrics/adservice/seriesgenerator/SteadyWithTurbulenceMetricSeries.java
similarity index 98%
rename from ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/SteadyWithTurbulenceMetricSeries.java
rename to ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/java/org/apache/ambari/metrics/adservice/seriesgenerator/SteadyWithTurbulenceMetricSeries.java
index c4ed3ba..403e599 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/SteadyWithTurbulenceMetricSeries.java
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/java/org/apache/ambari/metrics/adservice/seriesgenerator/SteadyWithTurbulenceMetricSeries.java
@@ -15,7 +15,7 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.ambari.metrics.alertservice.seriesgenerator;
+package org.apache.ambari.metrics.adservice.seriesgenerator;
 
 import java.util.Random;
 
diff --git a/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/StepFunctionMetricSeries.java b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/java/org/apache/ambari/metrics/adservice/seriesgenerator/StepFunctionMetricSeries.java
similarity index 98%
rename from ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/StepFunctionMetricSeries.java
rename to ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/java/org/apache/ambari/metrics/adservice/seriesgenerator/StepFunctionMetricSeries.java
index d5beb48..c91eac9 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/StepFunctionMetricSeries.java
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/java/org/apache/ambari/metrics/adservice/seriesgenerator/StepFunctionMetricSeries.java
@@ -15,7 +15,7 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.ambari.metrics.alertservice.seriesgenerator;
+package org.apache.ambari.metrics.adservice.seriesgenerator;
 
 import java.util.Random;
 
diff --git a/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/UniformMetricSeries.java b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/java/org/apache/ambari/metrics/adservice/seriesgenerator/UniformMetricSeries.java
similarity index 98%
rename from ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/UniformMetricSeries.java
rename to ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/java/org/apache/ambari/metrics/adservice/seriesgenerator/UniformMetricSeries.java
index a2b0eea..6122f82 100644
--- a/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/seriesgenerator/UniformMetricSeries.java
+++ b/ambari-metrics/ambari-metrics-anomaly-detection-service/src/test/java/org/apache/ambari/metrics/adservice/seriesgenerator/UniformMetricSeries.java
@@ -15,7 +15,7 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.ambari.metrics.alertservice.seriesgenerator;
+package org.apache.ambari.metrics.adservice.seriesgenerator;
 
 import java.util.Random;
 
diff --git a/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/testing/utilities/MetricAnomalyTester.java b/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/testing/utilities/MetricAnomalyTester.java
deleted file mode 100644
index 6485ebb..0000000
--- a/ambari-metrics/ambari-metrics-anomaly-detector/src/main/java/org/apache/ambari/metrics/alertservice/prototype/testing/utilities/MetricAnomalyTester.java
+++ /dev/null
@@ -1,166 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.ambari.metrics.alertservice.prototype.testing.utilities;
-
-import org.apache.ambari.metrics.alertservice.prototype.core.MetricsCollectorInterface;
-import org.apache.ambari.metrics.alertservice.prototype.core.RFunctionInvoker;
-import org.apache.ambari.metrics.alertservice.prototype.common.DataSeries;
-import org.apache.ambari.metrics.alertservice.prototype.common.ResultSet;
-import org.apache.ambari.metrics.alertservice.seriesgenerator.MetricSeriesGeneratorFactory;
-import org.apache.commons.collections.CollectionUtils;
-import org.apache.commons.lang.StringUtils;
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-import org.apache.hadoop.metrics2.sink.timeline.TimelineMetric;
-import org.apache.hadoop.metrics2.sink.timeline.TimelineMetrics;
-
-import java.net.InetAddress;
-import java.net.UnknownHostException;
-import java.util.HashMap;
-import java.util.Map;
-import java.util.TreeMap;
-
-public class MetricAnomalyTester {
-
-  public static String appId = MetricsCollectorInterface.serviceName;
-  static final Log LOG = LogFactory.getLog(MetricAnomalyTester.class);
-  static Map<String, TimelineMetric> timelineMetricMap = new HashMap<>();
-
-  public static TimelineMetrics runTestAnomalyRequest(MetricAnomalyDetectorTestInput input) throws UnknownHostException {
-
-    long currentTime = System.currentTimeMillis();
-    TimelineMetrics timelineMetrics = new TimelineMetrics();
-    String hostname = InetAddress.getLocalHost().getHostName();
-
-    //Train data
-    TimelineMetric metric1 = new TimelineMetric();
-    if (StringUtils.isNotEmpty(input.getTrainDataName())) {
-      metric1 = timelineMetricMap.get(input.getTrainDataName());
-      if (metric1 == null) {
-        metric1 = new TimelineMetric();
-        double[] trainSeries = MetricSeriesGeneratorFactory.generateSeries(input.getTrainDataType(), input.getTrainDataSize(), input.getTrainDataConfigs());
-        metric1.setMetricName(input.getTrainDataName());
-        metric1.setAppId(appId);
-        metric1.setHostName(hostname);
-        metric1.setStartTime(currentTime);
-        metric1.setInstanceId(null);
-        metric1.setMetricValues(getAsTimeSeries(currentTime, trainSeries));
-        timelineMetricMap.put(input.getTrainDataName(), metric1);
-      }
-      timelineMetrics.getMetrics().add(metric1);
-    } else {
-      LOG.error("No train data name specified");
-    }
-
-    //Test data
-    TimelineMetric metric2 = new TimelineMetric();
-    if (StringUtils.isNotEmpty(input.getTestDataName())) {
-      metric2 = timelineMetricMap.get(input.getTestDataName());
-      if (metric2 == null) {
-        metric2 = new TimelineMetric();
-        double[] testSeries = MetricSeriesGeneratorFactory.generateSeries(input.getTestDataType(), input.getTestDataSize(), input.getTestDataConfigs());
-        metric2.setMetricName(input.getTestDataName());
-        metric2.setAppId(appId);
-        metric2.setHostName(hostname);
-        metric2.setStartTime(currentTime);
-        metric2.setInstanceId(null);
-        metric2.setMetricValues(getAsTimeSeries(currentTime, testSeries));
-        timelineMetricMap.put(input.getTestDataName(), metric2);
-      }
-      timelineMetrics.getMetrics().add(metric2);
-    } else {
-      LOG.warn("No test data name specified");
-    }
-
-    //Invoke method
-    if (CollectionUtils.isNotEmpty(input.getMethods())) {
-      RFunctionInvoker.setScriptsDir("/etc/ambari-metrics-collector/conf/R-scripts");
-      for (String methodType : input.getMethods()) {
-        ResultSet result = RFunctionInvoker.executeMethod(methodType, getAsDataSeries(metric1), getAsDataSeries(metric2), input.getMethodConfigs());
-        TimelineMetric timelineMetric = getAsTimelineMetric(result, methodType, input, currentTime, hostname);
-        if (timelineMetric != null) {
-          timelineMetrics.getMetrics().add(timelineMetric);
-        }
-      }
-    } else {
-      LOG.warn("No anomaly method requested");
-    }
-
-    return timelineMetrics;
-  }
-
-
-  private static TimelineMetric getAsTimelineMetric(ResultSet result, String methodType, MetricAnomalyDetectorTestInput input, long currentTime, String hostname) {
-
-    if (result == null) {
-      return null;
-    }
-
-    TimelineMetric timelineMetric = new TimelineMetric();
-    if (methodType.equals("tukeys") || methodType.equals("ema")) {
-      timelineMetric.setMetricName(input.getTrainDataName() + "_" + input.getTestDataName() + "_" + methodType + "_" + currentTime);
-      timelineMetric.setHostName(hostname);
-      timelineMetric.setAppId(appId);
-      timelineMetric.setInstanceId(null);
-      timelineMetric.setStartTime(currentTime);
-
-      TreeMap<Long, Double> metricValues = new TreeMap<>();
-      if (result.resultset.size() > 0) {
-        double[] ts = result.resultset.get(0);
-        double[] metrics = result.resultset.get(1);
-        for (int i = 0; i < ts.length; i++) {
-          if (i == 0) {
-            timelineMetric.setStartTime((long) ts[i]);
-          }
-          metricValues.put((long) ts[i], metrics[i]);
-        }
-      }
-      timelineMetric.setMetricValues(metricValues);
-      return timelineMetric;
-    }
-    return null;
-  }
-
-
-  private static TreeMap<Long, Double> getAsTimeSeries(long currentTime, double[] values) {
-
-    long startTime = currentTime - (values.length - 1) * 60 * 1000;
-    TreeMap<Long, Double> metricValues = new TreeMap<>();
-
-    for (int i = 0; i < values.length; i++) {
-      metricValues.put(startTime, values[i]);
-      startTime += (60 * 1000);
-    }
-    return metricValues;
-  }
-
-  private static DataSeries getAsDataSeries(TimelineMetric timelineMetric) {
-
-    TreeMap<Long, Double> metricValues = timelineMetric.getMetricValues();
-    double[] timestamps = new double[metricValues.size()];
-    double[] values = new double[metricValues.size()];
-    int i = 0;
-
-    for (Long timestamp : metricValues.keySet()) {
-      timestamps[i] = timestamp;
-      values[i++] = metricValues.get(timestamp);
-    }
-    return new DataSeries(timelineMetric.getMetricName() + "_" + timelineMetric.getAppId() + "_" + timelineMetric.getHostName(), timestamps, values);
-  }
-}
diff --git a/ambari-metrics/pom.xml b/ambari-metrics/pom.xml
index 386be91..f2acb4f 100644
--- a/ambari-metrics/pom.xml
+++ b/ambari-metrics/pom.xml
@@ -34,7 +34,7 @@
     <module>ambari-metrics-grafana</module>
     <module>ambari-metrics-assembly</module>
     <module>ambari-metrics-host-aggregator</module>
-    <module>ambari-metrics-anomaly-detector</module>
+    <module>ambari-metrics-anomaly-detection-service</module>
   </modules>
   <properties>
     <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
