Posted to commits@ambari.apache.org by jo...@apache.org on 2017/05/31 20:21:41 UTC

[1/4] ambari git commit: AMBARI-21155. Design Ambari Infra Component & workflows (oleewere)

Repository: ambari
Updated Branches:
  refs/heads/branch-feature-AMBARI-12556 138aa48f5 -> fb2076c71


AMBARI-21155. Design Ambari Infra Component & workflows (oleewere)


Project: http://git-wip-us.apache.org/repos/asf/ambari/repo
Commit: http://git-wip-us.apache.org/repos/asf/ambari/commit/23e23afd
Tree: http://git-wip-us.apache.org/repos/asf/ambari/tree/23e23afd
Diff: http://git-wip-us.apache.org/repos/asf/ambari/diff/23e23afd

Branch: refs/heads/branch-feature-AMBARI-12556
Commit: 23e23afd884449812d05c490804dc4845b01ac21
Parents: 753f8aa
Author: oleewere <ol...@gmail.com>
Authored: Wed May 31 12:21:34 2017 +0200
Committer: oleewere <ol...@gmail.com>
Committed: Wed May 31 20:03:27 2017 +0200

----------------------------------------------------------------------
 ambari-infra/ambari-infra-manager/README.md     |  92 ++-
 .../ambari-infra-manager/docs/api/swagger.yaml  | 784 +++++++++++++++++++
 .../docs/images/batch-1.png                     | Bin 0 -> 20521 bytes
 .../docs/images/batch-2.png                     | Bin 0 -> 29388 bytes
 .../docs/images/batch-3.png                     | Bin 0 -> 14105 bytes
 .../docs/images/batch-4.png                     | Bin 0 -> 23277 bytes
 .../infra/common/InfraManagerConstants.java     |   2 +-
 .../infra/conf/InfraManagerApiDocConfig.java    |  35 +-
 .../conf/batch/InfraManagerBatchConfig.java     |   8 +-
 .../ambari/infra/job/dummy/DummyItemWriter.java |  13 +
 .../infra/job/dummy/DummyJobListener.java       |  39 +
 .../infra/job/dummy/DummyStepListener.java      |  41 +
 .../apache/ambari/infra/rest/JobResource.java   |   2 +-
 13 files changed, 1002 insertions(+), 14 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/ambari/blob/23e23afd/ambari-infra/ambari-infra-manager/README.md
----------------------------------------------------------------------
diff --git a/ambari-infra/ambari-infra-manager/README.md b/ambari-infra/ambari-infra-manager/README.md
index d3527c4..dd2854c 100644
--- a/ambari-infra/ambari-infra-manager/README.md
+++ b/ambari-infra/ambari-infra-manager/README.md
@@ -18,13 +18,99 @@ limitations under the License.
 -->
 
 # Ambari Infra Manager
-TODO
-## Build & Run Application
+
+## Overview
+
+Ambari Infra Manager is a REST-based management application for Ambari Infra services (such as Infra Solr). The API is built on top of [Spring Batch](http://docs.spring.io/spring-batch/reference/html/).
+
+### Architecture
+![batch-1](docs/images/batch-1.png)
+
+### Job execution overview
+![batch-2](docs/images/batch-2.png)
+
+### Job workflow
+![batch-3](docs/images/batch-3.png)
+
+### Step workflow
+![batch-4](docs/images/batch-4.png)
+
+(images originally from [here](http://docs.spring.io/spring-batch/reference/html/))
+
+## API documentation
+
+Infra Manager uses [Swagger](http://swagger.io/); the generated YAML file can be downloaded from [here](docs/api/swagger.yaml).
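+
+For a quick check, the job endpoints described by the Swagger file can be exercised with plain `curl` (the host, port and parameter values below are assumptions, not defaults documented here; adjust them to your deployment):
+```bash
+# List the names of all registered jobs (assuming Infra Manager listens on localhost:61890)
+curl "http://localhost:61890/api/v1/jobs/info/names"
+
+# Start a new instance of the dummy job; "params" is an optional key=value parameter string
+curl -X POST "http://localhost:61890/api/v1/jobs/dummyJob?params=myKey=myValue"
+```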
+
+
+## Development guide
+
+### Adding a new custom job
+
+Infra Manager is a Spring-based application that uses Java configuration, so to add a new custom job, the job/step/configuration classes need to be on the classpath. Spring beans are registered only in a specific package; therefore, when writing a plugin, all added Java classes must be placed inside the "org.apache.ambari.infra" package.
+
+The plugin will need all Spring & Spring Batch dependencies. To add a new job, define a new Configuration class in which you can declare your own jobs/steps/writers/readers/processors, as in this example:
+```java
+@Configuration
+@EnableBatchProcessing
+public class MyJobConfig {
+
+  @Inject
+  private StepBuilderFactory steps;
+
+  @Inject
+  private JobBuilderFactory jobs;
+  
+  
+  @Bean(name = "dummyStep")
+  protected Step dummyStep(ItemReader<DummyObject> reader,
+                         ItemProcessor<DummyObject, String> processor,
+                         ItemWriter<String> writer) {
+    return steps.get("dummyStep").listener(new DummyStepListener()).<DummyObject, String> chunk(2)
+      .reader(reader).processor(processor).writer(writer).build();
+  }
+  
+  @Bean(name = "dummyJob")
+  public Job job(@Qualifier("dummyStep") Step dummyStep) {
+    return jobs.get("dummyJob").listener(new DummyJobListener()).start(dummyStep).build();
+  }
+
+}
+```
+As you can see, this requires implementing [ItemWriter](https://docs.spring.io/spring-batch/apidocs/org/springframework/batch/item/ItemWriter.html), [ItemReader](http://docs.spring.io/spring-batch/trunk/apidocs/org/springframework/batch/item/ItemReader.html) and [ItemProcessor](https://docs.spring.io/spring-batch/apidocs/org/springframework/batch/item/ItemProcessor.html).
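+
+A minimal, illustrative sketch of such a reader/processor/writer trio (class names and the in-memory data are assumptions, not part of the committed code; `DummyObject` refers to the example domain class above):
+```java
+import java.util.Arrays;
+import java.util.Iterator;
+import java.util.List;
+
+import org.springframework.batch.item.ItemProcessor;
+import org.springframework.batch.item.ItemReader;
+import org.springframework.batch.item.ItemWriter;
+
+public class MyDummyReader implements ItemReader<DummyObject> {
+  private final Iterator<DummyObject> source =
+    Arrays.asList(new DummyObject(), new DummyObject()).iterator();
+
+  @Override
+  public DummyObject read() {
+    return source.hasNext() ? source.next() : null; // returning null signals end of input
+  }
+}
+
+class MyDummyProcessor implements ItemProcessor<DummyObject, String> {
+  @Override
+  public String process(DummyObject item) {
+    return String.valueOf(item); // transform each DummyObject into the String the writer expects
+  }
+}
+
+class MyDummyWriter implements ItemWriter<String> {
+  @Override
+  public void write(List<? extends String> items) {
+    items.forEach(System.out::println); // called once per chunk (with chunk(2) above: up to 2 items)
+  }
+}
+```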
+
+### Schedule custom jobs
+
+Business requirements may call for scheduling jobs (e.g. daily) instead of running them manually through the REST API. This can be done by adding a custom bean to the "org.apache.ambari.infra" package and using [@Scheduled](http://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/scheduling/annotation/Scheduled.html):
+```java
+@Named
+public class MySchedulerObject {
+
+   @Inject
+   private JobService jobService; // or JobOperator jobOperator if the spring-batch-admin-manager dependency is not included
+   
+   @Value("${infra-manager.batch.my.param:defaultString}")
+   private String myParamFromInfraManagerProperties;
+   
+   @Scheduled(cron = "*/5 * * * * MON-FRI")
+   public void doSomething() {
+      // setup job params
+      jobService.launch(jobName, jobParameters, TimeZone.getDefault());
+   }
+   
+   @Scheduled(cron = "${infra.manager.my.prop}")
+   public void doSomethingBasedOnInfraProperty() {
+      // do something ...
+   }
+}
+```
+
+You can put your cron expression inside the infra-manager.properties file to make it configurable.
+
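+For example, a hypothetical entry matching the @Scheduled expression in the snippet above (the cron value is only an illustration):
+```properties
+# Cron expression consumed by @Scheduled(cron = "${infra.manager.my.prop}")
+infra.manager.my.prop=0 0 2 * * MON-FRI
+```
+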
+### Build & Run Application
 ```bash
 mvn clean package exec:java
 ```
 
-## Build & Run Application in docker container
+### Build & Run Application in docker container
 ```bash
 cd docker
 ./infra-manager-docker.sh

http://git-wip-us.apache.org/repos/asf/ambari/blob/23e23afd/ambari-infra/ambari-infra-manager/docs/api/swagger.yaml
----------------------------------------------------------------------
diff --git a/ambari-infra/ambari-infra-manager/docs/api/swagger.yaml b/ambari-infra/ambari-infra-manager/docs/api/swagger.yaml
new file mode 100644
index 0000000..824629f
--- /dev/null
+++ b/ambari-infra/ambari-infra-manager/docs/api/swagger.yaml
@@ -0,0 +1,784 @@
+---
+swagger: "2.0"
+info:
+  description: "Manager component for Ambari Infra"
+  version: "1.0.0"
+  title: "Infra Manager REST API"
+  license:
+    name: "Apache 2.0"
+    url: "http://www.apache.org/licenses/LICENSE-2.0.html"
+basePath: "/api/v1"
+tags:
+- name: "jobs"
+schemes:
+- "http"
+- "https"
+paths:
+  /jobs:
+    get:
+      tags:
+      - "jobs"
+      summary: "Get all jobs"
+      description: ""
+      operationId: "getAllJobs"
+      produces:
+      - "application/json"
+      parameters:
+      - name: "page"
+        in: "query"
+        required: false
+        type: "integer"
+        default: 0
+        format: "int32"
+      - name: "size"
+        in: "query"
+        required: false
+        type: "integer"
+        default: 20
+        format: "int32"
+      responses:
+        200:
+          description: "successful operation"
+          schema:
+            type: "array"
+            items:
+              $ref: "#/definitions/JobInfo"
+  /jobs/executions:
+    delete:
+      tags:
+      - "jobs"
+      summary: "Stop all job executions."
+      description: ""
+      operationId: "stopAll"
+      produces:
+      - "application/json"
+      parameters: []
+      responses:
+        200:
+          description: "successful operation"
+          schema:
+            type: "integer"
+            format: "int32"
+  /jobs/executions/{jobExecutionId}:
+    get:
+      tags:
+      - "jobs"
+      summary: "Get job and step details for job execution instance."
+      description: ""
+      operationId: "getExectionInfo"
+      produces:
+      - "application/json"
+      parameters:
+      - name: "jobExecutionId"
+        in: "path"
+        required: true
+        type: "integer"
+        format: "int64"
+      responses:
+        200:
+          description: "successful operation"
+          schema:
+            $ref: "#/definitions/JobExecutionDetailsResponse"
+    delete:
+      tags:
+      - "jobs"
+      summary: "Stop or abandon a running job execution."
+      description: ""
+      operationId: "stopOrAbandonJobExecution"
+      produces:
+      - "application/json"
+      parameters:
+      - name: "jobExecutionId"
+        in: "path"
+        required: true
+        type: "integer"
+        format: "int64"
+      - name: "operation"
+        in: "query"
+        required: true
+        type: "string"
+        enum:
+        - "STOP"
+        - "ABANDON"
+      responses:
+        200:
+          description: "successful operation"
+          schema:
+            $ref: "#/definitions/JobExecutionInfoResponse"
+  /jobs/executions/{jobExecutionId}/context:
+    get:
+      tags:
+      - "jobs"
+      summary: "Get execution context for specific job."
+      description: ""
+      operationId: "getExecutionContextByJobExecId"
+      produces:
+      - "application/json"
+      parameters:
+      - name: "jobExecutionId"
+        in: "path"
+        required: true
+        type: "integer"
+        format: "int64"
+      responses:
+        200:
+          description: "successful operation"
+          schema:
+            $ref: "#/definitions/ExecutionContextResponse"
+  /jobs/executions/{jobExecutionId}/steps/{stepExecutionId}:
+    get:
+      tags:
+      - "jobs"
+      summary: "Get step execution details."
+      description: ""
+      operationId: "getStepExecution"
+      produces:
+      - "application/json"
+      parameters:
+      - name: "jobExecutionId"
+        in: "path"
+        required: true
+        type: "integer"
+        format: "int64"
+      - name: "stepExecutionId"
+        in: "path"
+        required: true
+        type: "integer"
+        format: "int64"
+      responses:
+        200:
+          description: "successful operation"
+          schema:
+            $ref: "#/definitions/StepExecutionInfoResponse"
+  /jobs/executions/{jobExecutionId}/steps/{stepExecutionId}/execution-context:
+    get:
+      tags:
+      - "jobs"
+      summary: "Get the execution context of step execution."
+      description: ""
+      operationId: "getStepExecutionContext"
+      produces:
+      - "application/json"
+      parameters:
+      - name: "jobExecutionId"
+        in: "path"
+        required: true
+        type: "integer"
+        format: "int64"
+      - name: "stepExecutionId"
+        in: "path"
+        required: true
+        type: "integer"
+        format: "int64"
+      responses:
+        200:
+          description: "successful operation"
+          schema:
+            $ref: "#/definitions/StepExecutionContextResponse"
+  /jobs/executions/{jobExecutionId}/steps/{stepExecutionId}/progress:
+    get:
+      tags:
+      - "jobs"
+      summary: "Get progress of step execution."
+      description: ""
+      operationId: "getStepExecutionProgress"
+      produces:
+      - "application/json"
+      parameters:
+      - name: "jobExecutionId"
+        in: "path"
+        required: true
+        type: "integer"
+        format: "int64"
+      - name: "stepExecutionId"
+        in: "path"
+        required: true
+        type: "integer"
+        format: "int64"
+      responses:
+        200:
+          description: "successful operation"
+          schema:
+            $ref: "#/definitions/StepExecutionProgressResponse"
+  /jobs/info/names:
+    get:
+      tags:
+      - "jobs"
+      summary: "Get all job names"
+      description: ""
+      operationId: "getAllJobNames"
+      produces:
+      - "application/json"
+      parameters: []
+      responses:
+        200:
+          description: "successful operation"
+          schema:
+            type: "array"
+            uniqueItems: true
+            items:
+              type: "string"
+  /jobs/{jobName}:
+    post:
+      tags:
+      - "jobs"
+      summary: "Start a new job instance by job name."
+      description: ""
+      operationId: "startJob"
+      produces:
+      - "application/json"
+      parameters:
+      - name: "jobName"
+        in: "path"
+        required: true
+        type: "string"
+      - name: "params"
+        in: "query"
+        required: false
+        type: "string"
+      responses:
+        200:
+          description: "successful operation"
+          schema:
+            $ref: "#/definitions/JobExecutionInfoResponse"
+  /jobs/{jobName}/executions:
+    get:
+      tags:
+      - "jobs"
+      summary: "Get the id values of all the running job instances."
+      description: ""
+      operationId: "getExecutionIdsByJobName"
+      produces:
+      - "application/json"
+      parameters:
+      - name: "jobName"
+        in: "path"
+        required: true
+        type: "string"
+      responses:
+        200:
+          description: "successful operation"
+          schema:
+            type: "array"
+            uniqueItems: true
+            items:
+              type: "integer"
+              format: "int64"
+  /jobs/{jobName}/info:
+    get:
+      tags:
+      - "jobs"
+      summary: "Get job details by job name."
+      description: ""
+      operationId: "getJobDetails"
+      produces:
+      - "application/json"
+      parameters:
+      - name: "page"
+        in: "query"
+        required: false
+        type: "integer"
+        default: 0
+        format: "int32"
+      - name: "size"
+        in: "query"
+        required: false
+        type: "integer"
+        default: 20
+        format: "int32"
+      - name: "jobName"
+        in: "path"
+        required: true
+        type: "string"
+      responses:
+        200:
+          description: "successful operation"
+          schema:
+            $ref: "#/definitions/JobDetailsResponse"
+  /jobs/{jobName}/{jobInstanceId}/executions:
+    get:
+      tags:
+      - "jobs"
+      summary: "Get execution for job instance."
+      description: ""
+      operationId: "getExecutionsForInstance"
+      produces:
+      - "application/json"
+      parameters:
+      - name: "jobName"
+        in: "path"
+        required: true
+        type: "string"
+      - name: "jobInstanceId"
+        in: "path"
+        required: true
+        type: "integer"
+        format: "int64"
+      responses:
+        200:
+          description: "successful operation"
+          schema:
+            type: "array"
+            items:
+              $ref: "#/definitions/JobExecutionInfoResponse"
+    post:
+      tags:
+      - "jobs"
+      summary: "Restart job instance."
+      description: ""
+      operationId: "restartJobInstance"
+      produces:
+      - "application/json"
+      parameters:
+      - in: "body"
+        name: "body"
+        required: false
+        schema:
+          $ref: "#/definitions/JobExecutionRestartRequest"
+      responses:
+        200:
+          description: "successful operation"
+          schema:
+            $ref: "#/definitions/JobExecutionInfoResponse"
+definitions:
+  JobExecutionData:
+    type: "object"
+    properties:
+      id:
+        type: "integer"
+        format: "int64"
+      executionContext:
+        $ref: "#/definitions/ExecutionContext"
+      jobInstance:
+        $ref: "#/definitions/JobInstance"
+      jobId:
+        type: "integer"
+        format: "int64"
+      jobParameters:
+        $ref: "#/definitions/JobParameters"
+      failureExceptions:
+        type: "array"
+        items:
+          $ref: "#/definitions/Throwable"
+      endTime:
+        type: "string"
+        format: "date-time"
+      exitStatus:
+        $ref: "#/definitions/ExitStatus"
+      createTime:
+        type: "string"
+        format: "date-time"
+      lastUpdated:
+        type: "string"
+        format: "date-time"
+      jobConfigurationName:
+        type: "string"
+      startTime:
+        type: "string"
+        format: "date-time"
+      status:
+        type: "string"
+        enum:
+        - "COMPLETED"
+        - "STARTING"
+        - "STARTED"
+        - "STOPPING"
+        - "STOPPED"
+        - "FAILED"
+        - "ABANDONED"
+        - "UNKNOWN"
+      stepExecutionDataList:
+        type: "array"
+        items:
+          $ref: "#/definitions/StepExecutionData"
+  JobInstance:
+    type: "object"
+    properties:
+      id:
+        type: "integer"
+        format: "int64"
+      version:
+        type: "integer"
+        format: "int32"
+      jobName:
+        type: "string"
+      instanceId:
+        type: "integer"
+        format: "int64"
+  StepExecutionData:
+    type: "object"
+    properties:
+      id:
+        type: "integer"
+        format: "int64"
+      jobExecutionId:
+        type: "integer"
+        format: "int64"
+      executionContext:
+        $ref: "#/definitions/ExecutionContext"
+      stepName:
+        type: "string"
+      terminateOnly:
+        type: "boolean"
+        default: false
+      failureExceptions:
+        type: "array"
+        items:
+          $ref: "#/definitions/Throwable"
+      endTime:
+        type: "string"
+        format: "date-time"
+      exitStatus:
+        $ref: "#/definitions/ExitStatus"
+      lastUpdated:
+        type: "string"
+        format: "date-time"
+      commitCount:
+        type: "integer"
+        format: "int32"
+      readCount:
+        type: "integer"
+        format: "int32"
+      filterCount:
+        type: "integer"
+        format: "int32"
+      writeCount:
+        type: "integer"
+        format: "int32"
+      readSkipCount:
+        type: "integer"
+        format: "int32"
+      writeSkipCount:
+        type: "integer"
+        format: "int32"
+      processSkipCount:
+        type: "integer"
+        format: "int32"
+      rollbackCount:
+        type: "integer"
+        format: "int32"
+      startTime:
+        type: "string"
+        format: "date-time"
+      status:
+        type: "string"
+        enum:
+        - "COMPLETED"
+        - "STARTING"
+        - "STARTED"
+        - "STOPPING"
+        - "STOPPED"
+        - "FAILED"
+        - "ABANDONED"
+        - "UNKNOWN"
+  StackTraceElement:
+    type: "object"
+    properties:
+      methodName:
+        type: "string"
+      fileName:
+        type: "string"
+      lineNumber:
+        type: "integer"
+        format: "int32"
+      className:
+        type: "string"
+      nativeMethod:
+        type: "boolean"
+        default: false
+  JobExecutionDetailsResponse:
+    type: "object"
+    properties:
+      jobExecutionInfoResponse:
+        $ref: "#/definitions/JobExecutionInfoResponse"
+      stepExecutionInfoList:
+        type: "array"
+        items:
+          $ref: "#/definitions/StepExecutionInfoResponse"
+  StepExecutionContextResponse:
+    type: "object"
+    properties:
+      executionContextMap:
+        type: "object"
+        additionalProperties:
+          type: "object"
+      jobExecutionId:
+        type: "integer"
+        format: "int64"
+      stepExecutionId:
+        type: "integer"
+        format: "int64"
+      stepName:
+        type: "string"
+  StepExecutionProgress:
+    type: "object"
+    properties:
+      estimatedPercentCompleteMessage:
+        $ref: "#/definitions/MessageSourceResolvable"
+      estimatedPercentComplete:
+        type: "number"
+        format: "double"
+  ExitStatus:
+    type: "object"
+    properties:
+      exitCode:
+        type: "string"
+      exitDescription:
+        type: "string"
+      running:
+        type: "boolean"
+        default: false
+  ExecutionContextResponse:
+    type: "object"
+    properties:
+      jobExecutionId:
+        type: "integer"
+        format: "int64"
+      executionContextMap:
+        type: "object"
+        additionalProperties:
+          type: "object"
+  StepExecutionHistory:
+    type: "object"
+    properties:
+      stepName:
+        type: "string"
+      count:
+        type: "integer"
+        format: "int32"
+      commitCount:
+        $ref: "#/definitions/CumulativeHistory"
+      rollbackCount:
+        $ref: "#/definitions/CumulativeHistory"
+      readCount:
+        $ref: "#/definitions/CumulativeHistory"
+      writeCount:
+        $ref: "#/definitions/CumulativeHistory"
+      filterCount:
+        $ref: "#/definitions/CumulativeHistory"
+      readSkipCount:
+        $ref: "#/definitions/CumulativeHistory"
+      writeSkipCount:
+        $ref: "#/definitions/CumulativeHistory"
+      processSkipCount:
+        $ref: "#/definitions/CumulativeHistory"
+      duration:
+        $ref: "#/definitions/CumulativeHistory"
+      durationPerRead:
+        $ref: "#/definitions/CumulativeHistory"
+  TimeZone:
+    type: "object"
+    properties:
+      displayName:
+        type: "string"
+      id:
+        type: "string"
+      dstsavings:
+        type: "integer"
+        format: "int32"
+      rawOffset:
+        type: "integer"
+        format: "int32"
+  MessageSourceResolvable:
+    type: "object"
+    properties:
+      arguments:
+        type: "array"
+        items:
+          type: "object"
+      codes:
+        type: "array"
+        items:
+          type: "string"
+      defaultMessage:
+        type: "string"
+  ExecutionContext:
+    type: "object"
+    properties:
+      dirty:
+        type: "boolean"
+        default: false
+      empty:
+        type: "boolean"
+        default: false
+  StepExecutionInfoResponse:
+    type: "object"
+    properties:
+      id:
+        type: "integer"
+        format: "int64"
+      jobExecutionId:
+        type: "integer"
+        format: "int64"
+      jobName:
+        type: "string"
+      name:
+        type: "string"
+      startDate:
+        type: "string"
+      startTime:
+        type: "string"
+      duration:
+        type: "string"
+      durationMillis:
+        type: "integer"
+        format: "int64"
+      exitCode:
+        type: "string"
+      status:
+        type: "string"
+  JobExecutionInfoResponse:
+    type: "object"
+    properties:
+      id:
+        type: "integer"
+        format: "int64"
+      stepExecutionCount:
+        type: "integer"
+        format: "int32"
+      jobId:
+        type: "integer"
+        format: "int64"
+      jobName:
+        type: "string"
+      startDate:
+        type: "string"
+      startTime:
+        type: "string"
+      duration:
+        type: "string"
+      jobExecutionData:
+        $ref: "#/definitions/JobExecutionData"
+      jobParameters:
+        type: "object"
+        additionalProperties:
+          type: "object"
+      jobParametersString:
+        type: "string"
+      restartable:
+        type: "boolean"
+        default: false
+      abandonable:
+        type: "boolean"
+        default: false
+      stoppable:
+        type: "boolean"
+        default: false
+      timeZone:
+        $ref: "#/definitions/TimeZone"
+  JobInfo:
+    type: "object"
+    properties:
+      name:
+        type: "string"
+      executionCount:
+        type: "integer"
+        format: "int32"
+      launchable:
+        type: "boolean"
+        default: false
+      incrementable:
+        type: "boolean"
+        default: false
+      jobInstanceId:
+        type: "integer"
+        format: "int64"
+  JobExecutionRestartRequest:
+    type: "object"
+    properties:
+      jobName:
+        type: "string"
+      jobInstanceId:
+        type: "integer"
+        format: "int64"
+      operation:
+        type: "string"
+        enum:
+        - "RESTART"
+  Throwable:
+    type: "object"
+    properties:
+      cause:
+        $ref: "#/definitions/Throwable"
+      stackTrace:
+        type: "array"
+        items:
+          $ref: "#/definitions/StackTraceElement"
+      message:
+        type: "string"
+      localizedMessage:
+        type: "string"
+      suppressed:
+        type: "array"
+        items:
+          $ref: "#/definitions/Throwable"
+  JobParameters:
+    type: "object"
+    properties:
+      parameters:
+        type: "object"
+        additionalProperties:
+          $ref: "#/definitions/JobParameter"
+      empty:
+        type: "boolean"
+        default: false
+  CumulativeHistory:
+    type: "object"
+    properties:
+      count:
+        type: "integer"
+        format: "int32"
+      min:
+        type: "number"
+        format: "double"
+      max:
+        type: "number"
+        format: "double"
+      standardDeviation:
+        type: "number"
+        format: "double"
+      mean:
+        type: "number"
+        format: "double"
+  JobInstanceDetailsResponse:
+    type: "object"
+    properties:
+      jobInstance:
+        $ref: "#/definitions/JobInstance"
+      jobExecutionInfoResponseList:
+        type: "array"
+        items:
+          $ref: "#/definitions/JobExecutionInfoResponse"
+  JobParameter:
+    type: "object"
+    properties:
+      identifying:
+        type: "boolean"
+        default: false
+      value:
+        type: "object"
+      type:
+        type: "string"
+        enum:
+        - "STRING"
+        - "DATE"
+        - "LONG"
+        - "DOUBLE"
+  StepExecutionProgressResponse:
+    type: "object"
+    properties:
+      stepExecutionProgress:
+        $ref: "#/definitions/StepExecutionProgress"
+      stepExecutionHistory:
+        $ref: "#/definitions/StepExecutionHistory"
+      stepExecutionInfoResponse:
+        $ref: "#/definitions/StepExecutionInfoResponse"
+  JobDetailsResponse:
+    type: "object"
+    properties:
+      jobInfo:
+        $ref: "#/definitions/JobInfo"
+      jobInstanceDetailsResponseList:
+        type: "array"
+        items:
+          $ref: "#/definitions/JobInstanceDetailsResponse"

http://git-wip-us.apache.org/repos/asf/ambari/blob/23e23afd/ambari-infra/ambari-infra-manager/docs/images/batch-1.png
----------------------------------------------------------------------
diff --git a/ambari-infra/ambari-infra-manager/docs/images/batch-1.png b/ambari-infra/ambari-infra-manager/docs/images/batch-1.png
new file mode 100644
index 0000000..d763852
Binary files /dev/null and b/ambari-infra/ambari-infra-manager/docs/images/batch-1.png differ

http://git-wip-us.apache.org/repos/asf/ambari/blob/23e23afd/ambari-infra/ambari-infra-manager/docs/images/batch-2.png
----------------------------------------------------------------------
diff --git a/ambari-infra/ambari-infra-manager/docs/images/batch-2.png b/ambari-infra/ambari-infra-manager/docs/images/batch-2.png
new file mode 100644
index 0000000..1de3479
Binary files /dev/null and b/ambari-infra/ambari-infra-manager/docs/images/batch-2.png differ

http://git-wip-us.apache.org/repos/asf/ambari/blob/23e23afd/ambari-infra/ambari-infra-manager/docs/images/batch-3.png
----------------------------------------------------------------------
diff --git a/ambari-infra/ambari-infra-manager/docs/images/batch-3.png b/ambari-infra/ambari-infra-manager/docs/images/batch-3.png
new file mode 100644
index 0000000..7f1123c
Binary files /dev/null and b/ambari-infra/ambari-infra-manager/docs/images/batch-3.png differ

http://git-wip-us.apache.org/repos/asf/ambari/blob/23e23afd/ambari-infra/ambari-infra-manager/docs/images/batch-4.png
----------------------------------------------------------------------
diff --git a/ambari-infra/ambari-infra-manager/docs/images/batch-4.png b/ambari-infra/ambari-infra-manager/docs/images/batch-4.png
new file mode 100644
index 0000000..beb610a
Binary files /dev/null and b/ambari-infra/ambari-infra-manager/docs/images/batch-4.png differ

http://git-wip-us.apache.org/repos/asf/ambari/blob/23e23afd/ambari-infra/ambari-infra-manager/src/main/java/org/apache/ambari/infra/common/InfraManagerConstants.java
----------------------------------------------------------------------
diff --git a/ambari-infra/ambari-infra-manager/src/main/java/org/apache/ambari/infra/common/InfraManagerConstants.java b/ambari-infra/ambari-infra-manager/src/main/java/org/apache/ambari/infra/common/InfraManagerConstants.java
index 77f7008..105f20e 100644
--- a/ambari-infra/ambari-infra-manager/src/main/java/org/apache/ambari/infra/common/InfraManagerConstants.java
+++ b/ambari-infra/ambari-infra-manager/src/main/java/org/apache/ambari/infra/common/InfraManagerConstants.java
@@ -25,7 +25,7 @@ public final class InfraManagerConstants {
   public static final String PROTOCOL_SSL = "https";
   public static final String ROOT_CONTEXT = "/";
   public static final String WEB_RESOURCE_FOLDER = "webapp";
-  public static final String DEFAULT_DATA_FOLDER_LOCATION = "/usr/ambari-infra-manager/data";
+  public static final String DEFAULT_DATA_FOLDER_LOCATION = "/opt/ambari-infra-manager/data";
   public static final String DATA_FOLDER_LOCATION_PARAM = "dataFolderLocation";
   public static final Integer SESSION_TIMEOUT = 60 * 30;
 }

http://git-wip-us.apache.org/repos/asf/ambari/blob/23e23afd/ambari-infra/ambari-infra-manager/src/main/java/org/apache/ambari/infra/conf/InfraManagerApiDocConfig.java
----------------------------------------------------------------------
diff --git a/ambari-infra/ambari-infra-manager/src/main/java/org/apache/ambari/infra/conf/InfraManagerApiDocConfig.java b/ambari-infra/ambari-infra-manager/src/main/java/org/apache/ambari/infra/conf/InfraManagerApiDocConfig.java
index 22e2263..4c76742 100644
--- a/ambari-infra/ambari-infra-manager/src/main/java/org/apache/ambari/infra/conf/InfraManagerApiDocConfig.java
+++ b/ambari-infra/ambari-infra-manager/src/main/java/org/apache/ambari/infra/conf/InfraManagerApiDocConfig.java
@@ -21,12 +21,22 @@ package org.apache.ambari.infra.conf;
 import io.swagger.jaxrs.config.BeanConfig;
 import io.swagger.jaxrs.listing.ApiListingResource;
 import io.swagger.jaxrs.listing.SwaggerSerializers;
+import io.swagger.models.Info;
+import io.swagger.models.License;
 import org.springframework.context.annotation.Bean;
 import org.springframework.context.annotation.Configuration;
 
 @Configuration
 public class InfraManagerApiDocConfig {
 
+  private static final String DESCRIPTION = "Manager component for Ambari Infra";
+  private static final String VERSION = "1.0.0";
+  private static final String TITLE = "Infra Manager REST API";
+  private static final String LICENSE = "Apache 2.0";
+  private static final String LICENSE_URL = "http://www.apache.org/licenses/LICENSE-2.0.html";
+  private static final String RESOURCE_PACKAGE = "org.apache.ambari.infra.rest";
+  private static final String BASE_PATH = "/api/v1";
+
   @Bean
   public ApiListingResource apiListingResource() {
     return new ApiListingResource();
@@ -41,14 +51,25 @@ public class InfraManagerApiDocConfig {
   public BeanConfig swaggerConfig() {
     BeanConfig beanConfig = new BeanConfig();
     beanConfig.setSchemes(new String[]{"http", "https"});
-    beanConfig.setBasePath("/api/v1");
-    beanConfig.setTitle("Infra Manager REST API");
-    beanConfig.setDescription("Manager component for Ambari Infra");
-    beanConfig.setLicense("Apache 2.0");
-    beanConfig.setLicenseUrl("http://www.apache.org/licenses/LICENSE-2.0.html");
+    beanConfig.setBasePath(BASE_PATH);
+    beanConfig.setTitle(TITLE);
+    beanConfig.setDescription(DESCRIPTION);
+    beanConfig.setLicense(LICENSE);
+    beanConfig.setLicenseUrl(LICENSE_URL);
     beanConfig.setScan(true);
-    beanConfig.setVersion("1.0.0");
-    beanConfig.setResourcePackage("org.apache.ambari.infra.rest");
+    beanConfig.setVersion(VERSION);
+    beanConfig.setResourcePackage(RESOURCE_PACKAGE);
+
+    License license = new License();
+    license.setName(LICENSE);
+    license.setUrl(LICENSE_URL);
+
+    Info info = new Info();
+    info.setDescription(DESCRIPTION);
+    info.setTitle(TITLE);
+    info.setVersion(VERSION);
+    info.setLicense(license);
+    beanConfig.setInfo(info);
     return beanConfig;
   }
 }

http://git-wip-us.apache.org/repos/asf/ambari/blob/23e23afd/ambari-infra/ambari-infra-manager/src/main/java/org/apache/ambari/infra/conf/batch/InfraManagerBatchConfig.java
----------------------------------------------------------------------
diff --git a/ambari-infra/ambari-infra-manager/src/main/java/org/apache/ambari/infra/conf/batch/InfraManagerBatchConfig.java b/ambari-infra/ambari-infra-manager/src/main/java/org/apache/ambari/infra/conf/batch/InfraManagerBatchConfig.java
index c3d8db6..95f87f5 100644
--- a/ambari-infra/ambari-infra-manager/src/main/java/org/apache/ambari/infra/conf/batch/InfraManagerBatchConfig.java
+++ b/ambari-infra/ambari-infra-manager/src/main/java/org/apache/ambari/infra/conf/batch/InfraManagerBatchConfig.java
@@ -20,7 +20,9 @@ package org.apache.ambari.infra.conf.batch;
 
 import org.apache.ambari.infra.job.dummy.DummyItemProcessor;
 import org.apache.ambari.infra.job.dummy.DummyItemWriter;
+import org.apache.ambari.infra.job.dummy.DummyJobListener;
 import org.apache.ambari.infra.job.dummy.DummyObject;
+import org.apache.ambari.infra.job.dummy.DummyStepListener;
 import org.springframework.batch.admin.service.JdbcSearchableJobExecutionDao;
 import org.springframework.batch.admin.service.JdbcSearchableJobInstanceDao;
 import org.springframework.batch.admin.service.JdbcSearchableStepExecutionDao;
@@ -68,6 +70,7 @@ import org.springframework.jdbc.core.JdbcTemplate;
 import org.springframework.jdbc.datasource.DriverManagerDataSource;
 import org.springframework.jdbc.datasource.init.DataSourceInitializer;
 import org.springframework.jdbc.datasource.init.ResourceDatabasePopulator;
+import org.springframework.scheduling.annotation.EnableAsync;
 import org.springframework.scheduling.annotation.EnableScheduling;
 import org.springframework.transaction.PlatformTransactionManager;
 
@@ -78,6 +81,7 @@ import java.net.MalformedURLException;
 @Configuration
 @EnableBatchProcessing
 @EnableScheduling
+@EnableAsync
 public class InfraManagerBatchConfig {
 
   @Value("classpath:org/springframework/batch/core/schema-drop-sqlite.sql")
@@ -225,13 +229,13 @@ public class InfraManagerBatchConfig {
   protected Step dummyStep(ItemReader<DummyObject> reader,
                        ItemProcessor<DummyObject, String> processor,
                        ItemWriter<String> writer) {
-    return steps.get("dummyStep").<DummyObject, String> chunk(2)
+    return steps.get("dummyStep").listener(new DummyStepListener()).<DummyObject, String> chunk(2)
       .reader(reader).processor(processor).writer(writer).build();
   }
 
   @Bean(name = "dummyJob")
   public Job job(@Qualifier("dummyStep") Step dummyStep) {
-    return jobs.get("dummyJob").start(dummyStep).build();
+    return jobs.get("dummyJob").listener(new DummyJobListener()).start(dummyStep).build();
   }
 
   @Bean

http://git-wip-us.apache.org/repos/asf/ambari/blob/23e23afd/ambari-infra/ambari-infra-manager/src/main/java/org/apache/ambari/infra/job/dummy/DummyItemWriter.java
----------------------------------------------------------------------
diff --git a/ambari-infra/ambari-infra-manager/src/main/java/org/apache/ambari/infra/job/dummy/DummyItemWriter.java b/ambari-infra/ambari-infra-manager/src/main/java/org/apache/ambari/infra/job/dummy/DummyItemWriter.java
index f495795..9a78706 100644
--- a/ambari-infra/ambari-infra-manager/src/main/java/org/apache/ambari/infra/job/dummy/DummyItemWriter.java
+++ b/ambari-infra/ambari-infra-manager/src/main/java/org/apache/ambari/infra/job/dummy/DummyItemWriter.java
@@ -22,8 +22,15 @@ import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 import org.springframework.batch.item.ItemWriter;
 
+import java.io.File;
+import java.nio.file.Files;
+import java.nio.file.Path;
+import java.nio.file.Paths;
+import java.util.Date;
 import java.util.List;
 
+import static org.apache.ambari.infra.common.InfraManagerConstants.DATA_FOLDER_LOCATION_PARAM;
+
 public class DummyItemWriter implements ItemWriter<String> {
 
   private static final Logger LOG = LoggerFactory.getLogger(DummyItemWriter.class);
@@ -32,5 +39,11 @@ public class DummyItemWriter implements ItemWriter<String> {
   public void write(List<? extends String> values) throws Exception {
     LOG.info("DummyItem writer called (values: {})... wait 1 second", values.toString());
     Thread.sleep(1000);
+    String outputDirectoryLocation = String.format("%s%s%s%s", System.getProperty(DATA_FOLDER_LOCATION_PARAM), File.separator, "dummyOutput-", new Date().getTime());
+    Path pathToDirectory = Paths.get(outputDirectoryLocation);
+    Path pathToFile = Paths.get(String.format("%s%s%s", outputDirectoryLocation, File.separator, "dummyOutput.txt"));
+    Files.createDirectories(pathToDirectory);
+    LOG.info("Write to file: {}", pathToFile.toAbsolutePath());
+    Files.write(pathToFile, values.toString().getBytes());
   }
 }

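For readers skimming the `DummyItemWriter` diff above, the file-writing pattern it adds (a timestamped output directory plus a dump of the chunk contents) can be sketched with plain Java NIO. This is an illustration only; the helper name `writeValues` and the temp-dir base are assumptions, not part of the patch.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Arrays;
import java.util.List;

public class DummyWriteSketch {

    // Hypothetical helper mirroring the writer's output logic: create a
    // timestamped directory under baseDir and dump the chunk there.
    static Path writeValues(List<String> values, Path baseDir) throws IOException {
        Path dir = baseDir.resolve("dummyOutput-" + System.currentTimeMillis());
        Files.createDirectories(dir);                     // mkdir -p semantics
        Path file = dir.resolve("dummyOutput.txt");
        Files.write(file, values.toString().getBytes());  // e.g. "[v1, v2]"
        return file;
    }

    public static void main(String[] args) throws IOException {
        Path file = writeValues(Arrays.asList("v1", "v2"),
                Paths.get(System.getProperty("java.io.tmpdir")));
        System.out.println("Wrote: " + file.toAbsolutePath());
    }
}
```

Note that `Files.createDirectories` is idempotent, so repeated chunks written in the same millisecond simply reuse (and overwrite within) the same directory.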
http://git-wip-us.apache.org/repos/asf/ambari/blob/23e23afd/ambari-infra/ambari-infra-manager/src/main/java/org/apache/ambari/infra/job/dummy/DummyJobListener.java
----------------------------------------------------------------------
diff --git a/ambari-infra/ambari-infra-manager/src/main/java/org/apache/ambari/infra/job/dummy/DummyJobListener.java b/ambari-infra/ambari-infra-manager/src/main/java/org/apache/ambari/infra/job/dummy/DummyJobListener.java
new file mode 100644
index 0000000..0bbfb55
--- /dev/null
+++ b/ambari-infra/ambari-infra-manager/src/main/java/org/apache/ambari/infra/job/dummy/DummyJobListener.java
@@ -0,0 +1,39 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.ambari.infra.job.dummy;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.springframework.batch.core.JobExecution;
+import org.springframework.batch.core.JobExecutionListener;
+
+public class DummyJobListener implements JobExecutionListener {
+
+  private static final Logger LOG = LoggerFactory.getLogger(DummyJobListener.class);
+
+  @Override
+  public void beforeJob(JobExecution jobExecution) {
+    LOG.info("Dummy - before job execution");
+  }
+
+  @Override
+  public void afterJob(JobExecution jobExecution) {
+    LOG.info("Dummy - after job execution");
+  }
+}

http://git-wip-us.apache.org/repos/asf/ambari/blob/23e23afd/ambari-infra/ambari-infra-manager/src/main/java/org/apache/ambari/infra/job/dummy/DummyStepListener.java
----------------------------------------------------------------------
diff --git a/ambari-infra/ambari-infra-manager/src/main/java/org/apache/ambari/infra/job/dummy/DummyStepListener.java b/ambari-infra/ambari-infra-manager/src/main/java/org/apache/ambari/infra/job/dummy/DummyStepListener.java
new file mode 100644
index 0000000..548e650
--- /dev/null
+++ b/ambari-infra/ambari-infra-manager/src/main/java/org/apache/ambari/infra/job/dummy/DummyStepListener.java
@@ -0,0 +1,41 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.ambari.infra.job.dummy;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.springframework.batch.core.ExitStatus;
+import org.springframework.batch.core.StepExecution;
+import org.springframework.batch.core.StepExecutionListener;
+
+public class DummyStepListener implements StepExecutionListener {
+
+  private static final Logger LOG = LoggerFactory.getLogger(DummyStepListener.class);
+
+  @Override
+  public void beforeStep(StepExecution stepExecution) {
+    LOG.info("Dummy step - before step execution");
+  }
+
+  @Override
+  public ExitStatus afterStep(StepExecution stepExecution) {
+    LOG.info("Dummy step - after step execution");
+    return stepExecution.getExitStatus();
+  }
+}

http://git-wip-us.apache.org/repos/asf/ambari/blob/23e23afd/ambari-infra/ambari-infra-manager/src/main/java/org/apache/ambari/infra/rest/JobResource.java
----------------------------------------------------------------------
diff --git a/ambari-infra/ambari-infra-manager/src/main/java/org/apache/ambari/infra/rest/JobResource.java b/ambari-infra/ambari-infra-manager/src/main/java/org/apache/ambari/infra/rest/JobResource.java
index 7023957..0e20b54 100644
--- a/ambari-infra/ambari-infra-manager/src/main/java/org/apache/ambari/infra/rest/JobResource.java
+++ b/ambari-infra/ambari-infra-manager/src/main/java/org/apache/ambari/infra/rest/JobResource.java
@@ -98,7 +98,7 @@ public class JobResource {
 
   @GET
   @Produces({"application/json"})
-  @Path("/info/{jobName}")
+  @Path("{jobName}/info")
   @ApiOperation("Get job details by job name.")
   public JobDetailsResponse getJobDetails(@BeanParam @Valid JobRequest jobRequest) throws NoSuchJobException {
     return jobManager.getJobDetails(jobRequest.getJobName(), jobRequest.getPage(), jobRequest.getSize());


[3/4] ambari git commit: AMBARI-21002. RU/EU: Restart Zookeeper step failed (dgrinenko via dlysnichenko)

Posted by jo...@apache.org.
AMBARI-21002. RU/EU: Restart Zookeeper step failed (dgrinenko via dlysnichenko)


Project: http://git-wip-us.apache.org/repos/asf/ambari/repo
Commit: http://git-wip-us.apache.org/repos/asf/ambari/commit/dc30b4e3
Tree: http://git-wip-us.apache.org/repos/asf/ambari/tree/dc30b4e3
Diff: http://git-wip-us.apache.org/repos/asf/ambari/diff/dc30b4e3

Branch: refs/heads/branch-feature-AMBARI-12556
Commit: dc30b4e366f12ce7baa05a2c3dec096175c22f9d
Parents: 69e73ba
Author: Lisnichenko Dmitro <dl...@hortonworks.com>
Authored: Fri May 12 16:53:07 2017 +0300
Committer: Jonathan Hurley <jh...@hortonworks.com>
Committed: Wed May 31 15:18:49 2017 -0400

----------------------------------------------------------------------
 .../server/controller/internal/UpgradeResourceProvider.java | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/ambari/blob/dc30b4e3/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/UpgradeResourceProvider.java
----------------------------------------------------------------------
diff --git a/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/UpgradeResourceProvider.java b/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/UpgradeResourceProvider.java
index 97da150..a8b7fb4 100644
--- a/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/UpgradeResourceProvider.java
+++ b/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/UpgradeResourceProvider.java
@@ -120,6 +120,7 @@ import org.apache.ambari.server.state.stack.upgrade.UpdateStackGrouping;
 import org.apache.ambari.server.state.stack.upgrade.UpgradeScope;
 import org.apache.ambari.server.state.stack.upgrade.UpgradeType;
 import org.apache.ambari.server.state.svccomphost.ServiceComponentHostServerActionEvent;
+import org.apache.ambari.server.utils.StageUtils;
 import org.apache.commons.collections.CollectionUtils;
 import org.apache.commons.lang.StringUtils;
 import org.slf4j.Logger;
@@ -818,7 +819,7 @@ public class UpgradeResourceProvider extends AbstractControllerResourceProvider
     }
 
     List<UpgradeGroupEntity> groupEntities = new ArrayList<>();
-    RequestStageContainer req = createRequest(direction, version);
+    RequestStageContainer req = createRequest(cluster, direction, version);
 
     // the upgrade context calculated these for us based on direction
     StackId sourceStackId = upgradeContext.getOriginalStackId();
@@ -1264,13 +1265,17 @@ public class UpgradeResourceProvider extends AbstractControllerResourceProvider
     }
   }
 
-  private RequestStageContainer createRequest(Direction direction, String version) {
+  private RequestStageContainer createRequest(Cluster cluster, Direction direction, String version) throws AmbariException {
     ActionManager actionManager = getManagementController().getActionManager();
 
     RequestStageContainer requestStages = new RequestStageContainer(
         actionManager.getNextRequestId(), null, s_requestFactory.get(), actionManager);
     requestStages.setRequestContext(String.format("%s to %s", direction.getVerb(true), version));
 
+    Map<String, Set<String>> clusterHostInfo = StageUtils.getClusterHostInfo(cluster);
+    String clusterHostInfoJson = StageUtils.getGson().toJson(clusterHostInfo);
+    requestStages.setClusterHostInfo(clusterHostInfoJson);
+
     return requestStages;
   }
 

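The `createRequest` change above serializes the cluster's host info with `StageUtils.getGson().toJson(clusterHostInfo)` before attaching it to the request stages. A minimal hand-rolled rendering of such a `Map<String, Set<String>>` looks like this (assumption: Gson's exact output formatting may differ, and `toJson` here is a hypothetical helper for illustration, not Ambari code):

```java
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

public class ClusterHostInfoSketch {

    // Render role -> hosts as a compact JSON object, e.g.
    // {"all_hosts":["h1","h2"]}. Illustrative only; real code uses Gson.
    static String toJson(Map<String, Set<String>> clusterHostInfo) {
        return clusterHostInfo.entrySet().stream()
                .map(e -> "\"" + e.getKey() + "\":["
                        + e.getValue().stream()
                              .map(h -> "\"" + h + "\"")
                              .collect(Collectors.joining(","))
                        + "]")
                .collect(Collectors.joining(",", "{", "}"));
    }
}
```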

[2/4] ambari git commit: AMBARI-21155. Update Infra Manager README.md (oleewere)

Posted by jo...@apache.org.
AMBARI-21155. Update Infra Manager README.md (oleewere)


Project: http://git-wip-us.apache.org/repos/asf/ambari/repo
Commit: http://git-wip-us.apache.org/repos/asf/ambari/commit/69e73baa
Tree: http://git-wip-us.apache.org/repos/asf/ambari/tree/69e73baa
Diff: http://git-wip-us.apache.org/repos/asf/ambari/diff/69e73baa

Branch: refs/heads/branch-feature-AMBARI-12556
Commit: 69e73baa8812a1196d225af7c41cc8ddcfe457d0
Parents: 23e23af
Author: oleewere <ol...@gmail.com>
Authored: Wed May 31 20:12:27 2017 +0200
Committer: oleewere <ol...@gmail.com>
Committed: Wed May 31 20:12:27 2017 +0200

----------------------------------------------------------------------
 ambari-infra/ambari-infra-manager/README.md | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/ambari/blob/69e73baa/ambari-infra/ambari-infra-manager/README.md
----------------------------------------------------------------------
diff --git a/ambari-infra/ambari-infra-manager/README.md b/ambari-infra/ambari-infra-manager/README.md
index dd2854c..4e38a69 100644
--- a/ambari-infra/ambari-infra-manager/README.md
+++ b/ambari-infra/ambari-infra-manager/README.md
@@ -21,7 +21,7 @@ limitations under the License.
 
 ## Overview
 
-Ambari Infra Manager is a REST based management application for Ambari Infra services (like Infra Solr). The API is built on top of [Spring Batch] (http://docs.spring.io/spring-batch/reference/html/)
+Ambari Infra Manager is a REST-based management application for Ambari Infra services (like Infra Solr). The API is built on top of [Spring Batch](http://docs.spring.io/spring-batch/reference/html/).
 
 ### Architecture
 ![batch-1](docs/images/batch-1.png)
@@ -35,11 +35,11 @@ Ambari Infra Manager is a REST based management application for Ambari Infra ser
 ### Step workflow
 ![batch-4](docs/images/batch-4.png)
 
-(images originally from [here] (http://docs.spring.io/spring-batch/reference/html/))
+(images originally from [here](http://docs.spring.io/spring-batch/reference/html/))
 
 ## API documentation
 
-Infra Manager uses [Swagger] (http://swagger.io/), generated yaml file can be downloaded from [here] (docs/api/swagger.yaml)
+Infra Manager uses [Swagger](http://swagger.io/); the generated yaml file can be downloaded from [here](docs/api/swagger.yaml).
 
 
 ## Development guide
@@ -76,11 +76,11 @@ public class MyJobConfig {
 
 }
 ```
-As you can see it will require to implement [ItemWriter] (https://docs.spring.io/spring-batch/apidocs/org/springframework/batch/item/ItemWriter.html), [ItemReader] (http://docs.spring.io/spring-batch/trunk/apidocs/org/springframework/batch/item/ItemReader.html) and [ItemProcessor] (https://docs.spring.io/spring-batch/apidocs/org/springframework/batch/item/ItemProcessor.html)
+As you can see, this requires implementing [ItemWriter](https://docs.spring.io/spring-batch/apidocs/org/springframework/batch/item/ItemWriter.html), [ItemReader](http://docs.spring.io/spring-batch/trunk/apidocs/org/springframework/batch/item/ItemReader.html) and [ItemProcessor](https://docs.spring.io/spring-batch/apidocs/org/springframework/batch/item/ItemProcessor.html).
 
 ### Schedule custom jobs
 
-It can be needed based on business requirements to schedule jobs (e.g. daily) instead of run manually through the REST API. It can be done with adding a custom bean to "org.apache.ambari.infra" package with using [@Scheduled] (http://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/scheduling/annotation/Scheduled.html):
+Depending on business requirements, it may be necessary to schedule jobs (e.g. daily) instead of running them manually through the REST API. This can be done by adding a custom bean to the "org.apache.ambari.infra" package and using [@Scheduled](http://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/scheduling/annotation/Scheduled.html):
 ```java
 @Named
 public class MySchedulerObject {


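The README section patched above suggests `@Scheduled` for periodic job launches. Outside of Spring, the same periodic-trigger idea can be sketched with a plain `ScheduledExecutorService` (assumption: `scheduleJob` and the fixed-rate policy are illustrative, not the project's actual mechanism):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class SchedulerSketch {

    // Fire a "job launch" at a fixed rate; the latch lets callers wait
    // until the job has run a given number of times.
    static CountDownLatch scheduleJob(ScheduledExecutorService pool, Runnable job,
                                      long periodMillis, int runs) {
        CountDownLatch latch = new CountDownLatch(runs);
        pool.scheduleAtFixedRate(() -> { job.run(); latch.countDown(); },
                0, periodMillis, TimeUnit.MILLISECONDS);
        return latch;
    }

    public static void main(String[] args) throws InterruptedException {
        ScheduledExecutorService pool = Executors.newSingleThreadScheduledExecutor();
        CountDownLatch latch = scheduleJob(pool,
                () -> System.out.println("launching dummyJob..."), 50, 3);
        latch.await();   // wait for three scheduled launches
        pool.shutdownNow();
    }
}
```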
[4/4] ambari git commit: Merge branch 'trunk' into branch-feature-AMBARI-12556

Posted by jo...@apache.org.
Merge branch 'trunk' into branch-feature-AMBARI-12556


Project: http://git-wip-us.apache.org/repos/asf/ambari/repo
Commit: http://git-wip-us.apache.org/repos/asf/ambari/commit/fb2076c7
Tree: http://git-wip-us.apache.org/repos/asf/ambari/tree/fb2076c7
Diff: http://git-wip-us.apache.org/repos/asf/ambari/diff/fb2076c7

Branch: refs/heads/branch-feature-AMBARI-12556
Commit: fb2076c718c5bcafb1e83c35a841111d30c6204d
Parents: 138aa48 dc30b4e
Author: Jonathan Hurley <jh...@hortonworks.com>
Authored: Wed May 31 15:23:58 2017 -0400
Committer: Jonathan Hurley <jh...@hortonworks.com>
Committed: Wed May 31 15:23:58 2017 -0400

----------------------------------------------------------------------
 ambari-infra/ambari-infra-manager/README.md     |  92 ++-
 .../ambari-infra-manager/docs/api/swagger.yaml  | 784 +++++++++++++++++++
 .../docs/images/batch-1.png                     | Bin 0 -> 20521 bytes
 .../docs/images/batch-2.png                     | Bin 0 -> 29388 bytes
 .../docs/images/batch-3.png                     | Bin 0 -> 14105 bytes
 .../docs/images/batch-4.png                     | Bin 0 -> 23277 bytes
 .../infra/common/InfraManagerConstants.java     |   2 +-
 .../infra/conf/InfraManagerApiDocConfig.java    |  35 +-
 .../conf/batch/InfraManagerBatchConfig.java     |   8 +-
 .../ambari/infra/job/dummy/DummyItemWriter.java |  13 +
 .../infra/job/dummy/DummyJobListener.java       |  39 +
 .../infra/job/dummy/DummyStepListener.java      |  41 +
 .../apache/ambari/infra/rest/JobResource.java   |   2 +-
 .../internal/UpgradeResourceProvider.java       |   8 +-
 14 files changed, 1009 insertions(+), 15 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/ambari/blob/fb2076c7/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/UpgradeResourceProvider.java
----------------------------------------------------------------------
diff --cc ambari-server/src/main/java/org/apache/ambari/server/controller/internal/UpgradeResourceProvider.java
index 6f452b0,a8b7fb4..345bf5f
--- a/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/UpgradeResourceProvider.java
+++ b/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/UpgradeResourceProvider.java
@@@ -99,11 -117,12 +99,12 @@@ import org.apache.ambari.server.state.s
  import org.apache.ambari.server.state.stack.upgrade.Task;
  import org.apache.ambari.server.state.stack.upgrade.TaskWrapper;
  import org.apache.ambari.server.state.stack.upgrade.UpdateStackGrouping;
 -import org.apache.ambari.server.state.stack.upgrade.UpgradeScope;
  import org.apache.ambari.server.state.stack.upgrade.UpgradeType;
  import org.apache.ambari.server.state.svccomphost.ServiceComponentHostServerActionEvent;
+ import org.apache.ambari.server.utils.StageUtils;
  import org.apache.commons.collections.CollectionUtils;
  import org.apache.commons.lang.StringUtils;
 +import org.codehaus.jackson.annotate.JsonProperty;
  import org.slf4j.Logger;
  import org.slf4j.LoggerFactory;
  
@@@ -784,18 -995,287 +785,23 @@@ public class UpgradeResourceProvider ex
      return upgradeEntity;
    }
  
-   private RequestStageContainer createRequest(UpgradeContext upgradeContext) {
 -  /**
 -   * Handles the creation or resetting of configurations based on whether an
 -   * upgrade or downgrade is occurring. This method will not do anything when
 -   * the target stack version is the same as the cluster's current stack version
 -   * since, by definition, no new configurations are automatically created when
 -   * upgrading with the same stack (ie HDP 2.2.0.0 -> HDP 2.2.1.0).
 -   * <p/>
 -   * When upgrading or downgrade between stacks (HDP 2.2.0.0 -> HDP 2.3.0.0)
 -   * then this will perform the following:
 -   * <ul>
 -   * <li>Upgrade: Create new configurations that are a merge between the current
 -   * stack and the desired stack. If a value has changed between stacks, then
 -   * the target stack value should be taken unless the cluster's value differs
 -   * from the old stack. This can occur if a property has been customized after
 -   * installation.</li>
 -   * <li>Downgrade: Reset the latest configurations from the cluster's original
 -   * stack. The new configurations that were created on upgrade must be left
 -   * intact until all components have been reverted, otherwise heartbeats will
 -   * fail due to missing configurations.</li>
 -   * </ul>
 -   *
 -   *
 -   * @param stackName Stack name such as HDP, HDPWIN, BIGTOP
 -   * @param cluster
 -   *          the cluster
 -   * @param version
 -   *          the version
 -   * @param direction
 -   *          upgrade or downgrade
 -   * @param upgradePack
 -   *          upgrade pack used for upgrade or downgrade. This is needed to determine
 -   *          which services are effected.
 -   * @param userName
 -   *          username performing the action
 -   * @throws AmbariException
 -   */
 -  public void applyStackAndProcessConfigurations(String stackName, Cluster cluster, String version, Direction direction, UpgradePack upgradePack, String userName)
 -    throws AmbariException {
 -    RepositoryVersionEntity targetRve = s_repoVersionDAO.findByStackNameAndVersion(stackName, version);
 -    if (null == targetRve) {
 -      LOG.info("Could not find version entity for {}; not setting new configs", version);
 -      return;
 -    }
 -
 -    if (null == userName) {
 -      userName = getManagementController().getAuthName();
 -    }
 -
 -    // if the current and target stacks are the same (ie HDP 2.2.0.0 -> 2.2.1.0)
 -    // then we should never do anything with configs on either upgrade or
 -    // downgrade; however if we are going across stacks, we have to do the stack
 -    // checks differently depending on whether this is an upgrade or downgrade
 -    StackEntity targetStack = targetRve.getStack();
 -    StackId currentStackId = cluster.getCurrentStackVersion();
 -    StackId desiredStackId = cluster.getDesiredStackVersion();
 -    StackId targetStackId = new StackId(targetStack);
 -    // Only change configs if moving to a different stack.
 -    switch (direction) {
 -      case UPGRADE:
 -        if (currentStackId.equals(targetStackId)) {
 -          return;
 -        }
 -        break;
 -      case DOWNGRADE:
 -        if (desiredStackId.equals(targetStackId)) {
 -          return;
 -        }
 -        break;
 -    }
 -
 -    Map<String, Map<String, String>> newConfigurationsByType = null;
 -    ConfigHelper configHelper = getManagementController().getConfigHelper();
 -
 -    if (direction == Direction.UPGRADE) {
 -      // populate a map of default configurations for the old stack (this is
 -      // used when determining if a property has been customized and should be
 -      // overriden with the new stack value)
 -      Map<String, Map<String, String>> oldStackDefaultConfigurationsByType = configHelper.getDefaultProperties(
 -          currentStackId, cluster, true);
 -
 -      // populate a map with default configurations from the new stack
 -      newConfigurationsByType = configHelper.getDefaultProperties(targetStackId, cluster, true);
 -
 -      // We want to skip updating config-types of services that are not in the upgrade pack.
 -      // Care should be taken as some config-types could be in services that are in and out
 -      // of the upgrade pack. We should never ignore config-types of services in upgrade pack.
 -      Set<String> skipConfigTypes = new HashSet<>();
 -      Set<String> upgradePackServices = new HashSet<>();
 -      Set<String> upgradePackConfigTypes = new HashSet<>();
 -      AmbariMetaInfo ambariMetaInfo = s_metaProvider.get();
 -
 -      // ensure that we get the service info from the target stack
 -      // (since it could include new configuration types for a service)
 -      Map<String, ServiceInfo> stackServicesMap = ambariMetaInfo.getServices(
 -          targetStack.getStackName(), targetStack.getStackVersion());
 -
 -      for (Grouping group : upgradePack.getGroups(direction)) {
 -        for (UpgradePack.OrderService service : group.services) {
 -          if (service.serviceName == null || upgradePackServices.contains(service.serviceName)) {
 -            // No need to re-process service that has already been looked at
 -            continue;
 -          }
 -
 -          upgradePackServices.add(service.serviceName);
 -          ServiceInfo serviceInfo = stackServicesMap.get(service.serviceName);
 -          if (serviceInfo == null) {
 -            continue;
 -          }
 -
 -          // add every configuration type for all services defined in the
 -          // upgrade pack
 -          Set<String> serviceConfigTypes = serviceInfo.getConfigTypeAttributes().keySet();
 -          for (String serviceConfigType : serviceConfigTypes) {
 -            if (!upgradePackConfigTypes.contains(serviceConfigType)) {
 -              upgradePackConfigTypes.add(serviceConfigType);
 -            }
 -          }
 -        }
 -      }
 -
 -      // build a set of configurations that should not be merged since their
 -      // services are not installed
 -      Set<String> servicesNotInUpgradePack = new HashSet<>(stackServicesMap.keySet());
 -      servicesNotInUpgradePack.removeAll(upgradePackServices);
 -      for (String serviceNotInUpgradePack : servicesNotInUpgradePack) {
 -        ServiceInfo serviceInfo = stackServicesMap.get(serviceNotInUpgradePack);
 -        Set<String> configTypesOfServiceNotInUpgradePack = serviceInfo.getConfigTypeAttributes().keySet();
 -        for (String configType : configTypesOfServiceNotInUpgradePack) {
 -          if (!upgradePackConfigTypes.contains(configType) && !skipConfigTypes.contains(configType)) {
 -            skipConfigTypes.add(configType);
 -          }
 -        }
 -      }
 -
 -      // remove any configurations from the target stack that are not used
 -      // because the services are not installed
 -      Iterator<String> iterator = newConfigurationsByType.keySet().iterator();
 -      while (iterator.hasNext()) {
 -        String configType = iterator.next();
 -        if (skipConfigTypes.contains(configType)) {
 -          LOG.info("Stack Upgrade: Removing configs for config-type {}", configType);
 -          iterator.remove();
 -        }
 -      }
 -
 -      // now that the map has been populated with the default configurations
 -      // from the stack/service, overlay the existing configurations on top
 -      Map<String, DesiredConfig> existingDesiredConfigurationsByType = cluster.getDesiredConfigs();
 -      for (Map.Entry<String, DesiredConfig> existingEntry : existingDesiredConfigurationsByType.entrySet()) {
 -        String configurationType = existingEntry.getKey();
 -        if(skipConfigTypes.contains(configurationType)) {
 -          LOG.info("Stack Upgrade: Skipping config-type {} as upgrade-pack contains no updates to its service", configurationType);
 -          continue;
 -        }
 -
 -        // NPE sanity, although shouldn't even happen since we are iterating
 -        // over the desired configs to start with
 -        Config currentClusterConfig = cluster.getDesiredConfigByType(configurationType);
 -        if (null == currentClusterConfig) {
 -          continue;
 -        }
 -
 -        // get current stack default configurations on install
 -        Map<String, String> configurationTypeDefaultConfigurations = oldStackDefaultConfigurationsByType.get(
 -            configurationType);
 -
 -        // NPE sanity for current stack defaults
 -        if (null == configurationTypeDefaultConfigurations) {
 -          configurationTypeDefaultConfigurations = Collections.emptyMap();
 -        }
 -
 -        // get the existing configurations
 -        Map<String, String> existingConfigurations = currentClusterConfig.getProperties();
 -
 -        // if the new stack configurations don't have the type, then simply add
 -        // all of the existing in
 -        Map<String, String> newDefaultConfigurations = newConfigurationsByType.get(
 -            configurationType);
 -
 -        if (null == newDefaultConfigurations) {
 -          newConfigurationsByType.put(configurationType, existingConfigurations);
 -          continue;
 -        } else {
 -          // TODO, should we remove existing configs whose value is NULL even though they don't have a value in the new stack?
 -
 -          // Remove any configs in the new stack whose value is NULL, unless they currently exist and the value is not NULL.
 -          Iterator<Map.Entry<String, String>> iter = newDefaultConfigurations.entrySet().iterator();
 -          while (iter.hasNext()) {
 -            Map.Entry<String, String> entry = iter.next();
 -            if (entry.getValue() == null) {
 -              iter.remove();
 -            }
 -          }
 -        }
 -
 -        // for every existing configuration, see if an entry exists; if it does
 -        // not exist, then put it in the map, otherwise we'll have to compare
 -        // the existing value to the original stack value to see if its been
 -        // customized
 -        for (Map.Entry<String, String> existingConfigurationEntry : existingConfigurations.entrySet()) {
 -          String existingConfigurationKey = existingConfigurationEntry.getKey();
 -          String existingConfigurationValue = existingConfigurationEntry.getValue();
 -
 -          // if there is already an entry, we now have to try to determine if
 -          // the value was customized after stack installation
 -          if (newDefaultConfigurations.containsKey(existingConfigurationKey)) {
 -            String newDefaultConfigurationValue = newDefaultConfigurations.get(
 -                existingConfigurationKey);
 -
 -            if (!StringUtils.equals(existingConfigurationValue, newDefaultConfigurationValue)) {
 -              // the new default is different from the existing cluster value;
 -              // only override the default value if the existing value differs
 -              // from the original stack
 -              String oldDefaultValue = configurationTypeDefaultConfigurations.get(
 -                  existingConfigurationKey);
 -
 -              if (!StringUtils.equals(existingConfigurationValue, oldDefaultValue)) {
 -                // at this point, we've determined that there is a difference
 -                // between default values between stacks, but the value was
 -                // also customized, so keep the customized value
 -                newDefaultConfigurations.put(existingConfigurationKey, existingConfigurationValue);
 -              }
 -            }
 -          } else {
 -            // there is no entry in the map, so add the existing key/value pair
 -            newDefaultConfigurations.put(existingConfigurationKey, existingConfigurationValue);
 -          }
 -        }
 -
 -        /*
 -        for every new configuration which does not exist in the existing
 -        configurations, see if it was present in the current stack
 -
 -        stack 2.x has foo-site/property (on-ambari-upgrade is false)
 -        stack 2.y has foo-site/property
 -        the current cluster (on 2.x) does not have it
 -
 -        In this case, we should NOT add it back as clearly stack advisor has removed it
 -        */
 -        Iterator<Map.Entry<String, String>> newDefaultConfigurationsIterator = newDefaultConfigurations.entrySet().iterator();
 -        while( newDefaultConfigurationsIterator.hasNext() ){
 -          Map.Entry<String, String> newConfigurationEntry = newDefaultConfigurationsIterator.next();
 -          String newConfigurationPropertyName = newConfigurationEntry.getKey();
 -          if (configurationTypeDefaultConfigurations.containsKey(newConfigurationPropertyName)
 -              && !existingConfigurations.containsKey(newConfigurationPropertyName)) {
 -            LOG.info(
 -                "The property {}/{} exists in both {} and {} but is not part of the current set of configurations and will therefore not be included in the configuration merge",
 -                configurationType, newConfigurationPropertyName, currentStackId, targetStackId);
 -
 -            // remove the property so it doesn't get merged in
 -            newDefaultConfigurationsIterator.remove();
 -          }
 -        }
 -      }
 -    } else {
 -      // downgrade
 -      cluster.applyLatestConfigurations(cluster.getCurrentStackVersion());
 -    }
 -
 -    // !!! update the stack
 -    cluster.setDesiredStackVersion(
 -        new StackId(targetStack.getStackName(), targetStack.getStackVersion()), true);
 -
 -    // !!! configs must be created after setting the stack version
 -    if (null != newConfigurationsByType) {
 -      configHelper.createConfigTypes(cluster, getManagementController(), newConfigurationsByType,
 -          userName, "Configuration created for Upgrade");
 -    }
 -  }
 -
 -  private RequestStageContainer createRequest(Cluster cluster, Direction direction, String version) throws AmbariException {
++  private RequestStageContainer createRequest(UpgradeContext upgradeContext) throws AmbariException {
      ActionManager actionManager = getManagementController().getActionManager();
  
      RequestStageContainer requestStages = new RequestStageContainer(
          actionManager.getNextRequestId(), null, s_requestFactory.get(), actionManager);
 -    requestStages.setRequestContext(String.format("%s to %s", direction.getVerb(true), version));
  
 +    Direction direction = upgradeContext.getDirection();
 +    RepositoryVersionEntity repositoryVersion = upgradeContext.getRepositoryVersion();
 +
 +    requestStages.setRequestContext(String.format("%s %s %s", direction.getVerb(true),
 +        direction.getPreposition(), repositoryVersion.getVersion()));
 +
++    Cluster cluster = upgradeContext.getCluster();
+     Map<String, Set<String>> clusterHostInfo = StageUtils.getClusterHostInfo(cluster);
+     String clusterHostInfoJson = StageUtils.getGson().toJson(clusterHostInfo);
+     requestStages.setClusterHostInfo(clusterHostInfoJson);
+ 
      return requestStages;
    }
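
The three-way merge rule implemented by the removed block above (an untouched default picks up the new stack default, a customized value survives the upgrade, null-valued new defaults are dropped, and a property the cluster deliberately removed is not re-introduced) can be sketched in isolation. This is a hypothetical standalone illustration, not Ambari code; the class and method names are invented:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

/** Hypothetical sketch of the upgrade config merge rule; names are illustrative. */
public class ConfigMergeSketch {

  static Map<String, String> merge(Map<String, String> oldDefaults,
                                   Map<String, String> existing,
                                   Map<String, String> newDefaults) {
    Map<String, String> merged = new HashMap<>();
    // start from the new stack defaults, dropping null-valued entries
    newDefaults.forEach((k, v) -> { if (v != null) merged.put(k, v); });

    for (Map.Entry<String, String> e : existing.entrySet()) {
      String key = e.getKey();
      String current = e.getValue();
      if (!merged.containsKey(key)) {
        merged.put(key, current);   // not in the new stack: carry the value over
      } else if (!Objects.equals(current, merged.get(key))
          && !Objects.equals(current, oldDefaults.get(key))) {
        merged.put(key, current);   // differs from BOTH defaults: customized, keep it
      }
      // otherwise the cluster still had the old default, so the new default wins
    }

    // a property present in both stacks' defaults but absent from the cluster
    // was deliberately removed, so do not re-introduce it
    merged.keySet().removeIf(
        k -> oldDefaults.containsKey(k) && !existing.containsKey(k));
    return merged;
  }

  public static void main(String[] args) {
    Map<String, String> oldDef   = Map.of("a", "1", "b", "2",  "c", "3");
    Map<String, String> existing = Map.of("a", "1", "b", "99");        // "c" removed, "b" customized
    Map<String, String> newDef   = Map.of("a", "10", "b", "20", "c", "30");

    Map<String, String> merged = merge(oldDef, existing, newDef);
    System.out.println(merged.get("a"));        // untouched default -> new default
    System.out.println(merged.get("b"));        // customized -> user value kept
    System.out.println(merged.containsKey("c")); // removed property stays removed
  }
}
```

Running the sketch prints `10`, `99`, `false`, matching the behavior the log message in the removed code describes for properties "not part of the current set of configurations".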