Posted to commits@ambari.apache.org by tb...@apache.org on 2013/05/08 19:21:56 UTC

svn commit: r1480363 - in /incubator/ambari/trunk/ambari-server/docs/api/v1: host-component-resources.md index.md service-resources.md update-hostcomponent.md update-service.md

Author: tbeerbower
Date: Wed May  8 17:21:55 2013
New Revision: 1480363

URL: http://svn.apache.org/r1480363
Log:
AMBARI-2090 - Add content to API docs.

Added:
    incubator/ambari/trunk/ambari-server/docs/api/v1/update-hostcomponent.md
Modified:
    incubator/ambari/trunk/ambari-server/docs/api/v1/host-component-resources.md
    incubator/ambari/trunk/ambari-server/docs/api/v1/index.md
    incubator/ambari/trunk/ambari-server/docs/api/v1/service-resources.md
    incubator/ambari/trunk/ambari-server/docs/api/v1/update-service.md

Modified: incubator/ambari/trunk/ambari-server/docs/api/v1/host-component-resources.md
URL: http://svn.apache.org/viewvc/incubator/ambari/trunk/ambari-server/docs/api/v1/host-component-resources.md?rev=1480363&r1=1480362&r2=1480363&view=diff
==============================================================================
--- incubator/ambari/trunk/ambari-server/docs/api/v1/host-component-resources.md (original)
+++ incubator/ambari/trunk/ambari-server/docs/api/v1/host-component-resources.md Wed May  8 17:21:55 2013
@@ -16,8 +16,103 @@ limitations under the License.
 -->
 
 # Host Component Resources
- 
+###States
+
+The current state of a host component resource can be determined by looking at the ServiceComponentInfo/state property.
+
+
+    GET /api/v1/clusters/c1/services/HDFS/components/NAMENODE?fields=ServiceComponentInfo/state
+
+**Response**
+
+    200 OK
+    {
+      "href" : "http://your.ambari.server/api/v1/clusters/c1/services/HDFS/components/NAMENODE?fields=ServiceComponentInfo/state",
+      "ServiceComponentInfo" : {
+        "cluster_name" : "c1",
+        "component_name" : "NAMENODE",
+        "state" : "INSTALLED",
+        "service_name" : "HDFS"
+      }
+    }
+
+The following table lists the possible values of the ServiceComponentInfo/state property.
+<table>
+  <tr>
+    <th>State</th>
+    <th>Description</th>
+  </tr>
+  <tr>
+    <td>INIT</td>
+    <td>The initial clean state after the component is first created.</td>  
+  </tr>
+  <tr>
+    <td>INSTALLING</td>
+    <td>In the process of installing the component.</td>  
+  </tr>
+  <tr>
+    <td>INSTALL_FAILED</td>
+    <td>The component install failed.</td>  
+  </tr>
+  <tr>
+    <td>INSTALLED</td>
+    <td>The component has been installed successfully but is not currently running.</td>  
+  </tr>
+  <tr>
+    <td>STARTING</td>
+    <td>In the process of starting the component.</td>  
+  </tr>
+  <tr>
+    <td>STARTED</td>
+    <td>The component has been installed and started.</td>  
+  </tr>
+  <tr>
+    <td>STOPPING</td>
+    <td>In the process of stopping the component.</td>  
+  </tr>
+
+  <tr>
+    <td>UNINSTALLING</td>
+    <td>In the process of uninstalling the component.</td>  
+  </tr>
+  <tr>
+    <td>UNINSTALLED</td>
+    <td>The component has been successfully uninstalled.</td>  
+  </tr>
+  <tr>
+    <td>WIPING_OUT</td>
+    <td>In the process of wiping out the installed component.</td>  
+  </tr>
+  <tr>
+    <td>UPGRADING</td>
+    <td>In the process of upgrading the component.</td>  
+  </tr>
+  <tr>
+    <td>MAINTENANCE</td>
+    <td>The component has been marked for maintenance.</td>  
+  </tr>
+  <tr>
+    <td>UNKNOWN</td>
+    <td>The component state can not be determined.</td>  
+  </tr>
+</table>
+
+###Starting
+A component can be started through the API by setting its state to be STARTED (see [update host component](update-hostcomponent.md)).
+
+###Stopping
+A component can be stopped through the API by setting its state to be INSTALLED (see [update host component](update-hostcomponent.md)).
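+
+As a rough client-side sketch of these start/stop requests (Python with the third-party requests library; the server address, credentials, cluster, host and component names are placeholders, and the HostRoles wrapper in the body is assumed from the host component examples in these docs; see [update host component](update-hostcomponent.md) for the authoritative request body):
+
+    import json, requests
+
+    BASE = "http://your.ambari.server/api/v1"            # placeholder server
+    AUTH = ("admin", "admin")                             # placeholder credentials
+    HC = BASE + "/clusters/c1/hosts/host1/host_components/DATANODE"
+
+    def set_state(state):
+        # PUT the desired state; 200 or 202 indicates the instruction was accepted
+        body = {"HostRoles": {"state": state}}            # assumed body shape
+        return requests.put(HC, data=json.dumps(body), auth=AUTH)
+
+    set_state("STARTED")      # start the component
+    set_state("INSTALLED")    # stop the component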
+
+###Maintenance
+
+The user can update the desired state of a component through the API to be MAINTENANCE (see [update host component](update-hostcomponent.md)).  When a host component is in the MAINTENANCE state it is essentially taken offline.  This state can be used, for example, to move a component like the NameNode: the NameNode component can be put into MAINTENANCE mode and then a new NameNode can be created for the service.
+
+
+
+###Examples
+
 
 - [List host components](host-components.md)
 - [View host component information](host-component.md)
 - [Create host component](create-hostcomponent.md)
+- [Update host component](update-hostcomponent.md)
\ No newline at end of file

Modified: incubator/ambari/trunk/ambari-server/docs/api/v1/index.md
URL: http://svn.apache.org/viewvc/incubator/ambari/trunk/ambari-server/docs/api/v1/index.md?rev=1480363&r1=1480362&r2=1480363&view=diff
==============================================================================
--- incubator/ambari/trunk/ambari-server/docs/api/v1/index.md (original)
+++ incubator/ambari/trunk/ambari-server/docs/api/v1/index.md Wed May  8 17:21:55 2013
@@ -1 +1 @@
-<!---
Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements. See the NOTICE file distributed with
this work for additional information regarding copyright ownership.
The ASF licenses this file to You under the Apache License, Version 2.0
(the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

Ambari API Reference v1
=========

The Ambari API facilitates the management and monitoring of the resources of an Apache Hadoop cluster. This document describes the resources and syntax used in the Ambari API and is intended for developers who want to integrate with Ambari.

- [Release Version](#release-version)
- [Authentication](#authentication)
- [Monitoring](#monitoring)
- [Management](#management)
- [Resources](#resources)
- [Partial Response](#partial-response)
- [Query Parameters](#query-parameters)
- [Errors](#errors)


Release Version
----
_Last Updated April 25, 2013_

Authentication
----

The operations you perform against the Ambari API require authentication. Access to the API requires the use of **Basic Authentication**. To use Basic Authentication, you need to send the **Authorization: Basic** header with your requests. For example, this can be handled when using curl and the --user option.

    curl --user name:password http://{your.ambari.server}/api/v1/clusters

_Note: The authentication method and source is configured at the Ambari Server. Changing and configuring the authentication method and source is not covered in this document._
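
For example, a minimal Python sketch of the same authenticated request (using the third-party requests library; the server name and credentials below are placeholders):

    import requests

    # requests adds the Authorization: Basic header from the auth tuple
    response = requests.get("http://your.ambari.server/api/v1/clusters",
                            auth=("name", "password"))     # placeholder credentials

    print(response.status_code)   # 200 on success, 401 if authentication fails
    print(response.json())        # the clusters collection resource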

Monitoring
----
The Ambari API provides access to monitoring and metrics information of an Apache Hadoop cluster.

###GET
Use the GET method to read the properties, metrics and sub-resources of an Ambari resource.  Calling the GET method returns the requested resources and produces no side-effects.  A response code of 200 indicates that the request was successfully processed with the requested resource included in the response body.
 
**Example**

Get the DATANODE component resource for the HDFS service of the cluster named 'c1'.

    GET /clusters/c1/services/HDFS/components/DATANODE

**Response**

    200 OK
    {
    	"href" : "http://your.ambari.server/api/v1/clusters/c1/services/HDFS/components/DATANODE",
    	"metrics" : {
    		"process" : {
              "proc_total" : 697.75,
              "proc_run" : 0.875
    		},
      		"rpc" : {
        		...
      		},
      		"ugi" : {
      			...
      		},
      		"dfs" : {
        		"datanode" : {
          		...
        		}
      		},
      		"disk" : {
        		...
      		},
      		"cpu" : {
        		...
      		}
      		...
        },
    	"ServiceComponentInfo" : {
      		"cluster_name" : "c1",
      		"component_name" : "DATANODE",
      		"service_name" : "HDFS"
      		"state" : "STARTED"
    	},
    	"host_components" : [
      		{
      			"href" : "http://your.ambari.server/api/v1/clusters/c1/hosts/host1/host_components/DATANODE",
      			"HostRoles" : {
        			"cluster_name" : "c1",
        			"component_name" : "DATANODE",
        			"host_name" : "host1"
        		}
      		}
       	]
    }


Management
----
The Ambari API provides for the management of the resources of an Apache Hadoop cluster.  This includes the creation, deletion and updating of resources.

###POST
The POST method creates a new resource. If a new resource is created then a 201 response code is returned.  The code 202 can also be returned to indicate that the instruction was accepted by the server (see [asynchronous response](#asynchronous-response)). 

**Example**

Create the HDFS service.


    POST /clusters/c1/services/HDFS


**Response**

    201 Created
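
A client-side sketch of the same create request (Python with the third-party requests library; the server address and credentials are placeholders):

    import requests

    # POST the new service resource; 201 Created (or 202 Accepted) is expected
    r = requests.post("http://your.ambari.server/api/v1/clusters/c1/services/HDFS",
                      auth=("admin", "admin"))             # placeholder credentials

    print(r.status_code)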

###PUT
Use the PUT method to update resources.  If an existing resource is modified then a 200 response code is returned to indicate successful completion of the request.  The response code 202 can also be returned to indicate that the instruction was accepted by the server (see [asynchronous response](#asynchronous-response)).

**Example**

Start the HDFS service (update the state of the HDFS service to be ‘STARTED’).


    PUT /clusters/c1/services/HDFS/

**Body**

    {
      "ServiceInfo": {
        "state" : "STARTED”
      }
    }


**Response**

The response code 202 indicates that the server has accepted the instruction to update the resource.  The body of the response contains the ID and href of the request resource that was created to carry out the instruction (see [asynchronous response](#asynchronous-response)).

    202 Accepted
    {
      "href" : "http://your.ambari.server/api/v1/clusters/c1/requests/3",
      "Requests" : {
        "id" : 3,
        "status" : "InProgress"
      } 
    }


###DELETE
Use the DELETE method to delete a resource. If an existing resource is deleted then a 200 response code is returned to indicate successful completion of the request.  The response code 202 can also be returned, which indicates that the instruction was accepted by the server and the resource was marked for deletion (see [asynchronous response](#asynchronous-response)).

**Example**

Delete the cluster named 'c1'.

    DELETE /clusters/c1

**Response**

    200 OK

###Asynchronous Response

The management APIs can return a response code of 202, which indicates that the request has been accepted.  The body of the response contains the ID and href of the request resource that was created to carry out the instruction.
    
    202 Accepted
    {
      "href" : "http://your.ambari.server/api/v1/clusters/c1/requests/6",
      "Requests" : {
        "id" : 6,
        "status" : "InProgress"
      } 
    }

The href in the response body can then be used to query the associated request resource and monitor the progress of the request.  A request resource has one or more task sub resources.  The following example shows how to use [partial response](#partial-response) to query for task resources of a request resource. 

    /clusters/c1/requests/6?fields=tasks/Tasks/*   
    
The returned task resources can be used to determine the status of the request.

    {
      "href" : "http://your.ambari.server/api/v1/clusters/c1/requests/6",
      "Requests" : {
        "id" : 6,
        "cluster_name" : "c1"
      },
      "tasks" : [
        {
          "href" : "http://your.ambari.server/api/v1/clusters/c1/requests/6/tasks/32",
          "Tasks" : {
            "exit_code" : 777,
            "stdout" : "default org.apache.hadoop.mapred.CapacityTaskScheduler\nwarning: Dynamic lookup of ...",
            "status" : "IN_PROGRESS",
            "stderr" : "",
            "host_name" : "dev.hortonworks.com",
            "id" : 32,
            "cluster_name" : "c1",
            "attempt_cnt" : 1,
            "request_id" : 6,
            "command" : "START",
            "role" : "NAMENODE",
            "start_time" : 1367240498196,
            "stage_id" : 1
          }
        },
        {
          "href" : "http://your.ambari.server/api/v1/clusters/c1/requests/6/tasks/33",
          "Tasks" : {
            "exit_code" : 999,
            "stdout" : "",
            "status" : "PENDING",
            ...
          }
        },
        {
          "href" : "http://your.ambari.server/api/v1/clusters/c1/requests/6/tasks/31",
          "Tasks" : {
            "exit_code" : 0,
            "stdout" : "warning: Dynamic lookup of $ambari_db_rca_username ...",
            "status" : "COMPLETED",
            ...
          }
        }
      ]
    }
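
As a sketch of how a client might drive this flow end to end (Python with the third-party requests library; the server, credentials and the five second polling interval are placeholders):

    import json, time, requests

    BASE = "http://your.ambari.server/api/v1"              # placeholder server
    AUTH = ("admin", "admin")                               # placeholder credentials

    # Ask the server to start the HDFS service; expect 202 Accepted
    r = requests.put(BASE + "/clusters/c1/services/HDFS",
                     data=json.dumps({"ServiceInfo": {"state": "STARTED"}}),
                     auth=AUTH)

    if r.status_code == 202:
        request_href = r.json()["href"]                     # href of the request resource
        while True:
            tasks = requests.get(request_href + "?fields=tasks/Tasks/status",
                                 auth=AUTH).json().get("tasks", [])
            statuses = [t["Tasks"]["status"] for t in tasks]
            if statuses and all(s not in ("PENDING", "IN_PROGRESS") for s in statuses):
                break                                       # no tasks left pending or running
            time.sleep(5)                                   # poll again shortly
        print(statuses)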

Resources
----
###Collection Resources


A collection resource is a set of resources of the same type, rather than any specific resource. For example:

    /clusters  

  _Refers to a collection of clusters_

###Instance Resources

An instance resource is a single specific resource. For example:

    /clusters/c1

  _Refers to the cluster resource identified by the id "c1"_

###Types
Resources are grouped into types.  This allows the user to query for collections of resources of the same type.  Some resource types are composed of subtypes (e.g. services are sub-resources of clusters).

The following is a list of some of the Ambari resource types with descriptions and usage examples.
 
#### clusters
Cluster resources represent named Hadoop clusters.  Clusters are top level resources. 

[Cluster Resources](cluster-resources.md)

#### services
Service resources are services of a Hadoop cluster (e.g. HDFS, MapReduce and Ganglia).  Service resources are sub-resources of clusters. 

[Service Resources](service-resources.md)

#### components
Component resources are the individual components of a service (e.g. HDFS/NameNode and MapReduce/JobTracker).  Components are sub-resources of services.

[Component Resources](component-resources.md)

#### hosts
Host resources are the host machines that make up a Hadoop cluster.  Hosts are top level resources but can also be sub-resources of clusters. 

[Host Resources](host-resources.md)


#### host_components
Host component resources are 

[Host Component Resources](host-component-resources.md)


#### configurations
Configuration resources are sets of key/value pairs that configure the services of a Hadoop cluster.

[Configuration Resource Overview](configuration.md)

Partial Response
----

Partial response is used to control which fields are returned by a query.  It can restrict which fields are returned and, additionally, it allows a query to reach down and return data from sub-resources.  The keyword “fields” is used to specify a partial response.  Only the fields specified will be returned to the client.  To specify sub-elements, use the notation “a/b/c”.  Properties, categories and sub-resources can be specified.  The wildcard ‘*’ can be used to show all categories, fields and sub-resources for a resource.  This can be combined to provide ‘expand’ functionality for sub-components.  Some fields are always returned for a resource regardless of the specified partial response fields: the primary id field of the resource and the foreign keys to the primary id fields of all ancestors of the resource, since these uniquely identify the resource.

**Example: Using Partial Response to restrict response to a specific field**

    GET    /api/v1/clusters/c1/services/HDFS/components/NAMENODE?fields=metrics/disk/disk_total

    200 OK
	{
    	“href”: “.../api/v1/clusters/c1/services/HDFS/components/NAMENODE?fields=metrics/disk/disk_total”,
    	“ServiceComponentInfo” : {
        	“cluster_name” : “c1”,
            “component_name” : “NAMENODE”,
        	“service_name” : “HDFS”
    	},
    	“metrics” : {
        	"disk" : {       
            	"disk_total" : 100000
        	}
    	}
    }
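
The same request issued from a client, as a sketch (Python with the third-party requests library; the server and credentials are placeholders):

    import requests

    # The fields query parameter carries the partial response specification
    url = ("http://your.ambari.server/api/v1/clusters/c1/services/HDFS"
           "/components/NAMENODE?fields=metrics/disk/disk_total")
    r = requests.get(url, auth=("admin", "admin"))          # placeholder credentials

    print(r.json()["metrics"]["disk"]["disk_total"])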

**Example: Using Partial Response to restrict response to specified category**

    GET    /api/v1/clusters/c1/services/HDFS/components/NAMENODE?fields=metrics/disk

    200 OK
	{
    	“href”: “.../api/v1/clusters/c1/services/HDFS/components/NAMENODE?fields=metrics/disk”,
    	“ServiceComponentInfo” : {
        	“cluster_name” : “c1”,
            “component_name” : “NAMENODE”,
        	“service_name” : “HDFS”
    	},
    	“metrics” : {
        	"disk" : {       
            	"disk_total" : 100000,
            	“disk_free” : 50000,
            	“part_max_used” : 1010
        	}
    	}
	}

**Example – Using Partial Response to restrict response to multiple fields/categories**

	GET	/api/v1/clusters/c1/services/HDFS/components/NAMENODE?fields=metrics/disk/disk_total,metrics/cpu
	
	200 OK
	{
    	“href”: “.../api/v1/clusters/c1/services/HDFS/components/NAMENODE?fields=metrics/disk/disk_total,metrics/cpu”,
    	“ServiceComponentInfo” : {
        	“cluster_name” : “c1”,
            “component_name” : “NAMENODE”,
        	“service_name” : “HDFS”
    	},
    	“metrics” : {
        	"disk" : {       
            	"disk_total" : 100000
        	},
        	“cpu” : {
            	“cpu_speed” : 10000000,
            	“cpu_num” : 4,
            	“cpu_idle” : 999999,
            	...
        	}
    	}
	}

**Example – Using Partial Response to restrict response to a sub-resource**

	GET	/api/v1/clusters/c1/hosts/host1?fields=host_components

	200 OK
	{
    	“href”: “.../api/v1/clusters/c1/hosts/host1?fields=host_components”,
    	“Hosts” : {
        	“cluster_name” : “c1”,
        	“host_name” : “host1”
    	},
    	“host_components”: [
        	{
            	“href”: “…/api/v1/clusters/c1/hosts/host1/host_components/NAMENODE”
            	“HostRoles” : {
                	“cluster_name” : “c1”,
                	“component_name” : “NAMENODE”,
                	“host_name” : “host1”
            	}
        	},
        	{
            	“href”: “…/api/v1/clusters/c1/hosts/host1/host_components/DATANODE”
            	“HostRoles” : {
                	“cluster_name” : “c1”,
                “component_name” : “DATANODE”,
                	“host_name” : “host1”
            	}
        	},
            ... 
    	]
	}

**Example – Using Partial Response to expand a sub-resource one level deep**

	GET	/api/v1/clusters/c1/hosts/host1?fields=host_components/*

	200 OK
	{
    	“href”: “.../api/v1/clusters/c1/hosts/host1?fields=host_components/*”,
    	“Hosts” : {
        	“cluster_name” : “c1”,
        	“host_name” : “host1”
        },
        “host_components”: [
        	{
            	“href”: “…/api/v1/clusters/c1/hosts/host1/host_components/DATANODE”
            	“HostRoles” : {
                	“cluster_name” : “c1”,
                “component_name” : “DATANODE”,
                	“host_name” : “host1”,
                	“state” : “RUNNING”,
                	...
            	},        
            	"host" : {     
                	"href" : ".../api/v1/clusters/c1/hosts/host1"  
            	},
            	“metrics” : {
                	"disk" : {       
                    	"disk_total" : 100000000,       
                    	"disk_free" : 5000000,       
                    	"part_max_used" : 10101     
                	},
                	...
            	},
            	"component" : {
                	"href" : "http://ambari.server/api/v1/clusters/c1/services/HDFS/components/NAMENODE", 
                	“ServiceComponentInfo” : {
                    	"cluster_name" : "c1",         
                    	"component_name" : "NAMENODE",         
                    	"service_name" : "HDFS"       
                	}
            	}  
        	},
        	...
    	]
	}

**Example – Using Partial Response for multi-level expansion of sub-resources**
	
	GET /api/v1/clusters/c1/hosts/host1?fields=host_components/component/*
	
	200 OK
	{
    	“href”: “http://ambari.server/api/v1/clusters/c1/hosts/host1?fields=host_components/*”,
    	“Hosts” : {
        	“cluster_name” : “c1”,
        	“host_name” : “host1”
        	...
    	},
    	“host_components”: [
    		{
            	“href”: “…/api/v1/clusters/c1/hosts/host1/host_components/DATANODE”,
            	“HostRoles” : {
                	“cluster_name” : “c1”,
                “component_name” : “DATANODE”,
                	“host_name” : “host1”
            	}, 
            	"component" : {
                	"href" : "http://ambari.server/api/v1/clusters/c1/services/HDFS/components/DATANODE", 
                	“ServiceComponentInfo” : {
                   		"cluster_name" : "c1",         
                    	"component_name" : "DATANODE",         
                    	"service_name" : "HDFS"  
                    	...     
                	},
             		“metrics”: {
                   		“dfs”: {
                       		“datanode” : {
                            “blocks_written” : 10000,
                            “blocks_read” : 5000,
                             	...
                        	}
                    	},
                    	“disk”: {
                       		"disk_total " :  1000000,
                        	“disk_free" : 50000,
                        	...
                    	},
                   		... 	
					}
            	}
        	},
        	{
            	“href”: “…/api/v1/clusters/c1/hosts/host1/host_components/NAMENODE”,
            	“HostRoles” : {
                	“cluster_name” : “c1”,
                “component_name” : “NAMENODE”,
                	“host_name” : “host1”
            	}, 
            	"component" : {
                	"href" : "http://ambari.server/api/v1/clusters/c1/services/HDFS/components/NAMENODE", 
                	“ServiceComponentInfo” : {
                   		"cluster_name" : "c1",         
                    	"component_name" : "NAMENODE",         
                    	"service_name" : "HDFS"       
                	},
             		“metrics”: {
                    	“dfs”: {
                       		“namenode” : {
                            “FilesRenamed” : 10,
                            “FilesDeleted” : 5
                         		…
                    		}
						},	
                    	“disk”: {
                       		"disk_total " :  1000000,
                       		“disk_free" : 50000,
                        	...
                    	}
                	},
                	...
            	}
        	},
        	...
    	]
	}

**Example: Using Partial Response to expand collection resource instances one level deep**

	GET /api/v1/clusters/c1/hosts?fields=*

	200 OK
	{
    	“href” : “http://ambari.server/api/v1/clusters/c1/hosts/?fields=*”,    
    	“items”: [ 
        	{
            	“href” : “http://ambari.server/api/v1/clusters/c1/hosts/host1”,
            	“Hosts” : {
                	“cluster_name” :  “c1”,
                	“host_name” : “host1”
            	},
            	“metrics”: {
                	“process”: {          	    
                   		"proc_total" : 1000,
          	       		"proc_run" : 1000
                	},
                	...
            	},
            	“host_components”: [
                	{
                   		“href”: “…/api/v1/clusters/c1/hosts/host1/host_components/NAMENODE”
                    	“HostRoles” : {
                       		“cluster_name” : “c1”,
                         	“component_name” : “NAMENODE”,
                        	“host_name” : “host1”
                    	}
                	},
                	{
                    	“href”: “…/api/v1/clusters/c1/hosts/host1/host_components/DATANODE”
                    	“HostRoles” : {
                       		“cluster_name” : “c1”,
                        “component_name” : “DATANODE”,
                        	“host_name” : “host1”
                    	}
                	},
                	...
                ],
            	...
        	},
        	{
            	“href” : “http://ambari.server/api/v1/clusters/c1/hosts/host2”,
            	“Hosts” : {
                	“cluster_name” :  “c1”,
                	“host_name” : “host2”
            	},
            	“metrics”: {
               		“process”: {          	    
                   		"proc_total" : 555,
          	     		"proc_run" : 55
                	},
                	...
            	},
            	“host_components”: [
                	{
                   		“href”: “…/api/v1/clusters/c1/hosts/host1/host_components/DATANODE”
                    	“HostRoles” : {
                       		“cluster_name” : “c1”,
                        	“component_name” : “DATANODE”,
                        	“host_name” : “host2”
                    	}
                	},
                	...
            	],
            	...
        	},
        	...
    	]
	}

### Additional Partial Response Examples

**Example – For each cluster, get cluster name, all hostname’s and all service names**

	GET   /api/v1/clusters?fields=Clusters/cluster_name,hosts/Hosts/host_name,services/ServiceInfo/service_name

**Example - Get all hostname’s for a given component**

	GET	/api/v1/clusters/c1/services/HDFS/components/DATANODE?fields=host_components/HostRoles/host_name

**Example - Get all hostname’s and component names for a given service**

	GET	/api/v1/clusters/c1/services/HDFS?fields=components/host_components/HostRoles/host_name,
                                      	          components/host_components/HostRoles/component_name



Query Predicates
----

Query predicates are used to limit which data is returned by a query.  This is analogous to the “where” clause in a SQL query.  Providing query parameters does not result in any link expansion in the data that is returned, with the exception of the fields used in the predicates.  Query predicates can only be applied to collection resources.  A predicate consists of at least one relational expression.  Predicates with multiple relational expressions also contain logical operators, which connect the relational expressions.  Predicates may also use brackets for explicit grouping of expressions.

###Relational Query Operators

<table>
  <tr>
    <th>Operator</th>
    <th>Example</th>
    <th>Description</th>
  </tr>
  <tr>
    <td>=</td>
    <td>name=host1</td>
    <td>String or numerical EQUALS</td>
  </tr>
  <tr>
    <td>!=</td>
    <td>name!=host1</td>
    <td>String or numerical NOT EQUALS</td>
  </tr>
  <tr>
    <td>&lt;</td>
    <td>disk_total&lt;50</td>
    <td>Numerical LESS THAN</td>
  </tr>
  <tr>
    <td>&gt;</td>
    <td>disk_total&gt;50</td>
    <td>Numerical GREATER THAN</td>
  </tr>
  <tr>
    <td>&lt;=</td>
    <td>disk_total&lt;=50</td>
    <td>Numerical LESS THAN OR EQUALS</td>
  </tr>
  <tr>
    <td>&gt;=</td>
    <td>disk_total&gt;=50</td>
    <td>Numerical GREATER THAN OR EQUALS</td>
  </tr>  
</table>

###Logical Query Operators

<table>
  <tr>
    <th>Operator</th>
    <th>Example</th>
    <th>Description</th>
  </tr>
  <tr>
    <td>|</td>
    <td>name=host1|name=host2</td>
    <td>Logical OR operator</td>
  </tr>
  <tr>
    <td>&</td>
    <td>prop1=foo&prop2=bar</td>
    <td>Logical AND operator</td>
  </tr>
  <tr>
    <td>!</td>
    <td>!prop<50</td>
    <td>Logical NOT operator</td>
  </tr>
</table>

**Logical Operator Precedence**

Standard logical operator precedence rules apply.  The above logical operators are listed in order of precedence starting with the lowest priority.  

###Brackets

<table>
  <tr>
    <th>Bracket</th>
    <th>Description</th>
  </tr>
  <tr>
    <td>(</td>
    <td>Opening Bracket</td>
  </tr>
  <tr>
    <td>)</td>
    <td>Closing Bracket</td>
  </tr>

</table>
  
Brackets can be used to provide explicit grouping of expressions. Expressions within brackets have the highest precedence.

###Operator Functions
 
<table>
  <tr>
    <th>Operator</th>
    <th>Example</th>
    <th>Description</th>
  </tr>
  <tr>
    <td>in()</td>
    <td>name.in(foo,bar)</td>
    <td>IN function.  More compact form of name=foo|name=bar. </td>
  </tr>
  <tr>
    <td>isEmpty()</td>
    <td>category.isEmpty()</td>
    <td>Used to determine if a category contains any properties. </td>
  </tr>
</table>
Operator functions behave like relational operators and provide additional functionality.  Some operator functions, such as in(), act as binary operators like the above relational operators, where there is a left and right operand.  Some operator functions are unary operators, such as isEmpty(), where there is only a single operand.

###Query Examples

**Example – Get all hosts with “HEALTHY” status that have 2 or more cpu**
	
	GET	/api/v1/clusters/c1/hosts?Hosts/host_status=HEALTHY&Hosts/cpu_count>=2
	
**Example – Get all hosts with less than 2 cpu or host status != HEALTHY**
	

	GET	/api/v1/clusters/c1/hosts?Hosts/cpu_count<2|Hosts/host_status!=HEALTHY

**Example – Get all “rhel6” hosts with less than 2 cpu or “centos6” hosts with 3 or more cpu**  

	GET	/api/v1/clusters/c1/hosts?Hosts/os_type=rhel6&Hosts/cpu_count<2|Hosts/os_type=centos6&Hosts/cpu_count>=3

**Example – Get all hosts where either state != “HEALTHY” or last_heartbeat_time < 1360600135905 and rack_info=”default_rack”**

	GET	/api/v1/clusters/c1/hosts?(Hosts/host_status!=HEALTHY|Hosts/last_heartbeat_time<1360600135905)
                                  &Hosts/rack_info=default_rack

**Example – Get hosts with host name of host1 or host2 or host3 using IN operator**
	
	GET	/api/v1/clusters/c1/hosts?Hosts/host_name.in(host1,host2,host3)

**Example – Get and expand all HDFS components, which have at least 1 property in the “metrics/jvm” category (combines query and partial response syntax)**

	GET	/api/v1/clusters/c1/services/HDFS/components?!metrics/jvm.isEmpty()&fields=*

**Example – Update the state of all ‘INSTALLED’ services to be ‘STARTED’**

	PUT /api/v1/clusters/c1/services?ServiceInfo/state=INSTALLED 
    {
      "ServiceInfo": {
        "state" : "STARTED”
      }
    }
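
As a client-side sketch of the first query above (Python with the third-party requests library; the server and credentials are placeholders):

    import requests

    BASE = "http://your.ambari.server/api/v1"               # placeholder server
    AUTH = ("admin", "admin")                                # placeholder credentials

    # Predicate appended to the collection URI: healthy hosts with 2 or more cpu
    query = "/clusters/c1/hosts?Hosts/host_status=HEALTHY&Hosts/cpu_count>=2"
    hosts = requests.get(BASE + query, auth=AUTH).json().get("items", [])

    for host in hosts:
        print(host["Hosts"]["host_name"])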



Temporal Metrics
----

Some metrics have values that are available across a range in time.  To query a metric for a range of values, the following partial response syntax is used.  

To get temporal data for a single property:

    ?fields=category/property[start-time,end-time,step]

To get temporal data for all properties in a category:

    ?fields=category[start-time,end-time,step]

- start-time: Required field.  The start time for the query in Unix epoch time format.
- end-time: Optional field, defaults to now.  The end time for the query in Unix epoch time format.
- step: Optional field, defaults to the corresponding metrics system’s default value.  If provided, end-time must also be provided.  The interval of time between returned data points, specified in seconds.  The larger the value provided, the fewer data points are returned, so this can be used to limit how much data is returned for the given time range.  This is only used as a suggestion, so the result interval may differ from the one specified.

The returned result is a list of data points over the specified time range.  Each data point is a value / timestamp pair.

**Note**: It is important to understand that requesting large amounts of temporal data may result in severe performance degradation.  **Always** request the minimal amount of information necessary.  If large amounts of data are required, consider splitting the request up into multiple smaller requests.
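
A client-side sketch of a temporal query (Python with the third-party requests library; the server, credentials and time range are placeholders; the response shape follows the example below):

    import requests

    BASE = "http://your.ambari.server/api/v1"               # placeholder server
    AUTH = ("admin", "admin")                                # placeholder credentials

    # Request the jvm/gcCount metric from a start time given in Unix epoch seconds
    url = BASE + "/clusters/c1/hosts/host1?fields=metrics/jvm/gcCount[1360610225]"
    metrics = requests.get(url, auth=AUTH).json()["metrics"]

    # Each returned data point is a value / timestamp pair
    for value, timestamp in metrics[0]["jvm"]["gcCount"]:
        print(timestamp, value)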

**Example – Temporal Query for a single property using only start-time**

	GET	/api/v1/clusters/c1/hosts/host1?fields=metrics/jvm/gcCount[1360610225]

	
	200 OK
	{
    	“href” : …/api/v1/clusters/c1/hosts/host1?fields=metrics/jvm/gcCount[1360610225]”,
    	...
    	“metrics”: [
        	{
            	“jvm”: {
          	    	"gcCount" : [
                   		[10, 1360610165],
                     	[12, 1360610180],
                     	[13, 1360610195],
                     	[14, 1360610210],
                     	[15, 1360610225]
                  	]
             	}
         	}
    	]
	}

**Example – Temporal Query for a category using start-time, end-time and step**

	GET	/api/v1/clusters/c1/hosts/host1?fields=metrics/jvm[1360610200,1360610500,100]

	200 OK
	{
    	“href” : …/clusters/c1/hosts/host1?fields=metrics/jvm[1360610200,1360610500,100]”,
    	...
    	“metrics”: [
        	{
            	“jvm”: {
          	    	"gcCount" : [
                   		[10, 1360610200],
                     	[12, 1360610300],
                     	[13, 1360610400],
                     	[14, 1360610500]
                  	],
                	"gcTimeMillis" : [
                   		[1000, 1360610200],
                     	[2000, 1360610300],
                     	[5000, 1360610400],
                     	[9500, 1360610500]
                  	],
                  	...
             	}
         	}
    	]
	}

	


HTTP Return Codes
----

The following HTTP codes may be returned by the API.
<table>
  <tr>
    <th>HTTP CODE</th>
    <th>Description</th>
  </tr>
  <tr>
    <td>200</td>
    <td>OK</td>  
  </tr>
  <tr>
    <td>400</td>
    <td>Bad Request</td>  
  </tr>
  <tr>
    <td>401</td>
    <td>Unauthorized</td>  
  </tr>
  <tr>
    <td>403</td>
    <td>Forbidden</td>  
  </tr> 
  <tr>
    <td>404</td>
    <td>Not Found</td>  
  </tr>
  <tr>
    <td>500</td>
    <td>Internal Server Error</td>  
  </tr>
</table>


Errors
----

**Example errors responses**

    404 Not Found
	{   
    	"status" : 404,   
    	"message" : "The requested resource doesn't exist: Cluster not found, clusterName=someInvalidCluster" 
	} 

&nbsp;

	400 Bad Request
	{   
    	"status" : 400,   
    	"message" : "The properties [foo] specified in the request or predicate are not supported for the 
                	 resource type Cluster."
	}
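
A client can branch on the HTTP code and report the message field; a minimal sketch (Python with the third-party requests library; the server, credentials and the invalid cluster name are placeholders):

    import requests

    r = requests.get("http://your.ambari.server/api/v1/clusters/someInvalidCluster",
                     auth=("admin", "admin"))                # placeholder credentials

    if r.status_code != 200:
        error = r.json()                                     # body carries status and message
        print("API error %s: %s" % (error["status"], error["message"]))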


\ No newline at end of file
+<!---
Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements. See the NOTICE file distributed with
this work for additional information regarding copyright ownership.
The ASF licenses this file to You under the Apache License, Version 2.0
(the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

Ambari API Reference v1
=========

The Ambari API facilitates the management and monitoring of the resources of an Apache Hadoop cluster. This document describes the resources and syntax used in the Ambari API and is intended for developers who want to integrate with Ambari.

- [Release Version](#release-version)
- [Authentication](#authentication)
- [Monitoring](#monitoring)
- [Management](#management)
- [Resources](#resources)
- [Partial Response](#partial-response)
- [Query Parameters](#query-parameters)
- [Batch Requests](#batch-requests)
- [RequestInfo](#request-info)
- [Errors](#errors)


Release Version
----
_Last Updated April 25, 2013_

Authentication
----

The operations you perform against the Ambari API require authentication. Access to the API requires the use of **Basic Authentication**. To use Basic Authentication, you need to send the **Authorization: Basic** header with your requests. For example, this can be handled when using curl and the --user option.

    curl --user name:password http://{your.ambari.server}/api/v1/clusters

_Note: The authentication method and source is configured at the Ambari Server. Changing and configuring the authentication method and source is not covered in this document._

Monitoring
----
The Ambari API provides access to monitoring and metrics information of an Apache Hadoop cluster.

###GET
Use the GET method to read the properties, metrics and sub-resources of an Ambari resource.  Calling the GET method returns the requested resources and produces no side-effects.  A response code of 200 indicates that the request was successfully processed with the requested resource included in the response body.
 
**Example**

Get the DATANODE component resource for the HDFS service of the cluster named 'c1'.

    GET /clusters/c1/services/HDFS/components/DATANODE

**Response**

    200 OK
    {
    	"href" : "http://your.ambari.server/api/v1/clusters/c1/services/HDFS/components/DATANODE",
    	"metrics" : {
    		"process" : {
              "proc_total" : 697.75,
              "proc_run" : 0.875
    		},
      		"rpc" : {
        		...
      		},
      		"ugi" : {
      			...
      		},
      		"dfs" : {
        		"datanode" : {
          		...
        		}
      		},
      		"disk" : {
        		...
      		},
      		"cpu" : {
        		...
      		}
      		...
        },
    	"ServiceComponentInfo" : {
      		"cluster_name" : "c1",
      		"component_name" : "DATANODE",
      		"service_name" : "HDFS"
      		"state" : "STARTED"
    	},
    	"host_components" : [
      		{
      			"href" : "http://your.ambari.server/api/v1/clusters/c1/hosts/host1/host_components/DATANODE",
      			"HostRoles" : {
        			"cluster_name" : "c1",
        			"component_name" : "DATANODE",
        			"host_name" : "host1"
        		}
      		}
       	]
    }


Management
----
The Ambari API provides for the management of the resources of an Apache Hadoop cluster.  This includes the creation, deletion and updating of resources.

###POST
The POST method creates a new resource. If a new resource is created then a 201 response code is returned.  The code 202 can also be returned to indicate that the instruction was accepted by the server (see [asynchronous response](#asynchronous-response)). 

**Example**

Create the HDFS service.


    POST /clusters/c1/services/HDFS


**Response**

    201 Created

###PUT
Use the PUT method to update resources.  If an existing resource is modified then a 200 response code is returned to indicate successful completion of the request.  The response code 202 can also be returned to indicate that the instruction was accepted by the server (see [asynchronous response](#asynchronous-response)).

**Example**

Start the HDFS service (update the state of the HDFS service to be ‘STARTED’).


    PUT /clusters/c1/services/HDFS/

**Body**

    {
      "ServiceInfo": {
        "state" : "STARTED”
      }
    }


**Response**

The response code 202 indicates that the server has accepted the instruction to update the resource.  The body of the response contains the ID and href of the request resource that was created to carry out the instruction (see [asynchronous response](#asynchronous-response)).

    202 Accepted
    {
      "href" : "http://your.ambari.server/api/v1/clusters/c1/requests/3",
      "Requests" : {
        "id" : 3,
        "status" : "InProgress"
      } 
    }


###DELETE
Use the DELETE method to delete a resource. If an existing resource is deleted then a 200 response code is returned to indicate successful completion of the request.  The response code 202 can also be returned, which indicates that the instruction was accepted by the server and the resource was marked for deletion (see [asynchronous response](#asynchronous-response)).

**Example**

Delete the cluster named 'c1'.

    DELETE /clusters/c1

**Response**

    200 OK

###Asynchronous Response

The management APIs can return a response code of 202, which indicates that the request has been accepted.  The body of the response contains the ID and href of the request resource that was created to carry out the instruction.
    
    202 Accepted
    {
      "href" : "http://your.ambari.server/api/v1/clusters/c1/requests/6",
      "Requests" : {
        "id" : 6,
        "status" : "InProgress"
      } 
    }

The href in the response body can then be used to query the associated request resource and monitor the progress of the request.  A request resource has one or more task sub resources.  The following example shows how to use [partial response](#partial-response) to query for task resources of a request resource. 

    /clusters/c1/requests/6?fields=tasks/Tasks/*   
    
The returned task resources can be used to determine the status of the request.

    {
      "href" : "http://your.ambari.server/api/v1/clusters/c1/requests/6",
      "Requests" : {
        "id" : 6,
        "cluster_name" : "c1"
      },
      "tasks" : [
        {
          "href" : "http://your.ambari.server/api/v1/clusters/c1/requests/6/tasks/32",
          "Tasks" : {
            "exit_code" : 777,
            "stdout" : "default org.apache.hadoop.mapred.CapacityTaskScheduler\nwarning: Dynamic lookup of ...",
            "status" : "IN_PROGRESS",
            "stderr" : "",
            "host_name" : "dev.hortonworks.com",
            "id" : 32,
            "cluster_name" : "c1",
            "attempt_cnt" : 1,
            "request_id" : 6,
            "command" : "START",
            "role" : "NAMENODE",
            "start_time" : 1367240498196,
            "stage_id" : 1
          }
        },
        {
          "href" : "http://your.ambari.server/api/v1/clusters/c1/requests/6/tasks/33",
          "Tasks" : {
            "exit_code" : 999,
            "stdout" : "",
            "status" : "PENDING",
            ...
          }
        },
        {
          "href" : "http://your.ambari.server/api/v1/clusters/c1/requests/6/tasks/31",
          "Tasks" : {
            "exit_code" : 0,
            "stdout" : "warning: Dynamic lookup of $ambari_db_rca_username ...",
            "status" : "COMPLETED",
            ...
          }
        }
      ]
    }

Resources
----
###Collection Resources


A collection resource is a set of resources of the same type, rather than any specific resource. For example:

    /clusters  

  _Refers to a collection of clusters_

###Instance Resources

An instance resource is a single specific resource. For example:

    /clusters/c1

  _Refers to the cluster resource identified by the id "c1"_

###Types
Resources are grouped into types.  This allows the user to query for collections of resources of the same type.  Some resource types are composed of subtypes (e.g. services are sub-resources of clusters).

The following is a list of some of the Ambari resource types with descriptions and usage examples.
 
#### clusters
Cluster resources represent named Hadoop clusters.  Clusters are top level resources. 

[Cluster Resources](cluster-resources.md)

#### services
Service resources are services of a Hadoop cluster (e.g. HDFS, MapReduce and Ganglia).  Service resources are sub-resources of clusters. 

[Service Resources](service-resources.md)

#### components
Component resources are the individual components of a service (e.g. HDFS/NameNode and MapReduce/JobTracker).  Components are sub-resources of services.

[Component Resources](component-resources.md)

#### hosts
Host resources are the host machines that make up a Hadoop cluster.  Hosts are top level resources but can also be sub-resources of clusters. 

[Host Resources](host-resources.md)


#### host_components
Host component resources are usages of a component on a particular host.  Host components are sub-resources of hosts.

[Host Component Resources](host-component-resources.md)


#### configurations
Configuration resources are sets of key/value pairs that configure the services of a Hadoop cluster.

[Configuration Resource Overview](configuration.md)

Partial Response
----

Partial response is used to control which fields are returned by a query.  It can restrict which fields are returned and, additionally, it allows a query to reach down and return data from sub-resources.  The keyword “fields” is used to specify a partial response.  Only the fields specified will be returned to the client.  To specify sub-elements, use the notation “a/b/c”.  Properties, categories and sub-resources can be specified.  The wildcard ‘*’ can be used to show all categories, fields and sub-resources for a resource.  This can be combined to provide ‘expand’ functionality for sub-components.  Some fields are always returned for a resource regardless of the specified partial response fields: the primary id field of the resource and the foreign keys to the primary id fields of all ancestors of the resource, since these uniquely identify the resource.

**Example: Using Partial Response to restrict response to a specific field**

    GET    /api/v1/clusters/c1/services/HDFS/components/NAMENODE?fields=metrics/disk/disk_total

    200 OK
	{
    	“href”: “.../api/v1/clusters/c1/services/HDFS/components/NAMENODE?fields=metrics/disk/disk_total”,
    	“ServiceComponentInfo” : {
        	“cluster_name” : “c1”,
            “component_name” : “NAMENODE”,
        	“service_name” : “HDFS”
    	},
    	“metrics” : {
        	"disk" : {       
            	"disk_total" : 100000
        	}
    	}
    }

**Example: Using Partial Response to restrict response to specified category**

    GET    /api/v1/clusters/c1/services/HDFS/components/NAMENODE?fields=metrics/disk

    200 OK
	{
    	“href”: “.../api/v1/clusters/c1/services/HDFS/components/NAMENODE?fields=metrics/disk”,
    	“ServiceComponentInfo” : {
        	“cluster_name” : “c1”,
            “component_name” : “NAMENODE”,
        	“service_name” : “HDFS”
    	},
    	“metrics” : {
        	"disk" : {       
            	"disk_total" : 100000,
            	“disk_free” : 50000,
            	“part_max_used” : 1010
        	}
    	}
	}

**Example – Using Partial Response to restrict response to multiple fields/categories**

	GET	/api/v1/clusters/c1/services/HDFS/components/NAMENODE?fields=metrics/disk/disk_total,metrics/cpu
	
	200 OK
	{
    	“href”: “.../api/v1/clusters/c1/services/HDFS/components/NAMENODE?fields=metrics/disk/disk_total,metrics/cpu”,
    	“ServiceComponentInfo” : {
        	“cluster_name” : “c1”,
            “component_name” : “NAMENODE”,
        	“service_name” : “HDFS”
    	},
    	“metrics” : {
        	"disk" : {       
            	"disk_total" : 100000
        	},
        	“cpu” : {
            	“cpu_speed” : 10000000,
            	“cpu_num” : 4,
            	“cpu_idle” : 999999,
            	...
        	}
    	}
	}

**Example – Using Partial Response to restrict response to a sub-resource**

	GET	/api/v1/clusters/c1/hosts/host1?fields=host_components

	200 OK
	{
    	“href”: “.../api/v1/clusters/c1/hosts/host1?fields=host_components”,
    	“Hosts” : {
        	“cluster_name” : “c1”,
        	“host_name” : “host1”
    	},
    	“host_components”: [
        	{
            	“href”: “…/api/v1/clusters/c1/hosts/host1/host_components/NAMENODE”
            	“HostRoles” : {
                	“cluster_name” : “c1”,
                	“component_name” : “NAMENODE”,
                	“host_name” : “host1”
            	}
        	},
        	{
            	“href”: “…/api/v1/clusters/c1/hosts/host1/host_components/DATANODE”
            	“HostRoles” : {
                	“cluster_name” : “c1”,
                “component_name” : “DATANODE”,
                	“host_name” : “host1”
            	}
        	},
            ... 
    	]
	}

**Example – Using Partial Response to expand a sub-resource one level deep**

	GET	/api/v1/clusters/c1/hosts/host1?fields=host_components/*

	200 OK
	{
    	“href”: “.../api/v1/clusters/c1/hosts/host1?fields=host_components/*”,
    	“Hosts” : {
        	“cluster_name” : “c1”,
        	“host_name” : “host1”
        },
        “host_components”: [
        	{
            	“href”: “…/api/v1/clusters/c1/hosts/host1/host_components/DATANODE”
            	“HostRoles” : {
                	“cluster_name” : “c1”,
                “component_name” : “DATANODE”,
                	“host_name” : “host1”,
                	“state” : “RUNNING”,
                	...
            	},        
            	"host" : {     
                	"href" : ".../api/v1/clusters/c1/hosts/host1"  
            	},
            	“metrics” : {
                	"disk" : {       
                    	"disk_total" : 100000000,       
                    	"disk_free" : 5000000,       
                    	"part_max_used" : 10101     
                	},
                	...
            	},
            	"component" : {
                	"href" : "http://ambari.server/api/v1/clusters/c1/services/HDFS/components/NAMENODE", 
                	“ServiceComponentInfo” : {
                    	"cluster_name" : "c1",         
                    	"component_name" : "NAMENODE",         
                    	"service_name" : "HDFS"       
                	}
            	}  
        	},
        	...
    	]
	}

**Example – Using Partial Response for multi-level expansion of sub-resources**
	
	GET /api/v1/clusters/c1/hosts/host1?fields=host_components/component/*
	
	200 OK
	{
    	“href”: “http://ambari.server/api/v1/clusters/c1/hosts/host1?fields=host_components/*”,
    	“Hosts” : {
        	“cluster_name” : “c1”,
        	“host_name” : “host1”
        	...
    	},
    	“host_components”: [
    		{
            	“href”: “…/api/v1/clusters/c1/hosts/host1/host_components/DATANODE”,
            	“HostRoles” : {
                	“cluster_name” : “c1”,
                “component_name” : “DATANODE”,
                	“host_name” : “host1”
            	}, 
            	"component" : {
                	"href" : "http://ambari.server/api/v1/clusters/c1/services/HDFS/components/DATANODE", 
                	“ServiceComponentInfo” : {
                   		"cluster_name" : "c1",         
                    	"component_name" : "DATANODE",         
                    	"service_name" : "HDFS"  
                    	...     
                	},
             		“metrics”: {
                   		“dfs”: {
                       		“datanode” : {
                            “blocks_written” : 10000,
                            “blocks_read” : 5000,
                             	...
                        	}
                    	},
                    	“disk”: {
                       		"disk_total " :  1000000,
                        	“disk_free" : 50000,
                        	...
                    	},
                   		... 	
					}
            	}
        	},
        	{
            	“href”: “…/api/v1/clusters/c1/hosts/host1/host_components/NAMENODE”,
            	“HostRoles” : {
                	“cluster_name” : “c1”,
                “component_name” : “NAMENODE”,
                	“host_name” : “host1”
            	}, 
            	"component" : {
                	"href" : "http://ambari.server/api/v1/clusters/c1/services/HDFS/components/NAMENODE", 
                	“ServiceComponentInfo” : {
                   		"cluster_name" : "c1",         
                    	"component_name" : "NAMENODE",         
                    	"service_name" : "HDFS"       
                	},
             		“metrics”: {
                    	“dfs”: {
                       		“namenode” : {
                            “FilesRenamed” : 10,
                            “FilesDeleted” : 5
                         		…
                    		}
						},	
                    	“disk”: {
                       		"disk_total " :  1000000,
                       		“disk_free" : 50000,
                        	...
                    	}
                	},
                	...
            	}
        	},
        	...
    	]
	}

**Example: Using Partial Response to expand collection resource instances one level deep**

	GET /api/v1/clusters/c1/hosts?fields=*

	200 OK
	{
    	“href” : “http://ambari.server/api/v1/clusters/c1/hosts/?fields=*”,    
    	“items”: [ 
        	{
            	“href” : “http://ambari.server/api/v1/clusters/c1/hosts/host1”,
            	“Hosts” : {
                	“cluster_name” :  “c1”,
                	“host_name” : “host1”
            	},
            	“metrics”: {
                	“process”: {          	    
                   		"proc_total" : 1000,
          	       		"proc_run" : 1000
                	},
                	...
            	},
            	“host_components”: [
                	{
                   		“href”: “…/api/v1/clusters/c1/hosts/host1/host_components/NAMENODE”
                    	“HostRoles” : {
                       		“cluster_name” : “c1”,
                         	“component_name” : “NAMENODE”,
                        	“host_name” : “host1”
                    	}
                	},
                	{
                    	“href”: “…/api/v1/clusters/c1/hosts/host1/host_components/DATANODE”
                    	“HostRoles” : {
                       		“cluster_name” : “c1”,
                        “component_name” : “DATANODE”,
                        	“host_name” : “host1”
                    	}
                	},
                	...
                ],
            	...
        	},
        	{
            	“href” : “http://ambari.server/api/v1/clusters/c1/hosts/host2”,
            	“Hosts” : {
                	“cluster_name” :  “c1”,
                	“host_name” : “host2”
            	},
            	“metrics”: {
               		“process”: {          	    
                   		"proc_total" : 555,
          	     		"proc_run" : 55
                	},
                	...
            	},
            	“host_components”: [
                	{
                   		“href”: “…/api/v1/clusters/c1/hosts/host1/host_components/DATANODE”
                    	“HostRoles” : {
                       		“cluster_name” : “c1”,
                        	“component_name” : “DATANODE”,
                        	“host_name” : “host2”
                    	}
                	},
                	...
            	],
            	...
        	},
        	...
    	]
	}

### Additional Partial Response Examples

**Example – For each cluster, get cluster name, all hostname’s and all service names**

	GET   /api/v1/clusters?fields=Clusters/cluster_name,hosts/Hosts/host_name,services/ServiceInfo/service_name

**Example - Get all hostname’s for a given component**

	GET	/api/v1/clusters/c1/services/HDFS/components/DATANODE?fields=host_components/HostRoles/host_name

**Example - Get all hostname’s and component names for a given service**

	GET	/api/v1/clusters/c1/services/HDFS?fields=components/host_components/HostRoles/host_name,
                                      	          components/host_components/HostRoles/component_name



Query Predicates
----

Query predicates are used to limit which data is returned by a query.  This is analogous to the “where” clause in a SQL query.  Providing query parameters does not result in any link expansion in the data that is returned, with the exception of the fields used in the predicates.  Query predicates can only be applied to collection resources.  A predicate consists of at least one relational expression.  Predicates with multiple relational expressions also contain logical operators, which connect the relational expressions.  Predicates may also use brackets for explicit grouping of expressions.

###Relational Query Operators

<table>
  <tr>
    <th>Operator</th>
    <th>Example</th>
    <th>Description</th>
  </tr>
  <tr>
    <td>=</td>
    <td>name=host1</td>
    <td>String or numerical EQUALS</td>
  </tr>
  <tr>
    <td>!=</td>
    <td>name!=host1</td>
    <td>String or numerical NOT EQUALS</td>
  </tr>
  <tr>
    <td>&lt;</td>
    <td>disk_total&lt;50</td>
    <td>Numerical LESS THAN</td>
  </tr>
  <tr>
    <td>&gt;</td>
    <td>disk_total&gt;50</td>
    <td>Numerical GREATER THAN</td>
  </tr>
  <tr>
    <td>&lt;=</td>
    <td>disk_total&lt;=50</td>
    <td>Numerical LESS THAN OR EQUALS</td>
  </tr>
  <tr>
    <td>&gt;=</td>
    <td>disk_total&gt;=50</td>
    <td>Numerical GREATER THAN OR EQUALS</td>
  </tr>  
</table>

###Logical Query Operators

<table>
  <tr>
    <th>Operator</th>
    <th>Example</th>
    <th>Description</th>
  </tr>
  <tr>
    <td>|</td>
    <td>name=host1|name=host2</td>
    <td>Logical OR operator</td>
  </tr>
  <tr>
    <td>&</td>
    <td>prop1=foo&prop2=bar</td>
    <td>Logical AND operator</td>
  </tr>
  <tr>
    <td>!</td>
    <td>!prop<50</td>
    <td>Logical NOT operator</td>
  </tr>
</table>

**Logical Operator Precedence**

Standard logical operator precedence rules apply.  The above logical operators are listed in order of precedence starting with the lowest priority.  

###Brackets

<table>
  <tr>
    <th>Bracket</th>
    <th>Description</th>
  </tr>
  <tr>
    <td>(</td>
    <td>Opening Bracket</td>
  </tr>
  <tr>
    <td>)</td>
    <td>Closing Bracket</td>
  </tr>

</table>
  
Brackets can be used to provide explicit grouping of expressions. Expressions within brackets have the highest precedence.

###Operator Functions
 
<table>
  <tr>
    <th>Operator</th>
    <th>Example</th>
    <th>Description</th>
  </tr>
  <tr>
    <td>in()</td>
    <td>name.in(foo,bar)</td>
    <td>IN function.  More compact form of name=foo|name=bar. </td>
  </tr>
  <tr>
    <td>isEmpty()</td>
    <td>category.isEmpty()</td>
    <td>Used to determine if a category contains any properties. </td>
  </tr>
</table>

Operator functions behave like relational operators and provide additional functionality.  Some operator functions, such as in(), act as binary operators like the relational operators above, with a left and a right operand; others, such as isEmpty(), are unary operators with only a single operand.

###Query Examples

**Example – Get all hosts with “HEALTHY” status that have 2 or more CPUs**
	
	GET	/api/v1/clusters/c1/hosts?Hosts/host_status=HEALTHY&Hosts/cpu_count>=2
	
**Example – Get all hosts with less than 2 CPUs or host status != HEALTHY**
	

	GET	/api/v1/clusters/c1/hosts?Hosts/cpu_count<2|Hosts/host_status!=HEALTHY

**Example – Get all “rhel6” hosts with less than 2 CPUs or “centos6” hosts with 3 or more CPUs**  

	GET	/api/v1/clusters/c1/hosts?Hosts/os_type=rhel6&Hosts/cpu_count<2|Hosts/os_type=centos6&Hosts/cpu_count>=3

**Example – Get all hosts where (host status != “HEALTHY” or last_heartbeat_time < 1360600135905) and rack_info = “default_rack”**

	GET	/api/v1/clusters/c1/hosts?(Hosts/host_status!=HEALTHY|Hosts/last_heartbeat_time<1360600135905)
                                  &Hosts/rack_info=default_rack

**Example – Get hosts whose host name is host1, host2 or host3 using the IN operator**
	
	GET	/api/v1/clusters/c1/hosts?Hosts/host_name.in(host1,host2,host3)

**Example – Get and expand all HDFS components that have at least 1 property in the “metrics/jvm” category (combines query and partial response syntax)**

	GET	/api/v1/clusters/c1/services/HDFS/components?!metrics/jvm.isEmpty()&fields=*

**Example – Update the state of all ‘INSTALLED’ services to be ‘STARTED’**

	PUT /api/v1/clusters/c1/services?ServiceInfo/state=INSTALLED 
    {
      "ServiceInfo": {
        "state" : "STARTED”
      }
    }
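
Such a bulk update is asynchronous like the other update examples in these docs; a hypothetical response sketch, modeled on the 202 Accepted responses shown for the update-service and update-hostcomponent examples (the request id here is illustrative only):

    202 Accepted
    {
      "href" : "http://your.ambari.server/api/v1/clusters/c1/requests/5",
      "Requests" : {
        "id" : 5,
        "status" : "InProgress"
      }
    }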


Temporal Metrics
----

Some metrics have values that are available across a range of time.  To query a metric for a range of values, the following partial response syntax is used.  

To get temporal data for a single property:

    ?fields=category/property[start-time,end-time,step]

To get temporal data for all properties in a category:

    ?fields=category[start-time,end-time,step]

- **start-time**: Required.  The start time for the query in Unix epoch time format.
- **end-time**: Optional, defaults to now.  The end time for the query in Unix epoch time format.
- **step**: Optional, defaults to the corresponding metrics system’s default value.  If provided, end-time must also be provided.  The suggested interval, in seconds, between returned data points; larger values return fewer data points, so step can be used to limit how much data is returned for the given time range.  Because step is only a suggestion, the interval in the result may differ from the one specified.

The returned result is a list of data points over the specified time range.  Each data point is a value / timestamp pair.

**Note**: It is important to understand that requesting large amounts of temporal data may result in severe performance degradation.  **Always** request the minimal amount of information necessary.  If large amounts of data are required, consider splitting the request up into multiple smaller requests.
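
As one sketch of such a split, the temporal data shown in the first example below could instead be fetched with two smaller queries, each covering roughly half of the returned time range (the epoch values here are illustrative only):

    GET /api/v1/clusters/c1/hosts/host1?fields=metrics/jvm/gcCount[1360610165,1360610195]
    GET /api/v1/clusters/c1/hosts/host1?fields=metrics/jvm/gcCount[1360610195,1360610225]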

**Example – Temporal Query for a single property using only start-time**

	GET	/api/v1/clusters/c1/hosts/host1?fields=metrics/jvm/gcCount[1360610225]

	
	200 OK
	{
        "href" : "…/api/v1/clusters/c1/hosts/host1?fields=metrics/jvm/gcCount[1360610225]",
        ...
        "metrics" : [
            {
                "jvm" : {
          	    	"gcCount" : [
                   		[10, 1360610165],
                     	[12, 1360610180],
                     	[13, 1360610195],
                     	[14, 1360610210],
                     	[15, 1360610225]
                  	]
             	}
         	}
    	]
	}

**Example – Temporal Query for a category using start-time, end-time and step**

	GET	/api/v1/clusters/c1/hosts/host1?fields=metrics/jvm[1360610200,1360610500,100]

	200 OK
	{
        "href" : "…/clusters/c1/hosts/host1?fields=metrics/jvm[1360610200,1360610500,100]",
        ...
        "metrics" : [
            {
                "jvm" : {
          	    	"gcCount" : [
                   		[10, 1360610200],
                     	[12, 1360610300],
                     	[13, 1360610400],
                     	[14, 1360610500]
                  	],
                	"gcTimeMillis" : [
                   		[1000, 1360610200],
                     	[2000, 1360610300],
                     	[5000, 1360610400],
                     	[9500, 1360610500]
                  	],
                  	...
             	}
         	}
    	]
	}

	


HTTP Return Codes
----

The following HTTP codes may be returned by the API.
<table>
  <tr>
    <th>HTTP CODE</th>
    <th>Description</th>
  </tr>
  <tr>
    <td>200</td>
    <td>OK</td>  
  </tr>
  <tr>
    <td>400</td>
    <td>Bad Request</td>  
  </tr>
  <tr>
    <td>401</td>
    <td>Unauthorized</td>  
  </tr>
  <tr>
    <td>403</td>
    <td>Forbidden</td>  
  </tr> 
  <tr>
    <td>404</td>
    <td>Not Found</td>  
  </tr>
  <tr>
    <td>500</td>
    <td>Internal Server Error</td>  
  </tr>
</table>


Errors
----

**Example error responses**

    404 Not Found
	{   
    	"status" : 404,   
    	"message" : "The requested resource doesn't exist: Cluster not found, clusterName=someInvalidCluster" 
	} 

&nbsp;

	400 Bad Request
	{   
    	"status" : 400,   
    	"message" : "The properties [foo] specified in the request or predicate are not supported for the 
                	 resource type Cluster."
	}


\ No newline at end of file

Modified: incubator/ambari/trunk/ambari-server/docs/api/v1/service-resources.md
URL: http://svn.apache.org/viewvc/incubator/ambari/trunk/ambari-server/docs/api/v1/service-resources.md?rev=1480363&r1=1480362&r2=1480363&view=diff
==============================================================================
--- incubator/ambari/trunk/ambari-server/docs/api/v1/service-resources.md (original)
+++ incubator/ambari/trunk/ambari-server/docs/api/v1/service-resources.md Wed May  8 17:21:55 2013
@@ -18,6 +18,94 @@ limitations under the License.
 # Service Resources
 Service resources are services of a Hadoop cluster (e.g. HDFS, MapReduce and Ganglia).  Service resources are sub-resources of clusters. 
 
+###States
+
+The current state of a service resource can be determined by looking at the ServiceInfo/state property.
+
+
+    GET api/v1/clusters/c1/services/HDFS?fields=ServiceInfo/state
+
+**Response**
+
+    200 OK
+    {
+      "href" : "http://your.ambari.server/api/v1/clusters/c1/services/HDFS?fields=ServiceInfo/state",
+      "ServiceInfo" : {
+        "cluster_name" : "c1",
+        "state" : "INSTALLED",
+        "service_name" : "HDFS"
+      }
+    }
+
+The following table lists the possible values of the service resource ServiceInfo/state property.
+<table>
+  <tr>
+    <th>State</th>
+    <th>Description</th>
+  </tr>
+  <tr>
+    <td>INIT</td>
+    <td>The initial clean state after the service is first created.</td>  
+  </tr>
+  <tr>
+    <td>INSTALLING</td>
+    <td>In the process of installing the service.</td>  
+  </tr>
+  <tr>
+    <td>INSTALL_FAILED</td>
+    <td>The service install failed.</td>  
+  </tr>
+  <tr>
+    <td>INSTALLED</td>
+    <td>The service has been installed successfully but is not currently running.</td>  
+  </tr>
+  <tr>
+    <td>STARTING</td>
+    <td>In the process of starting the service.</td>  
+  </tr>
+  <tr>
+    <td>STARTED</td>
+    <td>The service has been installed and started.</td>  
+  </tr>
+  <tr>
+    <td>STOPPING</td>
+    <td>In the process of stopping the service.</td>  
+  </tr>
+
+  <tr>
+    <td>UNINSTALLING</td>
+    <td>In the process of uninstalling the service.</td>  
+  </tr>
+  <tr>
+    <td>UNINSTALLED</td>
+    <td>The service has been successfully uninstalled.</td>  
+  </tr>
+  <tr>
+    <td>WIPING_OUT</td>
+    <td>In the process of wiping out the installed service.</td>  
+  </tr>
+  <tr>
+    <td>UPGRADING</td>
+    <td>In the process of upgrading the service.</td>  
+  </tr>
+  <tr>
+    <td>MAINTENANCE</td>
+    <td>The service has been marked for maintenance.</td>  
+  </tr>
+  <tr>
+    <td>UNKNOWN</td>
+    <td>The service state can not be determined.</td>  
+  </tr>
+</table>
+
+###Starting
+A service can be started through the API by setting its state to be STARTED (see [update service](update-service.md)).
+
+###Stopping
+A service can be stopped through the API by setting its state to be INSTALLED (see [update service](update-service.md)).
+
+###Examples
+
 - [List services](services.md)
 - [View service information](services-service.md)
 - [Create service](create-service.md)

Added: incubator/ambari/trunk/ambari-server/docs/api/v1/update-hostcomponent.md
URL: http://svn.apache.org/viewvc/incubator/ambari/trunk/ambari-server/docs/api/v1/update-hostcomponent.md?rev=1480363&view=auto
==============================================================================
--- incubator/ambari/trunk/ambari-server/docs/api/v1/update-hostcomponent.md (added)
+++ incubator/ambari/trunk/ambari-server/docs/api/v1/update-hostcomponent.md Wed May  8 17:21:55 2013
@@ -0,0 +1,94 @@
+
+<!---
+Licensed to the Apache Software Foundation (ASF) under one or more
+contributor license agreements. See the NOTICE file distributed with
+this work for additional information regarding copyright ownership.
+The ASF licenses this file to You under the Apache License, Version 2.0
+(the "License"); you may not use this file except in compliance with
+the License. You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+-->
+
+Update Host Component
+=====
+
+[Back to Resources](index.md#resources)
+
+###Start the NameNode Component
+Start the NAMENODE component by updating its state to 'STARTED'.
+
+
+    PUT api/v1/clusters/c1/hosts/hostname/host_components/NAMENODE
+    
+    {
+      "HostRoles":{
+        "state":"STARTED"
+      }
+    }
+
+
+**Response**
+
+    202 Accepted
+    {
+      "href" : "http://your.ambari.server:8080/api/v1/clusters/c1/requests/12",
+      "Requests" : {
+        "id" : 12,
+        "status" : "InProgress"
+      }
+    }
+    
+###Stop the NameNode Component
+Stop the NAMENODE component by updating its state to 'INSTALLED'.
+
+
+    PUT api/v1/clusters/c1/hosts/hostname/host_components/NAMENODE
+    
+    {
+      "HostRoles":{
+        "state":"INSTALLED"
+      }
+    }
+
+
+**Response**
+
+    202 Accepted
+    {
+      "href" : "http://your.ambari.server:8080/api/v1/clusters/c1/requests/13",
+      "Requests" : {
+        "id" : 13,
+        "status" : "InProgress"
+      }
+    }
+    
+###Set MAINTENANCE Mode    
+Put the NAMENODE component into 'MAINTENANCE' mode.
+
+
+    PUT api/v1/clusters/c1/hosts/hostname/host_components/NAMENODE
+    
+    {
+      "HostRoles":{
+        "state":"MAINTENANCE"
+      }
+    }
+
+
+**Response**
+
+    202 Accepted
+    {
+      "href" : "http://your.ambari.server:8080/api/v1/clusters/c1/requests/14",
+      "Requests" : {
+        "id" : 14,
+        "status" : "InProgress"
+      }
+    }    
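+
+The href returned in each 202 Accepted response above identifies the request resource created for the asynchronous operation; as a minimal usage sketch (reusing the request id from the last example), it can be fetched to check on progress:
+
+    GET api/v1/clusters/c1/requests/14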

Modified: incubator/ambari/trunk/ambari-server/docs/api/v1/update-service.md
URL: http://svn.apache.org/viewvc/incubator/ambari/trunk/ambari-server/docs/api/v1/update-service.md?rev=1480363&r1=1480362&r2=1480363&view=diff
==============================================================================
--- incubator/ambari/trunk/ambari-server/docs/api/v1/update-service.md (original)
+++ incubator/ambari/trunk/ambari-server/docs/api/v1/update-service.md Wed May  8 17:21:55 2013
@@ -1,4 +1,3 @@
-
 <!---
 Licensed to the Apache Software Foundation (ASF) under one or more
 contributor license agreements. See the NOTICE file distributed with
@@ -21,6 +20,7 @@ Update Service
 
 [Back to Resources](index.md#resources)
 
+###Start the HDFS Service
 Start the HDFS service (update the state of the HDFS service to be ‘STARTED’).
 
 
@@ -45,3 +45,29 @@ Start the HDFS service (update the state
         "status" : "InProgress"
       } 
     }
+
+###Stop the HDFS Service
+Stop the HDFS service (update the state of the HDFS service to be ‘INSTALLED’).
+
+
+    PUT /clusters/c1/services/HDFS/
+
+**Body**
+
+    {
+      "ServiceInfo": {
+        "state" : "INSTALLED”
+      }
+    }
+
+
+**Response**
+
+    202 Accepted
+    {
+      "href" : "http://your.ambari.server/api/v1/clusters/c1/requests/3",
+      "Requests" : {
+        "id" : 4,
+        "status" : "InProgress"
+      } 
+    }