Posted to commits@gearpump.apache.org by ma...@apache.org on 2017/01/19 07:21:25 UTC

[2/5] incubator-gearpump git commit: [GEARPUMP-213] Fix EditOnGitHub link and rename docs/docs to docs/con…

http://git-wip-us.apache.org/repos/asf/incubator-gearpump/blob/5f90b70f/docs/docs/dev/dev-custom-serializer.md
----------------------------------------------------------------------
diff --git a/docs/docs/dev/dev-custom-serializer.md b/docs/docs/dev/dev-custom-serializer.md
deleted file mode 100644
index b1abeda..0000000
--- a/docs/docs/dev/dev-custom-serializer.md
+++ /dev/null
@@ -1,137 +0,0 @@
-Gearpump has a built-in serialization framework with a shaded Kryo version, which allows you to customize how a specific message type can be serialized. 
-
-#### Register a class before serialization.
-
-Note: to use the built-in Kryo serialization framework, Gearpump requires all classes to be registered explicitly before use, whether or not you want to use a custom serializer. If no custom serializer is provided, Gearpump will use the default com.esotericsoftware.kryo.serializers.FieldSerializer to serialize the class.
-
-To register a class, you need to change the configuration file gear.conf (or application.conf if you want it to take effect only for a single application).
-
-	:::json
-	gearpump {
-	  serializers {
-	    ## We will use default FieldSerializer to serialize this class type
-	    "org.apache.gearpump.UserMessage" = ""
-	    
-	    ## We will use a custom serializer to serialize this class type
-	    "org.apache.gearpump.UserMessage2" = "org.apache.gearpump.UserMessageSerializer"
-	  }
-	}
-	
-
-#### How to define a custom serializer for the built-in Kryo serialization framework
-
-When you decide that you want to define a custom serializer, you can do this in two ways.
-
-Please note that Gearpump shades the original Kryo dependency. The package name ```com.esotericsoftware``` was relocated to ```org.apache.gearpump.esotericsoftware```. So in the following customization, you should import the corresponding shaded classes; the example code will show that part.
-
-In general you should use the shaded version of a library whenever possible in order to avoid binary incompatibilities, e.g. don't use:
-
-	:::scala
-	import com.google.common.io.Files
-
-
-but rather
-
-	:::scala
-	import org.apache.gearpump.google.common.io.Files
-
-
-##### System Level Serializer
-
-If the serializer is widely used, you can define a global serializer which is available to all applications (as well as workers and masters) in the system.
-
-###### Step 1: You first need to develop a Java library which contains the custom serializer class. Here is an example:
-
-	:::scala
-	package org.apache.gearpump
-	
-	import org.apache.gearpump.esotericsoftware.kryo.{Kryo, Serializer}
-	import org.apache.gearpump.esotericsoftware.kryo.io.{Input, Output}
-	
-	class UserMessage(val longField: Long, val intField: Int)
-	
-	class UserMessageSerializer extends Serializer[UserMessage] {
-	  override def write(kryo: Kryo, output: Output, obj: UserMessage) = {
-	    output.writeLong(obj.longField)
-	    output.writeInt(obj.intField)
-	  }
-	
-	  override def read(kryo: Kryo, input: Input, typ: Class[UserMessage]): UserMessage = {
-	    val longField = input.readLong()
-	    val intField = input.readInt()
-	    new UserMessage(longField, intField)
-	  }
-	}
-
-
-###### Step 2: Distribute the libraries
-
-Distribute the jar file to the lib/ folder of every Gearpump installation in the cluster.
-
-###### Step 3: Change gear.conf on every machine in the cluster:
-
-	:::json
-	gearpump {
-	  serializers {
-	    "org.apache.gearpump.UserMessage" = "org.apache.gearpump.UserMessageSerializer"
-	  }
-	}
-	
-
-###### All set!
-
-##### Define Application level custom serializer
-If all you want is to define an application-level serializer, which is only visible to the current application's AppMaster and Executors (including tasks), you can follow a different approach.
-
-###### Step 1: Define your custom Serializer class
-
-You should include the Serializer class in your application jar. Here is an example of a custom serializer:
-
-	:::scala
-	package org.apache.gearpump
-	
-	import org.apache.gearpump.esotericsoftware.kryo.{Kryo, Serializer}
-	import org.apache.gearpump.esotericsoftware.kryo.io.{Input, Output}
-	
-	class UserMessage(val longField: Long, val intField: Int)
-	
-	class UserMessageSerializer extends Serializer[UserMessage] {
-	  override def write(kryo: Kryo, output: Output, obj: UserMessage) = {
-	    output.writeLong(obj.longField)
-	    output.writeInt(obj.intField)
-	  }
-	
-	  override def read(kryo: Kryo, input: Input, typ: Class[UserMessage]): UserMessage = {
-	    val longField = input.readLong()
-	    val intField = input.readInt()
-	    new UserMessage(longField, intField)
-	  }
-	}
-
-
-###### Step 2: Put an application.conf in your classpath on the client machine where you submit the application
-
-	:::json
-	### content of application.conf
-	gearpump {
-	  serializers {
-	    "org.apache.gearpump.UserMessage" = "org.apache.gearpump.UserMessageSerializer"
-	  }
-	}
-	
-
-###### Step 3: All set!
-
-#### Advanced: Choose another serialization framework
-
-Note: This is only for advanced users who require deep customization of the Gearpump platform.
-
-There are other serialization frameworks besides Kryo, such as Protobuf. If you don't want to use the built-in Kryo serialization framework, you can plug in a new serialization framework.
-
-Basically, you need to define it in the gear.conf (or application.conf for a single application's scope) file like this:
-
-	:::bash
-	gearpump.serialization-framework = "org.apache.gearpump.serializer.CustomSerializationFramework"
-	
-
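-For illustration only, here is a rough sketch of what such a framework class might look like. The trait members used here (`init` and `get` on `SerializationFramework`, `serialize`/`deserialize` on `Serializer`) are assumptions about Gearpump's serializer package, and plain JDK serialization stands in for a real framework; please verify the exact contracts against the Gearpump source before relying on this.
-
-	:::scala
-	package org.apache.gearpump.serializer
-	
-	import java.io.{ByteArrayInputStream, ByteArrayOutputStream, ObjectInputStream, ObjectOutputStream}
-	
-	import akka.actor.ExtendedActorSystem
-	import com.typesafe.config.Config
-	
-	// Hypothetical sketch only: the trait signatures are assumptions,
-	// check org.apache.gearpump.serializer.SerializationFramework in the source.
-	class CustomSerializationFramework extends SerializationFramework {
-	
-	  override def init(system: ExtendedActorSystem, config: Config): Unit = {
-	    // Initialize your framework here (e.g. register Protobuf message types).
-	  }
-	
-	  override def get(): Serializer = new Serializer {
-	    // For this sketch, plain JDK serialization stands in for a real framework.
-	    override def serialize(message: Any): Array[Byte] = {
-	      val bytes = new ByteArrayOutputStream()
-	      val out = new ObjectOutputStream(bytes)
-	      out.writeObject(message.asInstanceOf[AnyRef])
-	      out.close()
-	      bytes.toByteArray
-	    }
-	
-	    override def deserialize(bytes: Array[Byte]): Any = {
-	      val in = new ObjectInputStream(new ByteArrayInputStream(bytes))
-	      try in.readObject() finally in.close()
-	    }
-	  }
-	}
-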
-Please find an example in the Gearpump Storm module; search for "StormSerializationFramework" in the source code.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-gearpump/blob/5f90b70f/docs/docs/dev/dev-ide-setup.md
----------------------------------------------------------------------
diff --git a/docs/docs/dev/dev-ide-setup.md b/docs/docs/dev/dev-ide-setup.md
deleted file mode 100644
index fa983ba..0000000
--- a/docs/docs/dev/dev-ide-setup.md
+++ /dev/null
@@ -1,29 +0,0 @@
-### Intellij IDE Setup
-
-1. In IntelliJ, install the Scala plugin. We are using Scala version {{SCALA_BINARY_VERSION}}.
-2. Open the menu "File->Open" to open the Gearpump root project, then choose the Gearpump source folder.
-3. All set.
-
-**NOTE:** The IntelliJ Scala plugin already bundles sbt support. If you have the Scala plugin installed, please don't install an additional sbt plugin. Check your settings at "Settings -> Plugins".
-**NOTE:** If you are behind a proxy, to speed up the build, please set the proxy for sbt in "Settings -> Build Tools -> SBT". In the input field "VM parameters", add:
-
-	:::bash
-	-Dhttp.proxyHost=<proxy host>
-	-Dhttp.proxyPort=<port like 911>
-	-Dhttps.proxyHost=<proxy host>
-	-Dhttps.proxyPort=<port like 911>
-	
-
-### Eclipse IDE Setup
-
-This section shows how to do this in Eclipse Luna.
-
-There is an sbt-eclipse plugin to generate Eclipse project files, but it seems to have some bugs, and some manual fixes are still required. Here are the steps that work:
-
-1. Install the latest version of Eclipse Luna.
-2. Install the latest Scala IDE from http://scala-ide.org/download/current.html. The update site address used here is: http://download.scala-ide.org/sdk/lithium/e44/scala211/stable/site
-3. Open an sbt shell under the root folder of Gearpump and enter "eclipse"; this generates all the Eclipse project files.
-4. Use the Eclipse import wizard: File->Import->Existing projects into Workspace, and make sure to tick the option "Search for nested projects".
-5. Eclipse may then start to complain about encoding errors, like "IO error while decoding". You need to fix the Eclipse default text encoding by changing the configuration at "Window->Preference->General->Workspace->Text file encoding" to UTF-8.
-6. The project gearpump-external-kafka may still fail to compile. The reason is that some dependencies are missing in the .classpath file generated by sbt-eclipse, so a manual fix is needed. Right click on the gearpump-external-kafka project icon in Eclipse, then choose the menu "Build Path->Configure Build Path". A window will pop up. Under the tab "Projects", click "Add" and choose "gearpump-streaming".
-7. All set. Now the project should compile OK in Eclipse.

http://git-wip-us.apache.org/repos/asf/incubator-gearpump/blob/5f90b70f/docs/docs/dev/dev-non-streaming-example.md
----------------------------------------------------------------------
diff --git a/docs/docs/dev/dev-non-streaming-example.md b/docs/docs/dev/dev-non-streaming-example.md
deleted file mode 100644
index 3d1d5e0..0000000
--- a/docs/docs/dev/dev-non-streaming-example.md
+++ /dev/null
@@ -1,133 +0,0 @@
-We'll use [Distributed Shell](https://github.com/apache/incubator-gearpump/blob/master/examples/distributedshell) as an example to illustrate how to do that.
-
-With Distributed Shell, the user sends a shell command to the cluster; the command is executed on each node, and the results are returned to the user.
-
-### Maven/Sbt Settings
-
-Repository and library dependencies can be found at [Maven Setting](http://gearpump.incubator.apache.org/downloads.html#maven-dependencies)
-
-### Define Executor Class
-
-	:::scala
-	// Standard imports; Gearpump-specific classes (ExecutorContext, UserConfig,
-	// ShellCommand, ShellCommandResult) are omitted for brevity.
-	import akka.actor.Actor
-	import scala.sys.process._
-	import scala.util.{Failure, Success, Try}
-	
-	class ShellExecutor(executorContext: ExecutorContext, userConf: UserConfig) extends Actor {
-	  import executorContext._
-	
-	  override def receive: Receive = {
-	    case ShellCommand(command, args) =>
-	      val process = Try(s"$command $args" !!)
-	      val result = process match {
-	        case Success(msg) => msg
-	        case Failure(ex) => ex.getMessage
-	      }
-	      sender ! ShellCommandResult(executorId, result)
-	  }
-	}
-
-So ShellExecutor simply receives the ShellCommand, tries to execute it, and returns the result to the sender. Quite simple.
-
-### Define AppMaster Class
-For a non-streaming application, you have to write your own AppMaster.
-
-Here is a typical user-defined AppMaster; please note that some trivial code is omitted.
-
-	:::scala
-	class DistShellAppMaster(appContext : AppMasterContext, app : Application) extends ApplicationMaster {
-	  protected var currentExecutorId = 0
-	
-	  override def preStart(): Unit = {
-	    ActorUtil.launchExecutorOnEachWorker(masterProxy, getExecutorJvmConfig, self)
-	  }
-	
-	  override def receive: Receive = {
-	    case ExecutorSystemStarted(executorSystem) =>
-	      import executorSystem.{address, worker, resource => executorResource}
-	      val executorContext = ExecutorContext(currentExecutorId, worker.workerId, appId, self, executorResource)
-	      val executor = context.actorOf(Props(classOf[ShellExecutor], executorContext, app.userConfig)
-	          .withDeploy(Deploy(scope = RemoteScope(address))), currentExecutorId.toString)
-	      executorSystem.bindLifeCycleWith(executor)
-	      currentExecutorId += 1
-	    case StartExecutorSystemTimeout =>
-	      masterProxy ! ShutdownApplication(appId)
-	      context.stop(self)
-	    case msg: ShellCommand =>
-	      Future.fold(context.children.map(_ ? msg))(new ShellCommandResultAggregator) { (aggregator, response) =>
-	        aggregator.aggregate(response.asInstanceOf[ShellCommandResult])
-	      }.map(_.toString()) pipeTo sender
-	  }
-	
-	  private def getExecutorJvmConfig: ExecutorSystemJvmConfig = {
-	    val config: Config = Option(app.clusterConfig).map(_.getConfig).getOrElse(ConfigFactory.empty())
-	    val jvmSetting = Util.resolveJvmSetting(config.withFallback(context.system.settings.config)).executor
-	    ExecutorSystemJvmConfig(jvmSetting.classPath, jvmSetting.vmargs,
-	      appJar, username, config)
-	  }
-	}
-	
-
-So when this `DistShellAppMaster` starts, it first requests resources to launch one executor on each node; this is done in the method `preStart`.
-
-Then the DistShellAppMaster's receive handler uses the allocated resources to launch the `ShellExecutor` we want. If you want to write your own application, you can reuse this part of the code; the only thing you need to change is the Executor class.
-
-Resource allocation may fail, which results in the message `StartExecutorSystemTimeout`; the normal pattern to handle it is just what we do here: shut down the application.
-
-The real application logic is in the `ShellCommand` message handler, which is specific to each application. Here we distribute the shell command to each executor and aggregate the results back to the client.
-
-For the method `getExecutorJvmConfig`, you can reuse this part of the code as-is in your own application.
-
-### Define Application
-Now it's time to launch the application.
-
-	:::scala
-	object DistributedShell extends App with ArgumentsParser {
-	  private val LOG: Logger = LogUtil.getLogger(getClass)
-	
-	  override val options: Array[(String, CLIOption[Any])] = Array.empty
-	
-	  LOG.info(s"Distributed shell submitting application...")
-	  val context = ClientContext()
-	  val appId = context.submit(Application[DistShellAppMaster]("DistributedShell", UserConfig.empty))
-	  context.close()
-	  LOG.info(s"Distributed Shell Application started with appId $appId !")
-	}
-
-The application class extends `App` and `ArgumentsParser`, which makes it easier to parse arguments and run the main function. This part is similar to the streaming applications.
-
-The main class `DistributedShell` will submit an application to `Master`, whose `AppMaster` is `DistShellAppMaster`.
-
-### Define an optional Client class
-
-Now, we can define a `Client` class to talk with `AppMaster` to pass our commands to it.
-
-	:::scala
-	object DistributedShellClient extends App with ArgumentsParser  {
-	  implicit val timeout = Constants.FUTURE_TIMEOUT
-	  import scala.concurrent.ExecutionContext.Implicits.global
-	  private val LOG: Logger = LoggerFactory.getLogger(getClass)
-	
-	  override val options: Array[(String, CLIOption[Any])] = Array(
-	    "master" -> CLIOption[String]("<host1:port1,host2:port2,host3:port3>", required = true),
-	    "appid" -> CLIOption[Int]("<the distributed shell appid>", required = true),
-	    "command" -> CLIOption[String]("<shell command>", required = true),
-	    "args" -> CLIOption[String]("<shell arguments>", required = true)
-	  )
-	
-	  val config = parse(args)
-	  val context = ClientContext(config.getString("master"))
-	  val appid = config.getInt("appid")
-	  val command = config.getString("command")
-	  val arguments = config.getString("args")
-	  val appMaster = context.resolveAppID(appid)
-	  (appMaster ? ShellCommand(command, arguments)).map { result =>
-	    LOG.info(s"Result: $result")
-	    context.close()
-	  }
-	}
-	
-
-The `DistributedShellClient` resolves the appid to the actual AppMaster (the application id is printed when launching `DistributedShell`).
-
-Once we have the `AppMaster`, we can send a `ShellCommand` to it and wait for the result.
-
-### Submit application
-
-After all this, you need to package everything into an uber jar and submit the jar to the Gearpump cluster. Please check [Application submission tool](../introduction/commandline) for the command line tool syntax.
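-
-For instance, a submission might look like the sketch below, which follows the `gear app` syntax used in the command line documentation; the jar name and main class shown here are only placeholders for your own build output.
-
-	:::bash
-	# hypothetical jar name and main class; replace with your own uber jar and entry point
-	bin/gear app -jar distributedshell-assembly.jar org.apache.gearpump.examples.distributedshell.DistributedShell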

http://git-wip-us.apache.org/repos/asf/incubator-gearpump/blob/5f90b70f/docs/docs/dev/dev-rest-api.md
----------------------------------------------------------------------
diff --git a/docs/docs/dev/dev-rest-api.md b/docs/docs/dev/dev-rest-api.md
deleted file mode 100644
index 85c4706..0000000
--- a/docs/docs/dev/dev-rest-api.md
+++ /dev/null
@@ -1,1083 +0,0 @@
-## Authentication.
-
-For all REST API calls, authentication is required by default. If you don't want authentication, you can disable it.
-
-### How to disable Authentication
-To disable Authentication, you can set `gearpump-ui.gearpump.ui-security.authentication-enabled = false`
-in gear.conf, please check [UI Authentication](../deployment/deployment-ui-authentication) for details.
-
-### How to authenticate if Authentication is enabled.
-
-#### For User-Password based authentication
-
-If Authentication is enabled, then you need to login before calling REST API.
-
-	:::bash
-	curl  -X POST  --data username=admin --data password=admin --cookie-jar outputAuthenticationCookie.txt http://127.0.0.1:8090/login
-	
-
-This will use the default user "admin:admin" to log in, and store the authentication cookie in the file outputAuthenticationCookie.txt.
-
-In all subsequent REST API calls, you need to include the authentication cookie. For example
-
-	:::bash
-	curl --cookie outputAuthenticationCookie.txt http://127.0.0.1/api/v1.0/master
-	
-
-For more information, please check [UI Authentication](../deployment/deployment-ui-authentication).
-
-#### For OAuth2 based authentication
-
-For OAuth2 based authentication, it requires you to have an access token in place.
-
-Different OAuth2 service providers have different ways to return an access token.
-
-**For Google**, you can refer to [OAuth Doc](https://developers.google.com/identity/protocols/OAuth2).
-
-**For CloudFoundry UAA**, you can use the uaac command to get the access token.
-
-	:::bash
-	$ uaac target http://login.gearpump.gotapaas.eu/
-	$ uaac token get <user_email_address>
-	
-	### Find access token
-	$ uaac context
-	
-	[0]*[http://login.gearpump.gotapaas.eu]
-	
-	  [0]*[<user_email_address>]
-	      user_id: 34e33a79-42c6-479b-a8c1-8c471ff027fb
-	      client_id: cf
-	      token_type: bearer
-	      access_token: eyJhbGciOiJSUzI1NiJ9.eyJqdGkiOiI
-	      expires_in: 599
-	      scope: password.write openid cloud_controller.write cloud_controller.read
-	      jti: 74ea49e4-1001-4757-9f8d-a66e52a27557
-	
-
-For more information on uaac, please check [UAAC guide](https://docs.cloudfoundry.org/adminguide/uaa-user-management.html)
-
-Now that we have the access token, let's log in to the Gearpump UI server with it:
-
-	:::bash
-	## Please replace cloudfoundryuaa with actual OAuth2 service name you have configured in gear.conf
-	curl  -X POST  --data accesstoken=eyJhbGciOiJSUzI1NiJ9.eyJqdGkiOiI --cookie-jar outputAuthenticationCookie.txt http://127.0.0.1:8090/login/oauth2/cloudfoundryuaa/accesstoken
-	
-
-This will use the user `user_email_address` to log in, and store the authentication cookie in the file outputAuthenticationCookie.txt.
-
-In all subsequent REST API calls, you need to include the authentication cookie. For example
-
-	:::bash
-	curl --cookie outputAuthenticationCookie.txt http://127.0.0.1/api/v1.0/master
-	
-
-**NOTE:** You can set the default permission level for OAuth2 users. For more information,
-please check [UI Authentication](../deployment/deployment-ui-authentication).
-
-## Query version
-
-### GET version
-
-Example:
-
-	:::bash
-	curl [--cookie outputAuthenticationCookie.txt] http://127.0.0.1:8090/version
-	
-
-Sample Response:
-
-	:::bash
-	{{GEARPUMP_VERSION}}
-	
-
-## Master Service
-
-### GET api/v1.0/master
-Get information of masters
-
-Example:
-
-	:::bash
-	curl [--cookie outputAuthenticationCookie.txt] http://127.0.0.1:8090/api/v1.0/master
-	
-
-Sample Response:
-
-	:::json
-	{
-	  "masterDescription": {
-	    "leader":{"host":"master@127.0.0.1","port":3000},
-	    "cluster":[{"host":"127.0.0.1","port":3000}],
-	    "aliveFor": "642941",
-	    "logFile": "/Users/foobar/gearpump/logs",
-	    "jarStore": "jarstore/",
-	    "masterStatus": "synced",
-	    "homeDirectory": "/Users/foobar/gearpump"
-	  }
-	}
-	
-
-### GET api/v1.0/master/applist
-Query information of all applications
-
-Example:
-
-	:::bash
-	curl [--cookie outputAuthenticationCookie.txt] http://127.0.0.1:8090/api/v1.0/master/applist
-	
-
-Sample Response:
-
-	:::json
-	{
-	  "appMasters": [
-	    {
-	      "status": "active",
-	      "appId": 1,
-	      "appName": "wordCount",
-	      "appMasterPath": "akka.tcp://app1-executor-1@127.0.0.1:52212/user/daemon/appdaemon1/$c",
-	      "workerPath": "akka.tcp://master@127.0.0.1:3000/user/Worker0",
-	      "submissionTime": "1450758114766",
-	      "startTime": "1450758117294",
-	      "user": "lisa"
-	    }
-	  ]
-	}
-
-### GET api/v1.0/master/workerlist
-Query information of all workers
-
-Example:
-
-	:::bash
-	curl [--cookie outputAuthenticationCookie.txt] http://127.0.0.1:8090/api/v1.0/master/workerlist
-
-Sample Response:
-
-	:::json
-	[
-	  {
-	    "workerId": "1",
-	    "state": "active",
-	    "actorPath": "akka.tcp://master@127.0.0.1:3000/user/Worker0",
-	    "aliveFor": "431565",
-	    "logFile": "logs/",
-	    "executors": [
-	      {
-	        "appId": 1,
-	        "executorId": -1,
-	        "slots": 1
-	      },
-	      {
-	        "appId": 1,
-	        "executorId": 0,
-	        "slots": 1
-	      }
-	    ],
-	    "totalSlots": 1000,
-	    "availableSlots": 998,
-	    "homeDirectory": "/usr/lisa/gearpump/",
-	    "jvmName": "11788@lisa"
-	  },
-	  {
-	    "workerId": "0",
-	    "state": "active",
-	    "actorPath": "akka.tcp://master@127.0.0.1:3000/user/Worker1",
-	    "aliveFor": "431546",
-	    "logFile": "logs/",
-	    "executors": [
-	      {
-	        "appId": 1,
-	        "executorId": 1,
-	        "slots": 1
-	      }
-	    ],
-	    "totalSlots": 1000,
-	    "availableSlots": 999,
-	    "homeDirectory": "/usr/lisa/gearpump/",
-	    "jvmName": "11788@lisa"
-	  }
-	]
-
-### GET api/v1.0/master/config
-Get the configuration of all masters
-
-Example:
-
-	:::bash
-	curl [--cookie outputAuthenticationCookie.txt] http://127.0.0.1:8090/api/v1.0/master/config
-
-Sample Response:
-
-	:::json
-	{
-	  "extensions": [
-	    "akka.contrib.datareplication.DataReplication$"
-	  ]
-	  "akka": {
-	    "loglevel": "INFO"
-	    "log-dead-letters": "off"
-	    "log-dead-letters-during-shutdown": "off"
-	    "actor": {
-	      ## Master forms a akka cluster
-	      "provider": "akka.cluster.ClusterActorRefProvider"
-	    }
-	    "cluster": {
-	      "roles": ["master"]
-	      "auto-down-unreachable-after": "15s"
-	    }
-	    "remote": {
-	      "log-remote-lifecycle-events": "off"
-	    }
-	  }
-	}
-	
-
-### GET api/v1.0/master/metrics/&lt;query_path&gt;?readLatest=&lt;true|false&gt;
-Get the master node metrics.
-
-Example:
-
-	:::bash
-	curl [--cookie outputAuthenticationCookie.txt] http://127.0.0.1:8090/api/v1.0/master/metrics/master?readLatest=true
-
-Sample Response:
-
-	:::json
-	{
-	    "path": "master",
-	    "metrics": [{
-	        "time": "1450758725070",
-	        "value": {"$type": "org.apache.gearpump.metrics.Metrics.Gauge", "name": "master:memory.heap.used", "value": "59764272"}
-	    }, {
-	        "time": "1450758725070",
-	        "value": {"$type": "org.apache.gearpump.metrics.Metrics.Gauge", "name": "master:thread.daemon.count", "value": "18"}
-	    }, {
-	        "time": "1450758725070",
-	        "value": {
-	            "$type": "org.apache.gearpump.metrics.Metrics.Gauge",
-	            "name": "master:memory.total.committed",
-	            "value": "210239488"
-	        }
-	    }, {
-	        "time": "1450758725070",
-	        "value": {"$type": "org.apache.gearpump.metrics.Metrics.Gauge", "name": "master:memory.heap.max", "value": "880017408"}
-	    }, {
-	        "time": "1450758725070",
-	        "value": {"$type": "org.apache.gearpump.metrics.Metrics.Gauge", "name": "master:memory.total.max", "value": "997457920"}
-	    }, {
-	        "time": "1450758725070",
-	        "value": {
-	            "$type": "org.apache.gearpump.metrics.Metrics.Gauge",
-	            "name": "master:memory.heap.committed",
-	            "value": "179830784"
-	        }
-	    }, {
-	        "time": "1450758725070",
-	        "value": {"$type": "org.apache.gearpump.metrics.Metrics.Gauge", "name": "master:memory.total.used", "value": "89117352"}
-	    }, {
-	        "time": "1450758725070",
-	        "value": {"$type": "org.apache.gearpump.metrics.Metrics.Gauge", "name": "master:thread.count", "value": "28"}
-	    }]
-	}
-	
-
-### POST api/v1.0/master/submitapp
-Submit a streaming job jar to the Gearpump cluster. It functions like the command line
-
-	:::bash
-	gear app -jar xx.jar -conf yy.conf -executors 1 <command line arguments>
-	
-
-Required MIME type: "multipart/form-data"
-
-Required post form fields:
-
-1. field name "jar", job jar file.
-
-Optional post form fields:
-
-1. "configfile", configuration file, in UTF8 format.
-2. "configstring", text body of configuration file, in UTF8 format.
-3. "executorcount", the number of JVM processes to start across the cluster for this application job.
-4. "args", command line arguments for this job jar.
-
-Example html:
-
-	:::html
-	<form id="submitapp" action="http://127.0.0.1:8090/api/v1.0/master/submitapp"
-	method="POST" enctype="multipart/form-data">
-	 
-	Job Jar (*.jar) [Required]:  <br/>
-	<input type="file" name="jar"/> <br/> <br/>
-	 
-	Config file (*.conf) [Optional]:  <br/>
-	<input type="file" name="configfile"/> <br/>  <br/>
-	 
-	Config String, Config File in string format. [Optional]: <br/>
-	<input type="text" name="configstring" value="a.b.c.d=1"/> <br/><br/>
-	 
-	Executor count (integer, how many process to start for this streaming job) [Optional]: <br/>
-	<input type="text" name="executorcount" value="1"/> <br/><br/>
-	 
-	Application arguments (String) [Optional]: <br/>
-	<input type="text" name="args" value=""/> <br/><br/>
-	 
-	<input type="submit" value="Submit"/>
-	 
-	</form>
-
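-A command line submission of the same form, as a sketch using curl's multipart support and the field names listed above (the jar path and argument values are placeholders):
-
-	:::bash
-	curl -X POST --cookie outputAuthenticationCookie.txt \
-	  -F "jar=@/path/to/your-app.jar" \
-	  -F "executorcount=1" \
-	  -F "args=<command line arguments>" \
-	  http://127.0.0.1:8090/api/v1.0/master/submitapp
-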
-### POST api/v1.0/master/submitstormapp
-Submit a Storm jar to the Gearpump cluster. It functions like the command line
-
-	:::bash
-	storm app -jar xx.jar -conf yy.yaml <command line arguments>
-
-Required MIME type: "multipart/form-data"
-
-Required post form fields:
-
-1. field name "jar", job jar file.
-
-Optional post form fields:
-
-1. "configfile", .yaml configuration file, in UTF8 format.
-2. "args", command line arguments for this job jar.
-
-Example html:
-
-	:::html
-	<form id="submitstormapp" action="http://127.0.0.1:8090/api/v1.0/master/submitstormapp"
-	method="POST" enctype="multipart/form-data">
-	 
-	Job Jar (*.jar) [Required]:  <br/>
-	<input type="file" name="jar"/> <br/> <br/>
-	 
-	Config file (*.yaml) [Optional]:  <br/>
-	<input type="file" name="configfile"/> <br/>  <br/>
-	
-	Application arguments (String) [Optional]: <br/>
-	<input type="text" name="args" value=""/> <br/><br/>
-	 
-	<input type="submit" value="Submit"/>
-	 
-	</form>
-	
-
-## Worker Service
-
-### GET api/v1.0/worker/&lt;workerId&gt;
-Query worker information.
-
-Example:
-
-	:::bash
-	curl [--cookie outputAuthenticationCookie.txt] http://127.0.0.1:8090/api/v1.0/worker/0
-
-Sample Response:
-
-	:::json
-	{
-	  "workerId": "0",
-	  "state": "active",
-	  "actorPath": "akka.tcp://master@127.0.0.1:3000/user/Worker1",
-	  "aliveFor": "831069",
-	  "logFile": "logs/",
-	  "executors": [
-	    {
-	      "appId": 1,
-	      "executorId": 1,
-	      "slots": 1
-	    }
-	  ],
-	  "totalSlots": 1000,
-	  "availableSlots": 999,
-	  "homeDirectory": "/usr/lisa/gearpump/",
-	  "jvmName": "11788@lisa"
-	}
-
-### GET api/v1.0/worker/&lt;workerId&gt;/config
-Query worker config
-
-Example:
-
-	:::bash
-	curl [--cookie outputAuthenticationCookie.txt] http://127.0.0.1:8090/api/v1.0/worker/0/config
-	
-
-Sample Response:
-
-	:::json
-	{
-	  "extensions": [
-	    "akka.contrib.datareplication.DataReplication$"
-	  ]
-	  "akka": {
-	    "loglevel": "INFO"
-	    "log-dead-letters": "off"
-	    "log-dead-letters-during-shutdown": "off"
-	    "actor": {
-	      ## Master forms a akka cluster
-	      "provider": "akka.cluster.ClusterActorRefProvider"
-	    }
-	    "cluster": {
-	      "roles": ["master"]
-	      "auto-down-unreachable-after": "15s"
-	    }
-	    "remote": {
-	      "log-remote-lifecycle-events": "off"
-	    }
-	  }
-	}
-	
-
-### GET api/v1.0/worker/&lt;workerId&gt;/metrics/&lt;query_path&gt;?readLatest=&lt;true|false&gt;
-Get the worker node metrics.
-
-Example:
-
-	:::bash
-	curl [--cookie outputAuthenticationCookie.txt] http://127.0.0.1:8090/api/v1.0/worker/0/metrics/worker?readLatest=true
-
-Sample Response:
-
-	:::json
-	{
-	    "path": "worker",
-	    "metrics": [{
-	        "time": "1450759137860",
-	        "value": {
-	            "$type": "org.apache.gearpump.metrics.Metrics.Gauge",
-	            "name": "worker1:memory.total.used",
-	            "value": "152931440"
-	        }
-	    }, {
-	        "time": "1450759137860",
-	        "value": {"$type": "org.apache.gearpump.metrics.Metrics.Gauge", "name": "worker1:thread.daemon.count", "value": "18"}
-	    }, {
-	        "time": "1450759137860",
-	        "value": {
-	            "$type": "org.apache.gearpump.metrics.Metrics.Gauge",
-	            "name": "worker0:memory.heap.used",
-	            "value": "123139640"
-	        }
-	    }, {
-	        "time": "1450759137860",
-	        "value": {
-	            "$type": "org.apache.gearpump.metrics.Metrics.Gauge",
-	            "name": "worker0:memory.total.max",
-	            "value": "997457920"
-	        }
-	    }, {
-	        "time": "1450759137860",
-	        "value": {
-	            "$type": "org.apache.gearpump.metrics.Metrics.Gauge",
-	            "name": "worker0:memory.heap.committed",
-	            "value": "179830784"
-	        }
-	    }, {
-	        "time": "1450759137860",
-	        "value": {"$type": "org.apache.gearpump.metrics.Metrics.Gauge", "name": "worker0:thread.count", "value": "28"}
-	    }, {
-	        "time": "1450759137860",
-	        "value": {"$type": "org.apache.gearpump.metrics.Metrics.Gauge", "name": "worker0:memory.heap.max", "value": "880017408"}
-	    }, {
-	        "time": "1450759137860",
-	        "value": {"$type": "org.apache.gearpump.metrics.Metrics.Gauge", "name": "worker1:memory.heap.max", "value": "880017408"}
-	    }, {
-	        "time": "1450759137860",
-	        "value": {
-	            "$type": "org.apache.gearpump.metrics.Metrics.Gauge",
-	            "name": "worker0:memory.total.committed",
-	            "value": "210239488"
-	        }
-	    }, {
-	        "time": "1450759137860",
-	        "value": {
-	            "$type": "org.apache.gearpump.metrics.Metrics.Gauge",
-	            "name": "worker0:memory.total.used",
-	            "value": "152931440"
-	        }
-	    }, {
-	        "time": "1450759137860",
-	        "value": {"$type": "org.apache.gearpump.metrics.Metrics.Gauge", "name": "worker1:thread.count", "value": "28"}
-	    }, {
-	        "time": "1450759137860",
-	        "value": {
-	            "$type": "org.apache.gearpump.metrics.Metrics.Gauge",
-	            "name": "worker1:memory.total.max",
-	            "value": "997457920"
-	        }
-	    }, {
-	        "time": "1450759137860",
-	        "value": {
-	            "$type": "org.apache.gearpump.metrics.Metrics.Gauge",
-	            "name": "worker1:memory.heap.committed",
-	            "value": "179830784"
-	        }
-	    }, {
-	        "time": "1450759137860",
-	        "value": {
-	            "$type": "org.apache.gearpump.metrics.Metrics.Gauge",
-	            "name": "worker1:memory.total.committed",
-	            "value": "210239488"
-	        }
-	    }, {
-	        "time": "1450759137860",
-	        "value": {"$type": "org.apache.gearpump.metrics.Metrics.Gauge", "name": "worker0:thread.daemon.count", "value": "18"}
-	    }, {
-	        "time": "1450759137860",
-	        "value": {
-	            "$type": "org.apache.gearpump.metrics.Metrics.Gauge",
-	            "name": "worker1:memory.heap.used",
-	            "value": "123139640"
-	        }
-	    }]
-	}
-	
-
-## Supervisor Service
-
-The Supervisor service allows users to add or remove a worker machine.
-
-### POST api/v1.0/supervisor/status
-Query whether the supervisor service is enabled. If the Supervisor service is disabled, you are not allowed to use APIs like addworker/removeworker.
-
-Example:
-
-	:::bash
-	curl -X POST [--cookie outputAuthenticationCookie.txt] http://127.0.0.1:8090/api/v1.0/supervisor/status
-
-Sample Response:
-
-	:::json
-	{"enabled":true}
-	
-
-### GET api/v1.0/supervisor
-Get the supervisor path
-
-Example:
-
-	:::bash
-	curl [--cookie outputAuthenticationCookie.txt] http://127.0.0.1:8090/api/v1.0/supervisor
-	
-
-Sample Response:
-
-	:::json
-	{"path": "supervisor actor path"}
-
-### POST api/v1.0/supervisor/addworker/&lt;worker-count&gt;
-Add &lt;worker-count&gt; new workers to the cluster. It will use the low-level resource scheduler (such as
-YARN) to start new containers and then boot the Gearpump worker processes.
-
-Example:
-
-	:::bash
-	curl -X POST [--cookie outputAuthenticationCookie.txt] http://127.0.0.1:8090/api/v1.0/supervisor/addworker/2
-	
-
-
-Sample Response:
-
-	:::json
-	{"success": true}
-
-### POST api/v1.0/supervisor/removeworker/&lt;worker-id&gt;
-Remove a single worker instance by specifying a worker Id.
-
-**NOTE:** Use with caution!
-
-**NOTE:** All executor JVMs under this worker JVM will also be destroyed. This will trigger failover for all
-applications that have executors started under this worker.
-
-Example:
-
-	:::bash
-	curl -X POST [--cookie outputAuthenticationCookie.txt] http://127.0.0.1:8090/api/v1.0/supervisor/removeworker/3
-
-
-Sample Response:
-
-	:::json
-	{"success": true}
-
-## Application Service
-
-### GET api/v1.0/appmaster/&lt;appId&gt;?detail=&lt;true|false&gt;
-Query information of a specific application with Id appId
-
-Example:
-
-	:::bash
-	curl [--cookie outputAuthenticationCookie.txt] http://127.0.0.1:8090/api/v1.0/appmaster/1?detail=true
-
-Sample Response:
-
-	:::json
-	{
-	  "appId": 1,
-	  "appName": "wordCount",
-	  "processors": [
-	    [
-	      0,
-	      {
-	        "id": 0,
-	        "taskClass": "org.apache.gearpump.streaming.examples.wordcount.Split",
-	        "parallelism": 1,
-	        "description": "",
-	        "taskConf": {
-	          "_config": {}
-	        },
-	        "life": {
-	          "birth": "0",
-	          "death": "9223372036854775807"
-	        },
-	        "executors": [
-	          1
-	        ],
-	        "taskCount": [
-	          [
-	            1,
-	            {
-	              "count": 1
-	            }
-	          ]
-	        ]
-	      }
-	    ],
-	    [
-	      1,
-	      {
-	        "id": 1,
-	        "taskClass": "org.apache.gearpump.streaming.examples.wordcount.Sum",
-	        "parallelism": 1,
-	        "description": "",
-	        "taskConf": {
-	          "_config": {}
-	        },
-	        "life": {
-	          "birth": "0",
-	          "death": "9223372036854775807"
-	        },
-	        "executors": [
-	          0
-	        ],
-	        "taskCount": [
-	          [
-	            0,
-	            {
-	              "count": 1
-	            }
-	          ]
-	        ]
-	      }
-	    ]
-	  ],
-	  "processorLevels": [
-	    [
-	      0,
-	      0
-	    ],
-	    [
-	      1,
-	      1
-	    ]
-	  ],
-	  "dag": {
-	    "vertexList": [
-	      0,
-	      1
-	    ],
-	    "edgeList": [
-	      [
-	        0,
-	        "org.apache.gearpump.partitioner.HashPartitioner",
-	        1
-	      ]
-	    ]
-	  },
-	  "actorPath": "akka.tcp://app1-executor-1@127.0.0.1:52212/user/daemon/appdaemon1/$c/appmaster",
-	  "clock": "1450759382430",
-	  "executors": [
-	    {
-	      "executorId": 0,
-	      "executor": "akka.tcp://app1system0@127.0.0.1:52240/remote/akka.tcp/app1-executor-1@127.0.0.1:52212/user/daemon/appdaemon1/$c/appmaster/executors/0#-1554950276",
-	      "workerId": "1",
-	      "status": "active"
-	    },
-	    {
-	      "executorId": 1,
-	      "executor": "akka.tcp://app1system1@127.0.0.1:52241/remote/akka.tcp/app1-executor-1@127.0.0.1:52212/user/daemon/appdaemon1/$c/appmaster/executors/1#928082134",
-	      "workerId": "0",
-	      "status": "active"
-	    },
-	    {
-	      "executorId": -1,
-	      "executor": "akka://app1-executor-1/user/daemon/appdaemon1/$c/appmaster",
-	      "workerId": "1",
-	      "status": "active"
-	    }
-	  ],
-	  "startTime": "1450758117306",
-	  "uptime": "1268472",
-	  "user": "lisa",
-	  "homeDirectory": "/usr/lisa/gearpump/",
-	  "logFile": "logs/",
-	  "historyMetricsConfig": {
-	    "retainHistoryDataHours": 72,
-	    "retainHistoryDataIntervalMs": 3600000,
-	    "retainRecentDataSeconds": 300,
-	    "retainRecentDataIntervalMs": 15000
-	  }
-	}
-	
-
-### DELETE api/v1.0/appmaster/&lt;appId&gt;
-Shut down the application with Id appId
-
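-Example (a sketch following the same curl pattern as the other endpoints; appId 1 is just a placeholder):
-
-	:::bash
-	curl -X DELETE [--cookie outputAuthenticationCookie.txt] http://127.0.0.1:8090/api/v1.0/appmaster/1
-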
-### GET api/v1.0/appmaster/&lt;appId&gt;/stallingtasks
-Query the list of unhealthy tasks of a specific application with Id appId
-
-Example:
-
-	:::bash
-	curl [--cookie outputAuthenticationCookie.txt] http://127.0.0.1:8090/api/v1.0/appmaster/2/stallingtasks
-	
-
-Sample Response:
-
-	:::json
-	{
-	  "tasks": [
-	    {
-	      "processorId": 0,
-	      "index": 0
-	    }
-	  ]
-	}
-	
-
-### GET api/v1.0/appmaster/&lt;appId&gt;/config
-Query the configuration of a specific application with Id appId
-
-Example:
-
-	:::bash
-	curl [--cookie outputAuthenticationCookie.txt] http://127.0.0.1:8090/api/v1.0/appmaster/1/config
-	
-
-Sample Response:
-
-	:::json
-	{
-	    "gearpump" : {
-	        "appmaster" : {
-	            "extraClasspath" : "",
-	            "vmargs" : "-server -Xms512M -Xmx1024M -Xss1M -XX:+HeapDumpOnOutOfMemoryError -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=80 -XX:+UseParNewGC -XX:NewRatio=3"
-	        },
-	        "cluster" : {
-	            "masters" : [
-	                "127.0.0.1:3000"
-	            ]
-	        },
-	        "executor" : {
-	            "extraClasspath" : "",
-	            "vmargs" : "-server -Xms512M -Xmx1024M -Xss1M -XX:+HeapDumpOnOutOfMemoryError -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=80 -XX:+UseParNewGC -XX:NewRatio=3"
-	        },
-	        "jarstore" : {
-	            "rootpath" : "jarstore/"
-	        },
-	        "log" : {
-	            "application" : {
-	                "dir" : "logs"
-	            },
-	            "daemon" : {
-	                "dir" : "logs"
-	            }
-	        },
-	        "metrics" : {
-	            "enabled" : true,
-	            "graphite" : {
-	                "host" : "127.0.0.1",
-	                "port" : 2003
-	            },
-	            "logfile" : {},
-	            "report-interval-ms" : 15000,
-	            "reporter" : "akka",
-	            "retainHistoryData" : {
-	                "hours" : 72,
-	                "intervalMs" : 3600000
-	            },
-	            "retainRecentData" : {
-	                "intervalMs" : 15000,
-	                "seconds" : 300
-	            },
-	            "sample-rate" : 10
-	        },
-	        "netty" : {
-	            "base-sleep-ms" : 100,
-	            "buffer-size" : 5242880,
-	            "flush-check-interval" : 10,
-	            "max-retries" : 30,
-	            "max-sleep-ms" : 1000,
-	            "message-batch-size" : 262144
-	        },
-	        "netty-dispatcher" : "akka.actor.default-dispatcher",
-	        "scheduling" : {
-	            "scheduler-class" : "org.apache.gearpump.cluster.scheduler.PriorityScheduler"
-	        },
-	        "serializers" : {
-	            "[B" : "",
-	            "[C" : "",
-	            "[D" : "",
-	            "[F" : "",
-	            "[I" : "",
-	            "[J" : "",
-	            "[Ljava.lang.String;" : "",
-	            "[S" : "",
-	            "[Z" : "",
-	            "org.apache.gearpump.Message" : "org.apache.gearpump.streaming.MessageSerializer",
-	            "org.apache.gearpump.streaming.task.Ack" : "org.apache.gearpump.streaming.AckSerializer",
-	            "org.apache.gearpump.streaming.task.AckRequest" : "org.apache.gearpump.streaming.AckRequestSerializer",
-	            "org.apache.gearpump.streaming.task.LatencyProbe" : "org.apache.gearpump.streaming.LatencyProbeSerializer",
-	            "org.apache.gearpump.streaming.task.TaskId" : "org.apache.gearpump.streaming.TaskIdSerializer",
-	            "scala.Tuple1" : "",
-	            "scala.Tuple2" : "",
-	            "scala.Tuple3" : "",
-	            "scala.Tuple4" : "",
-	            "scala.Tuple5" : "",
-	            "scala.Tuple6" : "",
-	            "scala.collection.immutable.$colon$colon" : "",
-	            "scala.collection.immutable.List" : ""
-	        },
-	        "services" : {
-	            # gear.conf: 112
-	            "host" : "127.0.0.1",
-	            # gear.conf: 113
-	            "http" : 8090,
-	            # gear.conf: 114
-	            "ws" : 8091
-	        },
-	        "task-dispatcher" : "akka.actor.pined-dispatcher",
-	        "worker" : {
-	            # reference.conf: 100
-	            # # How many slots each worker contains
-	            "slots" : 100
-	        }
-	    }
-	}
-	
-
-### GET api/v1.0/appmaster/&lt;appId&gt;/metrics/&lt;query_path&gt;?readLatest=&lt;true|false&gt;&aggregator=&lt;aggregator_class&gt;
-Query metrics information of a specific application appId, filtered by the metrics path &lt;query_path&gt;.
-
-The aggregator parameter points to an aggregator class, which aggregates the current metrics and returns a smaller set.
-
-Example:
-
-	:::bash
-	curl [--cookie outputAuthenticationCookie.txt] http://127.0.0.1:8090/api/v1.0/appmaster/1/metrics/app1?readLatest=true&aggregator=org.apache.gearpump.streaming.metrics.ProcessorAggregator
-	
-
-Sample Response:
-
-	:::json
-	{
-	    "path": "worker",
-	    "metrics": [{
-	        "time": "1450759137860",
-	        "value": {
-	            "$type": "org.apache.gearpump.metrics.Metrics.Gauge",
-	            "name": "worker1:memory.total.used",
-	            "value": "152931440"
-	        }
-	    }, {
-	        "time": "1450759137860",
-	        "value": {"$type": "org.apache.gearpump.metrics.Metrics.Gauge", "name": "worker1:thread.daemon.count", "value": "18"}
-	    }, {
-	        "time": "1450759137860",
-	        "value": {
-	            "$type": "org.apache.gearpump.metrics.Metrics.Gauge",
-	            "name": "worker0:memory.heap.used",
-	            "value": "123139640"
-	        }
-	    }, {
-	        "time": "1450759137860",
-	        "value": {
-	            "$type": "org.apache.gearpump.metrics.Metrics.Gauge",
-	            "name": "worker0:memory.total.max",
-	            "value": "997457920"
-	        }
-	    }, {
-	        "time": "1450759137860",
-	        "value": {
-	            "$type": "org.apache.gearpump.metrics.Metrics.Gauge",
-	            "name": "worker0:memory.heap.committed",
-	            "value": "179830784"
-	        }
-	    }, {
-	        "time": "1450759137860",
-	        "value": {"$type": "org.apache.gearpump.metrics.Metrics.Gauge", "name": "worker0:thread.count", "value": "28"}
-	    }, {
-	        "time": "1450759137860",
-	        "value": {"$type": "org.apache.gearpump.metrics.Metrics.Gauge", "name": "worker0:memory.heap.max", "value": "880017408"}
-	    }, {
-	        "time": "1450759137860",
-	        "value": {"$type": "org.apache.gearpump.metrics.Metrics.Gauge", "name": "worker1:memory.heap.max", "value": "880017408"}
-	    }, {
-	        "time": "1450759137860",
-	        "value": {
-	            "$type": "org.apache.gearpump.metrics.Metrics.Gauge",
-	            "name": "worker0:memory.total.committed",
-	            "value": "210239488"
-	        }
-	    }, {
-	        "time": "1450759137860",
-	        "value": {
-	            "$type": "org.apache.gearpump.metrics.Metrics.Gauge",
-	            "name": "worker0:memory.total.used",
-	            "value": "152931440"
-	        }
-	    }, {
-	        "time": "1450759137860",
-	        "value": {"$type": "org.apache.gearpump.metrics.Metrics.Gauge", "name": "worker1:thread.count", "value": "28"}
-	    }, {
-	        "time": "1450759137860",
-	        "value": {
-	            "$type": "org.apache.gearpump.metrics.Metrics.Gauge",
-	            "name": "worker1:memory.total.max",
-	            "value": "997457920"
-	        }
-	    }, {
-	        "time": "1450759137860",
-	        "value": {
-	            "$type": "org.apache.gearpump.metrics.Metrics.Gauge",
-	            "name": "worker1:memory.heap.committed",
-	            "value": "179830784"
-	        }
-	    }, {
-	        "time": "1450759137860",
-	        "value": {
-	            "$type": "org.apache.gearpump.metrics.Metrics.Gauge",
-	            "name": "worker1:memory.total.committed",
-	            "value": "210239488"
-	        }
-	    }, {
-	        "time": "1450759137860",
-	        "value": {"$type": "org.apache.gearpump.metrics.Metrics.Gauge", "name": "worker0:thread.daemon.count", "value": "18"}
-	    }, {
-	        "time": "1450759137860",
-	        "value": {
-	            "$type": "org.apache.gearpump.metrics.Metrics.Gauge",
-	            "name": "worker1:memory.heap.used",
-	            "value": "123139640"
-	        }
-	    }]
-	}
-
-
-### GET api/v1.0/appmaster/&lt;appId&gt;/errors
-Get task error messages
-
-Example:
-
-	:::bash
-	curl [--cookie outputAuthenticationCookie.txt] http://127.0.0.1:8090/api/v1.0/appmaster/1/errors
-	
-
-Sample Response:
-
-	:::json
-	{"time":"0","error":null}
-	
-
-### POST api/v1.0/appmaster/&lt;appId&gt;/restart
-Restart the application
-
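-Example (a sketch following the same pattern as the other POST endpoints; appId 1 is just a placeholder):
-
-	:::bash
-	curl -X POST [--cookie outputAuthenticationCookie.txt] http://127.0.0.1:8090/api/v1.0/appmaster/1/restart
-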
-## Executor Service
-
-### GET api/v1.0/appmaster/&lt;appId&gt;/executor/&lt;executorid&gt;/config
-Get executor config
-
-Example:
-
-	:::bash
-	curl http://127.0.0.1:8090/api/v1.0/appmaster/1/executor/1/config
-	
-
-Sample Response:
-
-	:::json
-	{
-	  "extensions": [
-	    "akka.contrib.datareplication.DataReplication$"
-	  ]
-	  "akka": {
-	    "loglevel": "INFO"
-	    "log-dead-letters": "off"
-	    "log-dead-letters-during-shutdown": "off"
-	    "actor": {
-	      ## Master forms a akka cluster
-	      "provider": "akka.cluster.ClusterActorRefProvider"
-	    }
-	    "cluster": {
-	      "roles": ["master"]
-	      "auto-down-unreachable-after": "15s"
-	    }
-	    "remote": {
-	      "log-remote-lifecycle-events": "off"
-	    }
-	  }
-	}
-
-
-### GET api/v1.0/appmaster/&lt;appId&gt;/executor/&lt;executorid&gt;
-Get executor information.
-
-Example:
-
-	:::bash
-	curl [--cookie outputAuthenticationCookie.txt] http://127.0.0.1:8090/api/v1.0/appmaster/1/executor/1
-
-
-Sample Response:
-
-	:::json
-	{
-	  "id": 1,
-	  "workerId": "0",
-	  "actorPath": "akka.tcp://app1system1@127.0.0.1:52241/remote/akka.tcp/app1-executor-1@127.0.0.1:52212/user/daemon/appdaemon1/$c/appmaster/executors/1",
-	  "logFile": "logs/",
-	  "status": "active",
-	  "taskCount": 1,
-	  "tasks": [
-	    [
-	      0,
-	      [
-	        {
-	          "processorId": 0,
-	          "index": 0
-	        }
-	      ]
-	    ]
-	  ],
-	  "jvmName": "21304@lisa"
-	}
-

http://git-wip-us.apache.org/repos/asf/incubator-gearpump/blob/5f90b70f/docs/docs/dev/dev-storm.md
----------------------------------------------------------------------
diff --git a/docs/docs/dev/dev-storm.md b/docs/docs/dev/dev-storm.md
deleted file mode 100644
index e60b505..0000000
--- a/docs/docs/dev/dev-storm.md
+++ /dev/null
@@ -1,214 +0,0 @@
-Gearpump provides **binary compatibility** for Apache Storm applications. That is to say, users could easily grab an existing Storm jar and run it 
-on Gearpump. This documentation illustrates Gearpump's compatibility with Storm.  
-
-## What Storm features are supported on Gearpump 
-
-### Storm 0.9.x
-
-| Feature | Support |
-| ------- | ------- |
-| basic topology | yes |
-| DRPC | yes |
-| multi-lang | yes |
-| storm-kafka | yes |
-| Trident | no |
-
-### Storm 0.10.x
-
-| Feature | Support |
-| ----------- | -------------|
-| basic topology | yes | 
-| DRPC | yes |
-| multi-lang | yes |
-| storm-kafka | yes |
-| storm-hdfs| yes | 
-| storm-hbase | yes |
-| storm-hive | yes |
-| storm-jdbc | yes |
-| storm-redis | yes |
-| flux | yes |
-| storm-eventhubs | not verified |
-| Trident | no |
-
-### At Least Once support
-
-With Ackers enabled, there are two kinds of At Least Once support in both Storm 0.9.x and Storm 0.10.x.
-
-1. The spout will replay messages on message loss as long as the spout is alive
-2. If `KafkaSpout` is used, messages could be replayed from Kafka even if the spout crashes. 
-
-Gearpump supports the second for both Storm versions. 
-
-### Security support 
-
-Storm 0.10.x adds security support for the following connectors
-
-* [storm-hdfs](https://github.com/apache/storm/blob/0.10.x-branch/external/storm-hdfs/README.md)
-* [storm-hive](https://github.com/apache/storm/blob/0.10.x-branch/external/storm-hive/README.md)
-* [storm-hbase](https://github.com/apache/storm/blob/0.10.x-branch/external/storm-hbase/README.md)
-
-That means users could access kerberos-enabled HDFS, Hive and HBase with these connectors. Generally, Storm provides two approaches (please refer to the above links for more information)
-
-1. configure nimbus to automatically get delegation tokens on behalf of the topology submitter user
-2. kerberos keytabs are already distributed on worker hosts; users configure keytab path and principal
-
-Gearpump supports the second approach, and users need to add the classpath of HDFS/Hive/HBase to `gearpump.executor.extraClasspath` in `gear.conf` on each node. For example,
-
-	:::json
-	###################
-	### Executor argument configuration
-	### Executor JVM can contain multiple tasks
-	###################
-	executor {
-	vmargs = "-server -Xms512M -Xmx1024M -Xss1M -XX:+HeapDumpOnOutOfMemoryError -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=80 -XX:+UseParNewGC -XX:NewRatio=3  -Djava.rmi.server.hostname=localhost"
-	extraClasspath = "/etc/hadoop/conf"
-	}
-
-
-## How to run a Storm application on Gearpump
-
-This section shows how to run an existing Storm jar in a local Gearpump cluster.
-
-1. launch a local cluster
-  
-		:::bash   
-		bin/local
-   
-
-2. start a Gearpump Nimbus server 
-
-	Users need the server's address (`nimbus.host` and `nimbus.thrift.port`) to submit topologies later. The address is written to a yaml config file set with the `-output` option.
-	Users can provide an existing config file, in which case only the address will be overwritten. If not provided, a new file `app.yaml` is created with the config.
-
-		:::bash
-		bin/storm nimbus -output [conf <custom yaml config>]
-   
-   
-3. submit Storm applications
-
-	Users can either submit Storm applications through command line or UI. 
-   
-	a. submit Storm applications through command line
-
-		:::bash
-		bin/storm app -verbose -config app.yaml -jar storm-starter-${STORM_VERSION}.jar storm.starter.ExclamationTopology exclamation 
-     
-  
-	Users are able to configure their applications through the following options
-   
-     * `jar` - set the path of a Storm application jar
-     * `config` - submit the custom configuration file generated when launching Nimbus
-  
-  
-	b. submit Storm application through UI
-   
-     1. Click on the "Create" button on the applications page on UI. 
-     2. Click on the "Submit Storm Application" item in the pull down menu.
-     3. In the popup console, upload the Storm application jar and the configuration file generated when launching Nimbus,
-         and fill in `storm.starter.ExclamationTopology exclamation` as arguments.
-     4. Click on the "Submit" button   
-
-   Either way, check the dashboard and you should see data flowing through your topology. 
-  
-## How is it different from running on Storm
-
-### Topology submission
-
-When a client submits a Storm topology, Gearpump locally launches a simplified version of Storm's Nimbus server, `GearpumpNimbus`. `GearpumpNimbus` then translates the topology to a Gearpump directed acyclic graph (DAG), which is submitted to the Gearpump master and deployed as a Gearpump application.
-
-![storm_gearpump_cluster](../img/storm_gearpump_cluster.png)
-
-`GearpumpNimbus` supports the following methods
-  
-* `submitTopology` / `submitTopologyWithOpts`
-* `killTopology` / `killTopologyWithOpts`
-* `getTopology` / `getUserTopology`
-* `getClusterInfo`
-
-### Topology translation
-
-Here's an example of `WordCountTopology` with acker bolts (ackers) being translated into a Gearpump DAG.
-
-![storm_gearpump_dag](../img/storm_gearpump_dag.png)
-
-Gearpump creates a `StormProducer` for each Storm spout and a `StormProcessor` for each Storm bolt (except for ackers) with the same parallelism, and wires them together using the same grouping strategy (partitioning in Gearpump) as in Storm. 
-
-At runtime, spouts and bolts are running inside `StormProducer` tasks and `StormProcessor` tasks respectively. Messages emitted by spout are passed to `StormProducer`, transferred to `StormProcessor` and passed down to bolt.  Messages are serialized / de-serialized with Storm serializers.
-
-Storm ackers are dropped since Gearpump has a different mechanism of message tracking and flow control. 
-
-### Task execution
-
-Each Storm task is executed by a dedicated thread, while all Gearpump tasks of an executor share a thread pool. Generally, we can achieve better performance with a shared thread pool. It's possible, however, that some tasks block and take up all the threads. In that case, we can
-fall back to the Storm way by setting `gearpump.task-dispatcher` to `"gearpump.single-thread-dispatcher"` in `gear.conf`.
-
-### Message tracking 
-
-Storm tracks the lineage of each message with ackers to guarantee at-least-once message delivery. Failed messages are re-sent from spout.
-
-Gearpump [tracks messages between a sender and receiver in an efficient way](../introduction/gearpump-internals#how-do-we-detect-message-loss). Message loss causes the whole application to replay from the [minimum timestamp of all pending messages in the system](../introduction/gearpump-internals#application-clock-and-global-clock-service). 
-
-### Flow control
-
-Storm throttles the flow rate at the spout, which stops sending messages if the number of unacked messages exceeds `topology.max.spout.pending`.
-
-Gearpump has flow control between tasks such that a [sender cannot flood the receiver](../introduction/gearpump-internals#how-do-we-do-flow-control); the backpressure propagates all the way back to the source.
-
-### Configurations
-
-All Storm configurations are respected with the following priority order 
-
-	:::bash
-	defaults.yaml < custom file config < application config < component config
-
-
-where
-
-* application config is submitted from the Storm application along with the topology
-* component config is set in a spout / bolt with `getComponentConfiguration`
-* custom file config is specified with the `-config` option when submitting a Storm application from the command line, or uploaded from the UI
-
-## StreamCQL Support
-
-[StreamCQL](https://github.com/HuaweiBigData/StreamCQL) is a Continuous Query Language on RealTime Computation System open sourced by Huawei.
-Since StreamCQL already supports Storm, it's straightforward to run StreamCQL over Gearpump.
-
-1. Install StreamCQL as in the official [README](https://github.com/HuaweiBigData/StreamCQL#install-streamcql)
-
-2. Launch Gearpump Nimbus Server as before 
-
-3. Go to the installed stream-cql-binary, and change the following settings in `conf/streaming-site.xml` with the Nimbus configs output in Step 2.
-
-		:::xml
-		<property>
-		  <name>streaming.storm.nimbus.host</name>
-		  <value>${nimbus.host}</value>
-		</property>
-		<property>
-		  <name>streaming.storm.nimbus.port</name>
-		  <value>${nimbus.thrift.port}</value>
-		</property>
-   
- 
-4. Open the CQL client shell with `bin/cql` and execute a simple CQL example
-
-		:::sql
-		Streaming> CREATE INPUT STREAM s
-		   (id INT, name STRING, type INT)
-		SOURCE randomgen
-		   PROPERTIES ( timeUnit = "SECONDS", period = "1",
-		       eventNumPerperiod = "1", isSchedule = "true" );
-		   
-		CREATE OUTPUT STREAM rs
-		   (type INT, cc INT)
-		SINK consoleOutput;
-		   
-		INSERT INTO STREAM rs SELECT type, COUNT(id) as cc
-		   FROM s[RANGE 20 SECONDS BATCH]
-		   WHERE id > 5 GROUP BY type;
-		   
-		SUBMIT APPLICATION example;    
-   
-   
-5. Check the dashboard and you should see data flowing through a topology of 3 components.
-

http://git-wip-us.apache.org/repos/asf/incubator-gearpump/blob/5f90b70f/docs/docs/dev/dev-write-1st-app.md
----------------------------------------------------------------------
diff --git a/docs/docs/dev/dev-write-1st-app.md b/docs/docs/dev/dev-write-1st-app.md
deleted file mode 100644
index f28a303..0000000
--- a/docs/docs/dev/dev-write-1st-app.md
+++ /dev/null
@@ -1,370 +0,0 @@
-We'll use [wordcount](https://github.com/apache/incubator-gearpump/tree/master/examples/streaming/wordcount/src/main/scala/org/apache/gearpump/streaming/examples/wordcount) as an example to illustrate how to write Gearpump applications.
-
-### Maven/Sbt Settings
-
-Repository and library dependencies can be found at [Maven Setting](http://gearpump.apache.org/downloads.html#maven-dependencies).
-
-### IDE Setup (Optional)
-You can get your preferred IDE ready for Gearpump by following [this guide](dev-ide-setup).
-
-### Decide which language and API to use for writing 
-Gearpump supports two levels of API:
-
-1. Low level API, which is closer to Akka programming and operates on individual events. The API document can be found at [Low Level API Doc](http://gearpump.apache.org/releases/latest/api/scala/index.html#org.apache.gearpump.streaming.package).
-
-2. High level API (aka DSL), which operates on streams instead of individual events. The API document can be found at [DSL API Doc](http://gearpump.apache.org/releases/latest/api/scala/index.html#org.apache.gearpump.streaming.dsl.package).
-
-Both APIs have Java and Scala versions.
-
-So, before writing your first Gearpump application, you need to decide which API and which language to use.
-
-## DSL version for Wordcount
-
-The easiest way to write a streaming application is with the Gearpump DSL.
-Below we demonstrate how to write a WordCount application using the DSL.
-
-#### In Scala
-
-	:::scala     
-	/** WordCount with High level DSL */
-	object WordCount extends AkkaApp with ArgumentsParser {
-	
-	  override val options: Array[(String, CLIOption[Any])] = Array.empty
-	
-	  override def main(akkaConf: Config, args: Array[String]): Unit = {
-	    val context = ClientContext(akkaConf)
-	    val app = StreamApp("dsl", context)
-	    val data = "This is a good start, bingo!! bingo!!"
-	
-	    //count for each word and output to log
-	    app.source(data.lines.toList, 1, "source").
-	      // word => (word, count)
-	      flatMap(line => line.split("[\\s]+")).map((_, 1)).
-	      // (word, count1), (word, count2) => (word, count1 + count2)
-	      groupByKey().sum.log
-	
-	    val appId = context.submit(app)
-	    context.close()
-	  }
-	}
-	
-
-#### In Java
-
-	:::java   
-	/** Java version of WordCount with high level DSL API */
-	public class WordCount {
-	
-	  public static void main(String[] args) throws InterruptedException {
-	    main(ClusterConfig.defaultConfig(), args);
-	  }
-	
-	  public static void main(Config akkaConf, String[] args) throws InterruptedException {
-	    ClientContext context = new ClientContext(akkaConf);
-	    JavaStreamApp app = new JavaStreamApp("JavaDSL", context, UserConfig.empty());
-	    List<String> source = Lists.newArrayList("This is a good start, bingo!! bingo!!");
-	
-	    //create a stream from the string list.
-	    JavaStream<String> sentence = app.source(source, 1, UserConfig.empty(), "source");
-	
-	    //tokenize the strings and create a new stream
-	    JavaStream<String> words = sentence.flatMap(new FlatMapFunction<String, String>() {
-	      @Override
-	      public Iterator<String> apply(String s) {
-	        return Lists.newArrayList(s.split("\\s+")).iterator();
-	      }
-	    }, "flatMap");
-	
-	    //map each string as (string, 1) pair
-	    JavaStream<Tuple2<String, Integer>> ones = words.map(new MapFunction<String, Tuple2<String, Integer>>() {
-	      @Override
-	      public Tuple2<String, Integer> apply(String s) {
-	        return new Tuple2<String, Integer>(s, 1);
-	      }
-	    }, "map");
-	
-	    //group by according to string
-	    JavaStream<Tuple2<String, Integer>> groupedOnes = ones.groupBy(new GroupByFunction<Tuple2<String, Integer>, String>() {
-	      @Override
-	      public String apply(Tuple2<String, Integer> tuple) {
-	        return tuple._1();
-	      }
-	    }, 1, "groupBy");
-	
-	    //for each group, make the sum
-	    JavaStream<Tuple2<String, Integer>> wordcount = groupedOnes.reduce(new ReduceFunction<Tuple2<String, Integer>>() {
-	      @Override
-	      public Tuple2<String, Integer> apply(Tuple2<String, Integer> t1, Tuple2<String, Integer> t2) {
-	        return new Tuple2<String, Integer>(t1._1(), t1._2() + t2._2());
-	      }
-	    }, "reduce");
-	
-	    //output result using log
-	    wordcount.log();
-	
-	    app.run();
-	    context.close();
-	  }
-	}
-
-
-## Low level API based Wordcount
-
-### Define Processor(Task) class and Partitioner class
-
-An application is a Directed Acyclic Graph (DAG) of processors. In the wordcount example, we will first define two processors, `Split` and `Sum`, and then weave them together.
-
-
-#### Split processor
-
-In the `Split` processor, we simply split a predefined text (the content is simplified for conciseness) and send out each split word to `Sum`.
-
-#### In Scala
-
-	:::scala
-	class Split(taskContext : TaskContext, conf: UserConfig) extends Task(taskContext, conf) {
-	  import taskContext.output
-	
-	  override def onStart(startTime : StartTime) : Unit = {
-	    self ! Message("start")
-	  }
-	
-	  override def onNext(msg : Message) : Unit = {
-	    Split.TEXT_TO_SPLIT.lines.foreach { line =>
-	      line.split("[\\s]+").filter(_.nonEmpty).foreach { msg =>
-	        output(new Message(msg, System.currentTimeMillis()))
-	      }
-	    }
-	    self ! Message("continue", System.currentTimeMillis())
-	  }
-	}
-	
-	object Split {
-	  val TEXT_TO_SPLIT = "some text"
-	}
-
-#### In Java
-
-	:::java
-	public class Split extends Task {
-	
-	  public static String TEXT = "This is a good start for java! bingo! bingo! ";
-	
-	  public Split(TaskContext taskContext, UserConfig userConf) {
-	    super(taskContext, userConf);
-	  }
-	
-	  private Long now() {
-	    return System.currentTimeMillis();
-	  }
-	
-	  @Override
-	  public void onStart(StartTime startTime) {
-	    self().tell(new Message("start", now()), self());
-	  }
-	
-	  @Override
-	  public void onNext(Message msg) {
-	
-	    // Split the TEXT to words
-	    String[] words = TEXT.split(" ");
-	    for (int i = 0; i < words.length; i++) {
-	      context.output(new Message(words[i], now()));
-	    }
-	    self().tell(new Message("next", now()), self());
-	  }
-	}
-
-
-Essentially, each processor consists of two descriptions:
-
-1. A `Task` to define the operation.
-
-2. A parallelism level to define how many tasks of this processor run in parallel.
- 
-Just like `Split`, every processor extends `Task`. The `onStart` method is called once before any message comes in; the `onNext` method is called to process every incoming message. Note that Gearpump employs a message-driven model, which is why `Split` sends itself a message at the end of `onStart` and `onNext` to trigger the next round of message processing.
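-
-For example, declaring a processor from a `Task` class together with its parallelism looks roughly like the following minimal sketch (the full wiring is shown in the wrap-up section below):
-
-	:::scala
-	// Run two Split tasks and four Sum tasks in parallel
-	val split = Processor[Split](2)
-	val sum = Processor[Sum](4)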
-
-#### Sum Processor
-
-The structure of the `Sum` processor is similar. `Sum` does not need to send messages to itself, since it receives messages from `Split`.
-
-#### In Scala
-
-	:::scala
-	class Sum (taskContext : TaskContext, conf: UserConfig) extends Task(taskContext, conf) {
-	  private[wordcount] val map : mutable.HashMap[String, Long] = new mutable.HashMap[String, Long]()
-	
-	  private[wordcount] var wordCount : Long = 0
-	  private var snapShotTime : Long = System.currentTimeMillis()
-	  private var snapShotWordCount : Long = 0
-	
-	  private var scheduler : Cancellable = null
-	
-	  override def onStart(startTime : StartTime) : Unit = {
-	    scheduler = taskContext.schedule(new FiniteDuration(5, TimeUnit.SECONDS),
-	      new FiniteDuration(5, TimeUnit.SECONDS))(reportWordCount)
-	  }
-	
-	  override def onNext(msg : Message) : Unit = {
-	    if (null == msg) {
-	      return
-	    }
-	    val current = map.getOrElse(msg.msg.asInstanceOf[String], 0L)
-	    wordCount += 1
-	    map.put(msg.msg.asInstanceOf[String], current + 1)
-	  }
-	
-	  override def onStop() : Unit = {
-	    if (scheduler != null) {
-	      scheduler.cancel()
-	    }
-	  }
-	
-	  def reportWordCount() : Unit = {
-	    val current : Long = System.currentTimeMillis()
-	    LOG.info(s"Task ${taskContext.taskId} Throughput: ${(wordCount - snapShotWordCount, (current - snapShotTime) / 1000)} (words, second)")
-	    snapShotWordCount = wordCount
-	    snapShotTime = current
-	  }
-	}
-	
-#### In Java
-
-	:::java
-	public class Sum extends Task {
-	
-	  private Logger LOG = super.LOG();
-	  private HashMap<String, Integer> wordCount = new HashMap<String, Integer>();
-	
-	  public Sum(TaskContext taskContext, UserConfig userConf) {
-	    super(taskContext, userConf);
-	  }
-	
-	  @Override
-	  public void onStart(StartTime startTime) {
-	    //skip
-	  }
-	
-	  @Override
-	  public void onNext(Message messagePayLoad) {
-	    String word = (String) (messagePayLoad.msg());
-	    Integer current = wordCount.get(word);
-	    if (current == null) {
-	      current = 0;
-	    }
-	    Integer newCount = current + 1;
-	    wordCount.put(word, newCount);
-	  }
-	}
-
-
-Besides counting words, the Scala version also defines a scheduler to report throughput every 5 seconds. The scheduler should be cancelled when the computation completes, which can be accomplished by overriding the `onStop` method. The default implementation of `onStop` is a no-op.
-
-#### Partitioner
-
-A processor can be parallelized into a list of tasks. A `Partitioner` defines how data is shuffled among the tasks of `Split` and `Sum`. Gearpump provides two built-in partitioners:
-
-* `HashPartitioner`: partitions data based on the message's hashcode
-* `ShufflePartitioner`: partitions data in a round-robin way.
-
-You can define your own partitioner by extending the `Partitioner` trait/interface and overriding the `getPartition` method.
-
-
-	:::scala
-	trait Partitioner extends Serializable {
-	  def getPartition(msg : Message, partitionNum : Int) : Int
-	}
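-
-As an illustration, the following is a minimal sketch of a custom partitioner that follows the trait shown above (it assumes the message payload is a `String`) and routes identical words to the same downstream task, much like `HashPartitioner` does:
-
-	:::scala
-	class WordPartitioner extends Partitioner {
-	  override def getPartition(msg: Message, partitionNum: Int): Int = {
-	    // Keep the hash non-negative, then map it onto the available partitions
-	    val word = msg.msg.asInstanceOf[String]
-	    (word.hashCode & Integer.MAX_VALUE) % partitionNum
-	  }
-	}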
-
-### Wrap up as an application 
-
-Now, we are able to write our application class, weaving the above components together.
-
-The application class extends `App` and `ArgumentsParser`, which makes it easier to parse arguments and run the main function.
-
-#### In Scala
-
-	:::scala
-	object WordCount extends App with ArgumentsParser {
-	  private val LOG: Logger = LogUtil.getLogger(getClass)
-	  val RUN_FOR_EVER = -1
-	
-	  override val options: Array[(String, CLIOption[Any])] = Array(
-	    "split" -> CLIOption[Int]("<how many split tasks>", required = false, defaultValue = Some(1)),
-	    "sum" -> CLIOption[Int]("<how many sum tasks>", required = false, defaultValue = Some(1))
-	  )
-	
-	  def application(config: ParseResult) : StreamApplication = {
-	    val splitNum = config.getInt("split")
-	    val sumNum = config.getInt("sum")
-	    val partitioner = new HashPartitioner()
-	    val split = Processor[Split](splitNum)
-	    val sum = Processor[Sum](sumNum)
-	    val app = StreamApplication("wordCount", Graph[Processor[_ <: Task], Partitioner](split ~ partitioner ~> sum), UserConfig.empty)
-	    app
-	  }
-	
-	  val config = parse(args)
-	  val context = ClientContext()
-	  val appId = context.submit(application(config))
-	  context.close()
-	}
-	
-
-
-We override the `options` value and define an array of command line arguments to parse. Here we let application users pass in the parallelism of the split and sum tasks. We also specify whether an option is `required` and provide a `defaultValue` for some arguments.
-
-#### In Java
-
-	:::java
-	
-	/** Java version of WordCount with Processor Graph API */
-	public class WordCount {
-	
-	  public static void main(String[] args) throws InterruptedException {
-	    main(ClusterConfig.defaultConfig(), args);
-	  }
-	
-	  public static void main(Config akkaConf, String[] args) throws InterruptedException {
-	
-	    // For split task, we config to create two tasks
-	    int splitTaskNumber = 2;
-	    Processor split = new Processor(Split.class).withParallelism(splitTaskNumber);
-	
-	    // For sum task, we have two summer.
-	    int sumTaskNumber = 2;
-	    Processor sum = new Processor(Sum.class).withParallelism(sumTaskNumber);
-	
-	    // construct the graph
-	    Graph graph = new Graph();
-	    graph.addVertex(split);
-	    graph.addVertex(sum);
-	
-	    Partitioner partitioner = new HashPartitioner();
-	    graph.addEdge(split, partitioner, sum);
-	
-	    UserConfig conf = UserConfig.empty();
-	    StreamApplication app = new StreamApplication("wordcountJava", conf, graph);
-	
-	    // create master client
-	    // It will read the master settings under gearpump.cluster.masters
-	    ClientContext masterClient = new ClientContext(akkaConf);
-	
-	    masterClient.submit(app);
-	
-	    masterClient.close();
-	  }
-	}
-
-
-## Submit application
-
-After all this, you need to package everything into an uber jar and submit it to the Gearpump cluster. Please check [Application submission tool](../introduction/commandline) for the command line tool syntax.
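-
-For example, assuming the uber jar is named `wordcount.jar` and the main class is `org.apache.gearpump.streaming.examples.wordcount.WordCount`, the submission would look roughly like:
-
-	:::bash
-	bin/gear app -jar wordcount.jar org.apache.gearpump.streaming.examples.wordcount.WordCount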
-
-## Advanced topic
-For a real application, you will likely need to define your own message types to pass between processors.
-A custom message type needs a custom serializer so that it can be sent over the wire.
-Check [this guide](dev-custom-serializer) for how to customize the serializer.
-
-### Gearpump for Non-Streaming Usage
-Gearpump can also be used as a base platform to develop non-streaming applications. See [this guide](dev-non-streaming-example) on how to use Gearpump to develop a distributed shell.

http://git-wip-us.apache.org/repos/asf/incubator-gearpump/blob/5f90b70f/docs/docs/img/actor_hierarchy.png
----------------------------------------------------------------------
diff --git a/docs/docs/img/actor_hierarchy.png b/docs/docs/img/actor_hierarchy.png
deleted file mode 100644
index d971745..0000000
Binary files a/docs/docs/img/actor_hierarchy.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-gearpump/blob/5f90b70f/docs/docs/img/checkpoint_equation.png
----------------------------------------------------------------------
diff --git a/docs/docs/img/checkpoint_equation.png b/docs/docs/img/checkpoint_equation.png
deleted file mode 100644
index 14da93b..0000000
Binary files a/docs/docs/img/checkpoint_equation.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-gearpump/blob/5f90b70f/docs/docs/img/checkpoint_interval_equation.png
----------------------------------------------------------------------
diff --git a/docs/docs/img/checkpoint_interval_equation.png b/docs/docs/img/checkpoint_interval_equation.png
deleted file mode 100644
index 0c0414c..0000000
Binary files a/docs/docs/img/checkpoint_interval_equation.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-gearpump/blob/5f90b70f/docs/docs/img/checkpointing.png
----------------------------------------------------------------------
diff --git a/docs/docs/img/checkpointing.png b/docs/docs/img/checkpointing.png
deleted file mode 100644
index f11eb53..0000000
Binary files a/docs/docs/img/checkpointing.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-gearpump/blob/5f90b70f/docs/docs/img/checkpointing_interval.png
----------------------------------------------------------------------
diff --git a/docs/docs/img/checkpointing_interval.png b/docs/docs/img/checkpointing_interval.png
deleted file mode 100644
index dc46317..0000000
Binary files a/docs/docs/img/checkpointing_interval.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-gearpump/blob/5f90b70f/docs/docs/img/clock.png
----------------------------------------------------------------------
diff --git a/docs/docs/img/clock.png b/docs/docs/img/clock.png
deleted file mode 100644
index 906d51d..0000000
Binary files a/docs/docs/img/clock.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-gearpump/blob/5f90b70f/docs/docs/img/dag.png
----------------------------------------------------------------------
diff --git a/docs/docs/img/dag.png b/docs/docs/img/dag.png
deleted file mode 100644
index c0ca79f..0000000
Binary files a/docs/docs/img/dag.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-gearpump/blob/5f90b70f/docs/docs/img/dashboard.gif
----------------------------------------------------------------------
diff --git a/docs/docs/img/dashboard.gif b/docs/docs/img/dashboard.gif
deleted file mode 100644
index 0170c5f..0000000
Binary files a/docs/docs/img/dashboard.gif and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-gearpump/blob/5f90b70f/docs/docs/img/dashboard.png
----------------------------------------------------------------------
diff --git a/docs/docs/img/dashboard.png b/docs/docs/img/dashboard.png
deleted file mode 100644
index 0b5eedd..0000000
Binary files a/docs/docs/img/dashboard.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-gearpump/blob/5f90b70f/docs/docs/img/dashboard_3.png
----------------------------------------------------------------------
diff --git a/docs/docs/img/dashboard_3.png b/docs/docs/img/dashboard_3.png
deleted file mode 100644
index 47259fc..0000000
Binary files a/docs/docs/img/dashboard_3.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-gearpump/blob/5f90b70f/docs/docs/img/download.jpg
----------------------------------------------------------------------
diff --git a/docs/docs/img/download.jpg b/docs/docs/img/download.jpg
deleted file mode 100644
index 7129c52..0000000
Binary files a/docs/docs/img/download.jpg and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-gearpump/blob/5f90b70f/docs/docs/img/dynamic.png
----------------------------------------------------------------------
diff --git a/docs/docs/img/dynamic.png b/docs/docs/img/dynamic.png
deleted file mode 100644
index 09b8a35..0000000
Binary files a/docs/docs/img/dynamic.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-gearpump/blob/5f90b70f/docs/docs/img/exact.png
----------------------------------------------------------------------
diff --git a/docs/docs/img/exact.png b/docs/docs/img/exact.png
deleted file mode 100644
index f11eb53..0000000
Binary files a/docs/docs/img/exact.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-gearpump/blob/5f90b70f/docs/docs/img/failures.png
----------------------------------------------------------------------
diff --git a/docs/docs/img/failures.png b/docs/docs/img/failures.png
deleted file mode 100644
index fa98cdc..0000000
Binary files a/docs/docs/img/failures.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-gearpump/blob/5f90b70f/docs/docs/img/flow_control.png
----------------------------------------------------------------------
diff --git a/docs/docs/img/flow_control.png b/docs/docs/img/flow_control.png
deleted file mode 100644
index 7ea9bd2..0000000
Binary files a/docs/docs/img/flow_control.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-gearpump/blob/5f90b70f/docs/docs/img/flowcontrol.png
----------------------------------------------------------------------
diff --git a/docs/docs/img/flowcontrol.png b/docs/docs/img/flowcontrol.png
deleted file mode 100644
index 7ea9bd2..0000000
Binary files a/docs/docs/img/flowcontrol.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-gearpump/blob/5f90b70f/docs/docs/img/ha.png
----------------------------------------------------------------------
diff --git a/docs/docs/img/ha.png b/docs/docs/img/ha.png
deleted file mode 100644
index 5474d84..0000000
Binary files a/docs/docs/img/ha.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-gearpump/blob/5f90b70f/docs/docs/img/kafka_wordcount.png
----------------------------------------------------------------------
diff --git a/docs/docs/img/kafka_wordcount.png b/docs/docs/img/kafka_wordcount.png
deleted file mode 100644
index a43fa55..0000000
Binary files a/docs/docs/img/kafka_wordcount.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-gearpump/blob/5f90b70f/docs/docs/img/layout.png
----------------------------------------------------------------------
diff --git a/docs/docs/img/layout.png b/docs/docs/img/layout.png
deleted file mode 100644
index edffdf8..0000000
Binary files a/docs/docs/img/layout.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-gearpump/blob/5f90b70f/docs/docs/img/logo.png
----------------------------------------------------------------------
diff --git a/docs/docs/img/logo.png b/docs/docs/img/logo.png
deleted file mode 100644
index 7575892..0000000
Binary files a/docs/docs/img/logo.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-gearpump/blob/5f90b70f/docs/docs/img/logo.svg
----------------------------------------------------------------------
diff --git a/docs/docs/img/logo.svg b/docs/docs/img/logo.svg
deleted file mode 100644
index 5897ca4..0000000
--- a/docs/docs/img/logo.svg
+++ /dev/null
@@ -1,71 +0,0 @@
-<?xml version="1.0" encoding="UTF-8" standalone="no"?>
-<!-- Created with Inkscape (http://www.inkscape.org/) -->
-
-<svg
-   xmlns:dc="http://purl.org/dc/elements/1.1/"
-   xmlns:cc="http://creativecommons.org/ns#"
-   xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
-   xmlns:svg="http://www.w3.org/2000/svg"
-   xmlns="http://www.w3.org/2000/svg"
-   xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
-   xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
-   width="89mm"
-   height="89mm"
-   viewBox="0 0 89 89"
-   id="svg4684"
-   version="1.1"
-   inkscape:version="0.91 r13725"
-   sodipodi:docname="gearpump_logo_simple.svg"
-   inkscape:export-filename="/Users/kamkasravi/Dropbox/gearpump/gearpump_logo_simple.png"
-   inkscape:export-xdpi="157.11"
-   inkscape:export-ydpi="157.11">
-  <defs
-     id="defs4686" />
-  <sodipodi:namedview
-     inkscape:document-units="mm"
-     pagecolor="#ffffff"
-     bordercolor="#666666"
-     borderopacity="0"
-     inkscape:pageopacity="0"
-     inkscape:pageshadow="2"
-     inkscape:zoom="1.2162434"
-     inkscape:cx="395.86179"
-     inkscape:cy="372.04726"
-     inkscape:current-layer="layer1"
-     id="namedview4688"
-     showgrid="false"
-     inkscape:window-width="1920"
-     inkscape:window-height="1107"
-     inkscape:window-x="0"
-     inkscape:window-y="1"
-     inkscape:window-maximized="1"
-     fit-margin-top="0"
-     fit-margin-left="0"
-     fit-margin-right="0"
-     fit-margin-bottom="0"
-     inkscape:showpageshadow="false"
-     showborder="true" />
-  <metadata
-     id="metadata4690">
-    <rdf:RDF>
-      <cc:Work
-         rdf:about="">
-        <dc:format>image/svg+xml</dc:format>
-        <dc:type
-           rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
-        <dc:title />
-      </cc:Work>
-    </rdf:RDF>
-  </metadata>
-  <g
-     inkscape:label="Layer 1"
-     inkscape:groupmode="layer"
-     id="layer1"
-     transform="translate(5.4566873e-7,-121)">
-    <path
-       style="fill:#556ae6;fill-opacity:1;stroke:none;stroke-width:7.08661413;stroke-miterlimit:4;stroke-dasharray:none"
-       d="m 59.451523,122.95775 a 44.775326,44.775326 0 0 0 -21.03861,-2.01136 l -0.6557,4.60251 a 23.524902,23.524902 0 0 1 4.20498,3.77358 l 6.73498,-1.78044 a 29.907628,29.907628 0 0 1 4.33792,8.99185 l -5.54364,4.16221 a 23.524902,23.524902 0 0 1 0.32108,5.63603 l 6.03185,3.50988 a 29.907628,29.907628 0 0 1 -1.23578,4.84982 29.907628,29.907628 0 0 1 -2.06484,4.5672 l -6.85567,-0.97589 a 23.524902,23.524902 0 0 1 -3.77339,4.20446 l 1.78078,6.73569 a 29.907628,29.907628 0 0 1 -8.99151,4.33863 l -4.16256,-5.54435 a 23.524902,23.524902 0 0 1 -5.63602,0.32108 l -3.50989,6.03185 a 29.907628,29.907628 0 0 1 -4.849808,-1.23578 29.907628,29.907628 0 0 1 -4.5675477,-2.06554 l 0.9757077,-6.85515 a 23.524902,23.524902 0 0 1 -4.2041157,-3.7727 l -6.64489998,1.75628 a 44.775326,44.775326 0 0 0 29.97375368,45.3569 44.775326,44.775326 0 0 0 27.28111,0.61075 l -0.0231,-0.19065 a 23.524902,23.524902 0 0 1 -5.04173,-2.54001 l -6.05573,3.46919 a 29.907628,29.907628 0 0 1 -3.58191,-3.49598 29.907628
 ,29.907628 0 0 1 -2.9233,-4.07194 l 4.27314,-5.44883 a 23.524902,23.524902 0 0 1 -1.75454,-5.36936 l -6.72324,-1.82616 a 29.907628,29.907628 0 0 1 0.73798,-9.9567 l 6.88304,-0.8322 a 23.524902,23.524902 0 0 1 2.54001,-5.04173 l -3.46885,-6.05502 a 29.907628,29.907628 0 0 1 3.49494,-3.58228 29.907628,29.907628 0 0 1 4.07194,-2.92331 l 5.44901,4.27262 a 23.524902,23.524902 0 0 1 5.36988,-1.75436 l 1.82616,-6.72324 a 29.907628,29.907628 0 0 1 9.95599,0.73833 l 0.8329,6.8827 a 23.524902,23.524902 0 0 1 5.04121,2.53983 l 6.055556,-3.46867 a 29.907628,29.907628 0 0 1 3.58227,3.49494 29.907628,29.907628 0 0 1 1.52975,1.98522 44.775326,44.775326 0 0 0 -29.979576,-45.3099 z m -29.93126,7.04882 a 15.745845,15.745845 0 0 0 -20.0396247,9.71017 15.745845,15.745845 0 0 0 9.7101647,20.03962 15.745845,15.745845 0 0 0 20.03963,-9.71016 15.745845,15.745845 0 0 0 -9.71017,-20.03963 z m 59.360536,42.82725 -2.32895,2.97009 a 23.524902,23.524902 0 0 1 0.99553,2.56761 44.775326,44.775326 0 0 0 1.33342,-5.
 5377 z m -19.143966,-2.09047 a 15.745845,15.745845 0 0 0 -14.75675,3.19169 15.745845,15.745845 0 0 0 -1.61104,22.20982 15.745845,15.745845 0 0 0 21.90246,1.85092 44.775326,44.775326 0 0 0 2.25015,-2.25582 15.745845,15.745845 0 0 0 -0.3321,-20.19458 15.745845,15.745845 0 0 0 -7.45272,-4.80203 z"
-       id="path5303"
-       inkscape:connector-curvature="0" />
-  </g>
-</svg>

http://git-wip-us.apache.org/repos/asf/incubator-gearpump/blob/5f90b70f/docs/docs/img/logo2.png
----------------------------------------------------------------------
diff --git a/docs/docs/img/logo2.png b/docs/docs/img/logo2.png
deleted file mode 100644
index 959d39e..0000000
Binary files a/docs/docs/img/logo2.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-gearpump/blob/5f90b70f/docs/docs/img/messageLoss.png
----------------------------------------------------------------------
diff --git a/docs/docs/img/messageLoss.png b/docs/docs/img/messageLoss.png
deleted file mode 100644
index 80b330a..0000000
Binary files a/docs/docs/img/messageLoss.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-gearpump/blob/5f90b70f/docs/docs/img/netty_transport.png
----------------------------------------------------------------------
diff --git a/docs/docs/img/netty_transport.png b/docs/docs/img/netty_transport.png
deleted file mode 100644
index 17d57c3..0000000
Binary files a/docs/docs/img/netty_transport.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-gearpump/blob/5f90b70f/docs/docs/img/replay.png
----------------------------------------------------------------------
diff --git a/docs/docs/img/replay.png b/docs/docs/img/replay.png
deleted file mode 100644
index 8bbbc43..0000000
Binary files a/docs/docs/img/replay.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-gearpump/blob/5f90b70f/docs/docs/img/shuffle.png
----------------------------------------------------------------------
diff --git a/docs/docs/img/shuffle.png b/docs/docs/img/shuffle.png
deleted file mode 100644
index 40c4a2d..0000000
Binary files a/docs/docs/img/shuffle.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-gearpump/blob/5f90b70f/docs/docs/img/storm_gearpump_cluster.png
----------------------------------------------------------------------
diff --git a/docs/docs/img/storm_gearpump_cluster.png b/docs/docs/img/storm_gearpump_cluster.png
deleted file mode 100644
index d318623..0000000
Binary files a/docs/docs/img/storm_gearpump_cluster.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-gearpump/blob/5f90b70f/docs/docs/img/storm_gearpump_dag.png
----------------------------------------------------------------------
diff --git a/docs/docs/img/storm_gearpump_dag.png b/docs/docs/img/storm_gearpump_dag.png
deleted file mode 100644
index 24920f7..0000000
Binary files a/docs/docs/img/storm_gearpump_dag.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-gearpump/blob/5f90b70f/docs/docs/img/submit.png
----------------------------------------------------------------------
diff --git a/docs/docs/img/submit.png b/docs/docs/img/submit.png
deleted file mode 100644
index 609c0d7..0000000
Binary files a/docs/docs/img/submit.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-gearpump/blob/5f90b70f/docs/docs/img/submit2.png
----------------------------------------------------------------------
diff --git a/docs/docs/img/submit2.png b/docs/docs/img/submit2.png
deleted file mode 100644
index d3939ee..0000000
Binary files a/docs/docs/img/submit2.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-gearpump/blob/5f90b70f/docs/docs/img/through_vs_message_size.png
----------------------------------------------------------------------
diff --git a/docs/docs/img/through_vs_message_size.png b/docs/docs/img/through_vs_message_size.png
deleted file mode 100644
index a98c528..0000000
Binary files a/docs/docs/img/through_vs_message_size.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-gearpump/blob/5f90b70f/docs/docs/index.md
----------------------------------------------------------------------
diff --git a/docs/docs/index.md b/docs/docs/index.md
deleted file mode 100644
index 1cd98b8..0000000
--- a/docs/docs/index.md
+++ /dev/null
@@ -1,35 +0,0 @@
----
-layout: global
-displayTitle: Gearpump Overview
-title: Overview
-description: Gearpump GEARPUMP_VERSION documentation homepage
----
-
-Gearpump is a real-time big data streaming engine.
-It is inspired by recent advances in the [Akka](http://akka.io/) framework and a desire to improve on existing streaming frameworks.
-Gearpump is event/message based and features low latency, high performance, exactly-once semantics,
-dynamic topology updates, [Apache Storm](https://storm.apache.org/) compatibility, and more.
-
-The name Gearpump is a reference to the engineering term "gear pump," which is a super simple
-pump that consists of only two gears, but is very powerful at streaming water.
-
-### Gearpump Technical Highlights
-Gearpump's feature set includes:
-
-* Extremely high performance
-* Low latency
-* Configurable message delivery guarantee (at least once, exactly once).
-* Highly extensible
-* Dynamic DAG
-* Storm compatibility
-* Samoa compatibility
-* Both high level and low level API
-
-### Gearpump Performance
-Per initial benchmarks, Gearpump is able to process 18 million messages per second (100 bytes per message) with an 8ms latency on a 4-node cluster.
-
-![Dashboard](img/dashboard.png)
-
-### Gearpump and Akka
-Gearpump is a 100% Akka based platform. We model big data streaming within the Akka actor hierarchy.
-![Actor Hierarchy](img/actor_hierarchy.png)