Posted to commits@griffin.apache.org by gu...@apache.org on 2018/10/15 05:53:15 UTC

incubator-griffin git commit: always use "Apache Griffin" together as our trademark

Repository: incubator-griffin
Updated Branches:
  refs/heads/master 05680f01a -> cbde1e4fc


always use "Apache Griffin" together as our trademark

Author: William Guo <gu...@apache.org>

Closes #436 from guoyuepeng/use_apache_griffin_as_trademark.


Project: http://git-wip-us.apache.org/repos/asf/incubator-griffin/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-griffin/commit/cbde1e4f
Tree: http://git-wip-us.apache.org/repos/asf/incubator-griffin/tree/cbde1e4f
Diff: http://git-wip-us.apache.org/repos/asf/incubator-griffin/diff/cbde1e4f

Branch: refs/heads/master
Commit: cbde1e4fcc0b77b7655a493203d3f0d0896ae924
Parents: 05680f0
Author: William Guo <gu...@apache.org>
Authored: Mon Oct 15 13:53:08 2018 +0800
Committer: William Guo <gu...@apache.org>
Committed: Mon Oct 15 13:53:08 2018 +0800

----------------------------------------------------------------------
 griffin-doc/deploy/deploy-guide.md              | 12 ++++----
 griffin-doc/dev/code-style.md                   |  6 ++--
 griffin-doc/dev/dev-env-build.md                | 14 ++++-----
 griffin-doc/docker/griffin-docker-guide.md      | 24 +++++++--------
 griffin-doc/measure/dsl-guide.md                | 32 ++++++++++----------
 .../measure/measure-configuration-guide.md      |  2 +-
 griffin-doc/measure/measure-streaming-sample.md |  8 ++---
 griffin-doc/roadmap.md                          |  8 ++---
 griffin-doc/service/api-guide.md                | 22 +++++++-------
 .../service/hibernate_eclipselink_switch.md     | 10 +++---
 griffin-doc/service/mysql_postgresql_switch.md  |  2 +-
 griffin-doc/ui/user-guide.md                    |  2 +-
 12 files changed, 71 insertions(+), 71 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/incubator-griffin/blob/cbde1e4f/griffin-doc/deploy/deploy-guide.md
----------------------------------------------------------------------
diff --git a/griffin-doc/deploy/deploy-guide.md b/griffin-doc/deploy/deploy-guide.md
index b2f3536..58bff6b 100644
--- a/griffin-doc/deploy/deploy-guide.md
+++ b/griffin-doc/deploy/deploy-guide.md
@@ -18,7 +18,7 @@ under the License.
 -->
 
 # Apache Griffin Deployment Guide
-For Griffin users, please follow the instructions below to deploy Griffin in your environment. Note that there are some dependencies that should be installed firstly.
+For Apache Griffin users, please follow the instructions below to deploy Apache Griffin in your environment. Note that there are some dependencies that should be installed first.
 
 ### Prerequisites
 You need to install the following items
@@ -30,7 +30,7 @@ You need to install following items
 - [Hive](http://apache.claz.org/hive/hive-2.2.0/apache-hive-2.2.0-bin.tar.gz) (version 2.2.0), you can get some help [here](https://cwiki.apache.org/confluence/display/Hive/GettingStarted#GettingStarted-RunningHive).
     You need to make sure that your spark cluster can access your HiveContext.
 - [Livy](http://archive.cloudera.com/beta/livy/livy-server-0.3.0.zip), you can get some help [here](http://livy.io/quickstart.html).
-    Griffin need to schedule spark jobs by server, we use livy to submit our jobs.
+    Apache Griffin needs to schedule spark jobs from the server; we use livy to submit our jobs.
     Due to some issues of Livy with HiveContext, we need to download 3 files or get them from the Spark lib directory `$SPARK_HOME/lib/`, and put them into HDFS.
     ```
     datanucleus-api-jdo-3.2.6.jar
@@ -38,7 +38,7 @@ You need to install following items
     datanucleus-rdbms-3.2.9.jar
     ```
 - ElasticSearch (5.0 or later versions).
-	ElasticSearch works as a metrics collector, Griffin produces metrics into it, and our default UI gets metrics from it, you can use them by your own way as well.
+	ElasticSearch works as a metrics collector: Apache Griffin produces metrics into it, and our default UI gets metrics from it; you can also consume them in your own way.
 
 ### Configuration
 
@@ -99,12 +99,12 @@ curl -XPUT http://es:9200/griffin -d '
 '
 ```
 
-You should also modify some configurations of Griffin for your environment.
+You should also modify some configurations of Apache Griffin for your environment.
 
 - <b>service/src/main/resources/application.properties</b>
 
     ```
-    # griffin server port (default 8080)
+    # Apache Griffin server port (default 8080)
     server.port = 8080
     # jpa
     spring.datasource.url = jdbc:postgresql://<your IP>:5432/quartz?autoReconnect=true&useSSL=false
@@ -199,7 +199,7 @@ After all environment services startup, we can start our server.
   java -jar service/target/service.jar
   ```
 
-After a few seconds, we can visit our default UI of Griffin (by default the port of spring boot is 8080).
+After a few seconds, we can visit the default UI of Apache Griffin (by default, the spring boot port is 8080).
 
   ```
   http://<your IP>:8080
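
To confirm the service is up beyond loading the UI, you can also hit the version API (a minimal sketch; the `GET /api/v1/version` endpoint appears in the API guide further below, and the host is a placeholder):

```bash
# Check that the Apache Griffin service responds on the default spring boot port
curl http://<your IP>:8080/api/v1/version
```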

http://git-wip-us.apache.org/repos/asf/incubator-griffin/blob/cbde1e4f/griffin-doc/dev/code-style.md
----------------------------------------------------------------------
diff --git a/griffin-doc/dev/code-style.md b/griffin-doc/dev/code-style.md
index 5b65c22..c21abbb 100644
--- a/griffin-doc/dev/code-style.md
+++ b/griffin-doc/dev/code-style.md
@@ -24,7 +24,7 @@ keep any codes submitted by various committers consistent.
 
 ## Config Java Code Style
 We suggest developers use automatic tools to check code violations, like [CheckStyle](https://github.com/checkstyle/checkstyle).<br>
-+ Firstly, you need to apply Griffin's Java Code Style Rules. Copy configuration and save as griffin_check.xml
++ First, you need to apply Apache Griffin's Java Code Style Rules. Copy the configuration and save it as griffin_check.xml
 ```xml
 <?xml version="1.0"?>
 <!DOCTYPE module PUBLIC
@@ -300,7 +300,7 @@ We suggest developers use automatic tools to check code violation like [CheckSty
 
         ![idea plugin](../img/devguide/style-check-plugin-idea.png)
 
-        + Add Griffin Checks In IDEA Setting
+        + Add Apache Griffin Checks In IDEA Setting
 
         ![idea setting](../img/devguide/config-addition-idea.png)
 
@@ -313,4 +313,4 @@ We suggest developers use automatic tools to check code violation like [CheckSty
 to do
 
 ## Config Angular 2 Code Style
-to do
\ No newline at end of file
+to do
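
The same griffin_check.xml can also drive CheckStyle from the command line, outside the IDE. A minimal sketch, assuming a locally downloaded checkstyle jar (the jar version and the source path below are assumptions, not part of this commit):

```bash
# Run CheckStyle with the Apache Griffin rules against the service module sources
java -jar checkstyle-8.12-all.jar -c griffin_check.xml service/src/main/java
```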

http://git-wip-us.apache.org/repos/asf/incubator-griffin/blob/cbde1e4f/griffin-doc/dev/dev-env-build.md
----------------------------------------------------------------------
diff --git a/griffin-doc/dev/dev-env-build.md b/griffin-doc/dev/dev-env-build.md
index d92a735..89a4210 100644
--- a/griffin-doc/dev/dev-env-build.md
+++ b/griffin-doc/dev/dev-env-build.md
@@ -18,7 +18,7 @@ under the License.
 -->
 
 # Apache Griffin Development Environment Building Guide
-We have pre-built Griffin docker images for Griffin developers. You can use those images directly, which set up a ready development environment for you much faster than building the environment locally.
+We have pre-built Apache Griffin docker images for Apache Griffin developers. You can use those images directly; they set up a ready development environment for you much faster than building the environment locally.
 
 ## Set Up with Docker Images
 Here are step-by-step instructions of how to [pull Docker images](../docker/griffin-docker-guide.md#environment-preparation) from the repository and run containers using the images.
@@ -85,7 +85,7 @@ mvn clean install
 ```
 
 ### For service module and ui module
-1. Login to docker container, and stop running griffin service.
+1. Log in to the docker container, and stop the running Apache Griffin service.
 ```
 docker exec -it <griffin docker container id> bash
 cd ~/service
@@ -114,10 +114,10 @@ docker exec -it <griffin docker container id> bash
 hadoop fs -rm /griffin/griffin-measure.jar
 hadoop fs -put /root/measure/griffin-measure.jar /griffin/griffin-measure.jar
 ```
-Now the griffin service will submit jobs by using this new griffin-measure.jar.
+Now the Apache Griffin service will submit jobs by using this new griffin-measure.jar.
 
-## Build new griffin docker image
-For end2end test, you will need to build a new griffin docker image, for more convenient test.
+## Build new Apache Griffin docker image
+For end-to-end tests, you will need to build a new Apache Griffin docker image, which makes testing more convenient.
 1. Pull the docker build repo on your docker host.
 ```
 git clone https://github.com/bhlx3lyx7/griffin-docker.git
@@ -127,7 +127,7 @@ git clone https://github.com/bhlx3lyx7/griffin-docker.git
 cp service-<version>.jar <path to>/griffin-docker/griffin_spark2/prep/service/service.jar
 cp measure-<version>.jar <path to>/griffin-docker/griffin_spark2/prep/measure/griffin-measure.jar
 ```
-3. Build your new griffin docker image.
+3. Build your new Apache Griffin docker image.
 In the griffin_spark2 directory:
 ```
 cd <path to>/griffin-docker/griffin_spark2
@@ -138,7 +138,7 @@ docker build -t <image name>[:<image version>] .
 griffin:
   image: <image name>[:<image version>]
 ```
-5. Now you can run your new griffin docker image.
+5. Now you can run your new Apache Griffin docker image.
 ```
 docker-compose -f <docker-compose file> up -d
 ```
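
Putting steps 1-5 together, the rebuild flow looks like this (a sketch assembled from the commands above; the image name, version, and file paths are the same placeholders):

```bash
# 1. Pull the docker build repo
git clone https://github.com/bhlx3lyx7/griffin-docker.git
# 2. Copy the rebuilt jars into the build context
cp service-<version>.jar griffin-docker/griffin_spark2/prep/service/service.jar
cp measure-<version>.jar griffin-docker/griffin_spark2/prep/measure/griffin-measure.jar
# 3. Build the new Apache Griffin docker image
cd griffin-docker/griffin_spark2
docker build -t <image name>:<image version> .
# 4-5. Point your docker-compose file at the new image, then run it
docker-compose -f <docker-compose file> up -d
```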

http://git-wip-us.apache.org/repos/asf/incubator-griffin/blob/cbde1e4f/griffin-doc/docker/griffin-docker-guide.md
----------------------------------------------------------------------
diff --git a/griffin-doc/docker/griffin-docker-guide.md b/griffin-doc/docker/griffin-docker-guide.md
index a5fe98f..750a654 100644
--- a/griffin-doc/docker/griffin-docker-guide.md
+++ b/griffin-doc/docker/griffin-docker-guide.md
@@ -18,7 +18,7 @@ under the License.
 -->
 
 # Apache Griffin Docker Guide
-Griffin docker images are pre-built on docker hub, users can pull them to try griffin in docker.
+Apache Griffin docker images are pre-built on docker hub; users can pull them to try Apache Griffin in docker.
 
 ## Preparation
 
@@ -33,7 +33,7 @@ Griffin docker images are pre-built on docker hub, users can pull them to try gr
     For other platforms, please reference to this link from elastic.co
     [max_map_count kernel setting](https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html)
     
-3. Pull griffin pre-built docker images, but if you access docker repository easily(NOT in China).
+3. Pull the Apache Griffin pre-built docker images, if you can access the docker repository easily (NOT in China).
     ```
     docker pull apachegriffin/griffin_spark2:0.3.0
     docker pull apachegriffin/elasticsearch
@@ -47,15 +47,15 @@ Griffin docker images are pre-built on docker hub, users can pull them to try gr
     docker pull registry.docker-cn.com/apachegriffin/kafka
     docker pull zookeeper:3.5
     ```
-   The docker images are the griffin environment images.
-    - `apachegriffin/griffin_spark2`: This image contains mysql, hadoop, hive, spark, livy, griffin service, griffin measure, and some prepared demo data, it works as a single node spark cluster, providing spark engine and griffin service.
+   The docker images are the Apache Griffin environment images.
+    - `apachegriffin/griffin_spark2`: This image contains mysql, hadoop, hive, spark, livy, the Apache Griffin service, Apache Griffin measure, and some prepared demo data; it works as a single-node spark cluster, providing the spark engine and the Apache Griffin service.
     - `apachegriffin/elasticsearch`: This image is based on the official elasticsearch image, adding some configurations to enable cors requests, to provide the elasticsearch service for metrics persistence.
     - `apachegriffin/kafka`: This image contains kafka 0.8, and some demo streaming data, to provide streaming data source in streaming mode.
     - `zookeeper:3.5`: This image is official zookeeper, to provide zookeeper service in streaming mode.
 
-### How to use griffin docker images in batch mode
+### How to use Apache Griffin docker images in batch mode
 1. Copy [docker-compose-batch.yml](compose/docker-compose-batch.yml) to your work path.
-2. In your work path, start docker containers by using docker compose, wait for about one minute, then griffin service is ready.
+2. In your work path, start the docker containers by using docker compose; wait for about one minute, and then the Apache Griffin service is ready.
     ```bash
     $ docker-compose -f docker-compose-batch.yml up -d
     ```
@@ -66,14 +66,14 @@ Griffin docker images are pre-built on docker hub, users can pull them to try gr
     bfec3192096d        apachegriffin/griffin_spark2:0.3.0   "/etc/bootstrap-al..."   5 hours ago         Up 5 hours          6066/tcp, 8030-8033/tcp, 8040/tcp, 9000/tcp, 10020/tcp, 19888/tcp, 27017/tcp, 49707/tcp, 50010/tcp, 50020/tcp, 50070/tcp, 50075/tcp, 50090/tcp, 0.0.0.0:32122->2122/tcp, 0.0.0.0:33306->3306/tcp, 0.0.0.0:35432->5432/tcp, 0.0.0.0:38042->8042/tcp, 0.0.0.0:38080->8080/tcp, 0.0.0.0:38088->8088/tcp, 0.0.0.0:38998->8998/tcp, 0.0.0.0:39083->9083/tcp   griffin
     fb9d04285070        apachegriffin/elasticsearch          "/docker-entrypoin..."   5 hours ago         Up 5 hours          0.0.0.0:39200->9200/tcp, 0.0.0.0:39300->9300/tcp                                                                                                                                                                                                                                                                                                         es
     ```
-3. Now you can try griffin APIs by using any http client, here we use [postman](https://github.com/postmanlabs/postman-app-support) as example.
+3. Now you can try the Apache Griffin APIs by using any http client; here we use [postman](https://github.com/postmanlabs/postman-app-support) as an example.
 We have prepared two postman configuration files; you can download them from [json files](../service/postman).<br><br>For ease of use, you need to import the two files into postman first.<br><br>
 ![import ](../img/devguide/import_postman_conf.png)<br><br>
 And change the initial environment `BASE_PATH` value to `<your local IP address>:38080`.<br><br>
 ![update env](../img/devguide/revise_postman_env.png)<br><br>
-4. You can try the api `Basic -> Get griffin version`, to make sure griffin service has started up.<br><br>
+4. You can try the api `Basic -> Get griffin version`, to make sure the Apache Griffin service has started up.<br><br>
 ![update env](../img/devguide/call_postman.png)<br><br>
-5. Add an accuracy measure through api `Measures -> Add measure`, to create a measure in griffin.<br><br>
+5. Add an accuracy measure through api `Measures -> Add measure`, to create a measure in Apache Griffin.<br><br>
 ![update env](../img/devguide/add-measure.png)<br><br>
 6. Add a job through the api `jobs -> Add job`, to schedule a job to execute the measure. In the example, the schedule interval is 5 minutes.<br><br>
 ![update env](../img/devguide/add-job.png)<br><br>
@@ -82,13 +82,13 @@ And change the initial environment `BASE_PATH` value to `<your local IP address>
     curl -XGET '<your local IP address>:39200/griffin/accuracy/_search?pretty&filter_path=hits.hits._source' -d '{"query":{"match_all":{}},  "sort": [{"tmst": {"order": "asc"}}]}'
     ```
 
-### How to use griffin docker images in streaming mode
+### How to use Apache Griffin docker images in streaming mode
 1. Copy [docker-compose-streaming.yml](compose/docker-compose-streaming.yml) to your work path.
-2. In your work path, start docker containers by using docker compose, wait for about one minutes, then griffin service is ready.
+2. In your work path, start the docker containers by using docker compose; wait for about one minute, and then the Apache Griffin service is ready.
     ```
     docker-compose -f docker-compose-streaming.yml up -d
     ```
-3. Enter the griffin docker container.
+3. Enter the Apache Griffin docker container.
     ```
     docker exec -it griffin bash
     ```
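
For reference, steps 4 and 7 of the batch-mode walkthrough can also be done from the shell instead of postman (a sketch assuming the default port mappings, 38080 for the service and 39200 for elasticsearch, shown in the `docker ps` output above):

```bash
# Confirm the Apache Griffin service has started (host port 38080 maps to 8080)
curl http://<your local IP address>:38080/api/v1/version
# Query the accuracy metrics persisted in elasticsearch
curl -XGET '<your local IP address>:39200/griffin/accuracy/_search?pretty&filter_path=hits.hits._source' \
  -d '{"query":{"match_all":{}}, "sort": [{"tmst": {"order": "asc"}}]}'
```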

http://git-wip-us.apache.org/repos/asf/incubator-griffin/blob/cbde1e4f/griffin-doc/measure/dsl-guide.md
----------------------------------------------------------------------
diff --git a/griffin-doc/measure/dsl-guide.md b/griffin-doc/measure/dsl-guide.md
index 779ea6a..4eb294e 100644
--- a/griffin-doc/measure/dsl-guide.md
+++ b/griffin-doc/measure/dsl-guide.md
@@ -20,7 +20,7 @@ under the License.
 # Apache Griffin DSL Guide
 Griffin DSL is a SQL-like language designed for DQ measurement, describing requests in the DQ domain.
 
-## Griffin DSL Syntax Description
+## Apache Griffin DSL Syntax Description
 Griffin DSL syntax is easy to learn as it's SQL-like and case insensitive.
 
 ### Supporting process
@@ -100,7 +100,7 @@ Griffin DSL syntax is easy to learn as it's SQL-like, case insensitive.
 	e.g. `max(source.age, target.age)`, `count(*)`
 
 ### Clause
-- **select clause**: the result columns like sql select clause, we can ignore the word "select" in Griffin DSL.  
+- **select clause**: the result columns, like the sql select clause; we can omit the word "select" in Apache Griffin DSL.  
 	e.g. `select user_id.count(), age.max() as max`, `source.user_id.count() as cnt, source.age.min()`
 - **from clause**: the table name, like the sql from clause, in which the data source name must be one of the data source names or the output table name of a former rule step; we can omit this clause by configuring the data source name.  
 	e.g. `from source`, ``from `target` ``
@@ -114,28 +114,28 @@ Griffin DSL syntax is easy to learn as it's SQL-like, case insensitive.
 	e.g. `limit 5`
 
 ### Accuracy Rule
-Accuracy rule expression in Griffin DSL is a logical expression, telling the mapping relation between data sources.  
+Accuracy rule expression in Apache Griffin DSL is a logical expression, describing the mapping relation between data sources.  
 	e.g. `source.id = target.id and source.name = target.name and source.age between (target.age, target.age + 5)`
 
 ### Profiling Rule
-Profiling rule expression in Griffin DSL is a sql-like expression, with select clause ahead, following optional from clause, where clause, group-by clause, order-by clause, limit clause in order.  
+Profiling rule expression in Apache Griffin DSL is a sql-like expression, with the select clause first, followed by optional from, where, group-by, order-by, and limit clauses in order.  
 	e.g. `source.gender, source.id.count() where source.age > 20 group by source.gender`, `select country, max(age), min(age), count(*) as cnt from source group by country order by cnt desc limit 5`
 
 ### Distinctness Rule
-Distinctness rule expression in Griffin DSL is a list of selection expressions separated by comma, indicates the columns to check if is distinct.
+Distinctness rule expression in Apache Griffin DSL is a list of selection expressions separated by commas, indicating the columns to check for distinctness.
     e.g. `name, age`, `name, (age + 1) as next_age`
 
 ### Timeliness Rule
-Timeliness rule expression in Griffin DSL is a list of selection expressions separated by comma, indicates the input time and output time (calculate time as default if not set).  
+Timeliness rule expression in Apache Griffin DSL is a list of selection expressions separated by commas, indicating the input time and output time (the calculation time is used as the default if not set).  
 	e.g. `ts`, `ts, end_ts`
 
-## Griffin DSL translation to SQL
-Griffin DSL is defined for DQ measurement, to describe DQ domain problem.  
-Actually, in Griffin, we get Griffin DSL rules, translate them into spark-sql rules for calculation in spark-sql engine.  
+## Apache Griffin DSL translation to SQL
+Apache Griffin DSL is defined for DQ measurement, to describe problems in the DQ domain.  
+Actually, in Apache Griffin, we take Apache Griffin DSL rules and translate them into spark-sql rules for calculation in the spark-sql engine.  
 In the DQ domain there are multiple dimensions, and we need to translate them in different ways.
 
 ### Accuracy
-For accuracy, we need to get the match count between source and target, the rule describes the mapping relation between data sources. Griffin needs to translate the dsl rule into multiple sql rules.  
+For accuracy, we need to get the match count between source and target; the rule describes the mapping relation between data sources. Apache Griffin needs to translate the dsl rule into multiple sql rules.  
 For example, the dsl rule is `source.id = target.id and source.name = target.name`, which represents the match condition of accuracy. After the translation, the sql rules are as below:  
 - **get miss items from source**: `SELECT source.* FROM source LEFT JOIN target ON coalesce(source.id, '') = coalesce(target.id, '') and coalesce(source.name, '') = coalesce(target.name, '') WHERE (NOT (source.id IS NULL AND source.name IS NULL)) AND (target.id IS NULL AND target.name IS NULL)`, save as table `miss_items`.
 - **get miss count**: `SELECT COUNT(*) AS miss FROM miss_items`, save as table `miss_count`.
@@ -175,7 +175,7 @@ For example, the dsl rule is `ts, out_ts`, the first column means the input time
 After the translation, the metrics will be persisted in table `time_metric`.
 
 ## Alternative Rules
-You can simply use Griffin DSL rule to describe your problem in DQ domain, for some complicate requirement, you can also use some alternative rules supported by Griffin.  
+You can simply use a Griffin DSL rule to describe your problem in the DQ domain; for some complicated requirements, you can also use alternative rules supported by Apache Griffin.  
 
 ### Spark sql
 Griffin supports spark-sql directly; you can write a rule in sql like this:  
@@ -201,12 +201,12 @@ Griffin supports some other operations on data frame in spark, like converting j
 }
 ```
 Griffin will do the operation to extract json strings.  
-Actually, you can also extend the df-opr engine and df-opr adaptor in Griffin to support more types of data frame operations.  
+Actually, you can also extend the df-opr engine and df-opr adaptor in Apache Griffin to support more types of data frame operations.  
 
 ## Tips
 The Griffin engine runs on spark; it might work in two phases, the pre-proc phase and the run phase.  
-- **Pre-proc phase**: Griffin calculates data source directly, to get appropriate data format, as a preparation for DQ calculation. In this phase, you can use df-opr and spark-sql rules.  
+- **Pre-proc phase**: Apache Griffin processes the data source directly, to get an appropriate data format, as preparation for the DQ calculation. In this phase, you can use df-opr and spark-sql rules.  
 After preparation, to support streaming DQ calculation, a timestamp column will be added to each row of data, so the data frame in the run phase contains an extra column named "__tmst".  
-- **Run phase**: Griffin calculates with prepared data, to get the DQ metrics. In this phase, you can use griffin-dsl, spark-sql rules, and a part of df-opr rules.  
-For griffin-dsl rule, griffin translates it into spark-sql rule with a group-by condition for column "__tmst", it's useful for especially streaming DQ calculation.  
-But for spark-sql rule, griffin use it directly, you need to add the "__tmst" column in your spark-sql rule explicitly, or you can't get correct metrics result after calculation.
+- **Run phase**: Apache Griffin calculates with the prepared data, to get the DQ metrics. In this phase, you can use griffin-dsl rules, spark-sql rules, and some of the df-opr rules.  
+For a griffin-dsl rule, Apache Griffin translates it into a spark-sql rule with a group-by condition on the column "__tmst", which is especially useful for streaming DQ calculation.  
+But a spark-sql rule Apache Griffin uses directly; you need to add the "__tmst" column in your spark-sql rule explicitly, or you won't get a correct metrics result after the calculation.
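
For instance, a spark-sql rule written with this tip in mind would group on the timestamp column explicitly, along the lines of `SELECT __tmst, count(*) AS cnt FROM source GROUP BY __tmst` (a sketch only, not taken from the Apache Griffin sources).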

http://git-wip-us.apache.org/repos/asf/incubator-griffin/blob/cbde1e4f/griffin-doc/measure/measure-configuration-guide.md
----------------------------------------------------------------------
diff --git a/griffin-doc/measure/measure-configuration-guide.md b/griffin-doc/measure/measure-configuration-guide.md
index 4f00803..13774e1 100644
--- a/griffin-doc/measure/measure-configuration-guide.md
+++ b/griffin-doc/measure/measure-configuration-guide.md
@@ -17,7 +17,7 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# Griffin Measure Configuration Guide
+# Apache Griffin Measure Configuration Guide
 Apache Griffin measure module needs two configuration files to define the parameters of execution, one is for environment, the other is for dq job.
 
 ## Environment Parameters

http://git-wip-us.apache.org/repos/asf/incubator-griffin/blob/cbde1e4f/griffin-doc/measure/measure-streaming-sample.md
----------------------------------------------------------------------
diff --git a/griffin-doc/measure/measure-streaming-sample.md b/griffin-doc/measure/measure-streaming-sample.md
index 846d870..1d9f70e 100644
--- a/griffin-doc/measure/measure-streaming-sample.md
+++ b/griffin-doc/measure/measure-streaming-sample.md
@@ -147,9 +147,9 @@ Above is the configure file of streaming accuracy job.
 
 ### Data source
 In this sample, we use kafka topics as source and target.  
-At current, griffin supports kafka 0.8, for 1.0 or later version is during implementation.  
-In griffin implementation, we can only support json string as kafka data, which could describe itself in data. In some other solution, there might be a schema proxy for kafka binary data, you can implement such data source connector if you need, it's also during implementation by us.
-In streaming cases, the data from topics always needs some pre-process first, which is configured in `pre.proc`, just like the `rules`, griffin will not parse sql content, so we use some pattern to mark your temporory tables. `${this}` means the origin data set, and the output table name should also be `${this}`.
+Currently, Apache Griffin supports kafka 0.8; support for 1.0 or later versions is under implementation.  
+In the Apache Griffin implementation, we only support json strings as kafka data, which can describe themselves in the data. In some other solutions, there might be a schema proxy for kafka binary data; you can implement such a data source connector if you need it, and it is also under implementation by us.
+In streaming cases, the data from topics always needs some pre-processing first, which is configured in `pre.proc`. Just like the `rules`, Apache Griffin will not parse the sql content, so we use a pattern to mark your temporary tables: `${this}` means the origin data set, and the output table name should also be `${this}`.
 
 For example, you can create two topics in kafka, for source and target data; the format could be json strings.
 Source data could be:
@@ -262,4 +262,4 @@ In this sample, we use kafka topics as source.
 
 ### Evaluate rule
 In this profiling sample, the rule describes the profiling request: `select count(name) as cnt, max(age) as max, min(age) as min from source` and `select name, count(*) as cnt from source group by name`.  
-The profiling metrics will be persisted as metric, with these two results in one json.
\ No newline at end of file
+The profiling metrics will be persisted as metric, with these two results in one json.
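
Creating the source and target topics mentioned above might look like this with the kafka 0.8 tooling (a sketch; the zookeeper address and the CLI location on the PATH are assumptions about your environment):

```bash
# Create the demo topics for the streaming samples (kafka 0.8 CLI assumed)
kafka-topics.sh --create --zookeeper <zookeeper host>:2181 \
  --replication-factor 1 --partitions 1 --topic source
kafka-topics.sh --create --zookeeper <zookeeper host>:2181 \
  --replication-factor 1 --partitions 1 --topic target
```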

http://git-wip-us.apache.org/repos/asf/incubator-griffin/blob/cbde1e4f/griffin-doc/roadmap.md
----------------------------------------------------------------------
diff --git a/griffin-doc/roadmap.md b/griffin-doc/roadmap.md
index 53023f4..9a5e291 100644
--- a/griffin-doc/roadmap.md
+++ b/griffin-doc/roadmap.md
@@ -22,7 +22,7 @@ under the License.
 In the current release, we've implemented the main DQ features below
 
 - **Data Asset Detection**
-  Enabling configuration in service module, Griffin can detect the Hive tables metadata through Hive metastore service.
+  With configuration enabled in the service module, Apache Griffin can detect Hive table metadata through the Hive metastore service.
 
 - **Measure Management**
   Performing operations on the UI, users can create, delete and update three types of measures, including: accuracy, profiling and publish metrics.
@@ -42,11 +42,11 @@ In the current release, we've implemented main DQ features below
 ## Short-term Roadmap
 
 - **Support more data source types**
-  Currently, Griffin only supports Hive table, avro files on hdfs as data source in batch mode, Kafka as data source in streaming mode.
+  Currently, Apache Griffin only supports Hive tables and avro files on hdfs as data sources in batch mode, and Kafka as a data source in streaming mode.
   We plan to support more data source types, like RDBMS and elasticsearch.
 
 - **Support more data quality dimensions**
-  Griffin needs to support more data quality dimensions, like consistency and validity.
+  Apache Griffin needs to support more data quality dimensions, like consistency and validity.
 
 - **Anomaly Detection**
-  Griffin plans to support anomaly detection, by analyzing metrics calculated from elasticsearch.
\ No newline at end of file
+  Apache Griffin plans to support anomaly detection, by analyzing metrics calculated from elasticsearch.

http://git-wip-us.apache.org/repos/asf/incubator-griffin/blob/cbde1e4f/griffin-doc/service/api-guide.md
----------------------------------------------------------------------
diff --git a/griffin-doc/service/api-guide.md b/griffin-doc/service/api-guide.md
index 93956ea..a7ab348 100644
--- a/griffin-doc/service/api-guide.md
+++ b/griffin-doc/service/api-guide.md
@@ -19,7 +19,7 @@ under the License.
 
 # Apache Griffin API Guide
 
-This page lists the major RESTful APIs provided by Griffin.
+This page lists the major RESTful APIs provided by Apache Griffin.
 
 The Apache Griffin default `BASE_PATH` is `http://<your ip>:8080`.
 
@@ -41,7 +41,7 @@ Apache Griffin default `BASE_PATH` is `http://<your ip>:8080`.
 <h2 id = "0"></h2>
 
 ## HTTP Response Design
-We follow general rules to design Griffin's REST APIs. In the HTTP response that is sent to a client, 
+We follow general rules to design Apache Griffin's REST APIs. In the HTTP response that is sent to a client, 
 the status code, which is a three-digit number, is accompanied by a reason phrase (also known as status text) that simply describes the meaning of the code. 
 The status codes are classified by number range, with each class of codes having the same basic meaning.
 * The range 100-199 is classed as Informational.
@@ -50,7 +50,7 @@ The status codes are classified by number range, with each class of codes having
 * 400-499 is Client error.
 * 500-599 is Server error.
 
-### Valid Griffin Response
+### Valid Apache Griffin Response
 The valid HTTP response is designed as follows:
 
 | Action | HTTP Status | Response Body |
@@ -62,7 +62,7 @@ The valid HTTP response is designed as follows:
 
 ***Note that:*** The metric module is implemented with the elasticsearch bulk api, so the responses do not follow the rules above.
 
-### Invalid Griffin Response
+### Invalid Apache Griffin Response
 The response for exception is designed as follows:
 
 | Action | HTTP Status | Response Body |
@@ -103,9 +103,9 @@ Description:
 
 <h2 id = "1"></h2>
 
-## Griffin Basic
+## Apache Griffin Basic
 
-### Get griffin version
+### Get Apache Griffin version
 `GET /api/v1/version`
 
 #### Response Body Sample
@@ -132,9 +132,9 @@ Description:
 
 #### Request Body example 
 
-There are two kind of different measures, griffin measure and external measure. And for each type of measure, the 'dq.type' can be 'accuracy' or 'profiling'.
+There are two kinds of measures, Apache Griffin measures and external measures. For each type of measure, the 'dq.type' can be 'accuracy' or 'profiling'.
 
-Here is a request body example to create a griffin measure of  profiling:
+Here is a request body example to create an Apache Griffin measure of profiling:
 ```
 {
     "name":"profiling_measure",
@@ -194,7 +194,7 @@ Here is a request body example to create a griffin measure of  profiling:
     }
 }
 ```
-And for griffin measure of accuracy:
+And for an Apache Griffin measure of accuracy:
 ```
 {
     "name":"accuracy_measure",
@@ -457,9 +457,9 @@ The response body should be the created measure if success. For example:
 | measure | measure entity | Measure |
 
 #### Request Body example 
-There are two kind of different measures, griffin measure and external measure. And for each type of measure, the 'dq.type' can be 'accuracy' or 'profiling'.
+There are two kinds of measures, Apache Griffin measures and external measures. For each type of measure, the 'dq.type' can be 'accuracy' or 'profiling'.
 
-Here is a request body example to update a griffin measure of accuracy:
+Here is a request body example to update an Apache Griffin measure of accuracy:
 ```
 {
     "id": 1,

http://git-wip-us.apache.org/repos/asf/incubator-griffin/blob/cbde1e4f/griffin-doc/service/hibernate_eclipselink_switch.md
----------------------------------------------------------------------
diff --git a/griffin-doc/service/hibernate_eclipselink_switch.md b/griffin-doc/service/hibernate_eclipselink_switch.md
index edb87af..2890352 100644
--- a/griffin-doc/service/hibernate_eclipselink_switch.md
+++ b/griffin-doc/service/hibernate_eclipselink_switch.md
@@ -69,7 +69,7 @@ To use it in our Spring Boot application, we just need to add the org.eclipse.pe
 <h2 id = "1.3"></h2> 
 
 ### Configure EclipseLink static weaving
-EclipseLink requires the domain types to be instrumented to implement lazy-loading. This can be achieved either through static weaving at compile time or dynamically at class loading time (load-time weaving). In Griffin,we use static weaving in pom.xml.
+EclipseLink requires the domain types to be instrumented to implement lazy-loading. This can be achieved either through static weaving at compile time or dynamically at class loading time (load-time weaving). In Apache Griffin, we use static weaving in pom.xml.
 
     <build>
         <plugins>
@@ -102,7 +102,7 @@ EclipseLink requires the domain types to be instrumented to implement lazy-loadi
 **JpaBaseConfiguration is an abstract class which defines beans for JPA** in Spring Boot. Spring provides a configuration implementation for Hibernate out of the box, called HibernateJpaAutoConfiguration. However, for EclipseLink, we have to create a custom configuration. To customize it, we have to implement some methods like createJpaVendorAdapter() or getVendorProperties().
 First, we need to implement the createJpaVendorAdapter() method which specifies the JPA implementation to use.
 Also, we have to define some vendor-specific properties which will be used by EclipseLink. We can add these via the getVendorProperties() method.
-**Add following code as a class to org.apache.griffin.core.config package in Griffin project.**
+**Add following code as a class to org.apache.griffin.core.config package in Apache Griffin project.**
    
 
         @Configuration
@@ -131,7 +131,7 @@ Also, we have to define some vendor-specific properties which will be used by Ec
 <h2 id = "1.5"></h2>
 
 #### Configure properties
-You need to configure properties according to the database you use in Griffin.
+You need to configure properties according to the database you use in Apache Griffin.
 Please see [Mysql and postgresql switch](https://github.com/apache/incubator-griffin/blob/master/griffin-doc/service/mysql_postgresql_switch.md) to configure.
 
 <h2 id = "0.1"></h2>
@@ -140,7 +140,7 @@ Please see [Mysql and postgresql switch](https://github.com/apache/incubator-gri
 Here we'll go through the steps necessary to migrate applications from using EclipseLink JPA to using Hibernate JPA. The migration will not need to convert any EclipseLink annotations to Hibernate annotations in application code.
 
 ## Quick use
-In Griffin, we provide **hibernate_mysql_pom.xml** file for hibernate and mysql. If you want to quick use hibernate and mysql with jar, firstly you should [configure properties](#2.3) and then use command `mvn clean package -f pom_hibernate.xml` to package jar.
+In Apache Griffin, we provide the **hibernate_mysql_pom.xml** file for hibernate and mysql. If you want to quickly use hibernate and mysql with the jar, first [configure properties](#2.3) and then use the command `mvn clean package -f pom_hibernate.xml` to package the jar.
 
 ## Migration main steps
 - [add hibernate dependency](#2.1)
@@ -200,5 +200,5 @@ By default, Spring Data uses Hibernate as the default JPA implementation provide
 <h2 id = "2.3"></h2>
 
 #### Configure properties
-You need to configure properties according to the database you use in Griffin.
+You need to configure properties according to the database you use in Apache Griffin.
 Please see [Mysql and postgresql switch](https://github.com/apache/incubator-griffin/blob/master/griffin-doc/service/mysql_postgresql_switch.md) to configure.

http://git-wip-us.apache.org/repos/asf/incubator-griffin/blob/cbde1e4f/griffin-doc/service/mysql_postgresql_switch.md
----------------------------------------------------------------------
diff --git a/griffin-doc/service/mysql_postgresql_switch.md b/griffin-doc/service/mysql_postgresql_switch.md
index 4d8c07f..7511144 100644
--- a/griffin-doc/service/mysql_postgresql_switch.md
+++ b/griffin-doc/service/mysql_postgresql_switch.md
@@ -20,7 +20,7 @@ under the License.
 # Mysql and postgresql switch
 
 ## Overview
-By default, Griffin uses EclipseLink as the default JPA implementation. This document provides ways to switch mysql and postgresql.
+Apache Griffin uses EclipseLink as the default JPA implementation. This document provides ways to switch between mysql and postgresql.
 
 - [Use mysql database](#1.1)
 - [Use postgresql database](#1.2)

http://git-wip-us.apache.org/repos/asf/incubator-griffin/blob/cbde1e4f/griffin-doc/ui/user-guide.md
----------------------------------------------------------------------
diff --git a/griffin-doc/ui/user-guide.md b/griffin-doc/ui/user-guide.md
index 775d16f..a19bcab 100644
--- a/griffin-doc/ui/user-guide.md
+++ b/griffin-doc/ui/user-guide.md
@@ -139,7 +139,7 @@ Fill out the block of job configuration.
 - Begin: data segment start time compared with the trigger time
 - End: data segment end time compared with the trigger time.
 
-After submit the job, griffin will schedule the job in background, and after calculation, you can monitor the dashboard to view the result on UI.
+After submitting the job, Apache Griffin will schedule the job in the background, and after the calculation you can monitor the dashboard to view the result on the UI.
 
 ## 3 Metrics dashboard