Posted to commits@spot.apache.org by ev...@apache.org on 2017/03/29 16:51:54 UTC

[30/50] [abbrv] incubator-spot git commit: Updating setup documentation

Updating setup documentation


Project: http://git-wip-us.apache.org/repos/asf/incubator-spot/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-spot/commit/70db6eec
Tree: http://git-wip-us.apache.org/repos/asf/incubator-spot/tree/70db6eec
Diff: http://git-wip-us.apache.org/repos/asf/incubator-spot/diff/70db6eec

Branch: refs/heads/SPOT-35_graphql_api
Commit: 70db6eec3b7346fed4c8920e7786e9320dd7d666
Parents: 03e6319
Author: Moises Valdovinos <mv...@mvaldovi-mac01.amr.corp.intel.com>
Authored: Thu Mar 9 01:50:20 2017 -0600
Committer: Diego Ortiz Huerta <di...@intel.com>
Committed: Wed Mar 15 11:49:48 2017 -0700

----------------------------------------------------------------------
 spot-setup/README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/incubator-spot/blob/70db6eec/spot-setup/README.md
----------------------------------------------------------------------
diff --git a/spot-setup/README.md b/spot-setup/README.md
index ad72eb2..e0125b3 100644
--- a/spot-setup/README.md
+++ b/spot-setup/README.md
@@ -18,7 +18,7 @@ To collaborate and run spot-setup, the following prerequisites are required:
 
 ## General Description
 
-The main script in the repository is **hdfs_setup.sh** which is responsible of loading environment variables, creating folders in Hadoop for the different use cases (flow, DNS or Proxy), create the Hive database, and finally execute hive query scripts that creates Hive tables needed to access netflow, dns and proxy data.
+The main script in the repository is **hdfs_setup.sh**, which is responsible for loading environment variables, creating folders in Hadoop for the different use cases (flow, DNS, or Proxy), creating the Impala database, and finally executing the Impala query scripts that create the Impala tables needed to access netflow, DNS, and proxy data.
 
 ## Environment Variables
 
@@ -32,7 +32,7 @@ To read more about these variables, please review the [documentation] (http://sp
 
 spot-setup contains a script per use case; as of today, there is a table-creation script for each of the DNS, flow, and Proxy data sets.
 
-These HQL scripts are intended to be executed as a Hive statement and must comply HQL standards.
+These HQL scripts are intended to be executed as Impala statements and must comply with HQL standards.
 
We create tables using the Parquet format for faster query performance. This format is an industry standard and you can find more information about it at:
 - Parquet is a columnar storage format - https://parquet.apache.org/
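----------------------------------------------------------------------

The setup flow described in the README above can be pictured with a short shell sketch. This is a minimal illustration of the kind of steps hdfs_setup.sh performs, not the script itself; the spot.conf file name, HDFS paths, database name, and .hql file names are assumptions used only for illustration.

    #!/bin/bash
    # Minimal sketch of the setup flow: load environment variables, create the
    # HDFS folders per use case, create the Impala database, and run the
    # table-creation scripts. All names below are illustrative assumptions.

    source ./spot.conf                       # assumed configuration file with environment variables

    DBNAME=${DBNAME:-spotdb}                 # illustrative Impala database name
    HUSER=${HUSER:-/user/spot}               # illustrative HDFS base path

    # One folder per use case: flow, DNS, and Proxy.
    for pipeline in flow dns proxy; do
        hdfs dfs -mkdir -p "${HUSER}/${pipeline}"
    done

    # Create the database, then run one HQL script per use case through Impala.
    impala-shell -q "CREATE DATABASE IF NOT EXISTS ${DBNAME};"
    for hql in create_flow_parquet.hql create_dns_parquet.hql create_proxy_parquet.hql; do
        impala-shell -f "${hql}"             # illustrative script names
    done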
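The table-creation scripts themselves reduce to Parquet DDL executed through Impala. The statement below is a simplified sketch of that pattern, assuming impala-shell is available on the path; the table name, columns, partitioning scheme, and HDFS location are placeholders rather than the actual Apache Spot schema.

    # Simplified example of an Impala table-creation statement stored as Parquet.
    # Columns, partitions, and the location are placeholders, not the real schema.
    impala-shell -q "
      CREATE EXTERNAL TABLE IF NOT EXISTS spotdb.flow (
        treceived STRING,
        sip       STRING,
        dip       STRING,
        sport     INT,
        dport     INT,
        ibyt      BIGINT
      )
      PARTITIONED BY (y SMALLINT, m TINYINT, d TINYINT)
      STORED AS PARQUET
      LOCATION '/user/spot/flow/hive';"

Storing the tables as Parquet is what gives the faster, columnar query performance mentioned above.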