You are viewing a plain text version of this content. The canonical link for it is here.
Posted to commits@tajo.apache.org by hy...@apache.org on 2015/03/09 03:35:30 UTC

svn commit: r1665114 [2/30] - in /tajo/site/docs: 0.10.0/ 0.10.0/_sources/ 0.10.0/_sources/backup_and_restore/ 0.10.0/_sources/configuration/ 0.10.0/_sources/functions/ 0.10.0/_sources/getting_started/ 0.10.0/_sources/index/ 0.10.0/_sources/partitionin...

Added: tajo/site/docs/0.10.0/_sources/functions/string_func_and_operators.txt
URL: http://svn.apache.org/viewvc/tajo/site/docs/0.10.0/_sources/functions/string_func_and_operators.txt?rev=1665114&view=auto
==============================================================================
--- tajo/site/docs/0.10.0/_sources/functions/string_func_and_operators.txt (added)
+++ tajo/site/docs/0.10.0/_sources/functions/string_func_and_operators.txt Mon Mar  9 02:35:26 2015
@@ -0,0 +1,154 @@
+*******************************
+String Functions and Operators
+*******************************
+
+.. function:: str1 || str2
+
+  Returns the concatenation of the two strings ``str1`` and ``str2``.
+
+  :param str1: first string
+  :param str2: second string
+  :rtype: text
+  :example:
+
+  .. code-block:: sql
+
+    select 'Ta' || 'jo';
+    > 'Tajo'
+  
+
+.. function:: char_length (string text)
+
+  Returns the number of characters in the string.
+
+  :param string: string to be counted
+  :rtype: int4
+  :alias: character_length
+  :example:
+
+  .. code-block:: sql
+
+    select char_length('Tajo');
+    > 4
+
+
+.. function:: trim([leading | trailing | both] [characters] from string)
+
+  Removes the specified characters (a space by default) from the start, the end, or both ends of the string.
+
+  :param string: 
+  :param characters: 
+  :rtype: text
+  :example:
+
+  .. code-block:: sql
+
+    select trim(both 'x' from 'xTajoxx');
+    > Tajo
+
+
+.. function:: btrim(string text, [characters text])
+
+  Removes the characters (a space by default) from both ends of the string.
+  
+  :param string: 
+  :param characters: 
+  :rtype: text
+  :alias: trim
+  :example:
+
+  .. code-block:: sql
+
+    select btrim('xTajoxx', 'x');
+    > Tajo
+
+
+.. function:: ltrim(string text, [characters text])
+
+  Removes the characters (a space by default) from the start of the string.
+
+  :param string: 
+  :param characters: 
+  :rtype: text
+  :example:
+
+  .. code-block:: sql
+
+    select ltrim('xxTajo', 'x');
+    > Tajo
+
+
+.. function:: rtrim(string text, [characters text])
+
+  Removes the characters (a space by default) from the end of the string.
+
+  :param string: 
+  :param characters: 
+  :rtype: text
+  :example:
+
+  .. code-block:: sql
+
+    select rtrim('Tajoxx', 'x');
+    > Tajo 
+
+
+.. function:: split_part(string text, delimiter text, field int)
+
+  Splits a string on the delimiter and returns the given field (counting from one).
+
+  :param string: 
+  :param delimiter: 
+  :param field: 
+  :rtype: text
+  :example:
+
+  .. code-block:: sql
+
+    select split_part('ab_bc_cd', '_', 2);
+    > bc
+
+
+
+.. function:: regexp_replace(string text, pattern text, replacement text)
+
+  Replaces substrings that match a given regular expression pattern.
+
+  :param string: 
+  :param pattern: 
+  :param replacement: 
+  :rtype: text
+  :example:
+
+  .. code-block:: sql
+
+    select regexp_replace('abcdef', '(^ab|ef$)', '--');
+    > --cd--
+
+
+.. function:: upper(string text)
+
+  Converts the input text to upper case.
+
+  :param string:
+  :rtype: text
+  :example:
+
+  .. code-block:: sql
+
+    select upper('tajo');
+    > TAJO
+
+
+.. function:: lower(string text)
+
+  Converts the input text to lower case.
+
+  :param string:
+  :rtype: text
+  :example:
+
+  .. code-block:: sql
+
+    select lower('TAJO');
+    > tajo

Added: tajo/site/docs/0.10.0/_sources/getting_started.txt
URL: http://svn.apache.org/viewvc/tajo/site/docs/0.10.0/_sources/getting_started.txt?rev=1665114&view=auto
==============================================================================
--- tajo/site/docs/0.10.0/_sources/getting_started.txt (added)
+++ tajo/site/docs/0.10.0/_sources/getting_started.txt Mon Mar  9 02:35:26 2015
@@ -0,0 +1,182 @@
+***************
+Getting Started
+***************
+
+In this section, we explain how to set up a standalone Tajo instance that runs against the local filesystem. In later sections, we will present how to run a Tajo cluster instance on Apache Hadoop's HDFS, a distributed filesystem. This section shows you how to start up a Tajo cluster, create tables, submit SQL queries via the Tajo shell, and shut down your Tajo cluster instance. The exercise below should take no more than ten minutes.
+
+======================
+Prerequisites
+======================
+
+ * Hadoop 2.3.0 or higher (up to 2.5.1)
+ * Java 1.6 or 1.7
+ * Protocol buffer 2.5.0
+
+===================================
+Download and unpack the source code
+===================================
+
+You can either download the source code release of Tajo or check out the development codebase from Git.
+
+-----------------------------------
+Download the latest source release
+-----------------------------------
+
+Choose a download site from this list of `Apache Download Mirrors <http://www.apache.org/dyn/closer.cgi/tajo>`_.
+Click on the suggested mirror link. This will take you to a mirror of Tajo Releases. 
+Download the file that ends in .tar.gz to your local filesystem, e.g. tajo-x.y.z-src.tar.gz.
+
+Decompress and untar your downloaded file and then change into the unpacked directory. ::
+
+  tar xzvf tajo-x.y.z-src.tar.gz
+
+-----------------------------------
+Check out the source code via Git
+-----------------------------------
+
+The development codebase can also be downloaded from `the Apache git repository <https://git-wip-us.apache.org/repos/asf/tajo.git>`_ as follows: ::
+
+  git clone https://git-wip-us.apache.org/repos/asf/tajo.git
+
+A read-only git repository is also mirrored on `Github <https://github.com/apache/tajo>`_.
+
+
+=================
+Build source code
+=================
+
+Once you have prepared the prerequisites and obtained the source code, you can build Tajo.
+
+The first step of the installation procedure is to build the source tree for your system with the build options you would like (for example, the Hadoop version). You can compile the source code and get a binary archive as follows:
+
+.. code-block:: bash
+
+  $ cd tajo-x.y.z
+  $ mvn clean install -DskipTests -Pdist -Dtar -Dhadoop.version=2.X.X
+  $ ls tajo-dist/target/tajo-x.y.z-SNAPSHOT.tar.gz
+
+.. note::
+
+  If you don't specify the Hadoop version, the Tajo cluster may not run correctly. Thus, we highly recommend that you specify your Hadoop version in the Maven build command.
+
+  Example:
+
+    $ mvn clean install -DskipTests -Pdist -Dtar -Dhadoop.version=2.5.1
+
+Then, move to a proper directory and decompress the tar.gz file as follows:
+
+.. code-block:: bash
+
+  $ cd [a directory to be parent of tajo binary]
+  $ tar xzvf ${TAJO_SRC}/tajo-dist/target/tajo-x.y.z-SNAPSHOT.tar.gz
+
+================================
+Setting up a local Tajo cluster
+================================
+
+Apache Tajo™ provides two run modes: local mode and fully distributed mode. Here, we explain only the local mode where a Tajo instance runs on a local file system. A local mode Tajo instance can start up with very simple configurations.
+
+First of all, you need to add the environment variables to conf/tajo-env.sh.
+
+.. code-block:: bash
+
+  # Hadoop home. Required
+  export HADOOP_HOME= ...
+
+  # The java implementation to use.  Required.
+  export JAVA_HOME= ...
+
+To launch the Tajo master, execute start-tajo.sh.
+
+.. code-block:: bash
+
+  $ $TAJO_HOME/bin/start-tajo.sh
+
+.. note::
+
+  If you want to know how to set up Tajo in fully distributed mode, please see :doc:`/configuration/cluster_setup`.
+
+.. warning::
+
+  By default, the *Catalog server*, which manages table metadata, uses `Apache Derby <http://db.apache.org/derby/>`_ as its persistent storage, and Derby stores data in the ``/tmp/tajo-catalog-${username}`` directory. However, some operating systems may remove all contents in ``/tmp`` when booting up. In order to ensure that your catalog data is stored persistently, you need to set a proper location for the Derby directory. To learn about Catalog configuration, please refer to :doc:`/configuration/catalog_configuration`.
+
+======================
+First query execution
+======================
+
+First of all, we need to prepare a table for query execution. For example, you can make a simple text-based table as follows:
+
+.. code-block:: bash
+
+  $ mkdir /home/x/table1
+  $ cd /home/x/table1
+  $ cat > data.csv
+  1|abc|1.1|a
+  2|def|2.3|b
+  3|ghi|3.4|c
+  4|jkl|4.5|d
+  5|mno|5.6|e
+  <CTRL + D>
+
+
+Apache Tajo™ provides a SQL shell which allows users to interactively submit SQL queries. In order to use this shell, please execute ``bin/tsql`` ::
+
+  $ $TAJO_HOME/bin/tsql
+  tajo>
+
+In order to load the table we created above, we need to define the table's schema.
+Here, we assume the schema is (int, text, float, text). ::
+
+  $ $TAJO_HOME/bin/tsql
+  tajo> create external table table1 (
+        id int,
+        name text, 
+        score float, 
+        type text) 
+        using csv with ('text.delimiter'='|') location 'file:/home/x/table1';
+
+To load an external table, you need to use the ``create external table`` statement.
+In the location clause, you should use the absolute directory path with an appropriate scheme.
+If the table resides in HDFS, you should use the ``hdfs`` scheme instead of ``file``.
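+
+For instance, a hedged sketch of the same statement for data stored in HDFS would look like the following (the table name ``table2``, the NameNode address ``localhost:9010``, and the HDFS path are illustrative assumptions, not required values): ::
+
+  tajo> create external table table2 (
+        id int,
+        name text,
+        score float,
+        type text)
+        using csv with ('text.delimiter'='|') location 'hdfs://localhost:9010/user/x/table2';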
+
+If you want to know more about DDL statements, please see the Query Language documentation. ::
+
+  tajo> \d
+  table1
+
+The ``\d`` command shows the list of tables. ::
+
+  tajo> \d table1
+
+  table name: table1
+  table path: file:/home/x/table1
+  store type: CSV
+  number of rows: 0
+  volume (bytes): 78 B
+  schema:
+  id      INT
+  name    TEXT
+  score   FLOAT
+  type    TEXT
+
+``\d [table name]`` command shows the description of a given table.
+
+Also, you can execute SQL queries as follows: ::
+
+  tajo> select * from table1 where id > 2;
+  final state: QUERY_SUCCEEDED, init time: 0.069 sec, response time: 0.397 sec
+  result: file:/tmp/tajo-hadoop/staging/q_1363768615503_0001_000001/RESULT, 3 rows ( 35B)
+
+  id,  name,  score,  type
+  - - - - - - - - - -  - - -
+  3,  ghi,  3.4,  c
+  4,  jkl,  4.5,  d
+  5,  mno,  5.6,  e
+
+  tajo> \q
+  bye
+
+Feel free to enjoy Tajo with standard SQL.
+If you want a more detailed explanation of the SQL supported by Tajo, please refer to :doc:`/sql_language`.
\ No newline at end of file

Added: tajo/site/docs/0.10.0/_sources/getting_started/building.txt
URL: http://svn.apache.org/viewvc/tajo/site/docs/0.10.0/_sources/getting_started/building.txt?rev=1665114&view=auto
==============================================================================
--- tajo/site/docs/0.10.0/_sources/getting_started/building.txt (added)
+++ tajo/site/docs/0.10.0/_sources/getting_started/building.txt Mon Mar  9 02:35:26 2015
@@ -0,0 +1,30 @@
+*****************
+Build source code
+*****************
+
+Once you have prepared the prerequisites and obtained the source code, you can build Tajo.
+
+The first step of the installation procedure is to build the source tree for your system with the build options you would like (for example, the Hadoop version). You can compile the source code and get a binary archive as follows:
+
+.. code-block:: bash
+
+  $ cd tajo-x.y.z
+  $ mvn clean install -DskipTests -Pdist -Dtar -Dhadoop.version=2.X.X
+  $ ls tajo-dist/target/tajo-x.y.z-SNAPSHOT.tar.gz
+
+.. note::
+
+  If you don't specify the Hadoop version, the Tajo cluster may not run correctly. Thus, we highly recommend that you specify your Hadoop version in the Maven build command.
+
+  Example:
+
+    $ mvn clean install -DskipTests -Pdist -Dtar -Dhadoop.version=2.5.1
+
+Then, move to a proper directory and decompress the tar.gz file as follows:
+
+.. code-block:: bash
+
+  $ cd [a directory to be parent of tajo binary]
+  $ tar xzvf ${TAJO_SRC}/tajo-dist/target/tajo-x.y.z-SNAPSHOT.tar.gz
\ No newline at end of file

Added: tajo/site/docs/0.10.0/_sources/getting_started/downloading_source.txt
URL: http://svn.apache.org/viewvc/tajo/site/docs/0.10.0/_sources/getting_started/downloading_source.txt?rev=1665114&view=auto
==============================================================================
--- tajo/site/docs/0.10.0/_sources/getting_started/downloading_source.txt (added)
+++ tajo/site/docs/0.10.0/_sources/getting_started/downloading_source.txt Mon Mar  9 02:35:26 2015
@@ -0,0 +1,31 @@
+*************************************
+Download and unpack the source code
+*************************************
+
+You can either download the source code release of Tajo or check out the development codebase from Git.
+
+================================================
+Download the latest source release
+================================================
+
+Choose a download site from this list of `Apache Download Mirrors <http://www.apache.org/dyn/closer.cgi/tajo>`_.
+Click on the suggested mirror link. This will take you to a mirror of Tajo Releases. 
+Download the file that ends in .tar.gz to your local filesystem, e.g. tajo-x.y.z-src.tar.gz.
+
+Decompress and untar your downloaded file and then change into the unpacked directory. ::
+
+  tar xzvf tajo-x.y.z-src.tar.gz
+
+================================================
+Check out the source code via Git
+================================================
+
+The development codebase can also be downloaded from `the Apache git repository <https://git-wip-us.apache.org/repos/asf/tajo.git>`_ as follows: ::
+
+  git clone https://git-wip-us.apache.org/repos/asf/tajo.git
+
+A read-only git repository is also mirrored on `Github <https://github.com/apache/tajo>`_.
+
+
+
+

Added: tajo/site/docs/0.10.0/_sources/getting_started/first_query.txt
URL: http://svn.apache.org/viewvc/tajo/site/docs/0.10.0/_sources/getting_started/first_query.txt?rev=1665114&view=auto
==============================================================================
--- tajo/site/docs/0.10.0/_sources/getting_started/first_query.txt (added)
+++ tajo/site/docs/0.10.0/_sources/getting_started/first_query.txt Mon Mar  9 02:35:26 2015
@@ -0,0 +1,78 @@
+************************
+First query execution
+************************
+
+First of all, we need to prepare a table for query execution. For example, you can make a simple text-based table as follows:
+
+.. code-block:: bash
+
+  $ mkdir /home/x/table1
+  $ cd /home/x/table1
+  $ cat > data.csv
+  1|abc|1.1|a
+  2|def|2.3|b
+  3|ghi|3.4|c
+  4|jkl|4.5|d
+  5|mno|5.6|e
+  <CTRL + D>
+
+
+Apache Tajo™ provides a SQL shell which allows users to interactively submit SQL queries. In order to use this shell, please execute ``bin/tsql`` ::
+
+  $ $TAJO_HOME/bin/tsql
+  tajo>
+
+In order to load the table we created above, we need to define the table's schema.
+Here, we assume the schema is (int, text, float, text). ::
+
+  $ $TAJO_HOME/bin/tsql
+  tajo> create external table table1 (
+        id int,
+        name text, 
+        score float, 
+        type text) 
+        using csv with ('text.delimiter'='|') location 'file:/home/x/table1';
+
+To load an external table, you need to use the ``create external table`` statement.
+In the location clause, you should use the absolute directory path with an appropriate scheme.
+If the table resides in HDFS, you should use the ``hdfs`` scheme instead of ``file``.
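+
+For instance, a hedged sketch of the same statement for data stored in HDFS would look like the following (the table name ``table2``, the NameNode address ``localhost:9010``, and the HDFS path are illustrative assumptions, not required values): ::
+
+  tajo> create external table table2 (
+        id int,
+        name text,
+        score float,
+        type text)
+        using csv with ('text.delimiter'='|') location 'hdfs://localhost:9010/user/x/table2';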
+
+If you want to know more about DDL statements, please see the Query Language documentation. ::
+
+  tajo> \d
+  table1
+
+The ``\d`` command shows the list of tables. ::
+
+  tajo> \d table1
+
+  table name: table1
+  table path: file:/home/x/table1
+  store type: CSV
+  number of rows: 0
+  volume (bytes): 78 B
+  schema:
+  id      INT
+  name    TEXT
+  score   FLOAT
+  type    TEXT
+
+``\d [table name]`` command shows the description of a given table.
+
+Also, you can execute SQL queries as follows: ::
+
+  tajo> select * from table1 where id > 2;
+  final state: QUERY_SUCCEEDED, init time: 0.069 sec, response time: 0.397 sec
+  result: file:/tmp/tajo-hadoop/staging/q_1363768615503_0001_000001/RESULT, 3 rows ( 35B)
+
+  id,  name,  score,  type
+  - - - - - - - - - -  - - -
+  3,  ghi,  3.4,  c
+  4,  jkl,  4.5,  d
+  5,  mno,  5.6,  e
+
+  tajo> \q
+  bye
+
+Feel free to enjoy Tajo with standard SQL.
+If you want a more detailed explanation of the SQL supported by Tajo, please refer to :doc:`/sql_language`.
\ No newline at end of file

Added: tajo/site/docs/0.10.0/_sources/getting_started/local_setup.txt
URL: http://svn.apache.org/viewvc/tajo/site/docs/0.10.0/_sources/getting_started/local_setup.txt?rev=1665114&view=auto
==============================================================================
--- tajo/site/docs/0.10.0/_sources/getting_started/local_setup.txt (added)
+++ tajo/site/docs/0.10.0/_sources/getting_started/local_setup.txt Mon Mar  9 02:35:26 2015
@@ -0,0 +1,31 @@
+**********************************
+Setting up a local Tajo cluster
+**********************************
+
+Apache Tajo™ provides two run modes: local mode and fully distributed mode. Here, we explain only the local mode where a Tajo instance runs on a local file system. A local mode Tajo instance can start up with very simple configurations.
+
+First of all, you need to add the environment variables to conf/tajo-env.sh.
+
+.. code-block:: bash
+
+  # Hadoop home. Required
+  export HADOOP_HOME= ...
+
+  # The java implementation to use.  Required.
+  export JAVA_HOME= ...
+
+To launch the Tajo master, execute start-tajo.sh.
+
+.. code-block:: bash
+
+  $ $TAJO_HOME/bin/start-tajo.sh
+
+.. note::
+
+  If you want to know how to set up Tajo in fully distributed mode, please see :doc:`/configuration/cluster_setup`.
+
+.. warning::
+
+  By default, the *Catalog server*, which manages table metadata, uses `Apache Derby <http://db.apache.org/derby/>`_ as its persistent storage, and Derby stores data in the ``/tmp/tajo-catalog-${username}`` directory. However, some operating systems may remove all contents in ``/tmp`` when booting up. In order to ensure that your catalog data is stored persistently, you need to set a proper location for the Derby directory. To learn about Catalog configuration, please refer to :doc:`/configuration/catalog_configuration`.
+
+

Added: tajo/site/docs/0.10.0/_sources/getting_started/prerequisites.txt
URL: http://svn.apache.org/viewvc/tajo/site/docs/0.10.0/_sources/getting_started/prerequisites.txt?rev=1665114&view=auto
==============================================================================
--- tajo/site/docs/0.10.0/_sources/getting_started/prerequisites.txt (added)
+++ tajo/site/docs/0.10.0/_sources/getting_started/prerequisites.txt Mon Mar  9 02:35:26 2015
@@ -0,0 +1,7 @@
+**********************
+Prerequisites
+**********************
+
+ * Hadoop 2.3.0 or higher (up to 2.5.1)
+ * Java 1.6 or 1.7
+ * Protocol buffer 2.5.0

Added: tajo/site/docs/0.10.0/_sources/hbase_integration.txt
URL: http://svn.apache.org/viewvc/tajo/site/docs/0.10.0/_sources/hbase_integration.txt?rev=1665114&view=auto
==============================================================================
--- tajo/site/docs/0.10.0/_sources/hbase_integration.txt (added)
+++ tajo/site/docs/0.10.0/_sources/hbase_integration.txt Mon Mar  9 02:35:26 2015
@@ -0,0 +1,183 @@
+*************************************
+HBase Integration
+*************************************
+
+Apache Tajo™ storage supports integration with Apache HBase™.
+This integration allows Tajo to access all tables used in Apache HBase.
+
+In order to use this feature, you need to add some configs into ``conf/tajo-env.sh`` and then add some properties to a table create statement.
+
+This section describes how to setup HBase integration.
+
+First, you need to set your HBase home directory to the environment variable ``HBASE_HOME`` in conf/tajo-env.sh as follows: ::
+
+  export HBASE_HOME=/path/to/your/hbase/directory
+
+If you set the directory, Tajo will add the HBase library files to its classpath.
+
+
+
+========================
+CREATE TABLE
+========================
+
+*Synopsis*
+
+.. code-block:: sql
+
+  CREATE [EXTERNAL] TABLE [IF NOT EXISTS] <table_name> [(<column_name> <data_type>, ... )]
+  USING hbase
+  WITH ('table'='<hbase_table_name>'
+  , 'columns'=':key,<column_family_name>:<qualifier_name>, ...'
+  , 'hbase.zookeeper.quorum'='<zookeeper_address>'
+  , 'hbase.zookeeper.property.clientPort'='<zookeeper_client_port>'
+  )
+
+Options
+
+* ``table`` : Sets the name of the origin HBase table. If you want to create an external table, the table must already exist in HBase. Conversely, if you want to create a managed table, the table must not exist in HBase.
+* ``columns`` : ``:key`` means the HBase row key. The number of column entries must be equal to the number of Tajo table columns.
+* ``hbase.zookeeper.quorum`` : Sets the ZooKeeper quorum address. You can use a different ZooKeeper cluster on the same Tajo database. If you don't set the ZooKeeper address, Tajo will refer to the property in the hbase-site.xml file.
+* ``hbase.zookeeper.property.clientPort`` : Sets the ZooKeeper client port. If you don't set the port, Tajo will refer to the property in the hbase-site.xml file.
+
+``IF NOT EXISTS`` allows the ``CREATE [EXTERNAL] TABLE`` statement to avoid an error which occurs when the table already exists.
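+
+For illustration, a minimal sketch that sets all of the options above explicitly is shown below (the table name, column mapping, and ZooKeeper addresses are placeholders, not required values):
+
+.. code-block:: sql
+
+  CREATE EXTERNAL TABLE articles (rowkey text, author text, title text)
+  USING hbase
+  WITH ('table'='articles'
+  , 'columns'=':key,info:author,content:title'
+  , 'hbase.zookeeper.quorum'='zk1.example.com,zk2.example.com'
+  , 'hbase.zookeeper.property.clientPort'='2181'
+  );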
+
+
+
+========================
+DROP TABLE
+========================
+
+*Synopsis*
+
+.. code-block:: sql
+
+  DROP TABLE [IF EXISTS] <table_name> [PURGE]
+
+``IF EXISTS`` allows the ``DROP TABLE`` statement to avoid an error which occurs when the table does not exist. The ``DROP TABLE`` statement removes a table from the Tajo catalog, but it does not remove the contents on the HBase cluster. If the ``PURGE`` option is given, the ``DROP TABLE`` statement will eliminate the entry in the catalog as well as the contents on the HBase cluster.
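+
+For example, the two variants can be sketched as follows (the table name is a placeholder):
+
+.. code-block:: sql
+
+  -- removes only the entry in the Tajo catalog; the contents on HBase are kept
+  DROP TABLE IF EXISTS articles;
+
+  -- removes the catalog entry and the contents on the HBase cluster as well
+  DROP TABLE IF EXISTS articles PURGE;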
+
+
+========================
+INSERT (OVERWRITE) INTO
+========================
+
+The INSERT OVERWRITE statement overwrites the data of an existing table. Tajo's INSERT OVERWRITE statement follows the ``INSERT INTO SELECT`` statement of SQL. Examples are as follows:
+
+.. code-block:: sql
+
+  -- when a target table schema and output schema are equivalent to each other
+  INSERT OVERWRITE INTO t1 SELECT l_orderkey, l_partkey, l_quantity FROM lineitem;
+  -- or
+  INSERT OVERWRITE INTO t1 SELECT * FROM lineitem;
+
+  -- when the output schema are smaller than the target table schema
+  INSERT OVERWRITE INTO t1 SELECT l_orderkey FROM lineitem;
+
+  -- when you want to specify certain target columns
+  INSERT OVERWRITE INTO t1 (col1, col3) SELECT l_orderkey, l_quantity FROM lineitem;
+
+
+.. note::
+
+  If you don't set the row key option, you will never be able to use your table data, because Tajo needs some key columns for sorting before creating the result data.
+
+
+
+========================
+Usage
+========================
+
+In order to create a new HBase table which is to be managed by Tajo, use the USING clause on CREATE TABLE:
+
+.. code-block:: sql
+
+  CREATE EXTERNAL TABLE blog (rowkey text, author text, register_date text, title text)
+  USING hbase WITH (
+    'table'='blog'
+    , 'columns'=':key,info:author,info:date,content:title');
+
+You can then create the matching HBase table and load some sample data using the HBase shell:
+
+.. code-block:: bash
+
+  $ hbase shell
+  create 'blog', {NAME=>'info'}, {NAME=>'content'}
+  put 'blog', 'hyunsik-02', 'content:title', 'Getting started with Tajo on your desktop'
+  put 'blog', 'hyunsik-02', 'info:author', 'Hyunsik Choi'
+  put 'blog', 'hyunsik-02', 'info:date', '2014-12-03'
+  put 'blog', 'blrunner-01', 'content:title', 'Apache Tajo: A Big Data Warehouse System on Hadoop'
+  put 'blog', 'blrunner-01', 'info:author', 'Jaehwa Jung'
+  put 'blog', 'blrunner-01', 'info:date', '2014-10-31'
+  put 'blog', 'jhkim-01', 'content:title', 'APACHE TAJO™ v0.9 HAS ARRIVED!'
+  put 'blog', 'jhkim-01', 'info:author', 'Jinho Kim'
+  put 'blog', 'jhkim-01', 'info:date', '2014-10-22'
+
+You can then query the table metadata with the ``\d`` command:
+
+.. code-block:: sql
+
+  default> \d blog;
+
+  table name: default.blog
+  table path:
+  store type: HBASE
+  number of rows: unknown
+  volume: 0 B
+  Options:
+          'columns'=':key,info:author,info:date,content:title'
+          'table'='blog'
+
+  schema:
+  rowkey  TEXT
+  author  TEXT
+  register_date   TEXT
+  title   TEXT
+
+
+And then query the table as follows:
+
+.. code-block:: sql
+
+  default> SELECT * FROM blog;
+  rowkey,  author,  register_date,  title
+  -------------------------------
+  blrunner-01,  Jaehwa Jung,  2014-10-31,  Apache Tajo: A Big Data Warehouse System on Hadoop
+  hyunsik-02,  Hyunsik Choi,  2014-12-03,  Getting started with Tajo on your desktop
+  jhkim-01,  Jinho Kim,  2014-10-22,  APACHE TAJO™ v0.9 HAS ARRIVED!
+
+  default> SELECT * FROM blog WHERE rowkey = 'blrunner-01';
+  Progress: 100%, response time: 2.043 sec
+  rowkey,  author,  register_date,  title
+  -------------------------------
+  blrunner-01,  Jaehwa Jung,  2014-10-31,  Apache Tajo: A Big Data Warehouse System on Hadoop
+
+
+Here's how to insert data into the HBase table:
+
+.. code-block:: sql
+
+  CREATE TABLE blog_backup(rowkey text, author text, register_date text, title text)
+  USING hbase WITH (
+    'table'='blog_backup'
+    , 'columns'=':key,info:author,info:date,content:title');
+  INSERT OVERWRITE INTO blog_backup SELECT * FROM blog;
+
+
+Use HBase shell to verify that the data actually got loaded:
+
+.. code-block:: bash
+
+  hbase(main):004:0> scan 'blog_backup'
+   ROW          COLUMN+CELL
+   blrunner-01  column=content:title, timestamp=1421227531054, value=Apache Tajo: A Big Data Warehouse System on Hadoop
+   blrunner-01  column=info:author, timestamp=1421227531054, value=Jaehwa Jung
+   blrunner-01  column=info:date, timestamp=1421227531054, value=2014-10-31
+   hyunsik-02   column=content:title, timestamp=1421227531054, value=Getting started with Tajo on your desktop
+   hyunsik-02   column=info:author, timestamp=1421227531054, value=Hyunsik Choi
+   hyunsik-02   column=info:date, timestamp=1421227531054, value=2014-12-03
+   jhkim-01     column=content:title, timestamp=1421227531054, value=APACHE TAJO\xE2\x84\xA2 v0.9 HAS ARRIVED!
+   jhkim-01     column=info:author, timestamp=1421227531054, value=Jinho Kim
+   jhkim-01     column=info:date, timestamp=1421227531054, value=2014-10-22
+  3 row(s) in 0.0470 seconds
+
+

Added: tajo/site/docs/0.10.0/_sources/hcatalog_integration.txt
URL: http://svn.apache.org/viewvc/tajo/site/docs/0.10.0/_sources/hcatalog_integration.txt?rev=1665114&view=auto
==============================================================================
--- tajo/site/docs/0.10.0/_sources/hcatalog_integration.txt (added)
+++ tajo/site/docs/0.10.0/_sources/hcatalog_integration.txt Mon Mar  9 02:35:26 2015
@@ -0,0 +1,52 @@
+*************************************
+HCatalog Integration
+*************************************
+
+Apache Tajo™ catalog supports HCatalogStore driver to integrate with Apache Hive™. 
+This integration allows Tajo to access all tables used in Apache Hive. 
+Depending on your purpose, you can execute either SQL queries or HiveQL queries on the 
+same tables managed in Apache Hive.
+
+In order to use this feature, you need to build Tajo with a specified maven profile 
+and then add some configs into ``conf/tajo-env.sh`` and ``conf/catalog-site.xml``. 
+This section describes how to set up HCatalog integration.
+This procedure should take no more than ten minutes.
+
+First, you need to compile the source code with the hcatalog profile.
+Currently, Tajo supports the hcatalog-0.11.0 and hcatalog-0.12.0 profiles.
+So, if you want to use Hive 0.11.0, you need to set ``-Phcatalog-0.11.0`` as the maven profile ::
+
+  $ mvn clean package -DskipTests -Pdist -Dtar -Phcatalog-0.11.0
+
+Or, if you want to use Hive 0.12.0, you need to set ``-Phcatalog-0.12.0`` as the maven profile ::
+
+  $ mvn clean package -DskipTests -Pdist -Dtar -Phcatalog-0.12.0
+
+Then, you need to set your Hive home directory to the environment variable ``HIVE_HOME`` in conf/tajo-env.sh as follows: ::
+
+  export HIVE_HOME=/path/to/your/hive/directory
+
+If you need to use JDBC to connect to the Hive MetaStore, you have to prepare the MySQL JDBC driver.
+In that case, you should set the path of the MySQL JDBC driver jar file in the environment variable HIVE_JDBC_DRIVER_DIR in conf/tajo-env.sh as follows: ::
+
+  export HIVE_JDBC_DRIVER_DIR=/path/to/your/mysql_jdbc_driver/mysql-connector-java-x.x.x-bin.jar
+
+Finally, you should specify HCatalogStore as the Tajo catalog driver class in ``conf/catalog-site.xml`` as follows: ::
+
+  <property>
+    <name>tajo.catalog.store.class</name>
+    <value>org.apache.tajo.catalog.store.HCatalogStore</value>
+  </property>
+
+.. note::
+
+  Hive stores a list of partitions for each table in its metastore. If new partitions are
+  directly added to HDFS, the Hive metastore will not be aware of these partitions unless the user
+  runs ``ALTER TABLE table_name ADD PARTITION`` on each of the newly added partitions or runs the
+  ``MSCK REPAIR TABLE table_name`` command.
+
+  But Tajo currently doesn't provide the ``ADD PARTITION`` command, and Hive doesn't provide an API for
+  responding to the ``MSCK REPAIR TABLE`` command. Thus, if you insert data into a Hive partitioned
+  table and you want to scan the updated partitions through Tajo, you must run the following command in Hive ::
+
+  $ MSCK REPAIR TABLE [table_name];

Added: tajo/site/docs/0.10.0/_sources/index.txt
URL: http://svn.apache.org/viewvc/tajo/site/docs/0.10.0/_sources/index.txt?rev=1665114&view=auto
==============================================================================
--- tajo/site/docs/0.10.0/_sources/index.txt (added)
+++ tajo/site/docs/0.10.0/_sources/index.txt Mon Mar  9 02:35:26 2015
@@ -0,0 +1,54 @@
+.. Licensed under the Apache License, Version 2.0 (the "License");
+   you may not use this file except in compliance with the License.
+   You may obtain a copy of the License at
+
+.. http://www.apache.org/licenses/LICENSE-2.0
+
+.. Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+.. Apache Tajo documentation master file, created by
+   sphinx-quickstart on Thu Feb 27 08:29:11 2014.
+   You can adapt this file completely to your liking, but it should at least
+   contain the root `toctree` directive.
+
+Apache Tajo™ 0.10.0 - User documentation
+===========================================================================
+
+.. warning::
+   
+  This documentation is based on the development branch (master).
+  As a result, some content may not match the actual implementation.
+
+Table of Contents:
+
+.. toctree::
+   :maxdepth: 3
+
+   introduction
+   getting_started
+   configuration
+   tsql
+   sql_language
+   time_zone
+   functions
+   table_management
+   table_partitioning
+   index_overview
+   backup_and_restore
+   hcatalog_integration
+   hbase_integration
+   jdbc_driver
+   tajo_client_api
+   faq
+
+Indices and tables
+==================
+
+* :ref:`genindex`
+* :ref:`modindex`
+* :ref:`search`
+

Added: tajo/site/docs/0.10.0/_sources/index/future_work.txt
URL: http://svn.apache.org/viewvc/tajo/site/docs/0.10.0/_sources/index/future_work.txt?rev=1665114&view=auto
==============================================================================
--- tajo/site/docs/0.10.0/_sources/index/future_work.txt (added)
+++ tajo/site/docs/0.10.0/_sources/index/future_work.txt Mon Mar  9 02:35:26 2015
@@ -0,0 +1,8 @@
+*************************************
+Future Work
+*************************************
+
+* Providing more index types, such as bitmap and HBase index
+* Supporting index on partitioned tables
+* Supporting the backup and restore feature
+* Cost-based query optimization by estimating the query selectivity
\ No newline at end of file

Added: tajo/site/docs/0.10.0/_sources/index/how_to_use.txt
URL: http://svn.apache.org/viewvc/tajo/site/docs/0.10.0/_sources/index/how_to_use.txt?rev=1665114&view=auto
==============================================================================
--- tajo/site/docs/0.10.0/_sources/index/how_to_use.txt (added)
+++ tajo/site/docs/0.10.0/_sources/index/how_to_use.txt Mon Mar  9 02:35:26 2015
@@ -0,0 +1,69 @@
+*************************************
+How to use an index?
+*************************************
+
+-------------------------------------
+1. Create index
+-------------------------------------
+
+The first step in utilizing an index is index creation. You can create an index using SQL (:doc:`/sql_language/ddl`) or the Tajo API (:doc:`/tajo_client_api`). For example, you can create a BST index on the lineitem table by submitting the following SQL to Tajo.
+
+.. code-block:: sql
+
+     default> create index l_orderkey_idx on lineitem (l_orderkey);
+
+If the index is created successfully, you can see the information about that index as follows: ::
+
+  default> \d lineitem
+
+  table name: default.lineitem
+  table path: hdfs://localhost:7020/tpch/lineitem
+  store type: CSV
+  number of rows: unknown
+  volume: 753.9 MB
+  Options:
+  	'text.delimiter'='|'
+
+  schema:
+  l_orderkey	INT8
+  l_partkey	INT8
+  l_suppkey	INT8
+  l_linenumber	INT8
+  l_quantity	FLOAT4
+  l_extendedprice	FLOAT4
+  l_discount	FLOAT4
+  l_tax	FLOAT4
+  l_returnflag	TEXT
+  l_linestatus	TEXT
+  l_shipdate	DATE
+  l_commitdate	DATE
+  l_receiptdate	DATE
+  l_shipinstruct	TEXT
+  l_shipmode	TEXT
+  l_comment	TEXT
+
+
+  Indexes:
+  "l_orderkey_idx" TWO_LEVEL_BIN_TREE (l_orderkey ASC NULLS LAST )
+
+For more information about index creation, please refer to the above links.
+
+-------------------------------------
+2. Enable/disable index scans
+-------------------------------------
+
+When an index is successfully created, you must enable the index scan feature as follows:
+
+.. code-block:: sql
+
+     default> \set INDEX_ENABLED true
+
+If you don't want to use the index scan feature anymore, you can simply disable it as follows:
+
+.. code-block:: sql
+
+     default> \set INDEX_ENABLED false
+
+.. note::
+
+     Once the index scan feature is enabled, Tajo currently always performs the index scan regardless of its efficiency. You should set this option when the expected number of retrieved tuples is sufficiently small.
\ No newline at end of file

Added: tajo/site/docs/0.10.0/_sources/index/types.txt
URL: http://svn.apache.org/viewvc/tajo/site/docs/0.10.0/_sources/index/types.txt?rev=1665114&view=auto
==============================================================================
--- tajo/site/docs/0.10.0/_sources/index/types.txt (added)
+++ tajo/site/docs/0.10.0/_sources/index/types.txt Mon Mar  9 02:35:26 2015
@@ -0,0 +1,7 @@
+*************************************
+Index Types
+*************************************
+
+Currently, Tajo supports only one type of index, ``TWO_LEVEL_BIN_TREE``, or ``BST`` for short. The BST index is a kind of binary search tree which is extended to be permanently stored on disk. It consists of two levels of nodes: a leaf node indexes the keys with the positions of data in an HDFS block, and a root node indexes the keys with the leaf node indices.
+
+When an index scan is started, the query engine first reads the root node and finds the search key. If it finds a leaf node corresponding to the search key, it subsequently finds the search key in that leaf node. Finally, it directly reads a tuple corresponding to the search key from HDFS.
\ No newline at end of file

Added: tajo/site/docs/0.10.0/_sources/index_overview.txt
URL: http://svn.apache.org/viewvc/tajo/site/docs/0.10.0/_sources/index_overview.txt?rev=1665114&view=auto
==============================================================================
--- tajo/site/docs/0.10.0/_sources/index_overview.txt (added)
+++ tajo/site/docs/0.10.0/_sources/index_overview.txt Mon Mar  9 02:35:26 2015
@@ -0,0 +1,20 @@
+*****************************
+Index (Experimental Feature)
+*****************************
+
+An index is a data structure that is used for efficient query processing. Using an index, the Tajo query engine can directly retrieve search values.
+
+This is still an experimental feature. In order to use indexes, you must check out the source code of the ``index_support`` branch::
+
+  git clone -b index_support https://git-wip-us.apache.org/repos/asf/tajo.git tajo-index
+
+For the source code build, please refer to :doc:`getting_started`.
+
+The following sections describe the supported index types, query execution with an index, and future work.
+
+.. toctree::
+      :maxdepth: 1
+
+      index/types
+      index/how_to_use
+      index/future_work
\ No newline at end of file

Added: tajo/site/docs/0.10.0/_sources/introduction.txt
URL: http://svn.apache.org/viewvc/tajo/site/docs/0.10.0/_sources/introduction.txt?rev=1665114&view=auto
==============================================================================
--- tajo/site/docs/0.10.0/_sources/introduction.txt (added)
+++ tajo/site/docs/0.10.0/_sources/introduction.txt Mon Mar  9 02:35:26 2015
@@ -0,0 +1,13 @@
+***************
+Introduction
+***************
+
+The main goal of the Apache Tajo project is to build an advanced open source
+data warehouse system on Hadoop for processing web-scale data sets.
+Basically, Tajo provides standard SQL as its query language.
+Tajo is designed for both interactive and batch queries on data sets
+stored on HDFS and other data sources. Without hurting query response
+times, Tajo provides the fault tolerance and dynamic load balancing which
+are necessary for long-running queries. Tajo employs cost-based and
+progressive query optimization techniques for reoptimizing running
+queries in order to avoid the worst query plans.
\ No newline at end of file

Added: tajo/site/docs/0.10.0/_sources/jdbc_driver.txt
URL: http://svn.apache.org/viewvc/tajo/site/docs/0.10.0/_sources/jdbc_driver.txt?rev=1665114&view=auto
==============================================================================
--- tajo/site/docs/0.10.0/_sources/jdbc_driver.txt (added)
+++ tajo/site/docs/0.10.0/_sources/jdbc_driver.txt Mon Mar  9 02:35:26 2015
@@ -0,0 +1,113 @@
+*************************************
+Tajo JDBC Driver
+*************************************
+
+Apache Tajo™ provides a JDBC driver
+which enables Java applications to easily access Apache Tajo in an RDBMS-like manner.
+In this section, we explain how to get the JDBC driver and show an example client.
+
+How to get JDBC driver
+=======================
+
+From Binary Distribution
+--------------------------------
+
+The Tajo binary distribution provides the JDBC JAR file and its dependent JAR files.
+Those files are located in ``${TAJO_HOME}/share/jdbc-dist/``.
+
+
+From Building Source Code
+--------------------------------
+
+You can build Tajo from the source code and then get JAR files as follows:
+
+.. code-block:: bash
+
+  $ tar xzvf tajo-x.y.z-src.tar.gz
+  $ mvn clean package -DskipTests -Pdist -Dtar
+  $ ls -l tajo-dist/target/tajo-x.y.z/share/jdbc-dist
+
+
+Setting the CLASSPATH
+=======================
+
+In order to use the JDBC driver, you should add the jar files included in
+``tajo-dist/target/tajo-x.y.z/share/jdbc-dist`` to your ``CLASSPATH``.
+In addition, you should add the Hadoop classpath to your ``CLASSPATH``.
+So, ``CLASSPATH`` will be set as follows:
+
+.. code-block:: bash
+
+  CLASSPATH=path/to/tajo-jdbc/*:path/to/tajo-site.xml:path/to/core-site.xml:path/to/hdfs-site.xml
+
+.. note::
+
+  You must add the locations which include Tajo config files (i.e., ``tajo-site.xml``) and
+  Hadoop config files (i.e., ``core-site.xml`` and ``hdfs-site.xml``) to your ``CLASSPATH``.
+
+
+An Example JDBC Client
+=======================
+
+The JDBC driver class name is ``org.apache.tajo.jdbc.TajoDriver``.
+You can load the driver with ``Class.forName("org.apache.tajo.jdbc.TajoDriver")``.
+The connection url should be ``jdbc:tajo://<TajoMaster hostname>:<TajoMaster client rpc port>/<database name>``.
+The default TajoMaster client rpc port is ``26002``.
+If you want to change the listening port, please refer to :doc:`/configuration/cluster_setup`.
+
+.. note::
+  
+  Currently, Tajo does not support the concept of databases and namespaces.
+  All tables are contained in the ``default`` database, so you don't need to specify any database name.
+
+The following shows an example of JDBC Client.
+
+.. code-block:: java
+
+  import java.sql.Connection;
+  import java.sql.ResultSet;
+  import java.sql.Statement;
+  import java.sql.DriverManager;
+
+  public class TajoJDBCClient {
+    
+    ....
+
+    public static void main(String[] args) throws Exception {
+
+      try {
+        Class.forName("org.apache.tajo.jdbc.TajoDriver");
+      } catch (ClassNotFoundException e) {
+        // fill your handling code
+      }
+
+      Connection conn = DriverManager.getConnection("jdbc:tajo://127.0.0.1:26002/default");
+
+      Statement stmt = null;
+      ResultSet rs = null;
+      try {
+        stmt = conn.createStatement();
+        rs = stmt.executeQuery("select * from table1");
+        while (rs.next()) {
+          System.out.println(rs.getString(1) + "," + rs.getString(3));
+        }
+      } finally {
+        if (rs != null) rs.close();
+        if (stmt != null) stmt.close();
+        if (conn != null) conn.close();
+      }
+    }
+  }
+
+
+FAQ
+===========================================
+
+java.nio.channels.UnresolvedAddressException
+--------------------------------------------
+
+When retrieving the final result, the Tajo JDBC driver tries to access HDFS data nodes.
+So, network access between the JDBC client and the HDFS data nodes must be available.
+In many cases, an HDFS cluster is built on a private network which uses private hostnames.
+So, the hostnames must be resolvable on the JDBC client side.
+

Added: tajo/site/docs/0.10.0/_sources/partitioning/column_partitioning.txt
URL: http://svn.apache.org/viewvc/tajo/site/docs/0.10.0/_sources/partitioning/column_partitioning.txt?rev=1665114&view=auto
==============================================================================
--- tajo/site/docs/0.10.0/_sources/partitioning/column_partitioning.txt (added)
+++ tajo/site/docs/0.10.0/_sources/partitioning/column_partitioning.txt Mon Mar  9 02:35:26 2015
@@ -0,0 +1,52 @@
+*********************************
+Column Partitioning
+*********************************
+
+The column table partition is designed to support the partition of Apache Hive™.
+
+================================================
+How to Create a Column Partitioned Table
+================================================
+
+You can create a partitioned table by using the ``PARTITION BY`` clause. For a column partitioned table, you should use
+the ``PARTITION BY COLUMN`` clause with partition keys.
+
+For example, assume there is a table ``orders`` composed of the following schema. ::
+
+  id          INT,
+  item_name   TEXT,
+  price       FLOAT
+
+Also, assume that you want to use ``order_date TEXT`` and ``ship_date TEXT`` as the partition keys.
+Then, you should create a table as follows:
+
+.. code-block:: sql
+
+  CREATE TABLE orders (
+    id INT,
+    item_name TEXT,
+    price FLOAT
+  ) PARTITION BY COLUMN (order_date TEXT, ship_date TEXT);
+
+==================================================
+Partition Pruning on Column Partitioned Tables
+==================================================
+
+The following predicates in the ``WHERE`` clause can be used to prune unqualified column partitions during the
+query planning phase, without processing them (see the example after this list).
+
+* ``=``
+* ``<>``
+* ``>``
+* ``<``
+* ``>=``
+* ``<=``
+* LIKE predicates with a leading wild-card character
+* IN list predicates
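+
+For instance, a hedged sketch of a pruning-friendly query against the ``orders`` table defined above (the date value is a placeholder) looks like this:
+
+.. code-block:: sql
+
+  -- only the column partitions with order_date = '2015-03-01' are scanned;
+  -- the remaining partitions are pruned during query planning
+  SELECT id, item_name, price
+  FROM orders
+  WHERE order_date = '2015-03-01';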
+
+==================================================
+Compatibility Issues with Apache Hive™
+==================================================
+
+If partitioned tables of Hive are created as external tables in Tajo, Tajo can process the Hive partitioned tables directly.
+There are no known compatibility issues yet.
\ No newline at end of file

Added: tajo/site/docs/0.10.0/_sources/partitioning/hash_partitioning.txt
URL: http://svn.apache.org/viewvc/tajo/site/docs/0.10.0/_sources/partitioning/hash_partitioning.txt?rev=1665114&view=auto
==============================================================================
--- tajo/site/docs/0.10.0/_sources/partitioning/hash_partitioning.txt (added)
+++ tajo/site/docs/0.10.0/_sources/partitioning/hash_partitioning.txt Mon Mar  9 02:35:26 2015
@@ -0,0 +1,5 @@
+********************************
+Hash Partitioning
+********************************
+
+.. todo::
\ No newline at end of file

Added: tajo/site/docs/0.10.0/_sources/partitioning/intro_to_partitioning.txt
URL: http://svn.apache.org/viewvc/tajo/site/docs/0.10.0/_sources/partitioning/intro_to_partitioning.txt?rev=1665114&view=auto
==============================================================================
--- tajo/site/docs/0.10.0/_sources/partitioning/intro_to_partitioning.txt (added)
+++ tajo/site/docs/0.10.0/_sources/partitioning/intro_to_partitioning.txt Mon Mar  9 02:35:26 2015
@@ -0,0 +1,15 @@
+**************************************
+Introduction to Partitioning
+**************************************
+
+Table partitioning provides two benefits: easy table management and data pruning by partition keys.
+Currently, Apache Tajo only provides Apache Hive-compatible column partitioning.
+
+=========================
+Partitioning Methods
+=========================
+
+Tajo provides the following partitioning methods:
+ * Column Partitioning
+ * Range Partitioning (TODO)
+ * Hash Partitioning (TODO)
\ No newline at end of file

Added: tajo/site/docs/0.10.0/_sources/partitioning/range_partitioning.txt
URL: http://svn.apache.org/viewvc/tajo/site/docs/0.10.0/_sources/partitioning/range_partitioning.txt?rev=1665114&view=auto
==============================================================================
--- tajo/site/docs/0.10.0/_sources/partitioning/range_partitioning.txt (added)
+++ tajo/site/docs/0.10.0/_sources/partitioning/range_partitioning.txt Mon Mar  9 02:35:26 2015
@@ -0,0 +1,5 @@
+***************************
+Range Partitioning
+***************************
+
+.. todo::
\ No newline at end of file

Added: tajo/site/docs/0.10.0/_sources/sql_language.txt
URL: http://svn.apache.org/viewvc/tajo/site/docs/0.10.0/_sources/sql_language.txt?rev=1665114&view=auto
==============================================================================
--- tajo/site/docs/0.10.0/_sources/sql_language.txt (added)
+++ tajo/site/docs/0.10.0/_sources/sql_language.txt Mon Mar  9 02:35:26 2015
@@ -0,0 +1,13 @@
+************
+SQL Language
+************
+
+.. toctree::
+    :maxdepth: 1
+
+    sql_language/data_model
+    sql_language/ddl
+    sql_language/insert
+    sql_language/queries    
+    sql_language/sql_expression
+    sql_language/predicates
\ No newline at end of file

Added: tajo/site/docs/0.10.0/_sources/sql_language/data_model.txt
URL: http://svn.apache.org/viewvc/tajo/site/docs/0.10.0/_sources/sql_language/data_model.txt?rev=1665114&view=auto
==============================================================================
--- tajo/site/docs/0.10.0/_sources/sql_language/data_model.txt (added)
+++ tajo/site/docs/0.10.0/_sources/sql_language/data_model.txt Mon Mar  9 02:35:26 2015
@@ -0,0 +1,66 @@
+**********
+Data Model
+**********
+
+===============
+Data Types
+===============
+
++-----------+----------------+----------------------------+-------------+---------------------------------------------------+--------------------------------------------------------------------------+ 
+| Support   | SQL Type Name  |  Alias                     | Size (byte) | Description                                       | Range                                                                    |
++===========+================+============================+=============+===================================================+==========================================================================+ 
+| O         | boolean        |  bool                      |  1          |                                                   | true/false                                                               |
++-----------+----------------+----------------------------+-------------+---------------------------------------------------+--------------------------------------------------------------------------+  
+|           | bit            |                            |  1          |                                                   | 1/0                                                                      | 
++-----------+----------------+----------------------------+-------------+---------------------------------------------------+--------------------------------------------------------------------------+ 
+|           | varbit         |  bit varying               |             |                                                   |                                                                          |
++-----------+----------------+----------------------------+-------------+---------------------------------------------------+--------------------------------------------------------------------------+ 
+| O         | smallint       |  tinyint, int2             |  2          | small-range integer value                         | -2^15 (-32,768) to 2^15 (32,767)                                         |
++-----------+----------------+----------------------------+-------------+---------------------------------------------------+--------------------------------------------------------------------------+ 
+| O         | integer        |  int, int4                 |  4          | integer value                                     | -2^31 (-2,147,483,648) to 2^31 - 1 (2,147,483,647)                       |
++-----------+----------------+----------------------------+-------------+---------------------------------------------------+--------------------------------------------------------------------------+ 
+| O         | bigint         |  int8                      |  8          | larger range integer value                        | -2^63 (-9,223,372,036,854,775,808) to 2^63-1 (9,223,372,036,854,775,807) |
++-----------+----------------+----------------------------+-------------+---------------------------------------------------+--------------------------------------------------------------------------+ 
+| O         | real           |  float4                    |  4          | variable-precision, inexact, real number value    | -3.4028235E+38 to 3.4028235E+38 (6 decimal digits precision)             |
++-----------+----------------+----------------------------+-------------+---------------------------------------------------+--------------------------------------------------------------------------+ 
+| O         | float[(n)]     |  float4                    |  4 or 8     | variable-precision, inexact, real number value    |                                                                          |
++-----------+----------------+----------------------------+-------------+---------------------------------------------------+--------------------------------------------------------------------------+ 
+| O         | double         |  float8, double precision  |  8          | variable-precision, inexact, real number value    | 1.7E-308 to 1.7E+308 (15 decimal digits precision)                       |
++-----------+----------------+----------------------------+-------------+---------------------------------------------------+--------------------------------------------------------------------------+ 
+|           | number         |  decimal                   |             |                                                   |                                                                          |
++-----------+----------------+----------------------------+-------------+---------------------------------------------------+--------------------------------------------------------------------------+ 
+|           | char[(n)]      |  character                 |             |                                                   |                                                                          |
++-----------+----------------+----------------------------+-------------+---------------------------------------------------+--------------------------------------------------------------------------+ 
+|           | varchar[(n)]   |  character varying         |             |                                                   |                                                                          |
++-----------+----------------+----------------------------+-------------+---------------------------------------------------+--------------------------------------------------------------------------+ 
+| O         | text           |  text                      |             | variable-length unicode text                      |                                                                          |
++-----------+----------------+----------------------------+-------------+---------------------------------------------------+--------------------------------------------------------------------------+ 
+|           | binary         |  binary                    |             |                                                   |                                                                          |
++-----------+----------------+----------------------------+-------------+---------------------------------------------------+--------------------------------------------------------------------------+ 
+|           | varbinary[(n)] |  binary varying            |             |                                                   |                                                                          |
++-----------+----------------+----------------------------+-------------+---------------------------------------------------+--------------------------------------------------------------------------+ 
+| O         | blob           |  bytea                     |             | variable-length binary string                     |                                                                          |
++-----------+----------------+----------------------------+-------------+---------------------------------------------------+--------------------------------------------------------------------------+ 
+| O         | date           |                            |             |                                                   |                                                                          | 
++-----------+----------------+----------------------------+-------------+---------------------------------------------------+--------------------------------------------------------------------------+ 
+| O         | time           |                            |             |                                                   |                                                                          | 
++-----------+----------------+----------------------------+-------------+---------------------------------------------------+--------------------------------------------------------------------------+ 
+|           | timetz         |  time with time zone       |             |                                                   |                                                                          |
++-----------+----------------+----------------------------+-------------+---------------------------------------------------+--------------------------------------------------------------------------+ 
+| O         | timestamp      |                            |             |                                                   |                                                                          |
++-----------+----------------+----------------------------+-------------+---------------------------------------------------+--------------------------------------------------------------------------+ 
+|           | timestamptz    |                            |             |                                                   |                                                                          |
++-----------+----------------+----------------------------+-------------+---------------------------------------------------+--------------------------------------------------------------------------+ 
+| O         | inet4          |                            | 4           | IPv4 address                                      |                                                                          |
++-----------+----------------+----------------------------+-------------+---------------------------------------------------+--------------------------------------------------------------------------+
+
+-----------------------------------------
+Using real number value (real and double)
+-----------------------------------------
+
+The real and double data types are mapped to the Java primitives float and double, respectively. The Java primitives float and double follow the IEEE 754 specification, so these types correctly match the SQL standard data types.
+
++ float[( n )] is mapped to either float or double according to the given length n. If n is specified, it must be between 1 and 53. The default value of n is 53.
++ If 1 <= n <= 24, a value is mapped to float (6 decimal digits of precision).
++ If 25 <= n <= 53, a value is mapped to double (15 decimal digits of precision).
++ Do not use approximate real number columns in the WHERE clause for exact-match comparisons, especially with the = and <> operators; the > and < comparisons work well. See the sketch below.
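+
+For example, an exact-match comparison on an approximate column may silently miss rows (a minimal sketch; the ``samples`` table and its ``measure`` column are hypothetical):
+
+.. code-block:: sql
+
+  -- may return no rows even though a stored value prints as 0.3
+  SELECT * FROM samples WHERE measure = 0.3;
+
+  -- a range comparison is a safer alternative
+  SELECT * FROM samples WHERE measure > 0.2999 AND measure < 0.3001;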
\ No newline at end of file

Added: tajo/site/docs/0.10.0/_sources/sql_language/ddl.txt
URL: http://svn.apache.org/viewvc/tajo/site/docs/0.10.0/_sources/sql_language/ddl.txt?rev=1665114&view=auto
==============================================================================
--- tajo/site/docs/0.10.0/_sources/sql_language/ddl.txt (added)
+++ tajo/site/docs/0.10.0/_sources/sql_language/ddl.txt Mon Mar  9 02:35:26 2015
@@ -0,0 +1,109 @@
+************************
+Data Definition Language
+************************
+
+========================
+CREATE DATABASE
+========================
+
+*Synopsis*
+
+.. code-block:: sql
+
+  CREATE DATABASE [IF NOT EXISTS] <database_name> 
+
+``IF NOT EXISTS`` allows the ``CREATE DATABASE`` statement to avoid an error which occurs when the database already exists.
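+
+For example (a minimal sketch; the database name is hypothetical):
+
+.. code-block:: sql
+
+  CREATE DATABASE IF NOT EXISTS db1;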
+
+========================
+DROP DATABASE
+========================
+
+*Synopsis*
+
+.. code-block:: sql
+
+  DROP DATABASE [IF EXISTS] <database_name>
+
+``IF EXISTS`` allows the ``DROP DATABASE`` statement to avoid an error which occurs when the database does not exist.
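+
+For example (a minimal sketch; the database name is hypothetical):
+
+.. code-block:: sql
+
+  DROP DATABASE IF EXISTS db1;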
+
+========================
+CREATE TABLE
+========================
+
+*Synopsis*
+
+.. code-block:: sql
+
+  CREATE TABLE [IF NOT EXISTS] <table_name> [(<column_name> <data_type>, ... )]
+  [using <storage_type> [with (<key> = <value>, ...)]] [AS <select_statement>]
+
+  CREATE EXTERNAL TABLE [IF NOT EXISTS] <table_name> (<column_name> <data_type>, ... )
+  using <storage_type> [with (<key> = <value>, ...)] LOCATION '<path>'
+
+``IF NOT EXISTS`` allows the ``CREATE [EXTERNAL] TABLE`` statement to avoid an error which occurs when the table already exists.
+
+------------------------
+ Compression
+------------------------
+
+If you want to add an external table that contains compressed data, you should give the ``compression.codec`` parameter to the CREATE TABLE statement.
+
+.. code-block:: sql
+
+  create EXTERNAL table lineitem (
+  L_ORDERKEY bigint, 
+  L_PARTKEY bigint, 
+  ...
+  L_COMMENT text) 
+
+  USING csv WITH ('text.delimiter'='|','compression.codec'='org.apache.hadoop.io.compress.DeflateCodec')
+  LOCATION 'hdfs://localhost:9010/tajo/warehouse/lineitem_100_snappy';
+
+`compression.codec` parameter can have one of the following compression codecs:
+  * org.apache.hadoop.io.compress.BZip2Codec
+  * org.apache.hadoop.io.compress.DeflateCodec
+  * org.apache.hadoop.io.compress.GzipCodec
+  * org.apache.hadoop.io.compress.SnappyCodec 
+
+========================
+ DROP TABLE
+========================
+
+*Synopsis*
+
+.. code-block:: sql
+
+  DROP TABLE [IF EXISTS] <table_name> [PURGE]
+
+``IF EXISTS`` allows the ``DROP TABLE`` statement to avoid an error which occurs when the table does not exist. The ``DROP TABLE`` statement removes a table from the Tajo catalog, but it does not remove the contents. If the ``PURGE`` option is given, the ``DROP TABLE`` statement will eliminate the entry in the catalog as well as the contents.
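+
+For example (a minimal sketch; the table name is hypothetical):
+
+.. code-block:: sql
+
+  -- removes only the catalog entry and keeps the table contents
+  DROP TABLE IF EXISTS table1;
+
+  -- removes the catalog entry as well as the table contents
+  DROP TABLE IF EXISTS table1 PURGE;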
+
+========================
+ CREATE INDEX
+========================
+
+*Synopsis*
+
+.. code-block:: sql
+
+  CREATE INDEX [ name ] ON table_name [ USING method ]
+  ( { column_name | ( expression ) } [ ASC | DESC ] [ NULLS { FIRST | LAST } ] [, ...] )
+  [ WHERE predicate ]
+
+------------------------
+ Index method
+------------------------
+
+Currently, Tajo supports only one type of index.
+
+Index methods:
+  * TWO_LEVEL_BIN_TREE: This method is used by default in Tajo. For more information about its structure, please refer to :doc:`/index/types`.
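+
+For example, the following sketch creates an index according to the synopsis above (the index name and the filter predicate are hypothetical):
+
+.. code-block:: sql
+
+  CREATE INDEX l_orderkey_idx ON lineitem (l_orderkey ASC NULLS LAST) WHERE l_orderkey > 0;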
+
+========================
+ DROP INDEX
+========================
+
+*Synopsis*
+
+.. code-block:: sql
+
+  DROP INDEX name
\ No newline at end of file

Added: tajo/site/docs/0.10.0/_sources/sql_language/insert.txt
URL: http://svn.apache.org/viewvc/tajo/site/docs/0.10.0/_sources/sql_language/insert.txt?rev=1665114&view=auto
==============================================================================
--- tajo/site/docs/0.10.0/_sources/sql_language/insert.txt (added)
+++ tajo/site/docs/0.10.0/_sources/sql_language/insert.txt Mon Mar  9 02:35:26 2015
@@ -0,0 +1,26 @@
+*************************
+INSERT (OVERWRITE) INTO
+*************************
+
+The INSERT OVERWRITE statement overwrites the data of an existing table or the data in a given directory. Tajo's INSERT OVERWRITE statement follows the ``INSERT INTO SELECT`` statement of SQL. The examples are as follows:
+
+.. code-block:: sql
+
+  create table t1 (col1 int8, col2 int4, col3 float8);
+
+  -- when a target table schema and output schema are equivalent to each other
+  INSERT OVERWRITE INTO t1 SELECT l_orderkey, l_partkey, l_quantity FROM lineitem;
+  -- or
+  INSERT OVERWRITE INTO t1 SELECT * FROM lineitem;
+
+  -- when the output schema is smaller than the target table schema
+  INSERT OVERWRITE INTO t1 SELECT l_orderkey FROM lineitem;
+
+  -- when you want to specify certain target columns
+  INSERT OVERWRITE INTO t1 (col1, col3) SELECT l_orderkey, l_quantity FROM lineitem;
+
+In addition, the INSERT OVERWRITE statement can overwrite the data in a specific directory as well as table data.
+
+.. code-block:: sql
+
+  INSERT OVERWRITE INTO LOCATION '/dir/subdir' SELECT l_orderkey, l_quantity FROM lineitem;
\ No newline at end of file

Added: tajo/site/docs/0.10.0/_sources/sql_language/predicates.txt
URL: http://svn.apache.org/viewvc/tajo/site/docs/0.10.0/_sources/sql_language/predicates.txt?rev=1665114&view=auto
==============================================================================
--- tajo/site/docs/0.10.0/_sources/sql_language/predicates.txt (added)
+++ tajo/site/docs/0.10.0/_sources/sql_language/predicates.txt Mon Mar  9 02:35:26 2015
@@ -0,0 +1,159 @@
+*****************
+ Predicates
+*****************
+
+=====================
+ IN Predicate
+=====================
+
+IN predicate provides row and array comparison.
+
+*Synopsis*
+
+.. code-block:: sql
+
+  column_reference IN (val1, val2, ..., valN)
+  column_reference NOT IN (val1, val2, ..., valN)
+
+
+Examples are as follows:
+
+.. code-block:: sql
+
+  -- this statement lists all the records where the col1 value is 1, 2, or 3:
+  SELECT col1, col2 FROM table1 WHERE col1 IN (1, 2, 3);
+
+  -- this statement lists all the records where the col1 value is neither 1, 2, nor 3:
+  SELECT col1, col2 FROM table1 WHERE col1 NOT IN (1, 2, 3);
+
+You can also use the IN predicate on the text data domain as follows:
+
+.. code-block:: sql
+
+  SELECT col1, col2 FROM table1 WHERE col2 IN ('tajo', 'hadoop');
+
+  SELECT col1, col2 FROM table1 WHERE col2 NOT IN ('tajo', 'hadoop');
+
+
+==================================
+String Pattern Matching Predicates
+==================================
+
+--------------------
+LIKE
+--------------------
+
+The LIKE operator returns true or false depending on whether its pattern matches the given string. An underscore (``_``) in the pattern matches any single character. A percent sign (``%``) matches any sequence of zero or more characters.
+
+*Synopsis*
+
+.. code-block:: sql
+
+  string LIKE pattern
+  string NOT LIKE pattern
+
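+For example (a minimal sketch over a hypothetical ``table1``):
+
+.. code-block:: sql
+
+  -- matches any value starting with 'ta', such as 'tajo'
+  SELECT col1 FROM table1 WHERE col1 LIKE 'ta%';
+
+  -- matches any four-character value ending with 'ajo', such as 'tajo'
+  SELECT col1 FROM table1 WHERE col1 LIKE '_ajo';
+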
+
+--------------------
+ILIKE
+--------------------
+
+ILIKE is the same as LIKE, but it is case insensitive. It is not in the SQL standard; this operator is borrowed from PostgreSQL.
+
+*Synopsis*
+
+.. code-block:: sql
+
+  string ILIKE pattern
+  string NOT ILIKE pattern
+
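+For example (a minimal sketch over a hypothetical ``table1``):
+
+.. code-block:: sql
+
+  -- matches 'tajo', 'Tajo', 'TAJO', and so on
+  SELECT col1 FROM table1 WHERE col1 ILIKE 'ta%';
+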
+
+--------------------
+SIMILAR TO
+--------------------
+
+*Synopsis*
+
+.. code-block:: sql
+
+  string SIMILAR TO pattern
+  string NOT SIMILAR TO pattern
+
+It returns true or false depending on whether its pattern matches the given string. Also like LIKE, ``SIMILAR TO`` uses ``_`` and ``%`` as metacharacters denoting any single character and any string, respectively.
+
+In addition to these metacharacters borrowed from LIKE, ``SIMILAR TO`` supports more powerful pattern-matching metacharacters borrowed from regular expressions:
+
++------------------------+-------------------------------------------------------------------------------------------+
+| metacharacter          | description                                                                               |
++========================+===========================================================================================+
+| &#124;                 | denotes alternation (either of two alternatives).                                         |
++------------------------+-------------------------------------------------------------------------------------------+
+| *                      | denotes repetition of the previous item zero or more times.                               |
++------------------------+-------------------------------------------------------------------------------------------+
+| +                      | denotes repetition of the previous item one or more times.                                |
++------------------------+-------------------------------------------------------------------------------------------+
+| ?                      | denotes repetition of the previous item zero or one time.                                 |
++------------------------+-------------------------------------------------------------------------------------------+
+| {m}                    | denotes repetition of the previous item exactly m times.                                  |
++------------------------+-------------------------------------------------------------------------------------------+
+| {m,}                   | denotes repetition of the previous item m or more times.                                  |
++------------------------+-------------------------------------------------------------------------------------------+
+| {m,n}                  | denotes repetition of the previous item at least m and not more than n times.             |
++------------------------+-------------------------------------------------------------------------------------------+
+| []                     | A bracket expression specifies a character class, just as in POSIX regular expressions.   |
++------------------------+-------------------------------------------------------------------------------------------+
+| ()                     | Parentheses can be used to group items into a single logical item.                        |
++------------------------+-------------------------------------------------------------------------------------------+
+
+Note that ``.`` is not used as a metacharacter in the ``SIMILAR TO`` operator.
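+
+For example (a minimal sketch over a hypothetical ``table1``):
+
+.. code-block:: sql
+
+  -- true only when col1 is exactly 'tajo' or 'hadoop'
+  SELECT col1 FROM table1 WHERE col1 SIMILAR TO '(tajo|hadoop)';
+
+  -- true when col1 consists of three lowercase letters followed by 'db'
+  SELECT col1 FROM table1 WHERE col1 SIMILAR TO '[a-z]{3}db';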
+
+---------------------
+Regular expressions
+---------------------
+
+Regular expressions provide a very powerful means for string pattern matching. Currently, Tajo's regular expressions are based on Java-style regular expressions instead of POSIX regular expressions. The main difference between the Java style and the POSIX style is the character class syntax.
+
+*Synopsis*
+
+.. code-block:: sql
+
+  string ~ pattern
+  string !~ pattern
+
+  string ~* pattern
+  string !~* pattern
+
++----------+---------------------------------------------------------------------------------------------------+
+| operator | Description                                                                                       |
++==========+===================================================================================================+
+| ~        | It returns true if the given regular expression matches the string. Otherwise, it returns false.  |
++----------+---------------------------------------------------------------------------------------------------+
+| !~       | It returns false if the given regular expression matches the string. Otherwise, it returns true.  |
++----------+---------------------------------------------------------------------------------------------------+
+| ~*       | It is the same as '~', but it is case insensitive.                                                |
++----------+---------------------------------------------------------------------------------------------------+
+| !~*      | It is the same as '!~', but it is case insensitive.                                               |
++----------+---------------------------------------------------------------------------------------------------+
+
+Here are examples:
+
+.. code-block:: sql
+
+  'abc'   ~   '.*c'               true
+  'abc'   ~   'c'                 false
+  'aaabc' ~   '([a-z]){3}bc'      true
+  'abc'   ~*  '.*C'               true
+  'abc'   !~* 'B.*'               true
+
+The regular expression operators are not in the SQL standard; they are borrowed from PostgreSQL.
+
+*Synopsis for REGEXP and RLIKE operators*
+
+.. code-block:: sql
+
+  string REGEXP pattern
+  string NOT REGEXP pattern
+
+  string RLIKE pattern
+  string NOT RLIKE pattern
+
+However, REGEXP and RLIKE do not have case-insensitive variants.
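+
+For example (a minimal sketch over a hypothetical ``table1``):
+
+.. code-block:: sql
+
+  -- true for values ending with 'jo', such as 'tajo'
+  SELECT col1 FROM table1 WHERE col1 REGEXP '.*jo';
+
+  -- RLIKE behaves the same way
+  SELECT col1 FROM table1 WHERE col1 RLIKE '.*jo';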
\ No newline at end of file

Added: tajo/site/docs/0.10.0/_sources/sql_language/queries.txt
URL: http://svn.apache.org/viewvc/tajo/site/docs/0.10.0/_sources/sql_language/queries.txt?rev=1665114&view=auto
==============================================================================
--- tajo/site/docs/0.10.0/_sources/sql_language/queries.txt (added)
+++ tajo/site/docs/0.10.0/_sources/sql_language/queries.txt Mon Mar  9 02:35:26 2015
@@ -0,0 +1,256 @@
+**************************
+Queries
+**************************
+
+=====================
+Overview
+=====================
+
+*Synopsis*
+
+.. code-block:: sql
+
+  SELECT [distinct [all]] * | <expression> [[AS] <alias>] [, ...]
+    [FROM <table reference> [[AS] <table alias name>] [, ...]]
+    [WHERE <condition>]
+    [GROUP BY <expression> [, ...]]
+    [HAVING <condition>]
+    [ORDER BY <expression> [ASC|DESC] [NULLS FIRST|NULLS LAST] [, ...]]
+
+
+
+=====================
+From Clause
+=====================
+
+*Synopsis*
+
+.. code-block:: sql
+
+  [FROM <table reference> [[AS] <table alias name>] [, ...]]
+
+
+The ``FROM`` clause specifies one or more other tables given in a comma-separated table reference list.
+A table reference can be a relation name, a subquery, a table join, or a complex combination of them.
+
+-----------------------
+Table and Table Aliases
+-----------------------
+
+A temporary name can be given to a table or a complex table reference, which can then be used
+to refer to the derived table in the rest of the query. This is called a table alias.
+
+To create a table alias, please use ``AS``:
+
+.. code-block:: sql
+
+  FROM table_reference AS alias
+
+or
+
+.. code-block:: sql
+
+  FROM table_reference alias
+
+The ``AS`` keyword can be omitted, and *Alias* can be any identifier.
+
+A typical application of table aliases is to give short names to long table references. For example:
+
+.. code-block:: sql
+
+  SELECT * FROM long_table_name_1234 s JOIN another_long_table_name_5678 a ON s.id = a.num;
+
+-------------
+Joined Tables
+-------------
+
+Tajo supports all kinds of join types.
+
+Join Types
+~~~~~~~~~~
+
+Cross Join
+^^^^^^^^^^
+
+.. code-block:: sql
+
+  FROM T1 CROSS JOIN T2
+
+Cross join, also called *Cartesian product*, results in every possible combination of rows from T1 and T2.
+
+``FROM T1 CROSS JOIN T2`` is equivalent to ``FROM T1, T2``.
+
+Qualified joins
+^^^^^^^^^^^^^^^
+
+Qualified joins have join conditions, which are specified either implicitly or explicitly. Inner, outer, and natural joins are all qualified joins.
+Except for natural joins, the ``ON`` or ``USING`` clause in each join is used to specify the join condition.
+A join condition must include at least one boolean expression, and it can also include plain filter conditions.
+
+**Inner Join**
+
+.. code-block:: sql
+
+  T1 [INNER] JOIN T2 ON boolean_expression
+  T1 [INNER] JOIN T2 USING (join column list)
+
+``INNER`` is the default join type, so the ``INNER`` keyword can be omitted for inner joins.
+
+**Outer Join**
+
+.. code-block:: sql
+
+  T1 (LEFT|RIGHT|FULL) OUTER JOIN T2 ON boolean_expression
+  T1 (LEFT|RIGHT|FULL) OUTER JOIN T2 USING (join column list)
+
+One of ``LEFT``, ``RIGHT``, or ``FULL`` must be specified for outer joins. 
+The behavior of a join condition in an outer join depends on which table references the condition involves.
+To learn about outer join behavior in more detail, please refer to 
+`Advanced outer join constructs <http://www.ibm.com/developerworks/data/library/techarticle/purcell/0201purcell.html>`_.
+
+**Natural Join**
+
+.. code-block:: sql
+
+  T1 NATURAL JOIN T2
+
+``NATURAL`` is a short form of ``USING``. It forms a ``USING`` list consisting of all common column names that appear in 
+both join tables. These common columns appear only once in the output table. If there are no common columns, 
+``NATURAL`` behaves like ``CROSS JOIN``.
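+
+For example, the following sketch assumes two hypothetical tables ``orders`` and ``customers`` whose only common column is ``customer_id``:
+
+.. code-block:: sql
+
+  -- equivalent to joining with USING (customer_id); customer_id appears only once in the output
+  SELECT * FROM orders NATURAL JOIN customers;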
+
+**Subqueries**
+
+Subqueries allow users to specify a derived table. A subquery requires enclosing a SQL statement in parentheses and giving it an alias name. 
+For example:
+
+.. code-block:: sql
+
+  FROM (SELECT * FROM table1) AS alias_name
+
+=====================
+Where Clause
+=====================
+
+The syntax of the WHERE Clause is
+
+*Synopsis*
+
+.. code-block:: sql
+
+  WHERE search_condition
+
+``search_condition`` can be any boolean expression. 
+For additional predicates, please refer to :doc:`/sql_language/predicates`.
+
+==========================
+Groupby and Having Clauses
+==========================
+
+*Synopsis*
+
+.. code-block:: sql
+
+  SELECT select_list
+      FROM ...
+      [WHERE ...]
+      GROUP BY grouping_column_reference [, grouping_column_reference]...
+      [HAVING boolean_expression]
+
+The rows that pass the ``WHERE`` filter may be subject to grouping, as specified by the ``GROUP BY`` clause.
+Grouping combines a set of rows having common values into one group, and then computes aggregation functions over the rows in each group. The ``HAVING`` clause can be used only together with the ``GROUP BY`` clause. It eliminates the unqualified result rows of grouping.
+
+``grouping_column_reference`` can be a column reference or a complex expression including scalar functions and arithmetic operations.
+
+.. code-block:: sql
+
+  SELECT l_orderkey, SUM(l_quantity) AS quantity FROM lineitem GROUP BY l_orderkey;
+
+  SELECT substr(l_shipdate,1,4) as year, SUM(l_orderkey) AS total2 FROM lineitem GROUP BY substr(l_shipdate,1,4);
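+
+A ``HAVING`` clause can then filter the grouped rows. For example (a minimal sketch over the same ``lineitem`` table):
+
+.. code-block:: sql
+
+  -- keeps only the orders whose total quantity exceeds 100
+  SELECT l_orderkey, SUM(l_quantity) AS quantity
+  FROM lineitem
+  GROUP BY l_orderkey
+  HAVING SUM(l_quantity) > 100;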
+
+If a SQL statement includes a ``GROUP BY`` clause, the expressions in the select list must be either grouping column references or aggregation functions. For example, the following query is not allowed because ``l_orderkey`` does not occur in the ``GROUP BY`` clause.
+
+.. code-block:: sql
+
+  SELECT l_orderkey, l_partkey, SUM(l_orderkey) AS total FROM lineitem GROUP BY l_partkey;
+
+Aggregation functions can be used with the ``DISTINCT`` keyword, which forces an individual aggregate function to take only distinct values of the argument expression. The ``DISTINCT`` keyword is used as follows:
+
+.. code-block:: sql
+
+  SELECT l_partkey, COUNT(distinct l_quantity), SUM(distinct l_extendedprice) AS total FROM lineitem GROUP BY l_partkey;
+
+==========================
+Orderby and Limit Clauses
+==========================
+
+*Synopsis*
+
+.. code-block:: sql
+
+  FROM ... ORDER BY <sort_expr> [(ASC|DESC)] [NULLS (FIRST|LAST)] [, ...]
+
+``sort_expr`` can be a column reference, aliased column reference, or a complex expression. 
+``ASC`` indicates an ascending order of ``sort_expr`` values. ``DESC`` indicates a descending order of ``sort_expr`` values.
+``ASC`` is the default order.
+
+``NULLS FIRST`` and ``NULLS LAST`` options can be used to determine whether null values appear 
+before or after non-null values in the sort ordering. By default, null values are treated as larger than any non-null value; 
+that is, ``NULLS FIRST`` is the default for ``DESC`` order, and ``NULLS LAST`` otherwise.
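+
+For example (a minimal sketch over the ``lineitem`` table):
+
+.. code-block:: sql
+
+  -- descending order with null ship dates placed last instead of the default first
+  SELECT l_orderkey, l_shipdate FROM lineitem ORDER BY l_shipdate DESC NULLS LAST;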
+
+==========================
+Window Functions
+==========================
+
+A window function performs a calculation across multiple table rows that belong to some window frame.
+
+*Synopsis*
+
+.. code-block:: sql
+
+  SELECT ...., func(param) OVER ([PARTITION BY partition-expr [, ...]] [ORDER BY sort-expr [, ...]]), ....,  FROM
+
+The PARTITION BY list within OVER specifies dividing the rows into groups, or partitions, that share the same values of 
+the PARTITION BY expression(s). For each row, the window function is computed across the rows that fall into 
+the same partition as the current row.
+
+We will briefly explain some examples using window functions.
+
+---------
+Examples
+---------
+
+Multiple window functions can be used in a SQL statement as follows:
+
+.. code-block:: sql
+
+  SELECT l_orderkey, sum(l_discount) OVER (PARTITION BY l_orderkey), sum(l_quantity) OVER (PARTITION BY l_orderkey) FROM LINEITEM;
+
+If the ``OVER()`` clause is empty as follows, all table rows are put into one window frame.
+
+.. code-block:: sql
+
+  SELECT salary, sum(salary) OVER () FROM empsalary;
+
+Also, ``ORDER BY`` clause can be used without ``PARTITION BY`` clause as follows:
+
+.. code-block:: sql
+
+  SELECT salary, sum(salary) OVER (ORDER BY salary) FROM empsalary;
+
+Also, expressions and aggregation functions are allowed in the ``ORDER BY`` clause, as follows:
+
+.. code-block:: sql
+
+  select
+    l_orderkey,
+    count(*) as cnt,
+    row_number() over (partition by l_orderkey order by count(*) desc)
+    row_num
+  from
+    lineitem
+  group by
+    l_orderkey
+
+.. note::
+
+  Currently, Tajo does not support multiple different partition-expressions in one SQL statement.
\ No newline at end of file

Added: tajo/site/docs/0.10.0/_sources/sql_language/sql_expression.txt
URL: http://svn.apache.org/viewvc/tajo/site/docs/0.10.0/_sources/sql_language/sql_expression.txt?rev=1665114&view=auto
==============================================================================
--- tajo/site/docs/0.10.0/_sources/sql_language/sql_expression.txt (added)
+++ tajo/site/docs/0.10.0/_sources/sql_language/sql_expression.txt Mon Mar  9 02:35:26 2015
@@ -0,0 +1,31 @@
+============================
+ SQL Expressions
+============================
+
+-------------------------
+ Arithmetic Expressions
+-------------------------
+
+-------------------------
+Type Casts
+-------------------------
+A type cast converts a value of one data type to a value of another data type. Tajo provides two forms of type cast syntax:
+
+.. code-block:: sql
+
+  CAST ( expression AS type )
+  expression::type
+
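+For example (a minimal sketch over a hypothetical ``table1``):
+
+.. code-block:: sql
+
+  SELECT CAST(score AS int4) FROM table1;
+
+  -- the shorthand form is equivalent
+  SELECT score::int4 FROM table1;
+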
+
+-------------------------
+String Expressions
+-------------------------
+
+
+-------------------------
+Function Call
+-------------------------
+
+.. code-block:: sql
+
+  function_name ([expression [, expression ... ]] )
\ No newline at end of file

Added: tajo/site/docs/0.10.0/_sources/table_management.txt
URL: http://svn.apache.org/viewvc/tajo/site/docs/0.10.0/_sources/table_management.txt?rev=1665114&view=auto
==============================================================================
--- tajo/site/docs/0.10.0/_sources/table_management.txt (added)
+++ tajo/site/docs/0.10.0/_sources/table_management.txt Mon Mar  9 02:35:26 2015
@@ -0,0 +1,12 @@
+******************
+Table Management
+******************
+
+In Tajo, a table is a logical view of one data source. Logically, a table consists of a logical schema, partitions, a URL, and various properties. Physically, a table can be a directory in HDFS, a single file, an HBase table, or an RDBMS table. In order to make good use of Tajo, users need to understand the features and physical characteristics of their tables' physical layout. This section explains all about table management.
+
+.. toctree::
+    :maxdepth: 1
+
+    table_management/table_overview
+    table_management/file_formats
+    table_management/compression
\ No newline at end of file

Added: tajo/site/docs/0.10.0/_sources/table_management/compression.txt
URL: http://svn.apache.org/viewvc/tajo/site/docs/0.10.0/_sources/table_management/compression.txt?rev=1665114&view=auto
==============================================================================
--- tajo/site/docs/0.10.0/_sources/table_management/compression.txt (added)
+++ tajo/site/docs/0.10.0/_sources/table_management/compression.txt Mon Mar  9 02:35:26 2015
@@ -0,0 +1,5 @@
+*********************************
+Compression
+*********************************
+
+.. todo::
\ No newline at end of file

Added: tajo/site/docs/0.10.0/_sources/table_management/csv.txt
URL: http://svn.apache.org/viewvc/tajo/site/docs/0.10.0/_sources/table_management/csv.txt?rev=1665114&view=auto
==============================================================================
--- tajo/site/docs/0.10.0/_sources/table_management/csv.txt (added)
+++ tajo/site/docs/0.10.0/_sources/table_management/csv.txt Mon Mar  9 02:35:26 2015
@@ -0,0 +1,115 @@
+*************************************
+CSV (TextFile)
+*************************************
+
+A character-separated values (CSV) file represents a tabular data set consisting of rows and columns.
+Each row is a plain-text line. A line is usually terminated by a line feed character ``\n`` or a carriage return ``\r``.
+The line feed ``\n`` is the default line delimiter in Tajo. Each record consists of multiple fields, separated by
+some other character or string, most commonly a literal vertical bar ``|``, comma ``,`` or tab ``\t``.
+The vertical bar is used as the default field delimiter in Tajo.
+
+=========================================
+How to Create a CSV Table?
+=========================================
+
+If you are not familiar with the ``CREATE TABLE`` statement, please refer to the Data Definition Language :doc:`/sql_language/ddl`.
+
+In order to specify a certain file format for your table, you need to use the ``USING`` clause in your ``CREATE TABLE``
+statement. Below is an example statement for creating a table using CSV files.
+
+.. code-block:: sql
+
+ CREATE TABLE
+  table1 (
+    id int,
+    name text,
+    score float,
+    type text
+  ) USING CSV;
+
+=========================================
+Physical Properties
+=========================================
+
+Some table storage formats provide parameters for enabling or disabling features and adjusting physical parameters.
+The ``WITH`` clause in the CREATE TABLE statement allows users to set those parameters.
+
+Currently, the CSV storage format provides the following physical properties.
+
+* ``text.delimiter``: delimiter character. ``|`` or ``\u0001`` is usually used, and the default field delimiter is ``|``.
+* ``text.null``: NULL character. The default NULL character is an empty string ``''``. Hive's default NULL character is ``'\\N'``.
+* ``compression.codec``: Compression codec. You can enable compression and specify the compression algorithm used to compress files. The compression codec name should be the fully qualified class name inherited from `org.apache.hadoop.io.compress.CompressionCodec <https://hadoop.apache.org/docs/current/api/org/apache/hadoop/io/compress/CompressionCodec.html>`_. By default, compression is disabled.
+* ``csvfile.serde`` (deprecated): custom (De)serializer class. ``org.apache.tajo.storage.TextSerializerDeserializer`` is the default (De)serializer class.
+* ``timezone``: the time zone that the table uses for writing. When table rows are read or written, ``timestamp`` and ``time`` column values are adjusted by this time zone if it is set. A time zone can be an abbreviation like 'PST' or 'DST', an offset-based form like 'UTC+9', or a location-based form like 'Asia/Seoul'.
+* ``text.error-tolerance.max-num``: the maximum number of permissible parsing errors. This value should be an integer value. By default, ``text.error-tolerance.max-num`` is ``0``. According to the value, parsing errors will be handled in different ways.
+  * If ``text.error-tolerance.max-num < 0``, all parsing errors are ignored.
+  * If ``text.error-tolerance.max-num == 0``, no parsing error is allowed. If any error occurs, the query will fail. (default)
+  * If ``text.error-tolerance.max-num > 0``, the given number of parsing errors in each task will be permissible.
+
+The following example is to set a custom field delimiter, NULL character, and compression codec:
+
+.. code-block:: sql
+
+ CREATE TABLE table1 (
+  id int,
+  name text,
+  score float,
+  type text
+ ) USING CSV WITH('text.delimiter'='\u0001',
+                  'text.null'='\\N',
+                  'compression.codec'='org.apache.hadoop.io.compress.SnappyCodec');
+
+.. warning::
+
+  Be careful when using ``\n`` as the field delimiter because CSV uses ``\n`` as the line delimiter.
+  At the moment, Tajo does not provide a way to specify the line delimiter.
+
+=========================================
+Custom (De)serializer
+=========================================
+
+The CSV storage format not only provides reading and writing interfaces for CSV data but also allows users to process custom
+plain-text file formats with user-defined (De)serializer classes.
+For example, with custom (de)serializers, Tajo can process JSON file formats or any specialized plain-text file formats.
+
+In order to specify a custom (De)serializer, set a physical property ``csvfile.serde``.
+The property value should be a fully qualified class name.
+
+For example:
+
+.. code-block:: sql
+
+ CREATE TABLE table1 (
+  id int,
+  name text,
+  score float,
+  type text
+ ) USING CSV WITH ('csvfile.serde'='org.my.storage.CustomSerializerDeserializer')
+
+
+=========================================
+Null Value Handling Issues
+=========================================
+By default, the NULL character in CSV files is an empty string ``''``.
+In other words, an empty field is recognized as a NULL value in Tajo.
+If a field's domain is ``TEXT``, an empty field is recognized as the string value ``''`` instead of a NULL value.
+You can also use your own NULL character by specifying the physical property ``text.null``, as shown in the sketch below.
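+
+For example, the following sketch uses Hive's ``'\\N'`` convention as the NULL character (the table definition is hypothetical):
+
+.. code-block:: sql
+
+ CREATE TABLE table2 (
+  id int,
+  name text
+ ) USING CSV WITH ('text.null'='\\N');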
+
+=========================================
+Compatibility Issues with Apache Hive™
+=========================================
+
+CSV files generated in Tajo can be processed directly by Apache Hive™ without further processing.
+In this section, we explain some compatibility issues for users who use both Hive and Tajo.
+
+If you set a custom field delimiter, the CSV tables cannot be directly used in Hive.
+In order to specify the custom field delimiter in Hive, you need to use ``ROW FORMAT DELIMITED FIELDS TERMINATED BY``
+clause in a Hive's ``CREATE TABLE`` statement as follows:
+
+.. code-block:: sql
+
+ CREATE TABLE table1 (id int, name string, score float, type string)
+ ROW FORMAT DELIMITED FIELDS TERMINATED BY '|'
+ STORED AS TEXTFILE;
+
+To the best of our knowledge, there is no way to specify a custom NULL character in Hive.

Added: tajo/site/docs/0.10.0/_sources/table_management/file_formats.txt
URL: http://svn.apache.org/viewvc/tajo/site/docs/0.10.0/_sources/table_management/file_formats.txt?rev=1665114&view=auto
==============================================================================
--- tajo/site/docs/0.10.0/_sources/table_management/file_formats.txt (added)
+++ tajo/site/docs/0.10.0/_sources/table_management/file_formats.txt Mon Mar  9 02:35:26 2015
@@ -0,0 +1,13 @@
+*************************************
+File Formats
+*************************************
+
+Currently, Tajo provides four file formats as follows:
+
+.. toctree::
+    :maxdepth: 1
+
+    csv
+    rcfile
+    parquet
+    sequencefile
\ No newline at end of file