Posted to commits@asterixdb.apache.org by bu...@apache.org on 2016/10/19 02:32:00 UTC

[02/24] asterixdb git commit: Documentation cleanup.

http://git-wip-us.apache.org/repos/asf/asterixdb/blob/10351a74/asterixdb/asterix-doc/src/main/markdown/sqlpp/3_query.md
----------------------------------------------------------------------
diff --git a/asterixdb/asterix-doc/src/main/markdown/sqlpp/3_query.md b/asterixdb/asterix-doc/src/main/markdown/sqlpp/3_query.md
index 5ca0e1f..85787e8 100644
--- a/asterixdb/asterix-doc/src/main/markdown/sqlpp/3_query.md
+++ b/asterixdb/asterix-doc/src/main/markdown/sqlpp/3_query.md
@@ -72,23 +72,23 @@ The following shows the (rich) grammar for the `SELECT` statement in SQL++.
     OrderbyClause      ::= <ORDER> <BY> Expression ( <ASC> | <DESC> )? ( "," Expression ( <ASC> | <DESC> )? )*
     LimitClause        ::= <LIMIT> Expression ( <OFFSET> Expression )?
 
-In this section, we will make use of two stored collections of records (datasets), `GleambookUsers` and `GleambookMessages`, in a series of running examples to explain `SELECT` queries. The contents of the example collections are as follows:
+In this section, we will make use of two stored collections of objects (datasets), `GleambookUsers` and `GleambookMessages`, in a series of running examples to explain `SELECT` queries. The contents of the example collections are as follows:
 
 `GleambookUsers` collection:
 
-    {"id":1,"alias":"Margarita","name":"MargaritaStoddard","nickname":"Mags","userSince":datetime("2012-08-20T10:10:00"),"friendIds":{{2,3,6,10}},"employment":[{"organizationName":"Codetechno","start-date":date("2006-08-06")},{"organizationName":"geomedia","start-date":date("2010-06-17"),"end-date":date("2010-01-26")}],"gender":"F"}
-    {"id":2,"alias":"Isbel","name":"IsbelDull","nickname":"Izzy","userSince":datetime("2011-01-22T10:10:00"),"friendIds":{{1,4}},"employment":[{"organizationName":"Hexviafind","startDate":date("2010-04-27")}]}
-    {"id":3,"alias":"Emory","name":"EmoryUnk","userSince":datetime("2012-07-10T10:10:00"),"friendIds":{{1,5,8,9}},"employment":[{"organizationName":"geomedia","startDate":date("2010-06-17"),"endDate":date("2010-01-26")}]}
+    {"id":1,"alias":"Margarita","name":"MargaritaStoddard","nickname":"Mags","userSince":"2012-08-20T10:10:00","friendIds":[2,3,6,10],"employment":[{"organizationName":"Codetechno","start-date":"2006-08-06"},{"organizationName":"geomedia","start-date":"2010-06-17","end-date":"2010-01-26"}],"gender":"F"}
+    {"id":2,"alias":"Isbel","name":"IsbelDull","nickname":"Izzy","userSince":"2011-01-22T10:10:00","friendIds":[1,4],"employment":[{"organizationName":"Hexviafind","startDate":"2010-04-27"}]}
+    {"id":3,"alias":"Emory","name":"EmoryUnk","userSince":"2012-07-10T10:10:00","friendIds":[1,5,8,9],"employment":[{"organizationName":"geomedia","startDate":"2010-06-17","endDate":"2010-01-26"}]}
 
 `GleambookMessages` collection:
 
-    {"messageId":2,"authorId":1,"inResponseTo":4,"senderLocation":point("41.66,80.87"),"message":" dislike iphone its touch-screen is horrible"}
-    {"messageId":3,"authorId":2,"inResponseTo":4,"senderLocation":point("48.09,81.01"),"message":" like samsung the plan is amazing"}
-    {"messageId":4,"authorId":1,"inResponseTo":2,"senderLocation":point("37.73,97.04"),"message":" can't stand at&t the network is horrible:("}
-    {"messageId":6,"authorId":2,"inResponseTo":1,"senderLocation":point("31.5,75.56"),"message":" like t-mobile its platform is mind-blowing"}
-    {"messageId":8,"authorId":1,"inResponseTo":11,"senderLocation":point("40.33,80.87"),"message":" like verizon the 3G is awesome:)"}
-    {"messageId":10,"authorId":1,"inResponseTo":12,"senderLocation":point("42.5,70.01"),"message":" can't stand motorola the touch-screen is terrible"}
-    {"messageId":11,"authorId":1,"inResponseTo":1,"senderLocation":point("38.97,77.49"),"message":" can't stand at&t its plan is terrible"}
+    {"messageId":2,"authorId":1,"inResponseTo":4,"senderLocation":[41.66,80.87],"message":" dislike iphone its touch-screen is horrible"}
+    {"messageId":3,"authorId":2,"inResponseTo":4,"senderLocation":[48.09,81.01],"message":" like samsung the plan is amazing"}
+    {"messageId":4,"authorId":1,"inResponseTo":2,"senderLocation":[37.73,97.04],"message":" can't stand at&t the network is horrible:("}
+    {"messageId":6,"authorId":2,"inResponseTo":1,"senderLocation":[31.5,75.56],"message":" like t-mobile its platform is mind-blowing"}
+    {"messageId":8,"authorId":1,"inResponseTo":11,"senderLocation":[40.33,80.87],"message":" like verizon the 3G is awesome:)"}
+    {"messageId":10,"authorId":1,"inResponseTo":12,"senderLocation":[42.5,70.01],"message":" can't stand motorola the touch-screen is terrible"}
+    {"messageId":11,"authorId":1,"inResponseTo":1,"senderLocation":[38.97,77.49],"message":" can't stand at&t its plan is terrible"}
 
 ## <a id="Select_clauses">SELECT Clause</a>
 The SQL++ `SELECT` clause always returns a collection value as its result (even if the result is empty or a singleton).
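
For illustration (the section's actual example query falls outside this hunk), a query of the following shape would return the one-element collection shown below, given the sample `GleambookUsers` data:

    SELECT VALUE user
    FROM GleambookUsers user
    WHERE user.id = 1;
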
@@ -107,29 +107,29 @@ The following example shows a query that selects one user from the GleambookUser
 This query returns:
 
     [{
-    	"userSince": "2012-08-20T10:10:00.000Z",
-    	"friendIds": [
-    		2,
-    		3,
-    		6,
-    		10
-    	],
-    	"gender": "F",
-    	"name": "MargaritaStoddard",
-    	"nickname": "Mags",
-    	"alias": "Margarita",
-    	"id": 1,
-    	"employment": [
-    		{
-    			"organizationName": "Codetechno",
-    			"start-date": "2006-08-06"
-    		},
-    		{
-    			"end-date": "2010-01-26",
-    			"organizationName": "geomedia",
-    			"start-date": "2010-06-17"
-    		}
-    	]
+        "userSince": "2012-08-20T10:10:00.000Z",
+        "friendIds": [
+            2,
+            3,
+            6,
+            10
+        ],
+        "gender": "F",
+        "name": "MargaritaStoddard",
+        "nickname": "Mags",
+        "alias": "Margarita",
+        "id": 1,
+        "employment": [
+            {
+                "organizationName": "Codetechno",
+                "start-date": "2006-08-06"
+            },
+            {
+                "end-date": "2010-01-26",
+                "organizationName": "geomedia",
+                "start-date": "2010-06-17"
+            }
+        ]
     } ]
 
 ### <a id="SQL_select">SQL-style SELECT</a>
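
The example query for this subsection is not included in the hunk; a SQL-style projection consistent with the `user_name`/`user_alias` result shown below would be (a sketch over the sample data):

    SELECT user.name AS user_name, user.alias AS user_alias
    FROM GleambookUsers user
    WHERE user.id = 1;
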
@@ -145,12 +145,12 @@ This syntax can also be reformulated in a `SELECT VALUE` based manner in SQL++.
 Returns:
 
     [ {
-    	"user_name": "MargaritaStoddard",
-    	"user_alias": "Margarita"
+        "user_name": "MargaritaStoddard",
+        "user_alias": "Margarita"
     } ]
 
 ### <a id="Select_star">SELECT *</a>
-In SQL++, `SELECT *` returns a record with a nested field for each input tuple. Each field has as its field name the name of a binding variable generated by either the `FROM` clause or `GROUP BY` clause in the current enclosing `SELECT` statement, and its field is the value of that binding variable.
+In SQL++, `SELECT *` returns an object with a nested field for each input tuple. Each field has as its field name the name of a binding variable generated by either the `FROM` clause or `GROUP BY` clause in the current enclosing `SELECT` statement, and its field value is the value of that binding variable.
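
As a sketch (the manual's own example query is elided from this hunk), the result listed below is what a query like the following produces over the sample `GleambookUsers` data:

    SELECT *
    FROM GleambookUsers user;
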
 
 ##### Example
 
@@ -160,69 +160,69 @@ In SQL++, `SELECT *` returns a record with a nested field for each input tuple.
 Since `user` is the only binding variable generated in the `FROM` clause, this query returns:
 
     [ {
-    	"user": {
-    		"userSince": "2012-08-20T10:10:00.000Z",
-    		"friendIds": [
-    			2,
-    			3,
-    			6,
-    			10
-    		],
-    		"gender": "F",
-    		"name": "MargaritaStoddard",
-    		"nickname": "Mags",
-    		"alias": "Margarita",
-    		"id": 1,
-    		"employment": [
-    			{
-    				"organizationName": "Codetechno",
-    				"start-date": "2006-08-06"
-    			},
-    			{
-    				"end-date": "2010-01-26",
-    				"organizationName": "geomedia",
-    				"start-date": "2010-06-17"
-    			}
-    		]
-    	}
+        "user": {
+            "userSince": "2012-08-20T10:10:00.000Z",
+            "friendIds": [
+                2,
+                3,
+                6,
+                10
+            ],
+            "gender": "F",
+            "name": "MargaritaStoddard",
+            "nickname": "Mags",
+            "alias": "Margarita",
+            "id": 1,
+            "employment": [
+                {
+                    "organizationName": "Codetechno",
+                    "start-date": "2006-08-06"
+                },
+                {
+                    "end-date": "2010-01-26",
+                    "organizationName": "geomedia",
+                    "start-date": "2010-06-17"
+                }
+            ]
+        }
     }, {
-    	"user": {
-    		"userSince": "2011-01-22T10:10:00.000Z",
-    		"friendIds": [
-    			1,
-    			4
-    		],
-    		"name": "IsbelDull",
-    		"nickname": "Izzy",
-    		"alias": "Isbel",
-    		"id": 2,
-    		"employment": [
-    			{
-    				"organizationName": "Hexviafind",
-    				"startDate": "2010-04-27"
-    			}
-    		]
-    	}
+        "user": {
+            "userSince": "2011-01-22T10:10:00.000Z",
+            "friendIds": [
+                1,
+                4
+            ],
+            "name": "IsbelDull",
+            "nickname": "Izzy",
+            "alias": "Isbel",
+            "id": 2,
+            "employment": [
+                {
+                    "organizationName": "Hexviafind",
+                    "startDate": "2010-04-27"
+                }
+            ]
+        }
     }, {
-    	"user": {
-    		"userSince": "2012-07-10T10:10:00.000Z",
-    		"friendIds": [
-    			1,
-    			5,
-    			8,
-    			9
-    		],
-    		"name": "EmoryUnk",
-    		"alias": "Emory",
-    		"id": 3,
-    		"employment": [
-    			{
-    				"organizationName": "geomedia",
-    				"endDate": "2010-01-26",
-    				"startDate": "2010-06-17"
-    			}
-    		]
-    	}
+        "user": {
+            "userSince": "2012-07-10T10:10:00.000Z",
+            "friendIds": [
+                1,
+                5,
+                8,
+                9
+            ],
+            "name": "EmoryUnk",
+            "alias": "Emory",
+            "id": 3,
+            "employment": [
+                {
+                    "organizationName": "geomedia",
+                    "endDate": "2010-01-26",
+                    "startDate": "2010-06-17"
+                }
+            ]
+        }
     } ]
 
 ### <a id="Select_distinct">SELECT DISTINCT</a>
@@ -235,11 +235,11 @@ SQL++'s `DISTINCT` keyword is used to eliminate duplicate items in results. The
 This query returns:
 
     [ {
-    	"foo": 1
+        "foo": 1
     }, {
-    	"foo": 2
+        "foo": 2
     }, {
-    	"foo": 3
+        "foo": 3
     } ]
 
 ##### Example
@@ -270,8 +270,8 @@ Name generation has three cases:
 This query outputs:
 
     [ {
-    	"alias": "Margarita",
-    	"$1": "Stoddard"
+        "alias": "Margarita",
+        "$1": "Stoddard"
     } ]
 
 In the result, `$1` is the generated name for `substr(user.name, 1)`, while `alias` is the generated name for `user.alias`.
@@ -288,15 +288,15 @@ As in standard SQL, SQL++ field access expressions can be abbreviated (not recom
 Outputs:
 
     [ {
-    	"lname": "Stoddard",
-    	"alias": "Margarita"
+        "lname": "Stoddard",
+        "alias": "Margarita"
     } ]
 
 ## <a id="Unnest_clauses">UNNEST Clause</a>
 For each of its input tuples, the `UNNEST` clause flattens a collection-valued expression into individual items, producing multiple tuples, each of which is one of the expression's original input tuples augmented with a flattened item from its collection.
 
 ### <a id="Inner_unnests">Inner UNNEST</a>
-The following example is a query that retrieves the names of the organizations that a selected user has worked for. It uses the `UNNEST` clause to unnest the nested collection `employment` in the user's record.
+The following example is a query that retrieves the names of the organizations that a selected user has worked for. It uses the `UNNEST` clause to unnest the nested collection `employment` in the user's object.
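
The example query is outside this hunk; a sketch consistent with the `userId`/`orgName` result shown below is:

    SELECT u.id AS userId, e.organizationName AS orgName
    FROM GleambookUsers u
    UNNEST u.employment e
    WHERE u.id = 1;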
 
 ##### Example
 
@@ -308,17 +308,17 @@ The following example is a query that retrieves the names of the organizations t
 This query returns:
 
     [ {
-    	"orgName": "Codetechno",
-    	"userId": 1
+        "orgName": "Codetechno",
+        "userId": 1
     }, {
-    	"orgName": "geomedia",
-    	"userId": 1
+        "orgName": "geomedia",
+        "userId": 1
     } ]
 
 Note that `UNNEST` has SQL's inner join semantics --- that is, if a user has no employment history, no tuple corresponding to that user will be emitted in the result.
 
 ### <a id="Left_outer_unnests">Left outer UNNEST</a>
-As an alternative, the `LEFT OUTER UNNEST` clause offers SQL's left outer join semantics. For example, no collection-valued field named `hobbies` exists in the record for the user whose id is 1, but the following query's result still includes user 1.
+As an alternative, the `LEFT OUTER UNNEST` clause offers SQL's left outer join semantics. For example, no collection-valued field named `hobbies` exists in the object for the user whose id is 1, but the following query's result still includes user 1.
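
A sketch of such a query follows (the exact example is not in this hunk, and the `hobbyName` field is only an assumption about how hobby objects might be shaped):

    SELECT u.id AS userId, h.hobbyName AS hobby
    FROM GleambookUsers u
    LEFT OUTER UNNEST u.hobbies h
    WHERE u.id = 1;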
 
 ##### Example
 
@@ -330,14 +330,14 @@ As an alternative, the `LEFT OUTER UNNEST` clause offers SQL's left outer join s
 Returns:
 
     [ {
-    	"userId": 1
+        "userId": 1
     } ]
 
 Note that if `u.hobbies` is an empty collection or leads to a `MISSING` (as above) or `NULL` value for a given input tuple, there is no corresponding binding value for variable `h` for that input tuple; instead, a `MISSING` value will be generated for `h` so that the input tuple can still be propagated.
 
 ### <a id="Expressing_joins_using_unnests">Expressing joins using UNNEST</a>
 The SQL++ `UNNEST` clause is similar to SQL's `JOIN` clause except that it allows its right argument to be correlated to its left argument, as in the examples above --- i.e., think "correlated cross-product".
-The next example shows this via a query that joins two data sets, GleambookUsers and GleambookMessages, returning user/message pairs. The results contain one record per pair, with result records containing the user's name and an entire message. The query can be thought of as saying "for each Gleambook user, unnest the `GleambookMessages` collection and filter the output with the condition `message.authorId = user.id`".
+The next example shows this via a query that joins two datasets, `GleambookUsers` and `GleambookMessages`, returning user/message pairs. The results contain one object per pair, with result objects containing the user's name and an entire message. The query can be thought of as saying "for each Gleambook user, unnest the `GleambookMessages` collection and filter the output with the condition `message.authorId = user.id`".
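
The join query itself is elided from this hunk; a sketch matching the `uname`/`message` result below is:

    SELECT u.name AS uname, m.message AS message
    FROM GleambookUsers u
    UNNEST GleambookMessages m
    WHERE m.authorId = u.id;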
 
 ##### Example
 
@@ -349,26 +349,26 @@ The next example shows this via a query that joins two data sets, GleambookUsers
 This returns:
 
     [ {
-    	"uname": "MargaritaStoddard",
-    	"message": " can't stand at&t its plan is terrible"
+        "uname": "MargaritaStoddard",
+        "message": " can't stand at&t its plan is terrible"
     }, {
-    	"uname": "MargaritaStoddard",
-    	"message": " dislike iphone its touch-screen is horrible"
+        "uname": "MargaritaStoddard",
+        "message": " dislike iphone its touch-screen is horrible"
     }, {
-    	"uname": "MargaritaStoddard",
-    	"message": " can't stand at&t the network is horrible:("
+        "uname": "MargaritaStoddard",
+        "message": " can't stand at&t the network is horrible:("
     }, {
-    	"uname": "MargaritaStoddard",
-    	"message": " like verizon the 3G is awesome:)"
+        "uname": "MargaritaStoddard",
+        "message": " like verizon the 3G is awesome:)"
     }, {
-    	"uname": "MargaritaStoddard",
-    	"message": " can't stand motorola the touch-screen is terrible"
+        "uname": "MargaritaStoddard",
+        "message": " can't stand motorola the touch-screen is terrible"
     }, {
-    	"uname": "IsbelDull",
-    	"message": " like t-mobile its platform is mind-blowing"
+        "uname": "IsbelDull",
+        "message": " like t-mobile its platform is mind-blowing"
     }, {
-    	"uname": "IsbelDull",
-    	"message": " like samsung the plan is amazing"
+        "uname": "IsbelDull",
+        "message": " like samsung the plan is amazing"
     } ]
 
 Similarly, the above query can also be expressed as the `UNNEST`ing of a correlated SQL++ subquery:
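
That reformulation is not shown in this hunk; one way to write it (a sketch) is:

    SELECT u.name AS uname, m.message AS message
    FROM GleambookUsers u
    UNNEST (
        SELECT VALUE msg
        FROM GleambookMessages msg
        WHERE msg.authorId = u.id
    ) AS m;
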
@@ -451,26 +451,26 @@ The next two examples show queries that do not provide binding variables in thei
 Returns:
 
     [ {
-    	"name": "MargaritaStoddard",
-    	"message": " like verizon the 3G is awesome:)"
+        "name": "MargaritaStoddard",
+        "message": " like verizon the 3G is awesome:)"
     }, {
-    	"name": "MargaritaStoddard",
-    	"message": " can't stand motorola the touch-screen is terrible"
+        "name": "MargaritaStoddard",
+        "message": " can't stand motorola the touch-screen is terrible"
     }, {
-    	"name": "MargaritaStoddard",
-    	"message": " can't stand at&t its plan is terrible"
+        "name": "MargaritaStoddard",
+        "message": " can't stand at&t its plan is terrible"
     }, {
-    	"name": "MargaritaStoddard",
-    	"message": " dislike iphone its touch-screen is horrible"
+        "name": "MargaritaStoddard",
+        "message": " dislike iphone its touch-screen is horrible"
     }, {
-    	"name": "MargaritaStoddard",
-    	"message": " can't stand at&t the network is horrible:("
+        "name": "MargaritaStoddard",
+        "message": " can't stand at&t the network is horrible:("
     }, {
-    	"name": "IsbelDull",
-    	"message": " like samsung the plan is amazing"
+        "name": "IsbelDull",
+        "message": " like samsung the plan is amazing"
     }, {
-    	"name": "IsbelDull",
-    	"message": " like t-mobile its platform is mind-blowing"
+        "name": "IsbelDull",
+        "message": " like t-mobile its platform is mind-blowing"
     } ]
 
 ##### Example
@@ -508,31 +508,31 @@ SQL++ supports SQL's notion of left outer join. The following query is an exampl
 Returns:
 
     [ {
-    	"uname": "MargaritaStoddard",
-    	"message": " like verizon the 3G is awesome:)"
+        "uname": "MargaritaStoddard",
+        "message": " like verizon the 3G is awesome:)"
     }, {
-    	"uname": "MargaritaStoddard",
-    	"message": " can't stand motorola the touch-screen is terrible"
+        "uname": "MargaritaStoddard",
+        "message": " can't stand motorola the touch-screen is terrible"
     }, {
-    	"uname": "MargaritaStoddard",
-    	"message": " can't stand at&t its plan is terrible"
+        "uname": "MargaritaStoddard",
+        "message": " can't stand at&t its plan is terrible"
     }, {
-    	"uname": "MargaritaStoddard",
-    	"message": " dislike iphone its touch-screen is horrible"
+        "uname": "MargaritaStoddard",
+        "message": " dislike iphone its touch-screen is horrible"
     }, {
-    	"uname": "MargaritaStoddard",
-    	"message": " can't stand at&t the network is horrible:("
+        "uname": "MargaritaStoddard",
+        "message": " can't stand at&t the network is horrible:("
     }, {
-    	"uname": "IsbelDull",
-    	"message": " like samsung the plan is amazing"
+        "uname": "IsbelDull",
+        "message": " like samsung the plan is amazing"
     }, {
-    	"uname": "IsbelDull",
-    	"message": " like t-mobile its platform is mind-blowing"
+        "uname": "IsbelDull",
+        "message": " like t-mobile its platform is mind-blowing"
     }, {
-    	"uname": "EmoryUnk"
+        "uname": "EmoryUnk"
     } ]
 
-For non-matching left-side tuples, SQL++ produces `MISSING` values for the right-side binding variables; that is why the last record in the above result doesn't have a `message` field. Note that this is slightly different from standard SQL, which instead would fill in `NULL` values for the right-side fields. The reason for this difference is that, for non-matches in its join results, SQL++ views fields from the right-side as being "not there" (a.k.a. `MISSING`) instead of as being "there but unknown" (i.e., `NULL`).
+For non-matching left-side tuples, SQL++ produces `MISSING` values for the right-side binding variables; that is why the last object in the above result doesn't have a `message` field. Note that this is slightly different from standard SQL, which instead would fill in `NULL` values for the right-side fields. The reason for this difference is that, for non-matches in its join results, SQL++ views fields from the right-side as being "not there" (a.k.a. `MISSING`) instead of as being "there but unknown" (i.e., `NULL`).
 
 The left-outer join query can also be expressed using `LEFT OUTER UNNEST`:
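
That `LEFT OUTER UNNEST` formulation is not shown in this hunk; a sketch of it is:

    SELECT u.name AS uname, m.message AS message
    FROM GleambookUsers u
    LEFT OUTER UNNEST (
        SELECT VALUE msg
        FROM GleambookMessages msg
        WHERE msg.authorId = u.id
    ) m;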
 
@@ -551,7 +551,7 @@ The SQL++ `GROUP BY` clause generalizes standard SQL's grouping and aggregation
 
 ### <a id="Group_variables">Group variables</a>
 In a `GROUP BY` clause, in addition to the binding variable(s) defined for the grouping key(s), SQL++ allows a user to define a *group variable* by using the clause's `GROUP AS` extension to denote the resulting group.
-After grouping, then, the query's in-scope variables include the grouping key's binding variables as well as this group variable which will be bound to one collection value for each group. This per-group collection value will be a set of nested records in which each field of the record is the result of a renamed variable defined in parentheses following the group variable's name. The `GROUP AS` syntax is as follows:
+After grouping, then, the query's in-scope variables include the grouping key's binding variables as well as this group variable which will be bound to one collection value for each group. This per-group collection value will be a set of nested objects in which each field of the object is the result of a renamed variable defined in parentheses following the group variable's name. The `GROUP AS` syntax is as follows:
 
     <GROUP> <AS> Variable ("(" Variable <AS> VariableReference ("," Variable <AS> VariableReference )* ")")?
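
The first example query of this section is elided from the hunk; a grouping query of the following shape (a sketch) yields the result listed below, with `msgs` as the group variable and `msg` as the renamed per-group field:

    SELECT *
    FROM GleambookMessages message
    GROUP BY message.authorId AS uid
    GROUP AS msgs(message AS msg);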
 
@@ -564,108 +564,108 @@ After grouping, then, the query's in-scope variables include the grouping key's
 This first example query returns:
 
     [ {
-    	"msgs": [
-    		{
-    			"msg": {
-    				"senderLocation": [
-    					38.97,
-    					77.49
-    				],
-    				"inResponseTo": 1,
-    				"messageId": 11,
-    				"authorId": 1,
-    				"message": " can't stand at&t its plan is terrible"
-    			}
-    		},
-    		{
-    			"msg": {
-    				"senderLocation": [
-    					41.66,
-    					80.87
-    				],
-    				"inResponseTo": 4,
-    				"messageId": 2,
-    				"authorId": 1,
-    				"message": " dislike iphone its touch-screen is horrible"
-    			}
-    		},
-    		{
-    			"msg": {
-    				"senderLocation": [
-    					37.73,
-    					97.04
-    				],
-    				"inResponseTo": 2,
-    				"messageId": 4,
-    				"authorId": 1,
-    				"message": " can't stand at&t the network is horrible:("
-    			}
-    		},
-    		{
-    			"msg": {
-    				"senderLocation": [
-    					40.33,
-    					80.87
-    				],
-    				"inResponseTo": 11,
-    				"messageId": 8,
-    				"authorId": 1,
-    				"message": " like verizon the 3G is awesome:)"
-    			}
-    		},
-    		{
-    			"msg": {
-    				"senderLocation": [
-    					42.5,
-    					70.01
-    				],
-    				"inResponseTo": 12,
-    				"messageId": 10,
-    				"authorId": 1,
-    				"message": " can't stand motorola the touch-screen is terrible"
-    			}
-    		}
-    	],
-    	"uid": 1
+        "msgs": [
+            {
+                "msg": {
+                    "senderLocation": [
+                        38.97,
+                        77.49
+                    ],
+                    "inResponseTo": 1,
+                    "messageId": 11,
+                    "authorId": 1,
+                    "message": " can't stand at&t its plan is terrible"
+                }
+            },
+            {
+                "msg": {
+                    "senderLocation": [
+                        41.66,
+                        80.87
+                    ],
+                    "inResponseTo": 4,
+                    "messageId": 2,
+                    "authorId": 1,
+                    "message": " dislike iphone its touch-screen is horrible"
+                }
+            },
+            {
+                "msg": {
+                    "senderLocation": [
+                        37.73,
+                        97.04
+                    ],
+                    "inResponseTo": 2,
+                    "messageId": 4,
+                    "authorId": 1,
+                    "message": " can't stand at&t the network is horrible:("
+                }
+            },
+            {
+                "msg": {
+                    "senderLocation": [
+                        40.33,
+                        80.87
+                    ],
+                    "inResponseTo": 11,
+                    "messageId": 8,
+                    "authorId": 1,
+                    "message": " like verizon the 3G is awesome:)"
+                }
+            },
+            {
+                "msg": {
+                    "senderLocation": [
+                        42.5,
+                        70.01
+                    ],
+                    "inResponseTo": 12,
+                    "messageId": 10,
+                    "authorId": 1,
+                    "message": " can't stand motorola the touch-screen is terrible"
+                }
+            }
+        ],
+        "uid": 1
     }, {
-    	"msgs": [
-    		{
-    			"msg": {
-    				"senderLocation": [
-    					31.5,
-    					75.56
-    				],
-    				"inResponseTo": 1,
-    				"messageId": 6,
-    				"authorId": 2,
-    				"message": " like t-mobile its platform is mind-blowing"
-    			}
-    		},
-    		{
-    			"msg": {
-    				"senderLocation": [
-    					48.09,
-    					81.01
-    				],
-    				"inResponseTo": 4,
-    				"messageId": 3,
-    				"authorId": 2,
-    				"message": " like samsung the plan is amazing"
-    			}
-    		}
-    	],
-    	"uid": 2
+        "msgs": [
+            {
+                "msg": {
+                    "senderLocation": [
+                        31.5,
+                        75.56
+                    ],
+                    "inResponseTo": 1,
+                    "messageId": 6,
+                    "authorId": 2,
+                    "message": " like t-mobile its platform is mind-blowing"
+                }
+            },
+            {
+                "msg": {
+                    "senderLocation": [
+                        48.09,
+                        81.01
+                    ],
+                    "inResponseTo": 4,
+                    "messageId": 3,
+                    "authorId": 2,
+                    "message": " like samsung the plan is amazing"
+                }
+            }
+        ],
+        "uid": 2
     } ]
 
 As we can see from the above query result, each group in the example query's output has an associated group
 variable value called `msgs` that appears in the `SELECT *`'s result.
-This variable contains a collection of records associated with the group; each of the group's `message` values
-appears in the `msg` field of the records in the `msgs` collection.
+This variable contains a collection of objects associated with the group; each of the group's `message` values
+appears in the `msg` field of the objects in the `msgs` collection.
 
 The group variable in SQL++ makes more complex, composable, nested subqueries over a group possible, which is
 important given the more complex data model of SQL++ (relative to SQL).
 As a simple example of this, as we really just want the messages associated with each user, we might wish to avoid
-the "extra wrapping" of each message as the `msg` field of a record.
+the "extra wrapping" of each message as the `msg` field of an object.
 (That wrapping is useful in more complex cases, but is essentially just in the way here.)
 We can use a subquery in the `SELECT` clause to tunnel through the extra nesting and produce the desired result.
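
A sketch of such a query (the manual's exact formulation is elided from this hunk; the iteration alias `x` over the group variable is illustrative):

    SELECT uid,
           (SELECT VALUE x.msg FROM g AS x) AS msgs
    FROM GleambookMessages message
    GROUP BY message.authorId AS uid
    GROUP AS g(message AS msg);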
 
@@ -678,83 +678,83 @@ We can use a subquery in the `SELECT` clase to tunnel through the extra nesting
 This variant of the example query returns:
 
        [ {
-       	"msgs": [
-       		{
-       			"senderLocation": [
-       				38.97,
-       				77.49
-       			],
-       			"inResponseTo": 1,
-       			"messageId": 11,
-       			"authorId": 1,
-       			"message": " can't stand at&t its plan is terrible"
-       		},
-       		{
-       			"senderLocation": [
-       				41.66,
-       				80.87
-       			],
-       			"inResponseTo": 4,
-       			"messageId": 2,
-       			"authorId": 1,
-       			"message": " dislike iphone its touch-screen is horrible"
-       		},
-       		{
-       			"senderLocation": [
-       				37.73,
-       				97.04
-       			],
-       			"inResponseTo": 2,
-       			"messageId": 4,
-       			"authorId": 1,
-       			"message": " can't stand at&t the network is horrible:("
-       		},
-       		{
-       			"senderLocation": [
-       				40.33,
-       				80.87
-       			],
-       			"inResponseTo": 11,
-       			"messageId": 8,
-       			"authorId": 1,
-       			"message": " like verizon the 3G is awesome:)"
-       		},
-       		{
-       			"senderLocation": [
-       				42.5,
-       				70.01
-       			],
-       			"inResponseTo": 12,
-       			"messageId": 10,
-       			"authorId": 1,
-       			"message": " can't stand motorola the touch-screen is terrible"
-       		}
-       	],
-       	"uid": 1
+           "msgs": [
+               {
+                   "senderLocation": [
+                       38.97,
+                       77.49
+                   ],
+                   "inResponseTo": 1,
+                   "messageId": 11,
+                   "authorId": 1,
+                   "message": " can't stand at&t its plan is terrible"
+               },
+               {
+                   "senderLocation": [
+                       41.66,
+                       80.87
+                   ],
+                   "inResponseTo": 4,
+                   "messageId": 2,
+                   "authorId": 1,
+                   "message": " dislike iphone its touch-screen is horrible"
+               },
+               {
+                   "senderLocation": [
+                       37.73,
+                       97.04
+                   ],
+                   "inResponseTo": 2,
+                   "messageId": 4,
+                   "authorId": 1,
+                   "message": " can't stand at&t the network is horrible:("
+               },
+               {
+                   "senderLocation": [
+                       40.33,
+                       80.87
+                   ],
+                   "inResponseTo": 11,
+                   "messageId": 8,
+                   "authorId": 1,
+                   "message": " like verizon the 3G is awesome:)"
+               },
+               {
+                   "senderLocation": [
+                       42.5,
+                       70.01
+                   ],
+                   "inResponseTo": 12,
+                   "messageId": 10,
+                   "authorId": 1,
+                   "message": " can't stand motorola the touch-screen is terrible"
+               }
+           ],
+           "uid": 1
        }, {
-       	"msgs": [
-       		{
-       			"senderLocation": [
-       				31.5,
-       				75.56
-       			],
-       			"inResponseTo": 1,
-       			"messageId": 6,
-       			"authorId": 2,
-       			"message": " like t-mobile its platform is mind-blowing"
-       		},
-       		{
-       			"senderLocation": [
-       				48.09,
-       				81.01
-       			],
-       			"inResponseTo": 4,
-       			"messageId": 3,
-       			"authorId": 2,
-       			"message": " like samsung the plan is amazing"
-       		}
-       	],
-       	"uid": 2
+           "msgs": [
+               {
+                   "senderLocation": [
+                       31.5,
+                       75.56
+                   ],
+                   "inResponseTo": 1,
+                   "messageId": 6,
+                   "authorId": 2,
+                   "message": " like t-mobile its platform is mind-blowing"
+               },
+               {
+                   "senderLocation": [
+                       48.09,
+                       81.01
+                   ],
+                   "inResponseTo": 4,
+                   "messageId": 3,
+                   "authorId": 2,
+                   "message": " like samsung the plan is amazing"
+               }
+           ],
+           "uid": 2
        } ]
 
 Because this is a fairly common case, a third variant with output identical to the second variant is also possible:
@@ -786,43 +786,43 @@ Here the subquery further processes the groups.
 This example query returns:
 
     [ {
-    	"msgs": [
-    		{
-    			"senderLocation": [
-    				40.33,
-    				80.87
-    			],
-    			"inResponseTo": 11,
-    			"messageId": 8,
-    			"authorId": 1,
-    			"message": " like verizon the 3G is awesome:)"
-    		}
-    	],
-    	"uid": 1
+        "msgs": [
+            {
+                "senderLocation": [
+                    40.33,
+                    80.87
+                ],
+                "inResponseTo": 11,
+                "messageId": 8,
+                "authorId": 1,
+                "message": " like verizon the 3G is awesome:)"
+            }
+        ],
+        "uid": 1
     }, {
-    	"msgs": [
-    		{
-    			"senderLocation": [
-    				48.09,
-    				81.01
-    			],
-    			"inResponseTo": 4,
-    			"messageId": 3,
-    			"authorId": 2,
-    			"message": " like samsung the plan is amazing"
-    		},
-    		{
-    			"senderLocation": [
-    				31.5,
-    				75.56
-    			],
-    			"inResponseTo": 1,
-    			"messageId": 6,
-    			"authorId": 2,
-    			"message": " like t-mobile its platform is mind-blowing"
-    		}
-    	],
-    	"uid": 2
+        "msgs": [
+            {
+                "senderLocation": [
+                    48.09,
+                    81.01
+                ],
+                "inResponseTo": 4,
+                "messageId": 3,
+                "authorId": 2,
+                "message": " like samsung the plan is amazing"
+            },
+            {
+                "senderLocation": [
+                    31.5,
+                    75.56
+                ],
+                "inResponseTo": 1,
+                "messageId": 6,
+                "authorId": 2,
+                "message": " like t-mobile its platform is mind-blowing"
+            }
+        ],
+        "uid": 2
     } ]
 
 ### <a id="Implicit_group_key_variables">Implicit grouping key variables</a>
@@ -850,43 +850,43 @@ The next example illustrates a query that doesn't provide binding variables for
 This query returns:
 
         [ {
-    	"msgs": [
-    		{
-    			"senderLocation": [
-    				40.33,
-    				80.87
-    			],
-    			"inResponseTo": 11,
-    			"messageId": 8,
-    			"authorId": 1,
-    			"message": " like verizon the 3G is awesome:)"
-    		}
-    	],
-    	"authorId": 1
+        "msgs": [
+            {
+                "senderLocation": [
+                    40.33,
+                    80.87
+                ],
+                "inResponseTo": 11,
+                "messageId": 8,
+                "authorId": 1,
+                "message": " like verizon the 3G is awesome:)"
+            }
+        ],
+        "authorId": 1
     }, {
-    	"msgs": [
-    		{
-    			"senderLocation": [
-    				48.09,
-    				81.01
-    			],
-    			"inResponseTo": 4,
-    			"messageId": 3,
-    			"authorId": 2,
-    			"message": " like samsung the plan is amazing"
-    		},
-    		{
-    			"senderLocation": [
-    				31.5,
-    				75.56
-    			],
-    			"inResponseTo": 1,
-    			"messageId": 6,
-    			"authorId": 2,
-    			"message": " like t-mobile its platform is mind-blowing"
-    		}
-    	],
-    	"authorId": 2
+        "msgs": [
+            {
+                "senderLocation": [
+                    48.09,
+                    81.01
+                ],
+                "inResponseTo": 4,
+                "messageId": 3,
+                "authorId": 2,
+                "message": " like samsung the plan is amazing"
+            },
+            {
+                "senderLocation": [
+                    31.5,
+                    75.56
+                ],
+                "inResponseTo": 1,
+                "messageId": 6,
+                "authorId": 2,
+                "message": " like t-mobile its platform is mind-blowing"
+            }
+        ],
+        "authorId": 2
     } ]
 
 Based on the three variable generation rules, the generated variable for the grouping key expression `message.authorId`
@@ -913,22 +913,22 @@ binding variables defined in the `FROM` clause of the current enclosing `SELECT`
 This query returns:
 
     [ {
-    	"msgs": [
-    		{
-    			"message": " like verizon the 3G is awesome:)"
-    		}
-    	],
-    	"uid": 1
+        "msgs": [
+            {
+                "message": " like verizon the 3G is awesome:)"
+            }
+        ],
+        "uid": 1
     }, {
-    	"msgs": [
-    		{
-    			"message": " like samsung the plan is amazing"
-    		},
-    		{
-    			"message": " like t-mobile its platform is mind-blowing"
-    		}
-    	],
-    	"uid": 2
+        "msgs": [
+            {
+                "message": " like samsung the plan is amazing"
+            },
+            {
+                "message": " like t-mobile its platform is mind-blowing"
+            }
+        ],
+        "uid": 2
     } ]
 
 Note that in the query above, in principle, `message` is not an in-scope variable in the `SELECT` clause.
@@ -994,11 +994,11 @@ This example returns:
 This query returns:
 
     [ {
-    	"uid": 1,
-    	"msgCnt": 5
+        "uid": 1,
+        "msgCnt": 5
     }, {
-    	"uid": 2,
-    	"msgCnt": 2
+        "uid": 2,
+        "msgCnt": 2
     } ]
 
 Notice how the query forms groups where each group involves a message author and their messages.
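
That aggregation query is not part of this hunk; under SQL++'s SQL-92 aggregation sugar, a query of roughly this shape would produce the `uid`/`msgCnt` result above (a sketch):

    SELECT uid, COUNT(msg) AS msgCnt
    FROM GleambookMessages msg
    GROUP BY msg.authorId AS uid;
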
@@ -1045,11 +1045,11 @@ The following query is such an example:
 This query outputs:
 
     [ {
-    	"authorId": 1,
-    	"$1": 5
+        "authorId": 1,
+        "$1": 5
     }, {
-    	"authorId": 2,
-    	"$1": 2
+        "authorId": 2,
+        "$1": 2
     } ]
 
 In principle, a `msg` reference in the query's `SELECT` clause would be "sugarized" as a collection
@@ -1074,11 +1074,11 @@ SQL++ also allows column aliases to be used as `GROUP BY` keys or `ORDER BY` key
 This query returns:
 
     [ {
-    	"$1": 5,
-    	"aid": 1
+        "$1": 5,
+        "aid": 1
     }, {
-    	"$1": 2,
-    	"aid": 2
+        "$1": 2,
+        "aid": 2
     } ]
 
 ## <a id="Where_having_clauses">WHERE clauses and HAVING clauses</a>
@@ -1101,63 +1101,63 @@ The following example returns all `GleambookUsers` ordered by their friend numbe
 This query returns:
 
       [ {
-      	"userSince": "2012-08-20T10:10:00.000Z",
-      	"friendIds": [
-      		2,
-      		3,
-      		6,
-      		10
-      	],
-      	"gender": "F",
-      	"name": "MargaritaStoddard",
-      	"nickname": "Mags",
-      	"alias": "Margarita",
-      	"id": 1,
-      	"employment": [
-      		{
-      			"organizationName": "Codetechno",
-      			"start-date": "2006-08-06"
-      		},
-      		{
-      			"end-date": "2010-01-26",
-      			"organizationName": "geomedia",
-      			"start-date": "2010-06-17"
-      		}
-      	]
+          "userSince": "2012-08-20T10:10:00.000Z",
+          "friendIds": [
+              2,
+              3,
+              6,
+              10
+          ],
+          "gender": "F",
+          "name": "MargaritaStoddard",
+          "nickname": "Mags",
+          "alias": "Margarita",
+          "id": 1,
+          "employment": [
+              {
+                  "organizationName": "Codetechno",
+                  "start-date": "2006-08-06"
+              },
+              {
+                  "end-date": "2010-01-26",
+                  "organizationName": "geomedia",
+                  "start-date": "2010-06-17"
+              }
+          ]
       }, {
-      	"userSince": "2012-07-10T10:10:00.000Z",
-      	"friendIds": [
-      		1,
-      		5,
-      		8,
-      		9
-      	],
-      	"name": "EmoryUnk",
-      	"alias": "Emory",
-      	"id": 3,
-      	"employment": [
-      		{
-      			"organizationName": "geomedia",
-      			"endDate": "2010-01-26",
-      			"startDate": "2010-06-17"
-      		}
-      	]
+          "userSince": "2012-07-10T10:10:00.000Z",
+          "friendIds": [
+              1,
+              5,
+              8,
+              9
+          ],
+          "name": "EmoryUnk",
+          "alias": "Emory",
+          "id": 3,
+          "employment": [
+              {
+                  "organizationName": "geomedia",
+                  "endDate": "2010-01-26",
+                  "startDate": "2010-06-17"
+              }
+          ]
       }, {
-      	"userSince": "2011-01-22T10:10:00.000Z",
-      	"friendIds": [
-      		1,
-      		4
-      	],
-      	"name": "IsbelDull",
-      	"nickname": "Izzy",
-      	"alias": "Isbel",
-      	"id": 2,
-      	"employment": [
-      		{
-      			"organizationName": "Hexviafind",
-      			"startDate": "2010-04-27"
-      		}
-      	]
+          "userSince": "2011-01-22T10:10:00.000Z",
+          "friendIds": [
+              1,
+              4
+          ],
+          "name": "IsbelDull",
+          "nickname": "Izzy",
+          "alias": "Isbel",
+          "id": 2,
+          "employment": [
+              {
+                  "organizationName": "Hexviafind",
+                  "startDate": "2010-04-27"
+              }
+          ]
       } ]
 
 ## <a id="Limit_clauses">LIMIT clauses</a>
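
The section's example query (whose result appears below) is elided from this hunk; as a generic sketch, `LIMIT` (optionally followed by `OFFSET`) simply truncates a query's result:

    SELECT VALUE user
    FROM GleambookUsers user
    ORDER BY user.id
    LIMIT 1;
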
@@ -1174,29 +1174,29 @@ The use of the `LIMIT` clause is illustrated in the next example.
 This query returns:
 
       [ {
-      	"userSince": "2012-08-20T10:10:00.000Z",
-      	"friendIds": [
-      		2,
-      		3,
-      		6,
-      		10
-      	],
-      	"gender": "F",
-      	"name": "MargaritaStoddard",
-      	"nickname": "Mags",
-      	"alias": "Margarita",
-      	"id": 1,
-      	"employment": [
-      		{
-      			"organizationName": "Codetechno",
-      			"start-date": "2006-08-06"
-      		},
-      		{
-      			"end-date": "2010-01-26",
-      			"organizationName": "geomedia",
-      			"start-date": "2010-06-17"
-      		}
-      	]
+          "userSince": "2012-08-20T10:10:00.000Z",
+          "friendIds": [
+              2,
+              3,
+              6,
+              10
+          ],
+          "gender": "F",
+          "name": "MargaritaStoddard",
+          "nickname": "Mags",
+          "alias": "Margarita",
+          "id": 1,
+          "employment": [
+              {
+                  "organizationName": "Codetechno",
+                  "start-date": "2006-08-06"
+              },
+              {
+                  "end-date": "2010-01-26",
+                  "organizationName": "geomedia",
+                  "start-date": "2010-06-17"
+              }
+          ]
       } ]
 
 ## <a id="With_clauses">WITH clauses</a>
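
The `WITH` example discussed below is outside this hunk; as a generic illustration only (the variable name `authorIds` is made up), a `WITH` clause binds a reusable expression ahead of the `SELECT`:

    WITH authorIds AS (
        SELECT VALUE msg.authorId
        FROM GleambookMessages msg
    )
    SELECT VALUE user.name
    FROM GleambookUsers user
    WHERE user.id IN authorIds;
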
@@ -1216,47 +1216,47 @@ The next query shows an example.
 This query returns:
 
     [ {
-    	"userSince": "2012-08-20T10:10:00.000Z",
-    	"friendIds": [
-    		2,
-    		3,
-    		6,
-    		10
-    	],
-    	"gender": "F",
-    	"name": "MargaritaStoddard",
-    	"nickname": "Mags",
-    	"alias": "Margarita",
-    	"id": 1,
-    	"employment": [
-    		{
-    			"organizationName": "Codetechno",
-    			"start-date": "2006-08-06"
-    		},
-    		{
-    			"end-date": "2010-01-26",
-    			"organizationName": "geomedia",
-    			"start-date": "2010-06-17"
-    		}
-    	]
+        "userSince": "2012-08-20T10:10:00.000Z",
+        "friendIds": [
+            2,
+            3,
+            6,
+            10
+        ],
+        "gender": "F",
+        "name": "MargaritaStoddard",
+        "nickname": "Mags",
+        "alias": "Margarita",
+        "id": 1,
+        "employment": [
+            {
+                "organizationName": "Codetechno",
+                "start-date": "2006-08-06"
+            },
+            {
+                "end-date": "2010-01-26",
+                "organizationName": "geomedia",
+                "start-date": "2010-06-17"
+            }
+        ]
     }, {
-    	"userSince": "2012-07-10T10:10:00.000Z",
-    	"friendIds": [
-    		1,
-    		5,
-    		8,
-    		9
-    	],
-    	"name": "EmoryUnk",
-    	"alias": "Emory",
-    	"id": 3,
-    	"employment": [
-    		{
-    			"organizationName": "geomedia",
-    			"endDate": "2010-01-26",
-    			"startDate": "2010-06-17"
-    		}
-    	]
+        "userSince": "2012-07-10T10:10:00.000Z",
+        "friendIds": [
+            1,
+            5,
+            8,
+            9
+        ],
+        "name": "EmoryUnk",
+        "alias": "Emory",
+        "id": 3,
+        "employment": [
+            {
+                "organizationName": "geomedia",
+                "endDate": "2010-01-26",
+                "startDate": "2010-06-17"
+            }
+        ]
     } ]
 
 The query is equivalent to the following, more complex, inlined form of the query:
@@ -1297,83 +1297,83 @@ Similar to `WITH` clauses, `LET` clauses can be useful when a (complex) expressi
 This query lists `GleambookUsers` that have posted `GleambookMessages` and shows all authored messages for each listed user. It returns:
 
     [ {
-    	"uname": "MargaritaStoddard",
-    	"messages": [
-    		{
-    			"senderLocation": [
-    				38.97,
-    				77.49
-    			],
-    			"inResponseTo": 1,
-    			"messageId": 11,
-    			"authorId": 1,
-    			"message": " can't stand at&t its plan is terrible"
-    		},
-    		{
-    			"senderLocation": [
-    				41.66,
-    				80.87
-    			],
-    			"inResponseTo": 4,
-    			"messageId": 2,
-    			"authorId": 1,
-    			"message": " dislike iphone its touch-screen is horrible"
-    		},
-    		{
-    			"senderLocation": [
-    				37.73,
-    				97.04
-    			],
-    			"inResponseTo": 2,
-    			"messageId": 4,
-    			"authorId": 1,
-    			"message": " can't stand at&t the network is horrible:("
-    		},
-    		{
-    			"senderLocation": [
-    				40.33,
-    				80.87
-    			],
-    			"inResponseTo": 11,
-    			"messageId": 8,
-    			"authorId": 1,
-    			"message": " like verizon the 3G is awesome:)"
-    		},
-    		{
-    			"senderLocation": [
-    				42.5,
-    				70.01
-    			],
-    			"inResponseTo": 12,
-    			"messageId": 10,
-    			"authorId": 1,
-    			"message": " can't stand motorola the touch-screen is terrible"
-    		}
-    	]
+        "uname": "MargaritaStoddard",
+        "messages": [
+            {
+                "senderLocation": [
+                    38.97,
+                    77.49
+                ],
+                "inResponseTo": 1,
+                "messageId": 11,
+                "authorId": 1,
+                "message": " can't stand at&t its plan is terrible"
+            },
+            {
+                "senderLocation": [
+                    41.66,
+                    80.87
+                ],
+                "inResponseTo": 4,
+                "messageId": 2,
+                "authorId": 1,
+                "message": " dislike iphone its touch-screen is horrible"
+            },
+            {
+                "senderLocation": [
+                    37.73,
+                    97.04
+                ],
+                "inResponseTo": 2,
+                "messageId": 4,
+                "authorId": 1,
+                "message": " can't stand at&t the network is horrible:("
+            },
+            {
+                "senderLocation": [
+                    40.33,
+                    80.87
+                ],
+                "inResponseTo": 11,
+                "messageId": 8,
+                "authorId": 1,
+                "message": " like verizon the 3G is awesome:)"
+            },
+            {
+                "senderLocation": [
+                    42.5,
+                    70.01
+                ],
+                "inResponseTo": 12,
+                "messageId": 10,
+                "authorId": 1,
+                "message": " can't stand motorola the touch-screen is terrible"
+            }
+        ]
     }, {
-    	"uname": "IsbelDull",
-    	"messages": [
-    		{
-    			"senderLocation": [
-    				31.5,
-    				75.56
-    			],
-    			"inResponseTo": 1,
-    			"messageId": 6,
-    			"authorId": 2,
-    			"message": " like t-mobile its platform is mind-blowing"
-    		},
-    		{
-    			"senderLocation": [
-    				48.09,
-    				81.01
-    			],
-    			"inResponseTo": 4,
-    			"messageId": 3,
-    			"authorId": 2,
-    			"message": " like samsung the plan is amazing"
-    		}
-    	]
+        "uname": "IsbelDull",
+        "messages": [
+            {
+                "senderLocation": [
+                    31.5,
+                    75.56
+                ],
+                "inResponseTo": 1,
+                "messageId": 6,
+                "authorId": 2,
+                "message": " like t-mobile its platform is mind-blowing"
+            },
+            {
+                "senderLocation": [
+                    48.09,
+                    81.01
+                ],
+                "inResponseTo": 4,
+                "messageId": 3,
+                "authorId": 2,
+                "message": " like samsung the plan is amazing"
+            }
+        ]
     } ]
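
The `LET`-based query that produced the result above lies outside this hunk; a sketch with that shape (the `EXISTS` filter is an assumption about how users without messages are excluded) is:

    SELECT u.name AS uname, messages
    FROM GleambookUsers u
    LET messages = (SELECT VALUE m
                    FROM GleambookMessages m
                    WHERE m.authorId = u.id)
    WHERE EXISTS messages;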
 
 This query is equivalent to the following query that does not use the `LET` clause:
@@ -1406,7 +1406,7 @@ This query returns:
     [
       " like t-mobile its platform is mind-blowing"
       , {
-    	"uname": "IsbelDull"
+        "uname": "IsbelDull"
     }, " like samsung the plan is amazing"
      ]
 
@@ -1432,24 +1432,24 @@ it retrieves an array of up to two "dislike" messages per user.
 For our sample data set, this query returns:
 
     [ {
-    	"msgs": [
-    		{
-    			"senderLocation": [
-    				41.66,
-    				80.87
-    			],
-    			"inResponseTo": 4,
-    			"messageId": 2,
-    			"authorId": 1,
-    			"message": " dislike iphone its touch-screen is horrible"
-    		}
-    	],
-    	"uid": 1
+        "msgs": [
+            {
+                "senderLocation": [
+                    41.66,
+                    80.87
+                ],
+                "inResponseTo": 4,
+                "messageId": 2,
+                "authorId": 1,
+                "message": " dislike iphone its touch-screen is horrible"
+            }
+        ],
+        "uid": 1
     }, {
-    	"msgs": [
+        "msgs": [
 
-    	],
-    	"uid": 2
+        ],
+        "uid": 2
     } ]
 
 Note that a subquery, like a top-level `SELECT` statement, always returns a collection -- regardless of where
@@ -1460,7 +1460,7 @@ The following matrix is a quick "SQL-92 compatibility cheat sheet" for SQL++.
 
 | Feature |  SQL++ | SQL-92 |
 |----------|--------|--------|
-| SELECT * | Returns nested records | Returns flattened concatenated records |
+| SELECT * | Returns nested objects | Returns flattened concatenated objects |
 | Subquery | Returns a collection  | The returned collection is cast into a scalar value if the subquery appears in a SELECT list or on one side of a comparison or as input to a function |
 | LEFT OUTER JOIN |  Fills in `MISSING`(s) for non-matches  |   Fills in `NULL`(s) for non-matches    |
 | UNION ALL       | Allows heterogeneous inputs and output | Input streams must be UNION-compatible and output field names are drawn from the first input stream
@@ -1475,5 +1475,5 @@ Morever, SQL++ offers the following additional features beyond SQL-92 (hence the
   * Schema-free: The query language does not assume the existence of a static schema for any data that it processes.
   * Correlated FROM terms: A right-side FROM term expression can refer to variables defined by FROM terms on its left.
   * Powerful GROUP BY: In addition to a set of aggregate functions as in standard SQL, the groups created by the `GROUP BY` clause are directly usable in nested queries and/or to obtain nested results.
-  * Generalized SELECT clause: A SELECT clause can return any type of collection, while in SQL-92, a `SELECT` clause has to return a (homogeneous) collection of records.
+  * Generalized SELECT clause: A SELECT clause can return any type of collection, while in SQL-92, a `SELECT` clause has to return a (homogeneous) collection of objects.
 

http://git-wip-us.apache.org/repos/asf/asterixdb/blob/10351a74/asterixdb/asterix-doc/src/main/markdown/sqlpp/5_ddl.md
----------------------------------------------------------------------
diff --git a/asterixdb/asterix-doc/src/main/markdown/sqlpp/5_ddl.md b/asterixdb/asterix-doc/src/main/markdown/sqlpp/5_ddl.md
index d236003..b6577ff 100644
--- a/asterixdb/asterix-doc/src/main/markdown/sqlpp/5_ddl.md
+++ b/asterixdb/asterix-doc/src/main/markdown/sqlpp/5_ddl.md
@@ -105,12 +105,12 @@ The following example creates a new dataverse named TinySocial if one does not a
 
 ### <a id="Types"> Types</a>
 
-    TypeSpecification    ::= "TYPE" FunctionOrTypeName IfNotExists "AS" RecordTypeDef
+    TypeSpecification    ::= "TYPE" FunctionOrTypeName IfNotExists "AS" ObjectTypeDef
     FunctionOrTypeName   ::= QualifiedName
     IfNotExists          ::= ( <IF> <NOT> <EXISTS> )?
-    TypeExpr             ::= RecordTypeDef | TypeReference | ArrayTypeDef | MultisetTypeDef
-    RecordTypeDef        ::= ( <CLOSED> | <OPEN> )? "{" ( RecordField ( "," RecordField )* )? "}"
-    RecordField          ::= Identifier ":" ( TypeExpr ) ( "?" )?
+    TypeExpr             ::= ObjectTypeDef | TypeReference | ArrayTypeDef | MultisetTypeDef
+    ObjectTypeDef        ::= ( <CLOSED> | <OPEN> )? "{" ( ObjectField ( "," ObjectField )* )? "}"
+    ObjectField          ::= Identifier ":" ( TypeExpr ) ( "?" )?
     NestedField          ::= Identifier ( "." Identifier )*
     IndexField           ::= NestedField ( ":" TypeReference )?
     TypeReference        ::= Identifier
@@ -120,17 +120,17 @@ The following example creates a new dataverse named TinySocial if one does not a
 The CREATE TYPE statement is used to create a new named datatype.
 This type can then be used to create stored collections or utilized when defining one or more other datatypes.
 Much more information about the data model is available in the [data model reference guide](datamodel.html).
-A new type can be a record type, a renaming of another type, an array type, or a multiset type.
-A record type can be defined as being either open or closed.
-Instances of a closed record type are not permitted to contain fields other than those specified in the create type statement.
-Instances of an open record type may carry additional fields, and open is the default for new types if neither option is specified.
+A new type can be an object type, a renaming of another type, an array type, or a multiset type.
+An object type can be defined as being either open or closed.
+Instances of a closed object type are not permitted to contain fields other than those specified in the create type statement.
+Instances of an open object type may carry additional fields, and open is the default for new types if neither option is specified.
 
-The following example creates a new record type called GleambookUser type.
+The following example creates a new object type called GleambookUserType.
 Since it is defined as (defaulting to) being an open type,
 instances will be permitted to contain more than what is specified in the type definition.
 The first four fields are essentially traditional typed name/value pairs (much like SQL fields).
 The friendIds field is a multiset of integers.
-The employment field is an array of instances of another named record type, EmploymentType.
+The employment field is an array of instances of another named object type, EmploymentType.
 
 ##### Example
 
@@ -143,7 +143,7 @@ The employment field is an array of instances of another named record type, Empl
       employment: [ EmploymentType ]
     };
 
-The next example creates a new record type, closed this time, called MyUserTupleType.
+The next example creates a new object type, closed this time, called MyUserTupleType.
 Instances of this closed type will not be permitted to have extra fields,
 although the alias field is marked as optional and may thus be NULL or MISSING in legal instances of the type.
 Note that the type of the id field in the example is UUID.
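
 A closed type along these lines might be declared roughly as follows. This is only a sketch: the id and alias fields come from the description above, while the name field and the concrete field types are illustrative.

     CREATE TYPE MyUserTupleType AS CLOSED {
       -- id is declared as uuid so it can later serve as an AUTOGENERATED primary key
       id:    uuid,
       -- alias is optional and may be NULL or MISSING in legal instances
       alias: string?,
       name:  string
     };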
@@ -178,16 +178,16 @@ This field type can be used if you want to have this field be an autogenerated-P
     CompactionPolicy     ::= Identifier
 
 The CREATE DATASET statement is used to create a new dataset.
-Datasets are named, multisets of record type instances;
+Datasets are named multisets of object type instances;
 they are where data lives persistently and are the usual targets for SQL++ queries.
 Datasets are typed, and the system ensures that their contents conform to their type definitions.
 An Internal dataset (the default kind) is a dataset whose content lives within and is managed by the system.
-It is required to have a specified unique primary key field which uniquely identifies the contained records.
-(The primary key is also used in secondary indexes to identify the indexed primary data records.)
+It is required to have a specified primary key field that uniquely identifies the contained objects.
+(The primary key is also used in secondary indexes to identify the indexed primary data objects.)
 
 Internal datasets contain several advanced options that can be specified when appropriate.
 One such option is that random primary key (UUID) values can be auto-generated by declaring the field to be UUID and putting "AUTOGENERATED" after the "PRIMARY KEY" identifier.
-In this case, unlike other non-optional fields, a value for the auto-generated PK field should not be provided at insertion time by the user since each record's primary key field value will be auto-generated by the system.
+In this case, unlike other non-optional fields, a value for the auto-generated PK field should not be provided at insertion time by the user since each object's primary key field value will be auto-generated by the system.
 
 Another advanced option, when creating an Internal dataset, is to specify the merge policy that controls which of the
 underlying LSM storage components are merged.
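
 For instance, a dataset definition might name a merge (compaction) policy along the lines sketched below. This is illustrative only: the USING COMPACTION POLICY clause, the prefix policy, and its parameter values are assumptions to be checked against the full grammar above, and the GleambookMessages dataset and GleambookMessageType type are the ones used elsewhere in this manual.

     CREATE DATASET GleambookMessages(GleambookMessageType)
       PRIMARY KEY messageId
       USING COMPACTION POLICY prefix
       -- example policy parameters; actual names and values depend on the chosen policy
       (("max-mergable-component-size"="16384"),("max-tolerance-component-count"="3"));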
@@ -214,17 +214,17 @@ making it possible to query "legacy" file data (e.g., Hive data) without having
 When defining an External dataset, an appropriate adapter type must be selected for the desired external data.
 (See the [Guide to External Data](externaldata.html) for more information on the available adapters.)
 
-The following example creates an Internal dataset for storing FacefookUserType records.
+The following example creates an Internal dataset for storing GleambookUserType objects.
 It specifies that their id field is their primary key.
 
 #### Example
 
     CREATE INTERNAL DATASET GleambookUsers(GleambookUserType) PRIMARY KEY id;
 
-The next example creates another Internal dataset (the default kind when no dataset kind is specified) for storing MyUserTupleType records.
+The next example creates another Internal dataset (the default kind when no dataset kind is specified) for storing MyUserTupleType objects.
 It specifies that the id field should be used as the primary key for the dataset.
 It also specifies that the id field is an auto-generated field,
-meaning that a randomly generated UUID value should be assigned to each incoming record by the system.
+meaning that a randomly generated UUID value should be assigned to each incoming object by the system.
 (A user should therefore not attempt to provide a value for this field.)
 Note that the id field's declared type must be UUID in this case.
 
@@ -232,7 +232,7 @@ Note that the id field's declared type must be UUID in this case.
 
     CREATE DATASET MyUsers(MyUserTupleType) PRIMARY KEY id AUTOGENERATED;
 
-The next example creates an External dataset for querying LineItemType records.
+The next example creates an External dataset for querying LineItemType objects.
 The choice of the `hdfs` adapter means that this dataset's data actually resides in HDFS.
 The example CREATE statement also provides parameters used by the hdfs adapter:
 the URL and path needed to locate the data in HDFS and a description of the data format.
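
 A sketch of such a CREATE statement is shown below. The host, port, and path are placeholders, and the adapter parameter names follow the pattern documented in the external-data guide rather than being taken verbatim from the example referred to above.

     CREATE EXTERNAL DATASET LineItem(LineItemType) USING hdfs
       (("hdfs"="hdfs://HOST:PORT"),
        ("path"="/path/to/lineitem-data"),
        ("input-format"="text-input-format"),
        -- delimited text with '|' as the field separator
        ("format"="delimited-text"),
        ("delimiter"="|"));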
@@ -264,7 +264,7 @@ An indexed field is not required to be part of the datatype associated with a da
 is declared as open **and** if the field's type is provided along with its name and if the `ENFORCED` keyword is
 specified at the end of the index definition.
 `ENFORCING` an open field introduces a check that makes sure that the actual type of the indexed field
-(if the optional field exists in the record) always matches this specified (open) field type.
+(if the optional field exists in the object) always matches this specified (open) field type.
 
 The following example creates a btree index called gbAuthorIdx on the authorId field of the GleambookMessages dataset.
 This index can be useful for accelerating exact-match queries, range search queries, and joins involving the author-id
@@ -282,7 +282,7 @@ This index can be useful for accelerating exact-match queries, range search quer
     CREATE INDEX gbSendTimeIdx ON GleambookMessages(sendTime: datetime?) TYPE BTREE ENFORCED;
 
 The following example creates a btree index called crpUserScrNameIdx on screenName,
-a nested field residing within a record-valued user field in the ChirpMessages dataset.
+a nested field residing within an object-valued user field in the ChirpMessages dataset.
 This index can be useful for accelerating exact-match queries, range search queries,
 and joins involving the nested screenName field.
 Such nested fields must be singular, i.e., one cannot index through (or on) an array-valued field.
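
 A sketch of such an index definition, assuming the nested path is written as user.screenName (per the NestedField rule in the grammar above), might be:

     -- secondary B-tree index on a singular nested field
     CREATE INDEX crpUserScrNameIdx ON ChirpMessages(user.screenName) TYPE BTREE;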
@@ -388,7 +388,7 @@ The data to be inserted comes from a SQL++ query expression.
 This expression can be as simple as a constant expression, or in general it can be any legal SQL++ query.
 If the target dataset has an auto-generated primary key field, the insert statement should not include a
 value for that field in it.
-(The system will automatically extend the provided record with this additional field and a corresponding value.)
+(The system will automatically extend the provided object with this additional field and a corresponding value.)
 Insertion will fail if the dataset already has data with the primary key value(s) being inserted.
 
 Inserts are processed transactionally by the system.
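
 As a simple illustration, inserting a single constant object into the GleambookUsers dataset created earlier might look like the sketch below; the field values are made up for the example.

     INSERT INTO GleambookUsers
     (
       -- the id value must not collide with an existing primary key in the dataset
       {"id": 997, "alias": "Woodrow", "name": "WoodrowNehling"}
     );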

http://git-wip-us.apache.org/repos/asf/asterixdb/blob/10351a74/asterixdb/asterix-doc/src/site/markdown/aql/externaldata.md
----------------------------------------------------------------------
diff --git a/asterixdb/asterix-doc/src/site/markdown/aql/externaldata.md b/asterixdb/asterix-doc/src/site/markdown/aql/externaldata.md
index 5095b97..1b3a858 100644
--- a/asterixdb/asterix-doc/src/site/markdown/aql/externaldata.md
+++ b/asterixdb/asterix-doc/src/site/markdown/aql/externaldata.md
@@ -34,10 +34,10 @@
 Data that needs to be processed by AsterixDB could be residing outside AsterixDB storage. Examples include data files on a distributed file system such as HDFS or on the local file system of a machine that is part of an AsterixDB cluster. For AsterixDB to process such data, an end-user may create a regular dataset in AsterixDB (a.k.a. an internal dataset) and load the dataset with the data. AsterixDB also supports "external datasets" so that it is not necessary to "load" all data prior to using it. This also avoids creating multiple copies of data and the need to keep the copies in sync.
 
 ### <a id="IntroductionAdapterForAnExternalDataset">Adapter for an External Dataset</a> <font size="4"><a href="#toc">[Back to TOC]</a></font> ###
-External data is accessed using wrappers (adapters in AsterixDB) that abstract away the mechanism of connecting with an external service, receiving its data and transforming the data into ADM records that are understood by AsterixDB. AsterixDB comes with built-in adapters for common storage systems such as HDFS or the local file system.
+External data is accessed using wrappers (adapters in AsterixDB) that abstract away the mechanism of connecting with an external service, receiving its data and transforming the data into ADM objects that are understood by AsterixDB. AsterixDB comes with built-in adapters for common storage systems such as HDFS or the local file system.
 
 ### <a id="BuiltinAdapters">Builtin Adapters</a> <font size="4"><a href="#toc">[Back to TOC]</a></font> ###
-AsterixDB offers a set of builtin adapters that can be used to query external data or for loading data into an internal dataset using a load statement or a data feed. Each adapter requires specifying the `format` of the data in order to be able to parse records correctly. Using adapters with feeds, the parameter `output-type` must also be specified.
+AsterixDB offers a set of builtin adapters that can be used to query external data or to load data into an internal dataset using a load statement or a data feed. Each adapter requires specifying the `format` of the data in order to be able to parse objects correctly. When using adapters with feeds, the parameter `output-type` must also be specified.
 
 Following is a listing of existing built-in adapters and their configuration parameters:
 
@@ -76,12 +76,12 @@ Following is a listing of existing built-in adapters and their configuration par
 As an example, we consider the Lineitem dataset from the [TPCH schema](http://www.openlinksw.com/dataspace/doc/dav/wiki/Main/VOSTPCHLinkedData/tpch.sql).
 We assume that you have successfully created an AsterixDB instance following the instructions at [Installing AsterixDB Using Managix](../install.html). _For constructing an example, we assume a single-machine setup._
 
-Similar to a regular dataset, an external dataset has an associated datatype. We shall first create the datatype associated with each record in Lineitem data. Paste the following in the
+Similar to a regular dataset, an external dataset has an associated datatype. We shall first create the datatype associated with each object in Lineitem data. Paste the following in the
 query textbox on the webpage at http://127.0.0.1:19001 and hit 'Execute'.
 
         create dataverse ExternalFileDemo;
         use dataverse ExternalFileDemo;
-        
+
         create type LineitemType as closed {
           l_orderkey:int32,
           l_partkey: int32,
@@ -125,8 +125,8 @@ Above, the definition is not complete as we need to provide a set of parameters
 </tr>
 <tr>
   <td> path </td>
-  <td> A fully qualified path of the form <tt>host://&lt;absolute path&gt;</tt>. 
-  Use a comma separated list if there are multiple files. 
+  <td> A fully qualified path of the form <tt>host://&lt;absolute path&gt;</tt>.
+  Use a comma separated list if there are multiple files.
   E.g. <tt>host1://&lt;absolute path&gt;</tt>, <tt>host2://&lt;absolute path&gt;</tt> and so forth. </td>
 </tr>
 <tr>
@@ -143,7 +143,7 @@ We *complete the create dataset statement* as follows.
 
 
         use dataverse ExternalFileDemo;
-        
+
         create external dataset Lineitem(LineitemType)
         using localfs
         (("path"="127.0.0.1://SOURCE_PATH"),
@@ -172,8 +172,8 @@ Next we move over to the the section [Writing Queries against an External Datase
 #### 2) Data file resides on an HDFS instance ####
 Prerequisite: It is required that the Namenode and HDFS Datanodes are reachable from the hosts that form the AsterixDB cluster. AsterixDB provides a built-in adapter for data residing on HDFS. The HDFS adapter can be referred to (in AQL) by its alias - 'hdfs'. We can create an external dataset named Lineitem and associate the HDFS adapter with it as follows:
 
-		create external dataset Lineitem(LineitemType) 
-		using hdfs((\u201chdfs\u201d:\u201dhdfs://localhost:54310\u201d),(\u201cpath\u201d:\u201d/asterix/Lineitem.tbl\u201d),...,(\u201cinput- format\u201d:\u201drc-format\u201d));
+        create external dataset Lineitem(LineitemType)
+        using hdfs(("hdfs"="hdfs://localhost:54310"),("path"="/asterix/Lineitem.tbl"),...,("input-format"="rc-input-format"));
 
 The expected parameters are described below:
 
@@ -191,7 +191,7 @@ The expected parameters are described below:
   <td> The absolute path to the source HDFS file or directory. Use a comma separated list if there are multiple files or directories. </td></tr>
 <tr>
   <td> input-format </td>
-  <td> The associated input format. Use 'text-input-format' for text files , 'sequence-input-format' for hadoop sequence files, 'rc-input-format' for Hadoop Record Columnar files, or a fully qualified name of an implementation of org.apache.hadoop.mapred.InputFormat. </td>
+  <td> The associated input format. Use 'text-input-format' for text files, 'sequence-input-format' for Hadoop sequence files, 'rc-input-format' for Hadoop Record Columnar (RC) files, or a fully qualified name of an implementation of org.apache.hadoop.mapred.InputFormat. </td>
 </tr>
 <tr>
   <td> format </td>
@@ -203,11 +203,11 @@ The expected parameters are described below:
 </tr>
 <tr>
   <td> parser </td>
-  <td> The parser used to parse HDFS records if the format is 'binary'. Use 'hive- parser' for data deserialized by a Hive Serde (AsterixDB can understand deserialized Hive objects) or a fully qualified class name of user- implemented parser that implements the interface org.apache.asterix.external.input.InputParser. </td>
+  <td> The parser used to parse HDFS objects if the format is 'binary'. Use 'hive-parser' for data deserialized by a Hive SerDe (AsterixDB can understand deserialized Hive objects) or a fully qualified class name of a user-implemented parser that implements the interface org.apache.asterix.external.input.InputParser. </td>
 </tr>
 <tr>
   <td> hive-serde </td>
-  <td> The Hive serde is used to deserialize HDFS records if format is binary and the parser is hive-parser. Use a fully qualified name of a class implementation of org.apache.hadoop.hive.serde2.SerDe. </td>
+  <td> The Hive SerDe is used to deserialize HDFS objects if the format is 'binary' and the parser is 'hive-parser'. Use a fully qualified name of a class implementation of org.apache.hadoop.hive.serde2.SerDe. </td>
 </tr>
 <tr>
   <td> local-socket-path </td>
@@ -218,11 +218,11 @@ The expected parameters are described below:
 *Difference between 'input-format' and 'format'*
 
 *input-format*: Files stored under HDFS have an associated storage format. For example,
-TextInputFormat represents plain text files. SequenceFileInputFormat indicates binary compressed files. RCFileInputFormat corresponds to records stored in a record columnar fashion. The parameter \u2018input-format\u2019 is used to distinguish between these and other HDFS input formats.
+TextInputFormat represents plain text files. SequenceFileInputFormat indicates binary compressed files. RCFileInputFormat corresponds to records stored in a record-columnar fashion. The parameter 'input-format' is used to distinguish between these and other HDFS input formats.
 
 *format*: The parameter 'format' refers to the type of the data contained in the file. For example, data contained in a file could be in JSON or ADM format, could be in delimited-text with fields separated by a delimiting character, or could be in binary format.
 
-As an example. consider the [data file](../data/lineitem.tbl).  The file is a text file with each line representing a record. The fields in each record are separated by the '|' character.
+As an example, consider the [data file](../data/lineitem.tbl). The file is a text file with each line representing an object. The fields in each object are separated by the '|' character.
 
 We assume the HDFS URL to be hdfs://localhost:54310. We further assume that the example data file is copied to HDFS at a path denoted by "/asterix/Lineitem.tbl".
 
@@ -231,7 +231,7 @@ The complete set of parameters for our example file are as follows. ((\u201chdfs\u201d
 
 #### Using the Hive Parser ####
 
-if a user wants to create an external dataset that uses hive-parser to parse HDFS records, it is important that the datatype associated with the dataset matches the actual data in the Hive table for the correct initialization of the Hive SerDe. Here is the conversion from the supported Hive data types to AsterixDB data types:
+If a user wants to create an external dataset that uses hive-parser to parse HDFS objects, it is important that the datatype associated with the dataset matches the actual data in the Hive table for the correct initialization of the Hive SerDe. Here is the conversion from the supported Hive data types to AsterixDB data types:
 
 <table>
 <tr>
@@ -280,7 +280,7 @@ if a user wants to create an external dataset that uses hive-parser to parse HDF
 </tr>
 <tr>
   <td>STRUCT</td>
-  <td>Nested Record</td>
+  <td>Nested Object</td>
 </tr>
 <tr>
   <td>LIST</td>
@@ -293,25 +293,25 @@ if a user wants to create an external dataset that uses hive-parser to parse HDF
 
 *Example 1*: We can modify the create external dataset statement as follows:
 
-		create external dataset Lineitem('LineitemType)
-		using hdfs(("hdfs"="hdfs://localhost:54310"),("path"="/asterix/Lineitem.tbl"),("input-format"="text- input-format"),("format"="delimited-text"),("delimiter"="|"));
+        create external dataset Lineitem(LineitemType)
+        using hdfs(("hdfs"="hdfs://localhost:54310"),("path"="/asterix/Lineitem.tbl"),("input-format"="text-input-format"),("format"="delimited-text"),("delimiter"="|"));
 
-*Example 2*: Here, we create an external dataset of lineitem records stored in sequence files that has content in ADM format:
+*Example 2*: Here, we create an external dataset of lineitem objects stored in sequence files that have content in ADM format:
 
-		create external dataset Lineitem('LineitemType) 
-		using hdfs(("hdfs"="hdfs://localhost:54310"),("path"="/asterix/SequenceLineitem.tbl"),("input- format"="sequence-input-format"),("format"="adm"));
+        create external dataset Lineitem(LineitemType)
+        using hdfs(("hdfs"="hdfs://localhost:54310"),("path"="/asterix/SequenceLineitem.tbl"),("input-format"="sequence-input-format"),("format"="adm"));
 
-*Example 3*: Here, we create an external dataset of lineitem records stored in record-columnar files that has content in binary format parsed using hive-parser with hive ColumnarSerde:
+*Example 3*: Here, we create an external dataset of lineitem objects stored in record-columnar (RC) files whose content is in binary format, parsed using hive-parser with the Hive ColumnarSerde:
 
-		create external dataset Lineitem('LineitemType)
-		using hdfs(("hdfs"="hdfs://localhost:54310"),("path"="/asterix/RCLineitem.tbl"),("input-format"="rc-input-format"),("format"="binary"),("parser"="hive-parser"),("hive- serde"="org.apache.hadoop.hive.serde2.columnar.ColumnarSerde"));
+        create external dataset Lineitem(LineitemType)
+        using hdfs(("hdfs"="hdfs://localhost:54310"),("path"="/asterix/RCLineitem.tbl"),("input-format"="rc-input-format"),("format"="binary"),("parser"="hive-parser"),("hive-serde"="org.apache.hadoop.hive.serde2.columnar.ColumnarSerde"));
 
 ## <a id="WritingQueriesAgainstAnExternalDataset">Writing Queries against an External Dataset</a> <font size="4"><a href="#toc">[Back to TOC]</a></font> ##
 You may write AQL queries against an external dataset in exactly the same way that queries are written against internal datasets. The following is an example of an AQL query that applies a filter and returns an ordered result.
 
 
         use dataverse ExternalFileDemo;
-        
+
         for $c in dataset('Lineitem')
         where $c.l_orderkey <= 3
         order by $c.l_orderkey, $c.l_linenumber
@@ -321,25 +321,25 @@ You may write AQL queries against an external dataset in exactly the same way th
 AsterixDB supports building B-Tree and R-Tree indexes over static data stored in the Hadoop Distributed File System.
 To create an index, first create an external dataset over the data as follows
 
-		create external dataset Lineitem(LineitemType) 
-		using hdfs(("hdfs"="hdfs://localhost:54310"),("path"="/asterix/Lineitem.tbl"),("input-format"="text-input- format"),("format"="delimited-text"),("delimiter"="|"));
+        create external dataset Lineitem(LineitemType)
+        using hdfs(("hdfs"="hdfs://localhost:54310"),("path"="/asterix/Lineitem.tbl"),("input-format"="text-input- format"),("format"="delimited-text"),("delimiter"="|"));
 
 You can then create a B-Tree index on this dataset instance as if the dataset was internally stored as follows:
 
-		create index PartkeyIdx on Lineitem(l_partkey);
+        create index PartkeyIdx on Lineitem(l_partkey);
 
 You could also create an R-Tree index as follows:
 
-		\ufffccreate index IndexName on DatasetName(attribute-name) type rtree;
+        create index IndexName on DatasetName(attribute-name) type rtree;
 
 After building the indexes, the AsterixDB query compiler can use them to access the dataset and answer queries in a more cost-effective manner.
 AsterixDB can read all HDFS input formats, but indexes over external datasets can currently be built only for HDFS datasets with 'text-input-format', 'sequence-input-format' or 'rc-input-format'.
 
 ## <a id="ExternalDataSnapshots">External Data Snapshots</a> <font size="4"><a href="#toc">[Back to TOC]</a></font> ##
-An external data snapshot represents the status of a dataset's files in HDFS at a point in time. Upon creating the first index over an external dataset, AsterixDB captures and stores a snapshot of the dataset in HDFS. Only records present at the snapshot capture time are indexed, and any additional indexes created afterwards will only contain data that was present at the snapshot capture time thus preserving consistency across all indexes of a dataset.
+An external data snapshot represents the status of a dataset's files in HDFS at a point in time. Upon creating the first index over an external dataset, AsterixDB captures and stores a snapshot of the dataset in HDFS. Only objects present at the snapshot capture time are indexed, and any additional indexes created afterwards will only contain data that was present at the snapshot capture time, thus preserving consistency across all indexes of a dataset.
 To update all indexes of an external dataset and advance the snapshot time to be the present time, a user can use the refresh external dataset command as follows:
 
-		refresh external dataset DatasetName;
+        refresh external dataset DatasetName;
 
 After a refresh operation commits, all of the dataset's indexes will reflect the status of the data as of the new snapshot capture time.
 
@@ -357,7 +357,7 @@ Q. I created an index over an external dataset and then added some data to my HD
 
 A. No, queries' results are access-path independent, and the stored snapshot is used to determine which data are going to be included when processing queries.
 
-Q. I created an index over an external dataset and then deleted some of my dataset's files in HDFS, Will indexed data access still return the records in deleted files?
+Q. I created an index over an external dataset and then deleted some of my dataset's files in HDFS. Will indexed data access still return the objects in the deleted files?
 
 A. No. When AsterixDB accesses external data, with or without the use of indexes, it only accesses files present in the file system at runtime.
 

http://git-wip-us.apache.org/repos/asf/asterixdb/blob/10351a74/asterixdb/asterix-doc/src/site/markdown/aql/filters.md
----------------------------------------------------------------------
diff --git a/asterixdb/asterix-doc/src/site/markdown/aql/filters.md b/asterixdb/asterix-doc/src/site/markdown/aql/filters.md
index 9a1fc4c..24461f3 100644
--- a/asterixdb/asterix-doc/src/site/markdown/aql/filters.md
+++ b/asterixdb/asterix-doc/src/site/markdown/aql/filters.md
@@ -72,7 +72,7 @@ the `send-time` field, the only available option for AsterixDB would
 be to scan the whole `TweetMessages` dataset and then apply the
 predicate as a post-processing step. However, if disk components of
 the primary index were tagged with the minimum and maximum timestamp
-values of the records they contain, we could utilize the tagged
+values of the objects they contain, we could utilize the tagged
 information to directly access the primary index and prune components
 that do not match the query predicate. Thus, we could save substantial
 cost by avoiding scanning the whole dataset and only access the