Posted to commits@knox.apache.org by mo...@apache.org on 2018/07/03 19:13:37 UTC

svn commit: r1835012 [5/5] - in /knox: site/books/knox-0-10-0/ site/books/knox-0-11-0/ site/books/knox-0-12-0/ site/books/knox-0-13-0/ site/books/knox-0-14-0/ site/books/knox-0-3-0/ site/books/knox-0-4-0/ site/books/knox-0-5-0/ site/books/knox-0-6-0/ s...

Modified: knox/trunk/books/1.1.0/quick_start.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/1.1.0/quick_start.md?rev=1835012&r1=1835011&r2=1835012&view=diff
==============================================================================
--- knox/trunk/books/1.1.0/quick_start.md (original)
+++ knox/trunk/books/1.1.0/quick_start.md Tue Jul  3 19:13:36 2018
@@ -41,7 +41,7 @@ Use the command below to check the versi
 
 #### Hadoop ####
 
-Knox 1.1.0 supports Hadoop 3.x, the quick start instructions assume a Hadoop 2.x virtual machine based environment.
+Knox 1.1.0 supports Hadoop 2.x and 3.x; the quick start instructions assume a Hadoop 2.x virtual machine based environment.
 
 
 ### 2 - Download Hadoop 2.x VM ###
@@ -72,11 +72,11 @@ See the NOTICE file contained in each re
 
 ### Verify ###
 
-While recommended, verify is an optional step. You can verify the integrity of any downloaded files using the PGP signatures.
+While recommended, verification of signatures is an optional step. You can verify the integrity of any downloaded files using the PGP signatures.
 Please read [Verifying Apache HTTP Server Releases](http://httpd.apache.org/dev/verification.html) for more information on why you should verify our releases.
 
 The PGP signatures can be verified using PGP or GPG.
-First download the [KEYS][keys] file as well as the .asc signature files for the relevant release packages.
+First download the [KEYS][keys] file as well as the `.asc` signature files for the relevant release packages.
 Make sure you get these files from the main distribution directory linked above, rather than from a mirror.
 Then verify the signatures using one of the methods below.
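+
+For example, a typical GPG flow looks like the sketch below (the artifact names
+assume the 1.1.0 binary release; adjust them to the files you downloaded):
+
+    # Import the Knox release signing keys, then check the detached signature.
+    gpg --import KEYS
+    gpg --verify knox-1.1.0.zip.asc knox-1.1.0.zip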
 
@@ -126,20 +126,20 @@ making the files in the bin directory ex
 
 ### 7 - Create the Master Secret
 
-Run the knoxcli create-master command in order to persist the master secret
+Run the `knoxcli.sh create-master` command in order to persist the master secret
 that is used to protect the key and credential stores for the gateway instance.
 
     cd {GATEWAY_HOME}
     bin/knoxcli.sh create-master
 
-The cli will prompt you for the master secret (i.e. password).
+The CLI will prompt you for the master secret (i.e. password).
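+
+If a master secret was already created and needs to be replaced, the CLI
+supports a `--force` flag. A sketch (this overwrites the persisted secret, so
+use it with care):
+
+    cd {GATEWAY_HOME}
+    bin/knoxcli.sh create-master --force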
 
-### 7 - Start Knox  ###
+### 8 - Start Knox ###
 
 The gateway can be started using the provided shell script.
 
 The server will discover the persisted master secret during startup and complete the setup process for demo installs.
-A demo install will consist of a knox gateway instance with an identity certificate for localhost.
+A demo install will consist of a Knox gateway instance with an identity certificate for localhost.
 This will require clients to be on the same machine or to turn off hostname verification.
 For more involved deployments, see the Knox CLI section of this document for additional configuration options,
 including the ability to create a self-signed certificate for a specific hostname.
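+
+As a sketch, creating such a certificate with the Knox CLI looks like the
+following (the hostname value is a placeholder for your gateway host):
+
+    cd {GATEWAY_HOME}
+    bin/knoxcli.sh create-cert --hostname gateway.example.com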
@@ -148,25 +148,25 @@ including the ability to create a self-s
     bin/gateway.sh start
 
 When starting the gateway this way, the process will be run in the background.
-The log files will be written to {GATEWAY_HOME}/logs and the process ID files (PIDS) will b written to {GATEWAY_HOME}/pids.
+The log files will be written to `{GATEWAY_HOME}/logs` and the process ID files (PIDs) will be written to `{GATEWAY_HOME}/pids`.
 
-In order to stop a gateway that was started with the script use this command.
+In order to stop a gateway that was started with the script use this command:
 
     cd {GATEWAY_HOME}
     bin/gateway.sh stop
 
-If for some reason the gateway is stopped other than by using the command above you may need to clear the tracking PID.
+If for some reason the gateway is stopped other than by using the command above, you may need to clear the tracking PID:
 
     cd {GATEWAY_HOME}
     bin/gateway.sh clean
 
-__NOTE: This command will also clear any .out and .err file from the {GATEWAY_HOME}/logs directory so use this with caution.__
+__NOTE: This command will also clear any `.out` and `.err` files from the `{GATEWAY_HOME}/logs` directory, so use this with caution.__
 
 
-### 8 - Do Hadoop with Knox
+### 9 - Access Hadoop with Knox
 
 #### Invoke the LISTSTATUS operation on WebHDFS via the gateway.
-This will return a directory listing of the root (i.e. /) directory of HDFS.
+This will return a directory listing of the root (i.e. `/`) directory of HDFS.
 
     curl -i -k -u guest:guest-password -X GET \
         'https://localhost:8443/gateway/sandbox/webhdfs/v1/?op=LISTSTATUS'
@@ -174,7 +174,7 @@ This will return a directory listing of
 The results of the above command should be something along the lines of the output below.
 The exact information returned depends on the content of HDFS in your Hadoop cluster.
 Successfully executing this command at a minimum proves that the gateway is properly configured to provide access to WebHDFS.
-It does not necessarily provide that any of the other services are correct configured to be accessible.
+It does not necessarily mean that any of the other services are correctly configured to be accessible.
 To validate that, see the sections for the individual services in #[Service Details].
 
     HTTP/1.1 200 OK
@@ -195,7 +195,7 @@ To validate that see the sections for th
         'https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp/LICENSE?op=CREATE'
 
     curl -i -k -u guest:guest-password -T LICENSE -X PUT \
-        '{Value of Location header from response   above}'
+        '{Value of Location header from response above}'
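+
+The two-step exchange above can also be scripted by capturing the `Location`
+header from the first response, e.g. (a sketch; header parsing details may
+vary with your curl version):
+
+    LOCATION=$(curl -s -i -k -u guest:guest-password -X PUT \
+        'https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp/LICENSE?op=CREATE' \
+        | grep -i '^Location:' | awk '{print $2}' | tr -d '\r')
+    curl -i -k -u guest:guest-password -T LICENSE -X PUT "$LOCATION"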
 
 #### Get a file in HDFS via Knox.
 

Modified: knox/trunk/books/1.1.0/service_config.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/1.1.0/service_config.md?rev=1835012&r1=1835011&r2=1835012&view=diff
==============================================================================
--- knox/trunk/books/1.1.0/service_config.md (original)
+++ knox/trunk/books/1.1.0/service_config.md Tue Jul  3 19:13:36 2018
@@ -19,16 +19,16 @@
 
 It is possible to override a few of the global configuration settings provided in gateway-site.xml at the service level.
 These overrides are specified as name/value pairs within the \<service> elements of a particular service.
-The overidden settings apply only to that service.
+The overridden settings apply only to that service.
 
 The following table shows the common configuration settings available at the service level via service level parameters.
 Individual services may support additional service level parameters.
 
 Property | Description | Default
 ---------|-------------|---------
-httpclient.maxConnections|The maximum number of connections that a single httpclient will maintain to a single host:port.  The default is 32.|32
-httpclient.connectionTimeout|The amount of time to wait when attempting a connection. The natural unit is milliseconds but a 's' or 'm' suffix may be used for seconds or minutes respectively. The default timeout is system dependent. | System Dependent
-httpclient.socketTimeout|The amount of time to wait for data on a socket before aborting the connection. The natural unit is milliseconds but a 's' or 'm' suffix may be used for seconds or minutes respectively. The default timeout is system dependent but is likely to be indefinite. | System Dependent
+httpclient.maxConnections    | The maximum number of connections that a single httpclient will maintain to a single host:port. | 32
+httpclient.connectionTimeout | The amount of time to wait when attempting a connection. The natural unit is milliseconds, but a 's' or 'm' suffix may be used for seconds or minutes respectively. The default timeout is system dependent. | System Dependent
+httpclient.socketTimeout     | The amount of time to wait for data on a socket before aborting the connection. The natural unit is milliseconds, but a 's' or 'm' suffix may be used for seconds or minutes respectively. The default timeout is system dependent but is likely to be indefinite. | System Dependent
 
 The example below demonstrates how these service level parameters are used.
 
@@ -39,4 +39,3 @@ The example below demonstrates how these
              <value>180s</value>
          </param>
     </service>
-
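+
+For reference, a complete `<service>` element carrying both timeout overrides
+might look like the following sketch (the role and URL are placeholders):
+
+    <service>
+        <role>WEBHDFS</role>
+        <url>http://localhost:50070/webhdfs</url>
+        <param>
+            <name>httpclient.connectionTimeout</name>
+            <value>5s</value>
+        </param>
+        <param>
+            <name>httpclient.socketTimeout</name>
+            <value>180s</value>
+        </param>
+    </service>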

Modified: knox/trunk/books/1.1.0/service_hbase.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/1.1.0/service_hbase.md?rev=1835012&r1=1835011&r2=1835012&view=diff
==============================================================================
--- knox/trunk/books/1.1.0/service_hbase.md (original)
+++ knox/trunk/books/1.1.0/service_hbase.md Tue Jul  3 19:13:36 2018
@@ -703,5 +703,3 @@ And for the service configuration itself
     </service>
 
 Please note that there is no `<url>` tag specified here as the URLs for the HBase servers are obtained from ZooKeeper.
-
-

Modified: knox/trunk/books/1.1.0/service_hive.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/1.1.0/service_hive.md?rev=1835012&r1=1835011&r2=1835012&view=diff
==============================================================================
--- knox/trunk/books/1.1.0/service_hive.md (original)
+++ knox/trunk/books/1.1.0/service_hive.md Tue Jul  3 19:13:36 2018
@@ -58,8 +58,8 @@ By default the gateway is configured to
 #### Hive JDBC URL Mapping ####
 
 | ------- | ------------------------------------------------------------------------------- |
-| Gateway | jdbc:hive2://{gateway-host}:{gateway-port}/;ssl=true;sslTrustStore={gateway-trust-store-path};trustStorePassword={gateway-trust-store-password};transportMode=http;httpPath={gateway-path}/{cluster-name}/hive|
-| Cluster |`http://{hive-host}:{hive-port}/{hive-path}`|
+| Gateway | `jdbc:hive2://{gateway-host}:{gateway-port}/;ssl=true;sslTrustStore={gateway-trust-store-path};trustStorePassword={gateway-trust-store-password};transportMode=http;httpPath={gateway-path}/{cluster-name}/hive` |
+| Cluster | `http://{hive-host}:{hive-port}/{hive-path}` |
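+
+For instance, a Beeline connection through the gateway might look like the
+following sketch (host, truststore path and credentials are placeholders):
+
+    beeline -u "jdbc:hive2://localhost:8443/;ssl=true;sslTrustStore=/tmp/gateway-client-trust.jks;trustStorePassword=changeit;transportMode=http;httpPath=gateway/sandbox/hive" \
+        -n guest -p guest-password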
 
 #### Hive Examples ####
 
@@ -270,8 +270,8 @@ Expected output:
 ### HiveServer2 HA ###
 
 Knox provides basic failover functionality for calls made to Hive Server when more than one HiveServer2 instance is
-installed in the cluster and registered with the same Zookeeper ensemble. The HA functionality in this case fetches the
-HiveServer2 URL information from a Zookeeper ensemble, so the user need only supply the necessary Zookeeper
+installed in the cluster and registered with the same ZooKeeper ensemble. The HA functionality in this case fetches the
+HiveServer2 URL information from a ZooKeeper ensemble, so the user need only supply the necessary ZooKeeper
 configuration and not the Hive connection URLs.
 
 To enable HA functionality for Hive in Knox the following configuration has to be added to the topology file.

Modified: knox/trunk/books/1.1.0/service_kafka.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/1.1.0/service_kafka.md?rev=1835012&r1=1835011&r2=1835012&view=diff
==============================================================================
--- knox/trunk/books/1.1.0/service_kafka.md (original)
+++ knox/trunk/books/1.1.0/service_kafka.md Tue Jul  3 19:13:36 2018
@@ -50,17 +50,17 @@ Some of the various calls that can be ma
 
     # 0. Getting topic info
     
-	curl -ikv -u guest:guest-password -X GET 'https://localhost:8443/gateway/sandbox/kafka/topics'
+    curl -ikv -u guest:guest-password -X GET 'https://localhost:8443/gateway/sandbox/kafka/topics'
 
     # 1. Publish message to topic
     
-	curl -ikv -u guest:guest-password -X POST 'https://localhost:8443/gateway/sandbox/kafka/topics/TOPIC1' -H 'Content-Type: application/vnd.kafka.json.v2+json' -H 'Accept: application/vnd.kafka.v2+json' --data '"records":[{"value":{"foo":"bar"}}]}'
+    curl -ikv -u guest:guest-password -X POST 'https://localhost:8443/gateway/sandbox/kafka/topics/TOPIC1' -H 'Content-Type: application/vnd.kafka.json.v2+json' -H 'Accept: application/vnd.kafka.v2+json' --data '{"records":[{"value":{"foo":"bar"}}]}'
 
 ### Kafka HA ###
 
-Knox provides basic failover functionality for calls made to Kafka.  Since the Confluent Kafka REST Proxy does not register
+Knox provides basic failover functionality for calls made to Kafka. Since the Confluent Kafka REST Proxy does not register
 itself with ZooKeeper, the HA component looks in ZooKeeper for instances of Kafka and then performs a lightweight ping for
-the presence of the REST Proxy on the same hosts.  As such the Kafka REST Proxy must be installed on the same host as Kafka.
+the presence of the REST Proxy on the same hosts. As such, the Kafka REST Proxy must be installed on the same host as Kafka.
 The user should not supply URLs in the service definition.  
 
 Note: Users of Ambari must manually start the Confluent Kafka REST Proxy.
@@ -106,4 +106,3 @@ And for the service configuration itself
     </service>
 
 Please note that there is no `<url>` tag specified here as the URLs for the Kafka servers are obtained from ZooKeeper.
-

Modified: knox/trunk/books/1.1.0/service_livy.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/1.1.0/service_livy.md?rev=1835012&r1=1835011&r2=1835012&view=diff
==============================================================================
--- knox/trunk/books/1.1.0/service_livy.md (original)
+++ knox/trunk/books/1.1.0/service_livy.md Tue Jul  3 19:13:36 2018
@@ -34,20 +34,19 @@ In the topology XML file, add the follow
     </service>
 
 Livy server will use proxyUser to run the Spark session. To avoid that a user can 
-provide here any user (e.g. a more privileged), Knox will need to rewrite the the 
-json body to replace what so ever is the value of proxyUser is with the username of
+provide any (e.g. a more privileged) user here, Knox will rewrite the
+JSON body to replace whatever the value of proxyUser is with the username of
 the authenticated user.
 
-   {  
+    {  
       "driverMemory":"2G",
       "executorCores":4,
       "executorMemory":"8G",
       "proxyUser":"bernhard",
       "conf":{  
-         "spark.master":"yarn-cluster",
-         "spark.jars.packages":"com.databricks:spark-csv_2.10:1.5.0"
-         }
+        "spark.master":"yarn-cluster",
+        "spark.jars.packages":"com.databricks:spark-csv_2.10:1.5.0"
+      }
     } 
 
 The above is an example request body used to create a Spark session via the Livy server; it illustrates the `proxyUser` value that requires the rewrite.
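+
+As a sketch, such a request could be submitted through the gateway as shown
+below (the `livy/v1` path segment is an assumption based on the service
+definition; adjust it to match your topology):
+
+    curl -i -k -u guest:guest-password -X POST \
+        -H 'Content-Type: application/json' \
+        -d '{"driverMemory":"2G","executorCores":4,"proxyUser":"bernhard"}' \
+        'https://localhost:8443/gateway/sandbox/livy/v1/sessions'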
-

Modified: knox/trunk/books/1.1.0/service_oozie.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/1.1.0/service_oozie.md?rev=1835012&r1=1835011&r2=1835012&view=diff
==============================================================================
--- knox/trunk/books/1.1.0/service_oozie.md (original)
+++ knox/trunk/books/1.1.0/service_oozie.md Tue Jul  3 19:13:36 2018
@@ -55,7 +55,7 @@ If HDFS has been configured to be in Hig
         <value>ha-service</value>
     </property>
 
-Please note, only one of the URLs, either the RPC endpoint or the HA service name should be used as the NAMENODE hdfs URL in the gateway topology file.
+Please note, only one of the URLs, either the RPC endpoint or the HA service name, should be used as the NAMENODE HDFS URL in the gateway topology file.
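+
+For example, when using the HA service name, the NAMENODE entry in the
+topology would be a sketch like:
+
+    <service>
+        <role>NAMENODE</role>
+        <url>hdfs://ha-service</url>
+    </service>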
 
 The information above must be provided to the gateway via a topology descriptor file.
 These topology descriptor files are placed in `{GATEWAY_HOME}/deployments`.
@@ -115,11 +115,11 @@ A copy of that jar has been included in
 
 In addition a workflow definition and configuration file is required.
 These have not been included but are available for download.
-Download [workflow-definition.xml](workflow-definition.xml) and [workflow-configuration.xml](workflow-configuration.xml) and store them in the {GATEWAY_HOME} directory.
+Download [workflow-definition.xml](workflow-definition.xml) and [workflow-configuration.xml](workflow-configuration.xml) and store them in the `{GATEWAY_HOME}` directory.
 Review the contents of `workflow-configuration.xml` to ensure that it matches your environment.
 
 Take care to follow the instructions below where replacement values are required.
-These replacement values are identified with { } markup.
+These replacement values are identified with `{ }` markup.
 
     # 0. Optionally cleanup the test directory in case a previous example was run without cleaning up.
     curl -i -k -u guest:guest-password -X DELETE \
@@ -192,6 +192,3 @@ These replacement values are identified
 ### Oozie HA ###
 
 Please look at #[Default Service HA support]
-
-
-

Modified: knox/trunk/books/1.1.0/service_solr.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/1.1.0/service_solr.md?rev=1835012&r1=1835011&r2=1835012&view=diff
==============================================================================
--- knox/trunk/books/1.1.0/service_solr.md (original)
+++ knox/trunk/books/1.1.0/service_solr.md Tue Jul  3 19:13:36 2018
@@ -15,58 +15,58 @@
    limitations under the License.
 --->
 
-### SOLR ###
+### Solr ###
 
-Knox provides gateway functionality to SOLR with support for versions 5.5+ and 6+. The SOLR REST APIs allow the user to view the status 
+Knox provides gateway functionality to Solr with support for versions 5.5+ and 6+. The Solr REST APIs allow the user to view the status 
 of the collections, perform administrative actions and query collections.
 
-See the SOLR Quickstart (http://lucene.apache.org/solr/quickstart.html) section of the SOLR documentation for examples of the SOLR REST API.
+See the Solr Quickstart (http://lucene.apache.org/solr/quickstart.html) section of the Solr documentation for examples of the Solr REST API.
 
-Since Knox provides an abstraction over SOLR and ZooKeeper, the use of the SOLRJ CloudSolrClient is no longer supported.  You should replace 
+Since Knox provides an abstraction over Solr and ZooKeeper, the use of the SolrJ CloudSolrClient is no longer supported. You should replace
 instances of CloudSolrClient with HttpSolrClient.
 
-<p>Note: Updates to SOLR via Knox require a POST operation require the use of preemptive authentication which is not directly supported by the 
-SOLRJ API at this time.</p>  
+<p>Note: Updates to Solr via Knox using a POST operation require preemptive authentication, which is not directly supported by the
+SolrJ API at this time.</p>
 
 To enable this functionality, a topology file needs to have the following configuration:
 
     <service>
         <role>SOLR</role>
-		<version>6.0.0</version>
+        <version>6.0.0</version>
         <url>http://<solr-host>:<solr-port></url>
     </service>
 
-The default SOLR port is 8983.  Adjust the version specified to either '5.5.0 or '6.0.0'.
+The default Solr port is 8983. Adjust the version specified to either '5.5.0' or '6.0.0'.
 
-#### SOLR URL Mapping ####
+#### Solr URL Mapping ####
 
-For SOLR URLs, the mapping of Knox Gateway accessible URLs to direct SOLR URLs is the following.
+For Solr URLs, the mapping of Knox Gateway accessible URLs to direct Solr URLs is the following.
 
 | ------- | ------------------------------------------------------------------------------------- |
 | Gateway | `https://{gateway-host}:{gateway-port}/{gateway-path}/{cluster-name}/solr` |
 | Cluster | `http://{solr-host}:{solr-port}/solr`                               |
 
 
-#### SOLR Examples via cURL
+#### Solr Examples via cURL
 
 Some of the various calls that can be made and examples using curl are listed below.
 
     # 0. Query collection
     
-	curl -ikv -u guest:guest-password -X GET 'https://localhost:8443/gateway/sandbox/solr/select?q=*:*&wt=json'
+    curl -ikv -u guest:guest-password -X GET 'https://localhost:8443/gateway/sandbox/solr/select?q=*:*&wt=json'
 
     # 1. Query cluster status
     
-	curl -ikv -u guest:guest-password -X POST 'https://localhost:8443/gateway/sandbox/solr/admin/collections?action=CLUSTERSTATUS' 
+    curl -ikv -u guest:guest-password -X POST 'https://localhost:8443/gateway/sandbox/solr/admin/collections?action=CLUSTERSTATUS' 
 
-### SOLR HA ###
+### Solr HA ###
 
-Knox provides basic failover functionality for calls made to SOLR Cloud when more than one SOLR instance is
-installed in the cluster and registered with the same Zookeeper ensemble. The HA functionality in this case fetches the
-SOLR URL information from a Zookeeper ensemble, so the user need only supply the necessary Zookeeper
-configuration and not the SOLR connection URLs.
+Knox provides basic failover functionality for calls made to Solr Cloud when more than one Solr instance is
+installed in the cluster and registered with the same ZooKeeper ensemble. The HA functionality in this case fetches the
+Solr URL information from a ZooKeeper ensemble, so the user need only supply the necessary ZooKeeper
+configuration and not the Solr connection URLs.
 
-To enable HA functionality for SOLR Cloud in Knox the following configuration has to be added to the topology file.
+To enable HA functionality for Solr Cloud in Knox the following configuration has to be added to the topology file.
 
     <provider>
         <role>ha</role>
@@ -88,7 +88,7 @@ The various configuration parameters are
 This is the maximum number of times a failover will be attempted. The failover strategy at this time is very simplistic
 in that the next URL in the list of URLs provided for the service is used and the one that failed is put at the bottom
 of the list. If the list is exhausted and the maximum number of attempts is not reached then the first URL will be tried
-again after the list is fetched again from Zookeeper (a refresh of the list is done at this point)
+again after the list is fetched anew from ZooKeeper (a refresh of the list is done at this point).
 
 * failoverSleep -
 The amount of time in milliseconds that the process will wait or sleep before attempting to fail over.
@@ -97,15 +97,14 @@ The amount of time in millis that the pr
 Flag to turn the particular service on or off for HA.
 
 * zookeeperEnsemble -
-A comma separated list of host names (or IP addresses) of the zookeeper hosts that consist of the ensemble that the SOLR
+A comma-separated list of host names (or IP addresses) of the ZooKeeper hosts that make up the ensemble that the Solr
 servers register their information with. 
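+
+Putting these parameters together, the `ha` provider entry for Solr might carry
+a param like the following sketch (the ZooKeeper hosts are placeholders):
+
+    <param>
+        <name>SOLR</name>
+        <value>maxFailoverAttempts=3;failoverSleep=1000;enabled=true;zookeeperEnsemble=machine1:2181,machine2:2181,machine3:2181</value>
+    </param>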
 
 And for the service configuration itself the URLs need NOT be added to the list. For example:
 
     <service>
         <role>SOLR</role>
-		<version>6.0.0</version>
+        <version>6.0.0</version>
     </service>
 
-Please note that there is no `<url>` tag specified here as the URLs for the SOLR servers are obtained from Zookeeper.
-
+Please note that there is no `<url>` tag specified here as the URLs for the Solr servers are obtained from ZooKeeper.

Modified: knox/trunk/books/1.1.0/service_storm.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/1.1.0/service_storm.md?rev=1835012&r1=1835011&r2=1835012&view=diff
==============================================================================
--- knox/trunk/books/1.1.0/service_storm.md (original)
+++ knox/trunk/books/1.1.0/service_storm.md Tue Jul  3 19:13:36 2018
@@ -36,7 +36,7 @@ found in `storm.yaml` as the value for t
 
 In addition to the storm service configuration above, a STORM-LOGVIEWER service must be configured if the
 log files are to be retrieved through Knox. The value of the port for the logviewer can be found in the property
-'logviewer.port' also in the file storm.yaml.
+`logviewer.port` also in the file `storm.yaml`.
 
     <service>
         <role>STORM-LOGVIEWER</role>
@@ -110,4 +110,3 @@ In particular the 'ring-session' header
 
     curl -ik -b ~/cookiejar.txt -c ~/cookiejar.txt -u guest:guest-password -H 'x-csrf-token:{token-value}' -X POST \
      http://localhost:8744/api/v1/topology/{id}/kill/0
-

Modified: knox/trunk/books/1.1.0/service_webhcat.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/1.1.0/service_webhcat.md?rev=1835012&r1=1835011&r2=1835012&view=diff
==============================================================================
--- knox/trunk/books/1.1.0/service_webhcat.md (original)
+++ knox/trunk/books/1.1.0/service_webhcat.md Tue Jul  3 19:13:36 2018
@@ -179,4 +179,3 @@ A complete example is available here: ht
 ### WebHCat HA ###
 
 Please look at #[Default Service HA support]
-

Modified: knox/trunk/books/1.1.0/service_webhdfs.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/1.1.0/service_webhdfs.md?rev=1835012&r1=1835011&r2=1835012&view=diff
==============================================================================
--- knox/trunk/books/1.1.0/service_webhdfs.md (original)
+++ knox/trunk/books/1.1.0/service_webhdfs.md Tue Jul  3 19:13:36 2018
@@ -17,12 +17,14 @@
 
 ### WebHDFS ###
 
-REST API access to HDFS in a Hadoop cluster is provided by WebHDFS.
+REST API access to HDFS in a Hadoop cluster is provided by WebHDFS or HttpFS.
+Both services provide the same API.
 The [WebHDFS REST API](http://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/WebHDFS.html) documentation is available online.
-WebHDFS must be enabled in the hdfs-site.xml configuration file.
+WebHDFS must be enabled in the `hdfs-site.xml` configuration file and exposes the API on each NameNode and DataNode.
+HttpFS, however, is a standalone server that must be configured and started separately.
 In the sandbox this configuration file is located at `/etc/hadoop/conf/hdfs-site.xml`.
 Note the properties shown below as they are related to configuration required by the gateway.
-Some of these represent the default values and may not actually be present in hdfs-site.xml.
+Some of these represent the default values and may not actually be present in `hdfs-site.xml`.
 
     <property>
         <name>dfs.webhdfs.enabled</name>
@@ -45,6 +47,8 @@ The values above need to be reflected in
 The gateway by default includes a sample topology descriptor file `{GATEWAY_HOME}/deployments/sandbox.xml`.
 The values in this sample are configured to work with an installed Sandbox VM.
 
+Please also note that the default NameNode HTTP port, used by WebHDFS, changed from 50070 to 9870 in Hadoop 3.0.
+
     <service>
         <role>NAMENODE</role>
         <url>hdfs://localhost:8020</url>
@@ -338,9 +342,5 @@ URL (at the time of configuration) shoul
         <url>http://{host1}:50070/webhdfs</url>
         <url>http://{host2}:50070/webhdfs</url>
     </service>
-    
-
-
-
 
 

Modified: knox/trunk/books/1.1.0/websocket-support.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/1.1.0/websocket-support.md?rev=1835012&r1=1835011&r2=1835012&view=diff
==============================================================================
--- knox/trunk/books/1.1.0/websocket-support.md (original)
+++ knox/trunk/books/1.1.0/websocket-support.md Tue Jul  3 19:13:36 2018
@@ -15,16 +15,16 @@
    limitations under the License.
 -->
 
-## Websocket Support ##
+## WebSocket Support ##
 
 ### Introduction
 
-Websocket is a communication protocol that allows full duplex communication over a single TCP connection.
-Knox provides out-of-the-box support for websocket protocol, currently only text messages are supported.
+WebSocket is a communication protocol that allows full-duplex communication over a single TCP connection.
+Knox provides out-of-the-box support for the WebSocket protocol; currently only text messages are supported.
 
 ### Configuration ###
 
-By default websocket functionality is disabled, it can be easily enabled by changing the 'gateway.websocket.feature.enabled' property to 'true' in <KNOX-HOME>/conf/gateway-site.xml file.  
+By default WebSocket functionality is disabled; it can be enabled by changing the `gateway.websocket.feature.enabled` property to `true` in the `<KNOX-HOME>/conf/gateway-site.xml` file.
 
       <property>
           <name>gateway.websocket.feature.enabled</name>
@@ -36,11 +36,11 @@ Service and rewrite rules need to change
 
 ### Example ###
 
-In the following sample configuration we assume that the backend websocket URL is ws://myhost:9999/ws. And 'gateway.websocket.feature.enabled' property is set to 'true' as shown above.
+In the following sample configuration we assume that the backend WebSocket URL is `ws://myhost:9999/ws` and that the `gateway.websocket.feature.enabled` property is set to `true` as shown above.
 
 #### rewrite ####
 
-Example code snippet from <KNOX-HOME>/data/services/{myservice}/{version}/rewrite.xml where myservice = websocket and version = 0.6.0
+Example code snippet from `<KNOX-HOME>/data/services/{myservice}/{version}/rewrite.xml` where `myservice = websocket` and `version = 0.6.0`:
 
       <rules>
         <rule dir="IN" name="WEBSOCKET/ws/inbound" pattern="*://*:*/**/ws">
@@ -50,7 +50,7 @@ Example code snippet from <KNOX-HOME>/da
 
 #### service ####
 
-Example code snippet from <KNOX-HOME>/data/services/{myservice}/{version}/service.xml where myservice = websocket and version = 0.6.0
+Example code snippet from `<KNOX-HOME>/data/services/{myservice}/{version}/service.xml` where `myservice = websocket` and `version = 0.6.0`:
 
       <service role="WEBSOCKET" name="websocket" version="0.6.0">
         <policies>
@@ -68,7 +68,7 @@ Example code snippet from <KNOX-HOME>/da
 
 #### topology ####
 
-Finally, update the topology file at <KNOX-HOME>/conf/{topology}.xml  with the backend service url
+Finally, update the topology file at `<KNOX-HOME>/conf/{topology}.xml` with the backend service URL:
 
       <service>
           <role>WEBSOCKET</role>

Modified: knox/trunk/books/1.1.0/x-forwarded-headers.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/1.1.0/x-forwarded-headers.md?rev=1835012&r1=1835011&r2=1835012&view=diff
==============================================================================
--- knox/trunk/books/1.1.0/x-forwarded-headers.md (original)
+++ knox/trunk/books/1.1.0/x-forwarded-headers.md Tue Jul  3 19:13:36 2018
@@ -16,7 +16,7 @@
 --->
 
 ### X-Forwarded-* Headers Support ###
-Out-of-the-box Knox provides support for some X-Forwarded-* headers through the use of a Servlet Filter. Specifically the
+Out-of-the-box Knox provides support for some `X-Forwarded-*` headers through the use of a Servlet Filter. Specifically, the
 headers handled/populated by Knox are:
 
 * X-Forwarded-For
@@ -26,7 +26,7 @@ headers handled/populated by Knox are:
 * X-Forwarded-Server
 * X-Forwarded-Context
 
-If this functionality can be turned off by a configuration setting in the file gateway-site.xml and redeploying the
+This functionality can be turned off with a configuration setting in the file `gateway-site.xml` and by redeploying the
 necessary topology/topologies.
 
 The setting is (under the 'configuration' tag) :
@@ -36,8 +36,8 @@ The setting is (under the 'configuration
         <value>false</value>
     </property>
 
-If this setting is absent, the default behavior is that the X-Forwarded-* header support is on or in other words,
-'gateway.xforwarded.enabled' is set to 'true' by default.
+If this setting is absent, the default behavior is that `X-Forwarded-*` header support is on; in other words,
+`gateway.xforwarded.enabled` is set to `true` by default.
 
 
 #### Header population ####

Modified: knox/trunk/books/common/header.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/common/header.md?rev=1835012&r1=1835011&r2=1835012&view=diff
==============================================================================
--- knox/trunk/books/common/header.md (original)
+++ knox/trunk/books/common/header.md Tue Jul  3 19:13:36 2018
@@ -22,7 +22,7 @@
 [site]: http://knox.apache.org
 [jira]: https://issues.apache.org/jira/browse/KNOX
 [mirror]: http://www.apache.org/dyn/closer.cgi/knox
-[sandbox]: http://hortonworks.com/products/hortonworks-sandbox
+[sandbox]: https://hortonworks.com/products/sandbox/
 
 [y]: check.png "Yes"
 [n]: error.png "No"