Posted to commits@nifi.apache.org by al...@apache.org on 2018/07/17 01:35:44 UTC

svn commit: r1836075 [21/39] - in /nifi/site/trunk/docs/nifi-docs: ./ components/org.apache.nifi/nifi-ambari-nar/1.7.1/ components/org.apache.nifi/nifi-ambari-nar/1.7.1/org.apache.nifi.reporting.ambari.AmbariReportingTask/ components/org.apache.nifi/ni...

Added: nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.7.1/org.apache.nifi.csv.CSVRecordSetWriter/index.html
URL: http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.7.1/org.apache.nifi.csv.CSVRecordSetWriter/index.html?rev=1836075&view=auto
==============================================================================
--- nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.7.1/org.apache.nifi.csv.CSVRecordSetWriter/index.html (added)
+++ nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.7.1/org.apache.nifi.csv.CSVRecordSetWriter/index.html Tue Jul 17 01:35:38 2018
@@ -0,0 +1 @@
+<!DOCTYPE html><html lang="en"><head><meta charset="utf-8"></meta><title>CSVRecordSetWriter</title><link rel="stylesheet" href="../../../../../css/component-usage.css" type="text/css"></link></head><script type="text/javascript">window.onload = function(){if(self==top) { document.getElementById('nameHeader').style.display = "inherit"; } }</script><body><h1 id="nameHeader" style="display: none;">CSVRecordSetWriter</h1><h2>Description: </h2><p>Writes the contents of a RecordSet as CSV data. The first line written will be the column names (unless the 'Include Header Line' property is false). All subsequent lines will be the values corresponding to the record fields.</p><h3>Tags: </h3><p>csv, result, set, recordset, record, writer, serializer, row, tsv, tab, separated, delimited</p><h3>Properties: </h3><p>In the list below, the names of required properties appear in <strong>bold</strong>. Any other properties (not in bold) are considered optional. The table also indicates any default va
 lues, and whether a property supports the <a href="../../../../../html/expression-language-guide.html">NiFi Expression Language</a>.</p><table id="properties"><tr><th>Name</th><th>Default Value</th><th>Allowable Values</th><th>Description</th></tr><tr><td id="name"><strong>Schema Write Strategy</strong></td><td id="default-value">schema-name</td><td id="allowable-values"><ul><li>Set 'schema.name' Attribute <img src="../../../../../html/images/iconInfo.png" alt="The FlowFile will be given an attribute named 'schema.name' and this attribute will indicate the name of the schema in the Schema Registry. Note that if the schema for a record is not obtained from a Schema Registry, then no attribute will be added." title="The FlowFile will be given an attribute named 'schema.name' and this attribute will indicate the name of the schema in the Schema Registry. Note that if the schema for a record is not obtained from a Schema Registry, then no attribute will be added."></img></li><li>Set 'avro
 .schema' Attribute <img src="../../../../../html/images/iconInfo.png" alt="The FlowFile will be given an attribute named 'avro.schema' and this attribute will contain the Avro Schema that describes the records in the FlowFile. The contents of the FlowFile need not be Avro, but the text of the schema will be used." title="The FlowFile will be given an attribute named 'avro.schema' and this attribute will contain the Avro Schema that describes the records in the FlowFile. The contents of the FlowFile need not be Avro, but the text of the schema will be used."></img></li><li>HWX Schema Reference Attributes <img src="../../../../../html/images/iconInfo.png" alt="The FlowFile will be given a set of 3 attributes to describe the schema: 'schema.identifier', 'schema.version', and 'schema.protocol.version'. Note that if the schema for a record does not contain the necessary identifier and version, an Exception will be thrown when attempting to write the data." title="The FlowFile will be giv
 en a set of 3 attributes to describe the schema: 'schema.identifier', 'schema.version', and 'schema.protocol.version'. Note that if the schema for a record does not contain the necessary identifier and version, an Exception will be thrown when attempting to write the data."></img></li><li>HWX Content-Encoded Schema Reference <img src="../../../../../html/images/iconInfo.png" alt="The content of the FlowFile will contain a reference to a schema in the Schema Registry service. The reference is encoded as a single byte indicating the 'protocol version', followed by 8 bytes indicating the schema identifier, and finally 4 bytes indicating the schema version, as per the Hortonworks Schema Registry serializers and deserializers, as found at https://github.com/hortonworks/registry. This will be prepended to each FlowFile. Note that if the schema for a record does not contain the necessary identifier and version, an Exception will be thrown when attempting to write the data." title="The cont
 ent of the FlowFile will contain a reference to a schema in the Schema Registry service. The reference is encoded as a single byte indicating the 'protocol version', followed by 8 bytes indicating the schema identifier, and finally 4 bytes indicating the schema version, as per the Hortonworks Schema Registry serializers and deserializers, as found at https://github.com/hortonworks/registry. This will be prepended to each FlowFile. Note that if the schema for a record does not contain the necessary identifier and version, an Exception will be thrown when attempting to write the data."></img></li><li>Confluent Schema Registry Reference <img src="../../../../../html/images/iconInfo.png" alt="The content of the FlowFile will contain a reference to a schema in the Schema Registry service. The reference is encoded as a single 'Magic Byte' followed by 4 bytes representing the identifier of the schema, as outlined at http://docs.confluent.io/current/schema-registry/docs/serializer-formatter
 .html. This will be prepended to each FlowFile. Note that if the schema for a record does not contain the necessary identifier and version, an Exception will be thrown when attempting to write the data. This is based on the encoding used by version 3.2.x of the Confluent Schema Registry." title="The content of the FlowFile will contain a reference to a schema in the Schema Registry service. The reference is encoded as a single 'Magic Byte' followed by 4 bytes representing the identifier of the schema, as outlined at http://docs.confluent.io/current/schema-registry/docs/serializer-formatter.html. This will be prepended to each FlowFile. Note that if the schema for a record does not contain the necessary identifier and version, an Exception will be thrown when attempting to write the data. This is based on the encoding used by version 3.2.x of the Confluent Schema Registry."></img></li><li>Do Not Write Schema <img src="../../../../../html/images/iconInfo.png" alt="Do not add any schem
 a-related information to the FlowFile." title="Do not add any schema-related information to the FlowFile."></img></li></ul></td><td id="description">Specifies how the schema for a Record should be added to the data.</td></tr><tr><td id="name"><strong>Schema Access Strategy</strong></td><td id="default-value">inherit-record-schema</td><td id="allowable-values"><ul><li>Use 'Schema Name' Property <img src="../../../../../html/images/iconInfo.png" alt="The name of the Schema to use is specified by the 'Schema Name' Property. The value of this property is used to lookup the Schema in the configured Schema Registry service." title="The name of the Schema to use is specified by the 'Schema Name' Property. The value of this property is used to lookup the Schema in the configured Schema Registry service."></img></li><li>Inherit Record Schema <img src="../../../../../html/images/iconInfo.png" alt="The schema used to write records will be the same schema that was given to the Record when the R
 ecord was created." title="The schema used to write records will be the same schema that was given to the Record when the Record was created."></img></li><li>Use 'Schema Text' Property <img src="../../../../../html/images/iconInfo.png" alt="The text of the Schema itself is specified by the 'Schema Text' Property. The value of this property must be a valid Avro Schema. If Expression Language is used, the value of the 'Schema Text' property must be valid after substituting the expressions." title="The text of the Schema itself is specified by the 'Schema Text' Property. The value of this property must be a valid Avro Schema. If Expression Language is used, the value of the 'Schema Text' property must be valid after substituting the expressions."></img></li></ul></td><td id="description">Specifies how to obtain the schema that is to be used for interpreting the data.</td></tr><tr><td id="name">Schema Registry</td><td id="default-value"></td><td id="allowable-values"><strong>Controller 
 Service API: </strong><br/>SchemaRegistry<br/><strong>Implementations: </strong><a href="../../../nifi-confluent-platform-nar/1.7.1/org.apache.nifi.confluent.schemaregistry.ConfluentSchemaRegistry/index.html">ConfluentSchemaRegistry</a><br/><a href="../../../nifi-registry-nar/1.7.1/org.apache.nifi.schemaregistry.services.AvroSchemaRegistry/index.html">AvroSchemaRegistry</a><br/><a href="../../../nifi-hwx-schema-registry-nar/1.7.1/org.apache.nifi.schemaregistry.hortonworks.HortonworksSchemaRegistry/index.html">HortonworksSchemaRegistry</a></td><td id="description">Specifies the Controller Service to use for the Schema Registry</td></tr><tr><td id="name">Schema Name</td><td id="default-value">${schema.name}</td><td id="allowable-values"></td><td id="description">Specifies the name of the schema to lookup in the Schema Registry property<br/><strong>Supports Expression Language: true (will be evaluated using flow file attributes and variable registry)</strong></td></tr><tr><td id="name"
 >Schema Version</td><td id="default-value"></td><td id="allowable-values"></td><td id="description">Specifies the version of the schema to lookup in the Schema Registry. If not specified then the latest version of the schema will be retrieved.<br/><strong>Supports Expression Language: true (will be evaluated using flow file attributes and variable registry)</strong></td></tr><tr><td id="name">Schema Branch</td><td id="default-value"></td><td id="allowable-values"></td><td id="description">Specifies the name of the branch to use when looking up the schema in the Schema Registry property. If the chosen Schema Registry does not support branching, this value will be ignored.<br/><strong>Supports Expression Language: true (will be evaluated using flow file attributes and variable registry)</strong></td></tr><tr><td id="name">Schema Text</td><td id="default-value">${avro.schema}</td><td id="allowable-values"></td><td id="description">The text of an Avro-formatted Schema<br/><strong>Suppor
 ts Expression Language: true (will be evaluated using flow file attributes and variable registry)</strong></td></tr><tr><td id="name">Date Format</td><td id="default-value"></td><td id="allowable-values"></td><td id="description">Specifies the format to use when reading/writing Date fields. If not specified, Date fields will be assumed to be number of milliseconds since epoch (Midnight, Jan 1, 1970 GMT). If specified, the value must match the Java Simple Date Format (for example, MM/dd/yyyy for a two-digit month, followed by a two-digit day, followed by a four-digit year, all separated by '/' characters, as in 01/01/2017).</td></tr><tr><td id="name">Time Format</td><td id="default-value"></td><td id="allowable-values"></td><td id="description">Specifies the format to use when reading/writing Time fields. If not specified, Time fields will be assumed to be number of milliseconds since epoch (Midnight, Jan 1, 1970 GMT). If specified, the value must match the Java Simple Date Format (f
 or example, HH:mm:ss for a two-digit hour in 24-hour format, followed by a two-digit minute, followed by a two-digit second, all separated by ':' characters, as in 18:04:15).</td></tr><tr><td id="name">Timestamp Format</td><td id="default-value"></td><td id="allowable-values"></td><td id="description">Specifies the format to use when reading/writing Timestamp fields. If not specified, Timestamp fields will be assumed to be number of milliseconds since epoch (Midnight, Jan 1, 1970 GMT). If specified, the value must match the Java Simple Date Format (for example, MM/dd/yyyy HH:mm:ss for a two-digit month, followed by a two-digit day, followed by a four-digit year, all separated by '/' characters; and then followed by a two-digit hour in 24-hour format, followed by a two-digit minute, followed by a two-digit second, all separated by ':' characters, as in 01/01/2017 18:04:15).</td></tr><tr><td id="name"><strong>CSV Format</strong></td><td id="default-value">custom</td><td id="allowable-
 values"><ul><li>Custom Format <img src="../../../../../html/images/iconInfo.png" alt="The format of the CSV is configured by using the properties of this Controller Service, such as Value Separator" title="The format of the CSV is configured by using the properties of this Controller Service, such as Value Separator"></img></li><li>RFC 4180 <img src="../../../../../html/images/iconInfo.png" alt="CSV data follows the RFC 4180 Specification defined at https://tools.ietf.org/html/rfc4180" title="CSV data follows the RFC 4180 Specification defined at https://tools.ietf.org/html/rfc4180"></img></li><li>Microsoft Excel <img src="../../../../../html/images/iconInfo.png" alt="CSV data follows the format used by Microsoft Excel" title="CSV data follows the format used by Microsoft Excel"></img></li><li>Tab-Delimited <img src="../../../../../html/images/iconInfo.png" alt="CSV data is Tab-Delimited instead of Comma Delimited" title="CSV data is Tab-Delimited instead of Comma Delimited"></img><
 /li><li>MySQL Format <img src="../../../../../html/images/iconInfo.png" alt="CSV data follows the format used by MySQL" title="CSV data follows the format used by MySQL"></img></li><li>Informix Unload <img src="../../../../../html/images/iconInfo.png" alt="The format used by Informix when issuing the UNLOAD TO file_name command" title="The format used by Informix when issuing the UNLOAD TO file_name command"></img></li><li>Informix Unload Escape Disabled <img src="../../../../../html/images/iconInfo.png" alt="The format used by Informix when issuing the UNLOAD TO file_name command with escaping disabled" title="The format used by Informix when issuing the UNLOAD TO file_name command with escaping disabled"></img></li></ul></td><td id="description">Specifies which "format" the CSV data is in, or specifies if custom formatting should be used.</td></tr><tr><td id="name"><strong>Value Separator</strong></td><td id="default-value">,</td><td id="allowable-values"></td><td id="description"
 >The character that is used to separate values/fields in a CSV Record</td></tr><tr><td id="name"><strong>Include Header Line</strong></td><td id="default-value">true</td><td id="allowable-values"><ul><li>true</li><li>false</li></ul></td><td id="description">Specifies whether or not the CSV column names should be written out as the first line.</td></tr><tr><td id="name"><strong>Quote Character</strong></td><td id="default-value">"</td><td id="allowable-values"></td><td id="description">The character that is used to quote values so that escape characters do not have to be used</td></tr><tr><td id="name"><strong>Escape Character</strong></td><td id="default-value">\</td><td id="allowable-values"></td><td id="description">The character that is used to escape characters that would otherwise have a specific meaning to the CSV Parser.</td></tr><tr><td id="name">Comment Marker</td><td id="default-value"></td><td id="allowable-values"></td><td id="description">The character that is used to d
 enote the start of a comment. Any line that begins with this comment will be ignored.</td></tr><tr><td id="name">Null String</td><td id="default-value"></td><td id="allowable-values"></td><td id="description">Specifies a String that, if present as a value in the CSV, should be considered a null field instead of using the literal value.</td></tr><tr><td id="name"><strong>Trim Fields</strong></td><td id="default-value">true</td><td id="allowable-values"><ul><li>true</li><li>false</li></ul></td><td id="description">Whether or not white space should be removed from the beginning and end of fields</td></tr><tr><td id="name"><strong>Quote Mode</strong></td><td id="default-value">MINIMAL</td><td id="allowable-values"><ul><li>Quote All Values <img src="../../../../../html/images/iconInfo.png" alt="All values will be quoted using the configured quote character." title="All values will be quoted using the configured quote character."></img></li><li>Quote Minimal <img src="../../../../../html/
 images/iconInfo.png" alt="Values will be quoted only if they contain special characters such as newline characters or field separators." title="Values will be quoted only if they contain special characters such as newline characters or field separators."></img></li><li>Quote Non-Numeric Values <img src="../../../../../html/images/iconInfo.png" alt="Values will be quoted unless the value is a number." title="Values will be quoted unless the value is a number."></img></li><li>Do Not Quote Values <img src="../../../../../html/images/iconInfo.png" alt="Values will not be quoted. Instead, all special characters will be escaped using the configured escape character." title="Values will not be quoted. Instead, all special characters will be escaped using the configured escape character."></img></li></ul></td><td id="description">Specifies how fields should be quoted when they are written</td></tr><tr><td id="name"><strong>Record Separator</strong></td><td id="default-value">\n</td>
 <td id="allowable-values"></td><td id="description">Specifies the characters to use in order to separate CSV Records</td></tr><tr><td id="name"><strong>Include Trailing Delimiter</strong></td><td id="default-value">false</td><td id="allowable-values"><ul><li>true</li><li>false</li></ul></td><td id="description">If true, a trailing delimiter will be added to each CSV Record that is written. If false, the trailing delimiter will be omitted.</td></tr><tr><td id="name"><strong>Character Set</strong></td><td id="default-value">UTF-8</td><td id="allowable-values"></td><td id="description">The Character Encoding that is used to encode/decode the CSV file</td></tr></table><h3>State management: </h3>This component does not store state.<h3>Restricted: </h3>This component is not restricted.<h3>System Resource Considerations:</h3>None specified.</body></html>
\ No newline at end of file
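
As an illustration of the CSVRecordSetWriter properties documented above, the following sketch uses
Apache Commons CSV purely for illustration (it is not the CSVRecordSetWriter implementation itself)
to show what the Value Separator, Include Header Line, Quote Character, Escape Character, Quote Mode
and Record Separator properties control when records are written out.

    import java.io.StringWriter;

    import org.apache.commons.csv.CSVFormat;
    import org.apache.commons.csv.CSVPrinter;
    import org.apache.commons.csv.QuoteMode;

    // Illustrative sketch only, not the CSVRecordSetWriter implementation.
    public class CsvWriteSketch {
        public static void main(String[] args) throws Exception {
            CSVFormat format = CSVFormat.DEFAULT
                    .withDelimiter(',')                 // Value Separator
                    .withQuote('"')                     // Quote Character
                    .withEscape('\\')                   // Escape Character
                    .withQuoteMode(QuoteMode.MINIMAL)   // Quote Mode
                    .withRecordSeparator("\n")          // Record Separator
                    .withHeader("id", "name");          // Include Header Line

            StringWriter out = new StringWriter();
            try (CSVPrinter printer = new CSVPrinter(out, format)) {
                printer.printRecord(1, "John");
                printer.printRecord(2, "Jane, Jr.");    // the comma forces quoting under MINIMAL
            }
            System.out.print(out);
            // id,name
            // 1,John
            // 2,"Jane, Jr."
        }
    }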

Added: nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.7.1/org.apache.nifi.grok.GrokReader/additionalDetails.html
URL: http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.7.1/org.apache.nifi.grok.GrokReader/additionalDetails.html?rev=1836075&view=auto
==============================================================================
--- nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.7.1/org.apache.nifi.grok.GrokReader/additionalDetails.html (added)
+++ nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.7.1/org.apache.nifi.grok.GrokReader/additionalDetails.html Tue Jul 17 01:35:38 2018
@@ -0,0 +1,405 @@
+<!DOCTYPE html>
+<html lang="en">
+    <!--
+      Licensed to the Apache Software Foundation (ASF) under one or more
+      contributor license agreements.  See the NOTICE file distributed with
+      this work for additional information regarding copyright ownership.
+      The ASF licenses this file to You under the Apache License, Version 2.0
+      (the "License"); you may not use this file except in compliance with
+      the License.  You may obtain a copy of the License at
+          http://www.apache.org/licenses/LICENSE-2.0
+      Unless required by applicable law or agreed to in writing, software
+      distributed under the License is distributed on an "AS IS" BASIS,
+      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+      See the License for the specific language governing permissions and
+      limitations under the License.
+    -->
+    <head>
+        <meta charset="utf-8"/>
+        <title>GrokReader</title>
+        <link rel="stylesheet" href="../../../../../css/component-usage.css" type="text/css"/>
+    </head>
+
+    <body>
+        <p>
+        	The GrokReader Controller Service provides a means for parsing and structuring input that is
+        	made up of unstructured text, such as log files. Grok allows users to add a naming construct to
+        	Regular Expressions such that they can be composed in order to create expressions that are easier
+        	to manage and work with. This Controller Service consists of one Required Property and a few Optional
+        	Properties. The optional property named <code>Grok Pattern File</code> specifies the filename of
+        	a file that contains Grok Patterns that can be used for parsing log data. If not specified, a default
+        	patterns file will be used. Its contents are provided below. There are also properties for specifying
+        	the schema to use when parsing data. The schema is not required. However, when data is parsed
+        	a Record is created that contains all of the fields present in the Grok Expression (explained below),
+        	and all fields are of type String. If a schema is chosen, each field can be declared to be a different,
+        	compatible type, such as number. Additionally, if the schema does not contain one of the fields in the
+        	parsed data, that field will be ignored. This can be used to filter out fields that are not of interest.
+		</p>
+		
+		<p>
+        	The Required Property is named <code>Grok Expression</code> and specifies how to parse each
+        	incoming record. This is done by providing a Grok Expression such as:
+        	<code>%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} \[%{DATA:thread}\] %{DATA:class} %{GREEDYDATA:message}</code>.
+        	This Expression will parse Apache NiFi log messages. This is accomplished by specifying that a line begins
+        	with the <code>TIMESTAMP_ISO8601</code> pattern (which is a Regular Expression defined in the default
+        	Grok Patterns File). The value that matches this pattern is then given the name <code>timestamp</code>. As a result,
+        	the value that matches this pattern will be assigned to a field named <code>timestamp</code> in the Record that
+        	is produced by this Controller Service.
+        </p>
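+
+        <p>
+        	As a minimal illustration (not the actual GrokReader implementation), the sketch below shows the
+        	idea behind a Grok Expression: each <code>%{PATTERN:label}</code> construct is expanded into a plain
+        	Regular Expression, and the text captured by that expression is stored in a Record field named by the
+        	label. The expanded pattern and the sample log line are simplified, invented examples.
+        </p>
+
+        <code><pre>
+import java.util.regex.Matcher;
+import java.util.regex.Pattern;
+
+// Illustrative sketch only: a simplified, hand-expanded form of
+// %{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} \[%{DATA:thread}\] %{DATA:class} %{GREEDYDATA:message}
+public class GrokExpansionSketch {
+    public static void main(String[] args) {
+        String logLine = "2016-08-04 13:26:32,473 INFO [Thread-1] o.a.n.SomeClass A sample message";
+
+        Pattern expanded = Pattern.compile(
+            "(\\d{4}-\\d{2}-\\d{2} \\d{2}:\\d{2}:\\d{2},\\d{3}) (TRACE|DEBUG|INFO|WARN|ERROR|FATAL) \\[(.*?)\\] (\\S+) (.*)");
+        String[] fieldNames = { "timestamp", "level", "thread", "class", "message" };
+
+        Matcher matcher = expanded.matcher(logLine);
+        if (matcher.matches()) {
+            // Each captured group becomes the value of the Record field named by its label
+            for (int i = 0; i < fieldNames.length; i++) {
+                System.out.println(fieldNames[i] + " = " + matcher.group(i + 1));
+            }
+        }
+    }
+}
+        </pre></code>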
+        
+        <p>
+        	If a line is encountered in the FlowFile that does not match the configured Grok Expression, it is assumed that the line
+        	is part of the previous message. If the line is the start of a stack trace, then the entire stack trace is read in and assigned
+        	to a field named <code>STACK_TRACE</code>. Otherwise, the line is appended to the last field defined in the Grok Expression. This
+        	is done because typically the last field is a 'message' type of field, which may span multiple lines.
+        </p>
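+
+        <p>
+        	A rough sketch of that fallback behavior is shown below. It is illustrative only (the class name and
+        	the stack-trace heuristic are invented for the example, and this is not the actual GrokReader code):
+        	a non-matching line is either accumulated into the stack trace field or appended to the last field of
+        	the previous Record.
+        </p>
+
+        <code><pre>
+import java.util.LinkedHashMap;
+import java.util.Map;
+import java.util.regex.Pattern;
+
+// Illustrative sketch only, not NiFi's implementation.
+public class NonMatchingLineSketch {
+    // Invented heuristic: "    at ..." frames, "SomeException: ..." lines, or "... N common frames omitted"
+    private static final Pattern STACK_TRACE_LINE = Pattern.compile(
+            "^(\\s+at .+|([a-zA-Z0-9]+\\.)+[A-Za-z0-9]*(Exception|Error)(: .*)?|\\s*\\.\\.\\. \\d+ .*)$");
+
+    static void handleNonMatchingLine(String line, Map lastRecord, String lastFieldName) {
+        if (STACK_TRACE_LINE.matcher(line).matches()) {
+            Object existing = lastRecord.get("STACK_TRACE");
+            lastRecord.put("STACK_TRACE", existing == null ? line : existing + "\n" + line);
+        } else {
+            // Append to the last field of the Grok Expression, typically the 'message' field
+            lastRecord.put(lastFieldName, lastRecord.get(lastFieldName) + "\n" + line);
+        }
+    }
+
+    public static void main(String[] args) {
+        Map record = new LinkedHashMap();
+        record.put("message", "One");
+        handleNonMatchingLine("Two", record, "message");
+        handleNonMatchingLine("org.apache.nifi.exception.UnitTestException: Testing", record, "message");
+        System.out.println(record);
+    }
+}
+        </pre></code>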
+
+
+		<h2>Schemas and Type Coercion</h2>
+		
+		<p>
+			When a record is parsed from incoming data, it is separated into fields. Each of these fields is then looked up against the
+			configured schema (by field name) in order to determine what the type of the data should be. If the field is not present in
+			the schema, that field is omitted from the Record. If the field is found in the schema, the data type of the received data
+			is compared against the data type specified in the schema. If the types match, the value of that field is used as-is. If the
+			schema indicates that the field should be of a different type, then the Controller Service will attempt to coerce the data
+			into the type specified by the schema. If the field cannot be coerced into the specified type, an Exception will be thrown.
+		</p>
+		
+		<p>
+			The following rules apply when attempting to coerce a field value from one data type to another:
+		</p>
+			
+		<ul>
+			<li>Any data type can be coerced into a String type.</li>
+			<li>Any numeric data type (Byte, Short, Int, Long, Float, Double) can be coerced into any other numeric data type.</li>
+			<li>Any numeric value can be coerced into a Date, Time, or Timestamp type, by assuming that the Long value is the number of
+			milliseconds since epoch (Midnight GMT, January 1, 1970).</li>
+			<li>A String value can be coerced into a Date, Time, or Timestamp type, if its format matches the configured "Date Format," "Time Format,"
+				or "Timestamp Format."</li>
+			<li>A String value can be coerced into a numeric value if the value is of the appropriate type. For example, the String value
+				<code>8</code> can be coerced into any numeric type. However, the String value <code>8.2</code> can be coerced into a Double or Float
+				type but not an Integer.</li>
+			<li>A String value of "true" or "false" (regardless of case) can be coerced into a Boolean value.</li>
+			<li>A String value that is not empty can be coerced into a Char type. If the String contains more than 1 character, the first character is used
+				and the rest of the characters are ignored.</li>
+			<li>Any "date/time" type (Date, Time, Timestamp) can be coerced into any other "date/time" type.</li>
+			<li>Any "date/time" type can be coerced into a Long type, representing the number of milliseconds since epoch (Midnight GMT, January 1, 1970).</li>
+			<li>Any "date/time" type can be coerced into a String. The format of the String is whatever DateFormat is configured for the corresponding
+				property (Date Format, Time Format, Timestamp Format property).</li>
+		</ul>
+		
+		<p>
+			If none of the above rules apply when attempting to coerce a value from one data type to another, the coercion will fail and an Exception
+			will be thrown.
+		</p>
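+
+		<p>
+			The sketch below illustrates a few of the coercion rules above (String to Integer, Double, Boolean,
+			and Timestamp, and a numeric value to Timestamp). It is a simplified illustration of the rules, not
+			the coercion code that NiFi itself uses.
+		</p>
+
+		<code><pre>
+import java.text.SimpleDateFormat;
+import java.util.Date;
+
+// Illustrative sketch only, not NiFi's implementation.
+public class CoercionSketch {
+    static Object coerce(Object value, String targetType, String dateFormat) throws Exception {
+        if (targetType.equals("string")) {
+            return String.valueOf(value);                      // any type can become a String
+        }
+        if (targetType.equals("int")) {
+            return Integer.valueOf(value.toString().trim());   // "8" works; "8.2" would fail here
+        }
+        if (targetType.equals("double")) {
+            return Double.valueOf(value.toString().trim());    // "8" and "8.2" both work
+        }
+        if (targetType.equals("boolean")) {
+            return Boolean.valueOf(value.toString().trim());   // "true"/"false", case-insensitive
+        }
+        if (targetType.equals("timestamp")) {
+            if (value instanceof Number) {
+                return new Date(((Number) value).longValue()); // treated as milliseconds since epoch
+            }
+            return new SimpleDateFormat(dateFormat).parse(value.toString());
+        }
+        throw new IllegalArgumentException("Cannot coerce " + value + " into " + targetType);
+    }
+
+    public static void main(String[] args) throws Exception {
+        System.out.println(coerce("8", "int", null));
+        System.out.println(coerce("8.2", "double", null));
+        System.out.println(coerce("TRUE", "boolean", null));
+        System.out.println(coerce("01/01/2017 18:04:15", "timestamp", "MM/dd/yyyy HH:mm:ss"));
+        System.out.println(coerce(0L, "timestamp", null));     // epoch millisecond 0
+    }
+}
+		</pre></code>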
+		
+		
+
+        <h2>
+        	Examples
+		</h2>
+        
+        <p>
+        	As an example, consider that this Controller Service is configured with the following properties:
+        </p>
+
+		<table>
+    		<thead>
+    			<tr>
+    				<th>Property Name</th>
+    				<th>Property Value</th>
+    			</tr>
+    		</thead>
+    		<tbody>
+    			<tr>
+    				<td>Grok Expression</td>
+    				<td><code>%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} \[%{DATA:thread}\] %{DATA:class} %{GREEDYDATA:message}</code></td>
+    			</tr>
+    		</tbody>
+    	</table>
+
+        <p>
+        	Additionally, let's consider a FlowFile whose content consists of the following:
+        </p>
+
+        <code><pre>
+2016-08-04 13:26:32,473 INFO [Leader Election Notification Thread-1] o.a.n.c.l.e.CuratorLeaderElectionManager org.apache.nifi.controller.leader.election.CuratorLeaderElectionManager$ElectionListener@1fa27ea5 has been interrupted; no longer leader for role 'Cluster Coordinator'
+2016-08-04 13:26:32,474 ERROR [Leader Election Notification Thread-2] o.apache.nifi.controller.FlowController One
+Two
+Three
+org.apache.nifi.exception.UnitTestException: Testing to ensure we are able to capture stack traces
+        at org.apache.nifi.cluster.coordination.node.NodeClusterCoordinator.getElectedActiveCoordinatorAddress(NodeClusterCoordinator.java:185)
+        at org.apache.nifi.cluster.coordination.node.NodeClusterCoordinator.getElectedActiveCoordinatorAddress(NodeClusterCoordinator.java:185)
+        at org.apache.nifi.cluster.coordination.node.NodeClusterCoordinator.getElectedActiveCoordinatorAddress(NodeClusterCoordinator.java:185)
+        at org.apache.nifi.cluster.coordination.node.NodeClusterCoordinator.getElectedActiveCoordinatorAddress(NodeClusterCoordinator.java:185)
+        at org.apache.nifi.cluster.coordination.node.NodeClusterCoordinator.getElectedActiveCoordinatorAddress(NodeClusterCoordinator.java:185)
+        at org.apache.nifi.cluster.coordination.node.NodeClusterCoordinator.getElectedActiveCoordinatorAddress(NodeClusterCoordinator.java:185)
+        at org.apache.nifi.cluster.coordination.node.NodeClusterCoordinator.getElectedActiveCoordinatorAddress(NodeClusterCoordinator.java:185)
+        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_45]
+	at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [na:1.8.0_45]
+        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [na:1.8.0_45]
+        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [na:1.8.0_45]
+        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_45]
+        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_45]
+        at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]
+Caused by: org.apache.nifi.exception.UnitTestException: Testing to ensure we are able to capture stack traces
+    at org.apache.nifi.cluster.coordination.node.NodeClusterCoordinator.getElectedActiveCoordinatorAddress(NodeClusterCoordinator.java:185)
+    at org.apache.nifi.cluster.coordination.node.NodeClusterCoordinator.getElectedActiveCoordinatorAddress(NodeClusterCoordinator.java:185)
+    at org.apache.nifi.cluster.coordination.node.NodeClusterCoordinator.getElectedActiveCoordinatorAddress(NodeClusterCoordinator.java:185)
+    at org.apache.nifi.cluster.coordination.node.NodeClusterCoordinator.getElectedActiveCoordinatorAddress(NodeClusterCoordinator.java:185)
+    at org.apache.nifi.cluster.coordination.node.NodeClusterCoordinator.getElectedActiveCoordinatorAddress(NodeClusterCoordinator.java:185)
+    ... 12 common frames omitted
+2016-08-04 13:26:35,475 WARN [Curator-Framework-0] org.apache.curator.ConnectionState Connection attempt unsuccessful after 3008 (greater than max timeout of 3000). Resetting connection and trying again with a new connection.
+        </pre></code>
+    	
+    	<p>
+    		In this case, the result will be that this FlowFile consists of 3 different records. The first record will contain the following values:
+    	</p>
+
+		<table>
+    		<thead>
+    			<tr>
+    				<th>Field Name</th>
+    				<th>Field Value</th>
+    			</tr>
+    		</thead>
+    		<tbody>
+    			<tr>
+    				<td>timestamp</td>
+    				<td>2016-08-04 13:26:32,473</td>
+    			</tr>
+    			<tr>
+    				<td>level</td>
+    				<td>INFO</td>
+    			</tr>
+    			<tr>
+    				<td>thread</td>
+    				<td>Leader Election Notification Thread-1</td>
+    			</tr>
+    			<tr>
+    				<td>class</td>
+    				<td>o.a.n.c.l.e.CuratorLeaderElectionManager</td>
+    			</tr>
+    			<tr>
+    				<td>message</td>
+    				<td>org.apache.nifi.controller.leader.election.CuratorLeaderElectionManager$ElectionListener@1fa27ea5 has been interrupted; no longer leader for role 'Cluster Coordinator'</td>
+    			</tr>
+    			<tr>
+    				<td>STACK_TRACE</td>
+    				<td><i>null</i></td>
+    			</tr>
+    		</tbody>
+    	</table>
+    	
+    	<p>
+    		The second record will contain the following values:
+    	</p>
+    	
+		<table>
+    		<thead>
+    			<tr>
+    				<th>Field Name</th>
+    				<th>Field Value</th>
+    			</tr>
+    		</thead>
+    		<tbody>
+    			<tr>
+    				<td>timestamp</td>
+    				<td>2016-08-04 13:26:32,474</td>
+    			</tr>
+    			<tr>
+    				<td>level</td>
+    				<td>ERROR</td>
+    			</tr>
+    			<tr>
+    				<td>thread</td>
+    				<td>Leader Election Notification Thread-2</td>
+    			</tr>
+    			<tr>
+    				<td>class</td>
+    				<td>o.apache.nifi.controller.FlowController</td>
+    			</tr>
+    			<tr>
+    				<td>message</td>
+    				<td>One<br />
+Two<br />
+Three</td>
+    			</tr>
+    			<tr>
+    				<td>STACK_TRACE</td>
+    				<td>
+<pre>
+org.apache.nifi.exception.UnitTestException: Testing to ensure we are able to capture stack traces
+        at org.apache.nifi.cluster.coordination.node.NodeClusterCoordinator.getElectedActiveCoordinatorAddress(NodeClusterCoordinator.java:185)
+        at org.apache.nifi.cluster.coordination.node.NodeClusterCoordinator.getElectedActiveCoordinatorAddress(NodeClusterCoordinator.java:185)
+        at org.apache.nifi.cluster.coordination.node.NodeClusterCoordinator.getElectedActiveCoordinatorAddress(NodeClusterCoordinator.java:185)
+        at org.apache.nifi.cluster.coordination.node.NodeClusterCoordinator.getElectedActiveCoordinatorAddress(NodeClusterCoordinator.java:185)
+        at org.apache.nifi.cluster.coordination.node.NodeClusterCoordinator.getElectedActiveCoordinatorAddress(NodeClusterCoordinator.java:185)
+        at org.apache.nifi.cluster.coordination.node.NodeClusterCoordinator.getElectedActiveCoordinatorAddress(NodeClusterCoordinator.java:185)
+        at org.apache.nifi.cluster.coordination.node.NodeClusterCoordinator.getElectedActiveCoordinatorAddress(NodeClusterCoordinator.java:185)
+        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_45]
+	at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [na:1.8.0_45]
+        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [na:1.8.0_45]
+        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [na:1.8.0_45]
+        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_45]
+        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_45]
+        at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]
+Caused by: org.apache.nifi.exception.UnitTestException: Testing to ensure we are able to capture stack traces
+    at org.apache.nifi.cluster.coordination.node.NodeClusterCoordinator.getElectedActiveCoordinatorAddress(NodeClusterCoordinator.java:185)
+    at org.apache.nifi.cluster.coordination.node.NodeClusterCoordinator.getElectedActiveCoordinatorAddress(NodeClusterCoordinator.java:185)
+    at org.apache.nifi.cluster.coordination.node.NodeClusterCoordinator.getElectedActiveCoordinatorAddress(NodeClusterCoordinator.java:185)
+    at org.apache.nifi.cluster.coordination.node.NodeClusterCoordinator.getElectedActiveCoordinatorAddress(NodeClusterCoordinator.java:185)
+    at org.apache.nifi.cluster.coordination.node.NodeClusterCoordinator.getElectedActiveCoordinatorAddress(NodeClusterCoordinator.java:185)
+    ... 12 common frames omitted
+</pre></td>
+    			</tr>
+    		</tbody>
+    	</table>
+    	
+		<p>
+			The third record will contain the following values:
+		</p>    	
+    	
+		<table>
+    		<thead>
+    			<tr>
+    				<th>Field Name</th>
+    				<th>Field Value</th>
+    			</tr>
+    		</thead>
+    		<tbody>
+    			<tr>
+    				<td>timestamp</td>
+    				<td>2016-08-04 13:26:35,475</td>
+    			</tr>
+    			<tr>
+    				<td>level</td>
+    				<td>WARN</td>
+    			</tr>
+    			<tr>
+    				<td>thread</td>
+    				<td>Curator-Framework-0</td>
+    			</tr>
+    			<tr>
+    				<td>class</td>
+    				<td>org.apache.curator.ConnectionState</td>
+    			</tr>
+    			<tr>
+    				<td>message</td>
+    				<td>Connection attempt unsuccessful after 3008 (greater than max timeout of 3000). Resetting connection and trying again with a new connection.</td>
+    			</tr>
+    			<tr>
+    				<td>STACK_TRACE</td>
+    				<td><i>null</i></td>
+    			</tr>
+    		</tbody>
+    	</table>    	
+
+		
+    	
+    	<h2>Default Patterns</h2>
+
+    	<p>
+    		The following patterns are available in the default Grok Pattern File:
+    	</p>
+
+		<code>
+		<pre>
+# Log Levels
+LOGLEVEL ([Aa]lert|ALERT|[Tt]race|TRACE|[Dd]ebug|DEBUG|[Nn]otice|NOTICE|[Ii]nfo|INFO|[Ww]arn?(?:ing)?|WARN?(?:ING)?|[Ee]rr?(?:or)?|ERR?(?:OR)?|[Cc]rit?(?:ical)?|CRIT?(?:ICAL)?|[Ff]atal|FATAL|[Ss]evere|SEVERE|EMERG(?:ENCY)?|[Ee]merg(?:ency)?)|FINE|FINER|FINEST|CONFIG
+
+# Syslog Dates: Month Day HH:MM:SS
+SYSLOGTIMESTAMP %{MONTH} +%{MONTHDAY} %{TIME}
+PROG (?:[\w._/%-]+)
+SYSLOGPROG %{PROG:program}(?:\[%{POSINT:pid}\])?
+SYSLOGHOST %{IPORHOST}
+SYSLOGFACILITY <%{NONNEGINT:facility}.%{NONNEGINT:priority}>
+HTTPDATE %{MONTHDAY}/%{MONTH}/%{YEAR}:%{TIME} %{INT}
+
+# Months: January, Feb, 3, 03, 12, December
+MONTH \b(?:Jan(?:uary)?|Feb(?:ruary)?|Mar(?:ch)?|Apr(?:il)?|May|Jun(?:e)?|Jul(?:y)?|Aug(?:ust)?|Sep(?:tember)?|Oct(?:ober)?|Nov(?:ember)?|Dec(?:ember)?)\b
+MONTHNUM (?:0?[1-9]|1[0-2])
+MONTHNUM2 (?:0[1-9]|1[0-2])
+MONTHDAY (?:(?:0[1-9])|(?:[12][0-9])|(?:3[01])|[1-9])
+
+# Days: Monday, Tue, Thu, etc...
+DAY (?:Mon(?:day)?|Tue(?:sday)?|Wed(?:nesday)?|Thu(?:rsday)?|Fri(?:day)?|Sat(?:urday)?|Sun(?:day)?)
+
+# Years?
+YEAR (?>\d\d){1,2}
+HOUR (?:2[0123]|[01]?[0-9])
+MINUTE (?:[0-5][0-9])
+# '60' is a leap second in most time standards and thus is valid.
+SECOND (?:(?:[0-5]?[0-9]|60)(?:[:.,][0-9]+)?)
+TIME (?!<[0-9])%{HOUR}:%{MINUTE}(?::%{SECOND})(?![0-9])
+
+# datestamp is YYYY/MM/DD-HH:MM:SS.UUUU (or something like it)
+DATE_US_MONTH_DAY_YEAR %{MONTHNUM}[/-]%{MONTHDAY}[/-]%{YEAR}
+DATE_US_YEAR_MONTH_DAY %{YEAR}[/-]%{MONTHNUM}[/-]%{MONTHDAY}
+DATE_US %{DATE_US_MONTH_DAY_YEAR}|%{DATE_US_YEAR_MONTH_DAY}
+DATE_EU %{MONTHDAY}[./-]%{MONTHNUM}[./-]%{YEAR}
+ISO8601_TIMEZONE (?:Z|[+-]%{HOUR}(?::?%{MINUTE}))
+ISO8601_SECOND (?:%{SECOND}|60)
+TIMESTAMP_ISO8601 %{YEAR}-%{MONTHNUM}-%{MONTHDAY}[T ]%{HOUR}:?%{MINUTE}(?::?%{SECOND})?%{ISO8601_TIMEZONE}?
+DATE %{DATE_US}|%{DATE_EU}
+DATESTAMP %{DATE}[- ]%{TIME}
+TZ (?:[PMCE][SD]T|UTC)
+DATESTAMP_RFC822 %{DAY} %{MONTH} %{MONTHDAY} %{YEAR} %{TIME} %{TZ}
+DATESTAMP_RFC2822 %{DAY}, %{MONTHDAY} %{MONTH} %{YEAR} %{TIME} %{ISO8601_TIMEZONE}
+DATESTAMP_OTHER %{DAY} %{MONTH} %{MONTHDAY} %{TIME} %{TZ} %{YEAR}
+DATESTAMP_EVENTLOG %{YEAR}%{MONTHNUM2}%{MONTHDAY}%{HOUR}%{MINUTE}%{SECOND}
+
+
+POSINT \b(?:[1-9][0-9]*)\b
+NONNEGINT \b(?:[0-9]+)\b
+WORD \b\w+\b
+NOTSPACE \S+
+SPACE \s*
+DATA .*?
+GREEDYDATA .*
+QUOTEDSTRING (?>(?<!\\)(?>"(?>\\.|[^\\"]+)+"|""|(?>'(?>\\.|[^\\']+)+')|''|(?>`(?>\\.|[^\\`]+)+`)|``))
+UUID [A-Fa-f0-9]{8}-(?:[A-Fa-f0-9]{4}-){3}[A-Fa-f0-9]{12}
+
+USERNAME [a-zA-Z0-9._-]+
+USER %{USERNAME}
+INT (?:[+-]?(?:[0-9]+))
+BASE10NUM (?<![0-9.+-])(?>[+-]?(?:(?:[0-9]+(?:\.[0-9]+)?)|(?:\.[0-9]+)))
+NUMBER (?:%{BASE10NUM})
+BASE16NUM (?<![0-9A-Fa-f])(?:[+-]?(?:0x)?(?:[0-9A-Fa-f]+))
+BASE16FLOAT \b(?<![0-9A-Fa-f.])(?:[+-]?(?:0x)?(?:(?:[0-9A-Fa-f]+(?:\.[0-9A-Fa-f]*)?)|(?:\.[0-9A-Fa-f]+)))\b
+
+# Networking
+MAC (?:%{CISCOMAC}|%{WINDOWSMAC}|%{COMMONMAC})
+CISCOMAC (?:(?:[A-Fa-f0-9]{4}\.){2}[A-Fa-f0-9]{4})
+WINDOWSMAC (?:(?:[A-Fa-f0-9]{2}-){5}[A-Fa-f0-9]{2})
+COMMONMAC (?:(?:[A-Fa-f0-9]{2}:){5}[A-Fa-f0-9]{2})
+IPV6 ((([0-9A-Fa-f]{1,4}:){7}([0-9A-Fa-f]{1,4}|:))|(([0-9A-Fa-f]{1,4}:){6}(:[0-9A-Fa-f]{1,4}|((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3})|:))|(([0-9A-Fa-f]{1,4}:){5}(((:[0-9A-Fa-f]{1,4}){1,2})|:((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3})|:))|(([0-9A-Fa-f]{1,4}:){4}(((:[0-9A-Fa-f]{1,4}){1,3})|((:[0-9A-Fa-f]{1,4})?:((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3}))|:))|(([0-9A-Fa-f]{1,4}:){3}(((:[0-9A-Fa-f]{1,4}){1,4})|((:[0-9A-Fa-f]{1,4}){0,2}:((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3}))|:))|(([0-9A-Fa-f]{1,4}:){2}(((:[0-9A-Fa-f]{1,4}){1,5})|((:[0-9A-Fa-f]{1,4}){0,3}:((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3}))|:))|(([0-9A-Fa-f]{1,4}:){1}(((:[0-9A-Fa-f]{1,4}){1,6})|((:[0-9A-Fa-f]{1,4}){0,4}:((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3}))|:))|(:(((:[0-9A-Fa-f]{1,4}){1,7})|((:[0-9A-Fa-f]{1,4}){0,5}:((25[0-5
 ]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3}))|:)))(%.+)?
+IPV4 (?<![0-9])(?:(?:25[0-5]|2[0-4][0-9]|[0-1]?[0-9]{1,2})[.](?:25[0-5]|2[0-4][0-9]|[0-1]?[0-9]{1,2})[.](?:25[0-5]|2[0-4][0-9]|[0-1]?[0-9]{1,2})[.](?:25[0-5]|2[0-4][0-9]|[0-1]?[0-9]{1,2}))(?![0-9])
+IP (?:%{IPV6}|%{IPV4})
+HOSTNAME \b(?:[0-9A-Za-z][0-9A-Za-z-]{0,62})(?:\.(?:[0-9A-Za-z][0-9A-Za-z-]{0,62}))*(\.?|\b)
+HOST %{HOSTNAME}
+IPORHOST (?:%{HOSTNAME}|%{IP})
+HOSTPORT %{IPORHOST}:%{POSINT}
+
+# paths
+PATH (?:%{UNIXPATH}|%{WINPATH})
+UNIXPATH (?>/(?>[\w_%!$@:.,-]+|\\.)*)+
+TTY (?:/dev/(pts|tty([pq])?)(\w+)?/?(?:[0-9]+))
+WINPATH (?>[A-Za-z]+:|\\)(?:\\[^\\?*]*)+
+URIPROTO [A-Za-z]+(\+[A-Za-z+]+)?
+URIHOST %{IPORHOST}(?::%{POSINT:port})?
+# uripath comes loosely from RFC1738, but mostly from what Firefox
+# doesn't turn into %XX
+URIPATH (?:/[A-Za-z0-9$.+!*'(){},~:;=@#%_\-]*)+
+#URIPARAM \?(?:[A-Za-z0-9]+(?:=(?:[^&]*))?(?:&(?:[A-Za-z0-9]+(?:=(?:[^&]*))?)?)*)?
+URIPARAM \?[A-Za-z0-9$.+!*'|(){},~@#%&/=:;_?\-\[\]]*
+URIPATHPARAM %{URIPATH}(?:%{URIPARAM})?
+URI %{URIPROTO}://(?:%{USER}(?::[^@]*)?@)?(?:%{URIHOST})?(?:%{URIPATHPARAM})?
+
+# Shortcuts
+QS %{QUOTEDSTRING}
+
+# Log formats
+SYSLOGBASE %{SYSLOGTIMESTAMP:timestamp} (?:%{SYSLOGFACILITY} )?%{SYSLOGHOST:logsource} %{SYSLOGPROG}:
+COMMONAPACHELOG %{IPORHOST:clientip} %{USER:ident} %{USER:auth} \[%{HTTPDATE:timestamp}\] "(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})" %{NUMBER:response} (?:%{NUMBER:bytes}|-)
+COMBINEDAPACHELOG %{COMMONAPACHELOG} %{QS:referrer} %{QS:agent}
+		</pre>
+		</code>
+
+    </body>
+</html>

Added: nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.7.1/org.apache.nifi.grok.GrokReader/index.html
URL: http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.7.1/org.apache.nifi.grok.GrokReader/index.html?rev=1836075&view=auto
==============================================================================
--- nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.7.1/org.apache.nifi.grok.GrokReader/index.html (added)
+++ nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.7.1/org.apache.nifi.grok.GrokReader/index.html Tue Jul 17 01:35:38 2018
@@ -0,0 +1 @@
+<!DOCTYPE html><html lang="en"><head><meta charset="utf-8"></meta><title>GrokReader</title><link rel="stylesheet" href="../../../../../css/component-usage.css" type="text/css"></link></head><script type="text/javascript">window.onload = function(){if(self==top) { document.getElementById('nameHeader').style.display = "inherit"; } }</script><body><h1 id="nameHeader" style="display: none;">GrokReader</h1><h2>Description: </h2><p>Provides a mechanism for reading unstructured text data, such as log files, and structuring the data so that it can be processed. The service is configured using Grok patterns. The service reads from a stream of data and splits each message that it finds into a separate Record, each containing the fields that are configured. If a line in the input does not match the expected message pattern, the line of text is either considered to be part of the previous message or is skipped, depending on the configuration, with the exception of stack traces. A stack trace th
 at is found at the end of a log message is considered to be part of the previous message but is added to the 'stackTrace' field of the Record. If a record has no stack trace, it will have a NULL value for the stackTrace field (assuming that the schema does in fact include a stackTrace field of type String). Assuming that the schema includes a '_raw' field of type String, the raw message will be included in the Record.</p><p><a href="additionalDetails.html">Additional Details...</a></p><h3>Tags: </h3><p>grok, logs, logfiles, parse, unstructured, text, record, reader, regex, pattern, logstash</p><h3>Properties: </h3><p>In the list below, the names of required properties appear in <strong>bold</strong>. Any other properties (not in bold) are considered optional. The table also indicates any default values, and whether a property supports the <a href="../../../../../html/expression-language-guide.html">NiFi Expression Language</a>.</p><table id="properties"><tr><th>Name</th><th>Default 
 Value</th><th>Allowable Values</th><th>Description</th></tr><tr><td id="name"><strong>Schema Access Strategy</strong></td><td id="default-value">string-fields-from-grok-expression</td><td id="allowable-values"><ul><li>Use String Fields From Grok Expression <img src="../../../../../html/images/iconInfo.png" alt="The schema will be derived by using the field names present in the Grok Expression. All fields will be assumed to be of type String. Additionally, a field will be included with a name of 'stackTrace' and a type of String." title="The schema will be derived by using the field names present in the Grok Expression. All fields will be assumed to be of type String. Additionally, a field will be included with a name of 'stackTrace' and a type of String."></img></li><li>Use 'Schema Name' Property <img src="../../../../../html/images/iconInfo.png" alt="The name of the Schema to use is specified by the 'Schema Name' Property. The value of this property is used to lookup the Schema in 
 the configured Schema Registry service." title="The name of the Schema to use is specified by the 'Schema Name' Property. The value of this property is used to lookup the Schema in the configured Schema Registry service."></img></li><li>Use 'Schema Text' Property <img src="../../../../../html/images/iconInfo.png" alt="The text of the Schema itself is specified by the 'Schema Text' Property. The value of this property must be a valid Avro Schema. If Expression Language is used, the value of the 'Schema Text' property must be valid after substituting the expressions." title="The text of the Schema itself is specified by the 'Schema Text' Property. The value of this property must be a valid Avro Schema. If Expression Language is used, the value of the 'Schema Text' property must be valid after substituting the expressions."></img></li><li>HWX Schema Reference Attributes <img src="../../../../../html/images/iconInfo.png" alt="The FlowFile contains 3 Attributes that will be used to looku
 p a Schema from the configured Schema Registry: 'schema.identifier', 'schema.version', and 'schema.protocol.version'" title="The FlowFile contains 3 Attributes that will be used to lookup a Schema from the configured Schema Registry: 'schema.identifier', 'schema.version', and 'schema.protocol.version'"></img></li><li>HWX Content-Encoded Schema Reference <img src="../../../../../html/images/iconInfo.png" alt="The content of the FlowFile contains a reference to a schema in the Schema Registry service. The reference is encoded as a single byte indicating the 'protocol version', followed by 8 bytes indicating the schema identifier, and finally 4 bytes indicating the schema version, as per the Hortonworks Schema Registry serializers and deserializers, found at https://github.com/hortonworks/registry" title="The content of the FlowFile contains a reference to a schema in the Schema Registry service. The reference is encoded as a single byte indicating the 'protocol version', followed by 8
  bytes indicating the schema identifier, and finally 4 bytes indicating the schema version, as per the Hortonworks Schema Registry serializers and deserializers, found at https://github.com/hortonworks/registry"></img></li><li>Confluent Content-Encoded Schema Reference <img src="../../../../../html/images/iconInfo.png" alt="The content of the FlowFile contains a reference to a schema in the Schema Registry service. The reference is encoded as a single 'Magic Byte' followed by 4 bytes representing the identifier of the schema, as outlined at http://docs.confluent.io/current/schema-registry/docs/serializer-formatter.html. This is based on version 3.2.x of the Confluent Schema Registry." title="The content of the FlowFile contains a reference to a schema in the Schema Registry service. The reference is encoded as a single 'Magic Byte' followed by 4 bytes representing the identifier of the schema, as outlined at http://docs.confluent.io/current/schema-registry/docs/serializer-formatter.
 html. This is based on version 3.2.x of the Confluent Schema Registry."></img></li></ul></td><td id="description">Specifies how to obtain the schema that is to be used for interpreting the data.</td></tr><tr><td id="name">Schema Registry</td><td id="default-value"></td><td id="allowable-values"><strong>Controller Service API: </strong><br/>SchemaRegistry<br/><strong>Implementations: </strong><a href="../../../nifi-confluent-platform-nar/1.7.1/org.apache.nifi.confluent.schemaregistry.ConfluentSchemaRegistry/index.html">ConfluentSchemaRegistry</a><br/><a href="../../../nifi-registry-nar/1.7.1/org.apache.nifi.schemaregistry.services.AvroSchemaRegistry/index.html">AvroSchemaRegistry</a><br/><a href="../../../nifi-hwx-schema-registry-nar/1.7.1/org.apache.nifi.schemaregistry.hortonworks.HortonworksSchemaRegistry/index.html">HortonworksSchemaRegistry</a></td><td id="description">Specifies the Controller Service to use for the Schema Registry</td></tr><tr><td id="name">Schema Name</td><td i
 d="default-value">${schema.name}</td><td id="allowable-values"></td><td id="description">Specifies the name of the schema to lookup in the Schema Registry property<br/><strong>Supports Expression Language: true (will be evaluated using flow file attributes and variable registry)</strong></td></tr><tr><td id="name">Schema Version</td><td id="default-value"></td><td id="allowable-values"></td><td id="description">Specifies the version of the schema to lookup in the Schema Registry. If not specified then the latest version of the schema will be retrieved.<br/><strong>Supports Expression Language: true (will be evaluated using flow file attributes and variable registry)</strong></td></tr><tr><td id="name">Schema Branch</td><td id="default-value"></td><td id="allowable-values"></td><td id="description">Specifies the name of the branch to use when looking up the schema in the Schema Registry property. If the chosen Schema Registry does not support branching, this value will be ignored.<br
 /><strong>Supports Expression Language: true (will be evaluated using flow file attributes and variable registry)</strong></td></tr><tr><td id="name">Schema Text</td><td id="default-value">${avro.schema}</td><td id="allowable-values"></td><td id="description">The text of an Avro-formatted Schema<br/><strong>Supports Expression Language: true (will be evaluated using flow file attributes and variable registry)</strong></td></tr><tr><td id="name">Grok Pattern File</td><td id="default-value"></td><td id="allowable-values"></td><td id="description">Path to a file that contains Grok Patterns to use for parsing logs. If not specified, a built-in default Pattern file will be used. If specified, all patterns in the given pattern file will override the default patterns. See the Controller Service's Additional Details for a list of pre-defined patterns.<br/><strong>Supports Expression Language: true (will be evaluated using variable registry only)</strong></td></tr><tr><td id="name"><strong>G
 rok Expression</strong></td><td id="default-value"></td><td id="allowable-values"></td><td id="description">Specifies the format of a log line in Grok format. This allows the Record Reader to understand how to parse each log line. If a line in the log file does not match this pattern, the line will be assumed to belong to the previous log message. If other Grok expressions are referenced by this expression, they need to be supplied in the Grok Pattern File</td></tr><tr><td id="name"><strong>No Match Behavior</strong></td><td id="default-value">append-to-previous-message</td><td id="allowable-values"><ul><li>Append to Previous Message <img src="../../../../../html/images/iconInfo.png" alt="The line of text that does not match the Grok Expression will be appended to the last field of the prior message." title="The line of text that does not match the Grok Expression will be appended to the last field of the prior message."></img></li><li>Skip Line <img src="../../../../../html/images/i
 conInfo.png" alt="The line of text that does not match the Grok Expression will be skipped." title="The line of text that does not match the Grok Expression will be skipped."></img></li></ul></td><td id="description">If a line of text is encountered and it does not match the given Grok Expression, and it is not part of a stack trace, this property specifies how the text should be processed.</td></tr></table><h3>State management: </h3>This component does not store state.<h3>Restricted: </h3>This component is not restricted.<h3>System Resource Considerations:</h3>None specified.</body></html>
\ No newline at end of file

Added: nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.7.1/org.apache.nifi.json.JsonPathReader/additionalDetails.html
URL: http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.7.1/org.apache.nifi.json.JsonPathReader/additionalDetails.html?rev=1836075&view=auto
==============================================================================
--- nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.7.1/org.apache.nifi.json.JsonPathReader/additionalDetails.html (added)
+++ nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.7.1/org.apache.nifi.json.JsonPathReader/additionalDetails.html Tue Jul 17 01:35:38 2018
@@ -0,0 +1,246 @@
+<!DOCTYPE html>
+<html lang="en">
+    <!--
+      Licensed to the Apache Software Foundation (ASF) under one or more
+      contributor license agreements.  See the NOTICE file distributed with
+      this work for additional information regarding copyright ownership.
+      The ASF licenses this file to You under the Apache License, Version 2.0
+      (the "License"); you may not use this file except in compliance with
+      the License.  You may obtain a copy of the License at
+          http://www.apache.org/licenses/LICENSE-2.0
+      Unless required by applicable law or agreed to in writing, software
+      distributed under the License is distributed on an "AS IS" BASIS,
+      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+      See the License for the specific language governing permissions and
+      limitations under the License.
+    -->
+    <head>
+        <meta charset="utf-8"/>
+        <title>JsonPathReader</title>
+        <link rel="stylesheet" href="../../../../../css/component-usage.css" type="text/css"/>
+    </head>
+
+    <body>
+        <p>
+        	The JsonPathReader Controller Service parses FlowFiles that are in the JSON format. User-defined properties
+        	specify how to extract all relevant fields from the JSON in order to create a Record. The Controller
+        	Service will not be valid unless at least one JSON Path is provided. Unlike the
+        	<a href="../org.apache.nifi.json.JsonTreeReader/additionalDetails.html">JsonTreeReader</a> Controller Service, this
+        	service will return a record that contains only those fields that have been configured via JSON Path.
+        </p>
+        
+        <p>
+        	If the root of the FlowFile's JSON is a JSON Array, each JSON Object found in that array will be treated as a separate
+        	Record, not as a single record made up of an array. If the root of the FlowFile's JSON is a JSON Object, it will be
+        	evaluated as a single Record.
+        </p>
+        
+        <p>
+        	Supplying a JSON Path is accomplished by adding a user-defined property where the name of the property becomes the name
+        	of the field in the Record that is returned. The value of the property must be a valid JSON Path expression. This JSON Path
+        	will be evaluated against each top-level JSON Object in the FlowFile, and the result will be the value of the field whose
+        	name is specified by the property name. If any JSON Path is given but no field is present in the Schema with the proper name,
+        	then the field will be skipped.
+        </p>
+        
+		<p>
+			This Controller Service must be configured with a schema. Each JSON Path that is evaluated and is found in the "root level"
+			of the schema will produce a Field in the Record. I.e., the schema should match the Record that is created by evaluating all
+			of the JSON Paths. It should not match the "incoming JSON" that is read from the FlowFile.
+		</p>
+
+
+		<h2>Schemas and Type Coercion</h2>
+		
+		<p>
+			When a record is parsed from incoming data, it is separated into fields. Each of these fields is then looked up against the
+			configured schema (by field name) in order to determine what the type of the data should be. If the field is not present in
+			the schema, that field is omitted from the Record. If the field is found in the schema, the data type of the received data
+			is compared against the data type specified in the schema. If the types match, the value of that field is used as-is. If the
+			schema indicates that the field should be of a different type, then the Controller Service will attempt to coerce the data
+			into the type specified by the schema. If the field cannot be coerced into the specified type, an Exception will be thrown.
+		</p>
+		
+		<p>
+			The following rules apply when attempting to coerce a field value from one data type to another:
+		</p>
+			
+		<ul>
+			<li>Any data type can be coerced into a String type.</li>
+			<li>Any numeric data type (Byte, Short, Int, Long, Float, Double) can be coerced into any other numeric data type.</li>
+			<li>Any numeric value can be coerced into a Date, Time, or Timestamp type by treating the value as the number of
+			milliseconds since epoch (Midnight GMT, January 1, 1970).</li>
+			<li>A String value can be coerced into a Date, Time, or Timestamp type, if its format matches the configured "Date Format," "Time Format,"
+				or "Timestamp Format."</li>
+			<li>A String value can be coerced into a numeric value if the String is a valid representation of the target numeric type. For example, the String value
+				<code>8</code> can be coerced into any numeric type. However, the String value <code>8.2</code> can be coerced into a Double or Float
+				type but not an Integer.</li>
+			<li>A String value of "true" or "false" (regardless of case) can be coerced into a Boolean value.</li>
+			<li>A String value that is not empty can be coerced into a Char type. If the String contains more than 1 character, the first character is used
+				and the rest of the characters are ignored.</li>
+			<li>Any "date/time" type (Date, Time, Timestamp) can be coerced into any other "date/time" type.</li>
+			<li>Any "date/time" type can be coerced into a Long type, representing the number of milliseconds since epoch (Midnight GMT, January 1, 1970).</li>
+			<li>Any "date/time" type can be coerced into a String. The format of the String is whatever DateFormat is configured for the corresponding
+				property (Date Format, Time Format, Timestamp Format property). If no value is specified, then the value will be converted into a String
+				representation of the number of milliseconds since epoch (Midnight GMT, January 1, 1970).</li>
+		</ul>
+		
+		<p>
+			If none of the above rules apply when attempting to coerce a value from one data type to another, the coercion will fail and an Exception
+			will be thrown.
+		</p>
+		
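+		<p>
+			As a simple illustration of these rules, suppose (hypothetically) that the configured schema declares an "id" field
+			of type int and a "registered" field of type boolean, while the incoming JSON supplies both values as Strings:
+		</p>
+		
+		<code>
+		<pre>
+{
+    "id": "8",
+    "registered": "true"
+}
+		</pre>
+		</code>
+		
+		<p>
+			In this case, the String value "8" would be coerced into the int value 8, and the String value "true" would be coerced
+			into the boolean value true. If the incoming value for "id" were instead "8.2", it could not be coerced into an int,
+			and an Exception would be thrown.
+		</p>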
+		
+
+		<h2>Examples</h2>
+
+        <p>
+        	As an example, consider a FlowFile whose content contains the following JSON:
+        </p>
+        
+        <code>
+        <pre>
+[{
+    "id": 17,
+    "name": "John",
+    "child": {
+        "id": "1"
+    },
+    "siblingIds": [4, 8],
+    "siblings": [
+        { "name": "Jeremy", "id": 4 },
+        { "name": "Julia", "id": 8}
+    ]
+  },
+  {
+    "id": 98,
+    "name": "Jane",
+    "child": {
+        "id": 2
+    },
+    "gender": "F",
+    "siblingIds": [],
+    "siblings": []
+  }]
+		</pre>
+        </code>
+        
+        <p>
+    		And the following schema has been configured:
+        </p>
+        
+        <code>
+        <pre>
+{
+	"namespace": "nifi",
+	"name": "person",
+	"type": "record",
+	"fields": [
+		{ "name": "id", "type": "int" },
+		{ "name": "name", "type": "string" },
+		{ "name": "childId", "type": "long" },
+		{ "name": "gender", "type": "string" },
+		{ "name": "siblingNames", "type": {
+			"type": "array",
+			"items": "string"
+		}}
+	]
+}
+        </pre>
+        </code>
+        
+        <p>
+        	If we configure this Controller Service with the following user-defined properties:
+        	
+        	<table>
+        		<tr>
+        			<th>Property Name</th>
+        			<th>Property Value</th>
+        		</tr>
+    			<tr>
+    				<td>id</td>
+    				<td><code>$.id</code></td>
+    			</tr>
+    			<tr>
+    				<td>name</td>
+    				<td><code>$.name</code></td>
+    			</tr>
+    			<tr>
+    				<td>childId</td>
+    				<td><code>$.child.id</code></td>
+    			</tr>
+    			<tr>
+    				<td>gender</td>
+    				<td><code>$.gender</code></td>
+    			</tr>
+    			<tr>
+    				<td>siblingNames</td>
+    				<td><code>$.siblings[*].name</code></td>
+    			</tr>
+        	</table>
+        </p>
+        
+		<p>
+			In this case, the FlowFile will generate two Records. The first record will consist of the following key/value pairs:
+
+        	<table>
+        		<tr>
+	    			<th>Field Name</th>
+	    			<th>Field Value</th>
+				</tr>
+    			<tr>
+    				<td>id</td>
+    				<td>17</td>
+    			</tr>
+    			<tr>
+    				<td>name</td>
+    				<td>John</td>
+    			</tr>
+    			<tr>
+    				<td>childId</td>
+    				<td>1</td>
+    			</tr>
+    			<tr>
+    				<td>gender</td>
+    				<td><i>null</i></td>
+    			</tr>
+    			<tr>
+    				<td>siblingNames</td>
+    				<td><i>array of two elements: </i><code>Jeremy</code><i> and </i><code>Julia</code></td>
+    			</tr>
+			</table>
+		</p>
+		
+		<p>
+			The second record will consist of the following key/value pairs:
+
+        	<table>
+        		<tr>
+        			<th>Field Name</th>
+        			<th>Field Value</th>
+        		</tr>
+    			<tr>
+    				<td>id</td>
+    				<td>98</td>
+    			</tr>
+    			<tr>
+    				<td>name</td>
+    				<td>Jane</td>
+    			</tr>
+    			<tr>
+    				<td>childId</td>
+    				<td>2</td>
+    			</tr>
+    			<tr>
+    				<td>gender</td>
+    				<td>F</td>
+    			</tr>
+    			<tr>
+    				<td>siblingNames</td>
+    				<td><i>empty array</i></td>
+    			</tr>
+			</table>
+		</p>
+		
+    </body>
+</html>

Added: nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.7.1/org.apache.nifi.json.JsonPathReader/index.html
URL: http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.7.1/org.apache.nifi.json.JsonPathReader/index.html?rev=1836075&view=auto
==============================================================================
--- nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.7.1/org.apache.nifi.json.JsonPathReader/index.html (added)
+++ nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.7.1/org.apache.nifi.json.JsonPathReader/index.html Tue Jul 17 01:35:38 2018
@@ -0,0 +1 @@
+<!DOCTYPE html><html lang="en"><head><meta charset="utf-8"></meta><title>JsonPathReader</title><link rel="stylesheet" href="../../../../../css/component-usage.css" type="text/css"></link></head><script type="text/javascript">window.onload = function(){if(self==top) { document.getElementById('nameHeader').style.display = "inherit"; } }</script><body><h1 id="nameHeader" style="display: none;">JsonPathReader</h1><h2>Description: </h2><p>Parses JSON records and evaluates user-defined JSON Path's against each JSON object. While the reader expects each record to be well-formed JSON, the content of a FlowFile may consist of many records, each as a well-formed JSON array or JSON object with optional whitespace between them, such as the common 'JSON-per-line' format. If an array is encountered, each element in that array will be treated as a separate record. User-defined properties define the fields that should be extracted from the JSON in order to form the fields of a Record. Any JSON fiel
 d that is not extracted via a JSONPath will not be returned in the JSON Records.</p><p><a href="additionalDetails.html">Additional Details...</a></p><h3>Tags: </h3><p>json, jsonpath, record, reader, parser</p><h3>Properties: </h3><p>In the list below, the names of required properties appear in <strong>bold</strong>. Any other properties (not in bold) are considered optional. The table also indicates any default values, and whether a property supports the <a href="../../../../../html/expression-language-guide.html">NiFi Expression Language</a>.</p><table id="properties"><tr><th>Name</th><th>Default Value</th><th>Allowable Values</th><th>Description</th></tr><tr><td id="name"><strong>Schema Access Strategy</strong></td><td id="default-value">schema-name</td><td id="allowable-values"><ul><li>Use 'Schema Name' Property <img src="../../../../../html/images/iconInfo.png" alt="The name of the Schema to use is specified by the 'Schema Name' Property. The value of this property is used to lo
 okup the Schema in the configured Schema Registry service." title="The name of the Schema to use is specified by the 'Schema Name' Property. The value of this property is used to lookup the Schema in the configured Schema Registry service."></img></li><li>Use 'Schema Text' Property <img src="../../../../../html/images/iconInfo.png" alt="The text of the Schema itself is specified by the 'Schema Text' Property. The value of this property must be a valid Avro Schema. If Expression Language is used, the value of the 'Schema Text' property must be valid after substituting the expressions." title="The text of the Schema itself is specified by the 'Schema Text' Property. The value of this property must be a valid Avro Schema. If Expression Language is used, the value of the 'Schema Text' property must be valid after substituting the expressions."></img></li><li>HWX Schema Reference Attributes <img src="../../../../../html/images/iconInfo.png" alt="The FlowFile contains 3 Attributes that wi
 ll be used to lookup a Schema from the configured Schema Registry: 'schema.identifier', 'schema.version', and 'schema.protocol.version'" title="The FlowFile contains 3 Attributes that will be used to lookup a Schema from the configured Schema Registry: 'schema.identifier', 'schema.version', and 'schema.protocol.version'"></img></li><li>HWX Content-Encoded Schema Reference <img src="../../../../../html/images/iconInfo.png" alt="The content of the FlowFile contains a reference to a schema in the Schema Registry service. The reference is encoded as a single byte indicating the 'protocol version', followed by 8 bytes indicating the schema identifier, and finally 4 bytes indicating the schema version, as per the Hortonworks Schema Registry serializers and deserializers, found at https://github.com/hortonworks/registry" title="The content of the FlowFile contains a reference to a schema in the Schema Registry service. The reference is encoded as a single byte indicating the 'protocol vers
 ion', followed by 8 bytes indicating the schema identifier, and finally 4 bytes indicating the schema version, as per the Hortonworks Schema Registry serializers and deserializers, found at https://github.com/hortonworks/registry"></img></li><li>Confluent Content-Encoded Schema Reference <img src="../../../../../html/images/iconInfo.png" alt="The content of the FlowFile contains a reference to a schema in the Schema Registry service. The reference is encoded as a single 'Magic Byte' followed by 4 bytes representing the identifier of the schema, as outlined at http://docs.confluent.io/current/schema-registry/docs/serializer-formatter.html. This is based on version 3.2.x of the Confluent Schema Registry." title="The content of the FlowFile contains a reference to a schema in the Schema Registry service. The reference is encoded as a single 'Magic Byte' followed by 4 bytes representing the identifier of the schema, as outlined at http://docs.confluent.io/current/schema-registry/docs/se
 rializer-formatter.html. This is based on version 3.2.x of the Confluent Schema Registry."></img></li></ul></td><td id="description">Specifies how to obtain the schema that is to be used for interpreting the data.</td></tr><tr><td id="name">Schema Registry</td><td id="default-value"></td><td id="allowable-values"><strong>Controller Service API: </strong><br/>SchemaRegistry<br/><strong>Implementations: </strong><a href="../../../nifi-confluent-platform-nar/1.7.1/org.apache.nifi.confluent.schemaregistry.ConfluentSchemaRegistry/index.html">ConfluentSchemaRegistry</a><br/><a href="../../../nifi-registry-nar/1.7.1/org.apache.nifi.schemaregistry.services.AvroSchemaRegistry/index.html">AvroSchemaRegistry</a><br/><a href="../../../nifi-hwx-schema-registry-nar/1.7.1/org.apache.nifi.schemaregistry.hortonworks.HortonworksSchemaRegistry/index.html">HortonworksSchemaRegistry</a></td><td id="description">Specifies the Controller Service to use for the Schema Registry</td></tr><tr><td id="name">Sc
 hema Name</td><td id="default-value">${schema.name}</td><td id="allowable-values"></td><td id="description">Specifies the name of the schema to lookup in the Schema Registry property<br/><strong>Supports Expression Language: true (will be evaluated using flow file attributes and variable registry)</strong></td></tr><tr><td id="name">Schema Version</td><td id="default-value"></td><td id="allowable-values"></td><td id="description">Specifies the version of the schema to lookup in the Schema Registry. If not specified then the latest version of the schema will be retrieved.<br/><strong>Supports Expression Language: true (will be evaluated using flow file attributes and variable registry)</strong></td></tr><tr><td id="name">Schema Branch</td><td id="default-value"></td><td id="allowable-values"></td><td id="description">Specifies the name of the branch to use when looking up the schema in the Schema Registry property. If the chosen Schema Registry does not support branching, this value 
 will be ignored.<br/><strong>Supports Expression Language: true (will be evaluated using flow file attributes and variable registry)</strong></td></tr><tr><td id="name">Schema Text</td><td id="default-value">${avro.schema}</td><td id="allowable-values"></td><td id="description">The text of an Avro-formatted Schema<br/><strong>Supports Expression Language: true (will be evaluated using flow file attributes and variable registry)</strong></td></tr><tr><td id="name">Date Format</td><td id="default-value"></td><td id="allowable-values"></td><td id="description">Specifies the format to use when reading/writing Date fields. If not specified, Date fields will be assumed to be number of milliseconds since epoch (Midnight, Jan 1, 1970 GMT). If specified, the value must match the Java Simple Date Format (for example, MM/dd/yyyy for a two-digit month, followed by a two-digit day, followed by a four-digit year, all separated by '/' characters, as in 01/01/2017).</td></tr><tr><td id="name">Time 
 Format</td><td id="default-value"></td><td id="allowable-values"></td><td id="description">Specifies the format to use when reading/writing Time fields. If not specified, Time fields will be assumed to be number of milliseconds since epoch (Midnight, Jan 1, 1970 GMT). If specified, the value must match the Java Simple Date Format (for example, HH:mm:ss for a two-digit hour in 24-hour format, followed by a two-digit minute, followed by a two-digit second, all separated by ':' characters, as in 18:04:15).</td></tr><tr><td id="name">Timestamp Format</td><td id="default-value"></td><td id="allowable-values"></td><td id="description">Specifies the format to use when reading/writing Timestamp fields. If not specified, Timestamp fields will be assumed to be number of milliseconds since epoch (Midnight, Jan 1, 1970 GMT). If specified, the value must match the Java Simple Date Format (for example, MM/dd/yyyy HH:mm:ss for a two-digit month, followed by a two-digit day, followed by a four-digi
 t year, all separated by '/' characters; and then followed by a two-digit hour in 24-hour format, followed by a two-digit minute, followed by a two-digit second, all separated by ':' characters, as in 01/01/2017 18:04:15).</td></tr></table><h3>Dynamic Properties: </h3><p>Dynamic Properties allow the user to specify both the name and value of a property.<table id="dynamic-properties"><tr><th>Name</th><th>Value</th><th>Description</th></tr><tr><td id="name">The field name for the record.</td><td id="value">A JSONPath Expression that will be evaluated against each JSON record. The result of the JSONPath will be the value of the field whose name is the same as the property name.</td><td>User-defined properties identify how to extract specific fields from a JSON object in order to create a Record<br/><strong>Supports Expression Language: false</strong></td></tr></table></p><h3>State management: </h3>This component does not store state.<h3>Restricted: </h3>This component is not restricted
 .<h3>System Resource Considerations:</h3>None specified.<h3>See Also:</h3><p><a href="../org.apache.nifi.json.JsonTreeReader/index.html">JsonTreeReader</a></p></body></html>
\ No newline at end of file

Added: nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.7.1/org.apache.nifi.json.JsonRecordSetWriter/index.html
URL: http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.7.1/org.apache.nifi.json.JsonRecordSetWriter/index.html?rev=1836075&view=auto
==============================================================================
--- nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.7.1/org.apache.nifi.json.JsonRecordSetWriter/index.html (added)
+++ nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.7.1/org.apache.nifi.json.JsonRecordSetWriter/index.html Tue Jul 17 01:35:38 2018
@@ -0,0 +1 @@
+<!DOCTYPE html><html lang="en"><head><meta charset="utf-8"></meta><title>JsonRecordSetWriter</title><link rel="stylesheet" href="../../../../../css/component-usage.css" type="text/css"></link></head><script type="text/javascript">window.onload = function(){if(self==top) { document.getElementById('nameHeader').style.display = "inherit"; } }</script><body><h1 id="nameHeader" style="display: none;">JsonRecordSetWriter</h1><h2>Description: </h2><p>Writes the results of a RecordSet as either a JSON Array or one JSON object per line. If using Array output, then even if the RecordSet consists of a single row, it will be written as an array with a single element. If using One Line Per Object output, the JSON objects cannot be pretty-printed.</p><h3>Tags: </h3><p>json, resultset, writer, serialize, record, recordset, row</p><h3>Properties: </h3><p>In the list below, the names of required properties appear in <strong>bold</strong>. Any other properties (not in bold) are considered optional. T
 he table also indicates any default values, and whether a property supports the <a href="../../../../../html/expression-language-guide.html">NiFi Expression Language</a>.</p><table id="properties"><tr><th>Name</th><th>Default Value</th><th>Allowable Values</th><th>Description</th></tr><tr><td id="name"><strong>Schema Write Strategy</strong></td><td id="default-value">schema-name</td><td id="allowable-values"><ul><li>Set 'schema.name' Attribute <img src="../../../../../html/images/iconInfo.png" alt="The FlowFile will be given an attribute named 'schema.name' and this attribute will indicate the name of the schema in the Schema Registry. Note that if the schema for a record is not obtained from a Schema Registry, then no attribute will be added." title="The FlowFile will be given an attribute named 'schema.name' and this attribute will indicate the name of the schema in the Schema Registry. Note that if the schema for a record is not obtained from a Schema Registry, then no attribute wi
 ll be added."></img></li><li>Set 'avro.schema' Attribute <img src="../../../../../html/images/iconInfo.png" alt="The FlowFile will be given an attribute named 'avro.schema' and this attribute will contain the Avro Schema that describes the records in the FlowFile. The contents of the FlowFile need not be Avro, but the text of the schema will be used." title="The FlowFile will be given an attribute named 'avro.schema' and this attribute will contain the Avro Schema that describes the records in the FlowFile. The contents of the FlowFile need not be Avro, but the text of the schema will be used."></img></li><li>HWX Schema Reference Attributes <img src="../../../../../html/images/iconInfo.png" alt="The FlowFile will be given a set of 3 attributes to describe the schema: 'schema.identifier', 'schema.version', and 'schema.protocol.version'. Note that if the schema for a record does not contain the necessary identifier and version, an Exception will be thrown when attempting to write the 
 data." title="The FlowFile will be given a set of 3 attributes to describe the schema: 'schema.identifier', 'schema.version', and 'schema.protocol.version'. Note that if the schema for a record does not contain the necessary identifier and version, an Exception will be thrown when attempting to write the data."></img></li><li>HWX Content-Encoded Schema Reference <img src="../../../../../html/images/iconInfo.png" alt="The content of the FlowFile will contain a reference to a schema in the Schema Registry service. The reference is encoded as a single byte indicating the 'protocol version', followed by 8 bytes indicating the schema identifier, and finally 4 bytes indicating the schema version, as per the Hortonworks Schema Registry serializers and deserializers, as found at https://github.com/hortonworks/registry. This will be prepended to each FlowFile. Note that if the schema for a record does not contain the necessary identifier and version, an Exception will be thrown when attempti
 ng to write the data." title="The content of the FlowFile will contain a reference to a schema in the Schema Registry service. The reference is encoded as a single byte indicating the 'protocol version', followed by 8 bytes indicating the schema identifier, and finally 4 bytes indicating the schema version, as per the Hortonworks Schema Registry serializers and deserializers, as found at https://github.com/hortonworks/registry. This will be prepended to each FlowFile. Note that if the schema for a record does not contain the necessary identifier and version, an Exception will be thrown when attempting to write the data."></img></li><li>Confluent Schema Registry Reference <img src="../../../../../html/images/iconInfo.png" alt="The content of the FlowFile will contain a reference to a schema in the Schema Registry service. The reference is encoded as a single 'Magic Byte' followed by 4 bytes representing the identifier of the schema, as outlined at http://docs.confluent.io/current/sch
 ema-registry/docs/serializer-formatter.html. This will be prepended to each FlowFile. Note that if the schema for a record does not contain the necessary identifier and version, an Exception will be thrown when attempting to write the data. This is based on the encoding used by version 3.2.x of the Confluent Schema Registry." title="The content of the FlowFile will contain a reference to a schema in the Schema Registry service. The reference is encoded as a single 'Magic Byte' followed by 4 bytes representing the identifier of the schema, as outlined at http://docs.confluent.io/current/schema-registry/docs/serializer-formatter.html. This will be prepended to each FlowFile. Note that if the schema for a record does not contain the necessary identifier and version, an Exception will be thrown when attempting to write the data. This is based on the encoding used by version 3.2.x of the Confluent Schema Registry."></img></li><li>Do Not Write Schema <img src="../../../../../html/images/i
 conInfo.png" alt="Do not add any schema-related information to the FlowFile." title="Do not add any schema-related information to the FlowFile."></img></li></ul></td><td id="description">Specifies how the schema for a Record should be added to the data.</td></tr><tr><td id="name"><strong>Schema Access Strategy</strong></td><td id="default-value">inherit-record-schema</td><td id="allowable-values"><ul><li>Use 'Schema Name' Property <img src="../../../../../html/images/iconInfo.png" alt="The name of the Schema to use is specified by the 'Schema Name' Property. The value of this property is used to lookup the Schema in the configured Schema Registry service." title="The name of the Schema to use is specified by the 'Schema Name' Property. The value of this property is used to lookup the Schema in the configured Schema Registry service."></img></li><li>Inherit Record Schema <img src="../../../../../html/images/iconInfo.png" alt="The schema used to write records will be the same schema t
 hat was given to the Record when the Record was created." title="The schema used to write records will be the same schema that was given to the Record when the Record was created."></img></li><li>Use 'Schema Text' Property <img src="../../../../../html/images/iconInfo.png" alt="The text of the Schema itself is specified by the 'Schema Text' Property. The value of this property must be a valid Avro Schema. If Expression Language is used, the value of the 'Schema Text' property must be valid after substituting the expressions." title="The text of the Schema itself is specified by the 'Schema Text' Property. The value of this property must be a valid Avro Schema. If Expression Language is used, the value of the 'Schema Text' property must be valid after substituting the expressions."></img></li></ul></td><td id="description">Specifies how to obtain the schema that is to be used for interpreting the data.</td></tr><tr><td id="name">Schema Registry</td><td id="default-value"></td><td id=
 "allowable-values"><strong>Controller Service API: </strong><br/>SchemaRegistry<br/><strong>Implementations: </strong><a href="../../../nifi-confluent-platform-nar/1.7.1/org.apache.nifi.confluent.schemaregistry.ConfluentSchemaRegistry/index.html">ConfluentSchemaRegistry</a><br/><a href="../../../nifi-registry-nar/1.7.1/org.apache.nifi.schemaregistry.services.AvroSchemaRegistry/index.html">AvroSchemaRegistry</a><br/><a href="../../../nifi-hwx-schema-registry-nar/1.7.1/org.apache.nifi.schemaregistry.hortonworks.HortonworksSchemaRegistry/index.html">HortonworksSchemaRegistry</a></td><td id="description">Specifies the Controller Service to use for the Schema Registry</td></tr><tr><td id="name">Schema Name</td><td id="default-value">${schema.name}</td><td id="allowable-values"></td><td id="description">Specifies the name of the schema to lookup in the Schema Registry property<br/><strong>Supports Expression Language: true (will be evaluated using flow file attributes and variable registr
 y)</strong></td></tr><tr><td id="name">Schema Version</td><td id="default-value"></td><td id="allowable-values"></td><td id="description">Specifies the version of the schema to lookup in the Schema Registry. If not specified then the latest version of the schema will be retrieved.<br/><strong>Supports Expression Language: true (will be evaluated using flow file attributes and variable registry)</strong></td></tr><tr><td id="name">Schema Branch</td><td id="default-value"></td><td id="allowable-values"></td><td id="description">Specifies the name of the branch to use when looking up the schema in the Schema Registry property. If the chosen Schema Registry does not support branching, this value will be ignored.<br/><strong>Supports Expression Language: true (will be evaluated using flow file attributes and variable registry)</strong></td></tr><tr><td id="name">Schema Text</td><td id="default-value">${avro.schema}</td><td id="allowable-values"></td><td id="description">The text of an Av
 ro-formatted Schema<br/><strong>Supports Expression Language: true (will be evaluated using flow file attributes and variable registry)</strong></td></tr><tr><td id="name">Date Format</td><td id="default-value"></td><td id="allowable-values"></td><td id="description">Specifies the format to use when reading/writing Date fields. If not specified, Date fields will be assumed to be number of milliseconds since epoch (Midnight, Jan 1, 1970 GMT). If specified, the value must match the Java Simple Date Format (for example, MM/dd/yyyy for a two-digit month, followed by a two-digit day, followed by a four-digit year, all separated by '/' characters, as in 01/01/2017).</td></tr><tr><td id="name">Time Format</td><td id="default-value"></td><td id="allowable-values"></td><td id="description">Specifies the format to use when reading/writing Time fields. If not specified, Time fields will be assumed to be number of milliseconds since epoch (Midnight, Jan 1, 1970 GMT). If specified, the value mus
 t match the Java Simple Date Format (for example, HH:mm:ss for a two-digit hour in 24-hour format, followed by a two-digit minute, followed by a two-digit second, all separated by ':' characters, as in 18:04:15).</td></tr><tr><td id="name">Timestamp Format</td><td id="default-value"></td><td id="allowable-values"></td><td id="description">Specifies the format to use when reading/writing Timestamp fields. If not specified, Timestamp fields will be assumed to be number of milliseconds since epoch (Midnight, Jan 1, 1970 GMT). If specified, the value must match the Java Simple Date Format (for example, MM/dd/yyyy HH:mm:ss for a two-digit month, followed by a two-digit day, followed by a four-digit year, all separated by '/' characters; and then followed by a two-digit hour in 24-hour format, followed by a two-digit minute, followed by a two-digit second, all separated by ':' characters, as in 01/01/2017 18:04:15).</td></tr><tr><td id="name"><strong>Pretty Print JSON</strong></td><td id=
 "default-value">false</td><td id="allowable-values"><ul><li>true</li><li>false</li></ul></td><td id="description">Specifies whether or not the JSON should be pretty printed</td></tr><tr><td id="name"><strong>Suppress Null Values</strong></td><td id="default-value">never-suppress</td><td id="allowable-values"><ul><li>Never Suppress <img src="../../../../../html/images/iconInfo.png" alt="Fields that are missing (present in the schema but not in the record), or that have a value of null, will be written out as a null value" title="Fields that are missing (present in the schema but not in the record), or that have a value of null, will be written out as a null value"></img></li><li>Always Suppress <img src="../../../../../html/images/iconInfo.png" alt="Fields that are missing (present in the schema but not in the record), or that have a value of null, will not be written out" title="Fields that are missing (present in the schema but not in the record), or that have a value of null, will
  not be written out"></img></li><li>Suppress Missing Values <img src="../../../../../html/images/iconInfo.png" alt="When a field has a value of null, it will be written out. However, if a field is defined in the schema and not present in the record, the field will not be written out." title="When a field has a value of null, it will be written out. However, if a field is defined in the schema and not present in the record, the field will not be written out."></img></li></ul></td><td id="description">Specifies how the writer should handle a null field</td></tr><tr><td id="name"><strong>Output Grouping</strong></td><td id="default-value">output-array</td><td id="allowable-values"><ul><li>Array <img src="../../../../../html/images/iconInfo.png" alt="Output records as a JSON array" title="Output records as a JSON array"></img></li><li>One Line Per Object <img src="../../../../../html/images/iconInfo.png" alt="Output records with one JSON object per line, delimited by a newline character
 " title="Output records with one JSON object per line, delimited by a newline character"></img></li></ul></td><td id="description">Specifies how the writer should output the JSON records (as an array or one object per line, e.g.) Note that if 'One Line Per Object' is selected, then Pretty Print JSON must be false.</td></tr></table><h3>State management: </h3>This component does not store state.<h3>Restricted: </h3>This component is not restricted.<h3>System Resource Considerations:</h3>None specified.</body></html>
\ No newline at end of file