Posted to commits@nifi.apache.org by jo...@apache.org on 2020/01/23 03:48:24 UTC

svn commit: r1873052 [28/49] - in /nifi/site/trunk/docs/nifi-docs: ./ components/org.apache.nifi/nifi-ambari-nar/1.11.0/ components/org.apache.nifi/nifi-ambari-nar/1.11.0/org.apache.nifi.reporting.ambari.AmbariReportingTask/ components/org.apache.nifi/...

Added: nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.11.0/org.apache.nifi.json.JsonTreeReader/additionalDetails.html
URL: http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.11.0/org.apache.nifi.json.JsonTreeReader/additionalDetails.html?rev=1873052&view=auto
==============================================================================
--- nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.11.0/org.apache.nifi.json.JsonTreeReader/additionalDetails.html (added)
+++ nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.11.0/org.apache.nifi.json.JsonTreeReader/additionalDetails.html Thu Jan 23 03:48:17 2020
@@ -0,0 +1,333 @@
+<!DOCTYPE html>
+<html lang="en">
+    <!--
+      Licensed to the Apache Software Foundation (ASF) under one or more
+      contributor license agreements.  See the NOTICE file distributed with
+      this work for additional information regarding copyright ownership.
+      The ASF licenses this file to You under the Apache License, Version 2.0
+      (the "License"); you may not use this file except in compliance with
+      the License.  You may obtain a copy of the License at
+          http://www.apache.org/licenses/LICENSE-2.0
+      Unless required by applicable law or agreed to in writing, software
+      distributed under the License is distributed on an "AS IS" BASIS,
+      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+      See the License for the specific language governing permissions and
+      limitations under the License.
+    -->
+    <head>
+        <meta charset="utf-8"/>
+        <title>JsonTreeReader</title>
+        <link rel="stylesheet" href="../../../../../css/component-usage.css" type="text/css"/>
+    </head>
+
+    <body>
+        <p>
+        	The JsonTreeReader Controller Service reads a JSON Object and creates a Record object for the entire
+        	JSON Object tree. The Controller Service must be configured with a Schema that describes the structure
+        	of the JSON data. If any field exists in the JSON that is not in the schema, that field will be skipped.
+        	If the schema contains a field for which no JSON field exists, a null value will be used in the Record
+        	(or the default value defined in the schema, if applicable).
+        </p>
+
+        <p>
+        	If the root element of the JSON is a JSON Array, each JSON Object within that array will be treated as
+        	its own separate Record. If the root element is a JSON Object, the entire JSON will be treated as a single
+        	Record.
+        </p>
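+
+        <p>
+            For illustration, a hypothetical minimal example (not one of the worked examples below): the first snippet, whose root element
+            is a JSON Array, yields two Records, while the second, whose root element is a JSON Object, yields a single Record.
+        </p>
+<code><pre>
+[ { "name": "John" }, { "name": "Jane" } ]
+</pre></code>
+<code><pre>
+{ "name": "John" }
+</pre></code>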
+
+
+		<h2>Schemas and Type Coercion</h2>
+
+		<p>
+			When a record is parsed from incoming data, it is separated into fields. Each of these fields is then looked up against the
+			configured schema (by field name) in order to determine what the type of the data should be. If the field is not present in
+			the schema, that field is omitted from the Record. If the field is found in the schema, the data type of the received data
+			is compared against the data type specified in the schema. If the types match, the value of that field is used as-is. If the
+			schema indicates that the field should be of a different type, then the Controller Service will attempt to coerce the data
+			into the type specified by the schema. If the field cannot be coerced into the specified type, an Exception will be thrown.
+		</p>
+
+		<p>
+			The following rules apply when attempting to coerce a field value from one data type to another:
+		</p>
+
+		<ul>
+			<li>Any data type can be coerced into a String type.</li>
+			<li>Any numeric data type (Byte, Short, Int, Long, Float, Double) can be coerced into any other numeric data type.</li>
+			<li>Any numeric value can be coerced into a Date, Time, or Timestamp type by treating the value as the number of
+			milliseconds since epoch (Midnight GMT, January 1, 1970).</li>
+			<li>A String value can be coerced into a Date, Time, or Timestamp type, if its format matches the configured "Date Format," "Time Format,"
+				or "Timestamp Format."</li>
+			<li>A String value can be coerced into a numeric value if the value is of the appropriate type. For example, the String value
+				<code>8</code> can be coerced into any numeric type. However, the String value <code>8.2</code> can be coerced into a Double or Float
+				type but not an Integer.</li>
+			<li>A String value of "true" or "false" (regardless of case) can be coerced into a Boolean value.</li>
+			<li>A String value that is not empty can be coerced into a Char type. If the String contains more than 1 character, the first character is used
+				and the rest of the characters are ignored.</li>
+			<li>Any "date/time" type (Date, Time, Timestamp) can be coerced into any other "date/time" type.</li>
+			<li>Any "date/time" type can be coerced into a Long type, representing the number of milliseconds since epoch (Midnight GMT, January 1, 1970).</li>
+			<li>Any "date/time" type can be coerced into a String. The format of the String is whatever DateFormat is configured for the corresponding
+				property (Date Format, Time Format, Timestamp Format property). If no value is specified, then the value will be converted into a String
+				representation of the number of milliseconds since epoch (Midnight GMT, January 1, 1970).</li>
+		</ul>
+
+		<p>
+			If none of the above rules apply when attempting to coerce a value from one data type to another, the coercion will fail and an Exception
+			will be thrown.
+		</p>
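+
+		<p>
+			As a brief, hypothetical illustration of these rules (the field names here are invented for this example), assume the configured
+			schema declares "id" as an INT, "price" as a STRING, and "active" as a BOOLEAN, and the following JSON is read:
+		</p>
+<code><pre>
+{
+    "id": "8",
+    "price": 8.20,
+    "active": "true"
+}
+</pre></code>
+		<p>
+			The String value "8" would be coerced into the Integer 8, the numeric value 8.20 would be coerced into a String representation
+			such as "8.2", and the String "true" would be coerced into the Boolean true. A value such as "8.2" for the "id" field, however,
+			could not be coerced into an Integer, and an Exception would be thrown.
+		</p>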
+
+
+
+        <h2>Schema Inference</h2>
+
+        <p>
+            While NiFi's Record API does require that each Record have a schema, it is often convenient to infer the schema based on the values in the data,
+            rather than having to manually create a schema. This is accomplished by selecting a value of "Infer Schema" for the "Schema Access Strategy" property.
+            When using this strategy, the Reader will determine the schema by first parsing all data in the FlowFile, keeping track of all fields that it has encountered
+            and the type of each field. Once all data has been parsed, a schema is formed that encompasses all fields that have been encountered.
+        </p>
+
+        <p>
+            A common concern when inferring schemas is how to handle the condition of two values that have different types. For example, consider a FlowFile with the following two records:
+        </p>
+<code><pre>
+[{
+    "name": "John",
+    "age": 8,
+    "values": "N/A"
+}, {
+    "name": "Jane",
+    "age": "Ten",
+    "values": [ 8, "Ten" ]
+}]
+</pre></code>
+
+        <p>
+            It is clear that the "name" field will be inferred as a STRING type. However, how should we handle the "age" field? Should the field be a CHOICE between INT and STRING? Should we
+            prefer LONG over INT? Should we just use a STRING? Should the field be considered nullable?
+        </p>
+
+        <p>
+            To help understand how this Record Reader infers schemas, the following rules are applied by the inference logic (a worked example follows the list):
+        </p>
+
+        <ul>
+            <li>All fields are inferred to be nullable.</li>
+            <li>
+                When two values are encountered for the same field in two different records (or two values are encountered for an ARRAY type), the inference engine prefers
+                to use a "wider" data type over using a CHOICE data type. A data type "A" is said to be wider than data type "B" if and only if data type "A" encompasses all
+                values of "B" in addition to other values. For example, the LONG type is wider than the INT type but not wider than the BOOLEAN type (and BOOLEAN is also not wider
+                than LONG). INT is wider than SHORT. The STRING type is considered wider than all other types with the exception of MAP, RECORD, ARRAY, and CHOICE.
+            </li>
+            <li>
+                If two values are encountered for the same field in two different records (or two values are encountered for an ARRAY type), but neither value is of a type that
+                is wider than the other, then a CHOICE type is used. In the example above, the "values" field will be inferred as a CHOICE between a STRING and an ARRAY&lt;STRING&gt;.
+            </li>
+            <li>
+                If the "Time Format," "Timestamp Format," or "Date Format" properties are configured, any value that would otherwise be considered a STRING type is first checked against
+                the configured formats to see if it matches any of them. If the value matches the Timestamp Format, the value is considered a Timestamp field. If it matches the Date Format,
+                it is considered a Date field. If it matches the Time Format, it is considered a Time field. In the unlikely event that the value matches more than one of the configured
+                formats, they will be matched in the order: Timestamp, Date, Time. I.e., if a value matched both the Timestamp Format and the Date Format, the type that is inferred will be
+                Timestamp. Because parsing dates and times can be expensive, it is advisable not to configure these formats if dates, times, and timestamps are not expected, or if processing
+                the data as a STRING is acceptable. For use cases when this is important, though, the inference engine is intelligent enough to optimize the parsing by first checking several
+                very cheap conditions. For example, the string's length is examined to see if it is too long or too short to match the pattern. This results in far more efficient processing
+                than would result if attempting to parse each string value as a timestamp.
+            </li>
+            <li>The MAP type is never inferred. Instead, the RECORD type is used.</li>
+            <li>If a field exists but all values are null, then the field is inferred to be of type STRING.</li>
+        </ul>
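+
+        <p>
+            Applying these rules to the example above: "name" is inferred as a nullable STRING; "age" is inferred as a nullable STRING,
+            because STRING is wider than INT; and "values" is inferred as a nullable CHOICE between STRING and ARRAY&lt;STRING&gt;.
+            Expressed roughly in Avro-style notation, purely for illustration (the exact internal representation may differ), the inferred
+            schema would resemble the following:
+        </p>
+<code><pre>
+{
+  "type": "record",
+  "name": "nifiRecord",
+  "fields": [
+    { "name": "name",   "type": [ "null", "string" ] },
+    { "name": "age",    "type": [ "null", "string" ] },
+    { "name": "values", "type": [ "null", "string", { "type": "array", "items": "string" } ] }
+  ]
+}
+</pre></code>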
+
+
+
+        <h2>Caching of Inferred Schemas</h2>
+
+        <p>
+            This Record Reader requires that, if a schema is to be inferred, all records be read in order to ensure that the schema that gets inferred is applicable for all
+            records in the FlowFile. However, this can become expensive, especially if the data undergoes many different transformations. To alleviate the cost of inferring schemas,
+            the Record Reader can be configured with a "Schema Inference Cache" by populating the property with that name. This is a Controller Service that can be shared by Record
+            Readers and Record Writers.
+        </p>
+
+        <p>
+            Whenever a Record Writer is used to write data, if it is configured with a "Schema Cache," it will also add the schema to the Schema Cache. This will result in an
+            identifier for that schema being added as an attribute to the FlowFile.
+        </p>
+
+        <p>
+            Whenever a Record Reader is used to read data, if it is configured with a "Schema Inference Cache", it will first look for a "schema.cache.identifier" attribute on the FlowFile.
+            If the attribute exists, it will use the value of that attribute to look up the schema in the schema cache. If it is able to find a schema in the cache with that identifier,
+            then it will use that schema instead of reading, parsing, and analyzing the data to infer the schema. If the attribute is not available on the FlowFile, or if the attribute is
+            available but the cache does not have a schema with that identifier, then the Record Reader will proceed to infer the schema as described above.
+        </p>
+
+        <p>
+            The end result is that users are able to chain together many different Processors to operate on Record-oriented data. Typically, only the first such Processor in the chain will
+            incur the "penalty" of inferring the schema. For all other Processors in the chain, the Record Reader is able to simply look up the schema in the Schema Cache by identifier.
+            This allows the Record Reader to infer a schema accurately, since it is inferred based on all data in the FlowFile, and still allows this to happen efficiently since the schema
+            will typically only be inferred once, regardless of how many Processors handle the data.
+        </p>
+
+
+
+        <h2>Examples</h2>
+
+        <p>
+        	As an example, consider that the following JSON is read:
+        </p>
+<code>
+<pre>
+[{
+    "id": 17,
+    "name": "John",
+    "child": {
+        "id": "1"
+    },
+    "dob": "10-29-1982"
+    "siblings": [
+        { "name": "Jeremy", "id": 4 },
+        { "name": "Julia", "id": 8}
+    ]
+  },
+  {
+    "id": 98,
+    "name": "Jane",
+    "child": {
+        "id": 2
+    },
+    "dob": "08-30-1984"
+    "gender": "F",
+    "siblingIds": [],
+    "siblings": []
+  }]
+</pre>
+</code>
+
+        <p>
+        	Also, consider that the schema that is configured for this JSON is as follows (assuming that the AvroSchemaRegistry
+        	Controller Service is chosen to denote the Schema):
+        </p>
+
+<code>
+<pre>
+{
+	"namespace": "nifi",
+	"name": "person",
+	"type": "record",
+	"fields": [
+		{ "name": "id", "type": "int" },
+		{ "name": "name", "type": "string" },
+		{ "name": "gender", "type": "string" },
+		{ "name": "dob", "type": {
+			"type": "int",
+			"logicalType": "date"
+		}},
+		{ "name": "siblings", "type": {
+			"type": "array",
+			"items": {
+				"type": "record",
+				"fields": [
+					{ "name": "name", "type": "string" }
+				]
+			}
+		}}
+	]
+}
+</pre>
+</code>
+
+        <p>
+        	Let us also assume that this Controller Service is configured with the "Date Format" property set to "MM-dd-yyyy", as this
+        	matches the date format used for our JSON data. This will result in the JSON creating two separate records, because the root
+        	element is a JSON array with two elements.
+        </p>
+
+        <p>
+        	The first Record will consist of the following values:
+        </p>
+
+        <table>
+        	<tr>
+    			<th>Field Name</th>
+    			<th>Field Value</th>
+        	</tr>
+    		<tr>
+    			<td>id</td>
+    			<td>17</td>
+    		</tr>
+    		<tr>
+    			<td>name</td>
+    			<td>John</td>
+    		</tr>
+    		<tr>
+    			<td>gender</td>
+    			<td><i>null</i></td>
+    		</tr>
+    		<tr>
+    			<td>dob</td>
+    			<td>10-29-1982</td>
+    		</tr>
+    		<tr>
+    			<td>siblings</td>
+    			<td>
+    				<i>array with two elements, each of which is itself a Record:</i>
+    				<br />
+    				<table>
+    					<tr>
+							<th>Field Name</th>
+							<th>Field Value</th>
+						</tr>
+						<tr>
+							<td>name</td>
+							<td>Jeremy</td>
+						</tr>
+    				</table>
+    				<br />
+    				<i>and:</i>
+    				<br />
+    				<table>
+						<tr>
+							<th>Field Name</th>
+							<th>Field Value</th>
+						</tr>
+						<tr>
+							<td>name</td>
+							<td>Julia</td>
+						</tr>
+    				</table>
+    			</td>
+    		</tr>
+        </table>
+
+        <p>
+        	The second Record will consist of the following values:
+        </p>
+
+		<table>
+			<tr>
+    			<th>Field Name</th>
+    			<th>Field Value</th>
+        	</tr>
+    		<tr>
+    			<td>id</td>
+    			<td>98</td>
+    		</tr>
+    		<tr>
+    			<td>name</td>
+    			<td>Jane</td>
+    		</tr>
+    		<tr>
+    			<td>gender</td>
+    			<td>F</td>
+    		</tr>
+    		<tr>
+    			<td>dob</td>
+    			<td>08-30-1984</td>
+    		</tr>
+    		<tr>
+    			<td>siblings</td>
+    			<td><i>empty array</i></td>
+    		</tr>
+        </table>
+
+    </body>
+</html>

Added: nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.11.0/org.apache.nifi.json.JsonTreeReader/index.html
URL: http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.11.0/org.apache.nifi.json.JsonTreeReader/index.html?rev=1873052&view=auto
==============================================================================
--- nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.11.0/org.apache.nifi.json.JsonTreeReader/index.html (added)
+++ nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.11.0/org.apache.nifi.json.JsonTreeReader/index.html Thu Jan 23 03:48:17 2020
@@ -0,0 +1 @@
+<!DOCTYPE html><html lang="en"><head><meta charset="utf-8"></meta><title>JsonTreeReader</title><link rel="stylesheet" href="../../../../../css/component-usage.css" type="text/css"></link></head><script type="text/javascript">window.onload = function(){if(self==top) { document.getElementById('nameHeader').style.display = "inherit"; } }</script><body><h1 id="nameHeader" style="display: none;">JsonTreeReader</h1><h2>Description: </h2><p>Parses JSON into individual Record objects. While the reader expects each record to be well-formed JSON, the content of a FlowFile may consist of many records, each as a well-formed JSON array or JSON object with optional whitespace between them, such as the common 'JSON-per-line' format. If an array is encountered, each element in that array will be treated as a separate record. If the schema that is configured contains a field that is not present in the JSON, a null value will be used. If the JSON contains a field that is not present in the schema, th
 at field will be skipped. See the Usage of the Controller Service for more information and examples.</p><p><a href="additionalDetails.html">Additional Details...</a></p><h3>Tags: </h3><p>json, tree, record, reader, parser</p><h3>Properties: </h3><p>In the list below, the names of required properties appear in <strong>bold</strong>. Any other properties (not in bold) are considered optional. The table also indicates any default values, and whether a property supports the <a href="../../../../../html/expression-language-guide.html">NiFi Expression Language</a>.</p><table id="properties"><tr><th>Name</th><th>Default Value</th><th>Allowable Values</th><th>Description</th></tr><tr><td id="name"><strong>Schema Access Strategy</strong></td><td id="default-value">infer-schema</td><td id="allowable-values"><ul><li>Use 'Schema Name' Property <img src="../../../../../html/images/iconInfo.png" alt="The name of the Schema to use is specified by the 'Schema Name' Property. The value of this prope
 rty is used to lookup the Schema in the configured Schema Registry service." title="The name of the Schema to use is specified by the 'Schema Name' Property. The value of this property is used to lookup the Schema in the configured Schema Registry service."></img></li><li>Use 'Schema Text' Property <img src="../../../../../html/images/iconInfo.png" alt="The text of the Schema itself is specified by the 'Schema Text' Property. The value of this property must be a valid Avro Schema. If Expression Language is used, the value of the 'Schema Text' property must be valid after substituting the expressions." title="The text of the Schema itself is specified by the 'Schema Text' Property. The value of this property must be a valid Avro Schema. If Expression Language is used, the value of the 'Schema Text' property must be valid after substituting the expressions."></img></li><li>HWX Schema Reference Attributes <img src="../../../../../html/images/iconInfo.png" alt="The FlowFile contains 3 A
 ttributes that will be used to lookup a Schema from the configured Schema Registry: 'schema.identifier', 'schema.version', and 'schema.protocol.version'" title="The FlowFile contains 3 Attributes that will be used to lookup a Schema from the configured Schema Registry: 'schema.identifier', 'schema.version', and 'schema.protocol.version'"></img></li><li>HWX Content-Encoded Schema Reference <img src="../../../../../html/images/iconInfo.png" alt="The content of the FlowFile contains a reference to a schema in the Schema Registry service. The reference is encoded as a single byte indicating the 'protocol version', followed by 8 bytes indicating the schema identifier, and finally 4 bytes indicating the schema version, as per the Hortonworks Schema Registry serializers and deserializers, found at https://github.com/hortonworks/registry" title="The content of the FlowFile contains a reference to a schema in the Schema Registry service. The reference is encoded as a single byte indicating t
 he 'protocol version', followed by 8 bytes indicating the schema identifier, and finally 4 bytes indicating the schema version, as per the Hortonworks Schema Registry serializers and deserializers, found at https://github.com/hortonworks/registry"></img></li><li>Confluent Content-Encoded Schema Reference <img src="../../../../../html/images/iconInfo.png" alt="The content of the FlowFile contains a reference to a schema in the Schema Registry service. The reference is encoded as a single 'Magic Byte' followed by 4 bytes representing the identifier of the schema, as outlined at http://docs.confluent.io/current/schema-registry/docs/serializer-formatter.html. This is based on version 3.2.x of the Confluent Schema Registry." title="The content of the FlowFile contains a reference to a schema in the Schema Registry service. The reference is encoded as a single 'Magic Byte' followed by 4 bytes representing the identifier of the schema, as outlined at http://docs.confluent.io/current/schema
 -registry/docs/serializer-formatter.html. This is based on version 3.2.x of the Confluent Schema Registry."></img></li><li>Infer Schema <img src="../../../../../html/images/iconInfo.png" alt="The Schema of the data will be inferred automatically when the data is read. See component Usage and Additional Details for information about how the schema is inferred." title="The Schema of the data will be inferred automatically when the data is read. See component Usage and Additional Details for information about how the schema is inferred."></img></li></ul></td><td id="description">Specifies how to obtain the schema that is to be used for interpreting the data.</td></tr><tr><td id="name">Schema Registry</td><td id="default-value"></td><td id="allowable-values"><strong>Controller Service API: </strong><br/>SchemaRegistry<br/><strong>Implementations: </strong><a href="../../../nifi-hwx-schema-registry-nar/1.11.0/org.apache.nifi.schemaregistry.hortonworks.HortonworksSchemaRegistry/index.html
 ">HortonworksSchemaRegistry</a><br/><a href="../../../nifi-confluent-platform-nar/1.11.0/org.apache.nifi.confluent.schemaregistry.ConfluentSchemaRegistry/index.html">ConfluentSchemaRegistry</a><br/><a href="../../../nifi-registry-nar/1.11.0/org.apache.nifi.schemaregistry.services.AvroSchemaRegistry/index.html">AvroSchemaRegistry</a></td><td id="description">Specifies the Controller Service to use for the Schema Registry</td></tr><tr><td id="name">Schema Name</td><td id="default-value">${schema.name}</td><td id="allowable-values"></td><td id="description">Specifies the name of the schema to lookup in the Schema Registry property<br/><strong>Supports Expression Language: true (will be evaluated using flow file attributes and variable registry)</strong></td></tr><tr><td id="name">Schema Version</td><td id="default-value"></td><td id="allowable-values"></td><td id="description">Specifies the version of the schema to lookup in the Schema Registry. If not specified then the latest version
  of the schema will be retrieved.<br/><strong>Supports Expression Language: true (will be evaluated using flow file attributes and variable registry)</strong></td></tr><tr><td id="name">Schema Branch</td><td id="default-value"></td><td id="allowable-values"></td><td id="description">Specifies the name of the branch to use when looking up the schema in the Schema Registry property. If the chosen Schema Registry does not support branching, this value will be ignored.<br/><strong>Supports Expression Language: true (will be evaluated using flow file attributes and variable registry)</strong></td></tr><tr><td id="name">Schema Text</td><td id="default-value">${avro.schema}</td><td id="allowable-values"></td><td id="description">The text of an Avro-formatted Schema<br/><strong>Supports Expression Language: true (will be evaluated using flow file attributes and variable registry)</strong></td></tr><tr><td id="name">Schema Inference Cache</td><td id="default-value"></td><td id="allowable-val
 ues"><strong>Controller Service API: </strong><br/>RecordSchemaCacheService<br/><strong>Implementation: </strong><a href="../org.apache.nifi.schema.inference.VolatileSchemaCache/index.html">VolatileSchemaCache</a></td><td id="description">Specifies a Schema Cache to use when inferring the schema. If not populated, the schema will be inferred each time. However, if a cache is specified, the cache will first be consulted and if the applicable schema can be found, it will be used instead of inferring the schema.</td></tr><tr><td id="name">Date Format</td><td id="default-value"></td><td id="allowable-values"></td><td id="description">Specifies the format to use when reading/writing Date fields. If not specified, Date fields will be assumed to be number of milliseconds since epoch (Midnight, Jan 1, 1970 GMT). If specified, the value must match the Java Simple Date Format (for example, MM/dd/yyyy for a two-digit month, followed by a two-digit day, followed by a four-digit year, all separa
 ted by '/' characters, as in 01/01/2017).</td></tr><tr><td id="name">Time Format</td><td id="default-value"></td><td id="allowable-values"></td><td id="description">Specifies the format to use when reading/writing Time fields. If not specified, Time fields will be assumed to be number of milliseconds since epoch (Midnight, Jan 1, 1970 GMT). If specified, the value must match the Java Simple Date Format (for example, HH:mm:ss for a two-digit hour in 24-hour format, followed by a two-digit minute, followed by a two-digit second, all separated by ':' characters, as in 18:04:15).</td></tr><tr><td id="name">Timestamp Format</td><td id="default-value"></td><td id="allowable-values"></td><td id="description">Specifies the format to use when reading/writing Timestamp fields. If not specified, Timestamp fields will be assumed to be number of milliseconds since epoch (Midnight, Jan 1, 1970 GMT). If specified, the value must match the Java Simple Date Format (for example, MM/dd/yyyy HH:mm:ss f
 or a two-digit month, followed by a two-digit day, followed by a four-digit year, all separated by '/' characters; and then followed by a two-digit hour in 24-hour format, followed by a two-digit minute, followed by a two-digit second, all separated by ':' characters, as in 01/01/2017 18:04:15).</td></tr></table><h3>State management: </h3>This component does not store state.<h3>Restricted: </h3>This component is not restricted.<h3>System Resource Considerations:</h3>None specified.<h3>See Also:</h3><p><a href="../org.apache.nifi.json.JsonPathReader/index.html">JsonPathReader</a></p></body></html>
\ No newline at end of file

Added: nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.11.0/org.apache.nifi.schema.inference.VolatileSchemaCache/index.html
URL: http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.11.0/org.apache.nifi.schema.inference.VolatileSchemaCache/index.html?rev=1873052&view=auto
==============================================================================
--- nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.11.0/org.apache.nifi.schema.inference.VolatileSchemaCache/index.html (added)
+++ nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.11.0/org.apache.nifi.schema.inference.VolatileSchemaCache/index.html Thu Jan 23 03:48:17 2020
@@ -0,0 +1 @@
+<!DOCTYPE html><html lang="en"><head><meta charset="utf-8"></meta><title>VolatileSchemaCache</title><link rel="stylesheet" href="../../../../../css/component-usage.css" type="text/css"></link></head><script type="text/javascript">window.onload = function(){if(self==top) { document.getElementById('nameHeader').style.display = "inherit"; } }</script><body><h1 id="nameHeader" style="display: none;">VolatileSchemaCache</h1><h2>Description: </h2><p>Provides a Schema Cache that evicts elements based on a Least-Recently-Used algorithm. This cache is not persisted, so any restart of NiFi will result in the cache being cleared. Additionally, the cache will be cleared any time that the Controller Service is stopped and restarted.</p><h3>Tags: </h3><p>record, schema, cache</p><h3>Properties: </h3><p>In the list below, the names of required properties appear in <strong>bold</strong>. Any other properties (not in bold) are considered optional. The table also indicates any default values, and whe
 ther a property supports the <a href="../../../../../html/expression-language-guide.html">NiFi Expression Language</a>.</p><table id="properties"><tr><th>Name</th><th>Default Value</th><th>Allowable Values</th><th>Description</th></tr><tr><td id="name"><strong>Maximum Cache Size</strong></td><td id="default-value">100</td><td id="allowable-values"></td><td id="description">The maximum number of Schemas to cache.<br/><strong>Supports Expression Language: true (will be evaluated using variable registry only)</strong></td></tr></table><h3>State management: </h3>This component does not store state.<h3>Restricted: </h3>This component is not restricted.<h3>System Resource Considerations:</h3>None specified.</body></html>
\ No newline at end of file

Added: nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.11.0/org.apache.nifi.syslog.Syslog5424Reader/additionalDetails.html
URL: http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.11.0/org.apache.nifi.syslog.Syslog5424Reader/additionalDetails.html?rev=1873052&view=auto
==============================================================================
--- nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.11.0/org.apache.nifi.syslog.Syslog5424Reader/additionalDetails.html (added)
+++ nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.11.0/org.apache.nifi.syslog.Syslog5424Reader/additionalDetails.html Thu Jan 23 03:48:17 2020
@@ -0,0 +1,91 @@
+<!DOCTYPE html>
+<html lang="en">
+    <!--
+      Licensed to the Apache Software Foundation (ASF) under one or more
+      contributor license agreements.  See the NOTICE file distributed with
+      this work for additional information regarding copyright ownership.
+      The ASF licenses this file to You under the Apache License, Version 2.0
+      (the "License"); you may not use this file except in compliance with
+      the License.  You may obtain a copy of the License at
+          http://www.apache.org/licenses/LICENSE-2.0
+      Unless required by applicable law or agreed to in writing, software
+      distributed under the License is distributed on an "AS IS" BASIS,
+      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+      See the License for the specific language governing permissions and
+      limitations under the License.
+    -->
+    <head>
+        <meta charset="utf-8"/>
+        <title>Syslog5424Reader</title>
+        <link rel="stylesheet" href="../../../../../css/component-usage.css" type="text/css"/>
+    </head>
+
+    <body>
+        <p>
+        	The Syslog5424Reader Controller Service provides a means for parsing valid <a href="https://tools.ietf.org/html/rfc5424">RFC 5424 Syslog</a>  messages.
+			This service produces records with a set schema to match the specification.
+		</p>
+		
+		<p>
+        	The Required Property of this service is named <code>Character Set</code> and specifies the Character Set of the incoming text.
+        </p>
+
+		<h2>Schemas</h2>
+		
+		<p>
+			When a record is parsed from incoming data, it is parsed into the RFC 5424 schema.
+			<h4>The RFC 5424 schema</h4>
+			<code><pre>
+				{
+				  "type" : "record",
+				  "name" : "nifiRecord",
+				  "namespace" : "org.apache.nifi",
+				  "fields" : [ {
+					"name" : "priority",
+					"type" : [ "null", "string" ]
+				  }, {
+					"name" : "severity",
+					"type" : [ "null", "string" ]
+				  }, {
+					"name" : "facility",
+					"type" : [ "null", "string" ]
+				  }, {
+					"name" : "version",
+					"type" : [ "null", "string" ]
+				  }, {
+					"name" : "timestamp",
+					"type" : [ "null", {
+					  "type" : "long",
+					  "logicalType" : "timestamp-millis"
+					} ]
+				  }, {
+					"name" : "hostname",
+					"type" : [ "null", "string" ]
+				  }, {
+					"name" : "body",
+					"type" : [ "null", "string" ]
+				  }, {
+					"name" : "appName",
+					"type" : [ "null", "string" ]
+				  }, {
+					"name" : "procid",
+					"type" : [ "null", "string" ]
+				  }, {
+					"name" : "messageid",
+					"type" : [ "null", "string" ]
+				  }, {
+					"name" : "structuredData",
+					"type" : [ "null", {
+					  "type" : "map",
+					  "values" : {
+						"type" : "map",
+						"values" : "string"
+					  }
+					} ]
+				  } ]
+				}
+			</pre></code>
+		</p>
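+
+		<p>
+			For illustration only (this example is not part of the reader's formal documentation), consider the sample message from RFC 5424:
+		</p>
+		<code><pre>
+			&lt;34&gt;1 2003-10-11T22:14:15.003Z mymachine.example.com su - ID47 - 'su root' failed for lonvick on /dev/pts/8
+		</pre></code>
+		<p>
+			This message would be expected to produce a record along these lines: priority "34", severity "2", facility "4", version "1",
+			a timestamp corresponding to 2003-10-11T22:14:15.003Z, hostname "mymachine.example.com", appName "su", messageid "ID47",
+			null values for procid and structuredData (the "-" placeholders), and the free-form text as the body. The exact handling of
+			the "-" placeholders depends on the underlying parser.
+		</p>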
+
+    </body>
+</html>

Added: nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.11.0/org.apache.nifi.syslog.Syslog5424Reader/index.html
URL: http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.11.0/org.apache.nifi.syslog.Syslog5424Reader/index.html?rev=1873052&view=auto
==============================================================================
--- nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.11.0/org.apache.nifi.syslog.Syslog5424Reader/index.html (added)
+++ nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.11.0/org.apache.nifi.syslog.Syslog5424Reader/index.html Thu Jan 23 03:48:17 2020
@@ -0,0 +1 @@
+<!DOCTYPE html><html lang="en"><head><meta charset="utf-8"></meta><title>Syslog5424Reader</title><link rel="stylesheet" href="../../../../../css/component-usage.css" type="text/css"></link></head><script type="text/javascript">window.onload = function(){if(self==top) { document.getElementById('nameHeader').style.display = "inherit"; } }</script><body><h1 id="nameHeader" style="display: none;">Syslog5424Reader</h1><h2>Description: </h2><p>Provides a mechanism for reading RFC 5424 compliant Syslog data, such as log files, and structuring the data so that it can be processed.</p><p><a href="additionalDetails.html">Additional Details...</a></p><h3>Tags: </h3><p>syslog 5424, syslog, logs, logfiles, parse, text, record, reader</p><h3>Properties: </h3><p>In the list below, the names of required properties appear in <strong>bold</strong>. Any other properties (not in bold) are considered optional. The table also indicates any default values.</p><table id="properties"><tr><th>Name</th><th>De
 fault Value</th><th>Allowable Values</th><th>Description</th></tr><tr><td id="name"><strong>Character Set</strong></td><td id="default-value">UTF-8</td><td id="allowable-values"></td><td id="description">Specifies which character set of the Syslog messages</td></tr></table><h3>State management: </h3>This component does not store state.<h3>Restricted: </h3>This component is not restricted.<h3>System Resource Considerations:</h3>None specified.</body></html>
\ No newline at end of file

Added: nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.11.0/org.apache.nifi.syslog.SyslogReader/additionalDetails.html
URL: http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.11.0/org.apache.nifi.syslog.SyslogReader/additionalDetails.html?rev=1873052&view=auto
==============================================================================
--- nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.11.0/org.apache.nifi.syslog.SyslogReader/additionalDetails.html (added)
+++ nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.11.0/org.apache.nifi.syslog.SyslogReader/additionalDetails.html Thu Jan 23 03:48:17 2020
@@ -0,0 +1,70 @@
+<!DOCTYPE html>
+<html lang="en">
+    <!--
+      Licensed to the Apache Software Foundation (ASF) under one or more
+      contributor license agreements.  See the NOTICE file distributed with
+      this work for additional information regarding copyright ownership.
+      The ASF licenses this file to You under the Apache License, Version 2.0
+      (the "License"); you may not use this file except in compliance with
+      the License.  You may obtain a copy of the License at
+          http://www.apache.org/licenses/LICENSE-2.0
+      Unless required by applicable law or agreed to in writing, software
+      distributed under the License is distributed on an "AS IS" BASIS,
+      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+      See the License for the specific language governing permissions and
+      limitations under the License.
+    -->
+    <head>
+        <meta charset="utf-8"/>
+        <title>SyslogReader</title>
+        <link rel="stylesheet" href="../../../../../css/component-usage.css" type="text/css"/>
+    </head>
+
+    <body>
+        <p>
+        	The SyslogReader Controller Service provides a means to parse the contents of a Syslog message in accordance with the RFC5424 and RFC3164
+			formats. This reader produces records with a set schema to match the common set of fields between the specifications.
+		</p>
+		
+		<p>
+        	The Required Property of this service is named <code>Character Set</code> and specifies the Character Set of the incoming text.
+        </p>
+
+		<h2>Schemas</h2>
+		
+		<p>
+			When a record is parsed from incoming data, it is parsed into the Generic Syslog Schema.
+			<h4>The Generic Syslog Schema</h4>
+			<code><pre>
+				{
+				  "type" : "record",
+				  "name" : "nifiRecord",
+				  "namespace" : "org.apache.nifi",
+				  "fields" : [ {
+					"name" : "priority",
+					"type" : [ "null", "string" ]
+				  }, {
+					"name" : "severity",
+					"type" : [ "null", "string" ]
+				  }, {
+					"name" : "facility",
+					"type" : [ "null", "string" ]
+				  }, {
+					"name" : "version",
+					"type" : [ "null", "string" ]
+				  }, {
+					"name" : "timestamp",
+					"type" : [ "null", "string" ]
+				  }, {
+					"name" : "hostname",
+					"type" : [ "null", "string" ]
+				  }, {
+					"name" : "body",
+					"type" : [ "null", "string" ]
+				  } ]
+				}
+			</pre></code>
+		</p>
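+
+		<p>
+			For illustration only (this example is not part of the reader's formal documentation), consider an RFC 3164 style message:
+		</p>
+		<code><pre>
+			&lt;34&gt;Oct 11 22:14:15 mymachine su: 'su root' failed for lonvick on /dev/pts/8
+		</pre></code>
+		<p>
+			This message would be expected to produce a record along these lines: priority "34", severity "2", facility "4", a null version,
+			timestamp "Oct 11 22:14:15", hostname "mymachine", and the remainder of the message as the body. Because RFC 3164 is informational,
+			the exact field values can vary with the implementation that produced the message.
+		</p>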
+
+    </body>
+</html>

Added: nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.11.0/org.apache.nifi.syslog.SyslogReader/index.html
URL: http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.11.0/org.apache.nifi.syslog.SyslogReader/index.html?rev=1873052&view=auto
==============================================================================
--- nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.11.0/org.apache.nifi.syslog.SyslogReader/index.html (added)
+++ nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.11.0/org.apache.nifi.syslog.SyslogReader/index.html Thu Jan 23 03:48:17 2020
@@ -0,0 +1 @@
+<!DOCTYPE html><html lang="en"><head><meta charset="utf-8"></meta><title>SyslogReader</title><link rel="stylesheet" href="../../../../../css/component-usage.css" type="text/css"></link></head><script type="text/javascript">window.onload = function(){if(self==top) { document.getElementById('nameHeader').style.display = "inherit"; } }</script><body><h1 id="nameHeader" style="display: none;">SyslogReader</h1><h2>Description: </h2><p>Attempts to parses the contents of a Syslog message in accordance to RFC5424 and RFC3164. In the case of RFC5424 formatted messages, structured data is not supported, and will be returned as part of the message.Note: Be mindfull that RFC3164 is informational and a wide range of different implementations are present in the wild.</p><p><a href="additionalDetails.html">Additional Details...</a></p><h3>Tags: </h3><p>syslog, logs, logfiles, parse, text, record, reader</p><h3>Properties: </h3><p>In the list below, the names of required properties appear in <stron
 g>bold</strong>. Any other properties (not in bold) are considered optional. The table also indicates any default values.</p><table id="properties"><tr><th>Name</th><th>Default Value</th><th>Allowable Values</th><th>Description</th></tr><tr><td id="name"><strong>Character Set</strong></td><td id="default-value">UTF-8</td><td id="allowable-values"></td><td id="description">Specifies which character set of the Syslog messages</td></tr></table><h3>State management: </h3>This component does not store state.<h3>Restricted: </h3>This component is not restricted.<h3>System Resource Considerations:</h3>None specified.</body></html>
\ No newline at end of file

Added: nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.11.0/org.apache.nifi.text.FreeFormTextRecordSetWriter/index.html
URL: http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.11.0/org.apache.nifi.text.FreeFormTextRecordSetWriter/index.html?rev=1873052&view=auto
==============================================================================
--- nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.11.0/org.apache.nifi.text.FreeFormTextRecordSetWriter/index.html (added)
+++ nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.11.0/org.apache.nifi.text.FreeFormTextRecordSetWriter/index.html Thu Jan 23 03:48:17 2020
@@ -0,0 +1 @@
+<!DOCTYPE html><html lang="en"><head><meta charset="utf-8"></meta><title>FreeFormTextRecordSetWriter</title><link rel="stylesheet" href="../../../../../css/component-usage.css" type="text/css"></link></head><script type="text/javascript">window.onload = function(){if(self==top) { document.getElementById('nameHeader').style.display = "inherit"; } }</script><body><h1 id="nameHeader" style="display: none;">FreeFormTextRecordSetWriter</h1><h2>Description: </h2><p>Writes the contents of a RecordSet as free-form text. The configured text is able to make use of the Expression Language to reference each of the fields that are available in a Record. Each record in the RecordSet will be separated by a single newline character.</p><h3>Tags: </h3><p>text, freeform, expression, language, el, record, recordset, resultset, writer, serialize</p><h3>Properties: </h3><p>In the list below, the names of required properties appear in <strong>bold</strong>. Any other properties (not in bold) are consider
 ed optional. The table also indicates any default values, and whether a property supports the <a href="../../../../../html/expression-language-guide.html">NiFi Expression Language</a>.</p><table id="properties"><tr><th>Name</th><th>Default Value</th><th>Allowable Values</th><th>Description</th></tr><tr><td id="name"><strong>Text</strong></td><td id="default-value"></td><td id="allowable-values"></td><td id="description">The text to use when writing the results. This property will evaluate the Expression Language using any of the fields available in a Record.<br/><strong>Supports Expression Language: true (will be evaluated using flow file attributes and variable registry)</strong></td></tr><tr><td id="name"><strong>Character Set</strong></td><td id="default-value">UTF-8</td><td id="allowable-values"></td><td id="description">The Character set to use when writing the data to the FlowFile</td></tr></table><h3>State management: </h3>This component does not store state.<h3>Restricted: <
 /h3>This component is not restricted.<h3>System Resource Considerations:</h3>None specified.</body></html>
\ No newline at end of file

Added: nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.11.0/org.apache.nifi.xml.XMLReader/additionalDetails.html
URL: http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.11.0/org.apache.nifi.xml.XMLReader/additionalDetails.html?rev=1873052&view=auto
==============================================================================
--- nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.11.0/org.apache.nifi.xml.XMLReader/additionalDetails.html (added)
+++ nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.11.0/org.apache.nifi.xml.XMLReader/additionalDetails.html Thu Jan 23 03:48:17 2020
@@ -0,0 +1,531 @@
+<!DOCTYPE html>
+<html lang="en">
+    <!--
+      Licensed to the Apache Software Foundation (ASF) under one or more
+      contributor license agreements.  See the NOTICE file distributed with
+      this work for additional information regarding copyright ownership.
+      The ASF licenses this file to You under the Apache License, Version 2.0
+      (the "License"); you may not use this file except in compliance with
+      the License.  You may obtain a copy of the License at
+          http://www.apache.org/licenses/LICENSE-2.0
+      Unless required by applicable law or agreed to in writing, software
+      distributed under the License is distributed on an "AS IS" BASIS,
+      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+      See the License for the specific language governing permissions and
+      limitations under the License.
+    -->
+    <head>
+        <meta charset="utf-8"/>
+        <title>XMLReader</title>
+        <link rel="stylesheet" href="../../../../../css/component-usage.css" type="text/css"/>
+    </head>
+
+    <body>
+    <p>
+        The XMLReader Controller Service reads XML content and creates Record objects. The Controller Service
+        must be configured with a schema that describes the structure of the XML data. Fields in the XML data
+        that are not defined in the schema will be skipped. Depending on whether the property "Expect Records as Array"
+        is set to "false" or "true", the reader either expects a single record or an array of records for each FlowFile.
+    </p>
+
+    <p>
+        Example: Single record
+    </p>
+    <code>
+            <pre>
+                &lt;record&gt;
+                  &lt;field1&gt;content&lt;/field1&gt;
+                  &lt;field2&gt;content&lt;/field2&gt;
+                &lt;/record&gt;
+            </pre>
+    </code>
+
+    <p>
+        An array of records has to be enclosed by a root tag.
+        Example: Array of records
+    </p>
+
+    <code>
+            <pre>
+                &lt;root&gt;
+                  &lt;record&gt;
+                    &lt;field1&gt;content&lt;/field1&gt;
+                    &lt;field2&gt;content&lt;/field2&gt;
+                  &lt;/record&gt;
+                  &lt;record&gt;
+                    &lt;field1&gt;content&lt;/field1&gt;
+                    &lt;field2&gt;content&lt;/field2&gt;
+                  &lt;/record&gt;
+                &lt;/root&gt;
+            </pre>
+    </code>
+
+    <h2>Example: Simple Fields</h2>
+
+    <p>
+        The simplest kind of data within XML is a tag / field that contains only content (no attributes, no embedded tags).
+        Such fields can be described in the schema by simple types (e.g. INT, STRING, ...).
+    </p>
+
+    <code>
+            <pre>
+                &lt;root&gt;
+                  &lt;record&gt;
+                    &lt;simple_field&gt;content&lt;/simple_field&gt;
+                  &lt;/record&gt;
+                &lt;/root&gt;
+            </pre>
+    </code>
+
+    <p>
+        This record can be described by a schema containing one field (e.g. of type string). By providing this schema,
+        the reader expects zero or one occurrences of "simple_field" in the record.
+    </p>
+
+    <code>
+            <pre>
+                {
+                  "namespace": "nifi",
+                  "name": "test",
+                  "type": "record",
+                  "fields": [
+                    { "name": "simple_field", "type": "string" }
+                  ]
+                }
+            </pre>
+    </code>
+
+    <h2>Example: Arrays with Simple Fields</h2>
+
+    <p>
+        Arrays correspond to repeated tags / fields in XML data. For the following XML data, "array_field" is considered
+        to be an array enclosing simple fields, whereas "simple_field" is considered to be a simple field not enclosed in
+        an array.
+    </p>
+
+    <code>
+            <pre>
+                &lt;record&gt;
+                  &lt;array_field&gt;content&lt;/array_field&gt;
+                  &lt;array_field&gt;content&lt;/array_field&gt;
+                  &lt;simple_field&gt;content&lt;/simple_field&gt;
+                &lt;/record&gt;
+            </pre>
+    </code>
+
+    <p>
+        This record can be described by the following schema:
+    </p>
+
+    <code>
+            <pre>
+                {
+                  "namespace": "nifi",
+                  "name": "test",
+                  "type": "record",
+                  "fields": [
+                    { "name": "array_field", "type":
+                      { "type": "array", "items": "string" }
+                    },
+                    { "name": "simple_field", "type": "string" }
+                  ]
+                }
+            </pre>
+    </code>
+
+    <p>
+        If a field in a schema is embedded in an array, the reader expects zero, one or more occurrences of the field
+        in a record. The field "array_field" could, in principle, also be defined as a simple field, but then the second occurrence
+        of this field would replace the first in the record object. Moreover, the field "simple_field" could also be defined
+        as an array. In this case, the reader would put it into the record object as an array with one element.
+    </p>
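+
+    <p>
+        For the XML and schema above, the resulting record object would be structured roughly as follows (a sketch; the exact rendering
+        is internal to NiFi):
+    </p>
+
+    <code>
+            <pre>
+                Record (
+                    RecordField "array_field" = [ "content", "content" ],
+                    RecordField "simple_field" = "content"
+                )
+            </pre>
+    </code>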
+
+    <h2>Example: Tags with Attributes</h2>
+
+    <p>
+        XML fields frequently contain not only content, but also attributes. The following record contains a field with
+        an attribute "attr" and content:
+    </p>
+
+    <code>
+            <pre>
+                &lt;record&gt;
+                  &lt;field_with_attribute attr="attr_content"&gt;content of field&lt;/field_with_attribute&gt;
+                &lt;/record&gt;
+            </pre>
+    </code>
+
+    <p>
+        To parse the content of the field "field_with_attribute" together with the attribute "attr", the following requirements have
+        to be fulfilled:
+    </p>
+
+    <ul>
+        <li>In the schema, the field has to be defined as a record.</li>
+        <li>The property "Field Name for Content" has to be set.</li>
+        <li>As an option, the property "Attribute Prefix" also can be set.</li>
+    </ul>
+
+    <p>
+        For the example above, the following property settings are assumed:
+    </p>
+
+    <table>
+        <tr>
+            <th>Property Name</th>
+            <th>Property Value</th>
+        </tr>
+        <tr>
+            <td>Field Name for Content</td>
+            <td><code>field_name_for_content</code></td>
+        </tr>
+        <tr>
+            <td>Attribute Prefix</td>
+            <td><code>prefix_</code></td>
+        </tr>
+    </table>
+
+    <p>
+        The schema can be defined as follows:
+    </p>
+
+    <code>
+            <pre>
+                {
+                  "name": "test",
+                  "namespace": "nifi",
+                  "type": "record",
+                  "fields": [
+                    {
+                      "name": "field_with_attribute",
+                      "type": {
+                        "name": "RecordForTag",
+                        "type": "record",
+                        "fields" : [
+                          {"name": "attr", "type": "string"},
+                          {"name": "field_name_for_content", "type": "string"}
+                        ]
+                      }
+                    }
+                  ]
+                }
+            </pre>
+    </code>
+
+    <p>
+        Note that the field "field_name_for_content" not only has to be defined in the property section, but also in the
+        schema, whereas the prefix for attributes is not part of the schema. The prefix is prepended to the attribute name when an attribute named
+        "attr" is found at the respective position in the XML data and added to the record. The record object of the above
+        example will be structured as follows:
+    </p>
+
+    <code>
+            <pre>
+                Record (
+                    Record "field_with_attribute" (
+                        RecordField "prefix_attr" = "attr_content",
+                        RecordField "field_name_for_content" = "content of field"
+                    )
+                )
+            </pre>
+    </code>
+
+    <p>
+        In principle, the field "field_with_attribute" could also be defined as a simple field. In this case, the attributes
+        would simply be ignored. Conversely, the simple field in the first example above could also be defined as a record (assuming that
+        the property "Field Name for Content" is set).
+    </p>
+
+    <h2>Example: Tags within tags</h2>
+
+    <p>
+        XML data is frequently nested. In this case, tags enclose other tags:
+    </p>
+
+    <code>
+            <pre>
+                &lt;record&gt;
+                  &lt;field_with_embedded_fields attr=&quot;attr_content&quot;&gt;
+                    &lt;embedded_field&gt;embedded content&lt;/embedded_field&gt;
+                    &lt;another_embedded_field&gt;another embedded content&lt;/another_embedded_field&gt;
+                  &lt;/field_with_embedded_fields&gt;
+                &lt;/record&gt;
+            </pre>
+    </code>
+
+    <p>
+        The enclosing fields always have to be defined as records, irrespective of whether they include attributes to be
+        parsed or not. In this example, the tag "field_with_embedded_fields" encloses the fields "embedded_field" and
+        "another_embedded_field", which are both simple fields. The schema can be defined as follows:
+    </p>
+
+    <code>
+            <pre>
+                {
+                  "name": "test",
+                  "namespace": "nifi",
+                  "type": "record",
+                  "fields": [
+                    {
+                      "name": "field_with_embedded_fields",
+                      "type": {
+                        "name": "RecordForEmbedded",
+                        "type": "record",
+                        "fields" : [
+                          {"name": "attr", "type": "string"},
+                          {"name": "embedded_field", "type": "string"},
+                          {"name": "another_embedded_field", "type": "string"}
+                        ]
+                      }
+                    }
+                  ]
+                }
+            </pre>
+    </code>
+
+    <p>
+        Notice that this case does not require the property "Field Name for Content" to be set as this is only required
+        for tags containing attributes and content.
+    </p>
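+
+    <p>
+        For the XML and schema above, the resulting record object would be structured roughly as follows (a sketch, assuming that the
+        "Attribute Prefix" property is not set):
+    </p>
+
+    <code>
+            <pre>
+                Record (
+                    Record "field_with_embedded_fields" (
+                        RecordField "attr" = "attr_content",
+                        RecordField "embedded_field" = "embedded content",
+                        RecordField "another_embedded_field" = "another embedded content"
+                    )
+                )
+            </pre>
+    </code>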
+
+    <h2>Example: Array of records</h2>
+
+    <p>
+        To further illustrate the logic of this reader, consider an example of an array of records.
+        The following record contains the field "array_field", which occurs repeatedly. The field contains two
+        embedded fields.
+    </p>
+
+    <code>
+            <pre>
+                &lt;record&gt;
+                  &lt;array_field&gt;
+                    &lt;embedded_field&gt;embedded content 1&lt;/embedded_field&gt;
+                    &lt;another_embedded_field&gt;another embedded content 1&lt;/another_embedded_field&gt;
+                  &lt;/array_field&gt;
+                  &lt;array_field&gt;
+                    &lt;embedded_field&gt;embedded content 2&lt;/embedded_field&gt;
+                    &lt;another_embedded_field&gt;another embedded content 2&lt;/another_embedded_field&gt;
+                  &lt;/array_field&gt;
+                &lt;/record&gt;
+            </pre>
+    </code>
+
+    <p>
+        This XML data can be parsed similarly to the data in example 4. However, the record defined in the schema of
+        example 4 has to be embedded in an array.
+    </p>
+
+    <code>
+            <pre>
+                {
+                  "namespace": "nifi",
+                  "name": "test",
+                  "type": "record",
+                  "fields": [
+                    { "name": "array_field",
+                      "type": {
+                        "type": "array",
+                        "items": {
+                          "name": "RecordInArray",
+                          "type": "record",
+                          "fields" : [
+                            {"name": "embedded_field", "type": "string"},
+                            {"name": "another_embedded_field", "type": "string"}
+                          ]
+                        }
+                      }
+                    }
+                  ]
+                }
+            </pre>
+    </code>
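+
+    <p>
+        For illustration, using the informal record notation from above (with the array written as a bracketed list), the parsed
+        record can be expected to look roughly like this:
+    </p>
+
+    <code>
+            <pre>
+                Record (
+                    RecordField "array_field" = [
+                        Record (
+                            RecordField "embedded_field" = "embedded content 1",
+                            RecordField "another_embedded_field" = "another embedded content 1"
+                        ),
+                        Record (
+                            RecordField "embedded_field" = "embedded content 2",
+                            RecordField "another_embedded_field" = "another embedded content 2"
+                        )
+                    ]
+                )
+            </pre>
+    </code>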
+
+    <h2>Example: Array in record</h2>
+
+    <p>
+        In XML data, arrays are frequently enclosed by tags:
+    </p>
+
+    <code>
+            <pre>
+                &lt;record&gt;
+                  &lt;field_enclosing_array&gt;
+                    &lt;element&gt;content 1&lt;/element&gt;
+                    &lt;element&gt;content 2&lt;/element&gt;
+                  &lt;/field_enclosing_array&gt;
+                  &lt;field_without_array&gt;content 3&lt;/field_without_array&gt;
+                &lt;/record&gt;
+            </pre>
+    </code>
+
+    <p>
+        For the schema, embedded tags have to be described by records. Therefore, the field "field_enclosing_array"
+        is a record that embeds an array with elements of type string:
+    </p>
+
+    <code>
+            <pre>
+                {
+                  "namespace": "nifi",
+                  "name": "test",
+                  "type": "record",
+                  "fields": [
+                    { "name": "field_enclosing_array",
+                      "type": {
+                        "name": "EmbeddedRecord",
+                        "type": "record",
+                        "fields" : [
+                          {
+                            "name": "element",
+                            "type": {
+                              "type": "array",
+                              "items": "string"
+                            }
+                          }
+                        ]
+                      }
+                    },
+                    { "name": "field_without_array", "type": "string" }
+                  ]
+                }
+            </pre>
+    </code>
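+
+    <p>
+        For illustration, the record parsed from the XML above can be expected to be structured roughly as follows, with the
+        field "element" holding an array of strings:
+    </p>
+
+    <code>
+            <pre>
+                Record (
+                    Record "field_enclosing_array" (
+                        RecordField "element" = [ "content 1", "content 2" ]
+                    ),
+                    RecordField "field_without_array" = "content 3"
+                )
+            </pre>
+    </code>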
+
+
+    <h2>Example: Maps</h2>
+
+    <p>
+        A map is a field that embeds fields whose names are not known in advance:
+    </p>
+
+    <code>
+            <pre>
+                &lt;record&gt;
+                  &lt;map_field&gt;
+                    &lt;field1&gt;content&lt;/field1&gt;
+                    &lt;field2&gt;content&lt;/field2&gt;
+                    ...
+                  &lt;/map_field&gt;
+                  &lt;simple_field&gt;content&lt;/simple_field&gt;
+                &lt;/record&gt;
+            </pre>
+    </code>
+
+    <p>
+        This data can be processed using the following schema:
+    </p>
+
+    <code>
+            <pre>
+                {
+                  "namespace": "nifi",
+                  "name": "test",
+                  "type": "record",
+                  "fields": [
+                    { "name": "map_field", "type":
+                      { "type": "map", "values": "string" }
+                    },
+                    { "name": "simple_field", "type": "string" }
+                  ]
+                }
+            </pre>
+    </code>
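+
+    <p>
+        For illustration, with the schema above, the parsed record can be expected to look roughly like this, with "map_field"
+        holding a map of string keys to string values (the Map notation below is used only for this sketch):
+    </p>
+
+    <code>
+            <pre>
+                Record (
+                    Map "map_field" (
+                        "field1" = "content",
+                        "field2" = "content"
+                    ),
+                    RecordField "simple_field" = "content"
+                )
+            </pre>
+    </code>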
+
+
+    <h2>Schema Inference</h2>
+
+    <p>
+        While NiFi's Record API does require that each Record have a schema, it is often convenient to infer the schema based on the values in the data,
+        rather than having to manually create a schema. This is accomplished by selecting a value of "Infer Schema" for the "Schema Access Strategy" property.
+        When using this strategy, the Reader will determine the schema by first parsing all data in the FlowFile, keeping track of all fields that it has encountered
+        and the type of each field. Once all data has been parsed, a schema is formed that encompasses all fields that have been encountered.
+    </p>
+
+    <p>
+        A common concern when inferring schemas is how to handle fields whose values have different types in different records. For example, consider a FlowFile with the following two records:
+    </p>
+    <code><pre>
+&lt;root&gt;
+    &lt;record&gt;
+        &lt;name&gt;John&lt;/name&gt;
+        &lt;age&gt;8&lt;/age&gt;
+        &lt;values&gt;N/A&lt;/values&gt;
+    &lt;/record&gt;
+    &lt;record&gt;
+        &lt;name&gt;Jane&lt;/name&gt;
+        &lt;age&gt;Ten&lt;/age&gt;
+        &lt;values&gt;8&lt;/values&gt;
+        &lt;values&gt;Ten&lt;/values&gt;
+    &lt;/record&gt;
+&lt;/root&gt;
+</pre></code>
+
+    <p>
+        It is clear that the "name" field will be inferred as a STRING type. However, how should we handle the "age" field? Should the field be a CHOICE between INT and STRING? Should we
+        prefer LONG over INT? Should we just use a STRING? Should the field be considered nullable?
+    </p>
+
+    <p>
+        To help understand how this Record Reader infers schemas, the following rules are applied by the inference logic
+        (an illustrative schema for the example above is sketched after the list):
+    </p>
+
+    <ul>
+        <li>All fields are inferred to be nullable.</li>
+        <li>
+            When two values are encountered for the same field in two different records (or two values are encountered for an ARRAY type), the inference engine prefers
+            to use a "wider" data type over using a CHOICE data type. A data type "A" is said to be wider than data type "B" if and only if data type "A" encompasses all
+            values of "B" in addition to other values. For example, the LONG type is wider than the INT type but not wider than the BOOLEAN type (and BOOLEAN is also not wider
+            than LONG). INT is wider than SHORT. The STRING type is considered wider than all other types with the exception of MAP, RECORD, ARRAY, and CHOICE.
+        </li>
+        <li>
+            If two values are encountered for the same field in two different records (or two values are encountered for an ARRAY type), but neither value is of a type that
+            is wider than the other, then a CHOICE type is used. In the example above, the "values" field will be inferred as a CHOICE between a STRING and an ARRAY&lt;STRING&gt;.
+        </li>
+        <li>
+            If the "Time Format," "Timestamp Format," or "Date Format" properties are configured, any value that would otherwise be considered a STRING type is first checked against
+            the configured formats to see if it matches any of them. If the value matches the Timestamp Format, the value is considered a Timestamp field. If it matches the Date Format,
+            it is considered a Date field. If it matches the Time Format, it is considered a Time field. In the unlikely event that the value matches more than one of the configured
+            formats, they will be matched in the order: Timestamp, Date, Time. That is, if a value matches both the Timestamp Format and the Date Format, the inferred type will be
+            Timestamp. Because parsing dates and times can be expensive, it is advisable not to configure these formats if dates, times, and timestamps are not expected, or if processing
+            the data as a STRING is acceptable. For use cases when this is important, though, the inference engine is intelligent enough to optimize the parsing by first checking several
+            very cheap conditions. For example, the string's length is examined to see if it is too long or too short to match the pattern. This results in far more efficient processing
+            than would result if attempting to parse each string value as a timestamp.
+        </li>
+        <li>The MAP type is never inferred. Instead, the RECORD type is used.</li>
+        <li>If two elements exist with the same name and the same parent (i.e., two sibling elements have the same name), the field will be inferred to be of type ARRAY.</li>
+        <li>If a field exists but all values are null, then the field is inferred to be of type STRING.</li>
+    </ul>
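+
+    <p>
+        To make these rules concrete, the two-record FlowFile shown above would be inferred to have a schema roughly equivalent to the
+        following Avro schema. The record name "nifiRecord" and the exact representation of nullability and of the CHOICE type are
+        chosen here only for illustration; the internal representation produced by the reader may differ:
+    </p>
+
+    <code>
+            <pre>
+                {
+                  "name": "nifiRecord",
+                  "namespace": "nifi",
+                  "type": "record",
+                  "fields": [
+                    { "name": "name",   "type": ["null", "string"] },
+                    { "name": "age",    "type": ["null", "string"] },
+                    { "name": "values", "type": ["null", "string", { "type": "array", "items": "string" }] }
+                  ]
+                }
+            </pre>
+    </code>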
+
+
+
+    <h2>Caching of Inferred Schemas</h2>
+
+    <p>
+        If a schema is to be inferred, this Record Reader requires that all records be read in order to ensure that the inferred schema is applicable for all
+        records in the FlowFile. However, this can become expensive, especially if the data undergoes many different transformations. To alleviate the cost of inferring schemas,
+        the Record Reader can be configured with a "Schema Inference Cache" by populating the property with that name. This is a Controller Service that can be shared by Record
+        Readers and Record Writers.
+    </p>
+
+    <p>
+        Whenever a Record Writer is used to write data, if it is configured with a "Schema Cache," it will also add the schema to the Schema Cache. This will result in an
+        identifier for that schema being added as an attribute to the FlowFile.
+    </p>
+
+    <p>
+        Whenever a Record Reader is used to read data, if it is configured with a "Schema Inference Cache", it will first look for a "schema.cache.identifier" attribute on the FlowFile.
+        If the attribute exists, it will use the value of that attribute to look up the schema in the schema cache. If it is able to find a schema in the cache with that identifier,
+        then it will use that schema instead of reading, parsing, and analyzing the data to infer the schema. If the attribute is not available on the FlowFile, or if the attribute is
+        available but the cache does not have a schema with that identifier, then the Record Reader will proceed to infer the schema as described above.
+    </p>
+
+    <p>
+        The end result is that users are able to chain together many different Processors to operate on Record-oriented data. Typically, only the first such Processor in the chain will
+        incur the "penalty" of inferring the schema. For all other Processors in the chain, the Record Reader is able to simply look up the schema in the Schema Cache by identifier.
+        This allows the Record Reader to infer a schema accurately, since it is inferred based on all data in the FlowFile, and still allows this to happen efficiently since the schema
+        will typically only be inferred once, regardless of how many Processors handle the data.
+    </p>
+
+
+
+    </body>
+</html>

Added: nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.11.0/org.apache.nifi.xml.XMLReader/index.html
URL: http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.11.0/org.apache.nifi.xml.XMLReader/index.html?rev=1873052&view=auto
==============================================================================
--- nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.11.0/org.apache.nifi.xml.XMLReader/index.html (added)
+++ nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.11.0/org.apache.nifi.xml.XMLReader/index.html Thu Jan 23 03:48:17 2020
@@ -0,0 +1 @@
+<!DOCTYPE html><html lang="en"><head><meta charset="utf-8"></meta><title>XMLReader</title><link rel="stylesheet" href="../../../../../css/component-usage.css" type="text/css"></link></head><script type="text/javascript">window.onload = function(){if(self==top) { document.getElementById('nameHeader').style.display = "inherit"; } }</script><body><h1 id="nameHeader" style="display: none;">XMLReader</h1><h2>Description: </h2><p>Reads XML content and creates Record objects. Records are expected in the second level of XML data, embedded in an enclosing root tag.</p><p><a href="additionalDetails.html">Additional Details...</a></p><h3>Tags: </h3><p>xml, record, reader, parser</p><h3>Properties: </h3><p>In the list below, the names of required properties appear in <strong>bold</strong>. Any other properties (not in bold) are considered optional. The table also indicates any default values, and whether a property supports the <a href="../../../../../html/expression-language-guide.html">NiFi E
 xpression Language</a>.</p><table id="properties"><tr><th>Name</th><th>Default Value</th><th>Allowable Values</th><th>Description</th></tr><tr><td id="name"><strong>Schema Access Strategy</strong></td><td id="default-value">infer-schema</td><td id="allowable-values"><ul><li>Use 'Schema Name' Property <img src="../../../../../html/images/iconInfo.png" alt="The name of the Schema to use is specified by the 'Schema Name' Property. The value of this property is used to lookup the Schema in the configured Schema Registry service." title="The name of the Schema to use is specified by the 'Schema Name' Property. The value of this property is used to lookup the Schema in the configured Schema Registry service."></img></li><li>Use 'Schema Text' Property <img src="../../../../../html/images/iconInfo.png" alt="The text of the Schema itself is specified by the 'Schema Text' Property. The value of this property must be a valid Avro Schema. If Expression Language is used, the value of the 'Schema
  Text' property must be valid after substituting the expressions." title="The text of the Schema itself is specified by the 'Schema Text' Property. The value of this property must be a valid Avro Schema. If Expression Language is used, the value of the 'Schema Text' property must be valid after substituting the expressions."></img></li><li>HWX Schema Reference Attributes <img src="../../../../../html/images/iconInfo.png" alt="The FlowFile contains 3 Attributes that will be used to lookup a Schema from the configured Schema Registry: 'schema.identifier', 'schema.version', and 'schema.protocol.version'" title="The FlowFile contains 3 Attributes that will be used to lookup a Schema from the configured Schema Registry: 'schema.identifier', 'schema.version', and 'schema.protocol.version'"></img></li><li>HWX Content-Encoded Schema Reference <img src="../../../../../html/images/iconInfo.png" alt="The content of the FlowFile contains a reference to a schema in the Schema Registry service. T
 he reference is encoded as a single byte indicating the 'protocol version', followed by 8 bytes indicating the schema identifier, and finally 4 bytes indicating the schema version, as per the Hortonworks Schema Registry serializers and deserializers, found at https://github.com/hortonworks/registry" title="The content of the FlowFile contains a reference to a schema in the Schema Registry service. The reference is encoded as a single byte indicating the 'protocol version', followed by 8 bytes indicating the schema identifier, and finally 4 bytes indicating the schema version, as per the Hortonworks Schema Registry serializers and deserializers, found at https://github.com/hortonworks/registry"></img></li><li>Confluent Content-Encoded Schema Reference <img src="../../../../../html/images/iconInfo.png" alt="The content of the FlowFile contains a reference to a schema in the Schema Registry service. The reference is encoded as a single 'Magic Byte' followed by 4 bytes representing the 
 identifier of the schema, as outlined at http://docs.confluent.io/current/schema-registry/docs/serializer-formatter.html. This is based on version 3.2.x of the Confluent Schema Registry." title="The content of the FlowFile contains a reference to a schema in the Schema Registry service. The reference is encoded as a single 'Magic Byte' followed by 4 bytes representing the identifier of the schema, as outlined at http://docs.confluent.io/current/schema-registry/docs/serializer-formatter.html. This is based on version 3.2.x of the Confluent Schema Registry."></img></li><li>Infer Schema <img src="../../../../../html/images/iconInfo.png" alt="The Schema of the data will be inferred automatically when the data is read. See component Usage and Additional Details for information about how the schema is inferred." title="The Schema of the data will be inferred automatically when the data is read. See component Usage and Additional Details for information about how the schema is inferred."><
 /img></li></ul></td><td id="description">Specifies how to obtain the schema that is to be used for interpreting the data.</td></tr><tr><td id="name">Schema Registry</td><td id="default-value"></td><td id="allowable-values"><strong>Controller Service API: </strong><br/>SchemaRegistry<br/><strong>Implementations: </strong><a href="../../../nifi-hwx-schema-registry-nar/1.11.0/org.apache.nifi.schemaregistry.hortonworks.HortonworksSchemaRegistry/index.html">HortonworksSchemaRegistry</a><br/><a href="../../../nifi-confluent-platform-nar/1.11.0/org.apache.nifi.confluent.schemaregistry.ConfluentSchemaRegistry/index.html">ConfluentSchemaRegistry</a><br/><a href="../../../nifi-registry-nar/1.11.0/org.apache.nifi.schemaregistry.services.AvroSchemaRegistry/index.html">AvroSchemaRegistry</a></td><td id="description">Specifies the Controller Service to use for the Schema Registry</td></tr><tr><td id="name">Schema Name</td><td id="default-value">${schema.name}</td><td id="allowable-values"></td><t
 d id="description">Specifies the name of the schema to lookup in the Schema Registry property<br/><strong>Supports Expression Language: true (will be evaluated using flow file attributes and variable registry)</strong></td></tr><tr><td id="name">Schema Version</td><td id="default-value"></td><td id="allowable-values"></td><td id="description">Specifies the version of the schema to lookup in the Schema Registry. If not specified then the latest version of the schema will be retrieved.<br/><strong>Supports Expression Language: true (will be evaluated using flow file attributes and variable registry)</strong></td></tr><tr><td id="name">Schema Branch</td><td id="default-value"></td><td id="allowable-values"></td><td id="description">Specifies the name of the branch to use when looking up the schema in the Schema Registry property. If the chosen Schema Registry does not support branching, this value will be ignored.<br/><strong>Supports Expression Language: true (will be evaluated using 
 flow file attributes and variable registry)</strong></td></tr><tr><td id="name">Schema Text</td><td id="default-value">${avro.schema}</td><td id="allowable-values"></td><td id="description">The text of an Avro-formatted Schema<br/><strong>Supports Expression Language: true (will be evaluated using flow file attributes and variable registry)</strong></td></tr><tr><td id="name">Schema Inference Cache</td><td id="default-value"></td><td id="allowable-values"><strong>Controller Service API: </strong><br/>RecordSchemaCacheService<br/><strong>Implementation: </strong><a href="../org.apache.nifi.schema.inference.VolatileSchemaCache/index.html">VolatileSchemaCache</a></td><td id="description">Specifies a Schema Cache to use when inferring the schema. If not populated, the schema will be inferred each time. However, if a cache is specified, the cache will first be consulted and if the applicable schema can be found, it will be used instead of inferring the schema.</td></tr><tr><td id="name">
 <strong>Expect Records as Array</strong></td><td id="default-value">false</td><td id="allowable-values"><ul><li>false <img src="../../../../../html/images/iconInfo.png" alt="Each FlowFile will consist of a single record without any sort of &quot;wrapper&quot;." title="Each FlowFile will consist of a single record without any sort of &quot;wrapper&quot;."></img></li><li>true <img src="../../../../../html/images/iconInfo.png" alt="Each FlowFile will consist of zero or more records. The outer-most XML element is expected to be a &quot;wrapper&quot; and will be ignored." title="Each FlowFile will consist of zero or more records. The outer-most XML element is expected to be a &quot;wrapper&quot; and will be ignored."></img></li><li>Use attribute 'xml.stream.is.array' <img src="../../../../../html/images/iconInfo.png" alt="Whether to treat a FlowFile as a single Record or an array of multiple Records is determined by the value of the 'xml.stream.is.array' attribute. If the value of the at
 tribute is 'true' (case-insensitive), then the XML Reader will treat the FlowFile as a series of Records with the outer element being ignored. If the value of the attribute is 'false' (case-insensitive), then the FlowFile is treated as a single Record and no wrapper element is assumed. If the attribute is missing or its value is anything other than 'true' or 'false', then an Exception will be thrown and no records will be parsed." title="Whether to treat a FlowFile as a single Record or an array of multiple Records is determined by the value of the 'xml.stream.is.array' attribute. If the value of the attribute is 'true' (case-insensitive), then the XML Reader will treat the FlowFile as a series of Records with the outer element being ignored. If the value of the attribute is 'false' (case-insensitive), then the FlowFile is treated as a single Record and no wrapper element is assumed. If the attribute is missing or its value is anything other than 'true' or 'false', then an Exception
  will be thrown and no records will be parsed."></img></li></ul></td><td id="description">This property defines whether the reader expects a FlowFile to consist of a single Record or a series of Records with a "wrapper element". Because XML does not provide for a way to read a series of XML documents from a stream directly, it is common to combine many XML documents by concatenating them and then wrapping the entire XML blob  with a "wrapper element". This property dictates whether the reader expects a FlowFile to consist of a single Record or a series of Records with a "wrapper element" that will be ignored.</td></tr><tr><td id="name">Attribute Prefix</td><td id="default-value"></td><td id="allowable-values"></td><td id="description">If this property is set, the name of attributes will be prepended with a prefix when they are added to a record.<br/><strong>Supports Expression Language: true (will be evaluated using flow file attributes and variable registry)</strong></td></tr><tr><
 td id="name">Field Name for Content</td><td id="default-value"></td><td id="allowable-values"></td><td id="description">If tags with content (e. g. &lt;field&gt;content&lt;/field&gt;) are defined as nested records in the schema, the name of the tag will be used as name for the record and the value of this property will be used as name for the field. If tags with content shall be parsed together with attributes (e. g. &lt;field attribute="123"&gt;content&lt;/field&gt;), they have to be defined as records. For additional information, see the section of processor usage.<br/><strong>Supports Expression Language: true (will be evaluated using flow file attributes and variable registry)</strong></td></tr><tr><td id="name">Date Format</td><td id="default-value"></td><td id="allowable-values"></td><td id="description">Specifies the format to use when reading/writing Date fields. If not specified, Date fields will be assumed to be number of milliseconds since epoch (Midnight, Jan 1, 1970 GMT
 ). If specified, the value must match the Java Simple Date Format (for example, MM/dd/yyyy for a two-digit month, followed by a two-digit day, followed by a four-digit year, all separated by '/' characters, as in 01/01/2017).</td></tr><tr><td id="name">Time Format</td><td id="default-value"></td><td id="allowable-values"></td><td id="description">Specifies the format to use when reading/writing Time fields. If not specified, Time fields will be assumed to be number of milliseconds since epoch (Midnight, Jan 1, 1970 GMT). If specified, the value must match the Java Simple Date Format (for example, HH:mm:ss for a two-digit hour in 24-hour format, followed by a two-digit minute, followed by a two-digit second, all separated by ':' characters, as in 18:04:15).</td></tr><tr><td id="name">Timestamp Format</td><td id="default-value"></td><td id="allowable-values"></td><td id="description">Specifies the format to use when reading/writing Timestamp fields. If not specified, Timestamp fields 
 will be assumed to be number of milliseconds since epoch (Midnight, Jan 1, 1970 GMT). If specified, the value must match the Java Simple Date Format (for example, MM/dd/yyyy HH:mm:ss for a two-digit month, followed by a two-digit day, followed by a four-digit year, all separated by '/' characters; and then followed by a two-digit hour in 24-hour format, followed by a two-digit minute, followed by a two-digit second, all separated by ':' characters, as in 01/01/2017 18:04:15).</td></tr></table><h3>State management: </h3>This component does not store state.<h3>Restricted: </h3>This component is not restricted.<h3>System Resource Considerations:</h3>None specified.</body></html>
\ No newline at end of file