Posted to commits@camel.apache.org by da...@apache.org on 2016/08/16 08:03:43 UTC

[19/51] [partial] camel git commit: CAMEL-9541: Use -component as suffix for component docs.

http://git-wip-us.apache.org/repos/asf/camel/blob/9c0b7baf/components/camel-hbase/src/main/docs/hbase.adoc
----------------------------------------------------------------------
diff --git a/components/camel-hbase/src/main/docs/hbase.adoc b/components/camel-hbase/src/main/docs/hbase.adoc
deleted file mode 100644
index 7dd398c..0000000
--- a/components/camel-hbase/src/main/docs/hbase.adoc
+++ /dev/null
@@ -1,519 +0,0 @@
-[[hbase-HBaseComponent]]
-HBase Component
-~~~~~~~~~~~~~~~
-
-*Available as of Camel 2.10*
-
-This component provides an idempotent repository, producers and
-consumers for http://hbase.apache.org/[Apache HBase].
-
-Maven users will need to add the following dependency to their `pom.xml`
-for this component:
-
-[source,xml]
-------------------------------------------------------------
-<dependency>
-    <groupId>org.apache.camel</groupId>
-    <artifactId>camel-hbase</artifactId>
-    <version>x.x.x</version>
-    <!-- use the same version as your Camel core version -->
-</dependency>
-------------------------------------------------------------
-
-[[hbase-ApacheHBaseOverview]]
-Apache HBase Overview
-^^^^^^^^^^^^^^^^^^^^^
-
-HBase is an open-source, distributed, versioned, column-oriented store
-modeled after Google's Bigtable: A Distributed Storage System for
-Structured Data. You can use HBase when you need random, realtime
-read/write access to your Big Data. More information at
-http://hbase.apache.org[Apache HBase].
-
-[[hbase-CamelandHBase]]
-Camel and HBase
-^^^^^^^^^^^^^^^
-
-When using a datastore inside a Camel route, there is always the
-challenge of specifying how the Camel message will be stored in the
-datastore. In document-based stores this is easy, as the message
-body can be directly mapped to a document. In relational databases an
-ORM solution can be used to map properties to columns, and so on. In
-column-based stores things are more challenging, as there is no standard
-way to perform that kind of mapping.
-
-HBase adds two additional challenges:
-
-* HBase groups columns into families, so simply mapping a property to a
-column using a naming convention is not enough.
-* HBase doesn't have the notion of type, which means that it stores
-everything as byte[] and doesn't know if the byte[] represents a String,
-a Number, a serialized Java object or just binary data.
-
-To overcome these challenges, camel-hbase makes use of the message
-headers to specify the mapping of the message to HBase columns. It also
-provides the ability to use some camel-hbase provided classes that model
-HBase data and can easily be converted to and from xml/json etc. +
- Finally, it allows the user to implement and use their own mapping
-strategy.
-
-Regardless of the mapping strategy camel-hbase will convert a message
-into an org.apache.camel.component.hbase.model.HBaseData object and use
-that object for its internal operations.
-
-[[hbase-Configuringthecomponent]]
-Configuring the component
-^^^^^^^^^^^^^^^^^^^^^^^^^
-
-The HBase component can be provided a custom HBaseConfiguration object
-as a property, or it can create an HBase configuration object on its own
-based on the HBase-related resources that are found on the classpath.
-
-[source,xml]
------------------------------------------------------------------------------
-    <bean id="hbase" class="org.apache.camel.component.hbase.HBaseComponent">
-        <property name="configuration" ref="config"/>
-    </bean>
------------------------------------------------------------------------------
-
-If no configuration object is provided to the component, the component
-will create one. The created configuration will search the class path
-for an hbase-site.xml file, from which it will draw the configuration.
-You can find more information about how to configure HBase clients at:
-http://archive.apache.org/dist/hbase/docs/client_dependencies.html[HBase
-client configuration and dependencies]
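-
-The same can be done from Java; a minimal sketch, assuming an existing
-CamelContext and the setter that backs the component's configuration option
-listed further below:
-
-[source,java]
---------------------------------------------------------------------------
-// load the HBase/Hadoop configuration from the classpath (hbase-site.xml)
-Configuration configuration = HBaseConfiguration.create();
-
-// register the component with the shared configuration
-HBaseComponent hbase = new HBaseComponent();
-hbase.setConfiguration(configuration);
-camelContext.addComponent("hbase", hbase);
---------------------------------------------------------------------------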
-
-[[hbase-HBaseProducer]]
-HBase Producer
-^^^^^^^^^^^^^^
-
-As mentioned above, camel-hbase provides producer endpoints for HBase. This
-allows you to store, delete, retrieve or query data from HBase using
-your Camel routes.
-
-[source,java]
------------------------
-hbase://table[?options]
------------------------
-
-where *table* is the table name.
-
-The supported operations are:
-
-* Put
-* Get
-* Delete
-* Scan
-
-[[hbase-SupportedURIoptions]]
-Supported URI options
-+++++++++++++++++++++
-
-
-
-// component options: START
-The HBase component supports 2 options which are listed below.
-
-
-
-{% raw %}
-[width="100%",cols="2s,1m,8",options="header"]
-|=======================================================================
-| Name | Java Type | Description
-| configuration | Configuration | To use the shared configuration
-| poolMaxSize | int | Maximum number of references to keep for each table in the HTable pool. The default value is 10.
-|=======================================================================
-{% endraw %}
-// component options: END
-
-
-
-
-
-// endpoint options: START
-The HBase component supports 17 endpoint options which are listed below:
-
-{% raw %}
-[width="100%",cols="2s,1,1m,1m,5",options="header"]
-|=======================================================================
-| Name | Group | Default | Java Type | Description
-| tableName | common |  | String | *Required* The name of the table
-| cellMappingStrategyFactory | common |  | CellMappingStrategyFactory | To use a custom CellMappingStrategyFactory that is responsible for mapping cells.
-| filters | common |  | List | A list of filters to use.
-| mappingStrategyClassName | common |  | String | The class name of a custom mapping strategy implementation.
-| mappingStrategyName | common |  | String | The strategy to use for mapping Camel messages to HBase columns. Supported values: header or body.
-| rowMapping | common |  | Map | To map the key/values from the Map to a HBaseRow. The following keys are supported: rowId - The id of the row. This has limited use as the row usually changes per Exchange. rowType - The type to convert the row id to. Supported operations: CamelHBaseScan. family - The column family. Supports a number suffix for referring to more than one column. qualifier - The column qualifier. Supports a number suffix for referring to more than one column. value - The value. Supports a number suffix for referring to more than one column. valueType - The value type. Supports a number suffix for referring to more than one column. Supported operations: CamelHBaseGet and CamelHBaseScan.
-| rowModel | common |  | HBaseRow | An instance of org.apache.camel.component.hbase.model.HBaseRow which describes how each row should be modeled
-| userGroupInformation | common |  | UserGroupInformation | Defines privileges to communicate with HBase such as using kerberos.
-| bridgeErrorHandler | consumer | false | boolean | Allows for bridging the consumer to the Camel routing Error Handler which mean any exceptions occurred while the consumer is trying to pickup incoming messages or the likes will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions that will be logged at WARN/ERROR level and ignored.
-| maxMessagesPerPoll | consumer |  | int | Gets the maximum number of messages as a limit to poll at each polling. Is default unlimited but use 0 or negative number to disable it as unlimited.
-| operation | consumer |  | String | The HBase operation to perform
-| remove | consumer | true | boolean | If the option is true Camel HBase Consumer will remove the rows which it processes.
-| removeHandler | consumer |  | HBaseRemoveHandler | To use a custom HBaseRemoveHandler that is executed when a row is to be removed.
-| exceptionHandler | consumer (advanced) |  | ExceptionHandler | To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this options is not in use. By default the consumer will deal with exceptions that will be logged at WARN/ERROR level and ignored.
-| maxResults | producer | 100 | int | The maximum number of rows to scan.
-| exchangePattern | advanced | InOnly | ExchangePattern | Sets the default exchange pattern when creating an exchange
-| synchronous | advanced | false | boolean | Sets whether synchronous processing should be strictly used or Camel is allowed to use asynchronous processing (if supported).
-|=======================================================================
-{% endraw %}
-// endpoint options: END
-
-
-
-[[hbase-PutOperations.]]
-Put Operations.
-+++++++++++++++
-
-HBase is a column based store, which allows you to store data into a
-specific column of a specific row. Columns are grouped into families, so
-in order to specify a column you need to specify the column family and
-the qualifier of that column. To store data into a specific column you
-need to specify both the column and the row.
-
-The simplest scenario for storing data in HBase from a Camel route
-would be to store part of the message body in a specified HBase column.
-
-[source,xml]
------------------------------------------------------------------------------------------------------------
-        <route>
-            <from uri="direct:in"/>
-            <!-- Set the HBase Row -->
-            <setHeader headerName="CamelHBaseRowId">
-                <el>${in.body.id}</el>
-            </setHeader>
-            <!-- Set the HBase Value -->
-            <setHeader headerName="CamelHBaseValue">
-                <el>${in.body.value}</el>
-            </setHeader>
-            <to uri="hbase:mytable?operation=CamelHBasePut&amp;family=myfamily&amp;qualifier=myqualifier"/>
-        </route>
------------------------------------------------------------------------------------------------------------
-
-The route above assumes that the message body contains an object that
-has an id and a value property, and will store the content of value in the
-HBase column myfamily:myqualifier in the row specified by id. If we
-needed to specify more than one column/value pair, we could just specify
-additional column mappings. Notice that you must use numbers from the
-2nd header onwards, e.g. RowId2, RowId3, RowId4, etc. Only the 1st header
-does not carry a number suffix.
-
-[source,xml]
-------------------------------------------------------------------------------------------------------------------------------------------------------------
-        <route>
-            <from uri="direct:in"/>
-            <!-- Set the HBase Row 1st column -->
-            <setHeader headerName="CamelHBaseRowId">
-                <el>${in.body.id}</el>
-            </setHeader>
-            <!-- Set the HBase Row 2nd column -->
-            <setHeader headerName="CamelHBaseRowId2">
-                <el>${in.body.id}</el>
-            </setHeader>
-            <!-- Set the HBase Value for 1st column -->
-            <setHeader headerName="CamelHBaseValue">
-                <el>${in.body.value}</el>
-            </setHeader>
-            <!-- Set the HBase Value for 2nd column -->
-            <setHeader headerName="CamelHBaseValue2">
-                <el>${in.body.othervalue}</el>
-            </setHeader>
-            <to uri="hbase:mytable?operation=CamelHBasePut&amp;family=myfamily&amp;qualifier=myqualifier&amp;family2=myfamily&amp;qualifier2=myqualifier2"/>
-        </route>
-------------------------------------------------------------------------------------------------------------------------------------------------------------
-
-It is important to remember that you can use uri options, message
-headers or a combination of both. It is recommended to specify constants
-as part of the uri and dynamic values as headers. If something is
-defined both as header and as part of the uri, the header will be used.
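-
-The put route above can also be expressed with the Java DSL; the following
-is an equivalent sketch, assuming the message body exposes the same id and
-value properties:
-
-[source,java]
------------------------------------------------------------------------------------------
-from("direct:in")
-    // dynamic values are taken from the message body and set as headers
-    .setHeader("CamelHBaseRowId", simple("${body.id}"))
-    .setHeader("CamelHBaseValue", simple("${body.value}"))
-    // the constant family/qualifier stay in the endpoint uri
-    .to("hbase:mytable?operation=CamelHBasePut&family=myfamily&qualifier=myqualifier");
------------------------------------------------------------------------------------------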
-
-[[hbase-GetOperations.]]
-Get Operations.
-+++++++++++++++
-
-A Get operation retrieves one or more values from a specified HBase
-row. To specify which values you want to retrieve, you can specify them
-as part of the uri or as message headers.
-
-[source,xml]
-----------------------------------------------------------------------------------------------------------------------------------------
-        <route>
-            <from uri="direct:in"/>
-            <!-- Set the HBase Row of the Get -->
-            <setHeader headerName="CamelHBaseRowId">
-                <el>${in.body.id}</el>
-            </setHeader>
-            <to uri="hbase:mytable?operation=CamelHBaseGet&amp;family=myfamily&amp;qualifier=myqualifier&amp;valueType=java.lang.Long"/>
-            <to uri="log:out"/>
-        </route>
-----------------------------------------------------------------------------------------------------------------------------------------
-
-In the example above, the result of the get operation will be stored as a
-header named CamelHBaseValue.
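-
-Expressed with the Java DSL, the same get could look like the sketch below;
-the converted value is then available in the CamelHBaseValue header:
-
-[source,java]
----------------------------------------------------------------------------------------------------------------
-from("direct:in")
-    .setHeader("CamelHBaseRowId", simple("${body.id}"))
-    .to("hbase:mytable?operation=CamelHBaseGet&family=myfamily&qualifier=myqualifier&valueType=java.lang.Long")
-    // the retrieved cell is available as a Long in the CamelHBaseValue header
-    .to("log:out?showHeaders=true");
----------------------------------------------------------------------------------------------------------------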
-
-[[hbase-DeleteOperations.]]
-Delete Operations.
-++++++++++++++++++
-
-You can also use camel-hbase to perform HBase delete operations. The
-delete operation will remove an entire row. All that needs to be
-specified is one or more rows as part of the message headers.
-
-[source,xml]
-----------------------------------------------------------------
-        <route>
-            <from uri="direct:in"/>
-            <!-- Set the HBase Row of the Delete -->
-            <setHeader headerName="CamelHBaseRowId">
-                <el>${in.body.id}</el>
-            </setHeader>
-            <to uri="hbase:mytable?operation=CamelHBaseDelete"/>
-        </route>
-----------------------------------------------------------------
-
-[[hbase-ScanOperations.]]
-Scan Operations.
-++++++++++++++++
-
-A scan operation is the equivalent of a query in HBase. You can use the
-scan operation to retrieve multiple rows. To specify what columns should
-be part of the result and also specify how the values will be converted
-to objects you can use either uri options or headers.
-
-[source,xml]
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------
-        <route>
-            <from uri="direct:in"/>
-            <to uri="hbase:mytable?operation=CamelHBaseScan&amp;family=myfamily&amp;qualifier=myqualifier&amp;valueType=java.lang.Long&amp;rowType=java.lang.String"/>
-            <to uri="log:out"/>
-        </route>
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------
-
-In this case it is probable that you also need to specify a list of
-filters for limiting the results. You can specify a list of filters as
-part of the uri and camel will return only the rows that satisfy *ALL*
-the filters.  +
- To have a filter that will be aware of the information that is part of
-the message, camel defines the ModelAwareFilter. This will allow your
-filter to take into consideration the model that is defined by the
-message and the mapping strategy. +
- When using a ModelAwareFilter camel-hbase will apply the selected
-mapping strategy to the in message, will create an object that models
-the mapping and will pass that object to the Filter.
-
-For example, to perform a scan using the message headers as criteria, you
-can make use of the ModelAwareColumnMatchingFilter as shown below.
-
-[source,xml]
------------------------------------------------------------------------------------------------------------
-        <route>
-            <from uri="direct:scan"/>
-            <!-- Set the Criteria -->
-            <setHeader headerName="CamelHBaseFamily">
-                <constant>name</constant>
-            </setHeader>
-            <setHeader headerName="CamelHBaseQualifier">
-                <constant>first</constant>
-            </setHeader>
-            <setHeader headerName="CamelHBaseValue">
-                <el>in.body.firstName</el>
-            </setHeader>
-            <setHeader headerName="CamelHBaseFamily2">
-                <constant>name</constant>
-            </setHeader>
-            <setHeader headerName="CamelHBaseQualifier2">
-                <constant>last</constant>
-            </setHeader>
-            <setHeader headerName="CamelHBaseValue2">
-                <el>in.body.lastName</el>
-            </setHeader>
-            <!-- Set additional fields that you want to be returned, by skipping the value -->
-            <setHeader headerName="CamelHBaseFamily3">
-                <constant>address</constant>
-            </setHeader>
-            <setHeader headerName="CamelHBaseQualifier3">
-                <constant>country</constant>
-            </setHeader>
-            <to uri="hbase:mytable?operation=CamelHBaseScan&amp;filters=#myFilterList"/>
-        </route>
-
-        <bean id="myFilters" class="java.util.ArrayList">
-            <constructor-arg>
-                <list>
-                    <bean class="org.apache.camel.component.hbase.filters.ModelAwareColumnMatchingFilter"/>
-                </list>
-            </constructor-arg>
-        </bean>
------------------------------------------------------------------------------------------------------------
-
-The route above assumes that a pojo with properties firstName and
-lastName is passed as the message body; it takes those properties and
-adds them as part of the message headers. The default mapping strategy
-will create a model object that will map the headers to HBase columns
-and will pass that model to the ModelAwareColumnMatchingFilter. The
-filter will filter out any rows that do not contain columns that match
-the model. It is like query by example.
-
-[[hbase-HBaseConsumer]]
-HBase Consumer
-^^^^^^^^^^^^^^
-
-The Camel HBase Consumer will perform repeated scans on the specified
-HBase table and will return the scan results as part of the message. You
-can either specify header mapping (default) or body mapping. The latter
-will just add the org.apache.camel.component.hbase.model.HBaseData as
-part of the message body.
-
-[source,java]
------------------------
-hbase://table[?options]
------------------------
-
-You can specify the columns that you want to be returned and their types
-as part of the uri options:
-
-[source,java]
-------------------------------------------------------------------------------------------------------------------------------------------------------
-hbase:mytable?family=name&qualifier=first&valueType=java.lang.String&family2=address&qualifier2=number&valueType2=java.lang.Integer&rowType=java.lang.Long
-------------------------------------------------------------------------------------------------------------------------------------------------------
-
-The example above will create a model object that consists of the
-specified fields, and the scan results will populate the model object
-with values. Finally, the mapping strategy will be used to map this model
-to the Camel message.
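-
-A consumer route could therefore look like the following sketch (header
-mapping, the default, is assumed; the columns mirror the uri above):
-
-[source,java]
-----------------------------------------------------------------------------------------------------
-from("hbase:mytable?family=name&qualifier=first&valueType=java.lang.String&rowType=java.lang.Long")
-    // each scanned row is mapped onto message headers by the default strategy
-    .to("log:hbase-rows?showHeaders=true");
-----------------------------------------------------------------------------------------------------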
-
-[[hbase-HBaseIdempotentrepository]]
-HBase Idempotent repository
-^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-The camel-hbase component also provides an idempotent repository which
-can be used when you want to make sure that each message is processed
-only once. The HBase idempotent repository is configured with a table, a
-column family and a column qualifier, and will create a row in that table
-per message.
-
-[source,java]
-------------------------------------------------------------------------------------------------------------------
-HBaseConfiguration configuration = HBaseConfiguration.create();
-HBaseIdempotentRepository repository = new HBaseIdempotentRepository(configuration, tableName, family, qualifier);
-
-from("direct:in")
-  .idempotentConsumer(header("messageId"), repository)
-  .to("log:out);
-------------------------------------------------------------------------------------------------------------------
-
-[[hbase-HBaseMapping]]
-HBase Mapping
-^^^^^^^^^^^^^
-
-It was mentioned above that the default mapping strategies are
-*header* and *body* mapping. +
- Below you can find some detailed examples of how each mapping strategy
-works.
-
-[[hbase-HBaseHeadermappingExamples]]
-HBase Header mapping Examples
-+++++++++++++++++++++++++++++
-
-The header mapping is the default mapping.
- To put the value "myvalue" into HBase row "myrow" and column
-"myfamily:myqualifier", the message should contain the following headers:
-
-[width="100%",cols="10%,90%",options="header",]
-|=======================================================================
-|Header |Value
-
-|CamelHBaseRowId |myrow
-
-|CamelHBaseFamily |myfamily
-
-|CamelHBaseQualifier |myqualifier
-
-|CamelHBaseValue |myvalue
-|=======================================================================
-
-To put more values for different columns and / or different rows you can
-specify additional headers suffixed with the index of the headers, e.g:
-
-[width="100%",cols="10%,90%",options="header",]
-|=======================================================================
-|Header |Value
-
-|CamelHBaseRowId |myrow
-
-|CamelHBaseFamily |myfamily
-
-|CamelHBaseQualifier |myqualifier
-
-|CamelHBaseValue |myvalue
-
-|CamelHBaseRowId2 |myrow2
-
-|CamelHBaseFamily2 |myfamily
-
-|CamelHBaseQualifier2 |myqualifier
-
-|CamelHBaseValue2 |myvalue2
-|=======================================================================
-
-In the case of retrieval operations such as get or scan you can also
-specify for each column the type that you want the data to be converted
-to. For example:
-
-[width="100%",cols="10%,90%",options="header",]
-|=======================================================================
-|Header |Value
-
-|CamelHBaseFamily |myfamily
-
-|CamelHBaseQualifier |myqualifier
-
-|CamelHBaseValueType |Long
-|=======================================================================
-
-Please note that in order to avoid boilerplate headers that are
-considered constant for all messages, you can also specify them as part
-of the endpoint uri, as you will see below.
-
-[[hbase-BodymappingExamples]]
-Body mapping Examples
-+++++++++++++++++++++
-
-In order to use the body mapping strategy you will have to specify the
-option mappingStrategyName as part of the uri, for example:
-
-[source,java]
-----------------------------------
-hbase:mytable?mappingStrategyName=body
-----------------------------------
-
-To use the body mapping strategy the body needs to contain an instance
-of org.apache.camel.component.hbase.model.HBaseData. You can construct the
-HBaseData object as shown below:
-
-[source,java]
----------------------------------
-HBaseData data = new HBaseData();
-HBaseRow row = new HBaseRow();
-row.setId("myRowId");
-HBaseCell cell = new HBaseCell();
-cell.setFamily("myfamily");
-cell.setQualifier("myqualifier");
-cell.setValue("myValue");
-row.getCells().add(cell);
-data.getRows().add(row);
----------------------------------
-
-The object above can be used, for example, in a put operation and will
-result in creating or updating the row with id myRowId, adding the value
-myValue to the column myfamily:myqualifier. +
- The body mapping strategy might not seem very appealing at first. The
-advantage it has over the header mapping strategy is that the HBaseData
-object can be easily converted to or from xml/json.
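-
-For example, a put route using the body mapping strategy could look like
-this sketch, assuming the HBaseData object built above is passed as the
-message body:
-
-[source,java]
----------------------------------------------------------------------------------------------------
-from("direct:in")
-    // the body is expected to be an org.apache.camel.component.hbase.model.HBaseData instance
-    .to("hbase:mytable?operation=CamelHBasePut&mappingStrategyName=body");
----------------------------------------------------------------------------------------------------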
-
-[[hbase-Seealso]]
-See also
-^^^^^^^^
-
-* link:polling-consumer.html[Polling Consumer]
-* http://hbase.apache.org[Apache HBase]
-

http://git-wip-us.apache.org/repos/asf/camel/blob/9c0b7baf/components/camel-hdfs/src/main/docs/hdfs-component.adoc
----------------------------------------------------------------------
diff --git a/components/camel-hdfs/src/main/docs/hdfs-component.adoc b/components/camel-hdfs/src/main/docs/hdfs-component.adoc
new file mode 100644
index 0000000..41821a2
--- /dev/null
+++ b/components/camel-hdfs/src/main/docs/hdfs-component.adoc
@@ -0,0 +1,248 @@
+[[HDFS-HDFSComponent]]
+HDFS Component
+~~~~~~~~~~~~~~
+
+*Available as of Camel 2.8*
+
+The *hdfs* component enables you to read and write messages from/to an
+HDFS file system. HDFS is the distributed file system at the heart of
+http://hadoop.apache.org[Hadoop].
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+[source,xml]
+------------------------------------------------------------
+<dependency>
+    <groupId>org.apache.camel</groupId>
+    <artifactId>camel-hdfs</artifactId>
+    <version>x.x.x</version>
+    <!-- use the same version as your Camel core version -->
+</dependency>
+------------------------------------------------------------
+
+[[HDFS-URIformat]]
+URI format
+^^^^^^^^^^
+
+[source,java]
+---------------------------------------
+hdfs://hostname[:port][/path][?options]
+---------------------------------------
+
+You can append query options to the URI in the following format,
+`?option=value&option=value&...` +
+ The path is treated in the following way:
+
+1.  as a consumer, if it's a file, it just reads the file, otherwise if
+it represents a directory it scans all the files under the path
+satisfying the configured pattern. All the files under that directory
+must be of the same type.
+2.  as a producer, if at least one split strategy is defined, the path
+is considered a directory and under that directory the producer creates
+a different file per split named using the configured
+link:uuidgenerator.html[UuidGenerator].
+
+*Note*
+
+When consuming from hdfs in normal mode, a file is split into
+chunks, producing a message per chunk. You can configure the size of the
+chunk using the chunkSize option. If you want to read from hdfs and
+write to a regular file using the file component, then you can use
+fileExist=Append on the file endpoint to append each of the chunks together.
+
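+For example, the following sketch re-assembles the chunks of an HDFS file
+into a single local file (paths and chunkSize are illustrative; the append
+behaviour relies on the file component's fileExist option):
+
+[source,java]
+--------------------------------------------------------------------
+// each 4096 byte chunk of the HDFS file becomes one message
+from("hdfs://localhost/tmp/input/largefile?chunkSize=4096")
+    // append every chunk to the same local file
+    .to("file:/tmp/output?fileName=largefile.copy&fileExist=Append");
+--------------------------------------------------------------------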
+
+[[HDFS-Options]]
+Options
+^^^^^^^
+
+
+
+// component options: START
+The HDFS component supports 1 option which is listed below.
+
+
+
+{% raw %}
+[width="100%",cols="2s,1m,8",options="header"]
+|=======================================================================
+| Name | Java Type | Description
+| jAASConfiguration | Configuration | To use the given configuration for security with JAAS.
+|=======================================================================
+{% endraw %}
+// component options: END
+
+
+
+
+
+
+// endpoint options: START
+The HDFS component supports 41 endpoint options which are listed below:
+
+{% raw %}
+[width="100%",cols="2s,1,1m,1m,5",options="header"]
+|=======================================================================
+| Name | Group | Default | Java Type | Description
+| hostName | common |  | String | *Required* HDFS host to use
+| port | common | 8020 | int | HDFS port to use
+| path | common |  | String | *Required* The directory path to use
+| connectOnStartup | common | true | boolean | Whether to connect to the HDFS file system on starting the producer/consumer. If false then the connection is created on-demand. Notice that HDFS may take up to 15 minutes to establish a connection as it has a hardcoded 45 x 20 sec redelivery. Setting this option to false allows your application to start up without blocking for up to 15 minutes.
+| fileSystemType | common | HDFS | HdfsFileSystemType | Set to LOCAL to not use HDFS but local java.io.File instead.
+| fileType | common | NORMAL_FILE | HdfsFileType | The file type to use. For more details see Hadoop HDFS documentation about the various files types.
+| keyType | common | NULL | WritableType | The type for the key in case of sequence or map files.
+| owner | common |  | String | The file owner must match this owner for the consumer to pickup the file. Otherwise the file is skipped.
+| valueType | common | BYTES | WritableType | The type for the value in case of sequence or map files
+| bridgeErrorHandler | consumer | false | boolean | Allows for bridging the consumer to the Camel routing Error Handler which mean any exceptions occurred while the consumer is trying to pickup incoming messages or the likes will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions that will be logged at WARN/ERROR level and ignored.
+| delay | consumer | 1000 | long | The interval (milliseconds) between the directory scans.
+| initialDelay | consumer |  | long | For the consumer how much to wait (milliseconds) before to start scanning the directory.
+| pattern | consumer | * | String | The pattern used for scanning the directory
+| sendEmptyMessageWhenIdle | consumer | false | boolean | If the polling consumer did not poll any files you can enable this option to send an empty message (no body) instead.
+| exceptionHandler | consumer (advanced) |  | ExceptionHandler | To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this options is not in use. By default the consumer will deal with exceptions that will be logged at WARN/ERROR level and ignored.
+| pollStrategy | consumer (advanced) |  | PollingConsumerPollStrategy | A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel.
+| append | producer | false | boolean | Append to existing file. Notice that not all HDFS file systems support the append option.
+| overwrite | producer | true | boolean | Whether to overwrite existing files with the same name
+| blockSize | advanced | 67108864 | long | The size of the HDFS blocks
+| bufferSize | advanced | 4096 | int | The buffer size used by HDFS
+| checkIdleInterval | advanced | 500 | int | How often (time in millis) to run the idle checker background task. This option is only in use if the splitter strategy is IDLE.
+| chunkSize | advanced | 4096 | int | When reading a normal file this is split into chunks producing a message per chunk.
+| compressionCodec | advanced | DEFAULT | HdfsCompressionCodec | The compression codec to use
+| compressionType | advanced | NONE | CompressionType | The compression type to use (is default not in use)
+| exchangePattern | advanced | InOnly | ExchangePattern | Sets the default exchange pattern when creating an exchange
+| openedSuffix | advanced | opened | String | When a file is opened for reading/writing the file is renamed with this suffix to avoid to read it during the writing phase.
+| readSuffix | advanced | read | String | Once the file has been read is renamed with this suffix to avoid to read it again.
+| replication | advanced | 3 | short | The HDFS replication factor
+| splitStrategy | advanced |  | String | In the current version of Hadoop opening a file in append mode is disabled since it's not very reliable. So for the moment it's only possible to create new files. The Camel HDFS endpoint tries to solve this problem in this way: If the split strategy option has been defined the hdfs path will be used as a directory and files will be created using the configured UuidGenerator. Every time a splitting condition is met a new file is created. The splitStrategy option is defined as a string with the following syntax: splitStrategy=ST:valueST:value... where ST can be: BYTES a new file is created and the old is closed when the number of written bytes is more than value MESSAGES a new file is created and the old is closed when the number of written messages is more than value IDLE a new file is created and the old is closed when no writing happened in the last value milliseconds
+| synchronous | advanced | false | boolean | Sets whether synchronous processing should be strictly used or Camel is allowed to use asynchronous processing (if supported).
+| backoffErrorThreshold | scheduler |  | int | The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in.
+| backoffIdleThreshold | scheduler |  | int | The number of subsequent idle polls that should happen before the backoffMultipler should kick-in.
+| backoffMultiplier | scheduler |  | int | To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured.
+| greedy | scheduler | false | boolean | If greedy is enabled then the ScheduledPollConsumer will run immediately again if the previous run polled 1 or more messages.
+| runLoggingLevel | scheduler | TRACE | LoggingLevel | The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that.
+| scheduledExecutorService | scheduler |  | ScheduledExecutorService | Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool.
+| scheduler | scheduler | none | ScheduledPollConsumerScheduler | To use a cron scheduler from either camel-spring or camel-quartz2 component
+| schedulerProperties | scheduler |  | Map | To configure additional properties when using a custom scheduler or any of the Quartz2 Spring based scheduler.
+| startScheduler | scheduler | true | boolean | Whether the scheduler should be auto started.
+| timeUnit | scheduler | MILLISECONDS | TimeUnit | Time unit for initialDelay and delay options.
+| useFixedDelay | scheduler | true | boolean | Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details.
+|=======================================================================
+{% endraw %}
+// endpoint options: END
+
+
+
+
+
+[[HDFS-KeyTypeandValueType]]
+KeyType and ValueType
++++++++++++++++++++++
+
+* NULL it means that the key or the value is absent
+* BYTE for writing a byte, the java Byte class is mapped into a BYTE
+* BYTES for writing a sequence of bytes. It maps the java ByteBuffer
+class
+* INT for writing java integer
+* FLOAT for writing java float
+* LONG for writing java long
+* DOUBLE for writing java double
+* TEXT for writing java strings
+
+BYTES is also used for everything else, for example, in Camel a file is
+sent around as an InputStream; in this case it is written in a sequence
+file or a map file as a sequence of bytes.
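+
+For example, a producer endpoint writing a sequence file with text keys and
+long values could be declared as follows (a sketch; SEQUENCE_FILE is assumed
+to be one of the HdfsFileType values accepted by the fileType option):
+
+[source,java]
+------------------------------------------------------------------------------------
+hdfs://localhost/tmp/my-seq-file?fileType=SEQUENCE_FILE&keyType=TEXT&valueType=LONG
+------------------------------------------------------------------------------------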
+
+[[HDFS-SplittingStrategy]]
+Splitting Strategy
+^^^^^^^^^^^^^^^^^^
+
+In the current version of Hadoop opening a file in append mode is
+disabled since it's not very reliable. So, for the moment, it's only
+possible to create new files. The Camel HDFS endpoint tries to solve
+this problem in this way:
+
+* If the split strategy option has been defined, the hdfs path will be
+used as a directory and files will be created using the configured
+link:uuidgenerator.html[UuidGenerator]
+* Every time a splitting condition is met, a new file is created. +
+ The splitStrategy option is defined as a string with the following
+syntax: +
+ splitStrategy=<ST>:<value>,<ST>:<value>,*
+
+where <ST> can be:
+
+* BYTES a new file is created, and the old is closed when the number of
+written bytes is more than <value>
+* MESSAGES a new file is created, and the old is closed when the number
+of written messages is more than <value>
+* IDLE a new file is created, and the old is closed when no writing
+happened in the last <value> milliseconds
+
+*Note*
+
+Note that this strategy currently requires either setting an IDLE value
+or setting the HdfsConstants.HDFS_CLOSE header to false to use the
+BYTES/MESSAGES configuration; otherwise, the file will be closed with
+each message.
+
+for example:
+
+[source,java]
+----------------------------------------------------------------
+hdfs://localhost/tmp/simple-file?splitStrategy=IDLE:1000,BYTES:5
+----------------------------------------------------------------
+
+This means a new file is created either when writing has been idle for more
+than 1 second or when more than 5 bytes have been written. So, running
+`hadoop fs -ls /tmp/simple-file` you'll see that multiple files have
+been created.
+
+[[HDFS-MessageHeaders]]
+Message Headers
+^^^^^^^^^^^^^^^
+
+The following headers are supported by this component:
+
+[[HDFS-Produceronly]]
+Producer only
++++++++++++++
+
+[width="100%",cols="10%,90%",options="header",]
+|=======================================================================
+|Header |Description
+
+|`CamelFileName` |*Camel 2.13:* Specifies the name of the file to write (relative to the
+endpoint path). The name can be a `String` or an
+link:expression.html[Expression] object. Only relevant when not using a
+split strategy.
+|=======================================================================
+
+[[HDFS-Controllingtoclosefilestream]]
+Controlling to close file stream
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+*Available as of Camel 2.10.4*
+
+When using the link:hdfs.html[HDFS] producer *without* a split strategy,
+then the file output stream is by default closed after the write.
+However you may want to keep the stream open, and only explicitly close
+the stream later. For that you can use the header
+`HdfsConstants.HDFS_CLOSE` (value = `"CamelHdfsClose"`) to control this.
+Setting this value to a boolean allows you to explicitly control whether
+the stream should be closed or not.
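+
+For example, the following sketch keeps the stream open across messages by
+setting the header to false:
+
+[source,java]
+------------------------------------------------------------------
+from("direct:in")
+    // CamelHdfsClose = false: do not close the stream after this write
+    .setHeader("CamelHdfsClose", constant(Boolean.FALSE))
+    .to("hdfs://localhost/tmp/output/data.txt");
+------------------------------------------------------------------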
+
+Notice this does not apply if you use a split strategy, as there are
+various strategies that can control when the stream is closed.
+
+[[HDFS-UsingthiscomponentinOSGi]]
+Using this component in OSGi
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+This component is fully functional in an OSGi environment, however, it
+requires some actions from the user. Hadoop uses the thread context
+class loader in order to load resources. Usually, the thread context
+classloader will be the bundle class loader of the bundle that contains
+the routes. So, the default configuration files need to be visible from
+the bundle class loader. A typical way to deal with it is to keep a copy
+of core-default.xml in your bundle root. That file can be found in the
+hadoop-common.jar.

http://git-wip-us.apache.org/repos/asf/camel/blob/9c0b7baf/components/camel-hdfs/src/main/docs/hdfs.adoc
----------------------------------------------------------------------
diff --git a/components/camel-hdfs/src/main/docs/hdfs.adoc b/components/camel-hdfs/src/main/docs/hdfs.adoc
deleted file mode 100644
index 41821a2..0000000
--- a/components/camel-hdfs/src/main/docs/hdfs.adoc
+++ /dev/null
@@ -1,248 +0,0 @@
-[[HDFS-HDFSComponent]]
-HDFS Component
-~~~~~~~~~~~~~~
-
-*Available as of Camel 2.8*
-
-The *hdfs* component enables you to read and write messages from/to an
-HDFS file system. HDFS is the distributed file system at the heart of
-http://hadoop.apache.org[Hadoop].
-
-Maven users will need to add the following dependency to their `pom.xml`
-for this component:
-
-[source,xml]
-------------------------------------------------------------
-<dependency>
-    <groupId>org.apache.camel</groupId>
-    <artifactId>camel-hdfs</artifactId>
-    <version>x.x.x</version>
-    <!-- use the same version as your Camel core version -->
-</dependency>
-------------------------------------------------------------
-
-[[HDFS-URIformat]]
-URI format
-^^^^^^^^^^
-
-[source,java]
----------------------------------------
-hdfs://hostname[:port][/path][?options]
----------------------------------------
-
-You can append query options to the URI in the following format,
-`?option=value&option=value&...` +
- The path is treated in the following way:
-
-1.  as a consumer, if it's a file, it just reads the file, otherwise if
-it represents a directory it scans all the file under the path
-satisfying the configured pattern. All the files under that directory
-must be of the same type.
-2.  as a producer, if at least one split strategy is defined, the path
-is considered a directory and under that directory the producer creates
-a different file per split named using the configured
-link:uuidgenerator.html[UuidGenerator].
-
-*Note*
-
-When consuming from hdfs then in normal mode, a file is split into
-chunks, producing a message per chunk. You can configure the size of the
-chunk using the chunkSize option. If you want to read from hdfs and
-write to a regular file using the file component, then you can use the
-fileMode=Append to append each of the chunks together.
-
-�
-
-[[HDFS-Options]]
-Options
-^^^^^^^
-
-
-
-// component options: START
-The HDFS component supports 1 options which are listed below.
-
-
-
-{% raw %}
-[width="100%",cols="2s,1m,8",options="header"]
-|=======================================================================
-| Name | Java Type | Description
-| jAASConfiguration | Configuration | To use the given configuration for security with JAAS.
-|=======================================================================
-{% endraw %}
-// component options: END
-
-
-
-
-
-
-// endpoint options: START
-The HDFS component supports 41 endpoint options which are listed below:
-
-{% raw %}
-[width="100%",cols="2s,1,1m,1m,5",options="header"]
-|=======================================================================
-| Name | Group | Default | Java Type | Description
-| hostName | common |  | String | *Required* HDFS host to use
-| port | common | 8020 | int | HDFS port to use
-| path | common |  | String | *Required* The directory path to use
-| connectOnStartup | common | true | boolean | Whether to connect to the HDFS file system on starting the producer/consumer. If false then the connection is created on-demand. Notice that HDFS may take up till 15 minutes to establish a connection as it has hardcoded 45 x 20 sec redelivery. By setting this option to false allows your application to startup and not block for up till 15 minutes.
-| fileSystemType | common | HDFS | HdfsFileSystemType | Set to LOCAL to not use HDFS but local java.io.File instead.
-| fileType | common | NORMAL_FILE | HdfsFileType | The file type to use. For more details see Hadoop HDFS documentation about the various files types.
-| keyType | common | NULL | WritableType | The type for the key in case of sequence or map files.
-| owner | common |  | String | The file owner must match this owner for the consumer to pickup the file. Otherwise the file is skipped.
-| valueType | common | BYTES | WritableType | The type for the key in case of sequence or map files
-| bridgeErrorHandler | consumer | false | boolean | Allows for bridging the consumer to the Camel routing Error Handler which mean any exceptions occurred while the consumer is trying to pickup incoming messages or the likes will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions that will be logged at WARN/ERROR level and ignored.
-| delay | consumer | 1000 | long | The interval (milliseconds) between the directory scans.
-| initialDelay | consumer |  | long | For the consumer how much to wait (milliseconds) before to start scanning the directory.
-| pattern | consumer | * | String | The pattern used for scanning the directory
-| sendEmptyMessageWhenIdle | consumer | false | boolean | If the polling consumer did not poll any files you can enable this option to send an empty message (no body) instead.
-| exceptionHandler | consumer (advanced) |  | ExceptionHandler | To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this options is not in use. By default the consumer will deal with exceptions that will be logged at WARN/ERROR level and ignored.
-| pollStrategy | consumer (advanced) |  | PollingConsumerPollStrategy | A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel.
-| append | producer | false | boolean | Append to existing file. Notice that not all HDFS file systems support the append option.
-| overwrite | producer | true | boolean | Whether to overwrite existing files with the same name
-| blockSize | advanced | 67108864 | long | The size of the HDFS blocks
-| bufferSize | advanced | 4096 | int | The buffer size used by HDFS
-| checkIdleInterval | advanced | 500 | int | How often (time in millis) in to run the idle checker background task. This option is only in use if the splitter strategy is IDLE.
-| chunkSize | advanced | 4096 | int | When reading a normal file this is split into chunks producing a message per chunk.
-| compressionCodec | advanced | DEFAULT | HdfsCompressionCodec | The compression codec to use
-| compressionType | advanced | NONE | CompressionType | The compression type to use (is default not in use)
-| exchangePattern | advanced | InOnly | ExchangePattern | Sets the default exchange pattern when creating an exchange
-| openedSuffix | advanced | opened | String | When a file is opened for reading/writing the file is renamed with this suffix to avoid to read it during the writing phase.
-| readSuffix | advanced | read | String | Once the file has been read is renamed with this suffix to avoid to read it again.
-| replication | advanced | 3 | short | The HDFS replication factor
-| splitStrategy | advanced |  | String | In the current version of Hadoop opening a file in append mode is disabled since it's not very reliable. So for the moment it's only possible to create new files. The Camel HDFS endpoint tries to solve this problem in this way: If the split strategy option has been defined the hdfs path will be used as a directory and files will be created using the configured UuidGenerator. Every time a splitting condition is met a new file is created. The splitStrategy option is defined as a string with the following syntax: splitStrategy=ST:valueST:value... where ST can be: BYTES a new file is created and the old is closed when the number of written bytes is more than value MESSAGES a new file is created and the old is closed when the number of written messages is more than value IDLE a new file is created and the old is closed when no writing happened in the last value milliseconds
-| synchronous | advanced | false | boolean | Sets whether synchronous processing should be strictly used or Camel is allowed to use asynchronous processing (if supported).
-| backoffErrorThreshold | scheduler |  | int | The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in.
-| backoffIdleThreshold | scheduler |  | int | The number of subsequent idle polls that should happen before the backoffMultipler should kick-in.
-| backoffMultiplier | scheduler |  | int | To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured.
-| greedy | scheduler | false | boolean | If greedy is enabled then the ScheduledPollConsumer will run immediately again if the previous run polled 1 or more messages.
-| runLoggingLevel | scheduler | TRACE | LoggingLevel | The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that.
-| scheduledExecutorService | scheduler |  | ScheduledExecutorService | Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool.
-| scheduler | scheduler | none | ScheduledPollConsumerScheduler | To use a cron scheduler from either camel-spring or camel-quartz2 component
-| schedulerProperties | scheduler |  | Map | To configure additional properties when using a custom scheduler or any of the Quartz2 Spring based scheduler.
-| startScheduler | scheduler | true | boolean | Whether the scheduler should be auto started.
-| timeUnit | scheduler | MILLISECONDS | TimeUnit | Time unit for initialDelay and delay options.
-| useFixedDelay | scheduler | true | boolean | Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details.
-|=======================================================================
-{% endraw %}
-// endpoint options: END
-
-
-
-
-
-[[HDFS-KeyTypeandValueType]]
-KeyType and ValueType
-+++++++++++++++++++++
-
-* NULL it means that the key or the value is absent
-* BYTE for writing a byte, the java Byte class is mapped into a BYTE
-* BYTES for writing a sequence of bytes. It maps the java ByteBuffer
-class
-* INT for writing java integer
-* FLOAT for writing java float
-* LONG for writing java long
-* DOUBLE for writing java double
-* TEXT for writing java strings
-
-BYTES is also used with everything else, for example, in Camel a file is
-sent around as an InputStream, int this case is written in a sequence
-file or a map file as a sequence of bytes.
-
-[[HDFS-SplittingStrategy]]
-Splitting Strategy
-^^^^^^^^^^^^^^^^^^
-
-In the current version of Hadoop opening a file in append mode is
-disabled since it's not very reliable. So, for the moment, it's only
-possible to create new files. The Camel HDFS endpoint tries to solve
-this problem in this way:
-
-* If the split strategy option has been defined, the hdfs path will be
-used as a directory and files will be created using the configured
-link:uuidgenerator.html[UuidGenerator]
-* Every time a splitting condition is met, a new file is created. +
- The splitStrategy option is defined as a string with the following
-syntax: +
- splitStrategy=<ST>:<value>,<ST>:<value>,*
-
-where <ST> can be:
-
-* BYTES a new file is created, and the old is closed when the number of
-written bytes is more than <value>
-* MESSAGES a new file is created, and the old is closed when the number
-of written messages is more than <value>
-* IDLE a new file is created, and the old is closed when no writing
-happened in the last <value> milliseconds
-
-*Note*
-
-note that this strategy currently requires either setting an IDLE value
-or setting the HdfsConstants.HDFS_CLOSE header to false to use the
-BYTES/MESSAGES configuration...otherwise, the file will be closed with
-each message
-
-for example:
-
-[source,java]
-----------------------------------------------------------------
-hdfs://localhost/tmp/simple-file?splitStrategy=IDLE:1000,BYTES:5
-----------------------------------------------------------------
-
-it means: a new file is created either when it has been idle for more
-than 1 second or if more than 5 bytes have been written. So, running
-`hadoop fs -ls /tmp/simple-file` you'll see that multiple files have
-been created.
-
-[[HDFS-MessageHeaders]]
-Message Headers
-^^^^^^^^^^^^^^^
-
-The following headers are supported by this component:
-
-[[HDFS-Produceronly]]
-Producer only
-+++++++++++++
-
-[width="100%",cols="10%,90%",options="header",]
-|=======================================================================
-|Header |Description
-
-|`CamelFileName` |*Camel 2.13:* Specifies the name of the file to write (relative to the
-endpoint path). The name can be a `String` or an
-link:expression.html[Expression] object. Only relevant when not using a
-split strategy.
-|=======================================================================
-
-[[HDFS-Controllingtoclosefilestream]]
-Controlling to close file stream
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-*Available as of Camel 2.10.4*
-
-When using the link:hdfs.html[HDFS] producer *without* a split strategy,
-then the file output stream is by default closed after the write.
-However you may want to keep the stream open, and only explicitly close
-the stream later. For that you can use the header
-`HdfsConstants.HDFS_CLOSE` (value = `"CamelHdfsClose"`) to control this.
-Setting this value to a boolean allows you to explicit control whether
-the stream should be closed or not.
-
-Notice this does not apply if you use a split strategy, as there are
-various strategies that can control when the stream is closed.
-
-[[HDFS-UsingthiscomponentinOSGi]]
-Using this component in OSGi
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-This component is fully functional in an OSGi environment, however, it
-requires some actions from the user. Hadoop uses the thread context
-class loader in order to load resources. Usually, the thread context
-classloader will be the bundle class loader of the bundle that contains
-the routes. So, the default configuration files need to be visible from
-the bundle class loader. A typical way to deal with it is to keep a copy
-of core-default.xml in your bundle root. That file can be found in the
-hadoop-common.jar.

http://git-wip-us.apache.org/repos/asf/camel/blob/9c0b7baf/components/camel-hdfs2/src/main/docs/hdfs2-component.adoc
----------------------------------------------------------------------
diff --git a/components/camel-hdfs2/src/main/docs/hdfs2-component.adoc b/components/camel-hdfs2/src/main/docs/hdfs2-component.adoc
new file mode 100644
index 0000000..5d0ee27
--- /dev/null
+++ b/components/camel-hdfs2/src/main/docs/hdfs2-component.adoc
@@ -0,0 +1,294 @@
+[[HDFS2-HDFS2Component]]
+HDFS2 Component
+~~~~~~~~~~~~~~~
+
+*Available as of Camel 2.13*
+
+The *hdfs2* component enables you to read and write messages from/to an
+HDFS file system using Hadoop 2.x. HDFS is the distributed file system
+at the heart of http://hadoop.apache.org[Hadoop].
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+[source,xml]
+------------------------------------------------------------
+<dependency>
+    <groupId>org.apache.camel</groupId>
+    <artifactId>camel-hdfs2</artifactId>
+    <version>x.x.x</version>
+    <!-- use the same version as your Camel core version -->
+</dependency>
+------------------------------------------------------------
+
+[[HDFS2-URIformat]]
+URI format
+^^^^^^^^^^
+
+[source,java]
+----------------------------------------
+hdfs2://hostname[:port][/path][?options]
+----------------------------------------
+
+You can append query options to the URI in the following format,
+`?option=value&option=value&...` +
+ The path is treated in the following way:
+
+1.  as a consumer, if it's a file, it just reads the file, otherwise if
+it represents a directory it scans all the files under the path
+satisfying the configured pattern. All the files under that directory
+must be of the same type.
+2.  as a producer, if at least one split strategy is defined, the path
+is considered a directory and under that directory the producer creates
+a different file per split named using the configured
+link:uuidgenerator.html[UuidGenerator].
+
+
+When consuming from hdfs2 in normal mode, a file is split into
+chunks, producing a message per chunk. You can configure the size of the
+chunk using the chunkSize option. If you want to read from hdfs and
+write to a regular file using the file component, then you can use
+fileExist=Append on the file endpoint to append each of the chunks together.
+
+[[HDFS2-Options]]
+Options
+^^^^^^^
+
+
+
+
+// component options: START
+The HDFS2 component supports 1 option which is listed below.
+
+
+
+{% raw %}
+[width="100%",cols="2s,1m,8",options="header"]
+|=======================================================================
+| Name | Java Type | Description
+| jAASConfiguration | Configuration | To use the given configuration for security with JAAS.
+|=======================================================================
+{% endraw %}
+// component options: END
+
+
+
+
+
+// endpoint options: START
+The HDFS2 component supports 41 endpoint options which are listed below:
+
+{% raw %}
+[width="100%",cols="2s,1,1m,1m,5",options="header"]
+|=======================================================================
+| Name | Group | Default | Java Type | Description
+| hostName | common |  | String | *Required* HDFS host to use
+| port | common | 8020 | int | HDFS port to use
+| path | common |  | String | *Required* The directory path to use
+| connectOnStartup | common | true | boolean | Whether to connect to the HDFS file system on starting the producer/consumer. If false then the connection is created on-demand. Notice that HDFS may take up to 15 minutes to establish a connection as it has a hardcoded 45 x 20 sec redelivery. Setting this option to false allows your application to start up without blocking for up to 15 minutes.
+| fileSystemType | common | HDFS | HdfsFileSystemType | Set to LOCAL to not use HDFS but local java.io.File instead.
+| fileType | common | NORMAL_FILE | HdfsFileType | The file type to use. For more details see Hadoop HDFS documentation about the various file types.
+| keyType | common | NULL | WritableType | The type for the key in case of sequence or map files.
+| owner | common |  | String | The file owner must match this owner for the consumer to pickup the file. Otherwise the file is skipped.
+| valueType | common | BYTES | WritableType | The type for the value in case of sequence or map files
+| bridgeErrorHandler | consumer | false | boolean | Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions that occur while the consumer is trying to pick up incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN/ERROR level and ignored.
+| delay | consumer | 1000 | long | The interval (milliseconds) between the directory scans.
+| initialDelay | consumer |  | long | How long to wait (milliseconds) before the consumer starts scanning the directory.
+| pattern | consumer | * | String | The pattern used for scanning the directory
+| sendEmptyMessageWhenIdle | consumer | false | boolean | If the polling consumer did not poll any files you can enable this option to send an empty message (no body) instead.
+| exceptionHandler | consumer (advanced) |  | ExceptionHandler | To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, which will be logged at WARN/ERROR level and ignored.
+| pollStrategy | consumer (advanced) |  | PollingConsumerPollStrategy | A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling that usually occurs during the poll operation, before an Exchange has been created and routed in Camel.
+| append | producer | false | boolean | Append to existing file. Notice that not all HDFS file systems support the append option.
+| overwrite | producer | true | boolean | Whether to overwrite existing files with the same name
+| blockSize | advanced | 67108864 | long | The size of the HDFS blocks
+| bufferSize | advanced | 4096 | int | The buffer size used by HDFS
+| checkIdleInterval | advanced | 500 | int | How often (time in millis) to run the idle checker background task. This option is only in use if the splitter strategy is IDLE.
+| chunkSize | advanced | 4096 | int | When reading a normal file this is split into chunks producing a message per chunk.
+| compressionCodec | advanced | DEFAULT | HdfsCompressionCodec | The compression codec to use
+| compressionType | advanced | NONE | CompressionType | The compression type to use (not in use by default)
+| exchangePattern | advanced | InOnly | ExchangePattern | Sets the default exchange pattern when creating an exchange
+| openedSuffix | advanced | opened | String | When a file is opened for reading/writing the file is renamed with this suffix to avoid reading it during the writing phase.
+| readSuffix | advanced | read | String | Once the file has been read it is renamed with this suffix to avoid reading it again.
+| replication | advanced | 3 | short | The HDFS replication factor
+| splitStrategy | advanced |  | String | In the current version of Hadoop opening a file in append mode is disabled since it's not very reliable. So for the moment it's only possible to create new files. The Camel HDFS endpoint tries to solve this problem in this way: If the split strategy option has been defined the hdfs path will be used as a directory and files will be created using the configured UuidGenerator. Every time a splitting condition is met a new file is created. The splitStrategy option is defined as a string with the following syntax: splitStrategy=ST:value,ST:value,... where ST can be: BYTES - a new file is created, and the old is closed, when the number of written bytes is more than value; MESSAGES - a new file is created, and the old is closed, when the number of written messages is more than value; IDLE - a new file is created, and the old is closed, when no writing happened in the last value milliseconds.
+| synchronous | advanced | false | boolean | Sets whether synchronous processing should be strictly used or Camel is allowed to use asynchronous processing (if supported).
+| backoffErrorThreshold | scheduler |  | int | The number of subsequent error polls (failed due to some error) that should happen before the backoffMultiplier should kick in.
+| backoffIdleThreshold | scheduler |  | int | The number of subsequent idle polls that should happen before the backoffMultiplier should kick in.
+| backoffMultiplier | scheduler |  | int | To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured.
+| greedy | scheduler | false | boolean | If greedy is enabled then the ScheduledPollConsumer will run immediately again if the previous run polled 1 or more messages.
+| runLoggingLevel | scheduler | TRACE | LoggingLevel | The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that.
+| scheduledExecutorService | scheduler |  | ScheduledExecutorService | Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool.
+| scheduler | scheduler | none | ScheduledPollConsumerScheduler | To use a cron scheduler from either camel-spring or camel-quartz2 component
+| schedulerProperties | scheduler |  | Map | To configure additional properties when using a custom scheduler or any of the Quartz2 or Spring based schedulers.
+| startScheduler | scheduler | true | boolean | Whether the scheduler should be auto started.
+| timeUnit | scheduler | MILLISECONDS | TimeUnit | Time unit for initialDelay and delay options.
+| useFixedDelay | scheduler | true | boolean | Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details.
+|=======================================================================
+{% endraw %}
+// endpoint options: END
+
+
+
+
+[[HDFS2-KeyTypeandValueType]]
+KeyType and ValueType
++++++++++++++++++++++
+
+* NULL means that the key or the value is absent
+* BYTE for writing a byte; the Java Byte class is mapped into a BYTE
+* BYTES for writing a sequence of bytes; it maps the Java ByteBuffer
+class
+* INT for writing a Java integer
+* FLOAT for writing a Java float
+* LONG for writing a Java long
+* DOUBLE for writing a Java double
+* TEXT for writing Java strings
+
+BYTES is also used with everything else; for example, in Camel a file is
+sent around as an InputStream, and in this case it is written in a sequence
+file or a map file as a sequence of bytes.
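+
+As a sketch (the endpoint details are illustrative), a producer writing
+the message body as a TEXT value into a sequence file might configure
+the types like this:
+
+[source,java]
+----------------------------------------------------------------------
+// Sketch only: host and path are illustrative; the key type is left as
+// the default NULL and the body is written as a TEXT value.
+from("direct:writeSeq")
+    .to("hdfs2://localhost/tmp/pairs?fileType=SEQUENCE_FILE&keyType=NULL&valueType=TEXT");
+----------------------------------------------------------------------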
+
+[[HDFS2-SplittingStrategy]]
+Splitting Strategy
+^^^^^^^^^^^^^^^^^^
+
+In the current version of Hadoop opening a file in append mode is
+disabled since it's not very reliable. So, for the moment, it's only
+possible to create new files. The Camel HDFS endpoint tries to solve
+this problem in this way:
+
+* If the split strategy option has been defined, the hdfs path will be
+used as a directory and files will be created using the configured
+link:uuidgenerator.html[UuidGenerator]
+* Every time a splitting condition is met, a new file is created. +
+ The splitStrategy option is defined as a string with the following
+syntax: splitStrategy=<ST>:<value>,<ST>:<value>,*
+
+where <ST> can be:
+
+* BYTES a new file is created, and the old is closed when the number of
+written bytes is more than <value>
+* MESSAGES a new file is created, and the old is closed when the number
+of written messages is more than <value>
+* IDLE a new file is created, and the old is closed when no writing
+happened in the last <value> milliseconds
+
+Note that this strategy currently requires either setting an IDLE value
+or setting the HdfsConstants.HDFS_CLOSE header to false to use the
+BYTES/MESSAGES configuration; otherwise, the file will be closed with
+each message.
+
+For example:
+
+[source,java]
+-----------------------------------------------------------------
+hdfs2://localhost/tmp/simple-file?splitStrategy=IDLE:1000,BYTES:5
+-----------------------------------------------------------------
+
+This means that a new file is created either when it has been idle for
+more than 1 second or when more than 5 bytes have been written. So,
+running `hadoop fs -ls /tmp/simple-file` you'll see that multiple files
+have been created.
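+
+Used from a route, the same endpoint might look like the following
+sketch (host and path as above):
+
+[source,java]
+--------------------------------------------------------------------------
+// Sketch: writes incoming messages to HDFS, rolling to a new file when
+// the current one has been idle for 1 second or has grown past 5 bytes.
+from("direct:input")
+    .to("hdfs2://localhost/tmp/simple-file?splitStrategy=IDLE:1000,BYTES:5");
+--------------------------------------------------------------------------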
+
+[[HDFS2-MessageHeaders]]
+Message Headers
+^^^^^^^^^^^^^^^
+
+The following headers are supported by this component:
+
+[[HDFS2-Produceronly]]
+Producer only
++++++++++++++
+
+[width="100%",cols="10%,90%",options="header",]
+|=======================================================================
+|Header |Description
+
+|`CamelFileName` |*Camel 2.13:* Specifies the name of the file to write (relative to the
+endpoint path). The name can be a `String` or an
+link:expression.html[Expression] object. Only relevant when not using a
+split strategy.
+|=======================================================================
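+
+For example, a producer route might set the target file name per message
+like this (a sketch; the endpoint is illustrative):
+
+[source,java]
+----------------------------------------------------------------------
+// Sketch: Exchange.FILE_NAME resolves to the "CamelFileName" header.
+from("direct:put")
+    .setHeader(Exchange.FILE_NAME, constant("report.txt"))
+    .to("hdfs2://localhost/tmp/reports");
+----------------------------------------------------------------------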
+
+[[HDFS2-Controllingtoclosefilestream]]
+Controlling when to close the file stream
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+When using the link:hdfs2.html[HDFS2] producer *without* a split
+strategy, the file output stream is by default closed after the
+write. However, you may want to keep the stream open and only close it
+explicitly later. For that you can use the header
+`HdfsConstants.HDFS_CLOSE` (value = `"CamelHdfsClose"`) to control this.
+Setting this header to a boolean allows you to explicitly control whether
+the stream should be closed or not.
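+
+A minimal sketch (endpoint and file name are illustrative):
+
+[source,java]
+----------------------------------------------------------------------
+// Sketch: keep the HDFS stream open across writes by setting the
+// "CamelHdfsClose" header to false; set it to true (or remove it)
+// when the stream should finally be closed.
+from("direct:append")
+    .setHeader("CamelHdfsClose", constant(false))
+    .to("hdfs2://localhost/tmp/open-stream/file.txt");
+----------------------------------------------------------------------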
+
+Notice this does not apply if you use a split strategy, as there are
+various strategies that can control when the stream is closed.
+
+[[HDFS2-UsingthiscomponentinOSGi]]
+Using this component in OSGi
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+There are some quirks when running this component in an OSGi environment
+related to the mechanism Hadoop 2.x uses to discover different
+`org.apache.hadoop.fs.FileSystem` implementations. Hadoop 2.x uses
+`java.util.ServiceLoader` which looks for
+`/META-INF/services/org.apache.hadoop.fs.FileSystem` files defining
+available filesystem types and implementations. These resources are not
+available when running inside OSGi.
+
+As with the `camel-hdfs` component, the default configuration files need to
+be visible from the bundle class loader. A typical way to deal with this
+is to keep a copy of `core-default.xml` (and e.g. `hdfs-default.xml`)
+in your bundle root.
+
+[[HDFS2-Usingthiscomponentwithmanuallydefinedroutes]]
+Using this component with manually defined routes
++++++++++++++++++++++++++++++++++++++++++++++++++
+
+There are two options:
+
+1.  Package the `/META-INF/services/org.apache.hadoop.fs.FileSystem`
+resource with the bundle that defines the routes. This resource should
+list all the required Hadoop 2.x filesystem implementations.
+2.  Provide boilerplate initialization code which populates the internal,
+static cache inside the `org.apache.hadoop.fs.FileSystem` class:
+
+[source,java]
+----------------------------------------------------------------------------------------------------
+org.apache.hadoop.conf.Configuration conf = new org.apache.hadoop.conf.Configuration();
+conf.setClass("fs.file.impl", org.apache.hadoop.fs.LocalFileSystem.class, FileSystem.class);
+conf.setClass("fs.hdfs.impl", org.apache.hadoop.hdfs.DistributedFileSystem.class, FileSystem.class);
+...
+FileSystem.get("file:///", conf);
+FileSystem.get("hdfs://localhost:9000/", conf);
+...
+----------------------------------------------------------------------------------------------------
+
+[[HDFS2-UsingthiscomponentwithBlueprintcontainer]]
+Using this component with Blueprint container
++++++++++++++++++++++++++++++++++++++++++++++
+
+Two options:
+
+1.  Package the `/META-INF/services/org.apache.hadoop.fs.FileSystem`
+resource with the bundle that contains the blueprint definition.
+2.  Add the following to the blueprint definition file:
+
+[source,xml]
+------------------------------------------------------------------------------------------------------
+<bean id="hdfsOsgiHelper" class="org.apache.camel.component.hdfs2.HdfsOsgiHelper">
+   <argument>
+      <map>
+         <entry key="file:///" value="org.apache.hadoop.fs.LocalFileSystem"  />
+         <entry key="hdfs://localhost:9000/" value="org.apache.hadoop.hdfs.DistributedFileSystem" />
+         ...
+      </map>
+   </argument>
+</bean>
+
+<bean id="hdfs2" class="org.apache.camel.component.hdfs2.HdfsComponent" depends-on="hdfsOsgiHelper" />
+------------------------------------------------------------------------------------------------------
+
+This way Hadoop 2.x will have the correct mapping of URI schemes to
+filesystem implementations.

http://git-wip-us.apache.org/repos/asf/camel/blob/9c0b7baf/components/camel-hdfs2/src/main/docs/hdfs2.adoc
----------------------------------------------------------------------
diff --git a/components/camel-hdfs2/src/main/docs/hdfs2.adoc b/components/camel-hdfs2/src/main/docs/hdfs2.adoc
deleted file mode 100644
index 5d0ee27..0000000
--- a/components/camel-hdfs2/src/main/docs/hdfs2.adoc
+++ /dev/null
@@ -1,294 +0,0 @@