Posted to commits@camel.apache.org by ac...@apache.org on 2016/05/10 07:25:55 UTC

[1/2] camel git commit: Added camel-mongodb docs to Gitbook

Repository: camel
Updated Branches:
  refs/heads/master 35be1b70e -> 3173d96ca


Added camel-mongodb docs to Gitbook


Project: http://git-wip-us.apache.org/repos/asf/camel/repo
Commit: http://git-wip-us.apache.org/repos/asf/camel/commit/38fefbcb
Tree: http://git-wip-us.apache.org/repos/asf/camel/tree/38fefbcb
Diff: http://git-wip-us.apache.org/repos/asf/camel/diff/38fefbcb

Branch: refs/heads/master
Commit: 38fefbcb482998559a2a5f5376062b4423af2d35
Parents: 35be1b7
Author: Andrea Cosentino <an...@gmail.com>
Authored: Tue May 10 09:01:40 2016 +0200
Committer: Andrea Cosentino <an...@gmail.com>
Committed: Tue May 10 09:01:40 2016 +0200

----------------------------------------------------------------------
 .../camel-mongodb/src/main/docs/mongodb.adoc    | 814 +++++++++++++++++++
 docs/user-manual/en/SUMMARY.md                  |   1 +
 2 files changed, 815 insertions(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/camel/blob/38fefbcb/components/camel-mongodb/src/main/docs/mongodb.adoc
----------------------------------------------------------------------
diff --git a/components/camel-mongodb/src/main/docs/mongodb.adoc b/components/camel-mongodb/src/main/docs/mongodb.adoc
new file mode 100644
index 0000000..a6032a0
--- /dev/null
+++ b/components/camel-mongodb/src/main/docs/mongodb.adoc
@@ -0,0 +1,814 @@
+[[MongoDB-CamelMongoDBcomponent]]
+Camel MongoDB component
+~~~~~~~~~~~~~~~~~~~~~~~
+
+*Available as of Camel 2.10*
+
+According to Wikipedia: "NoSQL is a movement promoting a loosely defined
+class of non-relational data stores that break with a long history of
+relational databases and ACID guarantees." NoSQL solutions have grown in
+popularity in the last few years, and major high-traffic sites and
+services such as Facebook, LinkedIn and Twitter are known to use them
+extensively to achieve scalability and agility.
+
+Basically, NoSQL solutions differ from traditional RDBMS (Relational
+Database Management Systems) in that they don't use SQL as their query
+language and generally don't offer ACID-like transactional behaviour nor
+relational data. Instead, they are designed around the concept of
+flexible data structures and schemas (meaning that the traditional
+concept of a database table with a fixed schema is dropped), extreme
+scalability on commodity hardware and blazing-fast processing.
+
+MongoDB is a very popular NoSQL solution and the camel-mongodb component
+integrates Camel with MongoDB allowing you to interact with MongoDB
+collections both as a producer (performing operations on the collection)
+and as a consumer (consuming documents from a MongoDB collection).
+
+MongoDB revolves around the concepts of documents (not as in office
+documents, but rather hierarchical data defined in JSON/BSON) and
+collections. This component page assumes you are familiar with them.
+Otherwise, visit http://www.mongodb.org/[http://www.mongodb.org/].
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+[source,xml]
+------------------------------------------------------------
+<dependency>
+    <groupId>org.apache.camel</groupId>
+    <artifactId>camel-mongodb</artifactId>
+    <version>x.y.z</version>
+    <!-- use the same version as your Camel core version -->
+</dependency>
+------------------------------------------------------------
+
+[[MongoDB-URIformat]]
+URI format
+~~~~~~~~~~
+
+[source,java]
+---------------------------------------------------------------------------------------------------------------
+mongodb:connectionBean?database=databaseName&collection=collectionName&operation=operationName[&moreOptions...]
+---------------------------------------------------------------------------------------------------------------
+
+[[MongoDB-options]]
+MongoDB options
+~~~~~~~~~~~~~~~
+
+
+// component options: START
+The MongoDB component has no options.
+// component options: END
+
+
+
+// endpoint options: START
+The MongoDB component supports 22 endpoint options which are listed below:
+
+[width="100%",cols="2s,1,1m,1m,5",options="header"]
+|=======================================================================
+| Name | Group | Default | Java Type | Description
+| connectionBean | common |  | String | *Required* Name of com.mongodb.Mongo to use.
+| collection | common |  | String | Sets the name of the MongoDB collection to bind to this endpoint
+| collectionIndex | common |  | String | Sets the collection index (JSON FORMAT : field1 : order1 field2 : order2)
+| createCollection | common | true | boolean | Create collection during initialisation if it doesn't exist. Default is true.
+| cursorRegenerationDelay | common | 1000 | long | MongoDB tailable cursors will block until new data arrives. If no new data is inserted after some time the cursor will be automatically freed and closed by the MongoDB server. The client is expected to regenerate the cursor if needed. This value specifies the time to wait before attempting to fetch a new cursor and if the attempt fails how long before the next attempt is made. Default value is 1000ms.
+| database | common |  | String | Sets the name of the MongoDB database to target
+| dynamicity | common | false | boolean | Sets whether this endpoint will attempt to dynamically resolve the target database and collection from the incoming Exchange properties. Can be used to override at runtime the database and collection specified on the otherwise static endpoint URI. It is disabled by default to boost performance. Enabling it will take a minimal performance hit.
+| operation | common |  | MongoDbOperation | Sets the operation this endpoint will execute against MongoDB. For possible values see MongoDbOperation.
+| outputType | common |  | MongoDbOutputType | Convert the output of the producer to the selected type : DBObjectList DBObject or DBCursor. DBObjectList or DBObject applies to findAll. DBCursor applies to all other operations.
+| persistentId | common |  | String | One tail tracking collection can host many trackers for several tailable consumers. To keep them separate each tracker should have its own unique persistentId.
+| persistentTailTracking | common | false | boolean | Enable persistent tail tracking which is a mechanism to keep track of the last consumed message across system restarts. The next time the system is up the endpoint will recover the cursor from the point where it last stopped slurping records.
+| readPreference | common |  | ReadPreference | Sets a MongoDB ReadPreference on the Mongo connection. Read preferences set directly on the connection will be overridden by this setting. The link com.mongodb.ReadPreferencevalueOf(String) utility method is used to resolve the passed readPreference value. Some examples for the possible values are nearest primary or secondary etc.
+| tailTrackCollection | common |  | String | Collection where tail tracking information will be persisted. If not specified link MongoDbTailTrackingConfigDEFAULT_COLLECTION will be used by default.
+| tailTrackDb | common |  | String | Indicates what database the tail tracking mechanism will persist to. If not specified the current database will be picked by default. Dynamicity will not be taken into account even if enabled i.e. the tail tracking database will not vary past endpoint initialisation.
+| tailTrackField | common |  | String | Field where the last tracked value will be placed. If not specified link MongoDbTailTrackingConfigDEFAULT_FIELD will be used by default.
+| tailTrackIncreasingField | common |  | String | Correlation field in the incoming record which is of increasing nature and will be used to position the tailing cursor every time it is generated. The cursor will be (re)created with a query of type: tailTrackIncreasingField lastValue (possibly recovered from persistent tail tracking). Can be of type Integer Date String etc. NOTE: No support for dot notation at the current time so the field should be at the top level of the document.
+| writeConcern | common |  | WriteConcern | Set the WriteConcern for write operations on MongoDB using the standard ones. Resolved from the fields of the WriteConcern class by calling the link WriteConcernvalueOf(String) method.
+| writeResultAsHeader | common | false | boolean | In write operations it determines whether instead of returning WriteResult as the body of the OUT message we transfer the IN message to the OUT and attach the WriteResult as a header.
+| bridgeErrorHandler | consumer | false | boolean | Allows for bridging the consumer to the Camel routing Error Handler which mean any exceptions occurred while the consumer is trying to pickup incoming messages or the likes will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions that will be logged at WARN/ERROR level and ignored.
+| exceptionHandler | consumer (advanced) |  | ExceptionHandler | To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this options is not in use. By default the consumer will deal with exceptions that will be logged at WARN/ERROR level and ignored.
+| exchangePattern | advanced | InOnly | ExchangePattern | Sets the default exchange pattern when creating an exchange
+| synchronous | advanced | false | boolean | Sets whether synchronous processing should be strictly used or Camel is allowed to use asynchronous processing (if supported).
+|=======================================================================
+// endpoint options: END
+
+
+
+[[MongoDB-ConfigurationofdatabaseinSpringXML]]
+Configuration of database in Spring XML
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The following Spring XML creates a bean defining the connection to a
+MongoDB instance.
+
+[source,xml]
+----------------------------------------------------------------------------------------------------------------------------------
+<?xml version="1.0" encoding="UTF-8"?>
+<beans xmlns="http://www.springframework.org/schema/beans"
+    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+    xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd">
+    <bean id="mongoBean" class="com.mongodb.Mongo">
+        <constructor-arg name="host" value="${mongodb.host}" />
+        <constructor-arg name="port" value="${mongodb.port}" />
+    </bean>
+</beans>
+----------------------------------------------------------------------------------------------------------------------------------
+
+[[MongoDB-Sampleroute]]
+Sample route
+^^^^^^^^^^^^
+
+The following route defined in Spring XML executes the operation
+link:mongodb.html[*getDbStats*] against a database.
+
+*Get DB stats for specified collection*
+
+[source,xml]
+---------------------------------------------------------------------------------------------------------------------------
+<route>
+  <from uri="direct:start" />
+  <!-- using bean 'mongoBean' defined above -->
+  <to uri="mongodb:mongoBean?database=${mongodb.database}&amp;collection=${mongodb.collection}&amp;operation=getDbStats" />
+  <to uri="direct:result" />
+</route>
+---------------------------------------------------------------------------------------------------------------------------
+
+[[MongoDB-MongoDBoperations-producerendpoints]]
+MongoDB operations - producer endpoints
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+[[MongoDB-Queryoperations]]
+Query operations
+^^^^^^^^^^^^^^^^
+
+[[MongoDB-findById]]
+findById
+++++++++
+
+This operation retrieves only one element from the collection whose _id
+field matches the content of the IN message body. The incoming object
+can be anything that has an equivalent to a BSON type. See
+http://bsonspec.org/#/specification[http://bsonspec.org/#/specification]
+and
+http://www.mongodb.org/display/DOCS/Java+Types[http://www.mongodb.org/display/DOCS/Java+Types].
+
+[source,java]
+------------------------------------------------------------------------------
+from("direct:findById")
+    .to("mongodb:myDb?database=flights&collection=tickets&operation=findById")
+    .to("mock:resultFindById");
+------------------------------------------------------------------------------
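+
+A minimal usage sketch from a `ProducerTemplate` (the sample `_id` value
+"ticket-4711" is illustrative only):
+
+[source,java]
+------------------------------------------------------------------------------
+// route: from("direct:findById").to("mongodb:myDb?database=flights&collection=tickets&operation=findById");
+// the IN message body is used as the _id to look up
+Object result = template.requestBody("direct:findById", "ticket-4711");
+------------------------------------------------------------------------------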
+
+
+TIP: *Supports fields filter*. This operation supports specifying a fields filter. See
+link:mongodb.html[Specifying a fields filter].
+
+[[MongoDB-findOneByQuery]]
+findOneByQuery
+++++++++++++++
+
+Use this operation to retrieve just one element from the collection that
+matches a MongoDB query. *The query object is extracted from the IN
+message body*, i.e. it should be of type `DBObject` or convertible to
+`DBObject`. It can be a JSON String or a HashMap. See
+link:mongodb.html[#Type conversions] for more info.
+
+Example with no query (returns any object of the collection):
+
+[source,java]
+------------------------------------------------------------------------------------
+from("direct:findOneByQuery")
+    .to("mongodb:myDb?database=flights&collection=tickets&operation=findOneByQuery")
+    .to("mock:resultFindOneByQuery");
+------------------------------------------------------------------------------------
+
+Example with a query (returns one matching result):
+
+[source,java]
+------------------------------------------------------------------------------------
+from("direct:findOneByQuery")
+    .setBody().constant("{ \"name\": \"Raul Kripalani\" }")
+    .to("mongodb:myDb?database=flights&collection=tickets&operation=findOneByQuery")
+    .to("mock:resultFindOneByQuery");
+------------------------------------------------------------------------------------
+
+TIP: *Supports fields filter*. This operation supports specifying a fields filter. See
+link:mongodb.html[Specifying a fields filter].
+
+[[MongoDB-findAll]]
+findAll
++++++++
+
+The `findAll` operation returns all documents matching a query, or, if no
+query is supplied at all, every document contained in the collection.
+*The query object is extracted from the IN message body*, i.e.
+it should be of type `DBObject` or convertible to `DBObject`. It can be
+a JSON String or a HashMap. See link:mongodb.html[#Type conversions] for
+more info.
+
+Example with no query (returns all objects in the collection):
+
+[source,java]
+-----------------------------------------------------------------------------
+from("direct:findAll")
+    .to("mongodb:myDb?database=flights&collection=tickets&operation=findAll")
+    .to("mock:resultFindAll");
+-----------------------------------------------------------------------------
+
+Example with a query (returns all matching results):
+
+[source,java]
+-----------------------------------------------------------------------------
+from("direct:findAll")
+    .setBody().constant("{ \"name\": \"Raul Kripalani\" }")
+    .to("mongodb:myDb?database=flights&collection=tickets&operation=findAll")
+    .to("mock:resultFindAll");
+-----------------------------------------------------------------------------
+
+Paging and efficient retrieval is supported via the following headers:
+
+[width="100%",cols="10%,10%,10%,70%",options="header",]
+|=======================================================================
+|Header key |Quick constant |Description (extracted from MongoDB API doc) |Expected type
+
+|`CamelMongoDbNumToSkip` |`MongoDbConstants.NUM_TO_SKIP` |Discards a given number of elements at the beginning of the cursor. |int/Integer
+
+|`CamelMongoDbLimit` |`MongoDbConstants.LIMIT` |Limits the number of elements returned. |int/Integer
+
+|`CamelMongoDbBatchSize` |`MongoDbConstants.BATCH_SIZE` |Limits the number of elements returned in one batch. A cursor typically
+fetches a batch of result objects and stores them locally. If batchSize
+is positive, it represents the size of each batch of objects retrieved.
+It can be adjusted to optimize performance and limit data transfer. If
+batchSize is negative, it limits the number of objects returned to those
+that fit within the maximum batch size limit (usually 4MB), and the
+cursor is closed. For example, if batchSize is -10, the server will
+return a maximum of 10 documents, and as many as can fit in 4MB, then
+close the cursor. Note that this feature is different from limit() in
+that documents must fit within a maximum size, and it removes the need
+to send a request to close the cursor server-side. The batch size can be
+changed even after a cursor is iterated, in which case the setting will
+apply on the next batch retrieval. |int/Integer
+|=======================================================================
+
+Additionally, you can set a sortBy criteria by putting the relevant
+`DBObject` describing your sorting in the `CamelMongoDbSortBy` header,
+quick constant: `MongoDbConstants.SORT_BY`.
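+
+As a sketch of how these headers can be combined from a `ProducerTemplate`
+(the route and values are illustrative only):
+
+[source,java]
+------------------------------------------------------------------------------------------------
+// route: from("direct:findAll").to("mongodb:myDb?database=flights&collection=tickets&operation=findAll");
+Map<String, Object> headers = new HashMap<String, Object>();
+headers.put(MongoDbConstants.NUM_TO_SKIP, 10);                                 // skip the first 10 results
+headers.put(MongoDbConstants.LIMIT, 10);                                       // return at most 10 results
+headers.put(MongoDbConstants.SORT_BY, new BasicDBObject("departureTime", 1));  // ascending sort
+Object result = template.requestBodyAndHeaders("direct:findAll", new BasicDBObject(), headers);
+------------------------------------------------------------------------------------------------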
+
+The `findAll` operation will also return the following OUT headers to
+enable you to iterate through result pages if you are using paging:
+
+[width="100%",cols="10%,10%,10%,70%",options="header",]
+|=======================================================================
+|Header key |Quick constant |Description (extracted from MongoDB API doc) |Data type
+
+|`CamelMongoDbResultTotalSize` |`MongoDbConstants.RESULT_TOTAL_SIZE` |Number of objects matching the query. This does not take limit/skip into
+consideration. |int/Integer
+
+|`CamelMongoDbResultPageSize` |`MongoDbConstants.RESULT_PAGE_SIZE` |Number of objects in the current result page, i.e. taking limit/skip
+into consideration. |int/Integer
+|=======================================================================
+
+TIP: *Supports fields filter*. This operation supports specifying a fields filter. See
+link:mongodb.html[Specifying a fields filter].
+
+[[MongoDB-count]]
+count
++++++
+
+Returns the total number of objects in a collection, as a Long in the
+OUT message body. +
+The following example will count the number of records in the
+"dynamicCollectionName" collection. Notice how dynamicity is enabled;
+as a result, the operation will not run against the collection named in
+the endpoint URI, but against the "dynamicCollectionName" collection.
+
+[source,java]
+------------------------------------------------------------------------------------------------------------------------------------
+// from("direct:count").to("mongodb:myDb?database=tickets&collection=flights&operation=count&dynamicity=true");
+Long result = template.requestBodyAndHeader("direct:count", "irrelevantBody", MongoDbConstants.COLLECTION, "dynamicCollectionName");
+assertTrue("Result is not of type Long", result instanceof Long);
+------------------------------------------------------------------------------------------------------------------------------------
+
+From *Camel 2.14* onwards you can provide a `com.mongodb.DBObject` object
+in the message body as a query, and the operation will return the number
+of documents matching the criteria.
+
+[source,java]
+------------------------------------------------------------------------------------------------------------------------
+DBObject query = ...
+Long count = template.requestBodyAndHeader("direct:count", query, MongoDbConstants.COLLECTION, "dynamicCollectionName");
+------------------------------------------------------------------------------------------------------------------------
+
+[[MongoDB-Specifyingafieldsfilter]]
+Specifying a fields filter
+++++++++++++++++++++++++++
+
+Query operations will, by default, return the matching objects in their
+entirety (with all their fields). If your documents are large and you
+only require retrieving a subset of their fields, you can specify a
+field filter in all query operations, simply by setting the relevant
+`DBObject` (or type convertible to `DBObject`, such as a JSON String,
+Map, etc.) on the `CamelMongoDbFieldsFilter` header, constant shortcut:
+`MongoDbConstants.FIELDS_FILTER`.
+
+Here is an example that uses MongoDB's BasicDBObjectBuilder to simplify
+the creation of DBObjects. It retrieves all fields except `_id` and
+`boringField`:
+
+[source,java]
+----------------------------------------------------------------------------------------------------------------------------
+// route: from("direct:findAll").to("mongodb:myDb?database=flights&collection=tickets&operation=findAll")
+DBObject fieldFilter = BasicDBObjectBuilder.start().add("_id", 0).add("boringField", 0).get();
+Object result = template.requestBodyAndHeader("direct:findAll", (Object) null, MongoDbConstants.FIELDS_FILTER, fieldFilter);
+----------------------------------------------------------------------------------------------------------------------------
+
+[[MongoDB-Create/updateoperations]]
+Create/update operations
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+[[MongoDB-insert]]
+insert
+++++++
+
+Inserts a new object into the MongoDB collection, taken from the IN
+message body. Type conversion is attempted to turn it into a `DBObject`
+or a `List`. +
+ Two modes are supported: single insert and multiple insert. For a
+multiple insert, the endpoint expects a List, Array or Collection of
+objects of any type, as long as they are - or can be converted to -
+`DBObject`. All objects are inserted at once. The endpoint will
+intelligently decide which backend operation to invoke (single or
+multiple insert) depending on the input.
+
+Example:
+
+[source,java]
+-----------------------------------------------------------------------------
+from("direct:insert")
+    .to("mongodb:myDb?database=flights&collection=tickets&operation=insert");
+-----------------------------------------------------------------------------
+
+The operation will return a WriteResult, and depending on the
+`WriteConcern` or the value of the `invokeGetLastError` option,
+`getLastError()` would have been called already or not. If you want to
+access the ultimate result of the write operation, you need to retrieve
+the `CommandResult` by calling `getLastError()` or
+`getCachedLastError()` on the `WriteResult`. Then you can verify the
+result by calling `CommandResult.ok()`,
+`CommandResult.getErrorMessage()` and/or `CommandResult.getException()`.
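+
+As a hedged sketch of that verification, using the MongoDB Java driver 2.x
+methods named above (the route and document are illustrative only):
+
+[source,java]
+---------------------------------------------------------------------------------------------------------
+// route: from("direct:insert").to("mongodb:myDb?database=flights&collection=tickets&operation=insert");
+WriteResult writeResult = template.requestBody("direct:insert", new BasicDBObject("ticketId", 1), WriteResult.class);
+CommandResult commandResult = writeResult.getLastError();
+if (!commandResult.ok()) {
+    // the write failed; getException() returns the corresponding MongoException
+    throw commandResult.getException();
+}
+---------------------------------------------------------------------------------------------------------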
+
+Note that the new object's `_id` must be unique in the collection. If
+you don't specify the value, MongoDB will automatically generate one for
+you. But if you do specify it and it is not unique, the insert operation
+will fail (and for Camel to notice, you will need to enable
+invokeGetLastError or set a WriteConcern that waits for the write
+result).
+
+This is not a limitation of the component, but it is how things work in
+MongoDB for higher throughput. If you are using a custom `_id`, you are
+expected to ensure at the application level that it is unique (and this
+is good practice too).
+
+Since Camel *2.15*: the OID(s) of the inserted record(s) are stored in
+the message header under the `CamelMongoOid` key (`MongoDbConstants.OID`
+constant). The value stored is `org.bson.types.ObjectId` for a single
+insert, or `java.util.List<org.bson.types.ObjectId>` if multiple records
+have been inserted.
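+
+A sketch of a multiple insert (the route and documents are illustrative
+only); after the call, the `CamelMongoOid` header of the resulting exchange
+holds a `List<org.bson.types.ObjectId>` with the generated ids:
+
+[source,java]
+---------------------------------------------------------------------------------------------
+// route: from("direct:insert").to("mongodb:myDb?database=flights&collection=tickets&operation=insert");
+List<DBObject> documents = new ArrayList<DBObject>();
+documents.add(new BasicDBObject("ticketId", 1));
+documents.add(new BasicDBObject("ticketId", 2));
+// both documents are inserted in a single multiple-insert operation
+Object result = template.requestBody("direct:insert", documents);
+---------------------------------------------------------------------------------------------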
+
+[[MongoDB-save]]
+save
+++++
+
+The save operation is equivalent to an _upsert_ (UPdate, inSERT)
+operation, where the record will be updated, and if it doesn't exist, it
+will be inserted, all in one atomic operation. MongoDB will perform the
+matching based on the _id field.
+
+Beware that in case of an update, the object is replaced entirely and
+the usage of
+http://www.mongodb.org/display/DOCS/Updating#Updating-ModifierOperations[MongoDB's
+$modifiers] is not permitted. Therefore, if you want to manipulate the
+object if it already exists, you have two options:
+
+1.  perform a query to retrieve the entire object first along with all
+its fields (may not be efficient), alter it inside Camel and then save
+it.
+2.  use the update operation with
+http://www.mongodb.org/display/DOCS/Updating#Updating-ModifierOperations[$modifiers],
+which will execute the update at the server-side instead. You can enable
+the upsert flag, in which case if an insert is required, MongoDB will
+apply the $modifiers to the filter query object and insert the result.
+
+For example:
+
+[source,java]
+---------------------------------------------------------------------------
+from("direct:insert")
+    .to("mongodb:myDb?database=flights&collection=tickets&operation=save");
+---------------------------------------------------------------------------
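+
+A usage sketch (the `_id` value is illustrative only): sending a document
+whose `_id` already exists replaces it, while an unknown `_id` results in
+an insert:
+
+[source,java]
+---------------------------------------------------------------------------------------------
+// route: from("direct:save").to("mongodb:myDb?database=flights&collection=tickets&operation=save");
+DBObject ticket = new BasicDBObject("_id", "ticket-4711").append("status", "cancelled");
+Object result = template.requestBody("direct:save", ticket);
+---------------------------------------------------------------------------------------------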
+
+[[MongoDB-update]]
+update
+++++++
+
+Update one or multiple records on the collection. Requires a
+List<DBObject> as the IN message body containing exactly 2 elements:
+
+* Element 1 (index 0) => filter query => determines what objects will be
+affected, same as a typical query object
+* Element 2 (index 1) => update rules => how matched objects will be
+updated. All
+http://www.mongodb.org/display/DOCS/Updating#Updating-ModifierOperations[modifier
+operations] from MongoDB are supported.
+
+NOTE: *Multiupdates*. By default, MongoDB will only update 1 object even if multiple objects
+match the filter query. To instruct MongoDB to update *all* matching
+records, set the `CamelMongoDbMultiUpdate` IN message header to `true`.
+
+A header with key `CamelMongoDbRecordsAffected` will be returned
+(`MongoDbConstants.RECORDS_AFFECTED` constant) with the number of
+records updated (copied from `WriteResult.getN()`).
+
+Supports the following IN message headers:
+
+[width="100%",cols="10%,10%,10%,70%",options="header",]
+|=======================================================================
+|Header key |Quick constant |Description (extracted from MongoDB API doc) |Expected type
+
+|`CamelMongoDbMultiUpdate` |`MongoDbConstants.MULTIUPDATE` |If the update should be applied to all objects matching. See
+http://www.mongodb.org/display/DOCS/Atomic+Operations[http://www.mongodb.org/display/DOCS/Atomic+Operations] |boolean/Boolean
+
+|`CamelMongoDbUpsert` |`MongoDbConstants.UPSERT` |If the database should create the element if it does not exist |boolean/Boolean
+|=======================================================================
+
+For example, the following will update *all* records whose filterField
+field equals true by setting the value of the "scientist" field to
+"Darwin":
+
+[source,java]
+------------------------------------------------------------------------------------------------------------------------------------------
+// route: from("direct:update").to("mongodb:myDb?database=science&collection=notableScientists&operation=update");
+DBObject filterField = new BasicDBObject("filterField", true);
+DBObject updateObj = new BasicDBObject("$set", new BasicDBObject("scientist", "Darwin"));
+Object result = template.requestBodyAndHeader("direct:update", new Object[] {filterField, updateObj}, MongoDbConstants.MULTIUPDATE, true);
+------------------------------------------------------------------------------------------------------------------------------------------
+
+[[MongoDB-Deleteoperations]]
+Delete operations
+^^^^^^^^^^^^^^^^^
+
+[[MongoDB-remove]]
+remove
+++++++
+
+Remove matching records from the collection. The IN message body will
+act as the removal filter query, and is expected to be of type
+`DBObject` or a type convertible to it. +
+ The following example will remove all objects whose field
+'conditionField' equals true, in the science database, notableScientists
+collection:
+
+[source,java]
+------------------------------------------------------------------------------------------------------------------
+// route: from("direct:remove").to("mongodb:myDb?database=science&collection=notableScientists&operation=remove");
+DBObject conditionField = new BasicDBObject("conditionField", true);
+Object result = template.requestBody("direct:remove", conditionField);
+------------------------------------------------------------------------------------------------------------------
+
+A header with key `CamelMongoDbRecordsAffected` is returned
+(`MongoDbConstants.RECORDS_AFFECTED` constant) with type `int`,
+containing the number of records deleted (copied from
+`WriteResult.getN()`).
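+
+A sketch of reading that header from the reply exchange (assuming the same
+route as above; in this Camel 2.x idiom the reply is read from the OUT
+message):
+
+[source,java]
+------------------------------------------------------------------------------------------------------
+Exchange reply = template.request("direct:remove", new Processor() {
+    public void process(Exchange exchange) throws Exception {
+        exchange.getIn().setBody(new BasicDBObject("conditionField", true));
+    }
+});
+int recordsDeleted = reply.getOut().getHeader(MongoDbConstants.RECORDS_AFFECTED, Integer.class);
+------------------------------------------------------------------------------------------------------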
+
+[[MongoDB-Otheroperations]]
+Other operations
+^^^^^^^^^^^^^^^^
+
+[[MongoDB-aggregate]]
+aggregate
++++++++++
+
+*Available as of Camel 2.14*
+
+Performs an aggregation with the given pipeline contained in the body.
+*Aggregations could be long and heavy operations. Use with care.*
+
+[source,java]
+----------------------------------------------------------------------------------------------------------------------------------------------------------------------
+// route: from("direct:aggregate").to("mongodb:myDb?database=science&collection=notableScientists&operation=aggregate");
+from("direct:aggregate")
+    .setBody().constant("[{ $match : {$or : [{\"scientist\" : \"Darwin\"},{\"scientist\" : \"Einstein\"}]}},{ $group: { _id: \"$scientist\", count: { $sum: 1 }} } ]")
+    .to("mongodb:myDb?database=science&collection=notableScientists&operation=aggregate")
+    .to("mock:resultAggregate");
+----------------------------------------------------------------------------------------------------------------------------------------------------------------------
+
+[[MongoDB-getDbStats]]
+getDbStats
+++++++++++
+
+Equivalent of running the `db.stats()` command in the MongoDB shell,
+which displays useful statistics about the database. +
+ For example:
+
+[source,java]
+-------------------------------------
+> db.stats();
+{
+    "db" : "test",
+    "collections" : 7,
+    "objects" : 719,
+    "avgObjSize" : 59.73296244784423,
+    "dataSize" : 42948,
+    "storageSize" : 1000058880,
+    "numExtents" : 9,
+    "indexes" : 4,
+    "indexSize" : 32704,
+    "fileSize" : 1275068416,
+    "nsSizeMB" : 16,
+    "ok" : 1
+}
+-------------------------------------
+
+Usage example:
+
+[source,java]
+---------------------------------------------------------------------------------------------------------
+// from("direct:getDbStats").to("mongodb:myDb?database=flights&collection=tickets&operation=getDbStats");
+Object result = template.requestBody("direct:getDbStats", "irrelevantBody");
+assertTrue("Result is not of type DBObject", result instanceof DBObject);
+---------------------------------------------------------------------------------------------------------
+
+The operation will return a data structure similar to the one displayed
+in the shell, in the form of a `DBObject` in the OUT message body.
+
+[[MongoDB-getColStats]]
+getColStats
++++++++++++
+
+Equivalent of running the `db.collection.stats()` command in the MongoDB
+shell, which displays useful statistics about the collection. +
+ For example:
+
+[source,java]
+-----------------------------
+> db.camelTest.stats();
+{
+    "ns" : "test.camelTest",
+    "count" : 100,
+    "size" : 5792,
+    "avgObjSize" : 57.92,
+    "storageSize" : 20480,
+    "numExtents" : 2,
+    "nindexes" : 1,
+    "lastExtentSize" : 16384,
+    "paddingFactor" : 1,
+    "flags" : 1,
+    "totalIndexSize" : 8176,
+    "indexSizes" : {
+        "_id_" : 8176
+    },
+    "ok" : 1
+}
+-----------------------------
+
+Usage example:
+
+[source,java]
+-----------------------------------------------------------------------------------------------------------
+// from("direct:getColStats").to("mongodb:myDb?database=flights&collection=tickets&operation=getColStats");
+Object result = template.requestBody("direct:getColStats", "irrelevantBody");
+assertTrue("Result is not of type DBObject", result instanceof DBObject);
+-----------------------------------------------------------------------------------------------------------
+
+The operation will return a data structure similar to the one displayed
+in the shell, in the form of a `DBObject` in the OUT message body.
+
+[[MongoDB-command]]
+command
++++++++
+
+*Available as of Camel 2.15*
+
+Runs the body as a command on the database. Useful for admin operations,
+such as getting host information, or checking replication or sharding
+status.
+
+The collection parameter is not used for this operation.
+
+[source,java]
+--------------------------------------------------------------------------------
+// route: from("command").to("mongodb:myDb?database=science&operation=command");
+DBObject commandBody = new BasicDBObject("hostInfo", "1");
+Object result = template.requestBody("direct:command", commandBody);
+--------------------------------------------------------------------------------
+
+[[MongoDB-Dynamicoperations]]
+Dynamic operations
+^^^^^^^^^^^^^^^^^^
+
+An Exchange can override the endpoint's fixed operation by setting the
+`CamelMongoDbOperation` header, defined by the
+`MongoDbConstants.OPERATION_HEADER` constant. +
+ The values supported are determined by the MongoDbOperation enumeration
+and match the accepted values for the `operation` parameter on the
+endpoint URI.
+
+For example:
+
+[source,java]
+-----------------------------------------------------------------------------------------------------------------------------
+// from("direct:insert").to("mongodb:myDb?database=flights&collection=tickets&operation=insert");
+Object result = template.requestBodyAndHeader("direct:insert", "irrelevantBody", MongoDbConstants.OPERATION_HEADER, "count");
+assertTrue("Result is not of type Long", result instanceof Long);
+-----------------------------------------------------------------------------------------------------------------------------
+
+[[MongoDB-TailableCursorConsumer]]
+Tailable Cursor Consumer
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+MongoDB offers a mechanism to instantaneously consume ongoing data from
+a collection, by keeping the cursor open just like the `tail -f` command
+of *nix systems. This mechanism is significantly more efficient than a
+scheduled poll, due to the fact that the server pushes new data to the
+client as it becomes available, rather than making the client ping back
+at scheduled intervals to fetch new data. It also reduces otherwise
+redundant network traffic.
+
+There is only one prerequisite for using tailable cursors: the collection
+must be a "capped collection", meaning that it will only hold N objects, and
+when the limit is reached, MongoDB flushes old objects in the same order
+they were originally inserted. For more information, please refer to:
+http://www.mongodb.org/display/DOCS/Tailable+Cursors[http://www.mongodb.org/display/DOCS/Tailable+Cursors].
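+
+For illustration, a capped collection could be created up front with the
+MongoDB Java driver 2.x API (the database name, collection name and size
+below are examples only):
+
+[source,java]
+--------------------------------------------------------------------------------------------------
+// 'mongo' is an existing com.mongodb.Mongo instance, e.g. the connection bean configured earlier
+DB db = mongo.getDB("flights");
+// create a capped collection of roughly 1MB before starting the tailable consumer
+db.createCollection("cancellations", new BasicDBObject("capped", true).append("size", 1048576));
+--------------------------------------------------------------------------------------------------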
+
+The Camel MongoDB component implements a tailable cursor consumer,
+making this feature available for you to use in your Camel routes. As
+new objects are inserted, MongoDB will push them as DBObjects in natural
+order to your tailable cursor consumer, which will transform them into
+Exchanges and trigger your route logic.
+
+[[MongoDB-Howthetailablecursorconsumerworks]]
+How the tailable cursor consumer works
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+To turn a cursor into a tailable cursor, a few special flags are to be
+signalled to MongoDB when first generating the cursor. Once created, the
+cursor will then stay open and will block upon calling the
+`DBCursor.next()` method until new data arrives. However, the MongoDB
+server reserves itself the right to kill your cursor if new data doesn't
+appear after an indeterminate period. If you want to continue consuming
+new data, you have to regenerate the cursor, and to do so you will have
+to remember the position where you left off, or else you will start
+consuming from the top again.
+
+The Camel MongoDB tailable cursor consumer takes care of all these tasks
+for you. You just need to provide the key of some field in your data
+that is of an increasing nature, which will act as a marker to position
+your cursor every time it is regenerated, e.g. a timestamp, a sequential
+ID, etc. It can be of any datatype supported by MongoDB; Dates, Strings
+and Integers are found to work well. We call this mechanism "tail
+tracking" in the context of this component.
+
+The consumer will remember the last value of this field and whenever the
+cursor is to be regenerated, it will run the query with a filter like:
+`increasingField > lastValue`, so that only unread data is consumed.
+
+*Setting the increasing field:* Set the key of the increasing field via
+the endpoint URI `tailTrackIncreasingField` option. In Camel 2.10, it
+must be a top-level field in your data, as nested navigation for this
+field is not yet supported. That is, the "timestamp" field is okay, but
+"nested.timestamp" will not work. Please open a ticket in the Camel JIRA
+if you do require support for nested increasing fields.
+
+*Cursor regeneration delay:* One thing to note is that if new data is
+not already available upon initialisation, MongoDB will kill the cursor
+instantly. Since we don't want to overwhelm the server in this case, a
+`cursorRegenerationDelay` option has been introduced (with a default
+value of 1000ms), which you can modify to suit your needs.
+
+An example:
+
+[source,java]
+-----------------------------------------------------------------------------------------------------
+from("mongodb:myDb?database=flights&collection=cancellations&tailTrackIncreasingField=departureTime")
+    .id("tailableCursorConsumer1")
+    .autoStartup(false)
+    .to("mock:test");
+-----------------------------------------------------------------------------------------------------
+
+The above route will consume from the "flights.cancellations" capped
+collection, using "departureTime" as the increasing field, with a
+default regeneration cursor delay of 1000ms.
+
+[[MongoDB-Persistenttailtracking]]
+Persistent tail tracking
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+Standard tail tracking is volatile and the last value is only kept in
+memory. However, in practice you will need to restart your Camel
+container every now and then, but your last value would then be lost and
+your tailable cursor consumer would start consuming from the top again,
+very likely sending duplicate records into your route.
+
+To overcome this situation, you can enable the *persistent tail
+tracking* feature to keep track of the last consumed increasing value in
+a special collection inside your MongoDB database too. When the consumer
+initialises again, it will restore the last tracked value and continue
+as if nothing happened.
+
+The last read value is persisted on two occasions: every time the cursor
+is regenerated and when the consumer shuts down. We may consider
+persisting at regular intervals too in the future (flush every 5
+seconds) for added robustness if the demand is there. To request this
+feature, please open a ticket in the Camel JIRA.
+
+[[MongoDB-Enablingpersistenttailtracking]]
+Enabling persistent tail tracking
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+To enable this function, set at least the following options on the
+endpoint URI:
+
+* `persistentTailTracking` option to `true`
+* `persistentId` option to a unique identifier for this consumer, so
+that the same collection can be reused across many consumers
+
+Additionally, you can set the `tailTrackDb`, `tailTrackCollection` and
+`tailTrackField` options to customise where the runtime information will
+be stored. Refer to the endpoint options table at the top of this page
+for descriptions of each option.
+
+For example, the following route will consume from the
+"flights.cancellations" capped collection, using "departureTime" as the
+increasing field, with a default cursor regeneration delay of 1000ms,
+with persistent tail tracking turned on, and persisting under the
+"cancellationsTracker" id on the "flights.camelTailTracking", storing
+the last processed value under the "lastTrackingValue" field
+(`camelTailTracking` and `lastTrackingValue` are defaults).
+
+[source,java]
+-----------------------------------------------------------------------------------------------------------------------------------
+from("mongodb:myDb?database=flights&collection=cancellations&tailTrackIncreasingField=departureTime&persistentTailTracking=true" + 
+     "&persistentId=cancellationsTracker")
+    .id("tailableCursorConsumer2")
+    .autoStartup(false)
+    .to("mock:test");
+-----------------------------------------------------------------------------------------------------------------------------------
+
+Below is another example identical to the one above, but where the
+persistent tail tracking runtime information will be stored under the
+"trackers.camelTrackers" collection, in the "lastProcessedDepartureTime"
+field:
+
+[source,java]
+-----------------------------------------------------------------------------------------------------------------------------------
+from("mongodb:myDb?database=flights&collection=cancellations&tailTrackIncreasingField=departureTime&persistentTailTracking=true" + 
+     "&persistentId=cancellationsTracker&tailTrackDb=trackers&tailTrackCollection=camelTrackers" + 
+     "&tailTrackField=lastProcessedDepartureTime")
+    .id("tailableCursorConsumer3")
+    .autoStartup(false)
+    .to("mock:test");
+-----------------------------------------------------------------------------------------------------------------------------------
+
+[[MongoDB-Typeconversions]]
+Type conversions
+~~~~~~~~~~~~~~~~
+
+The `MongoDbBasicConverters` type converter included with the
+camel-mongodb component provides the following conversions:
+
+[width="100%",cols="10%,10%,10%,70%",options="header",]
+|=======================================================================
+|Name |From type |To type |How?
+
+|fromMapToDBObject |`Map` |`DBObject` |constructs a new `BasicDBObject` via the `new BasicDBObject(Map m)`
+constructor
+
+|fromBasicDBObjectToMap |`BasicDBObject` |`Map` |`BasicDBObject` already implements `Map`
+
+|fromStringToDBObject |`String` |`DBObject` |uses `com.mongodb.util.JSON.parse(String s)`
+
+|fromAnyObjectToDBObject |`Object` |`DBObject` |uses the http://jackson.codehaus.org/[Jackson library] to convert the
+object to a `Map`, which is in turn used to initialise a new
+`BasicDBObject`
+|=======================================================================
+
+This type converter is auto-discovered, so you don't need to configure
+anything manually.
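+
+As an illustration of the `Map` to `DBObject` conversion, a query can be
+passed as a plain `Map` (assuming a route like the first findOneByQuery
+example above, i.e. one that does not set a constant body):
+
+[source,java]
+------------------------------------------------------------------------------------------------------------
+// route: from("direct:findOneByQuery").to("mongodb:myDb?database=flights&collection=tickets&operation=findOneByQuery");
+Map<String, Object> query = new HashMap<String, Object>();
+query.put("name", "Raul Kripalani");
+// the Map body is converted to a DBObject by MongoDbBasicConverters
+Object result = template.requestBody("direct:findOneByQuery", query);
+------------------------------------------------------------------------------------------------------------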
+
+[[MongoDB-Seealso]]
+See also
+~~~~~~~~
+
+* http://www.mongodb.org/[MongoDB website]
+* http://en.wikipedia.org/wiki/NoSQL[NoSQL Wikipedia article]
+* http://api.mongodb.org/java/current/[MongoDB Java driver API docs -
+current version]
+* http://svn.apache.org/viewvc/camel/trunk/components/camel-mongodb/src/test/[Unit tests] for more examples of usage
+

http://git-wip-us.apache.org/repos/asf/camel/blob/38fefbcb/docs/user-manual/en/SUMMARY.md
----------------------------------------------------------------------
diff --git a/docs/user-manual/en/SUMMARY.md b/docs/user-manual/en/SUMMARY.md
index e0164db..5e106cc 100644
--- a/docs/user-manual/en/SUMMARY.md
+++ b/docs/user-manual/en/SUMMARY.md
@@ -195,6 +195,7 @@
     * [Mina](mina.adoc)
     * [Mina2](mina2.adoc)
     * [MLLP](mllp.adoc)
+    * [MongoDB](mongodb.adoc)
     * [Mock](mock.adoc)
     * [NATS](nats.adoc)
     * [Properties](properties.adoc)


[2/2] camel git commit: Upgrade Spring-boot to version 1.3.5.RELEASE

Posted by ac...@apache.org.
Upgrade Spring-boot to version 1.3.5.RELEASE


Project: http://git-wip-us.apache.org/repos/asf/camel/repo
Commit: http://git-wip-us.apache.org/repos/asf/camel/commit/3173d96c
Tree: http://git-wip-us.apache.org/repos/asf/camel/tree/3173d96c
Diff: http://git-wip-us.apache.org/repos/asf/camel/diff/3173d96c

Branch: refs/heads/master
Commit: 3173d96ca480b1ca1c9a21c0a01bf5c4d8f14eca
Parents: 38fefbc
Author: Andrea Cosentino <an...@gmail.com>
Authored: Tue May 10 09:24:54 2016 +0200
Committer: Andrea Cosentino <an...@gmail.com>
Committed: Tue May 10 09:24:54 2016 +0200

----------------------------------------------------------------------
 parent/pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/camel/blob/3173d96c/parent/pom.xml
----------------------------------------------------------------------
diff --git a/parent/pom.xml b/parent/pom.xml
index a285391..54a312f 100644
--- a/parent/pom.xml
+++ b/parent/pom.xml
@@ -509,7 +509,7 @@
     <splunk-version>1.5.0.0_1</splunk-version>
     <spring-batch-version>3.0.7.RELEASE</spring-batch-version>
     <spring-batch-bundle-version>3.0.7.RELEASE_1</spring-batch-bundle-version>
-    <spring-boot-version>1.3.4.RELEASE</spring-boot-version>
+    <spring-boot-version>1.3.5.RELEASE</spring-boot-version>
     <spring-castor-bundle-version>1.2.0</spring-castor-bundle-version>
     <spring-data-commons-version>1.6.5.RELEASE</spring-data-commons-version>
     <spring-data-redis-version>1.6.4.RELEASE</spring-data-redis-version>