Posted to commits@druid.apache.org by fj...@apache.org on 2019/03/01 03:44:50 UTC

[incubator-druid] branch master updated: Fix supported file formats for Hadoop vs Native batch doc (#7069)

This is an automated email from the ASF dual-hosted git repository.

fjy pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-druid.git


The following commit(s) were added to refs/heads/master by this push:
     new 45f12de  Fix supported file formats for Hadoop vs Native batch doc (#7069)
45f12de is described below

commit 45f12de9ad5113f614a9b0f738d341a2d4fd5152
Author: Jihoon Son <ji...@apache.org>
AuthorDate: Thu Feb 28 19:44:45 2019 -0800

    Fix supported file formats for Hadoop vs Native batch doc (#7069)
    
    * Fix supported file formats
    
    * address comment
---
 docs/content/ingestion/hadoop-vs-native-batch.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/content/ingestion/hadoop-vs-native-batch.md b/docs/content/ingestion/hadoop-vs-native-batch.md
index ce2c97e..89a8e02 100644
--- a/docs/content/ingestion/hadoop-vs-native-batch.md
+++ b/docs/content/ingestion/hadoop-vs-native-batch.md
@@ -38,6 +38,6 @@ ingestion method.
 | Supported [rollup modes](http://druid.io/docs/latest/ingestion/index.html#roll-up-modes) | Perfect rollup | Best-effort rollup | Both perfect and best-effort rollup |
 | Supported partitioning methods | [Both Hash-based and range partitioning](http://druid.io/docs/latest/ingestion/hadoop.html#partitioning-specification) | N/A | Hash-based partitioning (when `forceGuaranteedRollup` = true) |
 | Supported input locations | All locations accessible via HDFS client or Druid dataSource | All implemented [firehoses](./firehose.html) | All implemented [firehoses](./firehose.html) |
-| Supported file formats | All implemented Hadoop InputFormats | Currently only text file format (CSV, TSV, JSON) | Currently only text file format (CSV, TSV, JSON) |
+| Supported file formats | All implemented Hadoop InputFormats | Currently text file formats (CSV, TSV, JSON) by default. Additional formats can be added though a [custom extension](../development/modules.html) implementing [`FiniteFirehoseFactory`](https://github.com/apache/incubator-druid/blob/master/core/src/main/java/org/apache/druid/data/input/FiniteFirehoseFactory.java) | Currently text file formats (CSV, TSV, JSON) by default. Additional formats can be added though a [custom extension](../development/modules.html) implementing [`FiniteFirehoseFactory`](https://github.com/apache/incubator-druid/blob/master/core/src/main/java/org/apache/druid/data/input/FiniteFirehoseFactory.java) |
 | Saving parse exceptions in ingestion report | Currently not supported | Currently not supported | Supported |
 | Custom segment version | Supported, but this is NOT recommended | N/A | N/A |
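
For reference, the new table cell points extension authors at the `FiniteFirehoseFactory` interface linked above. The following is a minimal, hypothetical sketch of what such a custom-format extension could look like. The method shapes (`isSplittable`, `getSplits`, `getNumSplits`, `withSplit`, `connect`) are paraphrased from the interface as it existed around this commit and may differ in other Druid versions; `MyFormatFirehoseFactory`, the package name, and the ".myformat" input are made-up names used only for illustration.

// Hypothetical sketch of a custom extension implementing FiniteFirehoseFactory
// so native batch tasks can read a made-up ".myformat" input. Method shapes
// are paraphrased from the interface as of this commit and may not match
// other Druid versions exactly.
package org.example.druid.myformat;

import java.io.File;
import java.net.URI;
import java.util.Collections;
import java.util.List;
import java.util.stream.Stream;

import org.apache.druid.data.input.Firehose;
import org.apache.druid.data.input.FiniteFirehoseFactory;
import org.apache.druid.data.input.InputSplit;
import org.apache.druid.data.input.impl.StringInputRowParser;

public class MyFormatFirehoseFactory
    implements FiniteFirehoseFactory<StringInputRowParser, URI>
{
  private final List<URI> files; // locations of the custom-format inputs

  public MyFormatFirehoseFactory(List<URI> files)
  {
    this.files = files;
  }

  // Report that this factory can be split across parallel subtasks.
  @Override
  public boolean isSplittable()
  {
    return true;
  }

  // One split per input file; each parallel subtask handles one split.
  @Override
  public Stream<InputSplit<URI>> getSplits()
  {
    return files.stream().map(InputSplit::new);
  }

  @Override
  public int getNumSplits()
  {
    return files.size();
  }

  // Narrow the factory down to a single split for one subtask.
  @Override
  public FiniteFirehoseFactory<StringInputRowParser, URI> withSplit(InputSplit<URI> split)
  {
    return new MyFormatFirehoseFactory(Collections.singletonList(split.get()));
  }

  // A real extension would return a Firehose that decodes the custom format
  // and feeds rows to the parser; omitted here because this is only a sketch.
  @Override
  public Firehose connect(StringInputRowParser parser, File temporaryDirectory)
  {
    throw new UnsupportedOperationException("sketch only: open 'files' and parse rows here");
  }
}

A factory along these lines would typically be registered through a DruidModule's Jackson modules (so it can be referenced by a type name in the ioConfig's firehose section) and shipped as an extension loaded via druid.extensions.loadList, as described in the modules doc linked from the new table cell.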

