Posted to github@arrow.apache.org by GitBox <gi...@apache.org> on 2022/10/12 14:23:39 UTC

[GitHub] [arrow] lwhite1 commented on a diff in pull request #14382: ARROW-17789: [Java][Docs] Update Java Dataset documentation with latest changes

lwhite1 commented on code in PR #14382:
URL: https://github.com/apache/arrow/pull/14382#discussion_r993530932


##########
docs/source/java/dataset.rst:
##########
@@ -228,3 +249,50 @@ native objects after using. For example:
     AutoCloseables.close(factory, dataset, scanner);
 
 If the user forgets to close them, the native objects may leak.
+
+Development Guidelines
+======================
+
+* Regarding the note about the ScanOptions batchSize argument: let's read a Parquet file with gzip compression and 3 row groups:
+
+    .. code-block::
+
+       # Configure ScanOptions as:
+       ScanOptions options = new ScanOptions(/*batchSize*/ 32768);
+
+       $ parquet-tools meta data4_3rg_gzip.parquet
+       file schema: schema
+       age:         OPTIONAL INT64 R:0 D:1
+       name:        OPTIONAL BINARY L:STRING R:0 D:1
+       row group 1: RC:4 TS:182 OFFSET:4
+       row group 2: RC:4 TS:190 OFFSET:420
+       row group 3: RC:3 TS:179 OFFSET:838
+
+    In this case, the ScanOptions batchSize argument is set to 32768 rows.
+    Since that is greater than the number of rows in each row group (at most
+    4), each batch produced contains at most 4 rows rather than the requested
+    32768.
+
+* Arrow Java Dataset offers native functionality that consumes native artifacts such as:

Review Comment:
   I don't understand what this section is telling me. It's a pretty big context switch to go from configuring scan options to building jars. Maybe additional description would be helpful.
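
   As an aside on the batchSize example in this hunk: the capping behaviour can be verified with a short standalone program. This is only a sketch; it assumes the `data4_3rg_gzip.parquet` file from the snippet and the `Scanner.scanBatches()` reader API (not shown in the hunk above):

   ```java
   import org.apache.arrow.dataset.file.FileFormat;
   import org.apache.arrow.dataset.file.FileSystemDatasetFactory;
   import org.apache.arrow.dataset.jni.NativeMemoryPool;
   import org.apache.arrow.dataset.scanner.ScanOptions;
   import org.apache.arrow.dataset.scanner.Scanner;
   import org.apache.arrow.dataset.source.Dataset;
   import org.apache.arrow.dataset.source.DatasetFactory;
   import org.apache.arrow.memory.BufferAllocator;
   import org.apache.arrow.memory.RootAllocator;
   import org.apache.arrow.vector.VectorSchemaRoot;
   import org.apache.arrow.vector.ipc.ArrowReader;

   public class BatchSizeDemo {
     public static void main(String[] args) throws Exception {
       // Hypothetical location of the example file; adjust as needed.
       String uri = "file:/opt/data4_3rg_gzip.parquet";
       // Ask for batches of up to 32768 rows, as in the doc example.
       ScanOptions options = new ScanOptions(/*batchSize*/ 32768);
       try (
           BufferAllocator allocator = new RootAllocator();
           DatasetFactory factory = new FileSystemDatasetFactory(
               allocator, NativeMemoryPool.getDefault(), FileFormat.PARQUET, uri);
           Dataset dataset = factory.finish();
           Scanner scanner = dataset.newScan(options);
           ArrowReader reader = scanner.scanBatches()
       ) {
         while (reader.loadNextBatch()) {
           try (VectorSchemaRoot root = reader.getVectorSchemaRoot()) {
             // With row groups of 4, 4 and 3 rows, each batch is capped at
             // the row-group size, so this prints 4, 4, 3 rather than 32768.
             System.out.println("rows in batch: " + root.getRowCount());
           }
         }
       }
     }
   }
   ```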



##########
docs/source/java/dataset.rst:
##########
@@ -32,31 +32,49 @@ is not designed only for querying files but can be extended to serve all
 possible data sources such as from inter-process communication or from other
 network locations, etc.
 
+.. contents::
+
 Getting Started
 ===============
 
+Currently supported file formats are:
+
+- Apache Arrow (`.arrow`)
+- Apache ORC (`.orc`)
+- Apache Parquet (`.parquet`)
+- Comma-Separated Values (`.csv`)
+
 Below is a minimal example of using Dataset to query a Parquet file in Java:
 
 .. code-block:: Java
 
     // read data from file /opt/example.parquet
     String uri = "file:/opt/example.parquet";
-    BufferAllocator allocator = new RootAllocator(Long.MAX_VALUE);
-    DatasetFactory factory = new FileSystemDatasetFactory(allocator,
-        NativeMemoryPool.getDefault(), FileFormat.PARQUET, uri);
-    Dataset dataset = factory.finish();
-    Scanner scanner = dataset.newScan(new ScanOptions(100));
-    List<ArrowRecordBatch> batches = StreamSupport.stream(
-        scanner.scan().spliterator(), false)
-            .flatMap(t -> stream(t.execute()))
-            .collect(Collectors.toList());
-
-    // do something with read record batches, for example:
-    analyzeArrowData(batches);
-
-    // finished the analysis of the data, close all resources:
-    AutoCloseables.close(batches);
-    AutoCloseables.close(factory, dataset, scanner);
+    try (
+        BufferAllocator allocator = new RootAllocator();
+        DatasetFactory datasetFactory = new FileSystemDatasetFactory(
+                allocator, NativeMemoryPool.getDefault(),
+                FileFormat.PARQUET, uri);
+        Dataset dataset = datasetFactory.finish();
+        Scanner scanner = dataset.newScan(options);

Review Comment:
   The variable `options` doesn't seem to be declared or initialized anywhere.
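
   Something like this, placed just before the `try` block, would make the snippet compile on its own (a sketch; the batchSize value is only the example used elsewhere in this PR):

   ```java
   // Illustrative declaration; batchSize is discussed later in the document.
   ScanOptions options = new ScanOptions(/*batchSize*/ 32768);
   ```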



##########
docs/source/java/dataset.rst:
##########
@@ -32,31 +32,49 @@ is not designed only for querying files but can be extended to serve all
 possible data sources such as from inter-process communication or from other
 network locations, etc.
 
+.. contents::
+
 Getting Started
 ===============
 
+Currently supported file formats are:
+
+- Apache Arrow (`.arrow`)
+- Apache ORC (`.orc`)
+- Apache Parquet (`.parquet`)
+- Comma-Separated Values (`.csv`)
+
 Below is a minimal example of using Dataset to query a Parquet file in Java:
 
 .. code-block:: Java
 
     // read data from file /opt/example.parquet
     String uri = "file:/opt/example.parquet";
-    BufferAllocator allocator = new RootAllocator(Long.MAX_VALUE);
-    DatasetFactory factory = new FileSystemDatasetFactory(allocator,
-        NativeMemoryPool.getDefault(), FileFormat.PARQUET, uri);
-    Dataset dataset = factory.finish();
-    Scanner scanner = dataset.newScan(new ScanOptions(100));
-    List<ArrowRecordBatch> batches = StreamSupport.stream(
-        scanner.scan().spliterator(), false)
-            .flatMap(t -> stream(t.execute()))
-            .collect(Collectors.toList());
-
-    // do something with read record batches, for example:
-    analyzeArrowData(batches);
-
-    // finished the analysis of the data, close all resources:
-    AutoCloseables.close(batches);
-    AutoCloseables.close(factory, dataset, scanner);
+    try (
+        BufferAllocator allocator = new RootAllocator();
+        DatasetFactory datasetFactory = new FileSystemDatasetFactory(
+                allocator, NativeMemoryPool.getDefault(),
+                FileFormat.PARQUET, uri);
+        Dataset dataset = datasetFactory.finish();
+        Scanner scanner = dataset.newScan(options);

Review Comment:
   I see you discuss options below. Maybe add a comment here to that effect if you're not going to initialize it. 
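
   If the snippet is going to keep `options` uninitialized, the last line of the hunk above could carry that comment, e.g. (a sketch; the wording is mine, not the PR author's):

   ```java
   // 'options' is a ScanOptions instance describing what to scan (e.g. the
   // maximum rows per batch); its construction is discussed later on this page.
   Scanner scanner = dataset.newScan(options);
   ```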


