Posted to issues@iceberg.apache.org by "rdblue (via GitHub)" <gi...@apache.org> on 2023/01/24 20:23:49 UTC

[GitHub] [iceberg] rdblue commented on a diff in pull request #6645: Python: Parallelize IO

rdblue commented on code in PR #6645:
URL: https://github.com/apache/iceberg/pull/6645#discussion_r1085884470


##########
python/pyiceberg/io/pyarrow.py:
##########
@@ -470,13 +471,54 @@ def expression_to_pyarrow(expr: BooleanExpression) -> pc.Expression:
     return boolean_expression_visit(expr, _ConvertToArrowExpression())
 
 
+def _file_to_table(
+    fs: FileSystem,
+    task: FileScanTask,
+    bound_row_filter: BooleanExpression,
+    projected_schema: Schema,
+    projected_field_ids: Set[int],
+    case_sensitive: bool,
+) -> pa.Table:
+    _, path = PyArrowFileIO.parse_location(task.file.file_path)
+
+    # Read the Iceberg schema embedded in the Parquet footer metadata
+    with fs.open_input_file(path) as fin:
+        parquet_schema = pq.read_schema(fin)
+        schema_raw = parquet_schema.metadata.get(ICEBERG_SCHEMA)
+        if schema_raw is None:
+            raise ValueError(
+                "Iceberg schema is not embedded into the Parquet file, see https://github.com/apache/iceberg/issues/6505"
+            )
+        file_schema = Schema.parse_raw(schema_raw)
+
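+    # Bind the filter to the file's own schema: column names can differ from
+    # the table schema (e.g. after a rename), so translate references by field ID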
+    pyarrow_filter = None
+    if bound_row_filter is not AlwaysTrue():
+        translated_row_filter = translate_column_names(bound_row_filter, file_schema, case_sensitive=case_sensitive)
+        bound_file_filter = bind(file_schema, translated_row_filter, case_sensitive=case_sensitive)
+        pyarrow_filter = expression_to_pyarrow(bound_file_filter)
+
+    file_project_schema = prune_columns(file_schema, projected_field_ids, select_full_types=False)
+
+    # Convert the pruned projection to an Arrow schema so the scan only
+    # materializes the columns we actually need
+    file_project_schema_arrow = schema_to_pyarrow(file_project_schema)
+
+    arrow_table = ds.dataset(

Review Comment:
   When I was running tests, I noticed that reading into PyArrow became the bottleneck. I think we will probably want to configure the Parquet format with at least [`pre_buffer`](https://github.com/apache/iceberg/pull/6590/files#diff-49144c27eab7e03926b0310aa8b513dbeb9c8d2a0d33bacf8dedbd88b4680aacR516) enabled. That doesn't need to be done in this PR, though.
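
   For reference, a minimal sketch of what enabling that could look like with the PyArrow dataset API (an assumption about the eventual change, not code from this PR):

   ```python
   import pyarrow.dataset as ds

   # Enable Parquet pre-buffering: the reader coalesces column-chunk reads into
   # larger requests up front, which tends to help on high-latency object stores.
   parquet_format = ds.ParquetFileFormat(
       default_fragment_scan_options=ds.ParquetFragmentScanOptions(pre_buffer=True)
   )

   # The format object would then be passed to the ds.dataset(...) call above,
   # e.g. ds.dataset(source, format=parquet_format, filesystem=fs).
   ```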



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@iceberg.apache.org
For additional commands, e-mail: issues-help@iceberg.apache.org