Posted to issues@iceberg.apache.org by GitBox <gi...@apache.org> on 2022/07/18 14:58:27 UTC

[GitHub] [iceberg] Fokko opened a new pull request, #5298: Python: Map Manifest onto Pydantic class

Fokko opened a new pull request, #5298:
URL: https://github.com/apache/iceberg/pull/5298

   This makes the classes much more convenient to use, and it gives us type safety through mypy (which we don't get from plain dicts).
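   As a quick illustration, a minimal sketch using plain Pydantic (the real models in this PR extend `IcebergBaseModel`, and the field subset here is just for the example):
   
   ```python
   from typing import Optional
   
   from pydantic import BaseModel
   
   
   class DataFile(BaseModel):
       file_path: str
       record_count: int
       sort_order_id: Optional[int] = None
   
   
   data_file = DataFile(file_path="s3://bucket/data/00000.parquet", record_count=100)
   next_count = data_file.record_count + 1  # mypy knows record_count is an int
   # data_file.recordcount                  # typo: flagged by mypy, AttributeError at runtime
   # with a plain dict, d["recordcount"] would only fail at runtime with a KeyError
   ```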




[GitHub] [iceberg] rdblue commented on a diff in pull request #5298: Python: Map Manifest onto Pydantic class

Posted by GitBox <gi...@apache.org>.
rdblue commented on code in PR #5298:
URL: https://github.com/apache/iceberg/pull/5298#discussion_r926807040


##########
python/pyiceberg/manifest.py:
##########
@@ -0,0 +1,155 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+from enum import Enum
+from functools import singledispatch
+from typing import (
+    Any,
+    Dict,
+    Iterator,
+    List,
+    Optional,
+    Union,
+)
+
+from pyiceberg.avro.file import AvroFile
+from pyiceberg.avro.reader import AvroStruct
+from pyiceberg.io.base import InputFile
+from pyiceberg.schema import Schema
+from pyiceberg.types import (
+    IcebergType,
+    ListType,
+    MapType,
+    PrimitiveType,
+    StructType,
+)
+from pyiceberg.utils.iceberg_base_model import IcebergBaseModel
+
+
+class FileFormat(str, Enum):
+    AVRO = "AVRO"
+    PARQUET = "PARQUET"
+    ORC = "ORC"
+
+
+class DataFile(IcebergBaseModel):
+    file_path: str
+    file_format: FileFormat
+    partition: Dict[str, Any]
+    record_count: int
+    file_size_in_bytes: int
+    block_size_in_bytes: int
+    column_sizes: Optional[Dict[int, int]]
+    value_counts: Optional[Dict[int, int]]
+    null_value_counts: Optional[Dict[int, int]]
+    nan_value_counts: Optional[Dict[int, int]]
+    lower_bounds: Optional[Dict[int, bytes]]
+    upper_bounds: Optional[Dict[int, bytes]]
+    key_metadata: Optional[bytes]
+    split_offsets: Optional[List[int]]
+    sort_order_id: Optional[int]

Review Comment:
   Looks like this is missing `content` (for v2) and `equality_ids` (for delete files in v2)
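   For reference, a hedged sketch of those two fields (the types are an assumption based on the Iceberg spec, not something in this diff):
   
   ```python
   class DataFile(IcebergBaseModel):
       # ... the fields from the diff above ...
       content: Optional[int]             # v2: 0=data, 1=position deletes, 2=equality deletes
       equality_ids: Optional[List[int]]  # v2: field ids used by equality delete files
   ```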





[GitHub] [iceberg] rdblue commented on a diff in pull request #5298: Python: Map Manifest onto Pydantic class

Posted by GitBox <gi...@apache.org>.
rdblue commented on code in PR #5298:
URL: https://github.com/apache/iceberg/pull/5298#discussion_r926808514


##########
python/pyiceberg/manifest.py:
##########
@@ -0,0 +1,155 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+from enum import Enum
+from functools import singledispatch
+from typing import (
+    Any,
+    Dict,
+    Iterator,
+    List,
+    Optional,
+    Union,
+)
+
+from pyiceberg.avro.file import AvroFile
+from pyiceberg.avro.reader import AvroStruct
+from pyiceberg.io.base import InputFile
+from pyiceberg.schema import Schema
+from pyiceberg.types import (
+    IcebergType,
+    ListType,
+    MapType,
+    PrimitiveType,
+    StructType,
+)
+from pyiceberg.utils.iceberg_base_model import IcebergBaseModel
+
+
+class FileFormat(str, Enum):
+    AVRO = "AVRO"
+    PARQUET = "PARQUET"
+    ORC = "ORC"
+
+
+class DataFile(IcebergBaseModel):
+    file_path: str
+    file_format: FileFormat
+    partition: Dict[str, Any]
+    record_count: int
+    file_size_in_bytes: int
+    block_size_in_bytes: int
+    column_sizes: Optional[Dict[int, int]]
+    value_counts: Optional[Dict[int, int]]
+    null_value_counts: Optional[Dict[int, int]]
+    nan_value_counts: Optional[Dict[int, int]]
+    lower_bounds: Optional[Dict[int, bytes]]
+    upper_bounds: Optional[Dict[int, bytes]]
+    key_metadata: Optional[bytes]
+    split_offsets: Optional[List[int]]
+    sort_order_id: Optional[int]
+
+
+class ManifestEntry(IcebergBaseModel):
+    status: int
+    snapshot_id: Optional[int]
+    data_file: DataFile
+
+
+class Partition(IcebergBaseModel):

Review Comment:
   This is specific to a partition field and is a summary, so you may want to update the name to reflect that.
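   A sketch of such a rename (the name mirrors the spec's `field_summary` struct; this is a suggestion, not what the PR contains):
   
   ```python
   class PartitionFieldSummary(IcebergBaseModel):
       contains_null: bool
       contains_nan: Optional[bool]
       lower_bound: Optional[bytes]
       upper_bound: Optional[bytes]
   
   
   class ManifestFile(IcebergBaseModel):
       # ...
       partitions: Optional[List[PartitionFieldSummary]]
   ```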





[GitHub] [iceberg] Fokko commented on a diff in pull request #5298: Python: Map Manifest onto Pydantic class

Posted by GitBox <gi...@apache.org>.
Fokko commented on code in PR #5298:
URL: https://github.com/apache/iceberg/pull/5298#discussion_r927579397


##########
python/pyiceberg/manifest.py:
##########
@@ -0,0 +1,155 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+from enum import Enum
+from functools import singledispatch
+from typing import (
+    Any,
+    Dict,
+    Iterator,
+    List,
+    Optional,
+    Union,
+)
+
+from pyiceberg.avro.file import AvroFile
+from pyiceberg.avro.reader import AvroStruct
+from pyiceberg.io.base import InputFile
+from pyiceberg.schema import Schema
+from pyiceberg.types import (
+    IcebergType,
+    ListType,
+    MapType,
+    PrimitiveType,
+    StructType,
+)
+from pyiceberg.utils.iceberg_base_model import IcebergBaseModel
+
+
+class FileFormat(str, Enum):
+    AVRO = "AVRO"
+    PARQUET = "PARQUET"
+    ORC = "ORC"

Review Comment:
   This is how it is in the Avro file:
   ```
   ➜  warehouse git:(main) avro-tools tojson nyc/taxis_sample/metadata/2ac3263d-9174-418d-b4cc-0542d4c2ea0b-m0.avro | jq | grep -i file_format
   22/07/22 13:55:37 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
       "file_format": "PARQUET",
       "file_format": "PARQUET",
       "file_format": "PARQUET",
       "file_format": "PARQUET",
   ```
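   Assuming that output, the `str`-based enum above should take those values as-is; a quick illustrative check:
   
   ```python
   fmt = FileFormat("PARQUET")
   assert fmt is FileFormat.PARQUET
   assert fmt == "PARQUET"  # str subclass, so it still compares equal to the raw string
   ```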





[GitHub] [iceberg] Fokko commented on a diff in pull request #5298: Python: Map Manifest onto Pydantic class

Posted by GitBox <gi...@apache.org>.
Fokko commented on code in PR #5298:
URL: https://github.com/apache/iceberg/pull/5298#discussion_r926822036


##########
python/pyiceberg/manifest.py:
##########
@@ -0,0 +1,155 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+from enum import Enum
+from functools import singledispatch
+from typing import (
+    Any,
+    Dict,
+    Iterator,
+    List,
+    Optional,
+    Union,
+)
+
+from pyiceberg.avro.file import AvroFile
+from pyiceberg.avro.reader import AvroStruct
+from pyiceberg.io.base import InputFile
+from pyiceberg.schema import Schema
+from pyiceberg.types import (
+    IcebergType,
+    ListType,
+    MapType,
+    PrimitiveType,
+    StructType,
+)
+from pyiceberg.utils.iceberg_base_model import IcebergBaseModel
+
+
+class FileFormat(str, Enum):
+    AVRO = "AVRO"
+    PARQUET = "PARQUET"
+    ORC = "ORC"
+
+
+class DataFile(IcebergBaseModel):
+    file_path: str
+    file_format: FileFormat
+    partition: Dict[str, Any]
+    record_count: int
+    file_size_in_bytes: int
+    block_size_in_bytes: int
+    column_sizes: Optional[Dict[int, int]]
+    value_counts: Optional[Dict[int, int]]
+    null_value_counts: Optional[Dict[int, int]]
+    nan_value_counts: Optional[Dict[int, int]]
+    lower_bounds: Optional[Dict[int, bytes]]
+    upper_bounds: Optional[Dict[int, bytes]]
+    key_metadata: Optional[bytes]
+    split_offsets: Optional[List[int]]
+    sort_order_id: Optional[int]
+
+
+class ManifestEntry(IcebergBaseModel):
+    status: int
+    snapshot_id: Optional[int]
+    data_file: DataFile
+
+
+class Partition(IcebergBaseModel):
+    contains_null: bool
+    contains_nan: Optional[bool]
+    lower_bound: Optional[bytes]
+    upper_bound: Optional[bytes]
+
+
+class ManifestFile(IcebergBaseModel):
+    manifest_path: str
+    manifest_length: int
+    partition_spec_id: int
+    added_snapshot_id: Optional[int]
+    added_data_files_count: Optional[int]
+    existing_data_files_count: Optional[int]
+    deleted_data_files_count: Optional[int]
+    partitions: Optional[List[Partition]]
+    added_rows_count: Optional[int]
+    existing_rows_counts: Optional[int]
+    deleted_rows_count: Optional[int]
+
+
+class Manifest:
+    @staticmethod
+    def read_manifest_entry(input_file: InputFile) -> Iterator[ManifestEntry]:
+        with AvroFile(input_file) as reader:
+            schema = reader.schema
+            for record in reader:
+                dict_repr = convert_pos_to_dict(schema, record)
+                yield ManifestEntry(**dict_repr)
+
+    @staticmethod
+    def read_manifest_list(input_file: InputFile) -> Iterator[ManifestFile]:
+        with AvroFile(input_file) as reader:
+            schema = reader.schema
+            for record in reader:
+                dict_repr = convert_pos_to_dict(schema, record)
+                yield ManifestFile(**dict_repr)
+
+
+@singledispatch
+def convert_pos_to_dict(schema: Union[Schema, IcebergType], struct: AvroStruct) -> Dict[str, Any]:
+    """Converts the positions in the field names
+
+    This makes it easy to map it onto a Pydantic model. Might change later on depending on the performance
+
+     Args:
+         schema (Schema | IcebergType): The schema of the file
+         struct (AvroStruct): The struct containing the data by positions
+
+     Raises:
+         NotImplementedError: If attempting to handle an unknown type in the schema
+    """
+    raise NotImplementedError(f"Cannot traverse non-type: {schema}")
+
+
+@convert_pos_to_dict.register(Schema)
+def _(schema: Schema, struct: AvroStruct) -> Dict[str, Any]:
+    return convert_pos_to_dict(schema.as_struct(), struct)
+
+
+@convert_pos_to_dict.register(StructType)
+def _(struct_type: StructType, values: AvroStruct) -> Dict[str, Any]:
+    """Iterates over all the fields in the dict, and gets the data from the struct"""
+    return {field.name: convert_pos_to_dict(field.field_type, values.get(pos)) for pos, field in enumerate(struct_type.fields)}
+
+
+@convert_pos_to_dict.register(ListType)
+def _(list_type: ListType, values: List[Any]) -> List[Any]:
+    """In the case of a list, we'll go over the elements in the list to handle complex types"""
+    return [convert_pos_to_dict(list_type.element_type, value) for value in values]
+
+
+@convert_pos_to_dict.register(MapType)
+def _(map_type: MapType, values: Dict) -> Dict:
+    """In the case of a map, we both traverse over the key and value to handle complex types"""
+    return {
+        convert_pos_to_dict(map_type.key_type, key): convert_pos_to_dict(map_type.value_type, value)
+        for key, value in values.items()
+    }
+
+
+@convert_pos_to_dict.register(PrimitiveType)

Review Comment:
   Nice, I wasn't aware of that. Thanks!





[GitHub] [iceberg] samredai commented on a diff in pull request #5298: Python: Map Manifest onto Pydantic class

Posted by GitBox <gi...@apache.org>.
samredai commented on code in PR #5298:
URL: https://github.com/apache/iceberg/pull/5298#discussion_r923945692


##########
python/pyiceberg/manifest.py:
##########
@@ -0,0 +1,155 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+from enum import Enum
+from functools import singledispatch
+from typing import (
+    Any,
+    Dict,
+    Iterator,
+    List,
+    Optional,
+    Union,
+)
+
+from pyiceberg.avro.file import AvroFile
+from pyiceberg.avro.reader import AvroStruct
+from pyiceberg.io.base import InputFile
+from pyiceberg.schema import Schema
+from pyiceberg.types import (
+    IcebergType,
+    ListType,
+    MapType,
+    PrimitiveType,
+    StructType,
+)
+from pyiceberg.utils.iceberg_base_model import IcebergBaseModel
+
+
+class FileFormat(str, Enum):
+    AVRO = "AVRO"
+    PARQUET = "PARQUET"
+    ORC = "ORC"
+
+
+class DataFile(IcebergBaseModel):
+    file_path: str
+    file_format: FileFormat
+    partition: Dict[str, Any]
+    record_count: int
+    file_size_in_bytes: int
+    block_size_in_bytes: int
+    column_sizes: Optional[Dict[int, int]]
+    value_counts: Optional[Dict[int, int]]
+    null_value_counts: Optional[Dict[int, int]]
+    nan_value_counts: Optional[Dict[int, int]]
+    lower_bounds: Optional[Dict[int, bytes]]
+    upper_bounds: Optional[Dict[int, bytes]]
+    key_metadata: Optional[bytes]
+    split_offsets: Optional[List[int]]
+    sort_order_id: Optional[int]
+
+
+class ManifestEntry(IcebergBaseModel):
+    status: int
+    snapshot_id: Optional[int]
+    data_file: DataFile
+
+
+class Partition(IcebergBaseModel):
+    contains_null: bool
+    contains_nan: Optional[bool]
+    lower_bound: Optional[bytes]
+    upper_bound: Optional[bytes]
+
+
+class ManifestFile(IcebergBaseModel):
+    manifest_path: str
+    manifest_length: int
+    partition_spec_id: int
+    added_snapshot_id: Optional[int]
+    added_data_files_count: Optional[int]
+    existing_data_files_count: Optional[int]
+    deleted_data_files_count: Optional[int]
+    partitions: Optional[List[Partition]]
+    added_rows_count: Optional[int]
+    existing_rows_counts: Optional[int]
+    deleted_rows_count: Optional[int]
+
+
+class Manifest:
+    @staticmethod
+    def read_manifest_entry(input_file: InputFile) -> Iterator[ManifestEntry]:
+        with AvroFile(input_file) as reader:
+            schema = reader.schema
+            for record in reader:
+                dict_repr = convert_pos_to_dict(schema, record)
+                yield ManifestEntry(**dict_repr)
+
+    @staticmethod
+    def read_manifest_list(input_file: InputFile) -> Iterator[ManifestFile]:
+        with AvroFile(input_file) as reader:
+            schema = reader.schema
+            for record in reader:
+                dict_repr = convert_pos_to_dict(schema, record)
+                yield ManifestFile(**dict_repr)
+
+
+@singledispatch
+def convert_pos_to_dict(schema: Union[Schema, IcebergType], struct: AvroStruct) -> Dict[str, Any]:
+    """Converts the positions in the field names
+
+    This makes it easy to map it onto a Pydantic model. Might change later on depending on the performance
+
+     Args:
+         schema (Schema | IcebergType): The schema of the file
+         struct (AvroStruct): The struct containing the data by positions
+
+     Raises:
+         NotImplementedError: If attempting to handle an unknown type in the schema
+    """
+    raise NotImplementedError(f"Cannot traverse non-type: {schema}")
+
+
+@convert_pos_to_dict.register(Schema)
+def _(schema: Schema, struct: AvroStruct) -> Dict[str, Any]:
+    return convert_pos_to_dict(schema.as_struct(), struct)
+
+
+@convert_pos_to_dict.register(StructType)
+def _(struct_type: StructType, values: AvroStruct) -> Dict[str, Any]:
+    """Iterates over all the fields in the dict, and gets the data from the struct"""
+    return {field.name: convert_pos_to_dict(field.field_type, values.get(pos)) for pos, field in enumerate(struct_type.fields)}
+
+
+@convert_pos_to_dict.register(ListType)
+def _(list_type: ListType, values: List[Any]) -> List[Any]:
+    """In the case of a list, we'll go over the elements in the list to handle complex types"""
+    return [convert_pos_to_dict(list_type.element_type, value) for value in values]
+
+
+@convert_pos_to_dict.register(MapType)
+def _(map_type: MapType, values: Dict) -> Dict:
+    """In the case of a map, we both traverse over the key and value to handle complex types"""
+    return {
+        convert_pos_to_dict(map_type.key_type, key): convert_pos_to_dict(map_type.value_type, value)
+        for key, value in values.items()
+    }
+
+
+@convert_pos_to_dict.register(PrimitiveType)

Review Comment:
   Since these aren't stacked, there's the option to leave the type out of the decorator and let it be inferred from the first argument's type-hint. So just `@convert_pos_to_dict.register`.
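   For illustration, the inferred registration would look like this (the body is a guess at what the primitive case does, not the diff itself):
   
   ```python
   @convert_pos_to_dict.register
   def _(primitive_type: PrimitiveType, value: Any) -> Any:
       """Dispatched on PrimitiveType, inferred from the annotation"""
       return value
   ```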





[GitHub] [iceberg] samredai commented on a diff in pull request #5298: Python: Map Manifest onto Pydantic class

Posted by GitBox <gi...@apache.org>.
samredai commented on code in PR #5298:
URL: https://github.com/apache/iceberg/pull/5298#discussion_r923944865


##########
python/pyiceberg/manifest.py:
##########
@@ -0,0 +1,155 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+from enum import Enum
+from functools import singledispatch
+from typing import (
+    Any,
+    Dict,
+    Iterator,
+    List,
+    Optional,
+    Union,
+)
+
+from pyiceberg.avro.file import AvroFile
+from pyiceberg.avro.reader import AvroStruct
+from pyiceberg.io.base import InputFile
+from pyiceberg.schema import Schema
+from pyiceberg.types import (
+    IcebergType,
+    ListType,
+    MapType,
+    PrimitiveType,
+    StructType,
+)
+from pyiceberg.utils.iceberg_base_model import IcebergBaseModel
+
+
+class FileFormat(str, Enum):
+    AVRO = "AVRO"
+    PARQUET = "PARQUET"
+    ORC = "ORC"
+
+
+class DataFile(IcebergBaseModel):
+    file_path: str
+    file_format: FileFormat
+    partition: Dict[str, Any]
+    record_count: int
+    file_size_in_bytes: int
+    block_size_in_bytes: int
+    column_sizes: Optional[Dict[int, int]]
+    value_counts: Optional[Dict[int, int]]
+    null_value_counts: Optional[Dict[int, int]]
+    nan_value_counts: Optional[Dict[int, int]]
+    lower_bounds: Optional[Dict[int, bytes]]
+    upper_bounds: Optional[Dict[int, bytes]]
+    key_metadata: Optional[bytes]
+    split_offsets: Optional[List[int]]
+    sort_order_id: Optional[int]
+
+
+class ManifestEntry(IcebergBaseModel):
+    status: int
+    snapshot_id: Optional[int]
+    data_file: DataFile
+
+
+class Partition(IcebergBaseModel):
+    contains_null: bool
+    contains_nan: Optional[bool]
+    lower_bound: Optional[bytes]
+    upper_bound: Optional[bytes]
+
+
+class ManifestFile(IcebergBaseModel):
+    manifest_path: str
+    manifest_length: int
+    partition_spec_id: int
+    added_snapshot_id: Optional[int]
+    added_data_files_count: Optional[int]
+    existing_data_files_count: Optional[int]
+    deleted_data_files_count: Optional[int]
+    partitions: Optional[List[Partition]]
+    added_rows_count: Optional[int]
+    existing_rows_counts: Optional[int]
+    deleted_rows_count: Optional[int]
+
+
+class Manifest:

Review Comment:
   Since these are only static methods, how about just dropping the class? Someone could always just do `from iceberg import manifest; manifest.read_manifest_entry(...)` if they wanted.
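   Sketched out, the module would then expose plain functions, roughly (mirroring the static methods above):
   
   ```python
   def read_manifest_entry(input_file: InputFile) -> Iterator[ManifestEntry]:
       with AvroFile(input_file) as reader:
           schema = reader.schema
           for record in reader:
               yield ManifestEntry(**convert_pos_to_dict(schema, record))
   
   
   # caller side of the suggestion:
   # from pyiceberg import manifest
   # entries = list(manifest.read_manifest_entry(input_file))
   ```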





[GitHub] [iceberg] Fokko commented on a diff in pull request #5298: Python: Map Manifest onto Pydantic class

Posted by GitBox <gi...@apache.org>.
Fokko commented on code in PR #5298:
URL: https://github.com/apache/iceberg/pull/5298#discussion_r927652488


##########
python/pyiceberg/manifest.py:
##########
@@ -0,0 +1,155 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+from enum import Enum
+from functools import singledispatch
+from typing import (
+    Any,
+    Dict,
+    Iterator,
+    List,
+    Optional,
+    Union,
+)
+
+from pyiceberg.avro.file import AvroFile
+from pyiceberg.avro.reader import AvroStruct
+from pyiceberg.io.base import InputFile
+from pyiceberg.schema import Schema
+from pyiceberg.types import (
+    IcebergType,
+    ListType,
+    MapType,
+    PrimitiveType,
+    StructType,
+)
+from pyiceberg.utils.iceberg_base_model import IcebergBaseModel
+
+
+class FileFormat(str, Enum):
+    AVRO = "AVRO"
+    PARQUET = "PARQUET"
+    ORC = "ORC"
+
+
+class DataFile(IcebergBaseModel):
+    file_path: str
+    file_format: FileFormat
+    partition: Dict[str, Any]
+    record_count: int
+    file_size_in_bytes: int
+    block_size_in_bytes: int
+    column_sizes: Optional[Dict[int, int]]
+    value_counts: Optional[Dict[int, int]]
+    null_value_counts: Optional[Dict[int, int]]
+    nan_value_counts: Optional[Dict[int, int]]
+    lower_bounds: Optional[Dict[int, bytes]]
+    upper_bounds: Optional[Dict[int, bytes]]
+    key_metadata: Optional[bytes]
+    split_offsets: Optional[List[int]]
+    sort_order_id: Optional[int]
+
+
+class ManifestEntry(IcebergBaseModel):
+    status: int
+    snapshot_id: Optional[int]
+    data_file: DataFile
+
+
+class Partition(IcebergBaseModel):
+    contains_null: bool
+    contains_nan: Optional[bool]
+    lower_bound: Optional[bytes]
+    upper_bound: Optional[bytes]
+
+
+class ManifestFile(IcebergBaseModel):
+    manifest_path: str
+    manifest_length: int
+    partition_spec_id: int
+    added_snapshot_id: Optional[int]
+    added_data_files_count: Optional[int]
+    existing_data_files_count: Optional[int]
+    deleted_data_files_count: Optional[int]
+    partitions: Optional[List[Partition]]
+    added_rows_count: Optional[int]
+    existing_rows_counts: Optional[int]
+    deleted_rows_count: Optional[int]
+
+
+class Manifest:
+    @staticmethod
+    def read_manifest_entry(input_file: InputFile) -> Iterator[ManifestEntry]:
+        with AvroFile(input_file) as reader:
+            schema = reader.schema
+            for record in reader:
+                dict_repr = convert_pos_to_dict(schema, record)

Review Comment:
   Yes, there is some overhead involved. I've parsed 10k manifests (the same ones :), and it looks like 32% of the total time is spent constructing the objects, including reading the actual data. Converting the schema from Avro to Iceberg is even costlier (35% of the total time), mostly because the dispatch lookup that singledispatch has to make is quite slow.
   
   ![python5](https://user-images.githubusercontent.com/1134248/180448107-0b453561-2117-4674-9538-379b14936eee.png)
   
   Code for the profiling:
   ```python
   # Licensed to the Apache Software Foundation (ASF) under one
   # or more contributor license agreements.  See the NOTICE file
   # distributed with this work for additional information
   # regarding copyright ownership.  The ASF licenses this file
   # to you under the Apache License, Version 2.0 (the
   # "License"); you may not use this file except in compliance
   # with the License.  You may obtain a copy of the License at
   #
   #   http://www.apache.org/licenses/LICENSE-2.0
   #
   # Unless required by applicable law or agreed to in writing,
   # software distributed under the License is distributed on an
   # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
   # KIND, either express or implied.  See the License for the
   # specific language governing permissions and limitations
   # under the License.
   import os
   from tempfile import TemporaryDirectory
   from urllib.parse import ParseResult, urlparse
   
   from fastavro import parse_schema, writer
   
   from pyiceberg.io.base import InputFile, InputStream
   from pyiceberg.manifest import (
       read_manifest_entry,
   )
   
   
   class LocalInputFile(InputFile):
       """An InputFile implementation for local files (for test use only)"""
   
       def __init__(self, location: str):
   
           parsed_location = urlparse(location)  # Create a ParseResult from the uri
           if parsed_location.scheme and parsed_location.scheme != "file":  # Validate that a uri is provided with a scheme of `file`
               raise ValueError("LocalInputFile location must have a scheme of `file`")
           elif parsed_location.netloc:
               raise ValueError(f"Network location is not allowed for LocalInputFile: {parsed_location.netloc}")
   
           super().__init__(location=location)
           self._parsed_location = parsed_location
   
       @property
       def parsed_location(self) -> ParseResult:
           """The parsed location
   
           Returns:
               ParseResult: The parsed results which has attributes `scheme`, `netloc`, `path`,
               `params`, `query`, and `fragments`.
           """
           return self._parsed_location
   
       def __len__(self):
           return os.path.getsize(self.parsed_location.path)
   
       def exists(self):
           return os.path.exists(self.parsed_location.path)
   
       def open(self) -> InputStream:
           input_file = open(self.parsed_location.path, "rb")
           if not isinstance(input_file, InputStream):
               raise TypeError("Object returned from LocalInputFile.open() does not match the OutputStream protocol.")
           return input_file
   
   
   avro_schema_manifest_entry = {
       "type": "record",
       "name": "manifest_entry",
       "fields": [
           {"name": "status", "type": "int", "field-id": 0},
           {"name": "snapshot_id", "type": ["null", "long"], "default": "null", "field-id": 1},
           {
               "name": "data_file",
               "type": {
                   "type": "record",
                   "name": "r2",
                   "fields": [
                       {"name": "file_path", "type": "string", "doc": "Location URI with FS scheme", "field-id": 100},
                       {
                           "name": "file_format",
                           "type": "string",
                           "doc": "File format name: avro, orc, or parquet",
                           "field-id": 101,
                       },
                       {
                           "name": "partition",
                           "type": {
                               "type": "record",
                               "name": "r102",
                               "fields": [
                                   {"name": "VendorID", "type": ["null", "int"], "default": "null", "field-id": 1000}],
                           },
                           "field-id": 102,
                       },
                       {"name": "record_count", "type": "long", "doc": "Number of records in the file", "field-id": 103},
                       {"name": "file_size_in_bytes", "type": "long", "doc": "Total file size in bytes", "field-id": 104},
                       {"name": "block_size_in_bytes", "type": "long", "field-id": 105},
                       {
                           "name": "column_sizes",
                           "type": [
                               "null",
                               {
                                   "type": "array",
                                   "items": {
                                       "type": "record",
                                       "name": "k117_v118",
                                       "fields": [
                                           {"name": "key", "type": "int", "field-id": 117},
                                           {"name": "value", "type": "long", "field-id": 118},
                                       ],
                                   },
                                   "logicalType": "map",
                               },
                           ],
                           "doc": "Map of column id to total size on disk",
                           "default": "null",
                           "field-id": 108,
                       },
                       {
                           "name": "value_counts",
                           "type": [
                               "null",
                               {
                                   "type": "array",
                                   "items": {
                                       "type": "record",
                                       "name": "k119_v120",
                                       "fields": [
                                           {"name": "key", "type": "int", "field-id": 119},
                                           {"name": "value", "type": "long", "field-id": 120},
                                       ],
                                   },
                                   "logicalType": "map",
                               },
                           ],
                           "doc": "Map of column id to total count, including null and NaN",
                           "default": "null",
                           "field-id": 109,
                       },
                       {
                           "name": "null_value_counts",
                           "type": [
                               "null",
                               {
                                   "type": "array",
                                   "items": {
                                       "type": "record",
                                       "name": "k121_v122",
                                       "fields": [
                                           {"name": "key", "type": "int", "field-id": 121},
                                           {"name": "value", "type": "long", "field-id": 122},
                                       ],
                                   },
                                   "logicalType": "map",
                               },
                           ],
                           "doc": "Map of column id to null value count",
                           "default": "null",
                           "field-id": 110,
                       },
                       {
                           "name": "nan_value_counts",
                           "type": [
                               "null",
                               {
                                   "type": "array",
                                   "items": {
                                       "type": "record",
                                       "name": "k138_v139",
                                       "fields": [
                                           {"name": "key", "type": "int", "field-id": 138},
                                           {"name": "value", "type": "long", "field-id": 139},
                                       ],
                                   },
                                   "logicalType": "map",
                               },
                           ],
                           "doc": "Map of column id to number of NaN values in the column",
                           "default": "null",
                           "field-id": 137,
                       },
                       {
                           "name": "lower_bounds",
                           "type": [
                               "null",
                               {
                                   "type": "array",
                                   "items": {
                                       "type": "record",
                                       "name": "k126_v127",
                                       "fields": [
                                           {"name": "key", "type": "int", "field-id": 126},
                                           {"name": "value", "type": "bytes", "field-id": 127},
                                       ],
                                   },
                                   "logicalType": "map",
                               },
                           ],
                           "doc": "Map of column id to lower bound",
                           "default": "null",
                           "field-id": 125,
                       },
                       {
                           "name": "upper_bounds",
                           "type": [
                               "null",
                               {
                                   "type": "array",
                                   "items": {
                                       "type": "record",
                                       "name": "k129_v130",
                                       "fields": [
                                           {"name": "key", "type": "int", "field-id": 129},
                                           {"name": "value", "type": "bytes", "field-id": 130},
                                       ],
                                   },
                                   "logicalType": "map",
                               },
                           ],
                           "doc": "Map of column id to upper bound",
                           "default": "null",
                           "field-id": 128,
                       },
                       {
                           "name": "key_metadata",
                           "type": ["null", "bytes"],
                           "doc": "Encryption key metadata blob",
                           "default": "null",
                           "field-id": 131,
                       },
                       {
                           "name": "split_offsets",
                           "type": ["null", {"type": "array", "items": "long", "element-id": 133}],
                           "doc": "Splittable offsets",
                           "default": "null",
                           "field-id": 132,
                       },
                       {
                           "name": "sort_order_id",
                           "type": ["null", "int"],
                           "doc": "Sort order ID",
                           "default": "null",
                           "field-id": 140,
                       },
                   ],
               },
               "field-id": 2,
           },
       ],
   }
   
   manifest_entry_records = [
       {
           "status": 1,
           "snapshot_id": 8744736658442914487,
           "data_file": {
               "file_path": "/home/iceberg/warehouse/nyc/taxis_partitioned/data/VendorID=null/00000-633-d8a4223e-dc97-45a1-86e1-adaba6e8abd7-00001.parquet",
               "file_format": "PARQUET",
               "partition": {"VendorID": None},
               "record_count": 19513,
               "file_size_in_bytes": 388872,
               "block_size_in_bytes": 67108864,
               "column_sizes": [
                   {"key": 1, "value": 53},
                   {"key": 2, "value": 98153},
                   {"key": 3, "value": 98693},
                   {"key": 4, "value": 53},
                   {"key": 5, "value": 53},
                   {"key": 6, "value": 53},
                   {"key": 7, "value": 17425},
                   {"key": 8, "value": 18528},
                   {"key": 9, "value": 53},
                   {"key": 10, "value": 44788},
                   {"key": 11, "value": 35571},
                   {"key": 12, "value": 53},
                   {"key": 13, "value": 1243},
                   {"key": 14, "value": 2355},
                   {"key": 15, "value": 12750},
                   {"key": 16, "value": 4029},
                   {"key": 17, "value": 110},
                   {"key": 18, "value": 47194},
                   {"key": 19, "value": 2948},
               ],
               "value_counts": [
                   {"key": 1, "value": 19513},
                   {"key": 2, "value": 19513},
                   {"key": 3, "value": 19513},
                   {"key": 4, "value": 19513},
                   {"key": 5, "value": 19513},
                   {"key": 6, "value": 19513},
                   {"key": 7, "value": 19513},
                   {"key": 8, "value": 19513},
                   {"key": 9, "value": 19513},
                   {"key": 10, "value": 19513},
                   {"key": 11, "value": 19513},
                   {"key": 12, "value": 19513},
                   {"key": 13, "value": 19513},
                   {"key": 14, "value": 19513},
                   {"key": 15, "value": 19513},
                   {"key": 16, "value": 19513},
                   {"key": 17, "value": 19513},
                   {"key": 18, "value": 19513},
                   {"key": 19, "value": 19513},
               ],
               "null_value_counts": [
                   {"key": 1, "value": 19513},
                   {"key": 2, "value": 0},
                   {"key": 3, "value": 0},
                   {"key": 4, "value": 19513},
                   {"key": 5, "value": 19513},
                   {"key": 6, "value": 19513},
                   {"key": 7, "value": 0},
                   {"key": 8, "value": 0},
                   {"key": 9, "value": 19513},
                   {"key": 10, "value": 0},
                   {"key": 11, "value": 0},
                   {"key": 12, "value": 19513},
                   {"key": 13, "value": 0},
                   {"key": 14, "value": 0},
                   {"key": 15, "value": 0},
                   {"key": 16, "value": 0},
                   {"key": 17, "value": 0},
                   {"key": 18, "value": 0},
                   {"key": 19, "value": 0},
               ],
               "nan_value_counts": [
                   {"key": 16, "value": 0},
                   {"key": 17, "value": 0},
                   {"key": 18, "value": 0},
                   {"key": 19, "value": 0},
                   {"key": 10, "value": 0},
                   {"key": 11, "value": 0},
                   {"key": 12, "value": 0},
                   {"key": 13, "value": 0},
                   {"key": 14, "value": 0},
                   {"key": 15, "value": 0},
               ],
               "lower_bounds": [
                   {"key": 2, "value": b"2020-04-01 00:00"},
                   {"key": 3, "value": b"2020-04-01 00:12"},
                   {"key": 7, "value": b"\x03\x00\x00\x00"},
                   {"key": 8, "value": b"\x01\x00\x00\x00"},
                   {"key": 10, "value": b"\xf6(\\\x8f\xc2\x05S\xc0"},
                   {"key": 11, "value": b"\x00\x00\x00\x00\x00\x00\x00\x00"},
                   {"key": 13, "value": b"\x00\x00\x00\x00\x00\x00\x00\x00"},
                   {"key": 14, "value": b"\x00\x00\x00\x00\x00\x00\xe0\xbf"},
                   {"key": 15, "value": b")\\\x8f\xc2\xf5(\x08\xc0"},
                   {"key": 16, "value": b"\x00\x00\x00\x00\x00\x00\x00\x00"},
                   {"key": 17, "value": b"\x00\x00\x00\x00\x00\x00\x00\x00"},
                   {"key": 18, "value": b"\xf6(\\\x8f\xc2\xc5S\xc0"},
                   {"key": 19, "value": b"\x00\x00\x00\x00\x00\x00\x04\xc0"},
               ],
               "upper_bounds": [
                   {"key": 2, "value": b"2020-04-30 23:5:"},
                   {"key": 3, "value": b"2020-05-01 00:41"},
                   {"key": 7, "value": b"\t\x01\x00\x00"},
                   {"key": 8, "value": b"\t\x01\x00\x00"},
                   {"key": 10, "value": b"\xcd\xcc\xcc\xcc\xcc,_@"},
                   {"key": 11, "value": b"\x1f\x85\xebQ\\\xe2\xfe@"},
                   {"key": 13, "value": b"\x00\x00\x00\x00\x00\x00\x12@"},
                   {"key": 14, "value": b"\x00\x00\x00\x00\x00\x00\xe0?"},
                   {"key": 15, "value": b"q=\n\xd7\xa3\xf01@"},
                   {"key": 16, "value": b"\x00\x00\x00\x00\x00`B@"},
                   {"key": 17, "value": b"333333\xd3?"},
                   {"key": 18, "value": b"\x00\x00\x00\x00\x00\x18b@"},
                   {"key": 19, "value": b"\x00\x00\x00\x00\x00\x00\x04@"},
               ],
               "key_metadata": None,
               "split_offsets": [4],
               "sort_order_id": 0,
           },
       },
       {
           "status": 1,
           "snapshot_id": 8744736658442914487,
           "data_file": {
               "file_path": "/home/iceberg/warehouse/nyc/taxis_partitioned/data/VendorID=1/00000-633-d8a4223e-dc97-45a1-86e1-adaba6e8abd7-00002.parquet",
               "file_format": "PARQUET",
               "partition": {"VendorID": 1},
               "record_count": 95050,
               "file_size_in_bytes": 1265950,
               "block_size_in_bytes": 67108864,
               "column_sizes": [
                   {"key": 1, "value": 318},
                   {"key": 2, "value": 329806},
                   {"key": 3, "value": 331632},
                   {"key": 4, "value": 15343},
                   {"key": 5, "value": 2351},
                   {"key": 6, "value": 3389},
                   {"key": 7, "value": 71269},
                   {"key": 8, "value": 76429},
                   {"key": 9, "value": 16383},
                   {"key": 10, "value": 86992},
                   {"key": 11, "value": 89608},
                   {"key": 12, "value": 265},
                   {"key": 13, "value": 19377},
                   {"key": 14, "value": 1692},
                   {"key": 15, "value": 76162},
                   {"key": 16, "value": 4354},
                   {"key": 17, "value": 759},
                   {"key": 18, "value": 120650},
                   {"key": 19, "value": 11804},
               ],
               "value_counts": [
                   {"key": 1, "value": 95050},
                   {"key": 2, "value": 95050},
                   {"key": 3, "value": 95050},
                   {"key": 4, "value": 95050},
                   {"key": 5, "value": 95050},
                   {"key": 6, "value": 95050},
                   {"key": 7, "value": 95050},
                   {"key": 8, "value": 95050},
                   {"key": 9, "value": 95050},
                   {"key": 10, "value": 95050},
                   {"key": 11, "value": 95050},
                   {"key": 12, "value": 95050},
                   {"key": 13, "value": 95050},
                   {"key": 14, "value": 95050},
                   {"key": 15, "value": 95050},
                   {"key": 16, "value": 95050},
                   {"key": 17, "value": 95050},
                   {"key": 18, "value": 95050},
                   {"key": 19, "value": 95050},
               ],
               "null_value_counts": [
                   {"key": 1, "value": 0},
                   {"key": 2, "value": 0},
                   {"key": 3, "value": 0},
                   {"key": 4, "value": 0},
                   {"key": 5, "value": 0},
                   {"key": 6, "value": 0},
                   {"key": 7, "value": 0},
                   {"key": 8, "value": 0},
                   {"key": 9, "value": 0},
                   {"key": 10, "value": 0},
                   {"key": 11, "value": 0},
                   {"key": 12, "value": 95050},
                   {"key": 13, "value": 0},
                   {"key": 14, "value": 0},
                   {"key": 15, "value": 0},
                   {"key": 16, "value": 0},
                   {"key": 17, "value": 0},
                   {"key": 18, "value": 0},
                   {"key": 19, "value": 0},
               ],
               "nan_value_counts": [
                   {"key": 16, "value": 0},
                   {"key": 17, "value": 0},
                   {"key": 18, "value": 0},
                   {"key": 19, "value": 0},
                   {"key": 10, "value": 0},
                   {"key": 11, "value": 0},
                   {"key": 12, "value": 0},
                   {"key": 13, "value": 0},
                   {"key": 14, "value": 0},
                   {"key": 15, "value": 0},
               ],
               "lower_bounds": [
                   {"key": 1, "value": b"\x01\x00\x00\x00"},
                   {"key": 2, "value": b"2020-04-01 00:00"},
                   {"key": 3, "value": b"2020-04-01 00:03"},
                   {"key": 4, "value": b"\x00\x00\x00\x00"},
                   {"key": 5, "value": b"\x01\x00\x00\x00"},
                   {"key": 6, "value": b"N"},
                   {"key": 7, "value": b"\x01\x00\x00\x00"},
                   {"key": 8, "value": b"\x01\x00\x00\x00"},
                   {"key": 9, "value": b"\x01\x00\x00\x00"},
                   {"key": 10, "value": b"\x00\x00\x00\x00\x00\x00\x00\x00"},
                   {"key": 11, "value": b"\x00\x00\x00\x00\x00\x00\x00\x00"},
                   {"key": 13, "value": b"\x00\x00\x00\x00\x00\x00\x00\x00"},
                   {"key": 14, "value": b"\x00\x00\x00\x00\x00\x00\x00\x00"},
                   {"key": 15, "value": b"\x00\x00\x00\x00\x00\x00\x00\x00"},
                   {"key": 16, "value": b"\x00\x00\x00\x00\x00\x00\x00\x00"},
                   {"key": 17, "value": b"\x00\x00\x00\x00\x00\x00\x00\x00"},
                   {"key": 18, "value": b"\x00\x00\x00\x00\x00\x00\x00\x00"},
                   {"key": 19, "value": b"\x00\x00\x00\x00\x00\x00\x00\x00"},
               ],
               "upper_bounds": [
                   {"key": 1, "value": b"\x01\x00\x00\x00"},
                   {"key": 2, "value": b"2020-04-30 23:5:"},
                   {"key": 3, "value": b"2020-05-01 00:1:"},
                   {"key": 4, "value": b"\x06\x00\x00\x00"},
                   {"key": 5, "value": b"c\x00\x00\x00"},
                   {"key": 6, "value": b"Y"},
                   {"key": 7, "value": b"\t\x01\x00\x00"},
                   {"key": 8, "value": b"\t\x01\x00\x00"},
                   {"key": 9, "value": b"\x04\x00\x00\x00"},
                   {"key": 10, "value": b"\\\x8f\xc2\xf5(8\x8c@"},
                   {"key": 11, "value": b"\xcd\xcc\xcc\xcc\xcc,f@"},
                   {"key": 13, "value": b"\x00\x00\x00\x00\x00\x00\x1c@"},
                   {"key": 14, "value": b"\x9a\x99\x99\x99\x99\x99\xf1?"},
                   {"key": 15, "value": b"\x00\x00\x00\x00\x00\x00Y@"},
                   {"key": 16, "value": b"\x00\x00\x00\x00\x00\xb0X@"},
                   {"key": 17, "value": b"333333\xd3?"},
                   {"key": 18, "value": b"\xc3\xf5(\\\x8f:\x8c@"},
                   {"key": 19, "value": b"\x00\x00\x00\x00\x00\x00\x04@"},
               ],
               "key_metadata": None,
               "split_offsets": [4],
               "sort_order_id": 0,
           },
       },
   ]
   
   parsed_schema = parse_schema(avro_schema_manifest_entry)
   
   with TemporaryDirectory() as tmpdir:
       generated_manifest_entry_file = tmpdir + "/manifest.avro"
       with open(generated_manifest_entry_file, "wb") as out:
           writer(out, parsed_schema, manifest_entry_records)
       input_file = LocalInputFile(generated_manifest_entry_file)
       for _ in range(10000):
           list(read_manifest_entry(input_file))
   ```
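   The flame graph above comes from a profiler; as a hedged way to get comparable numbers with just the standard library, the script (saved as `profile_manifests.py`, a hypothetical name) could be run under cProfile:
   
   ```python
   # python -m cProfile -o manifest.prof profile_manifests.py
   import pstats
   
   stats = pstats.Stats("manifest.prof")
   stats.sort_stats(pstats.SortKey.CUMULATIVE).print_stats(20)  # show the 20 hottest call paths
   ```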





[GitHub] [iceberg] Fokko commented on a diff in pull request #5298: Python: Map Manifest onto Pydantic class

Posted by GitBox <gi...@apache.org>.
Fokko commented on code in PR #5298:
URL: https://github.com/apache/iceberg/pull/5298#discussion_r927097316


##########
python/pyiceberg/manifest.py:
##########
@@ -0,0 +1,155 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+from enum import Enum
+from functools import singledispatch
+from typing import (
+    Any,
+    Dict,
+    Iterator,
+    List,
+    Optional,
+    Union,
+)
+
+from pyiceberg.avro.file import AvroFile
+from pyiceberg.avro.reader import AvroStruct
+from pyiceberg.io.base import InputFile
+from pyiceberg.schema import Schema
+from pyiceberg.types import (
+    IcebergType,
+    ListType,
+    MapType,
+    PrimitiveType,
+    StructType,
+)
+from pyiceberg.utils.iceberg_base_model import IcebergBaseModel
+
+
+class FileFormat(str, Enum):
+    AVRO = "AVRO"
+    PARQUET = "PARQUET"
+    ORC = "ORC"
+
+
+class DataFile(IcebergBaseModel):
+    file_path: str
+    file_format: FileFormat
+    partition: Dict[str, Any]
+    record_count: int
+    file_size_in_bytes: int
+    block_size_in_bytes: int
+    column_sizes: Optional[Dict[int, int]]
+    value_counts: Optional[Dict[int, int]]
+    null_value_counts: Optional[Dict[int, int]]
+    nan_value_counts: Optional[Dict[int, int]]
+    lower_bounds: Optional[Dict[int, bytes]]
+    upper_bounds: Optional[Dict[int, bytes]]
+    key_metadata: Optional[bytes]
+    split_offsets: Optional[List[int]]
+    sort_order_id: Optional[int]

Review Comment:
   Added an enum for the content, which defaults to data when not set.
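
   For reference, a minimal sketch of what that could look like (the numeric values follow the spec's v2 `content` field; the class and field wiring here is only an illustration, not necessarily the merged code):

   ```python
   from enum import IntEnum

   from pydantic import BaseModel, Field


   class DataFileContent(IntEnum):
       DATA = 0             # regular data files
       POSITION_DELETES = 1
       EQUALITY_DELETES = 2


   class DataFileSketch(BaseModel):
       file_path: str
       # `content` is only written for v2 manifests; default to DATA when absent (v1)
       content: DataFileContent = Field(default=DataFileContent.DATA)
   ```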





[GitHub] [iceberg] rdblue commented on a diff in pull request #5298: Python: Map Manifest onto Pydantic class

Posted by GitBox <gi...@apache.org>.
rdblue commented on code in PR #5298:
URL: https://github.com/apache/iceberg/pull/5298#discussion_r926817688


##########
python/pyiceberg/manifest.py:
##########
@@ -0,0 +1,155 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+from enum import Enum
+from functools import singledispatch
+from typing import (
+    Any,
+    Dict,
+    Iterator,
+    List,
+    Optional,
+    Union,
+)
+
+from pyiceberg.avro.file import AvroFile
+from pyiceberg.avro.reader import AvroStruct
+from pyiceberg.io.base import InputFile
+from pyiceberg.schema import Schema
+from pyiceberg.types import (
+    IcebergType,
+    ListType,
+    MapType,
+    PrimitiveType,
+    StructType,
+)
+from pyiceberg.utils.iceberg_base_model import IcebergBaseModel
+
+
+class FileFormat(str, Enum):
+    AVRO = "AVRO"
+    PARQUET = "PARQUET"
+    ORC = "ORC"
+
+
+class DataFile(IcebergBaseModel):
+    file_path: str
+    file_format: FileFormat
+    partition: Dict[str, Any]
+    record_count: int
+    file_size_in_bytes: int
+    block_size_in_bytes: int
+    column_sizes: Optional[Dict[int, int]]
+    value_counts: Optional[Dict[int, int]]
+    null_value_counts: Optional[Dict[int, int]]
+    nan_value_counts: Optional[Dict[int, int]]
+    lower_bounds: Optional[Dict[int, bytes]]
+    upper_bounds: Optional[Dict[int, bytes]]
+    key_metadata: Optional[bytes]
+    split_offsets: Optional[List[int]]
+    sort_order_id: Optional[int]
+
+
+class ManifestEntry(IcebergBaseModel):
+    status: int
+    snapshot_id: Optional[int]
+    data_file: DataFile
+
+
+class Partition(IcebergBaseModel):
+    contains_null: bool
+    contains_nan: Optional[bool]
+    lower_bound: Optional[bytes]
+    upper_bound: Optional[bytes]
+
+
+class ManifestFile(IcebergBaseModel):
+    manifest_path: str
+    manifest_length: int
+    partition_spec_id: int
+    added_snapshot_id: Optional[int]
+    added_data_files_count: Optional[int]
+    existing_data_files_count: Optional[int]
+    deleted_data_files_count: Optional[int]
+    partitions: Optional[List[Partition]]
+    added_rows_count: Optional[int]
+    existing_rows_counts: Optional[int]
+    deleted_rows_count: Optional[int]
+
+
+class Manifest:
+    @staticmethod
+    def read_manifest_entry(input_file: InputFile) -> Iterator[ManifestEntry]:
+        with AvroFile(input_file) as reader:
+            schema = reader.schema
+            for record in reader:
+                dict_repr = convert_pos_to_dict(schema, record)

Review Comment:
   Isn't copying from an Avro representation into a dict going to be expensive? I had assumed that we would make `ManifestEntry` and `DataFile` implement `StructProtocol` so that values could be added directly to those classes rather than copied in.
   
   We should probably chat about this a bit to consider the options.
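
   To make the positional idea concrete, a rough sketch of a struct-like entry that the Avro reader could fill in place (the `get`/`set` method names are an assumption about `StructProtocol`, not necessarily the exact pyiceberg interface):

   ```python
   from typing import Any, List, Optional


   class ManifestEntryStruct:
       """Hypothetical manifest entry that accepts decoded values by position."""

       _POSITIONS = ("status", "snapshot_id", "data_file")

       def __init__(self) -> None:
           self._values: List[Optional[Any]] = [None] * len(self._POSITIONS)

       def set(self, pos: int, value: Any) -> None:
           # The Avro reader writes each decoded value straight into its slot,
           # skipping the intermediate dict per record.
           self._values[pos] = value

       def get(self, pos: int) -> Any:
           return self._values[pos]
   ```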





[GitHub] [iceberg] rdblue commented on a diff in pull request #5298: Python: Map Manifest onto Pydantic class

Posted by GitBox <gi...@apache.org>.
rdblue commented on code in PR #5298:
URL: https://github.com/apache/iceberg/pull/5298#discussion_r926804112


##########
python/pyiceberg/manifest.py:
##########
@@ -0,0 +1,155 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+from enum import Enum
+from functools import singledispatch
+from typing import (
+    Any,
+    Dict,
+    Iterator,
+    List,
+    Optional,
+    Union,
+)
+
+from pyiceberg.avro.file import AvroFile
+from pyiceberg.avro.reader import AvroStruct
+from pyiceberg.io.base import InputFile
+from pyiceberg.schema import Schema
+from pyiceberg.types import (
+    IcebergType,
+    ListType,
+    MapType,
+    PrimitiveType,
+    StructType,
+)
+from pyiceberg.utils.iceberg_base_model import IcebergBaseModel
+
+
+class FileFormat(str, Enum):
+    AVRO = "AVRO"
+    PARQUET = "PARQUET"
+    ORC = "ORC"
+
+
+class DataFile(IcebergBaseModel):
+    file_path: str
+    file_format: FileFormat
+    partition: Dict[str, Any]
+    record_count: int
+    file_size_in_bytes: int
+    block_size_in_bytes: int
+    column_sizes: Optional[Dict[int, int]]
+    value_counts: Optional[Dict[int, int]]
+    null_value_counts: Optional[Dict[int, int]]
+    nan_value_counts: Optional[Dict[int, int]]
+    lower_bounds: Optional[Dict[int, bytes]]
+    upper_bounds: Optional[Dict[int, bytes]]
+    key_metadata: Optional[bytes]
+    split_offsets: Optional[List[int]]
+    sort_order_id: Optional[int]
+
+
+class ManifestEntry(IcebergBaseModel):
+    status: int
+    snapshot_id: Optional[int]

Review Comment:
   Looks like this is missing `sequence_number` for v2 tables.
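
   A small sketch of what adding it could look like (field name per the spec; making it optional and the pydantic details are assumptions, and `data_file` is left out for brevity):

   ```python
   from typing import Optional

   from pydantic import BaseModel


   class ManifestEntrySketch(BaseModel):
       status: int
       snapshot_id: Optional[int] = None
       # v2: may be null in the file and then inherited from the snapshot
       sequence_number: Optional[int] = None
   ```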





[GitHub] [iceberg] rdblue merged pull request #5298: Python: Map Manifest onto Pydantic class

Posted by GitBox <gi...@apache.org>.
rdblue merged PR #5298:
URL: https://github.com/apache/iceberg/pull/5298




[GitHub] [iceberg] rdblue commented on a diff in pull request #5298: Python: Map Manifest onto Pydantic class

Posted by GitBox <gi...@apache.org>.
rdblue commented on code in PR #5298:
URL: https://github.com/apache/iceberg/pull/5298#discussion_r926813227


##########
python/pyiceberg/manifest.py:
##########
@@ -0,0 +1,155 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+from enum import Enum
+from functools import singledispatch
+from typing import (
+    Any,
+    Dict,
+    Iterator,
+    List,
+    Optional,
+    Union,
+)
+
+from pyiceberg.avro.file import AvroFile
+from pyiceberg.avro.reader import AvroStruct
+from pyiceberg.io.base import InputFile
+from pyiceberg.schema import Schema
+from pyiceberg.types import (
+    IcebergType,
+    ListType,
+    MapType,
+    PrimitiveType,
+    StructType,
+)
+from pyiceberg.utils.iceberg_base_model import IcebergBaseModel
+
+
+class FileFormat(str, Enum):
+    AVRO = "AVRO"
+    PARQUET = "PARQUET"
+    ORC = "ORC"
+
+
+class DataFile(IcebergBaseModel):
+    file_path: str
+    file_format: FileFormat
+    partition: Dict[str, Any]
+    record_count: int
+    file_size_in_bytes: int
+    block_size_in_bytes: int
+    column_sizes: Optional[Dict[int, int]]
+    value_counts: Optional[Dict[int, int]]
+    null_value_counts: Optional[Dict[int, int]]
+    nan_value_counts: Optional[Dict[int, int]]
+    lower_bounds: Optional[Dict[int, bytes]]
+    upper_bounds: Optional[Dict[int, bytes]]
+    key_metadata: Optional[bytes]
+    split_offsets: Optional[List[int]]
+    sort_order_id: Optional[int]
+
+
+class ManifestEntry(IcebergBaseModel):
+    status: int
+    snapshot_id: Optional[int]
+    data_file: DataFile
+
+
+class Partition(IcebergBaseModel):
+    contains_null: bool
+    contains_nan: Optional[bool]
+    lower_bound: Optional[bytes]
+    upper_bound: Optional[bytes]
+
+
+class ManifestFile(IcebergBaseModel):

Review Comment:
   This is missing `content`, `sequence_number`, and `min_sequence_number` for v2.
   
   There is also now a `key_metadata` field.
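
   A sketch of those additions (names per the spec; the `ManifestContent` values 0 for data and 1 for deletes also come from the spec, while the defaults and pydantic details are assumptions):

   ```python
   from enum import IntEnum
   from typing import Optional

   from pydantic import BaseModel, Field


   class ManifestContent(IntEnum):
       DATA = 0
       DELETES = 1


   class ManifestFileSketch(BaseModel):
       manifest_path: str
       manifest_length: int
       partition_spec_id: int
       # v2 additions discussed above
       content: ManifestContent = Field(default=ManifestContent.DATA)
       sequence_number: Optional[int] = None
       min_sequence_number: Optional[int] = None
       key_metadata: Optional[bytes] = None
   ```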





[GitHub] [iceberg] Fokko commented on a diff in pull request #5298: Python: Map Manifest onto Pydantic class

Posted by GitBox <gi...@apache.org>.
Fokko commented on code in PR #5298:
URL: https://github.com/apache/iceberg/pull/5298#discussion_r926821260


##########
python/pyiceberg/manifest.py:
##########
@@ -0,0 +1,155 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+from enum import Enum
+from functools import singledispatch
+from typing import (
+    Any,
+    Dict,
+    Iterator,
+    List,
+    Optional,
+    Union,
+)
+
+from pyiceberg.avro.file import AvroFile
+from pyiceberg.avro.reader import AvroStruct
+from pyiceberg.io.base import InputFile
+from pyiceberg.schema import Schema
+from pyiceberg.types import (
+    IcebergType,
+    ListType,
+    MapType,
+    PrimitiveType,
+    StructType,
+)
+from pyiceberg.utils.iceberg_base_model import IcebergBaseModel
+
+
+class FileFormat(str, Enum):
+    AVRO = "AVRO"
+    PARQUET = "PARQUET"
+    ORC = "ORC"
+
+
+class DataFile(IcebergBaseModel):
+    file_path: str
+    file_format: FileFormat
+    partition: Dict[str, Any]
+    record_count: int
+    file_size_in_bytes: int
+    block_size_in_bytes: int
+    column_sizes: Optional[Dict[int, int]]
+    value_counts: Optional[Dict[int, int]]
+    null_value_counts: Optional[Dict[int, int]]
+    nan_value_counts: Optional[Dict[int, int]]
+    lower_bounds: Optional[Dict[int, bytes]]
+    upper_bounds: Optional[Dict[int, bytes]]
+    key_metadata: Optional[bytes]
+    split_offsets: Optional[List[int]]
+    sort_order_id: Optional[int]
+
+
+class ManifestEntry(IcebergBaseModel):
+    status: int
+    snapshot_id: Optional[int]
+    data_file: DataFile
+
+
+class Partition(IcebergBaseModel):
+    contains_null: bool
+    contains_nan: Optional[bool]
+    lower_bound: Optional[bytes]
+    upper_bound: Optional[bytes]
+
+
+class ManifestFile(IcebergBaseModel):
+    manifest_path: str
+    manifest_length: int
+    partition_spec_id: int
+    added_snapshot_id: Optional[int]
+    added_data_files_count: Optional[int]
+    existing_data_files_count: Optional[int]
+    deleted_data_files_count: Optional[int]
+    partitions: Optional[List[Partition]]
+    added_rows_count: Optional[int]
+    existing_rows_counts: Optional[int]
+    deleted_rows_count: Optional[int]
+
+
+class Manifest:

Review Comment:
   Yes, that makes sense to me. Thanks!





[GitHub] [iceberg] Fokko commented on a diff in pull request #5298: Python: Map Manifest onto Pydantic class

Posted by GitBox <gi...@apache.org>.
Fokko commented on code in PR #5298:
URL: https://github.com/apache/iceberg/pull/5298#discussion_r927087797


##########
python/pyiceberg/manifest.py:
##########
@@ -0,0 +1,155 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+from enum import Enum
+from functools import singledispatch
+from typing import (
+    Any,
+    Dict,
+    Iterator,
+    List,
+    Optional,
+    Union,
+)
+
+from pyiceberg.avro.file import AvroFile
+from pyiceberg.avro.reader import AvroStruct
+from pyiceberg.io.base import InputFile
+from pyiceberg.schema import Schema
+from pyiceberg.types import (
+    IcebergType,
+    ListType,
+    MapType,
+    PrimitiveType,
+    StructType,
+)
+from pyiceberg.utils.iceberg_base_model import IcebergBaseModel
+
+
+class FileFormat(str, Enum):
+    AVRO = "AVRO"
+    PARQUET = "PARQUET"
+    ORC = "ORC"

Review Comment:
   Do you mean the key or the string repr? For the keys, we decided to upper-case them because that's also the convention in the standard lib. The value I just took from an example Manifest: https://github.com/apache/iceberg/blob/d1d70875714ada876cc651be6d1b3a1d7483a0da/python/tests/conftest.py#L233
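
   As a quick illustration of that round-trip (plain standard-library behaviour, nothing specific to this PR):

   ```python
   from enum import Enum


   class FileFormat(str, Enum):
       AVRO = "AVRO"
       PARQUET = "PARQUET"
       ORC = "ORC"


   # The string stored in a manifest maps back onto the member,
   # and the member still compares equal to the raw value.
   assert FileFormat("PARQUET") is FileFormat.PARQUET
   assert FileFormat.PARQUET == "PARQUET"
   ```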





[GitHub] [iceberg] rdblue commented on a diff in pull request #5298: Python: Map Manifest onto Pydantic class

Posted by GitBox <gi...@apache.org>.
rdblue commented on code in PR #5298:
URL: https://github.com/apache/iceberg/pull/5298#discussion_r926144040


##########
python/pyiceberg/manifest.py:
##########
@@ -0,0 +1,155 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+from enum import Enum
+from functools import singledispatch
+from typing import (
+    Any,
+    Dict,
+    Iterator,
+    List,
+    Optional,
+    Union,
+)
+
+from pyiceberg.avro.file import AvroFile
+from pyiceberg.avro.reader import AvroStruct
+from pyiceberg.io.base import InputFile
+from pyiceberg.schema import Schema
+from pyiceberg.types import (
+    IcebergType,
+    ListType,
+    MapType,
+    PrimitiveType,
+    StructType,
+)
+from pyiceberg.utils.iceberg_base_model import IcebergBaseModel
+
+
+class FileFormat(str, Enum):
+    AVRO = "AVRO"
+    PARQUET = "PARQUET"
+    ORC = "ORC"

Review Comment:
   What about making these lower case? I think that's a more natural way to use them. For example, if we have a way to add an extension to a file name, these names would probably be the extension.
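
   Even with upper-case values, a lower-cased extension is easy to derive; a hypothetical helper just to illustrate the point:

   ```python
   from enum import Enum


   class FileFormat(str, Enum):
       AVRO = "AVRO"
       PARQUET = "PARQUET"
       ORC = "ORC"


   def with_extension(path: str, file_format: FileFormat) -> str:
       """Hypothetical helper: append the format as a lower-cased file extension."""
       return f"{path}.{file_format.value.lower()}"


   assert with_extension("part-00000", FileFormat.PARQUET) == "part-00000.parquet"
   ```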





[GitHub] [iceberg] Fokko commented on a diff in pull request #5298: Python: Map Manifest onto Pydantic class

Posted by GitBox <gi...@apache.org>.
Fokko commented on code in PR #5298:
URL: https://github.com/apache/iceberg/pull/5298#discussion_r927113964


##########
python/pyiceberg/manifest.py:
##########
@@ -0,0 +1,155 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+from enum import Enum
+from functools import singledispatch
+from typing import (
+    Any,
+    Dict,
+    Iterator,
+    List,
+    Optional,
+    Union,
+)
+
+from pyiceberg.avro.file import AvroFile
+from pyiceberg.avro.reader import AvroStruct
+from pyiceberg.io.base import InputFile
+from pyiceberg.schema import Schema
+from pyiceberg.types import (
+    IcebergType,
+    ListType,
+    MapType,
+    PrimitiveType,
+    StructType,
+)
+from pyiceberg.utils.iceberg_base_model import IcebergBaseModel
+
+
+class FileFormat(str, Enum):
+    AVRO = "AVRO"
+    PARQUET = "PARQUET"
+    ORC = "ORC"
+
+
+class DataFile(IcebergBaseModel):
+    file_path: str
+    file_format: FileFormat
+    partition: Dict[str, Any]
+    record_count: int
+    file_size_in_bytes: int
+    block_size_in_bytes: int
+    column_sizes: Optional[Dict[int, int]]
+    value_counts: Optional[Dict[int, int]]
+    null_value_counts: Optional[Dict[int, int]]
+    nan_value_counts: Optional[Dict[int, int]]
+    lower_bounds: Optional[Dict[int, bytes]]
+    upper_bounds: Optional[Dict[int, bytes]]
+    key_metadata: Optional[bytes]
+    split_offsets: Optional[List[int]]
+    sort_order_id: Optional[int]
+
+
+class ManifestEntry(IcebergBaseModel):
+    status: int
+    snapshot_id: Optional[int]
+    data_file: DataFile
+
+
+class Partition(IcebergBaseModel):
+    contains_null: bool
+    contains_nan: Optional[bool]
+    lower_bound: Optional[bytes]
+    upper_bound: Optional[bytes]
+
+
+class ManifestFile(IcebergBaseModel):
+    manifest_path: str
+    manifest_length: int
+    partition_spec_id: int
+    added_snapshot_id: Optional[int]
+    added_data_files_count: Optional[int]
+    existing_data_files_count: Optional[int]
+    deleted_data_files_count: Optional[int]

Review Comment:
   Looks like a discrepancy in the spec? This is called `added_files_count` in the spec, and `added_data_files_count` in the code: https://github.com/apache/iceberg/blob/master/api/src/main/java/org/apache/iceberg/ManifestFile.java#L44-L49
   I'll dig into this tomorrow with a fresh mind.
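
   If both spellings exist in the wild, one way to accept either could be a pydantic alias; a sketch under that assumption (pydantic v1 style, not necessarily what this PR ends up doing):

   ```python
   from typing import Optional

   from pydantic import BaseModel, Field


   class ManifestFileSketch(BaseModel):
       # spec name as the field, the longer name as an accepted alias
       added_files_count: Optional[int] = Field(default=None, alias="added_data_files_count")

       class Config:
           # allow parsing by the field name as well as by the alias
           allow_population_by_field_name = True
   ```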





[GitHub] [iceberg] Fokko commented on a diff in pull request #5298: Python: Map Manifest onto Pydantic class

Posted by GitBox <gi...@apache.org>.
Fokko commented on code in PR #5298:
URL: https://github.com/apache/iceberg/pull/5298#discussion_r927104392


##########
python/pyiceberg/manifest.py:
##########
@@ -0,0 +1,155 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+from enum import Enum
+from functools import singledispatch
+from typing import (
+    Any,
+    Dict,
+    Iterator,
+    List,
+    Optional,
+    Union,
+)
+
+from pyiceberg.avro.file import AvroFile
+from pyiceberg.avro.reader import AvroStruct
+from pyiceberg.io.base import InputFile
+from pyiceberg.schema import Schema
+from pyiceberg.types import (
+    IcebergType,
+    ListType,
+    MapType,
+    PrimitiveType,
+    StructType,
+)
+from pyiceberg.utils.iceberg_base_model import IcebergBaseModel
+
+
+class FileFormat(str, Enum):
+    AVRO = "AVRO"
+    PARQUET = "PARQUET"
+    ORC = "ORC"
+
+
+class DataFile(IcebergBaseModel):
+    file_path: str
+    file_format: FileFormat
+    partition: Dict[str, Any]
+    record_count: int
+    file_size_in_bytes: int
+    block_size_in_bytes: int
+    column_sizes: Optional[Dict[int, int]]
+    value_counts: Optional[Dict[int, int]]
+    null_value_counts: Optional[Dict[int, int]]
+    nan_value_counts: Optional[Dict[int, int]]
+    lower_bounds: Optional[Dict[int, bytes]]
+    upper_bounds: Optional[Dict[int, bytes]]
+    key_metadata: Optional[bytes]
+    split_offsets: Optional[List[int]]
+    sort_order_id: Optional[int]
+
+
+class ManifestEntry(IcebergBaseModel):
+    status: int
+    snapshot_id: Optional[int]
+    data_file: DataFile
+
+
+class Partition(IcebergBaseModel):
+    contains_null: bool
+    contains_nan: Optional[bool]
+    lower_bound: Optional[bytes]
+    upper_bound: Optional[bytes]
+
+
+class ManifestFile(IcebergBaseModel):
+    manifest_path: str
+    manifest_length: int
+    partition_spec_id: int
+    added_snapshot_id: Optional[int]
+    added_data_files_count: Optional[int]
+    existing_data_files_count: Optional[int]
+    deleted_data_files_count: Optional[int]

Review Comment:
   Great catch, updated! 👍🏻 





[GitHub] [iceberg] rdblue commented on a diff in pull request #5298: Python: Map Manifest onto Pydantic class

Posted by GitBox <gi...@apache.org>.
rdblue commented on code in PR #5298:
URL: https://github.com/apache/iceberg/pull/5298#discussion_r926812236


##########
python/pyiceberg/manifest.py:
##########
@@ -0,0 +1,155 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+from enum import Enum
+from functools import singledispatch
+from typing import (
+    Any,
+    Dict,
+    Iterator,
+    List,
+    Optional,
+    Union,
+)
+
+from pyiceberg.avro.file import AvroFile
+from pyiceberg.avro.reader import AvroStruct
+from pyiceberg.io.base import InputFile
+from pyiceberg.schema import Schema
+from pyiceberg.types import (
+    IcebergType,
+    ListType,
+    MapType,
+    PrimitiveType,
+    StructType,
+)
+from pyiceberg.utils.iceberg_base_model import IcebergBaseModel
+
+
+class FileFormat(str, Enum):
+    AVRO = "AVRO"
+    PARQUET = "PARQUET"
+    ORC = "ORC"
+
+
+class DataFile(IcebergBaseModel):
+    file_path: str
+    file_format: FileFormat
+    partition: Dict[str, Any]
+    record_count: int
+    file_size_in_bytes: int
+    block_size_in_bytes: int
+    column_sizes: Optional[Dict[int, int]]
+    value_counts: Optional[Dict[int, int]]
+    null_value_counts: Optional[Dict[int, int]]
+    nan_value_counts: Optional[Dict[int, int]]
+    lower_bounds: Optional[Dict[int, bytes]]
+    upper_bounds: Optional[Dict[int, bytes]]
+    key_metadata: Optional[bytes]
+    split_offsets: Optional[List[int]]
+    sort_order_id: Optional[int]
+
+
+class ManifestEntry(IcebergBaseModel):
+    status: int
+    snapshot_id: Optional[int]
+    data_file: DataFile
+
+
+class Partition(IcebergBaseModel):
+    contains_null: bool
+    contains_nan: Optional[bool]
+    lower_bound: Optional[bytes]
+    upper_bound: Optional[bytes]
+
+
+class ManifestFile(IcebergBaseModel):
+    manifest_path: str
+    manifest_length: int
+    partition_spec_id: int
+    added_snapshot_id: Optional[int]
+    added_data_files_count: Optional[int]
+    existing_data_files_count: Optional[int]
+    deleted_data_files_count: Optional[int]

Review Comment:
   All 3 of these should omit `data` because they are used for added/existing/deleted file counts for delete files in delete manifests. These are also required for v2.





[GitHub] [iceberg] rdblue commented on a diff in pull request #5298: Python: Map Manifest onto Pydantic class

Posted by GitBox <gi...@apache.org>.
rdblue commented on code in PR #5298:
URL: https://github.com/apache/iceberg/pull/5298#discussion_r926805328


##########
python/pyiceberg/manifest.py:
##########
@@ -0,0 +1,155 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+from enum import Enum
+from functools import singledispatch
+from typing import (
+    Any,
+    Dict,
+    Iterator,
+    List,
+    Optional,
+    Union,
+)
+
+from pyiceberg.avro.file import AvroFile
+from pyiceberg.avro.reader import AvroStruct
+from pyiceberg.io.base import InputFile
+from pyiceberg.schema import Schema
+from pyiceberg.types import (
+    IcebergType,
+    ListType,
+    MapType,
+    PrimitiveType,
+    StructType,
+)
+from pyiceberg.utils.iceberg_base_model import IcebergBaseModel
+
+
+class FileFormat(str, Enum):
+    AVRO = "AVRO"
+    PARQUET = "PARQUET"
+    ORC = "ORC"
+
+
+class DataFile(IcebergBaseModel):
+    file_path: str
+    file_format: FileFormat
+    partition: Dict[str, Any]
+    record_count: int
+    file_size_in_bytes: int
+    block_size_in_bytes: int

Review Comment:
   This should be written for v1 and omitted for v2.
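
   One way a writer could handle that, sketched with pydantic's exclude support (the `format_version` switch and the reduced field set are illustrative assumptions):

   ```python
   from typing import Any, Dict, Optional

   from pydantic import BaseModel


   class DataFileSketch(BaseModel):
       file_path: str
       record_count: int
       # only meaningful for v1 manifests
       block_size_in_bytes: Optional[int] = None


   def to_manifest_dict(data_file: DataFileSketch, format_version: int) -> Dict[str, Any]:
       """Drop block_size_in_bytes when writing a v2 manifest."""
       exclude = {"block_size_in_bytes"} if format_version >= 2 else None
       return data_file.dict(exclude=exclude)
   ```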





[GitHub] [iceberg] rdblue commented on a diff in pull request #5298: Python: Map Manifest onto Pydantic class

Posted by GitBox <gi...@apache.org>.
rdblue commented on code in PR #5298:
URL: https://github.com/apache/iceberg/pull/5298#discussion_r927120403


##########
python/pyiceberg/manifest.py:
##########
@@ -0,0 +1,155 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+from enum import Enum
+from functools import singledispatch
+from typing import (
+    Any,
+    Dict,
+    Iterator,
+    List,
+    Optional,
+    Union,
+)
+
+from pyiceberg.avro.file import AvroFile
+from pyiceberg.avro.reader import AvroStruct
+from pyiceberg.io.base import InputFile
+from pyiceberg.schema import Schema
+from pyiceberg.types import (
+    IcebergType,
+    ListType,
+    MapType,
+    PrimitiveType,
+    StructType,
+)
+from pyiceberg.utils.iceberg_base_model import IcebergBaseModel
+
+
+class FileFormat(str, Enum):
+    AVRO = "AVRO"
+    PARQUET = "PARQUET"
+    ORC = "ORC"

Review Comment:
   The string, but if Java is producing PARQUET, then we should probably do the same case.


