Posted to dev@parquet.apache.org by "ian (Jira)" <ji...@apache.org> on 2021/02/24 01:53:03 UTC
[jira] [Commented] (PARQUET-1958) Forced UTF8 encoding of BYTE_ARRAY on stream::read/write
[ https://issues.apache.org/jira/browse/PARQUET-1958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17289541#comment-17289541 ]
ian commented on PARQUET-1958:
------------------------------
Hi Micah. No, the check should be removed, both to align with the schema Node checks and to support unencoded byte arrays. Thank you for following up. Best, Ian.
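For context, the check under discussion can be modeled in isolation. The sketch below is a standalone illustration, not the actual parquet-cpp code: it contrasts the current strict behavior, which rejects ConvertedType::NONE, with the relaxed check Ian suggests, which accepts any BYTE_ARRAY column regardless of annotation. The enum values and function names are stand-ins.

```cpp
#include <stdexcept>

// Standalone model of the column check -- NOT the real parquet-cpp code.
enum class Type { BYTE_ARRAY, INT32 };
enum class ConvertedType { NONE, UTF8 };

// Current behavior: the stream classes demand UTF8 exactly, so an
// unannotated BYTE_ARRAY column is rejected with an exception.
inline void CheckColumnStrict(Type physical, ConvertedType converted) {
  if (physical != Type::BYTE_ARRAY || converted != ConvertedType::UTF8) {
    throw std::runtime_error("column type mismatch");
  }
}

// After removing the ConvertedType requirement: any BYTE_ARRAY column
// passes, matching what the schema Node itself already permits.
inline void CheckColumnRelaxed(Type physical, ConvertedType /*converted*/) {
  if (physical != Type::BYTE_ARRAY) {
    throw std::runtime_error("column type mismatch");
  }
}
```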
-------- Original message --------
From: "Micah Kornfield (Jira)" <ji...@apache.org>
Date: 1/22/21 22:27 (GMT-06:00)
To: com@ianlavie.com
Subject: [jira] [Commented] (PARQUET-1958) Forced UTF8 encoding of BYTE_ARRAY on stream::read/write

[ https://issues.apache.org/jira/browse/PARQUET-1958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17270548#comment-17270548 ]

Micah Kornfield commented on PARQUET-1958:
------------------------------------------

I actually am not sure that the check is needed at all. It seems any byte array should be readable at this point. My memory of this code is a little hazy, but do you think there is harm in removing the check?
> Forced UTF8 encoding of BYTE_ARRAY on stream::read/write
> --------------------------------------------------------
>
> Key: PARQUET-1958
> URL: https://issues.apache.org/jira/browse/PARQUET-1958
> Project: Parquet
> Issue Type: Bug
> Components: parquet-cpp
> Affects Versions: cpp-1.5.0
> Reporter: ian
> Priority: Major
>
> {code:cpp}
> StreamReader& StreamReader::operator>>(optional<std::string>& v) {
>   CheckColumn(Type::BYTE_ARRAY, ConvertedType::UTF8);
>   ByteArray ba;
> {code}
>
> {code:cpp}
> StreamWriter& StreamWriter::WriteVariableLength(const char* data_ptr,
>                                                 std::size_t data_len) {
>   CheckColumn(Type::BYTE_ARRAY, ConvertedType::UTF8);
> {code}
>
> Though the C++ parquet::schema::Node allows a physical type of BYTE_ARRAY with ConvertedType::NONE, the stream reader/writer classes throw when ConvertedType != UTF8.
> std::string is, unfortunately, the canonical byte buffer class in C++.
> A simple approach might be to add an operator>> taking a parquet::ByteArray, with CheckColumn(BYTE_ARRAY, NONE), and let the user take it from there. That would reuse the existing methods that the std::string overload uses. Just an idea.
> I am new to this forum and have assigned Major to this bug, but gladly defer to those who have a better grasp of classification.
--
This message was sent by Atlassian Jira
(v8.3.4#803005)
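The reporter's suggested API shape can be sketched with a self-contained mock; this is an illustration of the idea only, not the real parquet-cpp StreamReader, and the ByteArray struct, constructor, and operator>> plumbing here are all invented for the example:

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Mock illustrating the suggested API shape -- not the actual
// parquet-cpp classes. Real readers pull from a Parquet column;
// this stand-in just serves a fixed byte buffer.
struct ByteArray {            // stand-in for parquet::ByteArray
  std::vector<uint8_t> bytes;
};

class StreamReader {
 public:
  explicit StreamReader(std::vector<uint8_t> data) : data_(std::move(data)) {}

  // Existing path: UTF8-annotated columns extract into std::string.
  StreamReader& operator>>(std::string& v) {
    v.assign(data_.begin(), data_.end());
    return *this;
  }

  // Suggested addition: raw BYTE_ARRAY columns (ConvertedType::NONE)
  // extract into a ByteArray, leaving interpretation to the caller.
  StreamReader& operator>>(ByteArray& v) {
    v.bytes = data_;
    return *this;
  }

 private:
  std::vector<uint8_t> data_;
};
```

The point of the second overload is that arbitrary bytes, including values that are not valid UTF-8, flow through without any encoding check.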