Posted to dev@parquet.apache.org by "ian (Jira)" <ji...@apache.org> on 2020/12/23 15:55:00 UTC

[jira] [Created] (PARQUET-1958) Forced UTF8 encoding of BYTE_ARRAY on stream::read/write

ian created PARQUET-1958:
----------------------------

             Summary: Forced UTF8 encoding of BYTE_ARRAY on stream::read/write
                 Key: PARQUET-1958
                 URL: https://issues.apache.org/jira/browse/PARQUET-1958
             Project: Parquet
          Issue Type: Bug
          Components: parquet-cpp
    Affects Versions: cpp-1.5.0
            Reporter: ian


{code:cpp}
StreamReader& StreamReader::operator>>(optional<std::string>& v) {
  CheckColumn(Type::BYTE_ARRAY, ConvertedType::UTF8);
  ByteArray ba;
  // ...
{code}

{code:cpp}
StreamWriter& StreamWriter::WriteVariableLength(const char* data_ptr,
                                                std::size_t data_len) {
  CheckColumn(Type::BYTE_ARRAY, ConvertedType::UTF8);
  // ...
{code}

Although the C++ parquet::schema::Node allows a physical type of BYTE_ARRAY with ConvertedType::NONE, the stream reader/writer classes throw whenever ConvertedType != UTF8.

std::string is, unfortunately, the canonical byte buffer class in C++.
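
For concreteness, a minimal sketch of the failure, assuming the usual stream-writer setup (untested; file and column names are placeholders):

{code:cpp}
// Sketch of a reproducer: one BYTE_ARRAY column deliberately *not*
// annotated as UTF8, written through the stream API.
#include <memory>
#include <string>

#include <arrow/io/file.h>
#include <parquet/exception.h>
#include <parquet/stream_writer.h>

int main() {
  // Schema: a single required BYTE_ARRAY column with ConvertedType::NONE.
  parquet::schema::NodeVector fields;
  fields.push_back(parquet::schema::PrimitiveNode::Make(
      "raw_bytes", parquet::Repetition::REQUIRED,
      parquet::Type::BYTE_ARRAY, parquet::ConvertedType::NONE));
  auto schema = std::static_pointer_cast<parquet::schema::GroupNode>(
      parquet::schema::GroupNode::Make("schema", parquet::Repetition::REQUIRED,
                                       fields));

  std::shared_ptr<arrow::io::FileOutputStream> outfile;
  PARQUET_ASSIGN_OR_THROW(outfile,
                          arrow::io::FileOutputStream::Open("bytes.parquet"));

  parquet::StreamWriter os{parquet::ParquetFileWriter::Open(outfile, schema)};

  // Throws ParquetException: WriteVariableLength() insists on
  // ConvertedType::UTF8 even though the column's schema says NONE.
  os << std::string("\x01\x02\x03") << parquet::EndRow;
  return 0;
}
{code}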

A simple approach might be to add an operator>> overload for parquet::ByteArray that calls CheckColumn(Type::BYTE_ARRAY, ConvertedType::NONE) and lets the user take it from there. That would reuse the existing machinery behind the std::string overloads. Just an idea; a rough sketch follows.
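
A minimal sketch of that reader overload, assuming Read<ByteArrayReader> is the same private helper the std::string overload uses:

{code:cpp}
// Sketch only, not a reviewed patch: accept raw BYTE_ARRAY columns by
// checking for ConvertedType::NONE instead of UTF8.
// Read<ByteArrayReader> is assumed from the existing std::string overload.
StreamReader& StreamReader::operator>>(ByteArray& v) {
  CheckColumn(Type::BYTE_ARRAY, ConvertedType::NONE);
  Read<ByteArrayReader>(&v);
  return *this;
}
{code}

The writer side would need a parallel change, since WriteVariableLength() hard-codes the UTF8 check.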

I am new to this forum and have assigned Major priority to this bug, but I gladly defer to those with a better grasp of classification.


