Posted to dev@parquet.apache.org by "Johannes Müller (Jira)" <ji...@apache.org> on 2020/09/01 15:36:00 UTC
[jira] [Commented] (PARQUET-1822) Parquet without Hadoop dependencies
[ https://issues.apache.org/jira/browse/PARQUET-1822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17188564#comment-17188564 ]
Johannes Müller commented on PARQUET-1822:
------------------------------------------
We're having the exact same issue. We would like to write to Parquet for the convenience and the obvious benefits of the format, but it seems impossible to do without a lot of overhead, including a Hadoop installation.
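One workaround worth noting: the Hadoop FileSystem/Path machinery (and therefore winutils.exe on Windows) can be sidestepped by handing AvroParquetWriter an org.apache.parquet.io.OutputFile instead of a Hadoop Path. The Hadoop jars still have to be on the classpath, since parquet-mr depends on them, but no Hadoop installation is needed. A minimal usage sketch, assuming parquet-avro 1.11.x and the hypothetical StreamOutputFile adapter sketched below, after the quoted builder snippet:

// Usage sketch (assumes parquet-avro 1.11.x on the classpath and the
// hypothetical StreamOutputFile adapter sketched further down): writes Avro
// GenericData.Records to Parquet through a plain java.io.OutputStream,
// bypassing Hadoop's FileSystem/Path machinery entirely.
import java.io.FileOutputStream;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecordBuilder;
import org.apache.parquet.avro.AvroParquetWriter;
import org.apache.parquet.hadoop.ParquetWriter;
import org.apache.parquet.hadoop.metadata.CompressionCodecName;

public class WriteWithoutHadoopPath {
    public static void main(String[] args) throws Exception {
        // Tiny inline schema just for illustration.
        Schema avroSchema = new Schema.Parser().parse(
            "{\"type\":\"record\",\"name\":\"Rec\","
            + "\"fields\":[{\"name\":\"id\",\"type\":\"long\"}]}");

        try (FileOutputStream out = new FileOutputStream("example.parquet");
             ParquetWriter<GenericData.Record> writer =
                 AvroParquetWriter.<GenericData.Record>builder(new StreamOutputFile(out))
                     .withSchema(avroSchema)
                     .withCompressionCodec(CompressionCodecName.SNAPPY)
                     .build()) {
            writer.write(new GenericRecordBuilder(avroSchema).set("id", 1L).build());
        }
    }
}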
> Parquet without Hadoop dependencies
> -----------------------------------
>
> Key: PARQUET-1822
> URL: https://issues.apache.org/jira/browse/PARQUET-1822
> Project: Parquet
> Issue Type: Improvement
> Components: parquet-avro
> Affects Versions: 1.11.0
> Environment: Amazon Fargate (linux), Windows development box.
> We are writing Parquet to be read by the Snowflake and Athena databases.
> Reporter: mark juchems
> Priority: Minor
> Labels: documentation, newbie
>
> I have been trying for weeks to create a Parquet file from Avro and write it to S3 in Java. This has been incredibly frustrating and odd, as Spark can do it easily (I'm told).
> I have assembled the correct jars through luck and diligence, but now I find out that I have to have Hadoop installed on my machine. I am currently developing on Windows, and it seems a DLL and an EXE can fix that up, but I am wondering about Linux, as the code will eventually run in Fargate on AWS.
> *Why do I need external dependencies and not pure java?*
> The real issue is how utterly complex all of this is. I would like to create an Avro file, convert it to Parquet, and write it to S3, but I am trapped in "ParquetWriter" hell!
> *Why can't I get a normal OutputStream and write it wherever I want?*
> I have scoured the web for examples, and there are a few, but we really need some documentation on this stuff. I understand that there may be reasons for all this, but I can't find them anywhere on the web. Any help? Can't we get a "SimpleParquet" jar that does this:
>
> ParquetWriter<GenericData.Record> writer =
>     AvroParquetWriter.<GenericData.Record>builder(outputStream)
>         .withSchema(avroSchema)
>         .withConf(conf)
>         .withCompressionCodec(CompressionCodecName.SNAPPY)
>         .withWriteMode(Mode.OVERWRITE) // probably not good for prod (overwrites files)
>         .build();
>
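The builder in the quoted snippet cannot take a raw OutputStream, but recent parquet-mr versions (1.10+, if memory serves) have an AvroParquetWriter.builder(...) overload that accepts org.apache.parquet.io.OutputFile, which can be implemented over any OutputStream. A minimal sketch (StreamOutputFile is a made-up name, not a parquet-mr class):

// Minimal sketch of an OutputFile adapter over a plain java.io.OutputStream.
// StreamOutputFile is a hypothetical name; it is not part of parquet-mr.
import java.io.IOException;
import java.io.OutputStream;
import org.apache.parquet.io.OutputFile;
import org.apache.parquet.io.PositionOutputStream;

public class StreamOutputFile implements OutputFile {
    private final OutputStream out;

    public StreamOutputFile(OutputStream out) {
        this.out = out;
    }

    @Override
    public PositionOutputStream create(long blockSizeHint) {
        return new PositionOutputStream() {
            // Parquet needs the current write position to record footer offsets.
            private long pos = 0;

            @Override
            public long getPos() {
                return pos;
            }

            @Override
            public void write(int b) throws IOException {
                out.write(b);
                pos++;
            }

            @Override
            public void write(byte[] b, int off, int len) throws IOException {
                out.write(b, off, len);
                pos += len;
            }

            @Override
            public void flush() throws IOException {
                out.flush();
            }

            @Override
            public void close() throws IOException {
                out.close();
            }
        };
    }

    @Override
    public PositionOutputStream createOrOverwrite(long blockSizeHint) {
        return create(blockSizeHint);
    }

    @Override
    public boolean supportsBlockSize() {
        return false; // no HDFS-style block size hint for a plain stream
    }

    @Override
    public long defaultBlockSize() {
        return 0;
    }
}

With this adapter in place, AvroParquetWriter.<GenericData.Record>builder(new StreamOutputFile(outputStream)) works as in the usage sketch near the top of this page, with no Hadoop installation, only the jars.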
--
This message was sent by Atlassian Jira
(v8.3.4#803005)