Posted to users@nifi.apache.org by Mike Thomsen <mi...@gmail.com> on 2018/06/07 22:18:36 UTC
Anyone use the Parquet processors with local fs?
Just wondering if anyone has configured the Parquet processors to write to
a local file system and if so, what was involved. My understanding is that
S3 and HDFS are normally required.
Thanks,
Mike
Re: Anyone use the Parquet processors with local fs?
Posted by Matt Burgess <ma...@apache.org>.
Mike,
IIRC you just need your Hadoop Configuration Resources to include a
core-site.xml that sets the default filesystem to file:/// or
something similar. Others will certainly know better and can
hopefully confirm/deny :)
For example, in the unit test PutParquetTest, the core-site.xml has
the following property:
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>file:///</value>
  </property>
</configuration>
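
Fleshing that out, a minimal standalone core-site.xml along those lines
might look like the following (the file path and property values shown
are illustrative assumptions, not requirements):

  <?xml version="1.0"?>
  <!-- Minimal core-site.xml pointing Hadoop at the local filesystem.
       Save it somewhere the NiFi service account can read, e.g.
       /opt/nifi/conf-local/core-site.xml (example path only). -->
  <configuration>
    <property>
      <name>fs.defaultFS</name>
      <value>file:///</value>
    </property>
  </configuration>

Then, on the Parquet processor (e.g. PutParquet), you would presumably
set "Hadoop Configuration Resources" to the path of that file and the
Directory property to a local path such as /tmp/parquet-out, so that
paths resolve against the local filesystem instead of HDFS.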
Regards,
Matt
On Thu, Jun 7, 2018 at 6:18 PM, Mike Thomsen <mi...@gmail.com> wrote:
> Just wondering if anyone has configured the Parquet processors to write to a
> local file system and if so, what was involved. My understanding is that S3
> and HDFS are normally required.
>
> Thanks,
>
> Mike