Posted to users@hop.apache.org by Dieter Hehn <d....@traviangames.com> on 2022/08/18 09:41:36 UTC

Error writing row to parquet file

Hi Hop Community, 

I'm running Hop 2.0.0 locally (OpenJDK 11.0.15 2022-04-19 LTS) and have a remote server running Hop in a container:
docker run -d \
 --name=hop \
 --hostname=hop \
 -v /home/ec2-user/hop/data:/files \
 -v /tmp:/tmp \
 -e TZ=Europe/Berlin \
 -e HOP_SERVER_USER="xx" \
 -e HOP_SERVER_PASS="yy" \
 -e HOP_SERVER_PORT=8080 \
 -e HOP_SERVER_HOSTNAME=0.0.0.0 \
 -e HOP_SERVER_METADATA_FOLDER="/files/metadata" \
 -e HOP_SHARED_JDBC_FOLDER="/files/jdbc" \
 -e HOP_LOG_PATH="/files/log/hop.log" \
 -e HOP_SERVER_MAX_LOG_LINES=10000 \
 -e HOP_SERVER_MAX_LOG_TIMEOUT=720 \
 -e HOP_SERVER_MAX_OBJECT_TIMEOUT=720 \
 -e AWS_ACCESS_KEY_ID="abc" \
 -e AWS_SECRET_ACCESS_KEY="def" \
 -e AWS_REGION="xx" \
 --restart unless-stopped \
 -m 16G \
 apache/hop

I'm running a pipeline that reads table input from MySQL and writes Parquet output (to S3). Running the pipeline with the local run configuration works great, but executing the same pipeline with a remote run configuration on the server causes the error below.
Any hints as to what might be causing the different behavior locally vs. remotely, and how to fix it, would be greatly appreciated. Thanks in advance!
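
For reference, the same choice of run configuration can be expressed on the command line with hop-run; this is only a sketch, and the pipeline path and run configuration names ("local", "remote") are placeholders, not the actual names from my project:

 # local run configuration - works fine
 ./hop-run.sh --file=/files/pipelines/my_pipeline.hpl --runconfig=local
 # remote run configuration targeting the Hop Server - fails with the error below
 ./hop-run.sh --file=/files/pipelines/my_pipeline.hpl --runconfig=remote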

2022/08/18 10:52:21 - Parquet File Output .0 - ERROR: org.apache.hop.core.exception.HopException: 
2022/08/18 10:52:21 - Parquet File Output .0 - Error writing row to parquet file
2022/08/18 10:52:21 - Parquet File Output .0 - newLimit > capacity: (78 > 77)
2022/08/18 10:52:21 - Parquet File Output .0 - 
2022/08/18 10:52:21 - Parquet File Output .0 - 	at org.apache.hop.parquet.transforms.output.ParquetOutput.processRow(ParquetOutput.java:122)
2022/08/18 10:52:21 - Parquet File Output .0 - 	at org.apache.hop.pipeline.transform.RunThread.run(RunThread.java:51)
2022/08/18 10:52:21 - Parquet File Output .0 - 	at java.base/java.lang.Thread.run(Thread.java:829)
2022/08/18 10:52:21 - Parquet File Output .0 - Caused by: java.lang.IllegalArgumentException: newLimit > capacity: (78 > 77)
2022/08/18 10:52:21 - Parquet File Output .0 - 	at java.base/java.nio.Buffer.createLimitException(Buffer.java:372)
2022/08/18 10:52:21 - Parquet File Output .0 - 	at java.base/java.nio.Buffer.limit(Buffer.java:346)
2022/08/18 10:52:21 - Parquet File Output .0 - 	at java.base/java.nio.ByteBuffer.limit(ByteBuffer.java:1107)
2022/08/18 10:52:21 - Parquet File Output .0 - 	at java.base/java.nio.MappedByteBuffer.limit(MappedByteBuffer.java:235)
2022/08/18 10:52:21 - Parquet File Output .0 - 	at java.base/java.nio.MappedByteBuffer.limit(MappedByteBuffer.java:67)
2022/08/18 10:52:21 - Parquet File Output .0 - 	at org.xerial.snappy.Snappy.compress(Snappy.java:156)
2022/08/18 10:52:21 - Parquet File Output .0 - 	at org.apache.parquet.hadoop.codec.SnappyCompressor.compress(SnappyCompressor.java:78)
2022/08/18 10:52:21 - Parquet File Output .0 - 	at org.apache.hadoop.io.compress.CompressorStream.compress(CompressorStream.java:81)
2022/08/18 10:52:21 - Parquet File Output .0 - 	at org.apache.hadoop.io.compress.CompressorStream.finish(CompressorStream.java:92)
2022/08/18 10:52:21 - Parquet File Output .0 - 	at org.apache.parquet.hadoop.CodecFactory$HeapBytesCompressor.compress(CodecFactory.java:167)
2022/08/18 10:52:21 - Parquet File Output .0 - 	at org.apache.parquet.hadoop.ColumnChunkPageWriteStore$ColumnChunkPageWriter.writeDictionaryPage(ColumnChunkPageWriteStore.java:375)
2022/08/18 10:52:21 - Parquet File Output .0 - 	at org.apache.parquet.column.impl.ColumnWriterBase.finalizeColumnChunk(ColumnWriterBase.java:311)
2022/08/18 10:52:21 - Parquet File Output .0 - 	at org.apache.parquet.column.impl.ColumnWriteStoreBase.flush(ColumnWriteStoreBase.java:188)
2022/08/18 10:52:21 - Parquet File Output .0 - 	at org.apache.parquet.column.impl.ColumnWriteStoreV1.flush(ColumnWriteStoreV1.java:29)
2022/08/18 10:52:21 - Parquet File Output .0 - 	at org.apache.parquet.hadoop.InternalParquetRecordWriter.flushRowGroupToStore(InternalParquetRecordWriter.java:185)
2022/08/18 10:52:21 - Parquet File Output .0 - 	at org.apache.parquet.hadoop.InternalParquetRecordWriter.checkBlockSizeReached(InternalParquetRecordWriter.java:158)
2022/08/18 10:52:21 - Parquet File Output .0 - 	at org.apache.parquet.hadoop.InternalParquetRecordWriter.write(InternalParquetRecordWriter.java:140)
2022/08/18 10:52:21 - Parquet File Output .0 - 	at org.apache.parquet.hadoop.ParquetWriter.write(ParquetWriter.java:310)
2022/08/18 10:52:21 - Parquet File Output .0 - 	at org.apache.hop.parquet.transforms.output.ParquetOutput.processRow(ParquetOutput.java:118)
2022/08/18 10:52:21 - Parquet File Output .0 - 	... 2 more

Cheers,
Dieter



Re: Error writing row to parquet file

Posted by Matt Casters <ma...@neo4j.com>.
This could be an error that occurs while compressing larger amounts of data.
Could you perhaps set HOP_OPTIONS to give the JVM more memory?
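
As far as I know, the apache/hop image passes HOP_OPTIONS on to the JVM, and the container's -m 16G limit does not by itself raise the JVM heap. So, as a rough sketch (the -Xmx value is only an example, not a recommendation), you could add one line to the docker run command:

 -e HOP_OPTIONS="-Xmx8g" \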

On Thu, Aug 18, 2022, 11:41 Dieter Hehn <d....@traviangames.com> wrote:

> Hi Hop Community,
>
> I run Hop 2.0.0 locally (openjdk 11.0.15 2022-04-19 LTS) and have a remote
> server running Hop containerized:
> docker run -d \
>  --name=hop \
>  --hostname=hop \
>  -v /home/ec2-user/hop/data:/files \
>  -v /tmp:/tmp \
>  -e TZ=Europe/Berlin \
>  -e HOP_SERVER_USER="xx" \
>  -e HOP_SERVER_PASS="yy" \
>  -e HOP_SERVER_PORT=8080 \
>  -e HOP_SERVER_HOSTNAME=0.0.0.0 \
>  -e HOP_SERVER_METADATA_FOLDER="/files/metadata" \
>  -e HOP_SHARED_JDBC_FOLDER="/files/jdbc" \
>  -e HOP_LOG_PATH="/files/log/hop.log" \
>  -e HOP_SERVER_MAX_LOG_LINES=10000 \
>  -e HOP_SERVER_MAX_LOG_TIMEOUT=720 \
>  -e HOP_SERVER_MAX_OBJECT_TIMEOUT=720 \
>  -e AWS_ACCESS_KEY_ID="abc" \
>  -e AWS_SECRET_ACCESS_KEY="def" \
>  -e AWS_REGION="xx" \
>  --restart unless-stopped \
>  -m 16G \
>  apache/hop
>
> I'm running a pipeline which takes table input from mysql and creates
> parquet output (to s3). Running the pipeline with local run config works
> great, but executing the same pipeline with remote run config on the server
> causes the error below.
> Any hints what might be causing the different behavior locally vs.
> remotely, and how to fix it, would be greatly appreciated. Thanks in
> advance!
>
> 2022/08/18 10:52:21 - Parquet File Output .0 - ERROR:
> org.apache.hop.core.exception.HopException:
> 2022/08/18 10:52:21 - Parquet File Output .0 - Error writing row to
> parquet file
> 2022/08/18 10:52:21 - Parquet File Output .0 - newLimit > capacity: (78 >
> 77)
> 2022/08/18 10:52:21 - Parquet File Output .0 -
> 2022/08/18 10:52:21 - Parquet File Output .0 -  at
> org.apache.hop.parquet.transforms.output.ParquetOutput.processRow(ParquetOutput.java:122)
> 2022/08/18 10:52:21 - Parquet File Output .0 -  at
> org.apache.hop.pipeline.transform.RunThread.run(RunThread.java:51)
> 2022/08/18 10:52:21 - Parquet File Output .0 -  at
> java.base/java.lang.Thread.run(Thread.java:829)
> 2022/08/18 10:52:21 - Parquet File Output .0 - Caused by:
> java.lang.IllegalArgumentException: newLimit > capacity: (78 > 77)
> 2022/08/18 10:52:21 - Parquet File Output .0 -  at
> java.base/java.nio.Buffer.createLimitException(Buffer.java:372)
> 2022/08/18 10:52:21 - Parquet File Output .0 -  at
> java.base/java.nio.Buffer.limit(Buffer.java:346)
> 2022/08/18 10:52:21 - Parquet File Output .0 -  at
> java.base/java.nio.ByteBuffer.limit(ByteBuffer.java:1107)
> 2022/08/18 10:52:21 - Parquet File Output .0 -  at
> java.base/java.nio.MappedByteBuffer.limit(MappedByteBuffer.java:235)
> 2022/08/18 10:52:21 - Parquet File Output .0 -  at
> java.base/java.nio.MappedByteBuffer.limit(MappedByteBuffer.java:67)
> 2022/08/18 10:52:21 - Parquet File Output .0 -  at
> org.xerial.snappy.Snappy.compress(Snappy.java:156)
> 2022/08/18 10:52:21 - Parquet File Output .0 -  at
> org.apache.parquet.hadoop.codec.SnappyCompressor.compress(SnappyCompressor.java:78)
> 2022/08/18 10:52:21 - Parquet File Output .0 -  at
> org.apache.hadoop.io.compress.CompressorStream.compress(CompressorStream.java:81)
> 2022/08/18 10:52:21 - Parquet File Output .0 -  at
> org.apache.hadoop.io.compress.CompressorStream.finish(CompressorStream.java:92)
> 2022/08/18 10:52:21 - Parquet File Output .0 -  at
> org.apache.parquet.hadoop.CodecFactory$HeapBytesCompressor.compress(CodecFactory.java:167)
> 2022/08/18 10:52:21 - Parquet File Output .0 -  at
> org.apache.parquet.hadoop.ColumnChunkPageWriteStore$ColumnChunkPageWriter.writeDictionaryPage(ColumnChunkPageWriteStore.java:375)
> 2022/08/18 10:52:21 - Parquet File Output .0 -  at
> org.apache.parquet.column.impl.ColumnWriterBase.finalizeColumnChunk(ColumnWriterBase.java:311)
> 2022/08/18 10:52:21 - Parquet File Output .0 -  at
> org.apache.parquet.column.impl.ColumnWriteStoreBase.flush(ColumnWriteStoreBase.java:188)
> 2022/08/18 10:52:21 - Parquet File Output .0 -  at
> org.apache.parquet.column.impl.ColumnWriteStoreV1.flush(ColumnWriteStoreV1.java:29)
> 2022/08/18 10:52:21 - Parquet File Output .0 -  at
> org.apache.parquet.hadoop.InternalParquetRecordWriter.flushRowGroupToStore(InternalParquetRecordWriter.java:185)
> 2022/08/18 10:52:21 - Parquet File Output .0 -  at
> org.apache.parquet.hadoop.InternalParquetRecordWriter.checkBlockSizeReached(InternalParquetRecordWriter.java:158)
> 2022/08/18 10:52:21 - Parquet File Output .0 -  at
> org.apache.parquet.hadoop.InternalParquetRecordWriter.write(InternalParquetRecordWriter.java:140)
> 2022/08/18 10:52:21 - Parquet File Output .0 -  at
> org.apache.parquet.hadoop.ParquetWriter.write(ParquetWriter.java:310)
> 2022/08/18 10:52:21 - Parquet File Output .0 -  at
> org.apache.hop.parquet.transforms.output.ParquetOutput.processRow(ParquetOutput.java:118)
> 2022/08/18 10:52:21 - Parquet File Output .0 -  ... 2 more
>
> Cheers,
> Dieter
>
>
>
