Posted to dev@drill.apache.org by "Dony Dong (Jira)" <ji...@apache.org> on 2021/06/21 05:28:00 UTC
[jira] [Created] (DRILL-7953) Query failed with (Too many open files)
Dony Dong created DRILL-7953:
--------------------------------
Summary: Query failed with (Too many open files)
Key: DRILL-7953
URL: https://issues.apache.org/jira/browse/DRILL-7953
Project: Apache Drill
Issue Type: Bug
Components: Server
Affects Versions: 1.15.0
Reporter: Dony Dong
Hi Support,
When we query a complex view that accesses a lot of Parquet files, the query fails with the following error:
Caused by: org.apache.drill.common.exceptions.ExecutionSetupException: Error opening or reading metadata for parquet file at location: part-00006-df5fe7db-6086-43a3-9575-1b18c140b5e6-c000.snappy.parquet
at org.apache.drill.exec.store.parquet.columnreaders.PageReader.<init>(PageReader.java:151) ~[drill-java-exec-1.15.0.jar:1.15.0]
at org.apache.drill.exec.store.parquet.columnreaders.AsyncPageReader.<init>(AsyncPageReader.java:97) ~[drill-java-exec-1.15.0.jar:1.15.0]
at org.apache.drill.exec.store.parquet.columnreaders.ColumnReader.<init>(ColumnReader.java:100) ~[drill-java-exec-1.15.0.jar:1.15.0]
at org.apache.drill.exec.store.parquet.columnreaders.NullableColumnReader.<init>(NullableColumnReader.java:43) ~[drill-java-exec-1.15.0.jar:1.15.0]
at org.apache.drill.exec.store.parquet.columnreaders.NullableFixedByteAlignedReaders$NullableFixedByteAlignedReader.<init>(NullableFixedByteAlignedReaders.java:54) ~[drill-java-exec-1.15.0.jar:1.15.0]
at org.apache.drill.exec.store.parquet.columnreaders.NullableFixedByteAlignedReaders$NullableConvertedReader.<init>(NullableFixedByteAlignedReaders.java:328) ~[drill-java-exec-1.15.0.jar:1.15.0]
at org.apache.drill.exec.store.parquet.columnreaders.NullableFixedByteAlignedReaders$NullableDateReader.<init>(NullableFixedByteAlignedReaders.java:348) ~[drill-java-exec-1.15.0.jar:1.15.0]
at org.apache.drill.exec.store.parquet.columnreaders.ColumnReaderFactory.createFixedColumnReader(ColumnReaderFactory.java:185) ~[drill-java-exec-1.15.0.jar:1.15.0]
at org.apache.drill.exec.store.parquet.columnreaders.ParquetColumnMetadata.makeFixedWidthReader(ParquetColumnMetadata.java:141) ~[drill-java-exec-1.15.0.jar:1.15.0]
at org.apache.drill.exec.store.parquet.columnreaders.ReadState.buildReader(ReadState.java:123) ~[drill-java-exec-1.15.0.jar:1.15.0]
at org.apache.drill.exec.store.parquet.columnreaders.ParquetRecordReader.setup(ParquetRecordReader.java:253) ~[drill-java-exec-1.15.0.jar:1.15.0]
... 29 common frames omitted
Caused by: java.io.FileNotFoundException: /data/testing/CH/part-00006-df5fe7db-6086-43a3-9575-1b18c140b5e6-c000.snappy.parquet (Too many open files)
at java.io.FileInputStream.open0(Native Method) ~[na:1.8.0_181]
at java.io.FileInputStream.open(FileInputStream.java:195) ~[na:1.8.0_181]
at java.io.FileInputStream.<init>(FileInputStream.java:138) ~[na:1.8.0_181]
at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileInputStream.<init>(RawLocalFileSystem.java:106) ~[hadoop-common-2.7.4.jar:na]
at org.apache.hadoop.fs.RawLocalFileSystem.open(RawLocalFileSystem.java:202) ~[hadoop-common-2.7.4.jar:na]
at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.<init>(ChecksumFileSystem.java:143) ~[hadoop-common-2.7.4.jar:na]
at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:346) ~[hadoop-common-2.7.4.jar:na]
at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:769) ~[hadoop-common-2.7.4.jar:na]
at org.apache.drill.exec.store.dfs.DrillFileSystem.open(DrillFileSystem.java:151) ~[drill-java-exec-1.15.0.jar:1.15.0]
at org.apache.drill.exec.store.parquet.columnreaders.PageReader.<init>(PageReader.java:133) ~[drill-java-exec-1.15.0.jar:1.15.0]
... 39 common frames omitted
We added the lines below to /etc/security/limits.conf, but Drill still uses the default limit of 1024 at startup.
* hard nofile 65536
* soft nofile 65536
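(A note on why the limits.conf change may not take effect: pam_limits reads /etc/security/limits.conf only when a new PAM login session starts, so an already running shell, or a service launched by systemd or an init script, never sees the new values. A minimal check, assuming you re-log-in as the Drill user first:

```shell
# limits.conf is applied by PAM at login; these values are only visible
# in sessions started after the file was changed. Log in again as the
# Drill user (or use `su - <user>`) before starting the Drillbit, then:
ulimit -Sn   # soft limit for open files in this session
ulimit -Hn   # hard limit for open files in this session
```

If these still show 1024 after a fresh login, something other than limits.conf is capping the process.)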
Fri Jun 18 19:40:33 AEST 2021 Starting drillbit on drill-testing
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 740731
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 740731
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
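(The `ulimit -a` dump above reflects the shell that started the Drillbit; a sketch for cross-checking what the running JVM itself was granted, assuming the process matches the pattern "Drillbit" — adjust the pgrep pattern to your installation:

```shell
# The kernel records each process's effective limits in /proc/<pid>/limits,
# which can differ from any login shell's ulimit output.
pid=$(pgrep -f Drillbit | head -n 1)
pid=${pid:-$$}                       # fall back to this shell for demo
grep 'Max open files' "/proc/$pid/limits"
```
)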
Is there some place we can set this parameter?
--
This message was sent by Atlassian Jira
(v8.3.4#803005)
Re: [jira] [Created] (DRILL-7953) Query failed with (Too many open files)
Posted by Charles Givre <cg...@gmail.com>.
Hi there,
Another question I'd have: can you first try upgrading Drill to a more recent version? 1.15 is at least several years old.
> On Jun 21, 2021, at 7:51 AM, James Turton <ja...@somecomputer.xyz.INVALID> wrote:
>
> Are you starting Drill using systemd? If so, see the LimitNOFILE option.
Re: [jira] [Created] (DRILL-7953) Query failed with (Too many open files)
Posted by James Turton <ja...@somecomputer.xyz.INVALID>.
Are you starting Drill using systemd? If so, see the LimitNOFILE option.
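(If Drill is indeed under systemd, the per-service file-descriptor limit comes from the unit file rather than limits.conf. A drop-in override sketch, assuming the unit is named drillbit.service — the path and unit name are hypothetical and should match your installation:

```
# /etc/systemd/system/drillbit.service.d/override.conf  (hypothetical path)
[Service]
LimitNOFILE=65536
```

After adding the override, run `systemctl daemon-reload` and restart the service, then confirm the new limit in /proc/<pid>/limits.)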
On 2021/06/21 07:28, Dony Dong (Jira) wrote: