Posted to issues@drill.apache.org by "James Turton (Jira)" <ji...@apache.org> on 2022/05/29 07:28:00 UTC

[jira] [Closed] (DRILL-7953) Query failed with (Too many open files)

     [ https://issues.apache.org/jira/browse/DRILL-7953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

James Turton closed DRILL-7953.
-------------------------------
    Resolution: Not A Bug

Drill's open files quota can be increased to resolve this issue, but the best way to apply that configuration depends on the OS on which Drill is running. Note that on systemd-based Linux distros, limits set in /etc/security/limits.conf are applied through PAM at login and do not apply to services started by systemd, which explains why the limits.conf entries quoted below never took effect; the limit must instead be set on the service unit itself. See [1] for details.

[1] https://unix.stackexchange.com/questions/345595/how-to-set-ulimits-on-service-with-systemd
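
As a concrete sketch for a systemd-based distro, the limit can be raised with a drop-in override for the Drillbit's service unit. The unit name "drillbit.service" below is an assumption; adjust it to match your installation.

    # /etc/systemd/system/drillbit.service.d/limits.conf
    # ("drillbit.service" is a hypothetical unit name; adjust to your install)
    [Service]
    LimitNOFILE=65536

Then reload systemd and restart the service so the new limit takes effect:

    systemctl daemon-reload
    systemctl restart drillbit.service

For installations not managed by systemd, an alternative is to raise the limit in the shell that launches the Drillbit, e.g. by adding "ulimit -n 65536" to conf/drill-env.sh, which is sourced at startup.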

> Query failed with (Too many open files)
> ---------------------------------------
>
>                 Key: DRILL-7953
>                 URL: https://issues.apache.org/jira/browse/DRILL-7953
>             Project: Apache Drill
>          Issue Type: Bug
>          Components:  Server
>    Affects Versions: 1.15.0
>            Reporter: Dony Dong
>            Priority: Major
>
> Hi Support,
>  
> When we query a complex view that accesses a large number of Parquet files, the query fails with the following error:
> Caused by: org.apache.drill.common.exceptions.ExecutionSetupException: Error opening or reading metadata for parquet file at location: part-00006-df5fe7db-6086-43a3-9575-1b18c140b5e6-c000.snappy.parquet
>  at org.apache.drill.exec.store.parquet.columnreaders.PageReader.<init>(PageReader.java:151) ~[drill-java-exec-1.15.0.jar:1.15.0]
>  at org.apache.drill.exec.store.parquet.columnreaders.AsyncPageReader.<init>(AsyncPageReader.java:97) ~[drill-java-exec-1.15.0.jar:1.15.0]
>  at org.apache.drill.exec.store.parquet.columnreaders.ColumnReader.<init>(ColumnReader.java:100) ~[drill-java-exec-1.15.0.jar:1.15.0]
>  at org.apache.drill.exec.store.parquet.columnreaders.NullableColumnReader.<init>(NullableColumnReader.java:43) ~[drill-java-exec-1.15.0.jar:1.15.0]
>  at org.apache.drill.exec.store.parquet.columnreaders.NullableFixedByteAlignedReaders$NullableFixedByteAlignedReader.<init>(NullableFixedByteAlignedReaders.java:54) ~[drill-java-exec-1.15.0.jar:1.15.0]
>  at org.apache.drill.exec.store.parquet.columnreaders.NullableFixedByteAlignedReaders$NullableConvertedReader.<init>(NullableFixedByteAlignedReaders.java:328) ~[drill-java-exec-1.15.0.jar:1.15.0]
>  at org.apache.drill.exec.store.parquet.columnreaders.NullableFixedByteAlignedReaders$NullableDateReader.<init>(NullableFixedByteAlignedReaders.java:348) ~[drill-java-exec-1.15.0.jar:1.15.0]
>  at org.apache.drill.exec.store.parquet.columnreaders.ColumnReaderFactory.createFixedColumnReader(ColumnReaderFactory.java:185) ~[drill-java-exec-1.15.0.jar:1.15.0]
>  at org.apache.drill.exec.store.parquet.columnreaders.ParquetColumnMetadata.makeFixedWidthReader(ParquetColumnMetadata.java:141) ~[drill-java-exec-1.15.0.jar:1.15.0]
>  at org.apache.drill.exec.store.parquet.columnreaders.ReadState.buildReader(ReadState.java:123) ~[drill-java-exec-1.15.0.jar:1.15.0]
>  at org.apache.drill.exec.store.parquet.columnreaders.ParquetRecordReader.setup(ParquetRecordReader.java:253) ~[drill-java-exec-1.15.0.jar:1.15.0]
>  ... 29 common frames omitted
> Caused by: java.io.FileNotFoundException: /data/testing/CH/part-00006-df5fe7db-6086-43a3-9575-1b18c140b5e6-c000.snappy.parquet (Too many open files)
>  at java.io.FileInputStream.open0(Native Method) ~[na:1.8.0_181]
>  at java.io.FileInputStream.open(FileInputStream.java:195) ~[na:1.8.0_181]
>  at java.io.FileInputStream.<init>(FileInputStream.java:138) ~[na:1.8.0_181]
>  at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileInputStream.<init>(RawLocalFileSystem.java:106) ~[hadoop-common-2.7.4.jar:na]
>  at org.apache.hadoop.fs.RawLocalFileSystem.open(RawLocalFileSystem.java:202) ~[hadoop-common-2.7.4.jar:na]
>  at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.<init>(ChecksumFileSystem.java:143) ~[hadoop-common-2.7.4.jar:na]
>  at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:346) ~[hadoop-common-2.7.4.jar:na]
>  at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:769) ~[hadoop-common-2.7.4.jar:na]
>  at org.apache.drill.exec.store.dfs.DrillFileSystem.open(DrillFileSystem.java:151) ~[drill-java-exec-1.15.0.jar:1.15.0]
>  at org.apache.drill.exec.store.parquet.columnreaders.PageReader.<init>(PageReader.java:133) ~[drill-java-exec-1.15.0.jar:1.15.0]
>  ... 39 common frames omitted
>  
> We added the lines below to /etc/security/limits.conf, but Drill still uses the default setting of 1024 at startup.
> * hard nofile 65536
> * soft nofile 65536
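>
> For reference, the limit actually in effect for the running Drillbit can be checked directly (a sketch; the "pgrep -f Drillbit" pattern is an assumption and may need adjusting for your setup):
>
>     # Look up the Drillbit JVM's PID and read its effective open-files limit
>     pid=$(pgrep -f Drillbit)
>     grep "open files" /proc/$pid/limits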
>  
> Fri Jun 18 19:40:33 AEST 2021 Starting drillbit on drill-testing
> core file size (blocks, -c) 0
> data seg size (kbytes, -d) unlimited
> scheduling priority (-e) 0
> file size (blocks, -f) unlimited
> pending signals (-i) 740731
> max locked memory (kbytes, -l) 64
> max memory size (kbytes, -m) unlimited
> open files (-n) 1024
> pipe size (512 bytes, -p) 8
> POSIX message queues (bytes, -q) 819200
> real-time priority (-r) 0
> stack size (kbytes, -s) 8192
> cpu time (seconds, -t) unlimited
> max user processes (-u) 740731
> virtual memory (kbytes, -v) unlimited
> file locks (-x) unlimited
>  
> Is there some place we can set this parameter?



--
This message was sent by Atlassian Jira
(v8.20.7#820007)