Posted to user@drill.apache.org by "Zubair, Muhammad" <mu...@rbc.com.INVALID> on 2017/08/23 18:32:45 UTC

Apache Drill unable to read files from HDFS (Resource error: Failed to create schema tree)

Hello,
After setting up Drill on one of the edge nodes of our HDFS cluster, I am unable to read any HDFS files. I can query data from local files (as long as they are in a folder that has 777 permissions), but querying data from HDFS fails with the following error:
Error: RESOURCE ERROR: Failed to create schema tree.
[Error Id: d9f7908c-6c3b-49c0-a11e-71c004d27f46 on server-name:31010] (state=,code=0)
Query:
0: jdbc:drill:zk=local> select * from hdfs.`/names/city.parquet` limit 2;
Querying from local file works fine:
0: jdbc:drill:zk=local> select * from dfs.`/tmp/city.parquet` limit 2;
My HDFS settings are similar to the DFS settings, except that the connection URL is the server address instead of file:///.
I can't find anything online regarding this error for Drill.
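A minimal sketch of the distinction being described, i.e. the connection URL in the storage plugin configuration (host and port below are placeholders):

    "connection": "file:///"                    (dfs plugin, local file system)
    "connection": "hdfs://namenode-host:8020"   (hdfs plugin, name node address)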

Re: Apache Drill unable to read files from HDFS (Resource error: Failed to create schema tree)

Posted by Vlad Rozov <vr...@apache.org>.
The classpath looks good to me. By any chance, are you running a secure 
Hadoop cluster? Have you configured Drill to work with a secure cluster?

Thank you,

Vlad
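As a sketch of one common way to point Drill at a secure cluster's client settings (the /etc/hadoop/conf location is an assumption based on a stock HDP layout; $DRILL_HOME/conf is the first entry on the classpath shown below, so files placed there are picked up):

    # copy the cluster's Hadoop client configuration, including its
    # hadoop.security.authentication setting, where Drill can see it
    cp /etc/hadoop/conf/core-site.xml /etc/hadoop/conf/hdfs-site.xml \
       /app/pnlp/tools/drill/apache-drill-1.11.0/conf/

Restart the Drillbit/SQLLine session afterwards so the new configuration is read.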

On 8/24/17 10:52, Zubair, Muhammad wrote:
> Thanks Vlad.
>
> Using the full path to jinfo resulted in the same error. Using JConsole I was able to get the classpath and VM arguments:
>
> VM arguments:
> -XX:MaxPermSize=512M -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9000 -Dcom.sun.management.jmxremote.local.only=false -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dlog.path=/app/pnlp/tools/drill/apache-drill-1.11.0/log/sqlline.log -Dlog.query.path=/app/pnlp/tools/drill/apache-drill-1.11.0/log/sqlline_queries.json
> Class path:
> /app/pnlp/tools/drill/apache-drill-1.11.0/conf:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/drill-hive-exec-shaded-1.11.0.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/vector-1.11.0.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/drill-storage-hive-core-1.11.0.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/drill-protocol-1.11.0.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/drill-jdbc-storage-1.11.0.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/drill-logical-1.11.0.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/drill-common-1.11.0.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/drill-memory-base-1.11.0.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/drill-kudu-storage-1.11.0.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/drill-storage-hbase-1.11.0.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/tpch-sample-data-1.11.0.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/drill-rpc-1.11.0.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/drill-jdbc-1.11.0.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/drill-gis-1.11.0.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/drill-java-exec-1.11.0.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/drill-mongo-storage-1.11.0.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/ext/zookeeper-3.4.6.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/jsr305-3.0.0.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/kerby-pkix-1.0.0-RC2.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/jcodings-1.0.8.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/jcl-over-slf4j-1.7.6.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/parquet-encoding-1.8.1-drill-r0.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/hadoop-hdfs-2.7.1.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/netty-buffer-4.0.27.Final.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/aws-java-sdk-1.7.4.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/hbase-procedure-1.1.3.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/curator-framework-2.7.1.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/libthrift-0.9.2.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/httpcore-4.2.4.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/config-1.0.0.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/json-20090211.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/jetty-servlets-9.1.5.v20140505.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/hadoop-mapreduce-client-core-2.7.1.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/kerb-common-1.0.0-RC2.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/mongo-java-driver-3.2.0.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/jackson-annotations-2.7.1.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/kudu-client-1.3.0.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/commons-compiler-2.7.6.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/commons-math3-3.1.1.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/hbase-server-1.1.3.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/hadoop-yarn-common-2.7.1.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/netty-handler-4.0.27.Final.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/hive-metastore-1.2.1.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/jetty-continuation-9.1.1.v20140108.jar:/app/pnlp/tools/drill/apache-drill-1
.11.0/jars/3rdparty/commons-lang3-3.1.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/parquet-generator-1.8.1-drill-r0.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/interface-annotations-1.3.0.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/protostuff-json-1.0.8.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/bcprov-jdk15on-1.52.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/serializer-2.7.1.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/de.huxhorn.lilith.logback.converter-classic-0.9.44.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/metrics-jvm-3.0.1.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/esri-geometry-api-1.2.1.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/jackson-jaxrs-json-provider-2.7.1.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/jsch-0.1.42.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/guava-18.0.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/jcommander-1.30.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/hbase-client-1.1.3.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/jackson-module-jaxb-annotations-2.7.1.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/hadoop-mapreduce-client-app-2.7.1.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/hadoop-auth-2.7.1.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/kerb-admin-1.0.0-RC2.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/jetty-webapp-9.1.1.v20140108.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/hadoop-mapreduce-client-shuffle-2.7.1.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/de.huxhorn.sulky.formatting-0.9.17.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/eigenbase-properties-1.1.5.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/hadoop-common-2.7.1.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/kerb-identity-1.0.0-RC2.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/async-1.4.1.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/de.huxhorn.sulky.io-0.9.17.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/curator-x-discovery-2.7.1.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/findbugs-annotations-1.3.9-1.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/msgpack-0.6.6.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/asm-debug-all-5.0.3.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/hive-contrib-1.2.1.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/de.huxhorn.lilith.data.logging-0.9.44.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/hbase-common-1.1.3-tests.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/hbase-annotations-1.1.3.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/xalan-2.7.1.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/libfb303-0.9.2.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/hadoop-yarn-client-2.7.1.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/jdo-api-3.0.1.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/stax-api-1.0-2.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/jackson-jaxrs-base-2.7.1.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/jackson-jaxrs-1.9.13.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/de.huxhorn.lilith.logback.classic-0.9.44.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/disrup
tor-3.3.0.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/xercesImpl-2.11.0.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/kerby-config-1.0.0-RC2.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/snappy-java-1.1.2.6.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/kerb-core-1.0.0-RC2.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/avro-1.7.7.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/apacheds-i18n-2.0.0-M15.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/hadoop-aws-2.7.1.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/protostuff-core-1.0.8.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/hadoop-annotations-2.7.1.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/avro-ipc-1.7.7.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/metrics-core-3.0.1.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/antlr-runtime-3.4.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/hbase-prefix-tree-1.1.3.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/metrics-healthchecks-3.0.1.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/curator-recipes-2.7.1.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/jackson-core-2.7.1.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/jackson-module-afterburner-2.7.1.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/parquet-hadoop-1.8.1-drill-r0.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/mockito-core-1.9.5.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/jpam-1.1.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/univocity-parsers-1.3.0.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/de.huxhorn.sulky.codec-0.9.17.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/commons-digester-1.8.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/de.huxhorn.lilith.logback.appender.multiplex-core-0.9.44.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/jul-to-slf4j-1.7.6.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/commons-cli-1.2.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/javax.inject-1.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/datanucleus-rdbms-3.2.9.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/javassist-3.12.1.GA.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/antlr-2.7.7.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/netty-transport-4.0.27.Final.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/hppc-0.7.1.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/commons-collections-3.2.1.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/bcpkix-jdk15on-1.52.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/parquet-jackson-1.8.1-drill-r0.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/jdk.tools-1.7.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/commons-lang-2.6.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/jackson-core-asl-1.9.13.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/commons-math-2.2.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/de.huxhorn.lilith.data.eventsource-0.9.44.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/json-simple-1.1.1.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/hbase-protocol-1.1.3.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/kerb-util-1.0.0-RC2.jar:/app/pnlp/tools/dr
ill/apache-drill-1.11.0/jars/3rdparty/joni-2.1.2.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/kerby-util-1.0.0-RC2.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/metrics-json-3.0.1.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/datanucleus-core-3.2.10.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/jetty-io-9.1.5.v20140505.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/commons-compress-1.4.1.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/netty-transport-native-epoll-4.0.27.Final-linux-x86_64.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/validation-api-1.1.0.Final.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/objenesis-1.0.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/curator-client-2.7.1.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/log4j-over-slf4j-1.7.6.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/hadoop-client-2.7.1.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/calcite-core-1.4.0-drill-r21.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/kerby-asn1-1.0.0-RC2.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/parquet-column-1.8.1-drill-r0.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/commons-pool2-2.1.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/avro-mapred-1.7.7.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/sqlline-1.1.9-drill-r7.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/derby-10.10.2.0.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/parquet-common-1.8.1-drill-r0.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/jetty-servlet-9.1.5.v20140505.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/httpclient-4.2.5.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/jetty-security-9.1.5.v20140505.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/hadoop-mapreduce-client-common-2.7.1.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/jline-2.10.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/protostuff-api-1.0.8.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/protobuf-java-2.5.0.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/de.huxhorn.lilith.data.logging.protobuf-0.9.44.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/netty-codec-4.0.27.Final.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/bonecp-0.8.0.RELEASE.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/jta-1.1.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/jetty-http-9.1.5.v20140505.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/commons-io-2.4.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/slf4j-api-1.7.6.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/gson-2.2.4.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/commons-dbcp-1.4.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/jackson-databind-2.7.1.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/api-asn1-api-1.0.0-M20.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/kerb-crypto-1.0.0-RC2.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/jetty-xml-9.1.1.v20140108.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/netty-common-4.0.27.Final.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/janino-2.7.6.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/httpdlog-parser-2.4.jar:/app/pnlp/tools/dr
ill/apache-drill-1.11.0/jars/3rdparty/netty-3.7.0.Final.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/jsp-api-2.1.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/jackson-mapper-asl-1.9.11.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/hbase-hadoop-compat-1.1.3.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/commons-httpclient-3.1.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/hbase-common-1.1.3.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/parser-core-2.4.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/xml-apis-1.4.01.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/javassist-3.16.1-GA.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/parquet-format-2.3.0-incubating.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/commons-configuration-1.6.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/datanucleus-api-jdo-3.2.6.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/pentaho-aggdesigner-algorithm-5.1.5-jhyde.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/hbase-hadoop2-compat-1.1.3.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/commons-codec-1.10.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/joda-time-2.9.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/leveldbjni-all-1.8.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/commons-net-3.1.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/de.huxhorn.lilith.logback.appender.multiplex-classic-0.9.44.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/hive-hbase-handler-1.2.1.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/jetty-util-9.1.5.v20140505.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/stringtemplate-3.2.1.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/kerb-client-1.0.0-RC2.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/hadoop-yarn-server-common-2.7.1.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/de.huxhorn.lilith.sender-0.9.44.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/de.huxhorn.lilith.data.converter-0.9.44.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/htrace-core-3.1.0-incubating.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/freemarker-2.3.21.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/calcite-linq4j-1.4.0-drill-r21.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/hadoop-mapreduce-client-jobclient-2.7.1.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/kerb-server-1.0.0-RC2.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/commons-beanutils-core-1.8.0.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/metrics-core-2.2.0.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/commons-beanutils-1.7.0.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/xz-1.0.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/jetty-server-9.1.5.v20140505.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/velocity-1.7.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/kerb-simplekdc-1.0.0-RC2.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/hamcrest-core-1.1.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/metrics-servlets-3.0.1.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/xmlenc-0.52.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/foodmart-data-json-0.4.jar:/app/pnlp/tools/drill/apache-drill-
1.11.0/jars/3rdparty/avro-ipc-1.7.7-tests.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/dom4j-1.6.1.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/paranamer-2.5.6.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/apacheds-kerberos-codec-2.0.0-M15.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/calcite-avatica-1.4.0-drill-r21.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/commons-pool-1.5.4.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/api-util-1.0.0-M20.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/classb/jetty-6.1.26.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/classb/jersey-guava-2.8.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/classb/javax.ws.rs-api-2.0.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/classb/codemodel-2.6.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/classb/jaxb-api-2.2.2.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/classb/osgi-resource-locator-1.0.1.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/classb/jersey-container-servlet-core-2.8.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/classb/activation-1.1.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/classb/javax.inject-2.2.0.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/classb/jersey-client-2.8.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/classb/jersey-container-jetty-http-2.8.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/classb/hk2-api-2.2.0.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/classb/jetty-util-6.1.26.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/classb/jersey-mvc-2.8.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/classb/logback-classic-1.0.13.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/classb/jersey-container-jetty-servlet-2.8.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/classb/reflections-0.9.8.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/classb/jersey-container-servlet-2.8.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/classb/jersey-media-multipart-2.8.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/classb/jersey-server-2.8.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/classb/jersey-common-2.8.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/classb/mimepull-1.9.3.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/classb/aopalliance-repackaged-2.2.0.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/classb/javax.servlet-api-3.1.0.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/classb/jersey-mvc-freemarker-2.8.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/classb/logback-core-1.0.13.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/classb/hk2-locator-2.2.0.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/classb/hk2-utils-2.2.0.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/classb/javax.annotation-api-1.2.jar
> Library path:
> /usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
> Boot class path:
> /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.71-2.b15.el7_2.x86_64/jre/lib/resources.jar:/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.71-2.b15.el7_2.x86_64/jre/lib/rt.jar:/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.71-2.b15.el7_2.x86_64/jre/lib/sunrsasign.jar:/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.71-2.b15.el7_2.x86_64/jre/lib/jsse.jar:/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.71-2.b15.el7_2.x86_64/jre/lib/jce.jar:/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.71-2.b15.el7_2.x86_64/jre/lib/charsets.jar:/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.71-2.b15.el7_2.x86_64/jre/lib/jfr.jar:/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.71-2.b15.el7_2.x86_64/jre/classes
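Worth noting: the 3rdparty listing above appears to contain hadoop-yarn-common, hadoop-yarn-client and hadoop-yarn-server-common, but no hadoop-yarn-api jar, which is where org.apache.hadoop.yarn.api.ApplicationClientProtocolPB normally lives. A quick check, as a sketch:

    # search the bundled hadoop-yarn jars for the class named in the error
    for j in /app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/hadoop-yarn-*.jar; do
      unzip -l "$j" | grep -q 'yarn/api/ApplicationClientProtocolPB' && echo "$j"
    done

No output would mean none of the bundled YARN jars provide the class, which would be consistent with the library mismatch suspected below.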
>
>
>
>
>
> -----Original Message-----
> From: Vlad Rozov [mailto:vrozov@apache.org]
> Sent: August 24, 17 12:52 PM
> To: user@drill.apache.org
> Subject: Re: Apache Drill unable to read files from HDFS (Resource error: Failed to create schema tree)
>
> Try /etc/alternatives/java_sdk_1.8.0/bin/jinfo <pid> or use jconsole to get the classpath.
>
> Thank you,
>
> Vlad
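As a concrete sketch of that step (jps lists running JVMs with their main class and arguments; the resulting pid is then passed to jinfo; 10961 is the pid from the process listing further down):

    /etc/alternatives/java_sdk_1.8.0/bin/jps -lm     # find the SQLLine/Drillbit JVM
    /etc/alternatives/java_sdk_1.8.0/bin/jinfo 10961 # dump its properties and flags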
>
> On 8/24/17 09:38, Zubair, Muhammad wrote:
>> $ yarn version
>> Hadoop 2.7.1.2.4.2.0-258
>> Subversion git@github.com:hortonworks/hadoop.git -r 13debf893a605e8a88df18a7d8d214f571e05289
>> Compiled by jenkins on 2016-04-25T05:46Z
>> Compiled with protoc 2.5.0
>> From source with checksum 2a2d95f05ec6c3ac547ed58cab713ac
>> This command was run using /usr/hdp/2.4.2.0-258/hadoop/hadoop-common-2.7.1.2.4.2.0-258.jar
>>
>> Drill process:
>>
>> 10961 pts/0    Sl+    0:18 /etc/alternatives/java_sdk_1.8.0/bin/java -XX:MaxPermSize=512M -Dlog.path=/app/pnlp/tools/drill/apache-drill
>>
>> $ jinfo 10961
>> Attaching to process ID 10961, please wait...
>> Debugger attached successfully.
>> Server compiler detected.
>> JVM version is 25.71-b15
>> Java System Properties:
>>
>> Exception in thread "main" java.lang.reflect.InvocationTargetException
>>           at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>           at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>>           at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>           at java.lang.reflect.Method.invoke(Method.java:497)
>>           at sun.tools.jinfo.JInfo.runTool(JInfo.java:108)
>>           at sun.tools.jinfo.JInfo.main(JInfo.java:76)
>> Caused by: java.lang.InternalError: Metadata does not appear to be polymorphic
>>           at sun.jvm.hotspot.types.basic.BasicTypeDataBase.findDynamicTypeForAddress(BasicTypeDataBase.java:278)
>>           at sun.jvm.hotspot.runtime.VirtualBaseConstructor.instantiateWrapperFor(VirtualBaseConstructor.java:102)
>>           at sun.jvm.hotspot.oops.Metadata.instantiateWrapperFor(Metadata.java:68)
>>           at sun.jvm.hotspot.memory.SystemDictionary.getSystemKlass(SystemDictionary.java:127)
>>           at sun.jvm.hotspot.runtime.VM.readSystemProperties(VM.java:879)
>>           at sun.jvm.hotspot.runtime.VM.getSystemProperties(VM.java:873)
>>           at sun.jvm.hotspot.tools.SysPropsDumper.run(SysPropsDumper.java:44)
>>           at sun.jvm.hotspot.tools.JInfo$1.run(JInfo.java:79)
>>           at sun.jvm.hotspot.tools.JInfo.run(JInfo.java:94)
>>           at sun.jvm.hotspot.tools.Tool.startInternal(Tool.java:260)
>>           at sun.jvm.hotspot.tools.Tool.start(Tool.java:223)
>>           at sun.jvm.hotspot.tools.Tool.execute(Tool.java:118)
>>           at sun.jvm.hotspot.tools.JInfo.main(JInfo.java:138)
>>           ... 6 more
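This particular InternalError ("Metadata does not appear to be polymorphic") usually indicates that the serviceability agent behind jinfo does not match the target JVM. As an alternative sketch, jcmd from the same JDK reports the same information without the attach-time class parsing:

    /etc/alternatives/java_sdk_1.8.0/bin/jcmd 10961 VM.system_properties   # includes java.class.path
    /etc/alternatives/java_sdk_1.8.0/bin/jcmd 10961 VM.flags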
>>
>>
>>
>> -----Original Message-----
>> From: Vlad Rozov [mailto:vrozov@apache.org]
>> Sent: August 24, 17 12:17 PM
>> To: user@drill.apache.org
>> Subject: Re: Apache Drill unable to read files from HDFS (Resource
>> error: Failed to create schema tree)
>>
>> One possible problem is a mismatch of YARN libraries on the edge node. What Hadoop distro and version do you have on the edge node? Can you provide the output (classpath) of "jinfo <pid>", where pid is the Foreman/Drillbit process id?
>>
>> Caused By (java.lang.NoClassDefFoundError)
>> org/apache/hadoop/yarn/api/ApplicationClientProtocolPB
>> org.apache.hadoop.yarn.security.client.ClientRMSecurityInfo.getTokenInfo():65
>> org.apache.hadoop.security.SecurityUtil.getTokenInfo():331
>> org.apache.hadoop.security.SaslRpcClient.getServerToken():263
>> org.apache.hadoop.security.SaslRpcClient.createSaslClient():219
>> org.apache.hadoop.security.SaslRpcClient.selectSaslClient():159
>> org.apache.hadoop.security.SaslRpcClient.saslConnect():396
>> org.apache.hadoop.ipc.Client$Connection.setupSaslConnection():555
>> org.apache.hadoop.ipc.Client$Connection.access$1800():370
>> org.apache.hadoop.ipc.Client$Connection$2.run():724
>>
>> Thank you,
>>
>> Vlad
>>
>> On 8/24/17 08:39, Zubair, Muhammad wrote:
>>> Padma,
>>> I've already modified the configuration as specified, but the error is still there.
>>>
>>> Running hdfs dfs -ls /folder does return a list of files.
>>>
>>> I enabled verbose error logging; here's the full error message:
>>>
>>> org.apache.drill.common.exceptions.UserRemoteException: RESOURCE ERROR: Failed to create schema tree.
>>> [Error Id: 28c0c9a2-460d-460e-b93b-1d34e341cc65 on server:31010]
>>> (java.io.IOException) Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "server/10.61.60.113"; destination host is: "hdfs-server":8020;
>>> org.apache.hadoop.net.NetUtils.wrapException():776
>>> org.apache.hadoop.ipc.Client.call():1480
>>> org.apache.hadoop.ipc.Client.call():1407
>>> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke():229
>>> com.sun.proxy.$Proxy63.getListing():-1
>>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getListing():573
>>> sun.reflect.GeneratedMethodAccessor3.invoke():-1
>>> sun.reflect.DelegatingMethodAccessorImpl.invoke():43
>>> java.lang.reflect.Method.invoke():497
>>> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod():187
>>> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke():102
>>> com.sun.proxy.$Proxy64.getListing():-1
>>> org.apache.hadoop.hdfs.DFSClient.listPaths():2094
>>> org.apache.hadoop.hdfs.DFSClient.listPaths():2077
>>> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal():791
>>> org.apache.hadoop.hdfs.DistributedFileSystem.access$700():106
>>> org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall():853
>>> org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall():849
>>> org.apache.hadoop.fs.FileSystemLinkResolver.resolve():81
>>> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus():860
>>> org.apache.drill.exec.store.dfs.DrillFileSystem.listStatus():522
>>> org.apache.drill.exec.store.dfs.WorkspaceSchemaFactory.accessible():160
>>> org.apache.drill.exec.store.dfs.FileSystemSchemaFactory$FileSystemSchema.<init>():77
>>> org.apache.drill.exec.store.dfs.FileSystemSchemaFactory.registerSchemas():64
>>> org.apache.drill.exec.store.dfs.FileSystemPlugin.registerSchemas():149
>>> org.apache.drill.exec.store.StoragePluginRegistryImpl$DrillSchemaFactory.registerSchemas():396
>>> org.apache.drill.exec.store.SchemaTreeProvider.createRootSchema():110
>>> org.apache.drill.exec.store.SchemaTreeProvider.createRootSchema():99
>>> org.apache.drill.exec.ops.QueryContext.getRootSchema():164
>>> org.apache.drill.exec.ops.QueryContext.getRootSchema():153
>>> org.apache.drill.exec.ops.QueryContext.getNewDefaultSchema():139
>>> org.apache.drill.exec.planner.sql.SqlConverter.<init>():111
>>> org.apache.drill.exec.planner.sql.DrillSqlWorker.getQueryPlan():101
>>> org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan():79
>>> org.apache.drill.exec.work.foreman.Foreman.runSQL():1050
>>> org.apache.drill.exec.work.foreman.Foreman.run():280
>>> java.util.concurrent.ThreadPoolExecutor.runWorker():1142
>>> java.util.concurrent.ThreadPoolExecutor$Worker.run():617
>>> java.lang.Thread.run():745
>>> Caused By (java.io.IOException) Couldn't set up IO streams
>>> org.apache.hadoop.ipc.Client$Connection.setupIOstreams():788
>>> org.apache.hadoop.ipc.Client$Connection.access$2800():370
>>> org.apache.hadoop.ipc.Client.getConnection():1529
>>> org.apache.hadoop.ipc.Client.call():1446
>>> org.apache.hadoop.ipc.Client.call():1407
>>> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke():229
>>> com.sun.proxy.$Proxy63.getListing():-1
>>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getListing():573
>>> sun.reflect.GeneratedMethodAccessor3.invoke():-1
>>> sun.reflect.DelegatingMethodAccessorImpl.invoke():43
>>> java.lang.reflect.Method.invoke():497
>>> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod():187
>>> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke():102
>>> com.sun.proxy.$Proxy64.getListing():-1
>>> org.apache.hadoop.hdfs.DFSClient.listPaths():2094
>>> org.apache.hadoop.hdfs.DFSClient.listPaths():2077
>>> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal():791
>>> org.apache.hadoop.hdfs.DistributedFileSystem.access$700():106
>>> org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall():853
>>> org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall():849
>>> org.apache.hadoop.fs.FileSystemLinkResolver.resolve():81
>>> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus():860
>>> org.apache.drill.exec.store.dfs.DrillFileSystem.listStatus():522
>>> org.apache.drill.exec.store.dfs.WorkspaceSchemaFactory.accessible():160
>>> org.apache.drill.exec.store.dfs.FileSystemSchemaFactory$FileSystemSchema.<init>():77
>>> org.apache.drill.exec.store.dfs.FileSystemSchemaFactory.registerSchemas():64
>>> org.apache.drill.exec.store.dfs.FileSystemPlugin.registerSchemas():149
>>> org.apache.drill.exec.store.StoragePluginRegistryImpl$DrillSchemaFactory.registerSchemas():396
>>> org.apache.drill.exec.store.SchemaTreeProvider.createRootSchema():110
>>> org.apache.drill.exec.store.SchemaTreeProvider.createRootSchema():99
>>> org.apache.drill.exec.ops.QueryContext.getRootSchema():164
>>> org.apache.drill.exec.ops.QueryContext.getRootSchema():153
>>> org.apache.drill.exec.ops.QueryContext.getNewDefaultSchema():139
>>> org.apache.drill.exec.planner.sql.SqlConverter.<init>():111
>>> org.apache.drill.exec.planner.sql.DrillSqlWorker.getQueryPlan():101
>>> org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan():79
>>> org.apache.drill.exec.work.foreman.Foreman.runSQL():1050
>>> org.apache.drill.exec.work.foreman.Foreman.run():280
>>> java.util.concurrent.ThreadPoolExecutor.runWorker():1142
>>> java.util.concurrent.ThreadPoolExecutor$Worker.run():617
>>> java.lang.Thread.run():745
>>> Caused By (java.lang.NoClassDefFoundError) org/apache/hadoop/yarn/api/ApplicationClientProtocolPB
>>> org.apache.hadoop.yarn.security.client.ClientRMSecurityInfo.getTokenInfo():65
>>> org.apache.hadoop.security.SecurityUtil.getTokenInfo():331
>>> org.apache.hadoop.security.SaslRpcClient.getServerToken():263
>>> org.apache.hadoop.security.SaslRpcClient.createSaslClient():219
>>> org.apache.hadoop.security.SaslRpcClient.selectSaslClient():159
>>> org.apache.hadoop.security.SaslRpcClient.saslConnect():396
>>> org.apache.hadoop.ipc.Client$Connection.setupSaslConnection():555
>>> org.apache.hadoop.ipc.Client$Connection.access$1800():370
>>> org.apache.hadoop.ipc.Client$Connection$2.run():724
>>> org.apache.hadoop.ipc.Client$Connection$2.run():720
>>> java.security.AccessController.doPrivileged():-2
>>> javax.security.auth.Subject.doAs():422
>>> org.apache.hadoop.security.UserGroupInformation.doAs():1657
>>> org.apache.hadoop.ipc.Client$Connection.setupIOstreams():720
>>> org.apache.hadoop.ipc.Client$Connection.access$2800():370
>>> org.apache.hadoop.ipc.Client.getConnection():1529
>>> org.apache.hadoop.ipc.Client.call():1446
>>> org.apache.hadoop.ipc.Client.call():1407
>>> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke():229
>>> com.sun.proxy.$Proxy63.getListing():-1
>>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getListing():573
>>> sun.reflect.GeneratedMethodAccessor3.invoke():-1
>>> sun.reflect.DelegatingMethodAccessorImpl.invoke():43
>>> java.lang.reflect.Method.invoke():497
>>> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod():187
>>> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke():102
>>> com.sun.proxy.$Proxy64.getListing():-1
>>> org.apache.hadoop.hdfs.DFSClient.listPaths():2094
>>> org.apache.hadoop.hdfs.DFSClient.listPaths():2077
>>> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal():791
>>> org.apache.hadoop.hdfs.DistributedFileSystem.access$700():106
>>> org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall():853
>>> org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall():849
>>> org.apache.hadoop.fs.FileSystemLinkResolver.resolve():81
>>> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus():860
>>> org.apache.drill.exec.store.dfs.DrillFileSystem.listStatus():522
>>> org.apache.drill.exec.store.dfs.WorkspaceSchemaFactory.accessible():160
>>> org.apache.drill.exec.store.dfs.FileSystemSchemaFactory$FileSystemSchema.<init>():77
>>> org.apache.drill.exec.store.dfs.FileSystemSchemaFactory.registerSchemas():64
>>> org.apache.drill.exec.store.dfs.FileSystemPlugin.registerSchemas():149
>>> org.apache.drill.exec.store.StoragePluginRegistryImpl$DrillSchemaFactory.registerSchemas():396
>>> org.apache.drill.exec.store.SchemaTreeProvider.createRootSchema():110
>>> org.apache.drill.exec.store.SchemaTreeProvider.createRootSchema():99
>>> org.apache.drill.exec.ops.QueryContext.getRootSchema():164
>>> org.apache.drill.exec.ops.QueryContext.getRootSchema():153
>>> org.apache.drill.exec.ops.QueryContext.getNewDefaultSchema():139
>>> org.apache.drill.exec.planner.sql.SqlConverter.<init>():111
>>> org.apache.drill.exec.planner.sql.DrillSqlWorker.getQueryPlan():101
>>> org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan():79
>>> org.apache.drill.exec.work.foreman.Foreman.runSQL():1050
>>> org.apache.drill.exec.work.foreman.Foreman.run():280
>>> java.util.concurrent.ThreadPoolExecutor.runWorker():1142
>>> java.util.concurrent.ThreadPoolExecutor$Worker.run():617
>>> java.lang.Thread.run():745
>>>
>>>
>>>
>>> -----Original Message-----
>>> From: Padma Penumarthy [mailto:ppenumarthy@mapr.com]
>>> Sent: August 23, 17 9:09 PM
>>> To: user@drill.apache.org
>>> Subject: Re: Apache Drill unable to read files from HDFS (Resource
>>> error: Failed to create schema tree)
>>>
>>> For HDFS, your storage plugin configuration should be something like this:
>>>
>>> {
>>>      "type": "file",
>>>      "enabled": true,
>>>      "connection": "hdfs://<IP Address>:<Port>”,   // IP address and port number of name node metadata service
>>>      "config": null,
>>>      "workspaces": {
>>>        "root": {
>>>          "location": "/",
>>>          "writable": true,
>>>          "defaultInputFormat": null
>>>        },
>>>        "tmp": {
>>>          "location": "/tmp",
>>>          "writable": true,
>>>          "defaultInputFormat": null
>>>        }
>>>      },
>>>
>>> Also, try hadoop dfs -ls command to see if you can list the files.
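For example, from the edge node (hdfs dfs is the non-deprecated spelling of hadoop dfs on Hadoop 2.x; the name node host below is a placeholder):

    hdfs dfs -ls /names
    hdfs dfs -ls hdfs://namenode-host:8020/names   # fully qualified form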
>>>
>>> Thanks,
>>> Padma
>>>
>>>
>>> On Aug 23, 2017, at 12:18 PM, Lee, David <Da...@blackrock.com> wrote:
>>>
>>> The HDFS storage plugin connection should be set to your HDFS name node URL.
>>>
>>> -----Original Message-----
>>> From: Zubair, Muhammad [mailto:muhammad.zubair@rbc.com.INVALID]
>>> Sent: Wednesday, August 23, 2017 11:33 AM
>>> To: user@drill.apache.org
>>> Subject: Apache Drill unable to read files from HDFS (Resource error:
>>> Failed to create schema tree)
>>>
>>> Hello,
>>> After setting up Drill on one of the edge nodes of our HDFS cluster, I am unable to read any HDFS files. I can query data from local files (as long as they are in a folder that has 777 permissions), but querying data from HDFS fails with the following error:
>>> Error: RESOURCE ERROR: Failed to create schema tree.
>>> [Error Id: d9f7908c-6c3b-49c0-a11e-71c004d27f46 on server-name:31010]
>>> (state=,code=0)
>>> Query:
>>> 0: jdbc:drill:zk=local> select * from hdfs.`/names/city.parquet` limit 2;
>>> Querying from local file works fine:
>>> 0: jdbc:drill:zk=local> select * from dfs.`/tmp/city.parquet` limit 2;
>>> My HDFS settings are similar to the DFS settings, except that the connection URL is the server address instead of file:///. I can't find anything online regarding this error for Drill.
>> Thank you,
>>
>> Vlad
>>
>>
>
> Thank you,
>
> Vlad
>


Thank you,

Vlad



11.0/jars/3rdparty/avro-ipc-1.7.7-tests.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/dom4j-1.6.1.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/paranamer-2.5.6.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/apacheds-kerberos-codec-2.0.0-M15.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/calcite-avatica-1.4.0-drill-r21.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/commons-pool-1.5.4.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/api-util-1.0.0-M20.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/classb/jetty-6.1.26.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/classb/jersey-guava-2.8.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/classb/javax.ws.rs-api-2.0.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/classb/codemodel-2.6.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/classb/jaxb-api-2.2.2.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/classb/osgi-resource-locator-1.0.1.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/classb/jersey-container-servlet-core-2.8.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/classb/activation-1.1.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/classb/javax.inject-2.2.0.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/classb/jersey-client-2.8.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/classb/jersey-container-jetty-http-2.8.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/classb/hk2-api-2.2.0.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/classb/jetty-util-6.1.26.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/classb/jersey-mvc-2.8.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/classb/logback-classic-1.0.13.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/classb/jersey-container-jetty-servlet-2.8.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/classb/reflections-0.9.8.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/classb/jersey-container-servlet-2.8.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/classb/jersey-media-multipart-2.8.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/classb/jersey-server-2.8.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/classb/jersey-common-2.8.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/classb/mimepull-1.9.3.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/classb/aopalliance-repackaged-2.2.0.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/classb/javax.servlet-api-3.1.0.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/classb/jersey-mvc-freemarker-2.8.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/classb/logback-core-1.0.13.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/classb/hk2-locator-2.2.0.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/classb/hk2-utils-2.2.0.jar:/app/pnlp/tools/drill/apache-drill-1.11.0/jars/classb/javax.annotation-api-1.2.jar
Library path: 
/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
Boot class path: 
/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.71-2.b15.el7_2.x86_64/jre/lib/resources.jar:/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.71-2.b15.el7_2.x86_64/jre/lib/rt.jar:/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.71-2.b15.el7_2.x86_64/jre/lib/sunrsasign.jar:/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.71-2.b15.el7_2.x86_64/jre/lib/jsse.jar:/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.71-2.b15.el7_2.x86_64/jre/lib/jce.jar:/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.71-2.b15.el7_2.x86_64/jre/lib/charsets.jar:/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.71-2.b15.el7_2.x86_64/jre/lib/jfr.jar:/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.71-2.b15.el7_2.x86_64/jre/classes
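
One way to check whether any of these bundled jars actually contains the YARN class named in the NoClassDefFoundError further down the thread (a sketch; the 3rdparty path is taken from the classpath above):

cd /app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty
# List the YARN jars Drill ships with.
ls hadoop-yarn-*.jar
# Scan every bundled Hadoop jar for the missing protocol class; no output
# means the class is not on Drill's classpath at all.
for j in hadoop-*.jar; do
  unzip -l "$j" | grep -q 'org/apache/hadoop/yarn/api/ApplicationClientProtocolPB' && echo "$j"
done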





-----Original Message-----
From: Vlad Rozov [mailto:vrozov@apache.org] 
Sent: August 24, 17 12:52 PM
To: user@drill.apache.org
Subject: Re: Apache Drill unable to read files from HDFS (Resource error: Failed to create schema tree)

Try /etc/alternatives/java_sdk_1.8.0/bin/jinfo <pid> or use jconsole to get the classpath.
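
For example (a sketch; the JDK path matches the Drill process listing quoted below, and the jps pattern is an assumption):

# Use the same JDK that runs Drill so the serviceability tools match the JVM.
JAVA_HOME=/etc/alternatives/java_sdk_1.8.0
# Embedded Drill runs as sqlline.SqlLine; a standalone Drillbit shows up as
# org.apache.drill.exec.server.Drillbit.
pid=$("$JAVA_HOME"/bin/jps -l | awk '/sqlline|Drillbit/ {print $1}')
# -sysprops prints the system properties, including java.class.path.
"$JAVA_HOME"/bin/jinfo -sysprops "$pid"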

Thank you,

Vlad

On 8/24/17 09:38, Zubair, Muhammad wrote:
> $ yarn version
> Hadoop 2.7.1.2.4.2.0-258
> Subversion git@github.com:hortonworks/hadoop.git -r 13debf893a605e8a88df18a7d8d214f571e05289
> Compiled by jenkins on 2016-04-25T05:46Z
> Compiled with protoc 2.5.0
> From source with checksum 2a2d95f05ec6c3ac547ed58cab713ac
> This command was run using /usr/hdp/2.4.2.0-258/hadoop/hadoop-common-2.7.1.2.4.2.0-258.jar
>
> Drill process:
>
> 10961 pts/0    Sl+    0:18 /etc/alternatives/java_sdk_1.8.0/bin/java -XX:MaxPermSize=512M -Dlog.path=/app /tools/drill/apache-drill
>
> $ jinfo 10961
> Attaching to process ID 10961, please wait...
> Debugger attached successfully.
> Server compiler detected.
> JVM version is 25.71-b15
> Java System Properties:
>
> Exception in thread "main" java.lang.reflect.InvocationTargetException
>          at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>          at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>          at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>          at java.lang.reflect.Method.invoke(Method.java:497)
>          at sun.tools.jinfo.JInfo.runTool(JInfo.java:108)
>          at sun.tools.jinfo.JInfo.main(JInfo.java:76)
> Caused by: java.lang.InternalError: Metadata does not appear to be polymorphic
>          at sun.jvm.hotspot.types.basic.BasicTypeDataBase.findDynamicTypeForAddress(BasicTypeDataBase.java:278)
>          at sun.jvm.hotspot.runtime.VirtualBaseConstructor.instantiateWrapperFor(VirtualBaseConstructor.java:102)
>          at sun.jvm.hotspot.oops.Metadata.instantiateWrapperFor(Metadata.java:68)
>          at sun.jvm.hotspot.memory.SystemDictionary.getSystemKlass(SystemDictionary.java:127)
>          at sun.jvm.hotspot.runtime.VM.readSystemProperties(VM.java:879)
>          at sun.jvm.hotspot.runtime.VM.getSystemProperties(VM.java:873)
>          at sun.jvm.hotspot.tools.SysPropsDumper.run(SysPropsDumper.java:44)
>          at sun.jvm.hotspot.tools.JInfo$1.run(JInfo.java:79)
>          at sun.jvm.hotspot.tools.JInfo.run(JInfo.java:94)
>          at sun.jvm.hotspot.tools.Tool.startInternal(Tool.java:260)
>          at sun.jvm.hotspot.tools.Tool.start(Tool.java:223)
>          at sun.jvm.hotspot.tools.Tool.execute(Tool.java:118)
>          at sun.jvm.hotspot.tools.JInfo.main(JInfo.java:138)
>          ... 6 more
>
>
>
> -----Original Message-----
> From: Vlad Rozov [mailto:vrozov@apache.org]
> Sent: August 24, 17 12:17 PM
> To: user@drill.apache.org
> Subject: Re: Apache Drill unable to read files from HDFS (Resource 
> error: Failed to create schema tree)
>
> One possible problem is a mismatch of YARN libraries on the edge node. Which Hadoop distro and version do you have on the edge node? Can you provide the output (classpath) of "jinfo <pid>", where pid is the Foreman/Drillbit process id?
>
> Caused By (java.lang.NoClassDefFoundError) org/apache/hadoop/yarn/api/ApplicationClientProtocolPB
> org.apache.hadoop.yarn.security.client.ClientRMSecurityInfo.getTokenInfo():65
> org.apache.hadoop.security.SecurityUtil.getTokenInfo():331
> org.apache.hadoop.security.SaslRpcClient.getServerToken():263
> org.apache.hadoop.security.SaslRpcClient.createSaslClient():219
> org.apache.hadoop.security.SaslRpcClient.selectSaslClient():159
> org.apache.hadoop.security.SaslRpcClient.saslConnect():396
> org.apache.hadoop.ipc.Client$Connection.setupSaslConnection():555
> org.apache.hadoop.ipc.Client$Connection.access$1800():370
> org.apache.hadoop.ipc.Client$Connection$2.run():724
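>
> For instance, one quick way to compare the YARN jars on both sides (a sketch; both paths are assumptions based on this thread -- the HDP path follows the standard /usr/hdp layout implied by the "yarn version" output, the Drill path comes from the process listing):
>
> # YARN jars shipped by the HDP client on the edge node:
> ls /usr/hdp/2.4.2.0-258/hadoop-yarn/hadoop-yarn-*.jar
> # YARN jars bundled with Drill:
> ls /app/pnlp/tools/drill/apache-drill-1.11.0/jars/3rdparty/hadoop-yarn-*.jar
> # If the jar defining the missing class (e.g. hadoop-yarn-api) is absent from
> # Drill's 3rdparty directory, that would explain the NoClassDefFoundError.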
>
> Thank you,
>
> Vlad
>
> On 8/24/17 08:39, Zubair, Muhammad wrote:
>> Padma,
>> I've already modified the configuration as specified, but the error is still there.
>>
>> Running hdfs dfs -ls /folder does return a list of files.
>>
>> I enabled verbose error logging; here's the full error message:
>>
>> org.apache.drill.common.exceptions.UserRemoteException: RESOURCE ERROR: Failed to create schema tree.
>> [Error Id: 28c0c9a2-460d-460e-b93b-1d34e341cc65 on server:31010]
>> (java.io.IOException) Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "server/10.61.60.113"; destination host is: "hdfs-server":8020;
>> org.apache.hadoop.net.NetUtils.wrapException():776
>> org.apache.hadoop.ipc.Client.call():1480
>> org.apache.hadoop.ipc.Client.call():1407
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke():229
>> com.sun.proxy.$Proxy63.getListing():-1
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getListing():573
>> sun.reflect.GeneratedMethodAccessor3.invoke():-1
>> sun.reflect.DelegatingMethodAccessorImpl.invoke():43
>> java.lang.reflect.Method.invoke():497
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod():187
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke():102
>> com.sun.proxy.$Proxy64.getListing():-1
>> org.apache.hadoop.hdfs.DFSClient.listPaths():2094
>> org.apache.hadoop.hdfs.DFSClient.listPaths():2077
>> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal():791
>> org.apache.hadoop.hdfs.DistributedFileSystem.access$700():106
>> org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall():853
>> org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall():849
>> org.apache.hadoop.fs.FileSystemLinkResolver.resolve():81
>> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus():860
>> org.apache.drill.exec.store.dfs.DrillFileSystem.listStatus():522
>> org.apache.drill.exec.store.dfs.WorkspaceSchemaFactory.accessible():160
>> org.apache.drill.exec.store.dfs.FileSystemSchemaFactory$FileSystemSchema.():77
>> org.apache.drill.exec.store.dfs.FileSystemSchemaFactory.registerSchemas():64
>> org.apache.drill.exec.store.dfs.FileSystemPlugin.registerSchemas():149
>> org.apache.drill.exec.store.StoragePluginRegistryImpl$DrillSchemaFactory.registerSchemas():396
>> org.apache.drill.exec.store.SchemaTreeProvider.createRootSchema():110
>> org.apache.drill.exec.store.SchemaTreeProvider.createRootSchema():99
>> org.apache.drill.exec.ops.QueryContext.getRootSchema():164
>> org.apache.drill.exec.ops.QueryContext.getRootSchema():153
>> org.apache.drill.exec.ops.QueryContext.getNewDefaultSchema():139
>> org.apache.drill.exec.planner.sql.SqlConverter.():111
>> org.apache.drill.exec.planner.sql.DrillSqlWorker.getQueryPlan():101
>> org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan():79
>> org.apache.drill.exec.work.foreman.Foreman.runSQL():1050
>> org.apache.drill.exec.work.foreman.Foreman.run():280
>> java.util.concurrent.ThreadPoolExecutor.runWorker():1142
>> java.util.concurrent.ThreadPoolExecutor$Worker.run():617
>> java.lang.Thread.run():745
>> Caused By (java.io.IOException) Couldn't set up IO streams
>> org.apache.hadoop.ipc.Client$Connection.setupIOstreams():788
>> org.apache.hadoop.ipc.Client$Connection.access$2800():370
>> org.apache.hadoop.ipc.Client.getConnection():1529
>> org.apache.hadoop.ipc.Client.call():1446
>> org.apache.hadoop.ipc.Client.call():1407
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke():229
>> com.sun.proxy.$Proxy63.getListing():-1
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getListing():573
>> sun.reflect.GeneratedMethodAccessor3.invoke():-1
>> sun.reflect.DelegatingMethodAccessorImpl.invoke():43
>> java.lang.reflect.Method.invoke():497
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod():187
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke():102
>> com.sun.proxy.$Proxy64.getListing():-1
>> org.apache.hadoop.hdfs.DFSClient.listPaths():2094
>> org.apache.hadoop.hdfs.DFSClient.listPaths():2077
>> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal():791
>> org.apache.hadoop.hdfs.DistributedFileSystem.access$700():106
>> org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall():853
>> org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall():849
>> org.apache.hadoop.fs.FileSystemLinkResolver.resolve():81
>> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus():860
>> org.apache.drill.exec.store.dfs.DrillFileSystem.listStatus():522
>> org.apache.drill.exec.store.dfs.WorkspaceSchemaFactory.accessible():160
>> org.apache.drill.exec.store.dfs.FileSystemSchemaFactory$FileSystemSchema.():77
>> org.apache.drill.exec.store.dfs.FileSystemSchemaFactory.registerSchemas():64
>> org.apache.drill.exec.store.dfs.FileSystemPlugin.registerSchemas():149
>> org.apache.drill.exec.store.StoragePluginRegistryImpl$DrillSchemaFactory.registerSchemas():396
>> org.apache.drill.exec.store.SchemaTreeProvider.createRootSchema():110
>> org.apache.drill.exec.store.SchemaTreeProvider.createRootSchema():99
>> org.apache.drill.exec.ops.QueryContext.getRootSchema():164
>> org.apache.drill.exec.ops.QueryContext.getRootSchema():153
>> org.apache.drill.exec.ops.QueryContext.getNewDefaultSchema():139
>> org.apache.drill.exec.planner.sql.SqlConverter.():111
>> org.apache.drill.exec.planner.sql.DrillSqlWorker.getQueryPlan():101
>> org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan():79
>> org.apache.drill.exec.work.foreman.Foreman.runSQL():1050
>> org.apache.drill.exec.work.foreman.Foreman.run():280
>> java.util.concurrent.ThreadPoolExecutor.runWorker():1142
>> java.util.concurrent.ThreadPoolExecutor$Worker.run():617
>> java.lang.Thread.run():745
>> Caused By (java.lang.NoClassDefFoundError) org/apache/hadoop/yarn/api/ApplicationClientProtocolPB
>> org.apache.hadoop.yarn.security.client.ClientRMSecurityInfo.getTokenInfo():65
>> org.apache.hadoop.security.SecurityUtil.getTokenInfo():331
>> org.apache.hadoop.security.SaslRpcClient.getServerToken():263
>> org.apache.hadoop.security.SaslRpcClient.createSaslClient():219
>> org.apache.hadoop.security.SaslRpcClient.selectSaslClient():159
>> org.apache.hadoop.security.SaslRpcClient.saslConnect():396
>> org.apache.hadoop.ipc.Client$Connection.setupSaslConnection():555
>> org.apache.hadoop.ipc.Client$Connection.access$1800():370
>> org.apache.hadoop.ipc.Client$Connection$2.run():724
>> org.apache.hadoop.ipc.Client$Connection$2.run():720
>> java.security.AccessController.doPrivileged():-2
>> javax.security.auth.Subject.doAs():422
>> org.apache.hadoop.security.UserGroupInformation.doAs():1657
>> org.apache.hadoop.ipc.Client$Connection.setupIOstreams():720
>> org.apache.hadoop.ipc.Client$Connection.access$2800():370
>> org.apache.hadoop.ipc.Client.getConnection():1529
>> org.apache.hadoop.ipc.Client.call():1446
>> org.apache.hadoop.ipc.Client.call():1407
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke():229
>> com.sun.proxy.$Proxy63.getListing():-1
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getListing():573
>> sun.reflect.GeneratedMethodAccessor3.invoke():-1
>> sun.reflect.DelegatingMethodAccessorImpl.invoke():43
>> java.lang.reflect.Method.invoke():497
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod():187
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke():102
>> com.sun.proxy.$Proxy64.getListing():-1
>> org.apache.hadoop.hdfs.DFSClient.listPaths():2094
>> org.apache.hadoop.hdfs.DFSClient.listPaths():2077
>> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal():791
>> org.apache.hadoop.hdfs.DistributedFileSystem.access$700():106
>> org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall():853
>> org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall():849
>> org.apache.hadoop.fs.FileSystemLinkResolver.resolve():81
>> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus():860
>> org.apache.drill.exec.store.dfs.DrillFileSystem.listStatus():522
>> org.apache.drill.exec.store.dfs.WorkspaceSchemaFactory.accessible():160
>> org.apache.drill.exec.store.dfs.FileSystemSchemaFactory$FileSystemSchema.():77
>> org.apache.drill.exec.store.dfs.FileSystemSchemaFactory.registerSchemas():64
>> org.apache.drill.exec.store.dfs.FileSystemPlugin.registerSchemas():149
>> org.apache.drill.exec.store.StoragePluginRegistryImpl$DrillSchemaFactory.registerSchemas():396
>> org.apache.drill.exec.store.SchemaTreeProvider.createRootSchema():110
>> org.apache.drill.exec.store.SchemaTreeProvider.createRootSchema():99
>> org.apache.drill.exec.ops.QueryContext.getRootSchema():164
>> org.apache.drill.exec.ops.QueryContext.getRootSchema():153
>> org.apache.drill.exec.ops.QueryContext.getNewDefaultSchema():139
>> org.apache.drill.exec.planner.sql.SqlConverter.():111
>> org.apache.drill.exec.planner.sql.DrillSqlWorker.getQueryPlan():101
>> org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan():79
>> org.apache.drill.exec.work.foreman.Foreman.runSQL():1050
>> org.apache.drill.exec.work.foreman.Foreman.run():280
>> java.util.concurrent.ThreadPoolExecutor.runWorker():1142
>> java.util.concurrent.ThreadPoolExecutor$Worker.run():617
>> java.lang.Thread.run():745
>>
>>
>>
>> -----Original Message-----
>> From: Padma Penumarthy [mailto:ppenumarthy@mapr.com]
>> Sent: August 23, 17 9:09 PM
>> To: user@drill.apache.org
>> Subject: Re: Apache Drill unable to read files from HDFS (Resource
>> error: Failed to create schema tree)
>>
>> For HDFS, your storage plugin configuration should be something like this:
>>
>> {
>>     "type": "file",
>>     "enabled": true,
>>     "connection": "hdfs://<IP Address>:<Port>",   // IP address and port number of name node metadata service
>>     "config": null,
>>     "workspaces": {
>>       "root": {
>>         "location": "/",
>>         "writable": true,
>>         "defaultInputFormat": null
>>       },
>>       "tmp": {
>>         "location": "/tmp",
>>         "writable": true,
>>         "defaultInputFormat": null
>>       }
>>     },
>>
>> Also, try the hadoop dfs -ls command to see if you can list the files.
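>>
>> For example, pointing it at the same URL the plugin uses (host and port are placeholders for your name node):
>>
>> hadoop dfs -ls hdfs://<IP Address>:<Port>/
>> hadoop dfs -ls hdfs://<IP Address>:<Port>/tmp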
>>
>> Thanks,
>> Padma
>>
>>
>> On Aug 23, 2017, at 12:18 PM, Lee, David <Da...@blackrock.com>> wrote:
>>
>> The HDFS storage plugin connection should be set to your HDFS name node URL.
>>
>> -----Original Message-----
>> From: Zubair, Muhammad [mailto:muhammad.zubair@rbc.com.INVALID]
>> Sent: Wednesday, August 23, 2017 11:33 AM
>> To: user@drill.apache.org<ma...@drill.apache.org>
>> Subject: Apache Drill unable to read files from HDFS (Resource error:
>> Failed to create schema tree)
>>
>> Hello,
>> After setting up drill on one of the edge nodes of our HDFS cluster, I am unable to read any hdfs files. I can query data from local files (as long as they are in a folder that has 777 permissions) but querying data from hdfs fails with the following error:
>> Error: RESOURCE ERROR: Failed to create schema tree.
>> [Error Id: d9f7908c-6c3b-49c0-a11e-71c004d27f46 on server-name:31010]
>> (state=,code=0)
>> Query:
>> 0: jdbc:drill:zk=local> select * from hdfs.`/names/city.parquet` limit 2;
>> Querying from a local file works fine:
>> 0: jdbc:drill:zk=local> select * from dfs.`/tmp/city.parquet` limit 2;
>> My HDFS settings are similar to the DFS settings, except for the connection URL being the server address instead of file:///. I can't find anything online regarding this error for Drill.
>
> Thank you,
>
> Vlad
>
>




Re: Apache Drill unable to read files from HDFS (Resource error: Failed to create schema tree)

Posted by Vlad Rozov <vr...@apache.org>.
Try /etc/alternatives/java_sdk_1.8.0/bin/jinfo <pid> or use jconsole to 
get the classpath.

Thank you,

Vlad

On 8/24/17 09:38, Zubair, Muhammad wrote:
> $ yarn version
> Hadoop 2.7.1.2.4.2.0-258
> Subversion git@github.com:hortonworks/hadoop.git -r 13debf893a605e8a88df18a7d8d214f571e05289
> Compiled by jenkins on 2016-04-25T05:46Z
> Compiled with protoc 2.5.0
>  From source with checksum 2a2d95f05ec6c3ac547ed58cab713ac
> This command was run using /usr/hdp/2.4.2.0-258/hadoop/hadoop-common-2.7.1.2.4.2.0-258.jar
>
> Drill process:
>
> 10961 pts/0    Sl+    0:18 /etc/alternatives/java_sdk_1.8.0/bin/java -XX:MaxPermSize=512M -Dlog.path=/app /tools/drill/apache-drill
>
> $ jinfo 10961
> Attaching to process ID 10961, please wait...
> Debugger attached successfully.
> Server compiler detected.
> JVM version is 25.71-b15
> Java System Properties:
>
> Exception in thread "main" java.lang.reflect.InvocationTargetException
>          at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>          at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>          at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>          at java.lang.reflect.Method.invoke(Method.java:497)
>          at sun.tools.jinfo.JInfo.runTool(JInfo.java:108)
>          at sun.tools.jinfo.JInfo.main(JInfo.java:76)
> Caused by: java.lang.InternalError: Metadata does not appear to be polymorphic
>          at sun.jvm.hotspot.types.basic.BasicTypeDataBase.findDynamicTypeForAddress(BasicTypeDataBase.java:278)
>          at sun.jvm.hotspot.runtime.VirtualBaseConstructor.instantiateWrapperFor(VirtualBaseConstructor.java:102)
>          at sun.jvm.hotspot.oops.Metadata.instantiateWrapperFor(Metadata.java:68)
>          at sun.jvm.hotspot.memory.SystemDictionary.getSystemKlass(SystemDictionary.java:127)
>          at sun.jvm.hotspot.runtime.VM.readSystemProperties(VM.java:879)
>          at sun.jvm.hotspot.runtime.VM.getSystemProperties(VM.java:873)
>          at sun.jvm.hotspot.tools.SysPropsDumper.run(SysPropsDumper.java:44)
>          at sun.jvm.hotspot.tools.JInfo$1.run(JInfo.java:79)
>          at sun.jvm.hotspot.tools.JInfo.run(JInfo.java:94)
>          at sun.jvm.hotspot.tools.Tool.startInternal(Tool.java:260)
>          at sun.jvm.hotspot.tools.Tool.start(Tool.java:223)
>          at sun.jvm.hotspot.tools.Tool.execute(Tool.java:118)
>          at sun.jvm.hotspot.tools.JInfo.main(JInfo.java:138)
>          ... 6 more
>
>
>
> -----Original Message-----
> From: Vlad Rozov [mailto:vrozov@apache.org]
> Sent: August 24, 17 12:17 PM
> To: user@drill.apache.org
> Subject: Re: Apache Drill unable to read files from HDFS (Resource error: Failed to create schema tree)
>
> One of possible problems is a mismatch of yarn libraries on the edge node. What hadoop distro and version do you have on the edge node? Can you provide output (classpath) of "jinfo <pid>" where pid is Foreman/Drillbit process id.
>
> Caused By (java.lang.NoClassDefFoundError) org/apache/hadoop/yarn/api/ApplicationClientProtocolPB org.apache.hadoop.yarn.security.client.ClientRMSecurityInfo.getTokenInfo():65 org.apache.hadoop.security.SecurityUtil.getTokenInfo():331 org.apache.hadoop.security.SaslRpcClient.getServerToken():263 org.apache.hadoop.security.SaslRpcClient.createSaslClient():219 org.apache.hadoop.security.SaslRpcClient.selectSaslClient():159 org.apache.hadoop.security.SaslRpcClient.saslConnect():396 org.apache.hadoop.ipc.Client$Connection.setupSaslConnection():555 org.apache.hadoop.ipc.Client$Connection.access$1800():370 org.apache.hadoop.ipc.Client$Connection$2.run():724
>
> Thank you,
>
> Vlad
>
> On 8/24/17 08:39, Zubair, Muhammad wrote:
>> Padma,
>> I've already modified the configuration as specified, but the error is still there.
>>
>> Running hdfs dfs -ls /folder does return list of files
>>
>> I enabled Verbose error logging, here's the full error message:
>>
>> org.apache.drill.common.exceptions.UserRemoteException: RESOURCE
>> ERROR: Failed to create schema tree. [Error Id:
>> 28c0c9a2-460d-460e-b93b-1d34e341cc65 on server:31010]
>> (java.io.IOException) Failed on local exception: java.io.IOException:
>> Couldn't set up IO streams; Host Details : local host is:
>> "server/10.61.60.113"; destination host is: "hdfs-server":8020;
>> org.apache.hadoop.net.NetUtils.wrapException():776
>> org.apache.hadoop.ipc.Client.call():1480
>> org.apache.hadoop.ipc.Client.call():1407
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke():229
>> com.sun.proxy.$Proxy63.getListing():-1
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.g
>> etListing():573 sun.reflect.GeneratedMethodAccessor3.invoke():-1
>> sun.reflect.DelegatingMethodAccessorImpl.invoke():43
>> java.lang.reflect.Method.invoke():497
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod():187
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke():102
>> com.sun.proxy.$Proxy64.getListing():-1
>> org.apache.hadoop.hdfs.DFSClient.listPaths():2094
>> org.apache.hadoop.hdfs.DFSClient.listPaths():2077
>> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal():791
>> org.apache.hadoop.hdfs.DistributedFileSystem.access$700():106
>> org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall():853
>> org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall():849
>> org.apache.hadoop.fs.FileSystemLinkResolver.resolve():81
>> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus():860
>> org.apache.drill.exec.store.dfs.DrillFileSystem.listStatus():522
>> org.apache.drill.exec.store.dfs.WorkspaceSchemaFactory.accessible():16
>> 0
>> org.apache.drill.exec.store.dfs.FileSystemSchemaFactory$FileSystemSche
>> ma.():77
>> org.apache.drill.exec.store.dfs.FileSystemSchemaFactory.registerSchema
>> s():64
>> org.apache.drill.exec.store.dfs.FileSystemPlugin.registerSchemas():149
>> org.apache.drill.exec.store.StoragePluginRegistryImpl$DrillSchemaFacto
>> ry.registerSchemas():396
>> org.apache.drill.exec.store.SchemaTreeProvider.createRootSchema():110
>> org.apache.drill.exec.store.SchemaTreeProvider.createRootSchema():99
>> org.apache.drill.exec.ops.QueryContext.getRootSchema():164
>> org.apache.drill.exec.ops.QueryContext.getRootSchema():153
>> org.apache.drill.exec.ops.QueryContext.getNewDefaultSchema():139
>> org.apache.drill.exec.planner.sql.SqlConverter.():111
>> org.apache.drill.exec.planner.sql.DrillSqlWorker.getQueryPlan():101
>> org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan():79
>> org.apache.drill.exec.work.foreman.Foreman.runSQL():1050
>> org.apache.drill.exec.work.foreman.Foreman.run():280
>> java.util.concurrent.ThreadPoolExecutor.runWorker():1142
>> java.util.concurrent.ThreadPoolExecutor$Worker.run():617
>> java.lang.Thread.run():745 Caused By (java.io.IOException) Couldn't
>> set up IO streams
>> org.apache.hadoop.ipc.Client$Connection.setupIOstreams():788
>> org.apache.hadoop.ipc.Client$Connection.access$2800():370
>> org.apache.hadoop.ipc.Client.getConnection():1529
>> org.apache.hadoop.ipc.Client.call():1446
>> org.apache.hadoop.ipc.Client.call():1407
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke():229
>> com.sun.proxy.$Proxy63.getListing():-1
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.g
>> etListing():573 sun.reflect.GeneratedMethodAccessor3.invoke():-1
>> sun.reflect.DelegatingMethodAccessorImpl.invoke():43
>> java.lang.reflect.Method.invoke():497
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod():187
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke():102
>> com.sun.proxy.$Proxy64.getListing():-1
>> org.apache.hadoop.hdfs.DFSClient.listPaths():2094
>> org.apache.hadoop.hdfs.DFSClient.listPaths():2077
>> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal():791
>> org.apache.hadoop.hdfs.DistributedFileSystem.access$700():106
>> org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall():853
>> org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall():849
>> org.apache.hadoop.fs.FileSystemLinkResolver.resolve():81
>> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus():860
>> org.apache.drill.exec.store.dfs.DrillFileSystem.listStatus():522
>> org.apache.drill.exec.store.dfs.WorkspaceSchemaFactory.accessible():16
>> 0
>> org.apache.drill.exec.store.dfs.FileSystemSchemaFactory$FileSystemSche
>> ma.():77
>> org.apache.drill.exec.store.dfs.FileSystemSchemaFactory.registerSchema
>> s():64
>> org.apache.drill.exec.store.dfs.FileSystemPlugin.registerSchemas():149
>> org.apache.drill.exec.store.StoragePluginRegistryImpl$DrillSchemaFacto
>> ry.registerSchemas():396
>> org.apache.drill.exec.store.SchemaTreeProvider.createRootSchema():110
>> org.apache.drill.exec.store.SchemaTreeProvider.createRootSchema():99
>> org.apache.drill.exec.ops.QueryContext.getRootSchema():164
>> org.apache.drill.exec.ops.QueryContext.getRootSchema():153
>> org.apache.drill.exec.ops.QueryContext.getNewDefaultSchema():139
>> org.apache.drill.exec.planner.sql.SqlConverter.():111
>> org.apache.drill.exec.planner.sql.DrillSqlWorker.getQueryPlan():101
>> org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan():79
>> org.apache.drill.exec.work.foreman.Foreman.runSQL():1050
>> org.apache.drill.exec.work.foreman.Foreman.run():280
>> java.util.concurrent.ThreadPoolExecutor.runWorker():1142
>> java.util.concurrent.ThreadPoolExecutor$Worker.run():617
>> java.lang.Thread.run():745 Caused By (java.lang.NoClassDefFoundError)
>> org/apache/hadoop/yarn/api/ApplicationClientProtocolPB
>> org.apache.hadoop.yarn.security.client.ClientRMSecurityInfo.getTokenIn
>> fo():65 org.apache.hadoop.security.SecurityUtil.getTokenInfo():331
>> org.apache.hadoop.security.SaslRpcClient.getServerToken():263
>> org.apache.hadoop.security.SaslRpcClient.createSaslClient():219
>> org.apache.hadoop.security.SaslRpcClient.selectSaslClient():159
>> org.apache.hadoop.security.SaslRpcClient.saslConnect():396
>> org.apache.hadoop.ipc.Client$Connection.setupSaslConnection():555
>> org.apache.hadoop.ipc.Client$Connection.access$1800():370
>> org.apache.hadoop.ipc.Client$Connection$2.run():724
>> org.apache.hadoop.ipc.Client$Connection$2.run():720
>> java.security.AccessController.doPrivileged():-2
>> javax.security.auth.Subject.doAs():422
>> org.apache.hadoop.security.UserGroupInformation.doAs():1657
>> org.apache.hadoop.ipc.Client$Connection.setupIOstreams():720
>> org.apache.hadoop.ipc.Client$Connection.access$2800():370
>> org.apache.hadoop.ipc.Client.getConnection():1529
>> org.apache.hadoop.ipc.Client.call():1446
>> org.apache.hadoop.ipc.Client.call():1407
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke():229
>> com.sun.proxy.$Proxy63.getListing():-1
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.g
>> etListing():573 sun.reflect.GeneratedMethodAccessor3.invoke():-1
>> sun.reflect.DelegatingMethodAccessorImpl.invoke():43
>> java.lang.reflect.Method.invoke():497
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod():187
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke():102
>> com.sun.proxy.$Proxy64.getListing():-1
>> org.apache.hadoop.hdfs.DFSClient.listPaths():2094
>> org.apache.hadoop.hdfs.DFSClient.listPaths():2077
>> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal():791
>> org.apache.hadoop.hdfs.DistributedFileSystem.access$700():106
>> org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall():853
>> org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall():849
>> org.apache.hadoop.fs.FileSystemLinkResolver.resolve():81
>> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus():860
>> org.apache.drill.exec.store.dfs.DrillFileSystem.listStatus():522
>> org.apache.drill.exec.store.dfs.WorkspaceSchemaFactory.accessible():16
>> 0
>> org.apache.drill.exec.store.dfs.FileSystemSchemaFactory$FileSystemSche
>> ma.():77
>> org.apache.drill.exec.store.dfs.FileSystemSchemaFactory.registerSchema
>> s():64
>> org.apache.drill.exec.store.dfs.FileSystemPlugin.registerSchemas():149
>> org.apache.drill.exec.store.StoragePluginRegistryImpl$DrillSchemaFacto
>> ry.registerSchemas():396
>> org.apache.drill.exec.store.SchemaTreeProvider.createRootSchema():110
>> org.apache.drill.exec.store.SchemaTreeProvider.createRootSchema():99
>> org.apache.drill.exec.ops.QueryContext.getRootSchema():164
>> org.apache.drill.exec.ops.QueryContext.getRootSchema():153
>> org.apache.drill.exec.ops.QueryContext.getNewDefaultSchema():139
>> org.apache.drill.exec.planner.sql.SqlConverter.():111
>> org.apache.drill.exec.planner.sql.DrillSqlWorker.getQueryPlan():101
>> org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan():79
>> org.apache.drill.exec.work.foreman.Foreman.runSQL():1050
>> org.apache.drill.exec.work.foreman.Foreman.run():280
>> java.util.concurrent.ThreadPoolExecutor.runWorker():1142
>> java.util.concurrent.ThreadPoolExecutor$Worker.run():617
>> java.lang.Thread.run():745
>>
>>
>>
>> -----Original Message-----
>> From: Padma Penumarthy [mailto:ppenumarthy@mapr.com]
>> Sent: August 23, 17 9:09 PM
>> To: user@drill.apache.org
>> Subject: Re: Apache Drill unable to read files from HDFS (Resource
>> error: Failed to create schema tree)
>>
>> For HDFS, your storage plugin configuration should be something like this:
>>
>> {
>>     "type": "file",
>>     "enabled": true,
>>     "connection": "hdfs://<IP Address>:<Port>”,   // IP address and port number of name node metadata service
>>     "config": null,
>>     "workspaces": {
>>       "root": {
>>         "location": "/",
>>         "writable": true,
>>         "defaultInputFormat": null
>>       },
>>       "tmp": {
>>         "location": "/tmp",
>>         "writable": true,
>>         "defaultInputFormat": null
>>       }
>>     },
>>
>> Also, try hadoop dfs -ls command to see if you can list the files.
>>
>> Thanks,
>> Padma
>>
>>
>> On Aug 23, 2017, at 12:18 PM, Lee, David <Da...@blackrock.com>> wrote:
>>
>> HDFS storage plugin should be set to your HDFS name node url..
>>
>> -----Original Message-----
>> From: Zubair, Muhammad [mailto:muhammad.zubair@rbc.com.INVALID]
>> Sent: Wednesday, August 23, 2017 11:33 AM
>> To: user@drill.apache.org<ma...@drill.apache.org>
>> Subject: Apache Drill unable to read files from HDFS (Resource error:
>> Failed to create schema tree)
>>
>> Hello,
>> After setting up drill on one of the edge nodes of our HDFS cluster, I am unable to read any hdfs files. I can query data from local files (as long as they are in a folder that has 777 permissions) but querying data from hdfs fails with the following error:
>> Error: RESOURCE ERROR: Failed to create schema tree.
>> [Error Id: d9f7908c-6c3b-49c0-a11e-71c004d27f46 on server-name:31010]
>> (state=,code=0)
>> Query:
>> 0: jdbc:drill:zk=local> select * from hdfs.`/names/city.parquet` limit 2; Querying from local file works fine:
>> 0: jdbc:drill:zk=local> select * from dfs.`/tmp/city.parquet` limit 2; My HDFS settings are similar to the DFS settings, except for the connection URL being the server address instead of file:/// I can't find anything online regarding this error for drill.
>> ______________________________________________________________________
>> _ If you received this email in error, please advise the sender (by
>> return email or otherwise) immediately. You have consented to receive the attached electronically at the above-noted email address; please retain a copy of this confirmation for future reference.
>>
>> Si vous recevez ce courriel par erreur, veuillez en aviser l'expéditeur immédiatement, par retour de courriel ou par un autre moyen. Vous avez accepté de recevoir le(s) document(s) ci-joint(s) par voie électronique à l'adresse courriel indiquée ci-dessus; veuillez conserver une copie de cette confirmation pour les fins de reference future.
>>
>>
>> This message may contain information that is confidential or privileged. If you are not the intended recipient, please advise the sender immediately and delete this message. See http://www.blackrock.com/corporate/en-us/compliance/email-disclaimers for further information.  Please refer to http://www.blackrock.com/corporate/en-us/compliance/privacy-policy for more information about BlackRock’s Privacy Policy.
>>
>> For a list of BlackRock's office addresses worldwide, see http://www.blackrock.com/corporate/en-us/about-us/contacts-locations.
>>
>> © 2017 BlackRock, Inc. All rights reserved.
>>
>> ______________________________________________________________________
>> _ If you received this email in error, please advise the sender (by
>> return email or otherwise) immediately. You have consented to receive the attached electronically at the above-noted email address; please retain a copy of this confirmation for future reference.
>>
>> Si vous recevez ce courriel par erreur, veuillez en aviser l'expéditeur immédiatement, par retour de courriel ou par un autre moyen. Vous avez accepté de recevoir le(s) document(s) ci-joint(s) par voie électronique à l'adresse courriel indiquée ci-dessus; veuillez conserver une copie de cette confirmation pour les fins de reference future.
>
> Thank you,
>
> Vlad
>
>
> _______________________________________________________________________
> If you received this email in error, please advise the sender (by return email or otherwise) immediately. You have consented to receive the attached electronically at the above-noted email address; please retain a copy of this confirmation for future reference.
>
> Si vous recevez ce courriel par erreur, veuillez en aviser l'expéditeur immédiatement, par retour de courriel ou par un autre moyen. Vous avez accepté de recevoir le(s) document(s) ci-joint(s) par voie électronique à l'adresse courriel indiquée ci-dessus; veuillez conserver une copie de cette confirmation pour les fins de reference future.


Thank you,

Vlad

RE: Apache Drill unable to read files from HDFS (Resource error: Failed to create schema tree)

Posted by "Zubair, Muhammad" <mu...@rbc.com.INVALID>.
$ yarn version
Hadoop 2.7.1.2.4.2.0-258
Subversion git@github.com:hortonworks/hadoop.git -r 13debf893a605e8a88df18a7d8d214f571e05289
Compiled by jenkins on 2016-04-25T05:46Z
Compiled with protoc 2.5.0
From source with checksum 2a2d95f05ec6c3ac547ed58cab713ac
This command was run using /usr/hdp/2.4.2.0-258/hadoop/hadoop-common-2.7.1.2.4.2.0-258.jar

Drill process:

10961 pts/0    Sl+    0:18 /etc/alternatives/java_sdk_1.8.0/bin/java -XX:MaxPermSize=512M -Dlog.path=/app /tools/drill/apache-drill

$ jinfo 10961
Attaching to process ID 10961, please wait...
Debugger attached successfully.
Server compiler detected.
JVM version is 25.71-b15
Java System Properties:

Exception in thread "main" java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:497)
        at sun.tools.jinfo.JInfo.runTool(JInfo.java:108)
        at sun.tools.jinfo.JInfo.main(JInfo.java:76)
Caused by: java.lang.InternalError: Metadata does not appear to be polymorphic
        at sun.jvm.hotspot.types.basic.BasicTypeDataBase.findDynamicTypeForAddress(BasicTypeDataBase.java:278)
        at sun.jvm.hotspot.runtime.VirtualBaseConstructor.instantiateWrapperFor(VirtualBaseConstructor.java:102)
        at sun.jvm.hotspot.oops.Metadata.instantiateWrapperFor(Metadata.java:68)
        at sun.jvm.hotspot.memory.SystemDictionary.getSystemKlass(SystemDictionary.java:127)
        at sun.jvm.hotspot.runtime.VM.readSystemProperties(VM.java:879)
        at sun.jvm.hotspot.runtime.VM.getSystemProperties(VM.java:873)
        at sun.jvm.hotspot.tools.SysPropsDumper.run(SysPropsDumper.java:44)
        at sun.jvm.hotspot.tools.JInfo$1.run(JInfo.java:79)
        at sun.jvm.hotspot.tools.JInfo.run(JInfo.java:94)
        at sun.jvm.hotspot.tools.Tool.startInternal(Tool.java:260)
        at sun.jvm.hotspot.tools.Tool.start(Tool.java:223)
        at sun.jvm.hotspot.tools.Tool.execute(Tool.java:118)
        at sun.jvm.hotspot.tools.JInfo.main(JInfo.java:138)
        ... 6 more



-----Original Message-----
From: Vlad Rozov [mailto:vrozov@apache.org] 
Sent: August 24, 17 12:17 PM
To: user@drill.apache.org
Subject: Re: Apache Drill unable to read files from HDFS (Resource error: Failed to create schema tree)

One of possible problems is a mismatch of yarn libraries on the edge node. What hadoop distro and version do you have on the edge node? Can you provide output (classpath) of "jinfo <pid>" where pid is Foreman/Drillbit process id.

Caused By (java.lang.NoClassDefFoundError) org/apache/hadoop/yarn/api/ApplicationClientProtocolPB org.apache.hadoop.yarn.security.client.ClientRMSecurityInfo.getTokenInfo():65 org.apache.hadoop.security.SecurityUtil.getTokenInfo():331 org.apache.hadoop.security.SaslRpcClient.getServerToken():263 org.apache.hadoop.security.SaslRpcClient.createSaslClient():219 org.apache.hadoop.security.SaslRpcClient.selectSaslClient():159 org.apache.hadoop.security.SaslRpcClient.saslConnect():396 org.apache.hadoop.ipc.Client$Connection.setupSaslConnection():555 org.apache.hadoop.ipc.Client$Connection.access$1800():370 org.apache.hadoop.ipc.Client$Connection$2.run():724

Thank you,

Vlad

On 8/24/17 08:39, Zubair, Muhammad wrote:
> Padma,
> I've already modified the configuration as specified, but the error is still there.
>
> Running hdfs dfs -ls /folder does return list of files
>
> I enabled Verbose error logging, here's the full error message:
>
> org.apache.drill.common.exceptions.UserRemoteException: RESOURCE 
> ERROR: Failed to create schema tree. [Error Id: 
> 28c0c9a2-460d-460e-b93b-1d34e341cc65 on server:31010] 
> (java.io.IOException) Failed on local exception: java.io.IOException: 
> Couldn't set up IO streams; Host Details : local host is: 
> "server/10.61.60.113"; destination host is: "hdfs-server":8020; 
> org.apache.hadoop.net.NetUtils.wrapException():776 
> org.apache.hadoop.ipc.Client.call():1480 
> org.apache.hadoop.ipc.Client.call():1407 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke():229 
> com.sun.proxy.$Proxy63.getListing():-1 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.g
> etListing():573 sun.reflect.GeneratedMethodAccessor3.invoke():-1 
> sun.reflect.DelegatingMethodAccessorImpl.invoke():43 
> java.lang.reflect.Method.invoke():497 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod():187 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke():102 
> com.sun.proxy.$Proxy64.getListing():-1 
> org.apache.hadoop.hdfs.DFSClient.listPaths():2094 
> org.apache.hadoop.hdfs.DFSClient.listPaths():2077 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal():791 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$700():106 
> org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall():853 
> org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall():849 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve():81 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus():860 
> org.apache.drill.exec.store.dfs.DrillFileSystem.listStatus():522 
> org.apache.drill.exec.store.dfs.WorkspaceSchemaFactory.accessible():16
> 0 
> org.apache.drill.exec.store.dfs.FileSystemSchemaFactory$FileSystemSche
> ma.():77 
> org.apache.drill.exec.store.dfs.FileSystemSchemaFactory.registerSchema
> s():64 
> org.apache.drill.exec.store.dfs.FileSystemPlugin.registerSchemas():149 
> org.apache.drill.exec.store.StoragePluginRegistryImpl$DrillSchemaFacto
> ry.registerSchemas():396 
> org.apache.drill.exec.store.SchemaTreeProvider.createRootSchema():110 
> org.apache.drill.exec.store.SchemaTreeProvider.createRootSchema():99 
> org.apache.drill.exec.ops.QueryContext.getRootSchema():164 
> org.apache.drill.exec.ops.QueryContext.getRootSchema():153 
> org.apache.drill.exec.ops.QueryContext.getNewDefaultSchema():139 
> org.apache.drill.exec.planner.sql.SqlConverter.():111 
> org.apache.drill.exec.planner.sql.DrillSqlWorker.getQueryPlan():101 
> org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan():79 
> org.apache.drill.exec.work.foreman.Foreman.runSQL():1050 
> org.apache.drill.exec.work.foreman.Foreman.run():280 
> java.util.concurrent.ThreadPoolExecutor.runWorker():1142 
> java.util.concurrent.ThreadPoolExecutor$Worker.run():617 
> java.lang.Thread.run():745 Caused By (java.io.IOException) Couldn't 
> set up IO streams 
> org.apache.hadoop.ipc.Client$Connection.setupIOstreams():788 
> org.apache.hadoop.ipc.Client$Connection.access$2800():370 
> org.apache.hadoop.ipc.Client.getConnection():1529 
> org.apache.hadoop.ipc.Client.call():1446 
> org.apache.hadoop.ipc.Client.call():1407 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke():229 
> com.sun.proxy.$Proxy63.getListing():-1 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.g
> etListing():573 sun.reflect.GeneratedMethodAccessor3.invoke():-1 
> sun.reflect.DelegatingMethodAccessorImpl.invoke():43 
> java.lang.reflect.Method.invoke():497 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod():187 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke():102 
> com.sun.proxy.$Proxy64.getListing():-1 
> org.apache.hadoop.hdfs.DFSClient.listPaths():2094 
> org.apache.hadoop.hdfs.DFSClient.listPaths():2077 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal():791 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$700():106 
> org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall():853 
> org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall():849 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve():81 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus():860 
> org.apache.drill.exec.store.dfs.DrillFileSystem.listStatus():522 
> org.apache.drill.exec.store.dfs.WorkspaceSchemaFactory.accessible():16
> 0 
> org.apache.drill.exec.store.dfs.FileSystemSchemaFactory$FileSystemSche
> ma.():77 
> org.apache.drill.exec.store.dfs.FileSystemSchemaFactory.registerSchema
> s():64 
> org.apache.drill.exec.store.dfs.FileSystemPlugin.registerSchemas():149 
> org.apache.drill.exec.store.StoragePluginRegistryImpl$DrillSchemaFacto
> ry.registerSchemas():396 
> org.apache.drill.exec.store.SchemaTreeProvider.createRootSchema():110 
> org.apache.drill.exec.store.SchemaTreeProvider.createRootSchema():99 
> org.apache.drill.exec.ops.QueryContext.getRootSchema():164 
> org.apache.drill.exec.ops.QueryContext.getRootSchema():153 
> org.apache.drill.exec.ops.QueryContext.getNewDefaultSchema():139 
> org.apache.drill.exec.planner.sql.SqlConverter.():111 
> org.apache.drill.exec.planner.sql.DrillSqlWorker.getQueryPlan():101 
> org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan():79 
> org.apache.drill.exec.work.foreman.Foreman.runSQL():1050 
> org.apache.drill.exec.work.foreman.Foreman.run():280 
> java.util.concurrent.ThreadPoolExecutor.runWorker():1142 
> java.util.concurrent.ThreadPoolExecutor$Worker.run():617 
> java.lang.Thread.run():745 Caused By (java.lang.NoClassDefFoundError) 
> org/apache/hadoop/yarn/api/ApplicationClientProtocolPB 
> org.apache.hadoop.yarn.security.client.ClientRMSecurityInfo.getTokenIn
> fo():65 org.apache.hadoop.security.SecurityUtil.getTokenInfo():331 
> org.apache.hadoop.security.SaslRpcClient.getServerToken():263 
> org.apache.hadoop.security.SaslRpcClient.createSaslClient():219 
> org.apache.hadoop.security.SaslRpcClient.selectSaslClient():159 
> org.apache.hadoop.security.SaslRpcClient.saslConnect():396 
> org.apache.hadoop.ipc.Client$Connection.setupSaslConnection():555 
> org.apache.hadoop.ipc.Client$Connection.access$1800():370 
> org.apache.hadoop.ipc.Client$Connection$2.run():724 
> org.apache.hadoop.ipc.Client$Connection$2.run():720 
> java.security.AccessController.doPrivileged():-2 
> javax.security.auth.Subject.doAs():422 
> org.apache.hadoop.security.UserGroupInformation.doAs():1657 
> org.apache.hadoop.ipc.Client$Connection.setupIOstreams():720 
> org.apache.hadoop.ipc.Client$Connection.access$2800():370 
> org.apache.hadoop.ipc.Client.getConnection():1529 
> org.apache.hadoop.ipc.Client.call():1446 
> org.apache.hadoop.ipc.Client.call():1407 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke():229 
> com.sun.proxy.$Proxy63.getListing():-1 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.g
> etListing():573 sun.reflect.GeneratedMethodAccessor3.invoke():-1 
> sun.reflect.DelegatingMethodAccessorImpl.invoke():43 
> java.lang.reflect.Method.invoke():497 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod():187 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke():102 
> com.sun.proxy.$Proxy64.getListing():-1 
> org.apache.hadoop.hdfs.DFSClient.listPaths():2094 
> org.apache.hadoop.hdfs.DFSClient.listPaths():2077 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal():791 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$700():106 
> org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall():853 
> org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall():849 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve():81 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus():860 
> org.apache.drill.exec.store.dfs.DrillFileSystem.listStatus():522 
> org.apache.drill.exec.store.dfs.WorkspaceSchemaFactory.accessible():16
> 0 
> org.apache.drill.exec.store.dfs.FileSystemSchemaFactory$FileSystemSche
> ma.():77 
> org.apache.drill.exec.store.dfs.FileSystemSchemaFactory.registerSchema
> s():64 
> org.apache.drill.exec.store.dfs.FileSystemPlugin.registerSchemas():149 
> org.apache.drill.exec.store.StoragePluginRegistryImpl$DrillSchemaFacto
> ry.registerSchemas():396 
> org.apache.drill.exec.store.SchemaTreeProvider.createRootSchema():110 
> org.apache.drill.exec.store.SchemaTreeProvider.createRootSchema():99 
> org.apache.drill.exec.ops.QueryContext.getRootSchema():164 
> org.apache.drill.exec.ops.QueryContext.getRootSchema():153 
> org.apache.drill.exec.ops.QueryContext.getNewDefaultSchema():139 
> org.apache.drill.exec.planner.sql.SqlConverter.():111 
> org.apache.drill.exec.planner.sql.DrillSqlWorker.getQueryPlan():101 
> org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan():79 
> org.apache.drill.exec.work.foreman.Foreman.runSQL():1050 
> org.apache.drill.exec.work.foreman.Foreman.run():280 
> java.util.concurrent.ThreadPoolExecutor.runWorker():1142 
> java.util.concurrent.ThreadPoolExecutor$Worker.run():617 
> java.lang.Thread.run():745
>
>
>
> -----Original Message-----
> From: Padma Penumarthy [mailto:ppenumarthy@mapr.com]
> Sent: August 23, 17 9:09 PM
> To: user@drill.apache.org
> Subject: Re: Apache Drill unable to read files from HDFS (Resource 
> error: Failed to create schema tree)
>
> For HDFS, your storage plugin configuration should be something like this:
>
> {
>    "type": "file",
>    "enabled": true,
>    "connection": "hdfs://<IP Address>:<Port>",   // IP address and port number of name node metadata service
>    "config": null,
>    "workspaces": {
>      "root": {
>        "location": "/",
>        "writable": true,
>        "defaultInputFormat": null
>      },
>      "tmp": {
>        "location": "/tmp",
>        "writable": true,
>        "defaultInputFormat": null
>      }
>    },
>
> Also, try the hadoop dfs -ls command to see if you can list the files.
>
> Thanks,
> Padma
>
>
> On Aug 23, 2017, at 12:18 PM, Lee, David <Da...@blackrock.com>> wrote:
>
> The HDFS storage plugin connection should be set to your HDFS name node URL.
>
> -----Original Message-----
> From: Zubair, Muhammad [mailto:muhammad.zubair@rbc.com.INVALID]
> Sent: Wednesday, August 23, 2017 11:33 AM
> To: user@drill.apache.org<ma...@drill.apache.org>
> Subject: Apache Drill unable to read files from HDFS (Resource error: 
> Failed to create schema tree)
>
> Hello,
> After setting up drill on one of the edge nodes of our HDFS cluster, I am unable to read any hdfs files. I can query data from local files (as long as they are in a folder that has 777 permissions) but querying data from hdfs fails with the following error:
> Error: RESOURCE ERROR: Failed to create schema tree.
> [Error Id: d9f7908c-6c3b-49c0-a11e-71c004d27f46 on server-name:31010] (state=,code=0)
> Query:
> 0: jdbc:drill:zk=local> select * from hdfs.`/names/city.parquet` limit 2;
> Querying from local file works fine:
> 0: jdbc:drill:zk=local> select * from dfs.`/tmp/city.parquet` limit 2;
> My HDFS settings are similar to the DFS settings, except for the connection URL being the server address instead of file:///
> I can't find anything online regarding this error for drill.
> _______________________________________________________________________
> If you received this email in error, please advise the sender (by return email or otherwise) immediately. You have consented to receive the attached electronically at the above-noted email address; please retain a copy of this confirmation for future reference.
>
> Si vous recevez ce courriel par erreur, veuillez en aviser l'expéditeur immédiatement, par retour de courriel ou par un autre moyen. Vous avez accepté de recevoir le(s) document(s) ci-joint(s) par voie électronique à l'adresse courriel indiquée ci-dessus; veuillez conserver une copie de cette confirmation pour les fins de reference future.
>
>
> This message may contain information that is confidential or privileged. If you are not the intended recipient, please advise the sender immediately and delete this message. See http://www.blackrock.com/corporate/en-us/compliance/email-disclaimers for further information.  Please refer to http://www.blackrock.com/corporate/en-us/compliance/privacy-policy for more information about BlackRock’s Privacy Policy.
>
> For a list of BlackRock's office addresses worldwide, see http://www.blackrock.com/corporate/en-us/about-us/contacts-locations.
>
> © 2017 BlackRock, Inc. All rights reserved.
>
> _______________________________________________________________________
> If you received this email in error, please advise the sender (by return email or otherwise) immediately. You have consented to receive the attached electronically at the above-noted email address; please retain a copy of this confirmation for future reference.
>
> Si vous recevez ce courriel par erreur, veuillez en aviser l'expéditeur immédiatement, par retour de courriel ou par un autre moyen. Vous avez accepté de recevoir le(s) document(s) ci-joint(s) par voie électronique à l'adresse courriel indiquée ci-dessus; veuillez conserver une copie de cette confirmation pour les fins de reference future.


Thank you,

Vlad


_______________________________________________________________________
If you received this email in error, please advise the sender (by return email or otherwise) immediately. You have consented to receive the attached electronically at the above-noted email address; please retain a copy of this confirmation for future reference.  

Si vous recevez ce courriel par erreur, veuillez en aviser l'expéditeur immédiatement, par retour de courriel ou par un autre moyen. Vous avez accepté de recevoir le(s) document(s) ci-joint(s) par voie électronique à l'adresse courriel indiquée ci-dessus; veuillez conserver une copie de cette confirmation pour les fins de reference future.

Re: Apache Drill unable to read files from HDFS (Resource error: Failed to create schema tree)

Posted by Vlad Rozov <vr...@apache.org>.
One possible problem is a mismatch of the yarn libraries on the edge node. 
Which Hadoop distro and version do you have on the edge node? Can you 
provide the output (classpath) of "jinfo <pid>", where pid is the 
Foreman/Drillbit process id?

Caused By (java.lang.NoClassDefFoundError) org/apache/hadoop/yarn/api/ApplicationClientProtocolPB org.apache.hadoop.yarn.security.client.ClientRMSecurityInfo.getTokenInfo():65 org.apache.hadoop.security.SecurityUtil.getTokenInfo():331 org.apache.hadoop.security.SaslRpcClient.getServerToken():263 org.apache.hadoop.security.SaslRpcClient.createSaslClient():219 org.apache.hadoop.security.SaslRpcClient.selectSaslClient():159 org.apache.hadoop.security.SaslRpcClient.saslConnect():396 org.apache.hadoop.ipc.Client$Connection.setupSaslConnection():555 org.apache.hadoop.ipc.Client$Connection.access$1800():370 org.apache.hadoop.ipc.Client$Connection$2.run():724
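
For example, a quick way to capture that (a sketch; jps ships with the JDK, and the grep pattern is only a guess at the process names):

   jps -lm | grep -iE 'drill|sqlline'   # find the Drillbit (or embedded sqlline) JVM and its pid
   jinfo <pid>                          # dumps VM flags and system properties, including java.class.path

Comparing the hadoop/yarn jars on that classpath against the output of "hadoop version" on the edge node should show whether the yarn libraries match the cluster.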

Thank you,

Vlad

On 8/24/17 08:39, Zubair, Muhammad wrote:
> Padma,
> I've already modified the configuration as specified, but the error is still there.
>
> Running hdfs dfs -ls /folder does return a list of files.
>
> I enabled verbose error logging; here's the full error message:
>
> org.apache.drill.common.exceptions.UserRemoteException: RESOURCE ERROR: Failed to create schema tree. [Error Id: 28c0c9a2-460d-460e-b93b-1d34e341cc65 on server:31010] (java.io.IOException) Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "server/10.61.60.113"; destination host is: "hdfs-server":8020; org.apache.hadoop.net.NetUtils.wrapException():776 org.apache.hadoop.ipc.Client.call():1480 org.apache.hadoop.ipc.Client.call():1407 org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke():229 com.sun.proxy.$Proxy63.getListing():-1 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getListing():573 sun.reflect.GeneratedMethodAccessor3.invoke():-1 sun.reflect.DelegatingMethodAccessorImpl.invoke():43 java.lang.reflect.Method.invoke():497 org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod():187 org.apache.hadoop.io.retry.RetryInvocationHandler.invoke():102 com.sun.proxy.$Proxy64.getListing():-1 org.apache.hadoop.hdfs.DFSClient.listPaths():2094 org.apache.hadoop.hdfs.DFSClient.listPaths():2077 org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal():791 org.apache.hadoop.hdfs.DistributedFileSystem.access$700():106 org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall():853 org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall():849 org.apache.hadoop.fs.FileSystemLinkResolver.resolve():81 org.apache.hadoop.hdfs.DistributedFileSystem.listStatus():860 org.apache.drill.exec.store.dfs.DrillFileSystem.listStatus():522 org.apache.drill.exec.store.dfs.WorkspaceSchemaFactory.accessible():160 org.apache.drill.exec.store.dfs.FileSystemSchemaFactory$FileSystemSchema.<init>():77 org.apache.drill.exec.store.dfs.FileSystemSchemaFactory.registerSchemas():64 org.apache.drill.exec.store.dfs.FileSystemPlugin.registerSchemas():149 org.apache.drill.exec.store.StoragePluginRegistryImpl$DrillSchemaFactory.registerSchemas():396 org.apache.drill.exec.store.SchemaTreeProvider.createRootSchema():110 org.apache.drill.exec.store.SchemaTreeProvider.createRootSchema():99 org.apache.drill.exec.ops.QueryContext.getRootSchema():164 org.apache.drill.exec.ops.QueryContext.getRootSchema():153 org.apache.drill.exec.ops.QueryContext.getNewDefaultSchema():139 org.apache.drill.exec.planner.sql.SqlConverter.<init>():111 org.apache.drill.exec.planner.sql.DrillSqlWorker.getQueryPlan():101 org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan():79 org.apache.drill.exec.work.foreman.Foreman.runSQL():1050 org.apache.drill.exec.work.foreman.Foreman.run():280 java.util.concurrent.ThreadPoolExecutor.runWorker():1142 java.util.concurrent.ThreadPoolExecutor$Worker.run():617 java.lang.Thread.run():745
> Caused By (java.io.IOException) Couldn't set up IO streams org.apache.hadoop.ipc.Client$Connection.setupIOstreams():788 org.apache.hadoop.ipc.Client$Connection.access$2800():370 org.apache.hadoop.ipc.Client.getConnection():1529 org.apache.hadoop.ipc.Client.call():1446 org.apache.hadoop.ipc.Client.call():1407 org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke():229 com.sun.proxy.$Proxy63.getListing():-1 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getListing():573 sun.reflect.GeneratedMethodAccessor3.invoke():-1 sun.reflect.DelegatingMethodAccessorImpl.invoke():43 java.lang.reflect.Method.invoke():497 org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod():187 org.apache.hadoop.io.retry.RetryInvocationHandler.invoke():102 com.sun.proxy.$Proxy64.getListing():-1 org.apache.hadoop.hdfs.DFSClient.listPaths():2094 org.apache.hadoop.hdfs.DFSClient.listPaths():2077 org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal():791 org.apache.hadoop.hdfs.DistributedFileSystem.access$700():106 org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall():853 org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall():849 org.apache.hadoop.fs.FileSystemLinkResolver.resolve():81 org.apache.hadoop.hdfs.DistributedFileSystem.listStatus():860 org.apache.drill.exec.store.dfs.DrillFileSystem.listStatus():522 org.apache.drill.exec.store.dfs.WorkspaceSchemaFactory.accessible():160 org.apache.drill.exec.store.dfs.FileSystemSchemaFactory$FileSystemSchema.<init>():77 org.apache.drill.exec.store.dfs.FileSystemSchemaFactory.registerSchemas():64 org.apache.drill.exec.store.dfs.FileSystemPlugin.registerSchemas():149 org.apache.drill.exec.store.StoragePluginRegistryImpl$DrillSchemaFactory.registerSchemas():396 org.apache.drill.exec.store.SchemaTreeProvider.createRootSchema():110 org.apache.drill.exec.store.SchemaTreeProvider.createRootSchema():99 org.apache.drill.exec.ops.QueryContext.getRootSchema():164 org.apache.drill.exec.ops.QueryContext.getRootSchema():153 org.apache.drill.exec.ops.QueryContext.getNewDefaultSchema():139 org.apache.drill.exec.planner.sql.SqlConverter.<init>():111 org.apache.drill.exec.planner.sql.DrillSqlWorker.getQueryPlan():101 org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan():79 org.apache.drill.exec.work.foreman.Foreman.runSQL():1050 org.apache.drill.exec.work.foreman.Foreman.run():280 java.util.concurrent.ThreadPoolExecutor.runWorker():1142 java.util.concurrent.ThreadPoolExecutor$Worker.run():617 java.lang.Thread.run():745
> Caused By (java.lang.NoClassDefFoundError) org/apache/hadoop/yarn/api/ApplicationClientProtocolPB org.apache.hadoop.yarn.security.client.ClientRMSecurityInfo.getTokenInfo():65 org.apache.hadoop.security.SecurityUtil.getTokenInfo():331 org.apache.hadoop.security.SaslRpcClient.getServerToken():263 org.apache.hadoop.security.SaslRpcClient.createSaslClient():219 org.apache.hadoop.security.SaslRpcClient.selectSaslClient():159 org.apache.hadoop.security.SaslRpcClient.saslConnect():396 org.apache.hadoop.ipc.Client$Connection.setupSaslConnection():555 org.apache.hadoop.ipc.Client$Connection.access$1800():370 org.apache.hadoop.ipc.Client$Connection$2.run():724 org.apache.hadoop.ipc.Client$Connection$2.run():720 java.security.AccessController.doPrivileged():-2 javax.security.auth.Subject.doAs():422 org.apache.hadoop.security.UserGroupInformation.doAs():1657 org.apache.hadoop.ipc.Client$Connection.setupIOstreams():720 org.apache.hadoop.ipc.Client$Connection.access$2800():370 org.apache.hadoop.ipc.Client.getConnection():1529 org.apache.hadoop.ipc.Client.call():1446 org.apache.hadoop.ipc.Client.call():1407 org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke():229 com.sun.proxy.$Proxy63.getListing():-1 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getListing():573 sun.reflect.GeneratedMethodAccessor3.invoke():-1 sun.reflect.DelegatingMethodAccessorImpl.invoke():43 java.lang.reflect.Method.invoke():497 org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod():187 org.apache.hadoop.io.retry.RetryInvocationHandler.invoke():102 com.sun.proxy.$Proxy64.getListing():-1 org.apache.hadoop.hdfs.DFSClient.listPaths():2094 org.apache.hadoop.hdfs.DFSClient.listPaths():2077 org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal():791 org.apache.hadoop.hdfs.DistributedFileSystem.access$700():106 org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall():853 org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall():849 org.apache.hadoop.fs.FileSystemLinkResolver.resolve():81 org.apache.hadoop.hdfs.DistributedFileSystem.listStatus():860 org.apache.drill.exec.store.dfs.DrillFileSystem.listStatus():522 org.apache.drill.exec.store.dfs.WorkspaceSchemaFactory.accessible():160 org.apache.drill.exec.store.dfs.FileSystemSchemaFactory$FileSystemSchema.<init>():77 org.apache.drill.exec.store.dfs.FileSystemSchemaFactory.registerSchemas():64 org.apache.drill.exec.store.dfs.FileSystemPlugin.registerSchemas():149 org.apache.drill.exec.store.StoragePluginRegistryImpl$DrillSchemaFactory.registerSchemas():396 org.apache.drill.exec.store.SchemaTreeProvider.createRootSchema():110 org.apache.drill.exec.store.SchemaTreeProvider.createRootSchema():99 org.apache.drill.exec.ops.QueryContext.getRootSchema():164 org.apache.drill.exec.ops.QueryContext.getRootSchema():153 org.apache.drill.exec.ops.QueryContext.getNewDefaultSchema():139 org.apache.drill.exec.planner.sql.SqlConverter.<init>():111 org.apache.drill.exec.planner.sql.DrillSqlWorker.getQueryPlan():101 org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan():79 org.apache.drill.exec.work.foreman.Foreman.runSQL():1050 org.apache.drill.exec.work.foreman.Foreman.run():280 java.util.concurrent.ThreadPoolExecutor.runWorker():1142 java.util.concurrent.ThreadPoolExecutor$Worker.run():617 java.lang.Thread.run():745
>
>
>
> -----Original Message-----
> From: Padma Penumarthy [mailto:ppenumarthy@mapr.com]
> Sent: August 23, 17 9:09 PM
> To: user@drill.apache.org
> Subject: Re: Apache Drill unable to read files from HDFS (Resource error: Failed to create schema tree)
>
> For HDFS, your storage plugin configuration should be something like this:
>
> {
>    "type": "file",
>    "enabled": true,
>    "connection": "hdfs://<IP Address>:<Port>",   // IP address and port number of name node metadata service
>    "config": null,
>    "workspaces": {
>      "root": {
>        "location": "/",
>        "writable": true,
>        "defaultInputFormat": null
>      },
>      "tmp": {
>        "location": "/tmp",
>        "writable": true,
>        "defaultInputFormat": null
>      }
>    },
>
> Also, try the hadoop dfs -ls command to see if you can list the files.
>
> Thanks,
> Padma
>
>
> On Aug 23, 2017, at 12:18 PM, Lee, David <Da...@blackrock.com>> wrote:
>
> The HDFS storage plugin connection should be set to your HDFS name node URL.
>
> -----Original Message-----
> From: Zubair, Muhammad [mailto:muhammad.zubair@rbc.com.INVALID]
> Sent: Wednesday, August 23, 2017 11:33 AM
> To: user@drill.apache.org<ma...@drill.apache.org>
> Subject: Apache Drill unable to read files from HDFS (Resource error: Failed to create schema tree)
>
> Hello,
> After setting up drill on one of the edge nodes of our HDFS cluster, I am unable to read any hdfs files. I can query data from local files (as long as they are in a folder that has 777 permissions) but querying data from hdfs fails with the following error:
> Error: RESOURCE ERROR: Failed to create schema tree.
> [Error Id: d9f7908c-6c3b-49c0-a11e-71c004d27f46 on server-name:31010] (state=,code=0)
> Query:
> 0: jdbc:drill:zk=local> select * from hdfs.`/names/city.parquet` limit 2;
> Querying from local file works fine:
> 0: jdbc:drill:zk=local> select * from dfs.`/tmp/city.parquet` limit 2;
> My HDFS settings are similar to the DFS settings, except for the connection URL being the server address instead of file:///
> I can't find anything online regarding this error for drill.
> _______________________________________________________________________
> If you received this email in error, please advise the sender (by return email or otherwise) immediately. You have consented to receive the attached electronically at the above-noted email address; please retain a copy of this confirmation for future reference.
>
> Si vous recevez ce courriel par erreur, veuillez en aviser l'expéditeur immédiatement, par retour de courriel ou par un autre moyen. Vous avez accepté de recevoir le(s) document(s) ci-joint(s) par voie électronique à l'adresse courriel indiquée ci-dessus; veuillez conserver une copie de cette confirmation pour les fins de reference future.
>
>
> This message may contain information that is confidential or privileged. If you are not the intended recipient, please advise the sender immediately and delete this message. See http://www.blackrock.com/corporate/en-us/compliance/email-disclaimers for further information.  Please refer to http://www.blackrock.com/corporate/en-us/compliance/privacy-policy for more information about BlackRock’s Privacy Policy.
>
> For a list of BlackRock's office addresses worldwide, see http://www.blackrock.com/corporate/en-us/about-us/contacts-locations.
>
> © 2017 BlackRock, Inc. All rights reserved.
>
> _______________________________________________________________________
> If you received this email in error, please advise the sender (by return email or otherwise) immediately. You have consented to receive the attached electronically at the above-noted email address; please retain a copy of this confirmation for future reference.
>
> Si vous recevez ce courriel par erreur, veuillez en aviser l'expéditeur immédiatement, par retour de courriel ou par un autre moyen. Vous avez accepté de recevoir le(s) document(s) ci-joint(s) par voie électronique à l'adresse courriel indiquée ci-dessus; veuillez conserver une copie de cette confirmation pour les fins de reference future.


Thank you,

Vlad


RE: Apache Drill unable to read files from HDFS (Resource error: Failed to create schema tree)

Posted by "Zubair, Muhammad" <mu...@rbc.com.INVALID>.
Padma,
I've already modified the configuration as specified, but the error is still there.

Running hdfs dfs -ls /folder does return a list of files.

I enabled verbose error logging; here's the full error message:

org.apache.drill.common.exceptions.UserRemoteException: RESOURCE ERROR: Failed to create schema tree. [Error Id: 28c0c9a2-460d-460e-b93b-1d34e341cc65 on server:31010] (java.io.IOException) Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "server/10.61.60.113"; destination host is: "hdfs-server":8020; org.apache.hadoop.net.NetUtils.wrapException():776 org.apache.hadoop.ipc.Client.call():1480 org.apache.hadoop.ipc.Client.call():1407 org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke():229 com.sun.proxy.$Proxy63.getListing():-1 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getListing():573 sun.reflect.GeneratedMethodAccessor3.invoke():-1 sun.reflect.DelegatingMethodAccessorImpl.invoke():43 java.lang.reflect.Method.invoke():497 org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod():187 org.apache.hadoop.io.retry.RetryInvocationHandler.invoke():102 com.sun.proxy.$Proxy64.getListing():-1 org.apache.hadoop.hdfs.DFSClient.listPaths():2094 org.apache.hadoop.hdfs.DFSClient.listPaths():2077 org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal():791 org.apache.hadoop.hdfs.DistributedFileSystem.access$700():106 org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall():853 org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall():849 org.apache.hadoop.fs.FileSystemLinkResolver.resolve():81 org.apache.hadoop.hdfs.DistributedFileSystem.listStatus():860 org.apache.drill.exec.store.dfs.DrillFileSystem.listStatus():522 org.apache.drill.exec.store.dfs.WorkspaceSchemaFactory.accessible():160 org.apache.drill.exec.store.dfs.FileSystemSchemaFactory$FileSystemSchema.<init>():77 org.apache.drill.exec.store.dfs.FileSystemSchemaFactory.registerSchemas():64 org.apache.drill.exec.store.dfs.FileSystemPlugin.registerSchemas():149 org.apache.drill.exec.store.StoragePluginRegistryImpl$DrillSchemaFactory.registerSchemas():396 org.apache.drill.exec.store.SchemaTreeProvider.createRootSchema():110 org.apache.drill.exec.store.SchemaTreeProvider.createRootSchema():99 org.apache.drill.exec.ops.QueryContext.getRootSchema():164 org.apache.drill.exec.ops.QueryContext.getRootSchema():153 org.apache.drill.exec.ops.QueryContext.getNewDefaultSchema():139 org.apache.drill.exec.planner.sql.SqlConverter.<init>():111 org.apache.drill.exec.planner.sql.DrillSqlWorker.getQueryPlan():101 org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan():79 org.apache.drill.exec.work.foreman.Foreman.runSQL():1050 org.apache.drill.exec.work.foreman.Foreman.run():280 java.util.concurrent.ThreadPoolExecutor.runWorker():1142 java.util.concurrent.ThreadPoolExecutor$Worker.run():617 java.lang.Thread.run():745
Caused By (java.io.IOException) Couldn't set up IO streams org.apache.hadoop.ipc.Client$Connection.setupIOstreams():788 org.apache.hadoop.ipc.Client$Connection.access$2800():370 org.apache.hadoop.ipc.Client.getConnection():1529 org.apache.hadoop.ipc.Client.call():1446 org.apache.hadoop.ipc.Client.call():1407 org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke():229 com.sun.proxy.$Proxy63.getListing():-1 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getListing():573 sun.reflect.GeneratedMethodAccessor3.invoke():-1 sun.reflect.DelegatingMethodAccessorImpl.invoke():43 java.lang.reflect.Method.invoke():497 org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod():187 org.apache.hadoop.io.retry.RetryInvocationHandler.invoke():102 com.sun.proxy.$Proxy64.getListing():-1 org.apache.hadoop.hdfs.DFSClient.listPaths():2094 org.apache.hadoop.hdfs.DFSClient.listPaths():2077 org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal():791 org.apache.hadoop.hdfs.DistributedFileSystem.access$700():106 org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall():853 org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall():849 org.apache.hadoop.fs.FileSystemLinkResolver.resolve():81 org.apache.hadoop.hdfs.DistributedFileSystem.listStatus():860 org.apache.drill.exec.store.dfs.DrillFileSystem.listStatus():522 org.apache.drill.exec.store.dfs.WorkspaceSchemaFactory.accessible():160 org.apache.drill.exec.store.dfs.FileSystemSchemaFactory$FileSystemSchema.<init>():77 org.apache.drill.exec.store.dfs.FileSystemSchemaFactory.registerSchemas():64 org.apache.drill.exec.store.dfs.FileSystemPlugin.registerSchemas():149 org.apache.drill.exec.store.StoragePluginRegistryImpl$DrillSchemaFactory.registerSchemas():396 org.apache.drill.exec.store.SchemaTreeProvider.createRootSchema():110 org.apache.drill.exec.store.SchemaTreeProvider.createRootSchema():99 org.apache.drill.exec.ops.QueryContext.getRootSchema():164 org.apache.drill.exec.ops.QueryContext.getRootSchema():153 org.apache.drill.exec.ops.QueryContext.getNewDefaultSchema():139 org.apache.drill.exec.planner.sql.SqlConverter.<init>():111 org.apache.drill.exec.planner.sql.DrillSqlWorker.getQueryPlan():101 org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan():79 org.apache.drill.exec.work.foreman.Foreman.runSQL():1050 org.apache.drill.exec.work.foreman.Foreman.run():280 java.util.concurrent.ThreadPoolExecutor.runWorker():1142 java.util.concurrent.ThreadPoolExecutor$Worker.run():617 java.lang.Thread.run():745
Caused By (java.lang.NoClassDefFoundError) org/apache/hadoop/yarn/api/ApplicationClientProtocolPB org.apache.hadoop.yarn.security.client.ClientRMSecurityInfo.getTokenInfo():65 org.apache.hadoop.security.SecurityUtil.getTokenInfo():331 org.apache.hadoop.security.SaslRpcClient.getServerToken():263 org.apache.hadoop.security.SaslRpcClient.createSaslClient():219 org.apache.hadoop.security.SaslRpcClient.selectSaslClient():159 org.apache.hadoop.security.SaslRpcClient.saslConnect():396 org.apache.hadoop.ipc.Client$Connection.setupSaslConnection():555 org.apache.hadoop.ipc.Client$Connection.access$1800():370 org.apache.hadoop.ipc.Client$Connection$2.run():724 org.apache.hadoop.ipc.Client$Connection$2.run():720 java.security.AccessController.doPrivileged():-2 javax.security.auth.Subject.doAs():422 org.apache.hadoop.security.UserGroupInformation.doAs():1657 org.apache.hadoop.ipc.Client$Connection.setupIOstreams():720 org.apache.hadoop.ipc.Client$Connection.access$2800():370 org.apache.hadoop.ipc.Client.getConnection():1529 org.apache.hadoop.ipc.Client.call():1446 org.apache.hadoop.ipc.Client.call():1407 org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke():229 com.sun.proxy.$Proxy63.getListing():-1 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getListing():573 sun.reflect.GeneratedMethodAccessor3.invoke():-1 sun.reflect.DelegatingMethodAccessorImpl.invoke():43 java.lang.reflect.Method.invoke():497 org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod():187 org.apache.hadoop.io.retry.RetryInvocationHandler.invoke():102 com.sun.proxy.$Proxy64.getListing():-1 org.apache.hadoop.hdfs.DFSClient.listPaths():2094 org.apache.hadoop.hdfs.DFSClient.listPaths():2077 org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal():791 org.apache.hadoop.hdfs.DistributedFileSystem.access$700():106 org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall():853 org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall():849 org.apache.hadoop.fs.FileSystemLinkResolver.resolve():81 org.apache.hadoop.hdfs.DistributedFileSystem.listStatus():860 org.apache.drill.exec.store.dfs.DrillFileSystem.listStatus():522 org.apache.drill.exec.store.dfs.WorkspaceSchemaFactory.accessible():160 org.apache.drill.exec.store.dfs.FileSystemSchemaFactory$FileSystemSchema.<init>():77 org.apache.drill.exec.store.dfs.FileSystemSchemaFactory.registerSchemas():64 org.apache.drill.exec.store.dfs.FileSystemPlugin.registerSchemas():149 org.apache.drill.exec.store.StoragePluginRegistryImpl$DrillSchemaFactory.registerSchemas():396 org.apache.drill.exec.store.SchemaTreeProvider.createRootSchema():110 org.apache.drill.exec.store.SchemaTreeProvider.createRootSchema():99 org.apache.drill.exec.ops.QueryContext.getRootSchema():164 org.apache.drill.exec.ops.QueryContext.getRootSchema():153 org.apache.drill.exec.ops.QueryContext.getNewDefaultSchema():139 org.apache.drill.exec.planner.sql.SqlConverter.<init>():111 org.apache.drill.exec.planner.sql.DrillSqlWorker.getQueryPlan():101 org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan():79 org.apache.drill.exec.work.foreman.Foreman.runSQL():1050 org.apache.drill.exec.work.foreman.Foreman.run():280 java.util.concurrent.ThreadPoolExecutor.runWorker():1142 java.util.concurrent.ThreadPoolExecutor$Worker.run():617 java.lang.Thread.run():745
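
For reference, verbose error output like the above is controlled by a standard Drill session option and can be turned on from sqlline:

  ALTER SESSION SET `exec.errors.verbose` = true;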



-----Original Message-----
From: Padma Penumarthy [mailto:ppenumarthy@mapr.com] 
Sent: August 23, 17 9:09 PM
To: user@drill.apache.org
Subject: Re: Apache Drill unable to read files from HDFS (Resource error: Failed to create schema tree)

For HDFS, your storage plugin configuration should be something like this:

{
  "type": "file",
  "enabled": true,
  "connection": "hdfs://<IP Address>:<Port>",   // IP address and port number of name node metadata service
  "config": null,
  "workspaces": {
    "root": {
      "location": "/",
      "writable": true,
      "defaultInputFormat": null
    },
    "tmp": {
      "location": "/tmp",
      "writable": true,
      "defaultInputFormat": null
    }
  },

Also, try the hadoop dfs -ls command to see if you can list the files.

Thanks,
Padma


On Aug 23, 2017, at 12:18 PM, Lee, David <Da...@blackrock.com>> wrote:

The HDFS storage plugin connection should be set to your HDFS name node URL.

-----Original Message-----
From: Zubair, Muhammad [mailto:muhammad.zubair@rbc.com.INVALID]
Sent: Wednesday, August 23, 2017 11:33 AM
To: user@drill.apache.org<ma...@drill.apache.org>
Subject: Apache Drill unable to read files from HDFS (Resource error: Failed to create schema tree)

Hello,
After setting up drill on one of the edge nodes of our HDFS cluster, I am unable to read any hdfs files. I can query data from local files (as long as they are in a folder that has 777 permissions) but querying data from hdfs fails with the following error:
Error: RESOURCE ERROR: Failed to create schema tree.
[Error Id: d9f7908c-6c3b-49c0-a11e-71c004d27f46 on server-name:31010] (state=,code=0)
Query:
0: jdbc:drill:zk=local> select * from hdfs.`/names/city.parquet` limit 2;
Querying from local file works fine:
0: jdbc:drill:zk=local> select * from dfs.`/tmp/city.parquet` limit 2;
My HDFS settings are similar to the DFS settings, except for the connection URL being the server address instead of file:///
I can't find anything online regarding this error for drill.
_______________________________________________________________________
If you received this email in error, please advise the sender (by return email or otherwise) immediately. You have consented to receive the attached electronically at the above-noted email address; please retain a copy of this confirmation for future reference.

Si vous recevez ce courriel par erreur, veuillez en aviser l'expéditeur immédiatement, par retour de courriel ou par un autre moyen. Vous avez accepté de recevoir le(s) document(s) ci-joint(s) par voie électronique à l'adresse courriel indiquée ci-dessus; veuillez conserver une copie de cette confirmation pour les fins de reference future.


This message may contain information that is confidential or privileged. If you are not the intended recipient, please advise the sender immediately and delete this message. See http://www.blackrock.com/corporate/en-us/compliance/email-disclaimers for further information.  Please refer to http://www.blackrock.com/corporate/en-us/compliance/privacy-policy for more information about BlackRock’s Privacy Policy.

For a list of BlackRock's office addresses worldwide, see http://www.blackrock.com/corporate/en-us/about-us/contacts-locations.

© 2017 BlackRock, Inc. All rights reserved.

_______________________________________________________________________
If you received this email in error, please advise the sender (by return email or otherwise) immediately. You have consented to receive the attached electronically at the above-noted email address; please retain a copy of this confirmation for future reference.  

Si vous recevez ce courriel par erreur, veuillez en aviser l'expéditeur immédiatement, par retour de courriel ou par un autre moyen. Vous avez accepté de recevoir le(s) document(s) ci-joint(s) par voie électronique à l'adresse courriel indiquée ci-dessus; veuillez conserver une copie de cette confirmation pour les fins de reference future.

Re: Apache Drill unable to read files from HDFS (Resource error: Failed to create schema tree)

Posted by Padma Penumarthy <pp...@mapr.com>.
For HDFS, your storage plugin configuration should be something like this:

{
  "type": "file",
  "enabled": true,
  "connection": "hdfs://<IP Address>:<Port>",   // IP address and port number of name node metadata service
  "config": null,
  "workspaces": {
    "root": {
      "location": "/",
      "writable": true,
      "defaultInputFormat": null
    },
    "tmp": {
      "location": "/tmp",
      "writable": true,
      "defaultInputFormat": null
    }
  },

Also, try the hadoop dfs -ls command to see if you can list the files.
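
For example (a sketch; the host and port are placeholders for your name node, and the commands should be run as the same user that starts the drillbit):

  hadoop fs -ls hdfs://<IP Address>:<Port>/
  hadoop fs -ls hdfs://<IP Address>:<Port>/tmp

If the listing works from the shell but Drill still fails, the problem is more likely in Drill's classpath or security configuration than in HDFS itself.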

Thanks,
Padma


On Aug 23, 2017, at 12:18 PM, Lee, David <Da...@blackrock.com>> wrote:

The HDFS storage plugin connection should be set to your HDFS name node URL.

-----Original Message-----
From: Zubair, Muhammad [mailto:muhammad.zubair@rbc.com.INVALID]
Sent: Wednesday, August 23, 2017 11:33 AM
To: user@drill.apache.org<ma...@drill.apache.org>
Subject: Apache Drill unable to read files from HDFS (Resource error: Failed to create schema tree)

Hello,
After setting up drill on one of the edge nodes of our HDFS cluster, I am unable to read any hdfs files. I can query data from local files (as long as they are in a folder that has 777 permissions) but querying data from hdfs fails with the following error:
Error: RESOURCE ERROR: Failed to create schema tree.
[Error Id: d9f7908c-6c3b-49c0-a11e-71c004d27f46 on server-name:31010] (state=,code=0)
Query:
0: jdbc:drill:zk=local> select * from hdfs.`/names/city.parquet` limit 2;
Querying from local file works fine:
0: jdbc:drill:zk=local> select * from dfs.`/tmp/city.parquet` limit 2;
My HDFS settings are similar to the DFS settings, except for the connection URL being the server address instead of file:///
I can't find anything online regarding this error for drill.
_______________________________________________________________________
If you received this email in error, please advise the sender (by return email or otherwise) immediately. You have consented to receive the attached electronically at the above-noted email address; please retain a copy of this confirmation for future reference.

Si vous recevez ce courriel par erreur, veuillez en aviser l'expéditeur immédiatement, par retour de courriel ou par un autre moyen. Vous avez accepté de recevoir le(s) document(s) ci-joint(s) par voie électronique à l'adresse courriel indiquée ci-dessus; veuillez conserver une copie de cette confirmation pour les fins de reference future.


This message may contain information that is confidential or privileged. If you are not the intended recipient, please advise the sender immediately and delete this message. See http://www.blackrock.com/corporate/en-us/compliance/email-disclaimers for further information.  Please refer to http://www.blackrock.com/corporate/en-us/compliance/privacy-policy for more information about BlackRock’s Privacy Policy.

For a list of BlackRock's office addresses worldwide, see http://www.blackrock.com/corporate/en-us/about-us/contacts-locations.

© 2017 BlackRock, Inc. All rights reserved.


RE: Apache Drill unable to read files from HDFS (Resource error: Failed to create schema tree)

Posted by "Lee, David" <Da...@blackrock.com>.
The HDFS storage plugin connection should be set to your HDFS name node URL.
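
For example, the connection entry in the plugin configuration would look something like this (the host name and port are placeholders):

  "connection": "hdfs://namenode.example.com:8020"

instead of the "file:///" connection that the default dfs plugin uses.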

-----Original Message-----
From: Zubair, Muhammad [mailto:muhammad.zubair@rbc.com.INVALID] 
Sent: Wednesday, August 23, 2017 11:33 AM
To: user@drill.apache.org
Subject: Apache Drill unable to read files from HDFS (Resource error: Failed to create schema tree)

Hello,
After setting up drill on one of the edge nodes of our HDFS cluster, I am unable to read any hdfs files. I can query data from local files (as long as they are in a folder that has 777 permissions) but querying data from hdfs fails with the following error:
Error: RESOURCE ERROR: Failed to create schema tree.
[Error Id: d9f7908c-6c3b-49c0-a11e-71c004d27f46 on server-name:31010] (state=,code=0)
Query:
0: jdbc:drill:zk=local> select * from hdfs.`/names/city.parquet` limit 2;
Querying from local file works fine:
0: jdbc:drill:zk=local> select * from dfs.`/tmp/city.parquet` limit 2;
My HDFS settings are similar to the DFS settings, except for the connection URL being the server address instead of file:///
I can't find anything online regarding this error for drill.
_______________________________________________________________________
If you received this email in error, please advise the sender (by return email or otherwise) immediately. You have consented to receive the attached electronically at the above-noted email address; please retain a copy of this confirmation for future reference.  

Si vous recevez ce courriel par erreur, veuillez en aviser l'expéditeur immédiatement, par retour de courriel ou par un autre moyen. Vous avez accepté de recevoir le(s) document(s) ci-joint(s) par voie électronique à l'adresse courriel indiquée ci-dessus; veuillez conserver une copie de cette confirmation pour les fins de reference future.


This message may contain information that is confidential or privileged. If you are not the intended recipient, please advise the sender immediately and delete this message. See http://www.blackrock.com/corporate/en-us/compliance/email-disclaimers for further information.  Please refer to http://www.blackrock.com/corporate/en-us/compliance/privacy-policy for more information about BlackRock’s Privacy Policy.

For a list of BlackRock's office addresses worldwide, see http://www.blackrock.com/corporate/en-us/about-us/contacts-locations.

© 2017 BlackRock, Inc. All rights reserved.