Posted to user@drill.apache.org by 南在南方 <i0...@qq.com> on 2014/05/24 15:14:47 UTC

Query error when applying drill in cluster environment

Hi,
I put the m2 release into my cluster to replace Drill m1.
I connected with "./bin/sqlline -n admin -p admin -u jdbc:drill:schema=dfs;zk=fan:2181,slave2:2181,slave3:2181"
and got the following error:


0: jdbc:drill:schema=dfs> select * from dfs.`AllstarFull.csv`;
Query failed: org.apache.drill.exec.rpc.RpcException: Remote failure while running query.[error_id: "29ff1ff5-1d59-4fe5-adf7-eaea48f7b9ed"
endpoint {
  address: "slave2"
  user_port: 31010
  control_port: 31011
  data_port: 31012
}
error_type: 0
message: "Failure while parsing sql. < ValidationException:[ org.eigenbase.util.EigenbaseContextException: From line 1, column 15 to line 1, column 35 ] < EigenbaseContextException:[ From line 1, column 15 to line 1, column 35 ] < SqlValidatorException:[ Table 'dfs.AllstarFull.csv' not found ]"
]
Error: exception while executing query (state=,code=0)



The following queries return similar error reports:
select * from `nation.parquet`;
select * from `tt.json`;


Here is my storage-plugins.json :
{
  "storage":{
    dfs: {
      type: "file",
      connection: "hdfs://100.2.12.103:9000",
      workspaces: {
        "root" : {
          location: "/",
          writable: false
        },
        "tmp" : {
          location: "/tmp",
          writable: true,
          storageformat: "csv"
        }
      },
      formats: {
        "psv" : {
          type: "text",
          extensions: [ "tbl" ],
          delimiter: "|"
        },
        "csv" : {
          type: "text",
          extensions: [ "csv" ],
          delimiter: ","
        },
        "tsv" : {
          type: "text",
          extensions: [ "tsv" ],
          delimiter: "\t"
        },
        "parquet" : {
          type: "parquet"
        },
        "json" : {
          type: "json"
        }
      }
    },
    cp: {
      type: "file",
      connection: "classpath:///"
    }
  }
}



Thanks.

Reply: Query error when applying drill in cluster environment

Posted by 南在南方 <i0...@qq.com>.
Hi,
I connected to Drill with
"./bin/sqlline -n admin -p admin -u jdbc:drill:schema=dfs;zk=fan:2181,slave2:2181,slave3:2181"
and with
"./bin/sqlline -n admin -p admin -u jdbc:drill:zk=fan:2181,slave2:2181,slave3:2181"


Both return a "No DrillbitEndpoint can be found" error:


 [root@fan bin]# sqlline -n admin -p admin -u jdbc:drill:schema=dfs;zk=fan:2181,slave2:2181,slave3:2181
No DrillbitEndpoint can be found
sqlline version 1.1.6
0: jdbc:drill:schema=dfs> 




Is ZooKeeper the cause of the problem?
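One possible cause of the "No DrillbitEndpoint can be found" error: the JDBC URL in the command above is unquoted, and in a POSIX shell the ';' terminates the command, so only jdbc:drill:schema=dfs ever reaches sqlline and the zk= host list is silently dropped. A minimal sketch of the quoting fix:

```shell
# Unquoted, the ';' splits the line into two commands and sqlline only
# sees 'jdbc:drill:schema=dfs'; single quotes keep the URL in one piece.
url='jdbc:drill:schema=dfs;zk=fan:2181,slave2:2181,slave3:2181'
echo "$url"
# ./bin/sqlline -n admin -p admin -u "$url"
```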





------------------ Original Message ------------------
From: "南在南方" <i0...@qq.com>
Sent: Saturday, May 24, 2014, 9:52 PM
To: "drill-user" <dr...@incubator.apache.org>; "drill-user" <dr...@incubator.apache.org>
Subject: Reply: Query error when applying drill in cluster environment



I found that my ZooKeeper on slave3 isn't working. I will rerun those queries once I've fixed it. Thanks.




------------------ Original Message ------------------
From: "Yash Sharma" <ya...@gmail.com>
Sent: Saturday, May 24, 2014, 9:57 PM
To: "drill-user" <dr...@incubator.apache.org>
Subject: Re: Query error when applying drill in cluster environment



Did you try connecting to sqlline without specifying the schema as well?
bin/sqlline -u jdbc:drill:zk=local -n admin -p admin

Also, could you paste the error you get while connecting with the dfs
schema specified?



On Sat, May 24, 2014 at 6:44 PM, 南在南方 <i0...@qq.com> wrote:

> [original message quoted in full above; snipped]

Reply: Query error when applying drill in cluster environment

Posted by 南在南方 <i0...@qq.com>.
I found that my ZooKeeper on slave3 isn't working. I will rerun those queries once I've fixed it. Thanks.
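A quick way to confirm which ZooKeeper nodes are healthy is the four-letter "ruok" command, sketched below (assumes nc is installed and the ruok command is enabled on the servers; a healthy node replies "imok"):

```shell
# Probe each ZooKeeper node on the client port; ${reply:-no response}
# substitutes a placeholder when the connection times out or is refused.
check_zk() {
  for host in "$@"; do
    reply=$(echo ruok | nc -w 2 "$host" 2181)
    echo "$host: ${reply:-no response}"
  done
}
# check_zk fan slave2 slave3
```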




------------------ Original Message ------------------
From: "Yash Sharma" <ya...@gmail.com>
Sent: Saturday, May 24, 2014, 9:57 PM
To: "drill-user" <dr...@incubator.apache.org>
Subject: Re: Query error when applying drill in cluster environment



Did you try connecting to sqlline without specifying the schema as well?
bin/sqlline -u jdbc:drill:zk=local -n admin -p admin

Also, could you paste the error you get while connecting with the dfs
schema specified?



On Sat, May 24, 2014 at 6:44 PM, 南在南方 <i0...@qq.com> wrote:

> [original message quoted in full above; snipped]

Re: Query error when applying drill in cluster environment

Posted by Yash Sharma <ya...@gmail.com>.
Did you try connecting to sqlline without specifying the schema as well?
bin/sqlline -u jdbc:drill:zk=local -n admin -p admin

Also, could you paste the error you get while connecting with the dfs
schema specified?



On Sat, May 24, 2014 at 6:44 PM, 南在南方 <i0...@qq.com> wrote:

> [original message quoted in full above; snipped]