Posted to dev@calcite.apache.org by Ben Vogan <be...@shopkick.com> on 2017/05/23 00:26:46 UTC

Connecting to Druid via JDBC

Hi all,

We are experimenting with Druid's new JDBC functionality provided by
Calcite and the Avatica JDBC Driver.  I would like to be able to query
Druid via Spark (v1.6) and thought that the JDBC Driver would provide our
analysts with a familiar mechanism.  Is this possible?

I am wholly unfamiliar with the Avatica driver, and it is unclear to me
which class is the proper entry point.

I have tried:

val druidDf = sqlContext.read
  .format("jdbc")
  .options(Map(
    "url" -> "jdbc:avatica:remote:url=http://mydruidbroker:8082/druid/v2/sql/avatica/",
    "dbtable" -> "mydruidtable",
    "driver" -> "org.apache.calcite.avatica.remote.Driver",
    "fetchSize" -> "10000"))
  .load()

But this gives me an UnsupportedOperationException.

I tried changing the driver to org.apache.calcite.avatica.UnregisteredDriver,
but this gives me:

java.lang.IllegalAccessException: Class
org.apache.spark.sql.execution.datasources.jdbc.DriverRegistry$ can not
access a member of class org.apache.calcite.avatica.UnregisteredDriver
with modifiers "protected"

I presume this is because the constructor is protected.
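That presumption can be reproduced outside Spark. A minimal sketch of the failure mode: Spark's DriverRegistry instantiates the named driver class reflectively, and a protected no-arg constructor is not accessible from outside the class's package, so reflection throws IllegalAccessException. java.util.Calendar is used here purely as a stand-in for UnregisteredDriver, because it likewise exposes only protected constructors.

```scala
object ProtectedConstructorDemo {
  def main(args: Array[String]): Unit = {
    // Stand-in class with only protected constructors (like UnregisteredDriver).
    val cls = Class.forName("java.util.Calendar")
    try {
      // Same reflective instantiation path that DriverRegistry relies on.
      cls.newInstance()
      println("instantiated")
    } catch {
      case e: IllegalAccessException =>
        println(s"IllegalAccessException: ${e.getMessage}")
    }
  }
}
```

This is why pointing Spark at UnregisteredDriver cannot work; the remote.Driver subclass, with its public registration path, is the intended entry point.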

If someone can point me in the correct direction I would greatly appreciate
it.

Thank you,
-- 
*BENJAMIN VOGAN* | Data Platform Team Lead


Re: Connecting to Druid via JDBC

Posted by Josh Elser <el...@apache.org>.
Hi Ben,

Can you share the full stack trace accompanying the
UnsupportedOperationException? Your first example looks right to me;
perhaps Spark is invoking a method we don't have implemented yet (but
could easily add before Avatica 1.10).
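In the meantime, one way to isolate whether the failure is on the Spark side or in the driver itself is to exercise the Avatica driver with plain JDBC, outside Spark. A minimal sketch, assuming the hypothetical broker URL and table name from the original message and the Avatica client jar on the classpath:

```scala
import java.sql.DriverManager

object AvaticaSmokeTest {
  def main(args: Array[String]): Unit = {
    // Explicit registration rules out classpath/driver-loading problems
    // (JDBC 4 auto-loading normally makes this unnecessary).
    Class.forName("org.apache.calcite.avatica.remote.Driver")

    // Broker URL and table are the placeholder values from the Spark example.
    val url =
      "jdbc:avatica:remote:url=http://mydruidbroker:8082/druid/v2/sql/avatica/"
    val conn = DriverManager.getConnection(url)
    try {
      val rs = conn.createStatement().executeQuery(
        "SELECT COUNT(*) FROM mydruidtable")
      while (rs.next()) println(rs.getLong(1))
    } finally {
      conn.close()
    }
  }
}
```

If this works, the problem is in whatever extra method Spark's JDBC data source invokes; if it fails here too, the stack trace from this small program is the one worth sharing.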


On May 22, 2017 20:28, "Ben Vogan" <be...@shopkick.com> wrote:
