Posted to issues@spark.apache.org by "Peter Bykov (JIRA)" <ji...@apache.org> on 2017/06/12 09:43:00 UTC
[jira] [Created] (SPARK-21063) Spark returns an empty result from remote Hadoop cluster
Peter Bykov created SPARK-21063:
-----------------------------------
Summary: Spark returns an empty result from remote Hadoop cluster
Key: SPARK-21063
URL: https://issues.apache.org/jira/browse/SPARK-21063
Project: Spark
Issue Type: Bug
Components: Spark Core, SQL
Affects Versions: 2.1.1
Reporter: Peter Bykov
Spark returns an empty result when querying a remote Hadoop cluster.
All firewall settings have been removed.
Querying over JDBC works properly with the hive-jdbc driver, version 1.1.1.
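For comparison, a minimal sketch of the plain-JDBC path that reportedly works, reusing the endpoint, credentials, and table name from the snippet below (all placeholders from this report):
{code:scala}
// Sketch of the direct hive-jdbc check that reportedly returns rows.
import java.sql.DriverManager

Class.forName("org.apache.hive.jdbc.HiveDriver")
val conn = DriverManager.getConnection(
  "jdbc:hive2://remote.hive.local:10000/default", "user", "pass")
val stmt = conn.createStatement()
val rs = stmt.executeQuery("SELECT * FROM test_table")
while (rs.next()) {
  println(rs.getString(1)) // first column of each row
}
rs.close(); stmt.close(); conn.close()
{code}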
The failing Spark code snippet is:
{code:scala}
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder
  .appName("RemoteSparkTest")
  .master("local")
  .getOrCreate()

// Read the Hive table through Spark's generic JDBC source,
// using the Hive JDBC driver.
val df = spark.read
  .format("jdbc")
  .option("url", "jdbc:hive2://remote.hive.local:10000/default")
  .option("user", "user")
  .option("password", "pass")
  .option("dbtable", "test_table")
  .option("driver", "org.apache.hive.jdbc.HiveDriver")
  .load()

df.show()
{code}
Result:
{noformat}
+-------------------+
|test_table.test_col|
+-------------------+
+-------------------+
{noformat}
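For reference, a hedged workaround sketch (not part of the original report): reading the same table through Spark's built-in Hive integration instead of the JDBC source, assuming a hive-site.xml for the remote cluster is on the classpath:
{code:scala}
import org.apache.spark.sql.SparkSession

// Sketch only: requires hive-site.xml pointing at the remote metastore.
val hiveSpark = SparkSession.builder
  .appName("RemoteSparkTest")
  .master("local")
  .enableHiveSupport() // use the Hive catalog directly, bypassing JDBC
  .getOrCreate()

hiveSpark.sql("SELECT * FROM default.test_table").show()
{code}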
All manipulations, such as:
{code:scala}
df.select("*").show()
{code}
return an empty result as well.
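A possible debugging step, not part of the original report: printing the schema shows the column name exactly as the Hive JDBC driver reports it (the table-qualified test_table.test_col visible in the output above), which can help narrow down whether the JDBC source and the driver disagree on column naming:
{code:scala}
// Sketch only: df is the DataFrame from the snippet above.
df.printSchema()    // column arrives as "test_table.test_col", per the driver
println(df.count()) // 0 here, matching the empty show() output
{code}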