Posted to issues@spark.apache.org by "YANGFEIRAN (JIRA)" <ji...@apache.org> on 2017/07/07 08:37:00 UTC
[jira] [Commented] (SPARK-21063) Spark return an empty result from remote hadoop cluster
[ https://issues.apache.org/jira/browse/SPARK-21063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16077812#comment-16077812 ]
YANGFEIRAN commented on SPARK-21063:
------------------------------------
https://stackoverflow.com/q/44434304/6547077 also following this
> Spark return an empty result from remote hadoop cluster
> -------------------------------------------------------
>
> Key: SPARK-21063
> URL: https://issues.apache.org/jira/browse/SPARK-21063
> Project: Spark
> Issue Type: Bug
> Components: Spark Core, SQL
> Affects Versions: 2.1.0, 2.1.1
> Reporter: Peter Bykov
>
> Spark returns an empty result when querying a remote Hadoop cluster.
> All firewall restrictions have been removed.
> Querying over JDBC works properly with the hive-jdbc driver, version 1.1.1.
> Code snippet is:
> {code:java}
> val spark = SparkSession.builder
>   .appName("RemoteSparkTest")
>   .master("local")
>   .getOrCreate()
>
> val df = spark.read
>   .option("url", "jdbc:hive2://remote.hive.local:10000/default")
>   .option("user", "user")
>   .option("password", "pass")
>   .option("dbtable", "test_table")
>   .option("driver", "org.apache.hive.jdbc.HiveDriver")
>   .format("jdbc")
>   .load()
>
> df.show()
> {code}
> Result:
> {noformat}
> +-------------------+
> |test_table.test_col|
> +-------------------+
> +-------------------+
> {noformat}
> All further operations, for example:
> {code:java}
> df.select("*").show()
> {code}
> also return an empty result.
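A commonly suggested alternative to the JDBC source above is to let Spark read the Hive table through its built-in Hive support instead of the hive-jdbc driver. The sketch below is a hypothetical workaround, not a confirmed fix for this ticket: it assumes spark-hive is on the classpath and that a hive-site.xml pointing at the remote metastore is available to the driver; the table name `default.test_table` is taken from the report above.

```scala
import org.apache.spark.sql.SparkSession

object RemoteHiveRead {
  def main(args: Array[String]): Unit = {
    // Build a session with Hive support enabled; Spark will locate the
    // remote metastore via hive-site.xml on the classpath (assumption).
    val spark = SparkSession.builder
      .appName("RemoteSparkTest")
      .master("local[*]")
      .enableHiveSupport()
      .getOrCreate()

    // Query the Hive table directly through the metastore rather than
    // going through the HiveServer2 JDBC endpoint.
    val df = spark.sql("SELECT * FROM default.test_table")
    df.show()

    spark.stop()
  }
}
```

This path bypasses the JDBC data source entirely, so it sidesteps any mismatch between Spark's JDBC reader and HiveServer2's ResultSet behavior; it requires a live cluster and correct metastore configuration, so it is shown here as a configuration sketch rather than a verified reproduction.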
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)