Posted to issues@spark.apache.org by "Tsuyoshi Ozawa (JIRA)" <ji...@apache.org> on 2016/11/10 10:45:58 UTC

[jira] [Commented] (SPARK-18399) Examples in SparkSQL/DataFrame fail with default configurations

    [ https://issues.apache.org/jira/browse/SPARK-18399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15653722#comment-15653722 ] 

Tsuyoshi Ozawa commented on SPARK-18399:
----------------------------------------

{code}
scala> val df = spark.read.json("examples/src/main/resources/people.json")
16/11/10 19:23:56 WARN datasources.DataSource: Error while looking for metadata directory.
java.net.ConnectException: Call From ozamac-2.local/10.129.56.104 to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
  at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
  at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
  at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
  at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
  at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:783)
  at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:730)
  at org.apache.hadoop.ipc.Client.call(Client.java:1351)
  at org.apache.hadoop.ipc.Client.call(Client.java:1300)
  at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
  at com.sun.proxy.$Proxy12.getFileInfo(Unknown Source)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:483)
  at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
  at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
  at com.sun.proxy.$Proxy12.getFileInfo(Unknown Source)
  at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:651)
  at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1679)
  at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1106)
  at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1102)
  at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
  at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1102)
  at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1397)
  at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$8.apply(DataSource.scala:292)
  at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$8.apply(DataSource.scala:282)
  at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
  at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
  at scala.collection.immutable.List.foreach(List.scala:381)
  at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
  at scala.collection.immutable.List.flatMap(List.scala:344)
  at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:282)
  at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:152)
  at org.apache.spark.sql.DataFrameReader.json(DataFrameReader.scala:297)
  at org.apache.spark.sql.DataFrameReader.json(DataFrameReader.scala:250)
{code}
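
For reference, a minimal sketch of a workaround (assuming spark-shell is launched from the Spark source root and that fs.defaultFS points at a pseudo-distributed HDFS on localhost:9000, as the trace above suggests): qualifying the path with the file:// scheme forces the local filesystem regardless of the default. The absolute path below is a placeholder.

{code}
// Inspect the configured default filesystem; with a pseudo-distributed
// Hadoop setup this typically returns hdfs://localhost:9000, which is why
// the relative path above is resolved against HDFS.
spark.sparkContext.hadoopConfiguration.get("fs.defaultFS")

// Qualify the path with file:// to read from the local filesystem instead
// (replace /path/to/spark with the actual Spark source directory):
val df = spark.read.json("file:///path/to/spark/examples/src/main/resources/people.json")
df.show()
{code}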


> Examples in SparkSQL/DataFrame fail with default configurations
> ----------------------------------------------------------------
>
>                 Key: SPARK-18399
>                 URL: https://issues.apache.org/jira/browse/SPARK-18399
>             Project: Spark
>          Issue Type: Bug
>          Components: Documentation, SQL
>            Reporter: Tsuyoshi Ozawa
>
> http://spark.apache.org/docs/latest/sql-programming-guide.html#creating-dataframes
> With the default configuration, the examples fail because Spark resolves the relative paths to people.json/people.txt against HDFS, while the examples assume the files are on the local file system.
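
To illustrate the resolution behavior described above, here is a short sketch using standard Hadoop Path APIs; the hdfs://localhost:9000 value mirrors the pseudo-distributed default and is an assumption, not taken from the report.

{code}
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

// Simulate a default configuration whose default filesystem is HDFS.
val conf = new Configuration()
conf.set("fs.defaultFS", "hdfs://localhost:9000")
val fs = FileSystem.get(conf)

// A relative path is qualified against fs.defaultFS and the user's HDFS
// working directory, not the local directory spark-shell was started from:
val qualified = new Path("examples/src/main/resources/people.json")
  .makeQualified(fs.getUri, fs.getWorkingDirectory)
println(qualified)
// -> hdfs://localhost:9000/user/<username>/examples/src/main/resources/people.json
{code}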


