Posted to issues@spark.apache.org by "Apache Spark (JIRA)" <ji...@apache.org> on 2015/01/29 23:49:34 UTC
[jira] [Commented] (SPARK-5464) Calling help() on a Python DataFrame fails with "cannot resolve column name __name__" error
[ https://issues.apache.org/jira/browse/SPARK-5464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14297842#comment-14297842 ]
Apache Spark commented on SPARK-5464:
-------------------------------------
User 'JoshRosen' has created a pull request for this issue:
https://github.com/apache/spark/pull/4278
> Calling help() on a Python DataFrame fails with "cannot resolve column name __name__" error
> -------------------------------------------------------------------------------------------
>
> Key: SPARK-5464
> URL: https://issues.apache.org/jira/browse/SPARK-5464
> Project: Spark
> Issue Type: Bug
> Components: PySpark, SQL
> Affects Versions: 1.3.0
> Reporter: Josh Rosen
> Assignee: Josh Rosen
> Priority: Blocker
>
> Trying to call {{help()}} on a Python DataFrame fails with an exception:
> {code}
> >>> help(df)
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
>   File "/Users/joshrosen/anaconda/lib/python2.7/site.py", line 464, in __call__
>     return pydoc.help(*args, **kwds)
>   File "/Users/joshrosen/anaconda/lib/python2.7/pydoc.py", line 1787, in __call__
>     self.help(request)
>   File "/Users/joshrosen/anaconda/lib/python2.7/pydoc.py", line 1834, in help
>     else: doc(request, 'Help on %s:')
>   File "/Users/joshrosen/anaconda/lib/python2.7/pydoc.py", line 1571, in doc
>     pager(render_doc(thing, title, forceload))
>   File "/Users/joshrosen/anaconda/lib/python2.7/pydoc.py", line 1545, in render_doc
>     object, name = resolve(thing, forceload)
>   File "/Users/joshrosen/anaconda/lib/python2.7/pydoc.py", line 1540, in resolve
>     name = getattr(thing, '__name__', None)
>   File "/Users/joshrosen/Documents/Spark/python/pyspark/sql.py", line 2154, in __getattr__
>     return Column(self._jdf.apply(name))
>   File "/Users/joshrosen/Documents/Spark/python/lib/py4j-0.8.2.1-src.zip/py4j/java_gateway.py", line 538, in __call__
>   File "/Users/joshrosen/Documents/Spark/python/lib/py4j-0.8.2.1-src.zip/py4j/protocol.py", line 300, in get_return_value
> py4j.protocol.Py4JJavaError: An error occurred while calling o31.apply.
> : java.lang.RuntimeException: Cannot resolve column name "__name__"
>     at org.apache.spark.sql.DataFrame$$anonfun$resolve$1.apply(DataFrame.scala:123)
>     at org.apache.spark.sql.DataFrame$$anonfun$resolve$1.apply(DataFrame.scala:123)
>     at scala.Option.getOrElse(Option.scala:120)
>     at org.apache.spark.sql.DataFrame.resolve(DataFrame.scala:122)
>     at org.apache.spark.sql.DataFrame.apply(DataFrame.scala:237)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:606)
>     at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
>     at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:379)
>     at py4j.Gateway.invoke(Gateway.java:259)
>     at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
>     at py4j.commands.CallCommand.execute(CallCommand.java:79)
>     at py4j.GatewayConnection.run(GatewayConnection.java:207)
>     at java.lang.Thread.run(Thread.java:745)
> {code}
> Here's a reproduction:
> {code}
> >>> from pyspark.sql import SQLContext, Row
> >>> sqlContext = SQLContext(sc)
> >>> rdd = sc.parallelize(['{"foo":"bar"}', '{"foo":"baz"}'])
> >>> df = sqlContext.jsonRDD(rdd)
> >>> help(df)
> {code}
> I think the problem here is that our overloaded {{__getattr__}} doesn't raise the expected {{AttributeError}} when a column can't be found: {{pydoc}}'s {{resolve}} calls {{getattr(thing, '__name__', None)}}, which only falls back to the default when {{AttributeError}} is raised, so the {{Py4JJavaError}} from the failed column lookup propagates instead.
> We should be able to fix this by calling {{apply}} only after checking that the column name is valid (e.g., by checking it against {{columns}}), and raising {{AttributeError}} otherwise.
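> For illustration, here is a minimal sketch of such a guard in {{DataFrame.__getattr__}} (the method body below is an assumption reconstructed from the traceback, not the contents of the linked pull request):
> {code}
> # Sketch only: assumes self.columns lists the DataFrame's column names and
> # self._jdf is the underlying Java DataFrame, as seen in the traceback above.
> def __getattr__(self, name):
>     # Raise AttributeError for non-column attributes so that callers like
>     # pydoc's getattr(thing, '__name__', None) can fall back to the default.
>     if name not in self.columns:
>         raise AttributeError(
>             "'%s' object has no attribute '%s'" % (self.__class__.__name__, name))
>     return Column(self._jdf.apply(name))
> {code}
> With a check like this, {{getattr(df, '__name__', None)}} returns {{None}} as {{pydoc}} expects, and {{help(df)}} renders the class documentation instead of raising.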
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)