Posted to issues@spark.apache.org by "Sandeep Singh (Jira)" <ji...@apache.org> on 2023/01/02 16:36:00 UTC

[jira] [Created] (SPARK-41819) Implement DataFrame.rdd.getNumPartitions

Sandeep Singh created SPARK-41819:
-------------------------------------

             Summary: Implement DataFrame.rdd.getNumPartitions
                 Key: SPARK-41819
                 URL: https://issues.apache.org/jira/browse/SPARK-41819
             Project: Spark
          Issue Type: Sub-task
          Components: Connect
    Affects Versions: 3.4.0
            Reporter: Sandeep Singh
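
The goal is parity with classic PySpark, where the partition count of the underlying RDD is available directly. A minimal sketch of the expected behavior (the {{local[*]}} master and the range data are illustrative, not part of this report):

{code:python}
from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[*]").getOrCreate()
df = spark.range(10)

# Classic PySpark exposes the partition count through the underlying RDD;
# the Spark Connect DataFrame should support the same call.
print(df.rdd.getNumPartitions())
{code}

The doctest below currently fails on the Connect client: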


{code:python}
File "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/readwriter.py", line 122, in pyspark.sql.connect.readwriter.DataFrameReader.load
Failed example:
    with tempfile.TemporaryDirectory() as d:
        # Write a DataFrame into a CSV file with a header
        df = spark.createDataFrame([{"age": 100, "name": "Hyukjin Kwon"}])
        df.write.option("header", True).mode("overwrite").format("csv").save(d)

        # Read the CSV file as a DataFrame with 'nullValue' option set to 'Hyukjin Kwon',
        # and 'header' option set to `True`.
        df = spark.read.load(
            d, schema=df.schema, format="csv", nullValue="Hyukjin Kwon", header=True)
        df.printSchema()
        df.show()
Exception raised:
    Traceback (most recent call last):
      File "/usr/local/Cellar/python@3.10/3.10.8/Frameworks/Python.framework/Versions/3.10/lib/python3.10/doctest.py", line 1350, in __run
        exec(compile(example.source, filename, "single",
      File "<doctest pyspark.sql.connect.readwriter.DataFrameReader.load[1]>", line 10, in <module>
        df.printSchema()
      File "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/dataframe.py", line 1039, in printSchema
        print(self._tree_string())
      File "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/dataframe.py", line 1035, in _tree_string
        query = self._plan.to_proto(self._session.client)
      File "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/plan.py", line 92, in to_proto
        plan.root.CopyFrom(self.plan(session))
      File "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/plan.py", line 245, in plan
        plan.read.data_source.schema = self.schema
    TypeError: bad argument type for built-in operation
{code}
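
The failure above comes from the readwriter doctest rather than from getNumPartitions itself: the traceback suggests {{plan.read.data_source.schema}} expects a plain string, while the doctest passes {{df.schema}} as a {{StructType}}, so the protobuf assignment raises the TypeError. A plausible client-side guard, sketched only (serializing to JSON is an assumption; the server would have to parse the same representation back):

{code:python}
from pyspark.sql.types import StructType

# Sketch of the assignment in plan.py: pass a DDL string through as-is,
# but serialize a StructType before writing it into the proto field.
if isinstance(self.schema, StructType):
    plan.read.data_source.schema = self.schema.json()
else:
    plan.read.data_source.schema = self.schema
{code}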



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org