Posted to issues@spark.apache.org by "Rakesh Raushan (Jira)" <ji...@apache.org> on 2019/12/18 04:49:00 UTC
[jira] [Comment Edited] (SPARK-30288) Failed to write valid Parquet files when column names contain special characters like spaces
[ https://issues.apache.org/jira/browse/SPARK-30288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16998809#comment-16998809 ]
Rakesh Raushan edited comment on SPARK-30288 at 12/18/19 4:48 AM:
------------------------------------------------------------------
[~dongjoon] [~hyukjin.kwon] I have checked locally after making the required changes. Column names containing spaces and "=" now work fine. Pandas also supports this, so should we allow it as well?
scala> Seq(100).toDF("a b").write.parquet("/tmp/dir")
scala> spark.read.parquet("/tmp/dir").show()
+---+
|a b|
+---+
|100|
+---+
scala> Seq(1).toDF("a=b").write.parquet("/tmp/dir2")
scala> spark.read.parquet("/tmp/dir2").show()
+---+
|a=b|
+---+
|  1|
+---+
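For reference, a minimal sketch of the kind of change I tested, assuming the validation lives in ParquetSchemaConverter.checkFieldName as the stack trace in the description below shows; dropping space and "=" from the rejected set is an illustration, not the exact patch:
{code}
// Hypothetical sketch of a relaxed ParquetSchemaConverter.checkFieldName.
// Space and '=' are removed from the rejected set; the remaining
// characters (",;{}()\n\t") are still treated as invalid.
private[parquet] def checkFieldName(name: String): Unit = {
  checkConversionRequirement(
    !name.matches(".*[,;{}()\n\t].*"),
    s"""Attribute name "$name" contains invalid character(s) among ",;{}()\\n\\t".
       |Please use alias to rename it.""".stripMargin)
}
{code}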
> Failed to write valid Parquet files when column names contain special characters like spaces
> --------------------------------------------------------------------------------------------
>
> Key: SPARK-30288
> URL: https://issues.apache.org/jira/browse/SPARK-30288
> Project: Spark
> Issue Type: Bug
> Components: SQL
> Affects Versions: 2.4.3
> Reporter: Jingyuan Wang
> Priority: Major
>
> When I tried to write Parquet files using PySpark with columns containing some special characters in their names, it threw the following exception:
> {code}
> org.apache.spark.sql.AnalysisException: Attribute name "col 1" contains invalid character(s) among " ,;{}()\n\t=". Please use alias to rename it.;
> at org.apache.spark.sql.execution.datasources.parquet.ParquetSchemaConverter$.checkConversionRequirement(ParquetSchemaConverter.scala:583)
> at org.apache.spark.sql.execution.datasources.parquet.ParquetSchemaConverter$.checkFieldName(ParquetSchemaConverter.scala:570)
> at org.apache.spark.sql.execution.datasources.parquet.ParquetWriteSupport$$anonfun$setSchema$2.apply(ParquetWriteSupport.scala:444)
> at org.apache.spark.sql.execution.datasources.parquet.ParquetWriteSupport$$anonfun$setSchema$2.apply(ParquetWriteSupport.scala:444)
> at scala.collection.immutable.List.foreach(List.scala:392)
> at org.apache.spark.sql.execution.datasources.parquet.ParquetWriteSupport$.setSchema(ParquetWriteSupport.scala:444)
> at org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat.prepareWrite(ParquetFileFormat.scala:111)
> at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:103)
> at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:159)
> at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104)
> at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102)
> at org.apache.spark.sql.execution.command.DataWritingCommandExec.doExecute(commands.scala:122)
> at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
> at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
> at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
> at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
> at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
> at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
> at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:80)
> at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:80)
> at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:676)
> at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:676)
> at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:78)
> at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
> at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
> at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:676)
> at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:285)
> at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:271)
> at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:229)
> at org.apache.spark.sql.DataFrameWriter.parquet(DataFrameWriter.scala:566)
> at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.base/java.lang.reflect.Method.invoke(Method.java:566)
> at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
> at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
> at py4j.Gateway.invoke(Gateway.java:282)
> at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
> at py4j.commands.CallCommand.execute(CallCommand.java:79)
> at py4j.GatewayConnection.run(GatewayConnection.java:238)
> at java.base/java.lang.Thread.run(Thread.java:834)
> {code}
> However, pandas supports such column names for both reading and writing. This column-name validity check seems to be outdated and should be removed.
> {code}
> >>> import pandas as pd
> >>> df = pd.DataFrame(data={'col(1)': [1, 2], 'col 2': [3, 4]})
> >>> df.to_parquet('special_columns.parquet')
> >>> df_written = pd.read_parquet('special_columns.parquet')
> >>> df_written
>    col(1)  col 2
> 0       1      3
> 1       2      4
> {code}
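For reference, the failure described above can be reproduced from spark-shell as well; a hypothetical session (the output path is an assumption, and the column name "col 1" matches the attribute named in the trace above):
{code}
scala> Seq(1, 2).toDF("col 1").write.parquet("/tmp/special_columns")
org.apache.spark.sql.AnalysisException: Attribute name "col 1" contains invalid
character(s) among " ,;{}()\n\t=". Please use alias to rename it.;
{code}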
--
This message was sent by Atlassian Jira
(v8.3.4#803005)