Posted to issues@spark.apache.org by "Yanbo Liang (JIRA)" <ji...@apache.org> on 2015/08/26 08:16:46 UTC
[jira] [Comment Edited] (SPARK-9807) pyspark.sql.createDataFrame does not infer data type of parsed TSV
[ https://issues.apache.org/jira/browse/SPARK-9807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14712577#comment-14712577 ]
Yanbo Liang edited comment on SPARK-9807 at 8/26/15 6:16 AM:
-------------------------------------------------------------
This is not a bug.
{code}map(lambda l: re.split(col_delimiter, l)){code} returns a list of strings, so `pyspark.sqlContext.createDataFrame` converts the parsed lines to a PySpark DataFrame in which every column is of string type.
If you want a DataFrame with the correct schema, you need to specify it explicitly, for example:
{code}
schema = StructType([
    StructField("name", StringType(), True),
    StructField("age", IntegerType(), True)])
df3 = sqlContext.createDataFrame(rdd, schema)
{code}
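A point worth noting: since {{re.split}} yields strings, the RDD's fields must be converted to the types the schema declares before {{createDataFrame}} is called; Spark does not cast them automatically. A minimal sketch of the full flow (the column names "name"/"age" and the HDFS path are hypothetical, matching the schema above):
{code}
import re

col_delimiter = r"\t"  # tab-separated, per the issue title

def parse_line(line):
    """Split a TSV line and convert each field to its declared type."""
    name, age = re.split(col_delimiter, line)
    return (name, int(age))  # age becomes a Python int -> IntegerType

# Pure-Python check of the conversion:
print(parse_line("Alice\t30"))  # ('Alice', 30)

# With a SQLContext available (Spark 1.4.x API, as in this report):
#   from pyspark.sql.types import (StructType, StructField,
#                                  StringType, IntegerType)
#   schema = StructType([
#       StructField("name", StringType(), True),
#       StructField("age", IntegerType(), True)])
#   rdd = sc.textFile("hdfs://.../people.tsv").map(parse_line)
#   df = sqlContext.createDataFrame(rdd, schema)
#   df.printSchema()  # age is integer, not string
{code}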
> pyspark.sql.createDataFrame does not infer data type of parsed TSV
> ------------------------------------------------------------------
>
> Key: SPARK-9807
> URL: https://issues.apache.org/jira/browse/SPARK-9807
> Project: Spark
> Issue Type: Bug
> Components: PySpark
> Affects Versions: 1.4.1
> Environment: CentOS 6, Python version 2.7.10, Scala version 2.10
> Reporter: Karen Yin-Yee Ng
> Original Estimate: 24h
> Remaining Estimate: 24h
>
> I tried parsing a space-separated file from HDFS, using `pyspark.sqlContext.createDataFrame` to convert the parsed lines to a PySpark DataFrame. However, all entries are parsed as string type regardless of what the correct data type is.
> An example of my code and output can be found at:
> https://gist.github.com/karenyyng/a1264d6344c54df4fcc5
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org