Posted to user@spark.apache.org by Devesh Raj Singh <ra...@gmail.com> on 2016/02/03 11:35:20 UTC

saveDF issue: dealing with missing values

Hi,


saveDF issue:


I have a CSV file (airquality.csv) with some blank (missing) values. When I read it using read.df, it works fine and the schema is retained.


But when I write the SparkR data frame back to a CSV file (airquality1.csv), it writes the string "null" where the blanks were. As a result, when I read the written file (airquality1.csv) back as a SparkR data frame, "null" is treated as a string value and the column is converted to string type. Can you help me with this issue?


My code:

library(SparkR)

Sys.setenv('SPARKR_SUBMIT_ARGS'='"--packages" "com.databricks:spark-csv_2.11:1.3.0" "sparkr-shell"')

sc = sparkR.init("local", sparkHome = "~/Downloads/spark-1.4.1-bin-hadoop2.6/")

sqlContext = sparkRSQL.init(sc)

airquality = read.df(sqlContext, "~/Data/airquality.csv", source = "com.databricks.spark.csv", header = "true", inferSchema = "true")

saveDF(airquality, "~/Data/airquality1.csv", "com.databricks.spark.csv", mode = "overwrite", header = "true")

airquality1 = read.df(sqlContext, "~/Data/airquality1.csv", source = "com.databricks.spark.csv", header = "true", inferSchema = "true")

schema(airquality)

schema(airquality1)




-- 
Warm regards,
Devesh.

RE: saveDF issue: dealing with missing values

Posted by "Sun, Rui" <ru...@intel.com>.
According to https://github.com/databricks/spark-csv, you can set the option “nullValue” to an empty string. The default value for “nullValue” is “null” when writing a CSV file.
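Applied to the code in the original post, that suggestion would look roughly like the sketch below (same paths, package, and session objects as above; note that passing "nullValue" through read.df is an assumption about the spark-csv version in use, so if the reader does not honor it, the write-side option alone already avoids the literal "null" strings):

```r
# Write missing values as empty strings instead of the literal "null":
saveDF(airquality, "~/Data/airquality1.csv",
       source = "com.databricks.spark.csv",
       mode = "overwrite", header = "true",
       nullValue = "")

# On read, the same option (if supported by your spark-csv version)
# tells the parser to treat empty strings as nulls, so inferSchema
# can keep the numeric column types:
airquality1 <- read.df(sqlContext, "~/Data/airquality1.csv",
                       source = "com.databricks.spark.csv",
                       header = "true", inferSchema = "true",
                       nullValue = "")
```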
