Posted to issues@spark.apache.org by "Hao Ren (JIRA)" <ji...@apache.org> on 2015/07/07 14:34:04 UTC

[jira] [Created] (SPARK-8869) DataFrameWriter save action makes DataFrameReader load failed

Hao Ren created SPARK-8869:
------------------------------

             Summary: DataFrameWriter save action makes DataFrameReader load failed
                 Key: SPARK-8869
                 URL: https://issues.apache.org/jira/browse/SPARK-8869
             Project: Spark
          Issue Type: Bug
          Components: SQL
    Affects Versions: 1.4.0
            Reporter: Hao Ren


The following code blocks on the DataFrameWriter's save action.
There are no errors and no exceptions; it simply hangs indefinitely and nothing happens.

{code}
SparkConf conf = new SparkConf().setAppName(Transform.class.getName()).setMaster("local[4]");
JavaSparkContext sparkContext  = new JavaSparkContext(conf);
SQLContext       sqlContext    = new SQLContext(sparkContext);
DataFrame results = sqlContext.read().format("json").load(input);
//results.show();
results.write().format("json").mode(SaveMode.Overwrite).save(output);
{code}

But if we uncomment {{results.show();}}, it works.

From {{jstack}}, I find something:
{code}
"Executor task launch worker-3" #68 daemon prio=5 os_prio=0 tid=0x00007f50b8025800 nid=0x6bb2 in Object.wait() [0x00007f508cccc000]
   java.lang.Thread.State: WAITING (on object monitor)
	at java.lang.Object.wait(Native Method)
	at org.apache.commons.httpclient.MultiThreadedHttpConnectionManager.doGetConnection(MultiThreadedHttpConnectionManager.java:518)
	- locked <0x0000000086041f10> (a org.apache.commons.httpclient.MultiThreadedHttpConnectionManager$ConnectionPool)
	at org.apache.commons.httpclient.MultiThreadedHttpConnectionManager.getConnectionWithTimeout(MultiThreadedHttpConnectionManager.java:416)
	at org.apache.commons.httpclient.HttpMethodDirector.executeMethod(HttpMethodDirector.java:153)
	at org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:397)
	at org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:323)
	at org.jets3t.service.impl.rest.httpclient.RestS3Service.performRequest(RestS3Service.java:342)
	at org.jets3t.service.impl.rest.httpclient.RestS3Service.performRestHead(RestS3Service.java:718)
	at org.jets3t.service.impl.rest.httpclient.RestS3Service.getObjectImpl(RestS3Service.java:1599)
	at org.jets3t.service.impl.rest.httpclient.RestS3Service.getObjectDetailsImpl(RestS3Service.java:1535)
	at org.jets3t.service.S3Service.getObjectDetails(S3Service.java:1987)
	at org.jets3t.service.S3Service.getObjectDetails(S3Service.java:1332)
	at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.retrieveMetadata(Jets3tNativeFileSystemStore.java:111)
	at sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:497)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
	at org.apache.hadoop.fs.s3native.$Proxy17.retrieveMetadata(Unknown Source)
	at org.apache.hadoop.fs.s3native.NativeS3FileSystem.getFileStatus(NativeS3FileSystem.java:414)
	at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1397)
	at org.apache.hadoop.fs.s3native.NativeS3FileSystem.create(NativeS3FileSystem.java:341)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:905)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:798)
	at org.apache.hadoop.mapred.TextOutputFormat.getRecordWriter(TextOutputFormat.java:123)
	at org.apache.spark.SparkHadoopWriter.open(SparkHadoopWriter.scala:90)
	at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13.apply(PairRDDFunctions.scala:1104)
	at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13.apply(PairRDDFunctions.scala:1095)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:63)
	at org.apache.spark.scheduler.Task.run(Task.scala:70)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
{code}

It seems that a worker is blocked in {{MultiThreadedHttpConnectionManager.doGetConnection}}. My PC has 4 CPU cores, so I set the master to {{local[4]}}. I also find that {{MultiThreadedHttpConnectionManager}}'s {{maxHostConnections}} is 4, but I am not sure whether this is the cause.
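If the hypothesis above is right, the hang is a connection-pool exhaustion: the pool hands out at most {{maxHostConnections}} connections per host, and once all 4 worker threads hold one, any further request waits forever in {{Object.wait()}}, which matches the WAITING state in the stack trace. A minimal sketch of that failure mode, modeling the pool as a plain {{java.util.concurrent.Semaphore}} (an illustration only, not the actual commons-httpclient implementation):

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

public class PoolExhaustionSketch {
    public static void main(String[] args) throws InterruptedException {
        // Hypothetical model: the connection manager hands out at most
        // maxHostConnections (here 4) connections to the same host.
        final int maxHostConnections = 4;
        Semaphore pool = new Semaphore(maxHostConnections);

        // Simulate 4 executor workers each holding a connection
        // without releasing it (e.g. mid-write to S3).
        for (int i = 0; i < maxHostConnections; i++) {
            pool.acquire();
        }

        // A 5th connection request cannot proceed. With no timeout it would
        // block forever, just like doGetConnection() in the jstack dump;
        // here we use a timeout so the sketch terminates.
        boolean acquired = pool.tryAcquire(100, TimeUnit.MILLISECONDS);
        System.out.println("extra request got a connection: " + acquired); // prints false
    }
}
```

If this is indeed the problem, raising the pool limit above the number of concurrent tasks (or releasing connections promptly) should unblock the job.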

I also checked whether it is related to the Parquet input file; it is not. I changed the input file to JSON format and nothing changed.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org