Posted to issues@spark.apache.org by "shankar (JIRA)" <ji...@apache.org> on 2017/01/05 17:26:58 UTC

[jira] [Commented] (SPARK-2356) Exception: Could not locate executable null\bin\winutils.exe in the Hadoop

    [ https://issues.apache.org/jira/browse/SPARK-2356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15801947#comment-15801947 ] 

shankar commented on SPARK-2356:
--------------------------------

I followed this solution -- https://qnalist.com/questions/4994960/run-spark-unit-test-on-windows-7 -- but I am still not able to resolve this issue:
17/01/05 22:40:36 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on localhost:50594 (size: 9.8 KB, free: 1043.2 MB)
17/01/05 22:40:36 INFO SparkContext: Created broadcast 0 from textFile at WordCount.scala:12
17/01/05 22:40:36 ERROR Shell: Failed to locate the winutils binary in the hadoop binary path
java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
	at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:278)

I am running a simple Spark program in Scala IDE on Windows 7; I don't have Hadoop installed on this machine.
I followed the steps below, so why am I still failing?
1. I copied winutils.exe to C:\winutil\bin, taken from https://social.msdn.microsoft.com/forums/azure/en-US/28a57efb-082b-424b-8d9e-731b1fe135de/please-read-if-experiencing-job-failures?forum=hdinsight
2. I set the environment variable HADOOP_HOME=C:\winutil and added C:\winutil\bin to PATH (a sanity-check sketch follows the code below).
3. Below is my Spark code:
import org.apache.spark.{SparkConf, SparkContext}

object WordCount extends App {
  // Must be set before the SparkContext is created: Hadoop's Shell class
  // resolves %HADOOP_HOME%\bin\winutils.exe in a static initializer.
  System.setProperty("hadoop.home.dir", "C:\\winutil\\")

  val conf = new SparkConf()
  val sc = new SparkContext("local", "WordCount", conf)
  val test = sc.textFile("food.txt")
  test.flatMap(line => line.split(" "))
    .map(word => (word, 1))
    .reduceByKey(_ + _)
    .saveAsTextFile("food_output.txt") // note: saveAsTextFile creates a directory with this name
}
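
A quick way to confirm the setup before debugging further (a minimal sketch, assuming the C:\winutil layout above; WinutilsCheck is a hypothetical helper, not part of the program): Hadoop checks the hadoop.home.dir system property first, then the HADOOP_HOME environment variable, and then expects bin\winutils.exe under that directory.

{code}
import java.io.File

// Prints where Hadoop will look for winutils.exe, and whether it is actually there.
object WinutilsCheck extends App {
  val home = sys.props.get("hadoop.home.dir")
    .orElse(sys.env.get("HADOOP_HOME"))
    .getOrElse("<unset>")
  val winutils = new File(home, "bin\\winutils.exe")
  println(s"Hadoop home resolves to: $home")
  println(s"winutils.exe present: ${winutils.isFile}")
}
{code}

If this prints "<unset>" or "false", the "null\bin\winutils.exe" error is expected.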

> Exception: Could not locate executable null\bin\winutils.exe in the Hadoop 
> ---------------------------------------------------------------------------
>
>                 Key: SPARK-2356
>                 URL: https://issues.apache.org/jira/browse/SPARK-2356
>             Project: Spark
>          Issue Type: Bug
>          Components: Windows
>    Affects Versions: 1.0.0, 1.1.1, 1.2.1, 1.2.2, 1.3.1, 1.4.0, 1.4.1, 1.5.0, 1.5.1, 1.5.2
>            Reporter: Kostiantyn Kudriavtsev
>            Priority: Critical
>
> I'm trying to run some transformations on Spark. They work fine on a cluster (YARN, Linux machines). However, when I try to run them on a local machine (Windows 7) in a unit test, I get errors (I don't use Hadoop; I'm reading files from the local filesystem):
> {code}
> 14/07/02 19:59:31 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> 14/07/02 19:59:31 ERROR Shell: Failed to locate the winutils binary in the hadoop binary path
> java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
> 	at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:318)
> 	at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:333)
> 	at org.apache.hadoop.util.Shell.<clinit>(Shell.java:326)
> 	at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:76)
> 	at org.apache.hadoop.security.Groups.parseStaticMapping(Groups.java:93)
> 	at org.apache.hadoop.security.Groups.<init>(Groups.java:77)
> 	at org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:240)
> 	at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:255)
> 	at org.apache.hadoop.security.UserGroupInformation.setConfiguration(UserGroupInformation.java:283)
> 	at org.apache.spark.deploy.SparkHadoopUtil.<init>(SparkHadoopUtil.scala:36)
> 	at org.apache.spark.deploy.SparkHadoopUtil$.<init>(SparkHadoopUtil.scala:109)
> 	at org.apache.spark.deploy.SparkHadoopUtil$.<clinit>(SparkHadoopUtil.scala)
> 	at org.apache.spark.SparkContext.<init>(SparkContext.scala:228)
> 	at org.apache.spark.SparkContext.<init>(SparkContext.scala:97)
> {code}
> This happens because the Hadoop configuration is initialized every time a SparkContext is created, regardless of whether Hadoop is required.
> I propose adding a special flag to indicate whether the Hadoop configuration is required (or allowing this configuration to be started manually).
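
In the meantime, the usual workaround on Windows is to point hadoop.home.dir at a directory containing bin\winutils.exe before the first SparkContext is constructed, since Hadoop's Shell class resolves the binary in a static initializer. A minimal sketch (assuming winutils.exe sits in C:\winutil\bin; LocalSparkSmokeTest is a hypothetical name):

{code}
import org.apache.spark.{SparkConf, SparkContext}

object LocalSparkSmokeTest extends App {
  // Must run before the first SparkContext (and thus Hadoop's Shell) is initialized.
  if (sys.props("os.name").toLowerCase.contains("win")) {
    System.setProperty("hadoop.home.dir", "C:\\winutil\\") // assumed layout: C:\winutil\bin\winutils.exe
  }

  val sc = new SparkContext(new SparkConf().setMaster("local").setAppName("smoke-test"))
  try {
    // A trivial job: if this runs, the winutils lookup did not abort startup.
    assert(sc.parallelize(1 to 10).sum() == 55.0)
    println("SparkContext started and ran a job without winutils errors")
  } finally {
    sc.stop()
  }
}
{code}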



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org