Posted to user@spark.apache.org by Moshe Beeri <mo...@gmail.com> on 2014/09/20 09:55:58 UTC

Fails to run simple Spark (Hello World) scala program

import org.apache.spark.{SparkConf, SparkContext}

object Nizoz {

  def connect(): Unit = {
    // Note: "master" is not a valid master URL; a standalone master is
    // addressed as spark://<host>:7077, local mode as "local[*]".
    val conf = new SparkConf().setAppName("nizoz").setMaster("master")
    val spark = new SparkContext(conf)
    val lines =
      spark.textFile("file:///home/moshe/store/frameworks/spark-1.1.0-bin-hadoop1/README.md")
    val lineLengths = lines.map(s => s.length)
    val totalLength = lineLengths.reduce((a, b) => a + b)
    println("totalLength=" + totalLength)
  }

  def main(args: Array[String]) {
    println(scala.tools.nsc.Properties.versionString)
    try {
      //Nizoz.connect
      val logFile = "/home/moshe/store/frameworks/spark-1.1.0-bin-hadoop1/README.md" // should be some file on your system
      val conf = new SparkConf().setAppName("Simple Application").setMaster("spark://master:7077")
      val sc = new SparkContext(conf)
      val logData = sc.textFile(logFile, 2).cache()
      val numAs = logData.filter(line => line.contains("a")).count()
      val numBs = logData.filter(line => line.contains("b")).count()
      println("Lines with a: %s, Lines with b: %s".format(numAs, numBs))
    } catch {
      case e: Throwable =>
        println(e.getCause())
        println("stack:")
        e.printStackTrace()
    }
  }
}
Runs with Scala 2.10.4.
The problem is this vague exception:

	at com.example.scamel.Nizoz.main(Nizoz.scala)
Caused by: java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
	at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:131)
	at org.apache.hadoop.security.Groups.<init>(Groups.java:64)
	at org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:240)
...
Caused by: java.lang.reflect.InvocationTargetException
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
...
	... 10 more
Caused by: java.lang.UnsatisfiedLinkError: org.apache.hadoop.security.JniBasedUnixGroupsMapping.anchorNative()V
	at org.apache.hadoop.security.JniBasedUnixGroupsMapping.anchorNative(Native Method)
	at org.apache.hadoop.security.JniBasedUnixGroupsMapping.<clinit>(JniBasedUnixGroupsMapping.java:49)

I have Hadoop 1.2.1 running on Ubuntu 14.04, and the Scala console runs as
expected.

What am I doing wrong?
Any ideas are welcome.





--
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Fails-to-run-simple-Spark-Hello-World-scala-program-tp14718.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@spark.apache.org
For additional commands, e-mail: user-help@spark.apache.org


Re: Fails to run simple Spark (Hello World) scala program

Posted by Moshe Beeri <mo...@gmail.com>.
Sure, in local mode it works for me as well. The issue was that I was
running only the master; I needed a worker as well.
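
For reference, bringing up a worker against an already-running master looks
roughly like this with the standard Spark 1.1 standalone scripts (the host
name is mine; substitute your own):

./sbin/start-master.sh
./bin/spark-class org.apache.spark.deploy.worker.Worker spark://master:7077

./sbin/start-all.sh starts the master plus one worker per host listed in
conf/slaves, which is the usual way to get both in one go.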


Many thanks,
Moshe Beeri.
054-3133943
Email <mo...@gmail.com> | linkedin <http://www.linkedin.com/in/mobee>




Re: Fails to run simple Spark (Hello World) scala program

Posted by Akhil Das <ak...@sigmoidanalytics.com>.
Hi Moshe,

I ran the same code on my machine and it works without any issues. Try
running it in local mode first; if that works fine, then the issue is with
your cluster configuration.
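
For example, a minimal local-mode version of the same check (the object name
and file path are placeholders):

import org.apache.spark.{SparkConf, SparkContext}

object LocalCheck {
  def main(args: Array[String]) {
    // "local[*]" runs Spark inside this JVM, one thread per core, so no
    // standalone master or worker needs to be running.
    val conf = new SparkConf().setAppName("local-check").setMaster("local[*]")
    val sc = new SparkContext(conf)
    val logData = sc.textFile("README.md", 2).cache()
    println("Lines with a: " + logData.filter(_.contains("a")).count())
    sc.stop()
  }
}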

Thanks
Best Regards


Re: Fails to run simple Spark (Hello World) scala program

Posted by Moshe Beeri <mo...@gmail.com>.
Hi Sean,

Thanks a lot for the answer. I loved your excellent book Mahout in Action
<http://www.amazon.com/Mahout-Action-Sean-Owen/dp/1935182684>; I hope you'll
keep writing more books in the field of Big Data.
The issue was a redundant Hadoop library, but now I am facing another issue
(see the previous post in this thread):
java.lang.ClassNotFoundException: com.example.scamel.Nizoz$$anonfun$3

But the class com.example.scamel.Nizoz (in fact a Scala object) is the one
under debugging.

  def main(args: Array[String]) {
    println(scala.tools.nsc.Properties.versionString)
    try {
      //Nizoz.connect
      val logFile = "/home/moshe/store/frameworks/spark-1.1.0-bin-hadoop1/README.md" // should be some file on your system
      val conf = new SparkConf().setAppName("spark town").setMaster("spark://nash:7077") // spark://master:7077
      val sc = new SparkContext(conf)
      val logData = sc.textFile(logFile, 2).cache()
      val numAs = logData.filter(line => line.contains("a")).count() // <- here is where the exception is thrown
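
For reference, when a job is launched straight from an IDE like this, a
common cause of that ClassNotFoundException is that the packaged application
jar never reaches the worker JVMs. A minimal sketch of shipping it explicitly
(the jar path is an assumption; point it at your own build output):

val conf = new SparkConf()
  .setAppName("spark town")
  .setMaster("spark://nash:7077")
  // Ship the application jar to the executors so classes such as
  // com.example.scamel.Nizoz$$anonfun$3 can be loaded remotely.
  .setJars(Seq("target/scamel-0.0.1-SNAPSHOT.jar")) // hypothetical path
val sc = new SparkContext(conf)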

Do you have any idea what's wrong?
Thanks,
Moshe Beeri.




Many thanks,
Moshe Beeri.
054-3133943
Email <mo...@gmail.com> | linkedin <http://www.linkedin.com/in/mobee>




Re: Fails to run simple Spark (Hello World) scala program

Posted by Sean Owen <so...@cloudera.com>.
Spark does not require Hadoop 2 or YARN. This looks like a problem with the
Hadoop installation: it is not finding the native libraries it needs to make
some security-related system call. Check the installation.
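
For reference, a quick way to see which jar the failing class is actually
loaded from is plain JDK reflection (nothing Spark-specific is assumed here):

// Prints the location of the jar that provided org.apache.hadoop.security.Groups.
// A Hadoop 2.x jar showing up here against a 1.2.1 installation would explain
// the UnsatisfiedLinkError on the JNI group-mapping method.
val source = classOf[org.apache.hadoop.security.Groups]
  .getProtectionDomain.getCodeSource.getLocation
println(source)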

Re: Fails to run simple Spark (Hello World) scala program

Posted by Moshe Beeri <mo...@gmail.com>.
Hi Manu/All,

Now I am facing another strange error (strange, at least, to someone new to
a complex framework).
I ran ./sbin/start-all.sh (my computer is named nash, after John Nash) and
got the connection log line: Connecting to master spark://nash:7077.
Running on my local machine then yields:
java.lang.ClassNotFoundException: com.example.scamel.Nizoz$$anonfun$3

But the class com.example.scamel.Nizoz (in fact a Scala object) is the one
under debugging.

  def main(args: Array[String]) {
    println(scala.tools.nsc.Properties.versionString)
    try {
      //Nizoz.connect
      val logFile = "/home/moshe/store/frameworks/spark-1.1.0-bin-hadoop1/README.md" // should be some file on your system
      val conf = new SparkConf().setAppName("spark town").setMaster("spark://nash:7077") // spark://master:7077
      val sc = new SparkContext(conf)
      val logData = sc.textFile(logFile, 2).cache()
      val numAs = logData.filter(line => line.contains("a")).count() // <- here is where the exception is thrown

Any help will be welcome.
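
For reference, the standard way to get application classes onto the workers
is to package the project and run it through spark-submit, which distributes
the jar for you (the jar file name is a guess at the build output):

./bin/spark-submit \
  --class com.example.scamel.Nizoz \
  --master spark://nash:7077 \
  target/scamel-0.0.1-SNAPSHOT.jar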





Many thanks,
Moshe Beeri.
054-3133943
Email <mo...@gmail.com> | linkedin <http://www.linkedin.com/in/mobee>




Re: Fails to run simple Spark (Hello World) scala program

Posted by Moshe Beeri <mo...@gmail.com>.
Thanks Manu,

I just saw that I had included hadoop-client 2.x in my pom.xml; removing it
solved the problem.

Thanks for your help.
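
For anyone hitting the same thing, the conflicting dependency is easy to
spot with a standard Maven goal (the -Dincludes filter just narrows the
output to Hadoop artifacts):

mvn dependency:tree -Dincludes=org.apache.hadoop

With the spark-1.1.0-bin-hadoop1 distribution, the application should not
pull in its own hadoop-client 2.x.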





Re: Fails to run simple Spark (Hello World) scala program

Posted by Manu Suryavansh <su...@gmail.com>.
Hi Moshe,

Spark needs a Hadoop 2.x/YARN cluster. Otherwise you can run it without
Hadoop in standalone mode.

Manu





-- 
Manu Suryavansh