Posted to user@oozie.apache.org by Liping Zhang <zl...@gmail.com> on 2016/02/09 21:45:28 UTC

Re: spark job failed with oozie

Thanks Serega for your answers!

I increased the memory for the Oozie launcher itself in workflow.xml as follows,
but I'm not sure whether I increased it in the right way. Please correct
me if I'm wrong.

    <action name="spark-17c0">
        <spark xmlns="uri:oozie:spark-action:0.1">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <configuration>
                <property>
                    <name>oozie.launcher.mapreduce.map.memory.mb</name>
                    <value>6144</value>
                </property>
            </configuration>
            <master>spark://ip-10-0-4-248.us-west-1.compute.internal:7077</master>
            <name>MeterReadingLoader</name>
            <class>com.gridx.spark.MeterReadingLoader</class>
            <jar>/user/root/workspaces/lib/spark-all.jar</jar>
            <spark-opts>--conf spark.driver.extraJavaOptions="-XX:MaxPermSize=10g" --conf spark.executor.extraJavaOptions="-XX:MaxPermSize=4g" --driver-memory 8g --executor-memory 2g --num-executors 3 --executor-cores 8 --driver-class-path /opt/cloudera/parcels/CDH/jars/guava-16.0.1.jar:/opt/cloudera/parcels/CDH/jars/spark-assembly-1.5.0-cdh5.5.0-hadoop2.6.0-cdh5.5.0.jar:/opt/cloudera/parcels/CDH-5.5.0-1.cdh5.5.0.p0.8/jars/jets3t-0.9.0.jar --conf spark.executor.extraClassPath=/opt/cloudera/parcels/CDH/jars/jets3t-0.9.0.jar</spark-opts>
            <arg>${i}</arg>
            <arg>${path}</arg>
            <arg>${k}</arg>
            <arg>${keyspace}</arg>
            <arg>${h}</arg>
            <arg>${cassandrahost}</arg>
            <arg>${t}</arg>
            <arg>${interval}</arg>
            <arg>${z}</arg>
            <arg>${timezone}</arg>
            <arg>${l}</arg>
            <arg>${listname}</arg>
            <arg>${g}</arg>
            <arg>${company}</arg>
        </spark>
        <ok to="End"/>
        <error to="Kill"/>
    </action>


But it still hits the PermGen space issue.

2016-02-09 20:35:44,853 INFO [sparkDriver-akka.actor.default-dispatcher-18] org.apache.spark.storage.BlockManagerInfo: Removed broadcast_3_piece0 on 10.0.4.249:47565 in memory (size: 1884.0 B, free: 1060.2 MB)
2016-02-09 20:35:45,457 ERROR [sparkDriver-akka.actor.default-dispatcher-6] org.apache.spark.rpc.akka.ErrorMonitor: Uncaught fatal error from thread [sparkDriver-akka.actor.default-dispatcher-17] shutting down ActorSystem [sparkDriver]
java.lang.OutOfMemoryError: PermGen space
	at java.lang.Class.getDeclaredConstructors0(Native Method)
	at java.lang.Class.privateGetDeclaredConstructors(Class.java:2532)
	at java.lang.Class.getConstructor0(Class.java:2842)
	at java.lang.Class.newInstance(Class.java:345)
	at sun.reflect.MethodAccessorGenerator$1.run(MethodAccessorGenerator.java:399)
	at sun.reflect.MethodAccessorGenerator$1.run(MethodAccessorGenerator.java:396)
	at java.security.AccessController.doPrivileged(Native Method)
	at sun.reflect.MethodAccessorGenerator.generate(MethodAccessorGenerator.java:395)
	at sun.reflect.MethodAccessorGenerator.generateSerializationConstructor(MethodAccessorGenerator.java:113)
	at sun.reflect.ReflectionFactory.newConstructorForSerialization(ReflectionFactory.java:331)
	at java.io.ObjectStreamClass.getSerializableConstructor(ObjectStreamClass.java:1376)
	at java.io.ObjectStreamClass.access$1500(ObjectStreamClass.java:72)
	at java.io.ObjectStreamClass$2.run(ObjectStreamClass.java:493)
	at java.io.ObjectStreamClass$2.run(ObjectStreamClass.java:468)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.io.ObjectStreamClass.<init>(ObjectStreamClass.java:468)
	at java.io.ObjectStreamClass.lookup(ObjectStreamClass.java:365)
	at java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:602)
	at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1622)
	at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1517)
	at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1622)
	at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1517)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1771)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
	at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
	at java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
	at akka.serialization.JavaSerializer$$anonfun$1.apply(Serializer.scala:136)
	at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
	at akka.serialization.JavaSerializer.fromBinary(Serializer.scala:136)
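
My next idea is to also set the launcher JVM options directly, since mapreduce.map.memory.mb only raises the YARN container size, not the JVM's own PermGen limit. I have not verified this yet; oozie.launcher.mapreduce.map.java.opts below is my assumption based on Oozie passing oozie.launcher.* properties through to the launcher job, and the values are illustrative:

            <configuration>
                <!-- container size for the launcher map task -->
                <property>
                    <name>oozie.launcher.mapreduce.map.memory.mb</name>
                    <value>6144</value>
                </property>
                <!-- assumed pass-through property: JVM options for the launcher map task -->
                <property>
                    <name>oozie.launcher.mapreduce.map.java.opts</name>
                    <value>-Xmx4096m -XX:MaxPermSize=512m</value>
                </property>
            </configuration>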





On Thu, Feb 4, 2016 at 11:10 PM, Serega Sheypak <se...@gmail.com>
wrote:

> Probably you need to increase memory for the Oozie launcher itself?
>
> 2016-02-04 20:57 GMT+01:00 Liping Zhang <zl...@gmail.com>:
>
>> Dear Oozie user and dev,
>>
>> We have a Spark job that needs to run as a workflow in Oozie.
>>
>>
>> 1. The Spark job runs successfully when submitted from the command line as
>> below:
>>
>> spark-submit --master
>> spark://ip-10-0-4-248.us-west-1.compute.internal:7077 --class
>> com.gridx.spark.MeterReadingLoader --name 'smud_test1' --driver-class-path
>> /opt/cloudera/parcels/CDH/jars/guava-16.0.1.jar:/opt/cloudera/parcels/CDH-5.5.0-1.cdh5.5.0.p0.8/jars/jets3t-0.9.0.jar  --conf
>> spark.executor.extraClassPath=/opt/cloudera/parcels/CDH-5.5.0-1.cdh5.5.0.p0.8/jars/jets3t-0.9.0.jar
>> ~/spark-all.jar -i s3n://meter-data/batch_data_phase1/smud_phase1_10.csv -k
>> smud_stage -h 10.0.4.243 -t 60 -z America/Los_Angeles -l smud_test1 -g SMUD
>>
>>
>> 2. However, when we use the Oozie REST API or Hue-Oozie in CDH to submit the
>> same Spark job with the following REST call, it launches an Oozie launcher
>> job
>> "oozie:launcher:T=spark:W=meter_reading_loader:A=spark-17c0:ID=0000027-160202081901924-oozie-oozi-W",
>> which fails with an OOM PermGen exception.
>>
>> BTW, our GridX jar "spark-all.jar" is 88 MB.
>>
>> Here are the screenshots, and attached is the workflow for Oozie.
>>
>> curl -X POST -H "Content-Type: application/xml" -d @config.xml
>> http://localhost:11000/oozie/v2/jobs?action=start
>>
>>
>> oozie parameters:
>>
>> [image: Inline image 4]
>>
>>
>> Oozie job in the CDH resource manager UI (port 8088):
>>
>> [image: Inline image 2]
>>
>>
>>
>> Exceptions and logs:
>>
>> [image: Inline image 1]
>>
>> [image: Inline image 3]
>>
>>
>>
>> I also tried to enlarge MaxPermSize and the memory, but still no luck. Can
>> you help out? Thanks very much!
>>
>>
>>
>> --
>> Cheers,
>> -----
>> Big Data - Big Wisdom - Big Value
>> --------------
>> Michelle Zhang (张莉苹)
>>
>
>


-- 
Cheers,
-----
Big Data - Big Wisdom - Big Value
--------------
Michelle Zhang (张莉苹)

Re: spark job failed with oozie

Posted by Liping Zhang <zl...@gmail.com>.
Hi Serega, oozie users and devs,

According to
http://stackoverflow.com/questions/24262896/oozie-shell-action-memory-limit,
I added the following lines to workflow.xml, but still got the OOM PermGen issue.
I guess I set the Oozie launcher memory in the wrong way; what is the
right way to set the launcher's memory? Thanks very much for your answers!

            <configuration>
                <property>
                    <name>oozie.launcher.mapreduce.map.memory.mb</name>
                    <value>6144</value>
                </property>
            </configuration>
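
One more thought, though I have not confirmed it: if the Spark action runs the driver inside the launcher JVM (client deploy mode), then --conf spark.driver.extraJavaOptions arrives too late to change PermGen, so the limit would have to go on the launcher JVM itself. A sketch of what I plan to try next, where oozie.launcher.mapreduce.map.java.opts is my assumption based on the oozie.launcher.* pass-through:

            <configuration>
                <property>
                    <name>oozie.launcher.mapreduce.map.memory.mb</name>
                    <value>6144</value>
                </property>
                <!-- assumed property name: JVM options for the launcher map task -->
                <property>
                    <name>oozie.launcher.mapreduce.map.java.opts</name>
                    <value>-Xmx4096m -XX:MaxPermSize=512m</value>
                </property>
            </configuration>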

On Tue, Feb 9, 2016 at 12:45 PM, Liping Zhang <zl...@gmail.com> wrote:

> Thanks Serega for your answers!
>
> I increased the memory for the Oozie launcher itself in workflow.xml as follows,
> but I'm not sure whether I increased it in the right way. Please correct
> me if I'm wrong.
>
>     <action name="spark-17c0">
>         <spark xmlns="uri:oozie:spark-action:0.1">
>             <job-tracker>${jobTracker}</job-tracker>
>             <name-node>${nameNode}</name-node>
>             <configuration>
>                 <property>
>                     <name>oozie.launcher.mapreduce.map.memory.mb</name>
>                     <value>6144</value>
>                 </property>
>             </configuration>
>             <master>spark://ip-10-0-4-248.us-west-1.compute.internal:7077</master>
>             <name>MeterReadingLoader</name>
>             <class>com.gridx.spark.MeterReadingLoader</class>
>             <jar>/user/root/workspaces/lib/spark-all.jar</jar>
>             <spark-opts>--conf spark.driver.extraJavaOptions="-XX:MaxPermSize=10g" --conf spark.executor.extraJavaOptions="-XX:MaxPermSize=4g" --driver-memory 8g --executor-memory 2g --num-executors 3 --executor-cores 8 --driver-class-path /opt/cloudera/parcels/CDH/jars/guava-16.0.1.jar:/opt/cloudera/parcels/CDH/jars/spark-assembly-1.5.0-cdh5.5.0-hadoop2.6.0-cdh5.5.0.jar:/opt/cloudera/parcels/CDH-5.5.0-1.cdh5.5.0.p0.8/jars/jets3t-0.9.0.jar --conf spark.executor.extraClassPath=/opt/cloudera/parcels/CDH/jars/jets3t-0.9.0.jar</spark-opts>
>             <arg>${i}</arg>
>             <arg>${path}</arg>
>             <arg>${k}</arg>
>             <arg>${keyspace}</arg>
>             <arg>${h}</arg>
>             <arg>${cassandrahost}</arg>
>             <arg>${t}</arg>
>             <arg>${interval}</arg>
>             <arg>${z}</arg>
>             <arg>${timezone}</arg>
>             <arg>${l}</arg>
>             <arg>${listname}</arg>
>             <arg>${g}</arg>
>             <arg>${company}</arg>
>         </spark>
>         <ok to="End"/>
>         <error to="Kill"/>
>     </action>
>
>
> But it still hits the PermGen space issue.
>
> 2016-02-09 20:35:44,853 INFO [sparkDriver-akka.actor.default-dispatcher-18] org.apache.spark.storage.BlockManagerInfo: Removed broadcast_3_piece0 on 10.0.4.249:47565 in memory (size: 1884.0 B, free: 1060.2 MB)
> 2016-02-09 20:35:45,457 ERROR [sparkDriver-akka.actor.default-dispatcher-6] org.apache.spark.rpc.akka.ErrorMonitor: Uncaught fatal error from thread [sparkDriver-akka.actor.default-dispatcher-17] shutting down ActorSystem [sparkDriver]
> java.lang.OutOfMemoryError: PermGen space
> 	at java.lang.Class.getDeclaredConstructors0(Native Method)
> 	at java.lang.Class.privateGetDeclaredConstructors(Class.java:2532)
> 	at java.lang.Class.getConstructor0(Class.java:2842)
> 	at java.lang.Class.newInstance(Class.java:345)
> 	at sun.reflect.MethodAccessorGenerator$1.run(MethodAccessorGenerator.java:399)
> 	at sun.reflect.MethodAccessorGenerator$1.run(MethodAccessorGenerator.java:396)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at sun.reflect.MethodAccessorGenerator.generate(MethodAccessorGenerator.java:395)
> 	at sun.reflect.MethodAccessorGenerator.generateSerializationConstructor(MethodAccessorGenerator.java:113)
> 	at sun.reflect.ReflectionFactory.newConstructorForSerialization(ReflectionFactory.java:331)
> 	at java.io.ObjectStreamClass.getSerializableConstructor(ObjectStreamClass.java:1376)
> 	at java.io.ObjectStreamClass.access$1500(ObjectStreamClass.java:72)
> 	at java.io.ObjectStreamClass$2.run(ObjectStreamClass.java:493)
> 	at java.io.ObjectStreamClass$2.run(ObjectStreamClass.java:468)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at java.io.ObjectStreamClass.<init>(ObjectStreamClass.java:468)
> 	at java.io.ObjectStreamClass.lookup(ObjectStreamClass.java:365)
> 	at java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:602)
> 	at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1622)
> 	at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1517)
> 	at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1622)
> 	at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1517)
> 	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1771)
> 	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
> 	at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
> 	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
> 	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
> 	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
> 	at java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
> 	at akka.serialization.JavaSerializer$$anonfun$1.apply(Serializer.scala:136)
> 	at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
> 	at akka.serialization.JavaSerializer.fromBinary(Serializer.scala:136)
>
>
>
>
>
> On Thu, Feb 4, 2016 at 11:10 PM, Serega Sheypak <se...@gmail.com>
> wrote:
>
>> Probably you need to increase memory for the Oozie launcher itself?
>>
>> 2016-02-04 20:57 GMT+01:00 Liping Zhang <zl...@gmail.com>:
>>
>>> Dear Oozie user and dev,
>>>
>>> We have a Spark job that needs to run as a workflow in Oozie.
>>>
>>>
>>> 1. The Spark job runs successfully when submitted from the command line as
>>> below:
>>>
>>> spark-submit --master
>>> spark://ip-10-0-4-248.us-west-1.compute.internal:7077 --class
>>> com.gridx.spark.MeterReadingLoader --name 'smud_test1' --driver-class-path
>>> /opt/cloudera/parcels/CDH/jars/guava-16.0.1.jar:/opt/cloudera/parcels/CDH-5.5.0-1.cdh5.5.0.p0.8/jars/jets3t-0.9.0.jar  --conf
>>> spark.executor.extraClassPath=/opt/cloudera/parcels/CDH-5.5.0-1.cdh5.5.0.p0.8/jars/jets3t-0.9.0.jar
>>> ~/spark-all.jar -i s3n://meter-data/batch_data_phase1/smud_phase1_10.csv -k
>>> smud_stage -h 10.0.4.243 -t 60 -z America/Los_Angeles -l smud_test1 -g SMUD
>>>
>>>
>>> 2. However, when we use the Oozie REST API or Hue-Oozie in CDH to submit the
>>> same Spark job with the following REST call, it launches an Oozie launcher
>>> job
>>> "oozie:launcher:T=spark:W=meter_reading_loader:A=spark-17c0:ID=0000027-160202081901924-oozie-oozi-W",
>>> which fails with an OOM PermGen exception.
>>>
>>> BTW, our GridX jar "spark-all.jar" is 88 MB.
>>>
>>> Here are the screenshots, and attached is the workflow for Oozie.
>>>
>>> curl -X POST -H "Content-Type: application/xml" -d @config.xml
>>> http://localhost:11000/oozie/v2/jobs?action=start
>>>
>>>
>>> oozie parameters:
>>>
>>> [image: Inline image 4]
>>>
>>>
>>> Oozie job in the CDH resource manager UI (port 8088):
>>>
>>> [image: Inline image 2]
>>>
>>>
>>>
>>> Exceptions and logs:
>>>
>>> [image: Inline image 1]
>>>
>>> [image: Inline image 3]
>>>
>>>
>>>
>>> I also tried to enlarge MaxPermSize and the memory, but still no luck. Can
>>> you help out? Thanks very much!
>>>
>>>
>>>
>>> --
>>> Cheers,
>>> -----
>>> Big Data - Big Wisdom - Big Value
>>> --------------
>>> Michelle Zhang (张莉苹)
>>>
>>
>>
>
>
> --
> Cheers,
> -----
> Big Data - Big Wisdom - Big Value
> --------------
> Michelle Zhang (张莉苹)
>



-- 
Cheers,
-----
Big Data - Big Wisdom - Big Value
--------------
Michelle Zhang (张莉苹)
