Posted to users@apex.apache.org by bhidevivek <bh...@gmail.com> on 2017/05/17 21:51:20 UTC

NullPointerException at AbstractFSRollingOutputOperator while using HiveOutputModule

While using the HiveOutputModule to save data into a Hive partitioned
table, the application submission fails many times with the error below:

2017-05-17 16:20:13,503 INFO  stram.StreamingContainerManager
(StreamingContainerManager.java:processHeartbeat(1486)) - Container
container_e3092_1491920474239_122895_01_000014 buffer server:
brdn1362.target.com:34963
2017-05-17 16:20:13,841 INFO  stram.StreamingContainerParent
(StreamingContainerParent.java:log(170)) - child msg: Stopped running due to
an exception. java.lang.NullPointerException
	at
com.datatorrent.contrib.hive.AbstractFSRollingOutputOperator.getHDFSRollingLastFile(AbstractFSRollingOutputOperator.java:204)
	at
com.datatorrent.contrib.hive.AbstractFSRollingOutputOperator.endWindow(AbstractFSRollingOutputOperator.java:226)
	at
com.datatorrent.stram.engine.GenericNode.processEndWindow(GenericNode.java:153)
	at com.datatorrent.stram.engine.GenericNode.run(GenericNode.java:397)
	at
com.datatorrent.stram.engine.StreamingContainer$2.run(StreamingContainer.java:1428)
 context:
PTContainer[id=9(container_e3092_1491920474239_122895_01_000014),state=ACTIVE,operators=[PTOperator[id=10,name=hiveOutput$fsRolling,state=PENDING_DEPLOY]]]

I don't see any pattern to when this error is reported. I made sure the
table exists in Hive and the location is correct. Are there any particular
configuration settings I should look at to avoid this?





Re: NullPointerException at AbstractFSRollingOutputOperator while using HiveOutputModule

Posted by Vivek Bhide <bh...@gmail.com>.
Thank you, Sanjay, for your reply. I am using Malhar 3.7.0 but am still
facing the issue. I will re-evaluate and reproduce the issue, with all the
logs to support my findings, before I report any of the issues.

Regards
Vivek




Re: NullPointerException at AbstractFSRollingOutputOperator while using HiveOutputModule

Posted by Sanjay Pujare <sa...@datatorrent.com>.
Hi Vivek

Which version of malhar and malhar-hive are you using? It may help to use
the latest version (3.7.0) as a couple of fixes have gone in recently that
might fix your issue:
  APEXMALHAR-2394 <https://github.com/apache/apex-malhar/commit/4587a55c0bc7b178ea6fc13a49db4cd7b1ac1ebb>
  APEXMALHAR-2342 <https://github.com/apache/apex-malhar/commit/1df0b523ad522595f6bb30ff76aacb57d239807f>
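
If you pull Malhar through Maven, bumping the version is roughly the change
below. This is only a sketch: adjust the artifact list to whatever your
application actually depends on, and it assumes the Hive operators come from
malhar-contrib.

<!-- pom.xml sketch: move the Malhar artifacts to 3.7.0 -->
<dependency>
  <groupId>org.apache.apex</groupId>
  <artifactId>malhar-library</artifactId>
  <version>3.7.0</version>
</dependency>
<dependency>
  <groupId>org.apache.apex</groupId>
  <artifactId>malhar-contrib</artifactId>
  <version>3.7.0</version>
</dependency>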

Also, with regard to my suggestion about (3): it seems this is by design,
i.e. operator properties inside a module are not meant to be exposed unless
the module writer intends to expose them explicitly (as the module's own
properties).

Sanjay



Re: NullPointerException at AbstractFSRollingOutputOperator while using HiveOutputModule

Posted by Sanjay Pujare <sa...@datatorrent.com>.
Hi Vivek

Looks like you have run into a couple of bugs. You may want to create the
following JIRAs (and look into fixing them?)

1) JIRA for your case 1. What can be suggested is to add another trigger
for the roll-up event (say, when the number of .tmp files exceeds a certain
configurable number, instead of just the empty-window count).

2) JIRA for the 2 NPEs: I tried to look into these to determine whether they
are bugs or configuration issues but wasn't able to. Maybe you can open 2
JIRAs for the 2 different NPE stack traces.

3) Enhancement request: the ability to set a property of an operator inside
a module, e.g. the modulename$operatorname notation that you mentioned.

I think what will be quickest is for you to address this and submit it to
malhar for review and commit.


Sanjay




Re: NullPointerException at AbstractFSRollingOutputOperator while using HiveOutputModule

Posted by Vivek Bhide <bh...@gmail.com>.
Hi Sanjay,

After working on this for some more time, I could find a pattern in how and
when the code breaks, but it doesn't work in any situation. Below are my
observations so far:

1. Regarding setting a parameter: as you said, the app_name is optional, and
the reason is that you don't expect to have more than one streaming
application in your project. I think the app_name will matter if there is
more than one streaming application in your .apa with properties in the same
.xml file.
2. I tried setting maxWindowsWithNoData to a very high value, but the only
way I could set it was by using * in place of the operator name (a sketch of
that entry follows these observations). The reason is that HiveOutputModule
doesn't accept it as a parameter; instead, it is a property of one of the
operators inside HiveOutputModule, i.e. AbstractFSRollingOutputOperator. At
this point there is no provision for setting a parameter that is embedded in
a module, even when using the <modulename$operatorname> pattern; it is the
module's responsibility to accept it as a level-1 property from the
properties file and set it on the level-2 operator when it builds the DAG. I
could verify this with a quick test case for another module that I have
built in my project and can share the code base for it.
3. File rollup depends on two parameters: maxWindowsWithNoData (from
AbstractFSRollingOutputOperator) and maxLength (from HiveModule).
	Case 1: maxWindowsWithNoData set to a very high number and maxLength =
50 MB (default 128 MB).
	Result: The file rollup doesn't happen until the empty-window count
reaches that high number. I could see that there were multiple 50 MB files
created under the <hdfs_dir>/<yarn_app_id>/10/<partition_col> location, but
none of the files rolled over from .tmp to a final file even after running
the app for more than 10 hours.

	Case 2: maxWindowsWithNoData set to 480 (4 minutes) and maxLength =
50 MB (default 128 MB).
	Result: If the maxLength limit is reached first, I get the exception
below (a NullPointerException again, but with a different stack trace); if
maxWindowsWithNoData is reached first, I get the same NullPointerException
that I reported in the first place.

	2017-05-19 10:02:37,401 INFO  stram.StreamingContainerParent
(StreamingContainerParent.java:log(170)) - child msg:
[container_e3092_1491920474239_131026_01_000016] Entering heartbeat loop..
context:
PTContainer[id=9(container_e3092_1491920474239_131026_01_000016),state=ALLOCATED,operators=[PTOperator[id=10,name=hiveOutput$fsRolling,state=PENDING_DEPLOY]]]
		2017-05-19 10:02:38,414 INFO  stram.StreamingContainerManager
(StreamingContainerManager.java:processHeartbeat(1486)) - Container
container_e3092_1491920474239_131026_01_000016 buffer server:
d-d7zvfz1.target.com:45373
		2017-05-19 10:02:38,725 INFO  stram.StreamingContainerParent
(StreamingContainerParent.java:log(170)) - child msg: Stopped running due to
an exception. java.lang.NullPointerException
			at
com.datatorrent.lib.io.fs.AbstractFileOutputOperator.requestFinalize(AbstractFileOutputOperator.java:742)
			at
com.datatorrent.lib.io.fs.AbstractFileOutputOperator.rotate(AbstractFileOutputOperator.java:883)
			at
com.datatorrent.contrib.hive.AbstractFSRollingOutputOperator.rotateCall(AbstractFSRollingOutputOperator.java:186)
			at
com.datatorrent.contrib.hive.AbstractFSRollingOutputOperator.endWindow(AbstractFSRollingOutputOperator.java:227)
			at
com.datatorrent.stram.engine.GenericNode.processEndWindow(GenericNode.java:153)
			at com.datatorrent.stram.engine.GenericNode.run(GenericNode.java:397)
			at
com.datatorrent.stram.engine.StreamingContainer$2.run(StreamingContainer.java:1428)
		 context:
PTContainer[id=9(container_e3092_1491920474239_131026_01_000016),state=ACTIVE,operators=[PTOperator[id=10,name=hiveOutput$fsRolling,state=PENDING_DEPLOY]]]
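
As a concrete reference for observation 2 above, the only form that took
effect for me was the wildcard operator name. A sketch of that property
entry (the prefix and the value here are illustrative, not copied from my
actual file):

<property>
  <name>dt.operator.*.prop.maxWindowsWithNoData</name>
  <value>100000000</value>
</property>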


In any case, the code always fails. I was really excited to have this
incorporated, but for now I have set it aside and am sticking to a simple
HDFS sink. I will work on it again to find out more as time permits.

Let me know your thoughts on this

Regards
Vivek




Re: NullPointerException at AbstractFSRollingOutputOperator while using HiveOutputModule

Posted by Sanjay Pujare <sa...@datatorrent.com>.
I'm not sure if you need "dt.application.<app_name>" in the name element. As
per http://docs.datatorrent.com/application_packages/#operator-properties
your property spec should be:
<property>
  <name>dt.operator.fsRolling.prop.maxWindowsWithNoData</name>
  <value>100000000</value>
</property>

I have been trying to find an example of setting a property for an operator
inside a module (like this case) but couldn't find one. If the above doesn't
work, you can try the $ notation as follows:

<property>
  <name>dt.operator.hiveOutput$fsRolling.prop.maxWindowsWithNoData</name>
  <value>100000000</value>
</property>

Let me know if that works or not.



Re: NullPointerException at AbstractFSRollingOutputOperator while using HiveOutputModule

Posted by Vivek Bhide <bh...@gmail.com>.
Hi Sanjay,

I made all the required changes, but the application is still throwing the
same error. I increased the value to 100000000 (default 100) and even
removed the upstream operator's streaming window customization:

<property>
  <name>dt.application.<app_name>.operator.fsRolling.prop.maxWindowsWithNoData</name>
  <value>100000000</value>
</property>

I also tried setting the property like this:

<property>
  <name>dt.application.<app_name>.operator.hiveOutput$fsRolling.prop.maxWindowsWithNoData</name>
  <value>100000000</value>
</property>

since 'hiveOutput' is the operator name for HiveModule in my application,
and 'hiveOutput$fsRolling' is how the name appears when I run
list-operators, but it's of no use.

Is there any working example of HdfsOutputModule that I can refer to? 

When I use my own HdfsSinkOperator, which extends
AbstractFileOutputOperator, the file rollup works perfectly fine, but not
with HdfsOutputModule or FSPojoToHiveOperator.

Is there an alternative to achieve the same functionality? Please let me
know.

Regards
Vivek




Re: NullPointerException at AbstractFSRollingOutputOperator while using HiveOutputModule

Posted by Vivek Bhide <bh...@gmail.com>.
Thank you, Sanjay. I did go through the code and found the value you
mentioned, but I was not sure whether I should override it.

Regarding no data being sent to the HiveOutputModule for a number of
windows, that could be the case, since the operator upstream of the Hive
module has its streaming window interval set to 1 minute (since it is
aggregating counts, I didn't want it to emit the aggregation at the default
window interval, but rather wait and accumulate some considerable aggregated
values before emitting to the HDFS sink operator).
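
For reference, one way to stretch the upstream operator's window like this
(not necessarily the exact setting I used) is the APPLICATION_WINDOW_COUNT
attribute; the operator name below is a placeholder, and 120 assumes the
default 500 ms streaming window, so it comes to roughly 1 minute:

<property>
  <name>dt.application.<app_name>.operator.<aggregator_name>.attr.APPLICATION_WINDOW_COUNT</name>
  <value>120</value>
</property>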

Regards
Vivek




Re: NullPointerException at AbstractFSRollingOutputOperator while using HiveOutputModule

Posted by Sanjay Pujare <sa...@datatorrent.com>.
Vivek

Looking at the code, you could have run into a corner-case bug where no data
is being sent to the HiveOutputModule for a number of windows (could that be
happening?).

The corner case is that AbstractFSRollingOutputOperator rotates files if the
number of empty windows exceeds a certain threshold. That threshold can be
set via the maxWindowsWithNoData property of that operator. Within the
HiveOutputModule the name of that operator is fsRolling, so you can use
....operator.fsRolling.prop.maxWindowsWithNoData
as the property name and set a very high value so that the operator doesn't
try to create a rolling partition for empty windows.

