Posted to users@apex.apache.org by "Feldkamp, Brandon (CONT)" <Br...@capitalone.com> on 2016/11/10 02:09:38 UTC
error with AbstractFileOutputOperator rolling files from tmp
Hello,
I’m seeing this error:
hdfs://.../output/application_1478724068939_0002/application_1478724068939_0002/output.txt.0.1478726546727.tmp
    at org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:1219)
    at org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:1211)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1211)
    at com.datatorrent.lib.io.fs.AbstractFileOutputOperator.setup(AbstractFileOutputOperator.java:411)
    ... 6 more
For some reason “application_1478724068939_0002” is being added to the path twice. Any idea why this could be happening?
This is how we set up the path in our FileOutputOperator, which extends AbstractFileOutputOperator:

@Override
public void setup(Context.OperatorContext context) {
    …
    // create directories based on the application id
    String applicationId = context.getValue(Context.DAGContext.APPLICATION_ID);
    setFilePath(getFilePath() + "/" + applicationId);
    …
    super.setup(context);
}
________________________________________________________
The information contained in this e-mail is confidential and/or proprietary to Capital One and/or its affiliates and may only be used solely in performance of work or services for Capital One. The information transmitted herewith is intended only for use by the individual or entity to which it is addressed. If the reader of this message is not the intended recipient, you are hereby notified that any review, retransmission, dissemination, distribution, copying or other use of, or taking of any action in reliance upon this information is strictly prohibited. If you have received this communication in error, please contact the sender and delete the material from your computer.
Re: error with AbstractFileOutputOperator rolling files from tmp
Posted by "Feldkamp, Brandon (CONT)" <Br...@capitalone.com>.
That’s what I was afraid of!
We did something similar (set a boolean flag to mark when the file path had been set).
Thanks for everyone’s help!
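For reference, the flag-based approach looks roughly like this. The class, field, and simplified setup() signature here are our own illustration, not code from this thread; it relies on the operator's non-transient fields being checkpointed, so a redeployed instance sees the flag already set and skips the second append.

```java
// Sketch of guarding setup() with a persisted flag.
public class PathGuard {
    // Checkpointed with the operator state, so a redeploy restores
    // "true" along with the already-modified filePath.
    private boolean pathInitialized = false;
    private String filePath;

    public PathGuard(String basePath) {
        this.filePath = basePath;
    }

    // Stand-in for setup(Context.OperatorContext context).
    public void setup(String applicationId) {
        if (!pathInitialized) {
            filePath = filePath + "/" + applicationId;
            pathInitialized = true;
        }
    }

    public String getFilePath() {
        return filePath;
    }

    public static void main(String[] args) {
        PathGuard op = new PathGuard("output");
        op.setup("application_1478724068939_0002"); // first deploy
        op.setup("application_1478724068939_0002"); // redeploy after failure
        System.out.println(op.getFilePath());
        // output/application_1478724068939_0002
    }
}
```

Note the flag works precisely because it shares the operator's fate with filePath: both are restored from the same checkpoint, so they can never disagree.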
Re: error with AbstractFileOutputOperator rolling files from tmp
Posted by Munagala Ramanath <ra...@datatorrent.com>.
The application id is not known until the application starts running, so that kind of substitution likely won't be possible.

Can you not simply check whether filePath already ends with the application id before appending? e.g.:

String appid = context.getValue(Context.DAGContext.APPLICATION_ID);
if (!filePath.endsWith(appid)) {
    filePath = filePath + "/" + appid;
}
Ram
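A quick check of that guard, wrapped in a hypothetical helper for illustration (the class and method names are not from Apex):

```java
public class AppendOnce {
    // Appends appid to filePath only when it is not already the last
    // path segment, mirroring the endsWith() guard suggested above.
    public static String appendAppId(String filePath, String appid) {
        if (!filePath.endsWith(appid)) {
            filePath = filePath + "/" + appid;
        }
        return filePath;
    }

    public static void main(String[] args) {
        String path = "output";
        path = appendAppId(path, "application_1478724068939_0002"); // first setup()
        path = appendAppId(path, "application_1478724068939_0002"); // redeploy
        System.out.println(path);
        // output/application_1478724068939_0002
    }
}
```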
Re: error with AbstractFileOutputOperator rolling files from tmp
Posted by "Feldkamp, Brandon (CONT)" <Br...@capitalone.com>.
Good point. I’m sure that’s most likely what happened.

Is there any way to reference the application id in properties.xml? I tried the following, but it didn’t work:

<property>
  <name>dt.operator.fileOut.prop.filePath</name>
  <value>output/${dt.attr.APPLICATION_ID}</value>
</property>
Thanks!
Brandon
Re: error with AbstractFileOutputOperator rolling files from tmp
Posted by Tushar Gosavi <tu...@datatorrent.com>.
Was there any failure or redeploy of the operator? Did any container get killed before you saw this error?

- On the first initialization of the operator, setup() correctly sets filePath to filePath + "/" + applicationId.

- If the operator is redeployed (due to an upstream operator failure or a failure of this operator), setup() is called again and appends applicationId to the last set value of filePath, so the application id ends up in the path twice.
- Tushar.
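The two-step failure described above can be reproduced in isolation; this is a toy simulation of the custom setup() logic, not Apex code:

```java
public class DoubleAppend {
    // Mimics what the custom setup() does to filePath on every call:
    // an unconditional append of the application id.
    public static String setupOnce(String filePath, String applicationId) {
        return filePath + "/" + applicationId;
    }

    public static void main(String[] args) {
        String appId = "application_1478724068939_0002";
        String filePath = "output";

        filePath = setupOnce(filePath, appId); // first deploy
        filePath = setupOnce(filePath, appId); // redeploy after a failure

        // Matches the doubled directory in the FileNotFoundException path.
        System.out.println(filePath);
        // output/application_1478724068939_0002/application_1478724068939_0002
    }
}
```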
Re: error with AbstractFileOutputOperator rolling files from tmp
Posted by "Feldkamp, Brandon (CONT)" <Br...@capitalone.com>.
I cut off part of the stack trace; here is the full version:
Abandoning deployment due to setup failure. java.lang.RuntimeException: java.io.FileNotFoundException: File does not exist: hdfs://.../output/application_1478724068939_0002/application_1478724068939_0002/output.txt.0.1478726546727.tmp
    at com.datatorrent.lib.io.fs.AbstractFileOutputOperator.setup(AbstractFileOutputOperator.java:418)
    at com.capitalone.cerberus.lazarus.operators.FileOutputOperator.setup(FileOutputOperator.java:58)
    at com.capitalone.cerberus.lazarus.operators.FileOutputOperator.setup(FileOutputOperator.java:27)
    at com.datatorrent.stram.engine.Node.setup(Node.java:187)
    at com.datatorrent.stram.engine.StreamingContainer.setupNode(StreamingContainer.java:1309)
    at com.datatorrent.stram.engine.StreamingContainer.access$100(StreamingContainer.java:130)
    at com.datatorrent.stram.engine.StreamingContainer$2.run(StreamingContainer.java:1388)
Caused by: java.io.FileNotFoundException: File does not exist: hdfs://.../output/application_1478724068939_0002/application_1478724068939_0002/output.txt.0.1478726546727.tmp
    at org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:1219)
    at org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:1211)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1211)
    at com.datatorrent.lib.io.fs.AbstractFileOutputOperator.setup(AbstractFileOutputOperator.java:411)
    ... 6 more