Posted to dev@apex.apache.org by "Ganelin, Ilya" <Il...@capitalone.com> on 2016/03/10 03:53:22 UTC

Operator ID Overlap

Hi all – I’ve created some helper functions to efficiently create HDFS output operators (which share common configuration). However, I’m running into a duplicate operator ID error when attempting to run the application; the error and code are below:

Why would the DAG be assigning a duplicate operator ID? For the record, we’re on Apex 3.0.0, so if this is a bug that has since been fixed, please let me know.


2016-03-09 18:50:48,074 [main] DEBUG logical.LogicalPlan <init> - Initializing LatencyViz_HDHT as com.capitalone.vault8.citadel.operators.impl.LatencyVisualization
LatencyOut_HDHT,latencies
2016-03-09 18:50:48,079 [main] DEBUG logical.LogicalPlan <init> - Initializing LatencyOut_HDHT as com.capitalone.vault8.citadel.operators.impl.HdfsFileOutputOperator
DurabilityOut_HDHT,durability
2016-03-09 18:50:48,081 [main] DEBUG logical.LogicalPlan <init> - Initializing DurabilityOut_HDHT as com.capitalone.vault8.citadel.operators.impl.HdfsFileOutputOperator
Records_HDHT,records

java.lang.IllegalArgumentException: duplicate operator id: OperatorMeta{name=Records_HDHT, operator=RecordMaker{name=Records_HDHT}, attributes={Attribute{defaultValue=null, name=com.datatorrent.api.Context.OperatorContext.PARTITIONER, codec=com.datatorrent.api.StringCodec$Object2String@6f96c77}=com.datatorrent.common.partitioner.StatelessPartitioner@be64738}}
at com.datatorrent.stram.plan.logical.LogicalPlan.addOperator(LogicalPlan.java:865)
at com.capitalone.vault8.citadel.Application.addHdfsOutputOp(Application.java:171)
at com.capitalone.vault8.citadel.Application.createPipelineHdht(Application.java:215)
at com.capitalone.vault8.citadel.Application.populateDAG(Application.java:91)
at com.datatorrent.stram.plan.logical.LogicalPlanConfiguration.prepareDAG(LogicalPlanConfiguration.java:1171)
at com.datatorrent.stram.LocalModeImpl.prepareDAG(LocalModeImpl.java:57)
at com.capitalone.vault8.citadel.ApplicationTest.testApplication(ApplicationTest.java:29)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
at org.junit.runner.JUnitCore.run(JUnitCore.java:157)
at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:78)
at com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:212)
at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:68)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:140)

private HdfsFileOutputOperator addHdfsOutputOp(DAG dag,
    String opName,
    String fileName,
    Configuration conf) {
  System.out.println(opName + "," + fileName);
  HdfsFileOutputOperator outputOp = new HdfsFileOutputOperator();
  outputOp.setCoreSite(conf.get("coreSite"));
  outputOp.setHdfsSite(conf.get("hdfsSite"));
  outputOp.setFilePath(getOutputPath(conf, fileName));
  // Application.java:171 in the stack trace: throws IllegalArgumentException
  // if an operator with this logical name was already added to the DAG.
  dag.addOperator(opName, outputOp);
  return outputOp;
}

final HdfsFileOutputOperator latenciesOutput =
    addHdfsOutputOp(dag, "LatencyOut" + label, "latencies", conf);

final HdfsFileOutputOperator durabilityOutput =
    addHdfsOutputOp(dag, "DurabilityOut" + label, "durability", conf);

final HdfsFileOutputOperator recordsOut =
    addHdfsOutputOp(dag, "Records" + label, "records", conf);

final HdfsFileOutputOperator recordsSchemaOut =
    addHdfsOutputOp(dag, "RecordsSchema" + label, "records", conf);


RE: Operator ID Overlap

Posted by "Ganelin, Ilya" <Il...@capitalone.com>.
I just realized on second glance that I do have a naming conflict! Thanks!
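
For later readers: the conflict was on the name Records_HDHT. The RecordMaker in the stack trace already occupied "Records" + label before the helper tried to add the HDFS output operator under the same name. One plausible fix, sketched with a hypothetical name (any name not already in the DAG works):

final HdfsFileOutputOperator recordsOut =
    addHdfsOutputOp(dag, "RecordsOut" + label, "records", conf);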




Re: Operator ID Overlap

Posted by Tushar Gosavi <tu...@datatorrent.com>.
Hi,

From the stack trace it seems that you are adding two or more operators to
the DAG using the same name (Records_HDHT). Adding multiple operators to the
logical DAG with the same name is not allowed. Can you check
whether this is the case? If not, there might be a problem in the platform.

- Tushar.
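
To make the failure mode concrete, here is a minimal sketch of what LogicalPlan rejects, using the operator classes from the thread as stand-ins (any two operators behave the same):

// Inside populateDAG(DAG dag, Configuration conf):
dag.addOperator("Records_HDHT", new RecordMaker());             // first use of the name: OK
dag.addOperator("Records_HDHT", new HdfsFileOutputOperator());  // throws IllegalArgumentException:
                                                                // duplicate operator id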



Re: Operator ID Overlap

Posted by "Ganelin, Ilya" <Il...@capitalone.com>.
Additional note: I am extracting operatorId from the context in setup:

@Override
public void setup(Context.OperatorContext context) {
	super.setup(context);
	operatorId = context.getId();
	operatorUniquePath = new Path(getOutputFileName() + idDelim + operatorId + ".txt").toString();
}
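
For clarity, the same snippet with the logical/physical distinction spelled out as comments; this is a sketch that assumes idDelim is a simple separator string:

@Override
public void setup(Context.OperatorContext context) {
	super.setup(context);
	// getId() returns the engine-assigned *physical* operator id: each
	// partition of a logical operator gets its own id at runtime, so each
	// partition writes to a distinct file.
	operatorId = context.getId();
	operatorUniquePath = new Path(getOutputFileName() + idDelim + operatorId + ".txt").toString();
	// This cannot prevent the "duplicate operator id" error above: that check
	// is against the *logical* name passed to dag.addOperator() and runs
	// during populateDAG(), before any setup() is called.
}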





