Posted to commits@nifi.apache.org by "Joseph Witt (JIRA)" <ji...@apache.org> on 2015/11/07 20:34:10 UTC

[jira] [Updated] (NIFI-1062) PutHDFS will pass files to success when they were not successfully written with hadoop client misconfiguration

     [ https://issues.apache.org/jira/browse/NIFI-1062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Joseph Witt updated NIFI-1062:
------------------------------
    Fix Version/s: 0.4.0

> PutHDFS will pass files to success when they were not successfully written with hadoop client misconfiguration
> --------------------------------------------------------------------------------------------------------------
>
>                 Key: NIFI-1062
>                 URL: https://issues.apache.org/jira/browse/NIFI-1062
>             Project: Apache NiFi
>          Issue Type: Bug
>          Components: Extensions
>    Affects Versions: 0.3.0
>         Environment: hadoop-2.6.0 with hadoop.security.authentication=kerberos and dfs.data.transfer.protection=privacy
>            Reporter: Tony Kurc
>            Priority: Minor
>             Fix For: 0.4.0
>
>
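> For context, the two settings named in the Environment field map onto the standard Hadoop client config files roughly as follows; this is a minimal sketch showing only those two properties, with everything else about the cluster setup assumed:
> {noformat}
> <!-- core-site.xml -->
> <property>
>   <name>hadoop.security.authentication</name>
>   <value>kerberos</value>
> </property>
>
> <!-- hdfs-site.xml -->
> <property>
>   <name>dfs.data.transfer.protection</name>
>   <value>privacy</value>
> </property>
> {noformat}
>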
> PutHDFS will create an empty file, but writing the data fails with the stack trace below, and the flow file still gets routed to success (a sketch of a post-write check that would catch this follows the log excerpt).
> {noformat}
> 2015-10-24 11:16:19,278 WARN [Thread-4674] org.apache.hadoop.hdfs.DFSClient DataStreamer Exception
> java.lang.IllegalArgumentException: null
>         at javax.security.auth.callback.NameCallback.<init>(NameCallback.java:90) ~[na:1.8.0_45]
>         at com.sun.security.sasl.digest.DigestMD5Client.processChallenge(DigestMD5Client.java:324) ~[na:1.8.0_45]
>         at com.sun.security.sasl.digest.DigestMD5Client.evaluateChallenge(DigestMD5Client.java:220) ~[na:1.8.0_45]
>         at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslParticipant.evaluateChallengeOrResponse(SaslParticipant.java:113) ~[na:na]
>         at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.doSaslHandshake(SaslDataTransferClient.java:451) ~[na:na]
>         at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.getSaslStreams(SaslDataTransferClient.java:390) ~[na:na]
>         at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.send(SaslDataTransferClient.java:262) ~[na:na]
>         at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.checkTrustAndSend(SaslDataTransferClient.java:210) ~[na:na]
>         at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.socketSend(SaslDataTransferClient.java:182) ~[na:na]
>         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1413) ~[na:na]
>         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1361) ~[na:na]
>         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:588) ~[na:na]
> 2015-10-24 11:16:19,303 INFO [Timer-Driven Process Thread-2] o.apache.nifi.processors.hadoop.PutHDFS PutHDFS[id=3c30a474-86b4-45fc-b771-95d7a1b5054d] copied StandardFlowFileRecord[uuid=5a3deada-9739-474f-a83d-0447ad5aefd9,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1445694733273-1, container=default, section=1], offset=87, length=29],offset=0,name=x11.txt,size=29] to HDFS at /use/hdfs/x11.txt in 56 milliseconds at a rate of 511 bytes/sec
> {noformat}
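>
> A minimal sketch of the kind of post-write verification that would catch this failure. The class and method names are hypothetical, not the actual PutHDFS code, and the length check is just one possible guard; the key point is that a DataStreamer failure surfaces no later than close(), so close() exceptions must propagate, and a zero-length result should not be treated as success:
> {noformat}
> import java.io.IOException;
> import java.io.InputStream;
>
> import org.apache.hadoop.fs.FSDataOutputStream;
> import org.apache.hadoop.fs.FileStatus;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
>
> public class VerifiedHdfsWrite {
>
>     /**
>      * Copy a stream to HDFS and fail loudly if the bytes never made it,
>      * instead of trusting that create()/write() not throwing means success.
>      */
>     public static void copyAndVerify(final FileSystem fs, final Path target,
>                                      final InputStream in, final long expectedLength)
>             throws IOException {
>         final byte[] buffer = new byte[8192];
>         // Any DataStreamer failure is rethrown on write/hflush/close, so the
>         // close() from try-with-resources must not be swallowed by the caller.
>         try (FSDataOutputStream out = fs.create(target)) {
>             int read;
>             while ((read = in.read(buffer)) != -1) {
>                 out.write(buffer, 0, read);
>             }
>             out.hflush(); // push buffered data to the datanodes before closing
>         }
>
>         // Belt-and-braces check: a failed pipeline can leave an empty file
>         // behind even though the client-side calls appeared to succeed.
>         final FileStatus status = fs.getFileStatus(target);
>         if (status.getLen() != expectedLength) {
>             throw new IOException("Wrote " + status.getLen() + " of " + expectedLength
>                     + " bytes to " + target + "; should route to failure, not success");
>         }
>     }
> }
> {noformat}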



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)