Posted to dev@sqoop.apache.org by "Hari Shreedharan (JIRA)" <ji...@apache.org> on 2014/01/16 20:57:21 UTC

[jira] [Commented] (SQOOP-1226) --password-file option triggers FileSystemClosed exception at end of Oozie action

    [ https://issues.apache.org/jira/browse/SQOOP-1226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13873847#comment-13873847 ] 

Hari Shreedharan commented on SQOOP-1226:
-----------------------------------------

Unfortunately, this does introduce a limited leak. Can you please add a comment explaining why the close call is not made (and referencing this jira), so that someone does not reintroduce the close call later?
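To illustrate why the fix must leave the FileSystem open: Hadoop's FileSystem.get() normally returns a JVM-wide cached instance shared by every caller, so closing it breaks any other holder (here, the Oozie launcher). The following is a minimal, Hadoop-free sketch of that hazard; SharedHandle, readPasswordFile, and all other names are illustrative stand-ins, not Sqoop or Hadoop APIs.

```java
import java.io.IOException;

public class SharedHandleDemo {
    // Stand-in for an HDFS FileSystem instance served from a JVM-wide cache:
    // get() always returns the same object, as FileSystem.get() does by default.
    static class SharedHandle {
        private static final SharedHandle CACHED = new SharedHandle();
        private boolean closed = false;

        static SharedHandle get() { return CACHED; }

        void read() throws IOException {
            if (closed) throw new IOException("Filesystem closed");
        }

        void close() { closed = true; }
    }

    // Simulates Sqoop's password-file handling. With closeAfterUse=true it
    // closes the handle it obtained -- the buggy behavior; with false it
    // deliberately leaves the shared instance open (the SQOOP-1226 fix,
    // at the cost of the limited leak discussed above).
    static void readPasswordFile(boolean closeAfterUse) {
        SharedHandle fs = SharedHandle.get();
        // ... read the password from the file ...
        if (closeAfterUse) {
            fs.close(); // BUG: also closes it for every other holder
        }
    }

    public static void main(String[] args) throws IOException {
        readPasswordFile(false);       // fixed behavior: handle stays open
        SharedHandle.get().read();     // the "Oozie launcher" still works

        readPasswordFile(true);        // buggy behavior: closes shared handle
        boolean failed = false;
        try {
            SharedHandle.get().read(); // now fails with "Filesystem closed"
        } catch (IOException e) {
            failed = true;
        }
        if (!failed) throw new AssertionError("expected Filesystem closed");
        System.out.println("OK");
    }
}
```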

> --password-file option triggers FileSystemClosed exception at end of Oozie action
> ---------------------------------------------------------------------------------
>
>                 Key: SQOOP-1226
>                 URL: https://issues.apache.org/jira/browse/SQOOP-1226
>             Project: Sqoop
>          Issue Type: Bug
>    Affects Versions: 1.4.3
>         Environment: Centos 6.2 + jdk-1.6.0_31-fcs.x86_64
>            Reporter: David Morel
>            Assignee: Jarek Jarcec Cecho
>             Fix For: 1.4.5
>
>         Attachments: SQOOP-1226.patch
>
>
> When using the --password-file option, a Sqoop action running inside an Oozie workflow will ERROR out at the very end, like so:
> {noformat}
> 2013-10-31 13:38:45,095 INFO org.apache.sqoop.hive.HiveImport: Hive import complete.
> 2013-10-31 13:38:45,098 INFO org.apache.sqoop.hive.HiveImport: Export directory is empty, removing it.
> 2013-10-31 13:38:45,213 INFO org.apache.hadoop.mapred.TaskLogsTruncater: Initializing logs' truncater with mapRetainSize=-1 and reduceRetainSize=-1
> 2013-10-31 13:38:45,217 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:mapred (auth:SIMPLE) cause:java.io.IOException: Filesystem closed
> 2013-10-31 13:38:45,218 WARN org.apache.hadoop.mapred.Child: Error running child
> java.io.IOException: Filesystem closed
> 	at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:565)
> 	at org.apache.hadoop.hdfs.DFSInputStream.close(DFSInputStream.java:589)
> 	at java.io.FilterInputStream.close(FilterInputStream.java:155)
> 	at org.apache.hadoop.util.LineReader.close(LineReader.java:149)
> 	at org.apache.hadoop.mapred.LineRecordReader.close(LineRecordReader.java:243)
> 	at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.close(MapTask.java:222)
> 	at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:421)
> 	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:332)
> 	at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:396)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
> 	at org.apache.hadoop.mapred.Child.main(Child.java:262)
> 2013-10-31 13:38:45,234 INFO org.apache.hadoop.mapred.Task: Runnning cleanup for the task
> {noformat}
> With the --password option, the job completes with no error. I believe the --password-file option handling closes the FileSystem instance, which happens to be shared with the Oozie launcher; the launcher then can't write to it on completion. The workaround I found was adding:
> {noformat}
>   <property>
>     <name>fs.hdfs.impl.disable.cache</name>
>     <value>true</value>
>   </property>
> {noformat}
> to the sqoop action definition in the Oozie workflow; that works, but isn't really handy.
> Details are at https://groups.google.com/a/cloudera.org/d/msg/cdh-user/pdsxiy5C_IY/OD8wR0rhHgMJ
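For reference, the workaround property goes inside the action's <configuration> element. Below is a hedged sketch of what such a Sqoop action might look like; the action name, parameter names (${jdbcUrl} etc.), table, and password-file path are all illustrative placeholders, not taken from the report.

```xml
<action name="sqoop-import">
  <sqoop xmlns="uri:oozie:sqoop-action:0.2">
    <job-tracker>${jobTracker}</job-tracker>
    <name-node>${nameNode}</name-node>
    <configuration>
      <!-- Workaround from this jira: give the action its own (uncached)
           FileSystem instance so closing it cannot affect the launcher. -->
      <property>
        <name>fs.hdfs.impl.disable.cache</name>
        <value>true</value>
      </property>
    </configuration>
    <command>import --connect ${jdbcUrl} --table mytable --password-file /user/me/.password --hive-import</command>
  </sqoop>
  <ok to="end"/>
  <error to="fail"/>
</action>
```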



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)