Posted to hdfs-dev@hadoop.apache.org by "Nilotpal Nandi (JIRA)" <ji...@apache.org> on 2018/09/07 12:46:00 UTC

[jira] [Resolved] (HDDS-321) ozoneFS put/copyFromLocal command does not work for a directory when the directory contains file(s) as well as subdirectories

     [ https://issues.apache.org/jira/browse/HDDS-321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Nilotpal Nandi resolved HDDS-321.
---------------------------------
    Resolution: Fixed

> ozoneFS put/copyFromLocal command does not work for a directory when the directory contains file(s) as well as subdirectories
> -----------------------------------------------------------------------------------------------------------------------------
>
>                 Key: HDDS-321
>                 URL: https://issues.apache.org/jira/browse/HDDS-321
>             Project: Hadoop Distributed Data Store
>          Issue Type: Bug
>            Reporter: Nilotpal Nandi
>            Assignee: Nilotpal Nandi
>            Priority: Blocker
>             Fix For: 0.2.1
>
>
> Steps taken:
> ---------------------
>  # Created a local directory 'TEST_DIR1' containing a subdirectory "SUB_DIR1" and a file "test_file1".
>  # Ran "./ozone fs -put TEST_DIR1/ /". The command kept running, repeatedly throwing an error on the console.
> Stack trace of the error thrown on the console:
> {noformat}
> 2018-08-02 12:55:46 INFO ConfUtils:41 - raft.grpc.flow.control.window = 1MB (=1048576) (default)
> 2018-08-02 12:55:46 INFO ConfUtils:41 - raft.grpc.message.size.max = 33554432 (custom)
> 2018-08-02 12:55:46 INFO ConfUtils:41 - raft.client.rpc.request.timeout = 3000 ms (default)
> Aug 02, 2018 12:55:46 PM org.apache.ratis.shaded.io.grpc.internal.ProxyDetectorImpl detectProxy
> WARNING: Failed to construct URI for proxy lookup, proceeding without proxy
> java.net.URISyntaxException: Illegal character in hostname at index 13: https://ozone_datanode_3.ozone_default:9858
>  at java.net.URI$Parser.fail(URI.java:2848)
>  at java.net.URI$Parser.parseHostname(URI.java:3387)
>  at java.net.URI$Parser.parseServer(URI.java:3236)
>  at java.net.URI$Parser.parseAuthority(URI.java:3155)
>  at java.net.URI$Parser.parseHierarchical(URI.java:3097)
>  at java.net.URI$Parser.parse(URI.java:3053)
>  at java.net.URI.<init>(URI.java:673)
>  at org.apache.ratis.shaded.io.grpc.internal.ProxyDetectorImpl.detectProxy(ProxyDetectorImpl.java:128)
>  at org.apache.ratis.shaded.io.grpc.internal.ProxyDetectorImpl.proxyFor(ProxyDetectorImpl.java:118)
>  at org.apache.ratis.shaded.io.grpc.internal.InternalSubchannel.startNewTransport(InternalSubchannel.java:207)
>  at org.apache.ratis.shaded.io.grpc.internal.InternalSubchannel.obtainActiveTransport(InternalSubchannel.java:188)
>  at org.apache.ratis.shaded.io.grpc.internal.ManagedChannelImpl$SubchannelImpl.requestConnection(ManagedChannelImpl.java:1130)
>  at org.apache.ratis.shaded.io.grpc.PickFirstBalancerFactory$PickFirstBalancer.handleResolvedAddressGroups(PickFirstBalancerFactory.java:79)
>  at org.apache.ratis.shaded.io.grpc.internal.ManagedChannelImpl$NameResolverListenerImpl$1NamesResolved.run(ManagedChannelImpl.java:1032)
>  at org.apache.ratis.shaded.io.grpc.internal.ChannelExecutor.drain(ChannelExecutor.java:73)
>  at org.apache.ratis.shaded.io.grpc.internal.ManagedChannelImpl$4.get(ManagedChannelImpl.java:403)
>  at org.apache.ratis.shaded.io.grpc.internal.ClientCallImpl.start(ClientCallImpl.java:238)
>  at org.apache.ratis.shaded.io.grpc.internal.CensusTracingModule$TracingClientInterceptor$1.start(CensusTracingModule.java:386)
>  at org.apache.ratis.shaded.io.grpc.internal.CensusStatsModule$StatsClientInterceptor$1.start(CensusStatsModule.java:679)
>  at org.apache.ratis.shaded.io.grpc.stub.ClientCalls.startCall(ClientCalls.java:293)
>  at org.apache.ratis.shaded.io.grpc.stub.ClientCalls.asyncStreamingRequestCall(ClientCalls.java:283)
>  at org.apache.ratis.shaded.io.grpc.stub.ClientCalls.asyncBidiStreamingCall(ClientCalls.java:92)
>  at org.apache.ratis.shaded.proto.grpc.RaftClientProtocolServiceGrpc$RaftClientProtocolServiceStub.append(RaftClientProtocolServiceGrpc.java:208)
>  at org.apache.ratis.grpc.client.RaftClientProtocolClient.appendWithTimeout(RaftClientProtocolClient.java:139)
>  at org.apache.ratis.grpc.client.GrpcClientRpc.sendRequest(GrpcClientRpc.java:109)
>  at org.apache.ratis.grpc.client.GrpcClientRpc.sendRequest(GrpcClientRpc.java:88)
>  at org.apache.ratis.client.impl.RaftClientImpl.sendRequest(RaftClientImpl.java:302)
>  at org.apache.ratis.client.impl.RaftClientImpl.sendRequestWithRetry(RaftClientImpl.java:256)
>  at org.apache.ratis.client.impl.RaftClientImpl.send(RaftClientImpl.java:192)
>  at org.apache.ratis.client.impl.RaftClientImpl.send(RaftClientImpl.java:173)
>  at org.apache.ratis.client.RaftClient.send(RaftClient.java:80)
>  at org.apache.hadoop.hdds.scm.XceiverClientRatis.sendRequest(XceiverClientRatis.java:218)
>  at org.apache.hadoop.hdds.scm.XceiverClientRatis.sendCommand(XceiverClientRatis.java:235)
>  at org.apache.hadoop.hdds.scm.storage.ContainerProtocolCalls.writeChunk(ContainerProtocolCalls.java:219)
>  at org.apache.hadoop.hdds.scm.storage.ChunkOutputStream.writeChunkToContainer(ChunkOutputStream.java:220)
>  at org.apache.hadoop.hdds.scm.storage.ChunkOutputStream.close(ChunkOutputStream.java:150)
>  at org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream$ChunkOutputStreamEntry.close(ChunkGroupOutputStream.java:486)
>  at org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream.close(ChunkGroupOutputStream.java:326)
>  at org.apache.hadoop.fs.ozone.OzoneFSOutputStream.close(OzoneFSOutputStream.java:57)
>  at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
>  at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
>  at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:70)
>  at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:129)
>  at org.apache.hadoop.fs.shell.CommandWithDestination$TargetFileSystem.writeStreamToFile(CommandWithDestination.java:485)
>  at org.apache.hadoop.fs.shell.CommandWithDestination.copyStreamToTarget(CommandWithDestination.java:407)
>  at org.apache.hadoop.fs.shell.CommandWithDestination.copyFileToTarget(CommandWithDestination.java:342)
>  at org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:277)
>  at org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:262)
>  at org.apache.hadoop.fs.shell.Command.processPathInternal(Command.java:367)
>  at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:331)
>  at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:352)
>  at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:441)
>  at org.apache.hadoop.fs.shell.CommandWithDestination.recursePath(CommandWithDestination.java:305)
>  at org.apache.hadoop.fs.shell.Command.processPathInternal(Command.java:369)
>  at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:331)
>  at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:304)
>  at org.apache.hadoop.fs.shell.CommandWithDestination.processPathArgument(CommandWithDestination.java:257)
>  at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:286)
>  at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:270)
>  at org.apache.hadoop.fs.shell.CommandWithDestination.processArguments(CommandWithDestination.java:228)
>  at org.apache.hadoop.fs.shell.CopyCommands$Put.processArguments(CopyCommands.java:295)
>  at org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:120)
>  at org.apache.hadoop.fs.shell.Command.run(Command.java:177)
>  at org.apache.hadoop.fs.FsShell.run(FsShell.java:327)
>  at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>  at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>  at org.apache.hadoop.fs.FsShell.main(FsShell.java:390){noformat}
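The URISyntaxException in the trace comes from java.net.URI rejecting an underscore in a hostname: "ozone_datanode_3.ozone_default" looks like a Docker Compose generated container name, and underscores are not legal in RFC 952/1123 hostnames. Below is a minimal, standalone sketch of that restriction (plain JDK only, no Ozone or Ratis involved; the class name is made up for illustration):

{noformat}
import java.net.URI;
import java.net.URISyntaxException;

public class UnderscoreHostnameRepro {
    public static void main(String[] args) {
        // Hostname copied from the log above; the underscore is the character
        // the parser rejects ("Illegal character in hostname at index 13").
        String host = "ozone_datanode_3.ozone_default";
        try {
            // The multi-argument URI constructor insists on a server-based
            // authority (a syntactically valid hostname), so it throws here.
            URI uri = new URI("https", null, host, 9858, null, null, null);
            System.out.println("accepted: " + uri);
        } catch (URISyntaxException e) {
            System.out.println(e.getMessage());
        }
    }
}
{noformat}

gRPC's proxy detection appears to build its lookup URI the same way (see ProxyDetectorImpl.detectProxy in the trace), which is why every connection attempt to a datanode whose hostname contains an underscore logs this warning and the copy never completes.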



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
