Posted to issues@hbase.apache.org by GitBox <gi...@apache.org> on 2020/01/11 00:39:56 UTC

[GitHub] [hbase] joshelser commented on issue #1019: HBASE-23679 Use new FileSystem objects during bulk loads

URL: https://github.com/apache/hbase/pull/1019#issuecomment-573258755
 
 
   ```
   ctr-e141-1563959304486-133915-01-000004: 2020-01-11 00:15:00,797 WARN  [RpcServer.default.FPBQ.Fifo.handler=99,queue=9,port=16020] fs.FileSystem: Caching new filesystem: -279427062
   ctr-e141-1563959304486-133915-01-000004: java.lang.Exception
   ctr-e141-1563959304486-133915-01-000004: 	at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3365)
   ctr-e141-1563959304486-133915-01-000004: 	at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3320)
   ctr-e141-1563959304486-133915-01-000004: 	at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:479)
   ctr-e141-1563959304486-133915-01-000004: 	at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
   ctr-e141-1563959304486-133915-01-000004: 	at org.apache.hadoop.hbase.regionserver.HStore.assertBulkLoadHFileOk(HStore.java:761)
   ctr-e141-1563959304486-133915-01-000004: 	at org.apache.hadoop.hbase.regionserver.HRegion.bulkLoadHFiles(HRegion.java:5958)
   ctr-e141-1563959304486-133915-01-000004: 	at org.apache.hadoop.hbase.regionserver.SecureBulkLoadManager$1.run(SecureBulkLoadManager.java:264)
   ctr-e141-1563959304486-133915-01-000004: 	at org.apache.hadoop.hbase.regionserver.SecureBulkLoadManager$1.run(SecureBulkLoadManager.java:233)
   ctr-e141-1563959304486-133915-01-000004: 	at java.security.AccessController.doPrivileged(Native Method)
   ctr-e141-1563959304486-133915-01-000004: 	at javax.security.auth.Subject.doAs(Subject.java:360)
   ctr-e141-1563959304486-133915-01-000004: 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1710)
   ctr-e141-1563959304486-133915-01-000004: 	at org.apache.hadoop.hbase.regionserver.SecureBulkLoadManager.secureBulkLoadHFiles(SecureBulkLoadManager.java:233)
   ctr-e141-1563959304486-133915-01-000004: 	at org.apache.hadoop.hbase.regionserver.RSRpcServices.bulkLoadHFile(RSRpcServices.java:2338)
   ctr-e141-1563959304486-133915-01-000004: 	at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42004)
   ctr-e141-1563959304486-133915-01-000004: 	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
   ctr-e141-1563959304486-133915-01-000004: 	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:131)
   ctr-e141-1563959304486-133915-01-000004: 	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
   ctr-e141-1563959304486-133915-01-000004: 	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
   ```
   
   Looks like this isn't quite sufficient: another FileSystem cache leak (albeit a much slower one) is coming from here. We need to do more to push down the DFS instance we created and keep using it until we move the files into their final location.
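   For anyone following along, here is a rough sketch of why this leaks. This is hypothetical illustration code, not Hadoop source: the class names `Ugi`, `Key`, and `FsCacheSketch` are stand-ins. Hadoop's `FileSystem.CACHE` is keyed on (scheme, authority, UGI), and UGIs compare by subject identity, so every bulk-load RPC that runs `doAs` with a freshly minted proxy UGI and then calls `Path.getFileSystem(conf)` adds a new cache entry that is never evicted:
   
   ```java
   import java.util.HashMap;
   import java.util.Map;
   import java.util.Objects;
   
   // Hypothetical model of FileSystem.Cache, NOT Hadoop code. It shows how a
   // cache keyed on (scheme, authority, ugi-identity) grows without bound when
   // each request supplies a brand-new UGI object.
   public class FsCacheSketch {
       // Stand-in for UserGroupInformation: two logins for the same user are
       // distinct objects, hence distinct cache keys.
       static final class Ugi {
           final String user;
           Ugi(String user) { this.user = user; }
       }
   
       // Stand-in for FileSystem.Cache.Key: equality uses UGI object identity.
       static final class Key {
           final String scheme, authority;
           final Ugi ugi;
           Key(String s, String a, Ugi u) { scheme = s; authority = a; ugi = u; }
           @Override public boolean equals(Object o) {
               if (!(o instanceof Key)) return false;
               Key k = (Key) o;
               return scheme.equals(k.scheme)
                   && authority.equals(k.authority)
                   && ugi == k.ugi;  // identity, not user name
           }
           @Override public int hashCode() {
               return Objects.hash(scheme, authority, System.identityHashCode(ugi));
           }
       }
   
       static final Map<Key, Object> CACHE = new HashMap<>();
   
       // Models FileSystem.get(): return the cached instance or cache a new one.
       static Object get(String scheme, String authority, Ugi ugi) {
           return CACHE.computeIfAbsent(new Key(scheme, authority, ugi), k -> new Object());
       }
   }
   ```
   
   Reusing one UGI (or, equivalently, pushing a single already-created FileSystem instance down through the bulk-load path) keeps the cache at one entry; creating a fresh UGI per RPC and resolving the filesystem from the Path leaks one entry per call.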
   
   I added some debug logging to FileSystem.java to surface the stack trace above. Testing was done via IntegrationTestBulkLoad with a high number of loops but a small chain length.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services