Posted to common-issues@hadoop.apache.org by "ASF GitHub Bot (Jira)" <ji...@apache.org> on 2022/04/21 00:33:00 UTC

[jira] [Work logged] (HADOOP-16768) SnappyCompressor test cases wrongly assume that the compressed data is always smaller than the input data

     [ https://issues.apache.org/jira/browse/HADOOP-16768?focusedWorklogId=759671&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-759671 ]

ASF GitHub Bot logged work on HADOOP-16768:
-------------------------------------------

                Author: ASF GitHub Bot
            Created on: 21/Apr/22 00:32
            Start Date: 21/Apr/22 00:32
    Worklog Time Spent: 10m 
      Work Description: xinglin opened a new pull request, #4208:
URL: https://github.com/apache/hadoop/pull/4208

   
   
   ### Description of PR
   
   This is a clean cherry-pick of commit 328eae9a146b2dd9857a17a0db6fcddb1de23a0d from trunk.
   
   However, we added the assertj dependency manually. It was originally introduced as part of a fairly large patch in HADOOP-16287 and subsequently modified in HADOOP-16253. Backporting those PRs would bring in too much unrelated change, so we chose to declare the dependency ourselves instead.
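   For reference, a manually added test-scoped assertj declaration would typically look like the fragment below. This is an illustrative sketch only: the version number and the exact pom.xml placement are assumptions, not taken from the actual backport, and should be checked against the branch's hadoop-project/pom.xml.

   ```xml
   <!-- Illustrative only: version and placement are assumptions; align the
        version with the one managed in hadoop-project/pom.xml on the branch. -->
   <dependency>
     <groupId>org.assertj</groupId>
     <artifactId>assertj-core</artifactId>
     <version>3.12.2</version>
     <scope>test</scope>
   </dependency>
   ```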
   
   
   
   ### How was this patch tested?
   
   mvn test -Dtest=hadoop.io.compress.TestCompressorDecompressor
   mvn test -Dtest=hadoop.io.compress.snappy.TestSnappyCompressorDecompressor
   
   
   




Issue Time Tracking
-------------------

    Worklog Id:     (was: 759671)
    Time Spent: 40m  (was: 0.5h)

> SnappyCompressor test cases wrongly assume that the compressed data is always smaller than the input data
> ---------------------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-16768
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16768
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: io, test
>         Environment: X86/Aarch64
> OS: Ubuntu 18.04, CentOS 8
> Snappy 1.1.7
>            Reporter: zhao bo
>            Assignee: Akira Ajisaka
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 3.3.1, 3.4.0, 3.2.3
>
>          Time Spent: 40m
>  Remaining Estimate: 0h
>
> * org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressor
>  * org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressorWithExeedBufferLimit
>  * org.apache.hadoop.io.compress.snappy.TestSnappyCompressorDecompressor.testSnappyCompressDecompressInMultiThreads
>  * org.apache.hadoop.io.compress.snappy.TestSnappyCompressorDecompressor.testSnappyCompressDecompress
> These tests fail on both x86 and ARM platforms.
> Traceback:
>  * org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressor
>  * org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressorWithExeedBufferLimit
> 12:00:33 [ERROR]   TestCompressorDecompressor.testCompressorDecompressorWithExeedBufferLimit:92  Expected to find 'testCompressorDecompressorWithExeedBufferLimit error !!!' but got unexpected exception: java.lang.NullPointerException
>         at com.google.common.base.Preconditions.checkNotNull(Preconditions.java:877)
>         at com.google.common.base.Joiner.toString(Joiner.java:452)                         
>         at com.google.common.base.Joiner.appendTo(Joiner.java:109)                                                
>         at com.google.common.base.Joiner.appendTo(Joiner.java:152)                                
>         at com.google.common.base.Joiner.join(Joiner.java:195)                            
>         at com.google.common.base.Joiner.join(Joiner.java:185)
>         at com.google.common.base.Joiner.join(Joiner.java:211)
>         at org.apache.hadoop.io.compress.CompressDecompressTester$CompressionTestStrategy$2.assertCompression(CompressDecompressTester.java:329)
>         at org.apache.hadoop.io.compress.CompressDecompressTester.test(CompressDecompressTester.java:135)
>         at org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressorWithExeedBufferLimit(TestCompressorDecompressor.java:89)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>         at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>         at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>         at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>         at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>         at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>         at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>         at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>         at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>         at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>         at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>         at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>         at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>         at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>         at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>         at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>         at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>         at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>         at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>         at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>         at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
>  
>  
>  * org.apache.hadoop.io.compress.snappy.TestSnappyCompressorDecompressor.testSnappyCompressDecompressInMultiThreads
>  * org.apache.hadoop.io.compress.snappy.TestSnappyCompressorDecompressor.testSnappyCompressDecompress
> [ERROR] testSnappyCompressDecompress(org.apache.hadoop.io.compress.snappy.TestSnappyCompressorDecompressor) Time elapsed: 0.003 s <<< ERROR!
> java.lang.InternalError: Could not decompress data. Input is invalid.
>  at org.apache.hadoop.io.compress.snappy.SnappyDecompressor.decompressBytesDirect(Native Method)
>  at org.apache.hadoop.io.compress.snappy.SnappyDecompressor.decompress(SnappyDecompressor.java:235)
>  at org.apache.hadoop.io.compress.snappy.TestSnappyCompressorDecompressor.testSnappyCompressDecompress(TestSnappyCompressorDecompressor.java:192)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>  at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>  at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>  at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>  at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>  at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>  at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>  at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>  at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>  at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>  at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>  at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>  at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>  at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>  at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>  at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>  at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>  at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>  at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>  at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
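The root cause described above is that the tests assume compression always shrinks the input, which no lossless compressor can guarantee: incompressible (e.g. random) data typically comes out slightly larger due to format overhead. A minimal stand-in sketch of this effect follows. It uses java.util.zip.Deflater rather than Hadoop's Snappy bindings (so it runs with no extra dependencies); the class and method names here are illustrative, not part of the Hadoop code base, but the principle applies to Snappy as well.

```java
import java.util.Random;
import java.util.zip.Deflater;

public class ExpansionDemo {
    // Deflate-compresses the input fully and returns the compressed byte count.
    static int compressedSize(byte[] input) {
        Deflater deflater = new Deflater();
        deflater.setInput(input);
        deflater.finish();
        // Size the output buffer larger than the input to leave room for expansion;
        // a buffer sized exactly input.length would truncate incompressible data.
        byte[] output = new byte[input.length * 2 + 64];
        int len = deflater.deflate(output);
        deflater.end();
        return len;
    }

    public static void main(String[] args) {
        byte[] random = new byte[4096];
        new Random(42).nextBytes(random);   // incompressible pseudo-random bytes
        byte[] zeros = new byte[4096];      // highly compressible all-zero bytes

        // For random input the "compressed" form exceeds the original size;
        // for the zero-filled input it shrinks dramatically.
        System.out.println("random: " + compressedSize(random) + " bytes from 4096");
        System.out.println("zeros:  " + compressedSize(zeros) + " bytes from 4096");
    }
}
```

This is why correct compressor tests must size output buffers for the worst case rather than for the input length.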



--
This message was sent by Atlassian Jira
(v8.20.7#820007)
