Posted to common-issues@hadoop.apache.org by "Yeliang Cang (JIRA)" <ji...@apache.org> on 2018/11/08 15:02:00 UTC

[jira] [Commented] (HADOOP-15913) xml parsing error in a heavily multi-threaded environment

    [ https://issues.apache.org/jira/browse/HADOOP-15913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16679875#comment-16679875 ] 

Yeliang Cang commented on HADOOP-15913:
---------------------------------------

We have already applied https://issues.apache.org/jira/browse/HADOOP-12404
and still see the error.
Based on the comments in https://github.com/mikiobraun/jblas/issues/103, the likely cause is misuse of a ZipFile object shared between multiple threads: one thread closes the ZipFile while another is still reading from it.
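
For illustration, here is a minimal pure-JDK sketch of that misuse (the jar name is only a placeholder; any jar on disk works): one thread inflates an entry from a shared ZipFile while another thread closes it, which ends the ZipFile's pooled Inflaters and can surface as exactly the NPE in the stack trace below.
{code}
// Minimal sketch of the suspected misuse: a ZipFile shared between a reader
// thread and a closer thread. Closing the ZipFile ends its pooled Inflaters,
// so the in-flight read can fail with
// java.lang.NullPointerException: Inflater has been closed
import java.io.InputStream;
import java.util.Enumeration;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;

public class SharedZipFileRace {
    public static void main(String[] args) throws Exception {
        // Placeholder jar name taken from the reproduction below.
        final ZipFile zip = new ZipFile("mykeytest-1.0-SNAPSHOT.jar");

        // Pick a compressed (DEFLATED) entry so reading goes through an Inflater.
        ZipEntry deflated = null;
        for (Enumeration<? extends ZipEntry> e = zip.entries(); e.hasMoreElements(); ) {
            ZipEntry candidate = e.nextElement();
            if (candidate.getMethod() == ZipEntry.DEFLATED) {
                deflated = candidate;
                break;
            }
        }
        final ZipEntry entry = deflated;

        Thread reader = new Thread(() -> {
            try (InputStream in = zip.getInputStream(entry)) {
                while (in.read() != -1) {
                    Thread.sleep(1); // widen the race window
                }
            } catch (Exception e) {
                e.printStackTrace(); // NPE: Inflater has been closed
            }
        });

        reader.start();
        Thread.sleep(5);
        zip.close(); // closes the ZipFile while the reader is still inflating
        reader.join();
    }
}
{code}
Anything that closes a JarFile out from under a reader (for example URLClassLoader.close() on a loader whose jar is still being read) can take the same path.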

> xml parsing error in a heavily multi-threaded environment
> ---------------------------------------------------------
>
>                 Key: HADOOP-15913
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15913
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: common
>    Affects Versions: 2.7.3
>            Reporter: Yeliang Cang
>            Priority: Critical
>
> We met this problem in a production environment; the stack trace looks like this:
> {code}ERROR org.apache.hadoop.hive.ql.exec.Task: Ended Job = job_1541600895081_0580 with exception 'java.lang.NullPointerException(Inflater has been closed)'
> java.lang.NullPointerException: Inflater has been closed
>         at java.util.zip.Inflater.ensureOpen(Inflater.java:389)
>         at java.util.zip.Inflater.inflate(Inflater.java:257)
>         at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:152)
>         at java.io.FilterInputStream.read(FilterInputStream.java:133)
>         at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:283)
>         at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:325)
>         at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:177)
>         at java.io.InputStreamReader.read(InputStreamReader.java:184)
>         at java.io.BufferedReader.fill(BufferedReader.java:154)
>         at java.io.BufferedReader.readLine(BufferedReader.java:317)
>         at java.io.BufferedReader.readLine(BufferedReader.java:382)
>         at javax.xml.parsers.FactoryFinder.findJarServiceProvider(FactoryFinder.java:319)
>         at javax.xml.parsers.FactoryFinder.find(FactoryFinder.java:255)
>         at javax.xml.parsers.DocumentBuilderFactory.newInstance(DocumentBuilderFactory.java:121)
>         at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2524)
>         at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2501)
>         at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2407)
>         at org.apache.hadoop.conf.Configuration.get(Configuration.java:983)
>         at org.apache.hadoop.mapred.JobConf.checkAndWarnDeprecation(JobConf.java:2007)
>         at org.apache.hadoop.mapred.JobConf.<init>(JobConf.java:479)
>         at org.apache.hadoop.mapred.JobConf.<init>(JobConf.java:469)
>         at org.apache.hadoop.mapreduce.Cluster.getJob(Cluster.java:188)
>         at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:601)
>         at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:599)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
>         at org.apache.hadoop.mapred.JobClient.getJobUsingCluster(JobClient.java:599)
>         at org.apache.hadoop.mapred.JobClient.getJobInner(JobClient.java:609)
>         at org.apache.hadoop.mapred.JobClient.getJob(JobClient.java:639)
>         at org.apache.hadoop.hive.ql.exec.mr.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:294)
>         at org.apache.hadoop.hive.ql.exec.mr.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:558)
>         at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:457)
>         at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:141)
>         at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:197){code}
> We can reproduce it in our test environment with the steps below:
> 1. Set these configs:
> {code}
> hive.server2.async.exec.threads  = 50
> hive.server2.async.exec.wait.queue.size = 100
> {code}
> 2. Open 4 beeline terminals on 4 different nodes.
> 3. Run 30 queries in each beeline terminal. Each query includes an "add jar xxx.jar" statement, like this:
> {code}
> add jar mykeytest-1.0-SNAPSHOT.jar;
> create temporary function ups as 'com.xxx.manager.GetCommentNameOrId';
> insert into test partition(tjrq = ${my_no}, ywtx = '${my_no2}' )
> select  dt.d_year as i_brand
>        ,item.i_brand_id as i_item_sk
>        ,ups(item.i_brand) as i_product_name
>        ,sum(ss_ext_sales_price) as i_category_id
>  from  date_dim dt
>       ,store_sales
>       ,item
>  where dt.d_date_sk = store_sales.ss_sold_date_sk
>    and store_sales.ss_item_sk = item.i_item_sk
>    and item.i_manufact_id = 436
>    and dt.d_moy=12
>  group by dt.d_year
>       ,item.i_brand
>       ,item.i_brand_id
>  order by dt.d_year
> {code}
> All these 120 queries connect to the same HiveServer2.
> Run all the queries concurrently, and you will see the stack trace above in the HiveServer2 log.
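
One way this reproduction could hit that race (an assumption on our side, not something the trace proves): Hive's "add jar" puts the jar on a per-session URLClassLoader, and the JarURLConnection cache can hand loaders over the same jar URL one shared JarFile, so closing one session's loader ends Inflaters another session is still using while parsing config XML. A minimal pure-JDK sketch of that sharing, with placeholder jar and resource names:
{code}
// Hedged sketch (hypothetical names): two class loaders over the same jar URL
// can share one JarFile through the JarURLConnection cache, so closing one
// loader can break a read in progress on the other.
import java.io.File;
import java.io.InputStream;
import java.net.URL;
import java.net.URLClassLoader;

public class SharedJarFileCacheRace {
    public static void main(String[] args) throws Exception {
        URL[] jar = { new File("mykeytest-1.0-SNAPSHOT.jar").toURI().toURL() };
        URLClassLoader sessionA = new URLClassLoader(jar);
        URLClassLoader sessionB = new URLClassLoader(jar);

        Thread reader = new Thread(() -> {
            // Stands in for FactoryFinder/Configuration reading XML from the jar;
            // the resource name is only illustrative.
            try (InputStream in = sessionA.getResourceAsStream("core-site.xml")) {
                if (in != null) {
                    while (in.read() != -1) { /* inflating */ }
                }
            } catch (Exception e) {
                e.printStackTrace(); // can surface as "Inflater has been closed"
            }
        });

        reader.start();
        sessionB.close(); // may close the cached JarFile that sessionA is reading
        reader.join();
        sessionA.close();
    }
}
{code}
If that is the path, it would also be consistent with HADOOP-12404 not being enough on its own: disabling URLConnection caching inside Configuration covers only one of the places that can end up sharing a JarFile.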


