Posted to dev@hive.apache.org by "Show You (JIRA)" <ji...@apache.org> on 2012/08/06 03:09:02 UTC

[jira] [Created] (HIVE-3335) Thousands of CLOSE_WAIT sockets when using SymbolicInputFormat

Show You created HIVE-3335:
------------------------------

             Summary: Thousands of CLOSE_WAIT sockets when using SymbolicInputFormat
                 Key: HIVE-3335
                 URL: https://issues.apache.org/jira/browse/HIVE-3335
             Project: Hive
          Issue Type: Bug
          Components: Clients
    Affects Versions: 0.8.1
         Environment:  CentOS 5.8 x64
 CDH3u4
   hadoop-0.20-0.20.2+923.256-1
   hadoop-0.20-{namenode,secondarynamenode,jobtracker,tasktracker,datanode}-0.20.2+923.256-1
   hadoop-0.20-conf-pseudo-0.20.2+923.256-1 (the same error also occurred in a non-pseudo environment)
 Apache Hive 0.8.1 (the same error also occurred on Hive 0.9)
            Reporter: Show You


Procedure for reproduction:
 1. Set up Hadoop.
 2. Prepare a data file and link.txt:
    data:
      $ hadoop fs -cat /path/to/data/2012-07-01/20120701.csv
      1, 20120701 00:00:00
      2, 20120701 00:00:01
      3, 20120701 01:12:45
    link.txt
      $ cat link.txt
       /path/to/data/2012-07-01//*

 3. In Hive, create a table like the one below:
   CREATE TABLE user_logs(id INT, created_at STRING)
   row format delimited fields terminated by ',' lines terminated by '\n'
   stored as inputformat 'org.apache.hadoop.hive.ql.io.SymlinkTextInputFormat'
   outputformat 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat';

 4. Put link.txt into /user/hive/warehouse/user_logs:
   $ sudo -u hdfs hadoop fs -put link.txt  /user/hive/warehouse/user_logs

 5. Open another session (session A) and check for CLOSE_WAIT sockets:
   $ netstat -a | grep CLOSE_WAIT
    tcp        1      0 localhost:48121    localhost:50010    CLOSE_WAIT
    tcp        1      0 localhost:48124    localhost:50010    CLOSE_WAIT
   $

 6. Return to the Hive session and execute:
   $ select * from user_logs;

 7. Return to session A and check the sockets again; a new unclosed connection has appeared:
   $ netstat -a | grep CLOSE_WAIT
   tcp        1      0 localhost:48121    localhost:50010    CLOSE_WAIT
   tcp        1      0 localhost:48124    localhost:50010    CLOSE_WAIT
   tcp        1      0 localhost:48166    localhost:50010    CLOSE_WAIT

 If the table has partitions, each query leaves one unclosed socket per partition.


I think this problem is caused by the following point:
  In https://github.com/apache/hive/blob/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/SymbolicInputFormat.java,
  at line 66, a BufferedReader is opened but never closed.
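A minimal sketch of the kind of fix this suggests (the names fileSystem, symlinkPath and targetPaths are only illustrative, not the exact identifiers used in SymbolicInputFormat): close the reader in a finally block so the DataNode connection is released even if reading fails.

    // Illustrative sketch: read the target paths from the symlink file and
    // always close the reader, releasing the underlying DFS socket.
    BufferedReader reader = null;
    try {
      reader = new BufferedReader(
          new InputStreamReader(fileSystem.open(symlinkPath)));
      String line;
      while ((line = reader.readLine()) != null) {
        targetPaths.add(new Path(line));   // hypothetical collection of resolved targets
      }
    } finally {
      if (reader != null) {
        // the missing call; without it each symlink file leaks one CLOSE_WAIT socket
        reader.close();
      }
    }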

[jira] [Commented] (HIVE-3335) Thousands of CLOSE_WAIT sockets when using SymbolicInputFormat

Posted by "Show You (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HIVE-3335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13480933#comment-13480933 ] 

Show You commented on HIVE-3335:
--------------------------------

Sorry for the late reply. I have attached a patch for this issue.

[jira] [Commented] (HIVE-3335) Thousands of CLOSE_WAIT sockets when using SymbolicInputFormat

Posted by "Ashutosh Chauhan (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HIVE-3335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13428987#comment-13428987 ] 

Ashutosh Chauhan commented on HIVE-3335:
----------------------------------------

[~showyou] Your analysis seems correct. Mind submitting a patch for it?
                

[jira] [Commented] (HIVE-3335) Thousands of CLOSE_WAIT sockets when using SymbolicInputFormat

Posted by "Ashutosh Chauhan (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HIVE-3335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13481011#comment-13481011 ] 

Ashutosh Chauhan commented on HIVE-3335:
----------------------------------------

A couple of comments:
* In the finally block, instead of {{reader.close();}} it is better to use {{org.apache.hadoop.io.IOUtils.closeStream(reader);}}, since reader could be null and close() can itself throw an IOException; IOUtils handles both cases.
* The same problem exists in the unit test code for this class, where reader.close() is never invoked, resulting in a socket leak. Can you add {{reader.close()}} to both tests, testAccuracy1() and testAccuracy2()? I don't think we need a full try/finally block in the test cases, since there we want the stack to start unwinding as soon as an exception occurs.
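Roughly, the finally block could then look like this (a sketch only, with illustrative names fs and symlinkPath; not the exact code in the patch):

    BufferedReader reader = null;
    try {
      reader = new BufferedReader(new InputStreamReader(fs.open(symlinkPath)));
      // ... read and resolve the target paths ...
    } finally {
      // closeStream() is null-safe and swallows the IOException that close()
      // itself may throw, so cleanup cannot mask the original error.
      org.apache.hadoop.io.IOUtils.closeStream(reader);
    }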
                

[jira] [Updated] (HIVE-3335) Thousands of CLOSE_WAIT sockets when using SymbolicInputFormat

Posted by "Show You (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HIVE-3335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Show You updated HIVE-3335:
---------------------------

    Attachment: HIVE-3335.patch

A patch for this issue.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira