Posted to mapreduce-dev@hadoop.apache.org by "Aaron Kimball (JIRA)" <ji...@apache.org> on 2009/06/30 23:16:47 UTC

[jira] Created: (MAPREDUCE-685) Sqoop will fail with OutOfMemory on large tables using MySQL

Sqoop will fail with OutOfMemory on large tables using MySQL
------------------------------------------------------------

                 Key: MAPREDUCE-685
                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-685
             Project: Hadoop Map/Reduce
          Issue Type: Bug
          Components: contrib/sqoop
            Reporter: Aaron Kimball
            Assignee: Aaron Kimball
         Attachments: MAPREDUCE-685.patch

By default, the MySQL JDBC driver (Connector/J) buffers the entire ResultSet in client memory before returning control to the caller. On large SELECTs this can exhaust the client heap and throw java.lang.OutOfMemoryError, even when the client intends to close the ResultSet after reading only a few rows. The MySQL ConnManager should configure its connection to stream results to the client one row at a time.
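
For reference, Connector/J's documented recipe for row-at-a-time delivery is a forward-only, read-only statement with a fetch size of Integer.MIN_VALUE. The sketch below illustrates that configuration; the connection URL, credentials, and table name are hypothetical, and the attached patch may wire this into the ConnManager differently.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class MySqlStreamingExample {
      public static void main(String[] args) throws Exception {
        // Hypothetical URL and credentials, for illustration only.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost/testdb", "user", "password");
             // Connector/J streams rows one at a time only when the
             // statement is forward-only and read-only and the fetch size
             // is Integer.MIN_VALUE; any other combination buffers the
             // full result set in the client.
             Statement stmt = conn.createStatement(
                 ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY)) {
          stmt.setFetchSize(Integer.MIN_VALUE);
          // "big_table" is a hypothetical table name.
          try (ResultSet rs = stmt.executeQuery("SELECT * FROM big_table")) {
            while (rs.next()) {
              // Each call to next() pulls one row from the server; the
              // driver never materializes the whole result set in memory.
            }
          }
        }
      }
    }

One documented caveat of this mode: until the streaming ResultSet is fully read or closed, no other statements can be issued on the same connection.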

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.