Posted to common-dev@hadoop.apache.org by "Aaron Kimball (JIRA)" <ji...@apache.org> on 2009/06/04 03:04:07 UTC
[jira] Updated: (HADOOP-5967) Sqoop should only use a single map task
[ https://issues.apache.org/jira/browse/HADOOP-5967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Aaron Kimball updated HADOOP-5967:
----------------------------------
Attachment: single-mapper.patch
This patch implements the change as a one-liner. No new tests are included because the change is trivial. I've verified that it passes the existing unit tests, and also that it does indeed use a single mapper on a cluster.
> Sqoop should only use a single map task
> ---------------------------------------
>
> Key: HADOOP-5967
> URL: https://issues.apache.org/jira/browse/HADOOP-5967
> Project: Hadoop Core
> Issue Type: Improvement
> Reporter: Aaron Kimball
> Assignee: Aaron Kimball
> Priority: Minor
> Attachments: single-mapper.patch
>
>
> The current DBInputFormat implementation uses SELECT ... LIMIT ... OFFSET statements to read from a database table. This results in several queries accessing the same table at the same time. Most database implementations will use a full table scan for each such query, starting at row 1 and scanning forward until the OFFSET is reached before emitting data to the client. The upshot is that we see O(n^2) total work in the number of rows in the table when using a large number of mappers, whereas a single mapper would read through the table in O(n) time.
> This patch sets the number of map tasks to 1 in the MapReduce job sqoop launches.
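The LIMIT/OFFSET cost described above can be sketched numerically. This is an illustrative model, not code from Sqoop or the patch: it assumes each of the m split queries forces the database to walk OFFSET rows before emitting LIMIT rows, so a query for split i scans roughly i*(n/m) + n/m rows.

```java
// Illustrative cost model (not Sqoop code): rows scanned by m concurrent
// LIMIT/OFFSET queries over an n-row table, assuming each query triggers
// a scan of OFFSET + LIMIT rows.
public class LimitOffsetCost {

    // Rows one query touches: the engine walks past OFFSET rows,
    // then emits LIMIT rows to the client.
    static long rowsScanned(long offset, long limit) {
        return offset + limit;
    }

    // Total rows scanned across m equal splits of an n-row table.
    static long totalScanned(long n, int m) {
        long limit = n / m;
        long total = 0;
        for (int i = 0; i < m; i++) {
            total += rowsScanned((long) i * limit, limit);
        }
        return total;
    }

    public static void main(String[] args) {
        long n = 1_000_000;
        // One mapper reads the table once: n rows.
        System.out.println("1 mapper:    " + totalScanned(n, 1));
        // 100 mappers: sum of (i+1)*(n/100) = n*(101)/2 rows, ~50x the work.
        System.out.println("100 mappers: " + totalScanned(n, 100));
    }
}
```

Under this model the total scanned is n*(m+1)/2 rows, so if the mapper count grows with the table size, the work is quadratic, matching the O(n^2) behavior described above.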
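The attached single-mapper.patch is not inlined in this message. As a hedged sketch only, assuming the old org.apache.hadoop.mapred API that Sqoop used at the time, a one-liner forcing a single map task would plausibly look like:

```java
// Hypothetical sketch, not the attached patch: with the old mapred API,
// the number of map tasks is a JobConf setting the InputFormat consults
// when computing splits.
JobConf job = new JobConf();
job.setNumMapTasks(1); // one split => one mapper => one O(n) pass over the table
```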
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.