Posted to dev@sqoop.apache.org by "Toan Nguyen (Jira)" <ji...@apache.org> on 2020/02/04 04:18:00 UTC
[jira] [Updated] (SQOOP-3463) Sqoop import from MySQL to Hive loses records when the mapper count is increased
[ https://issues.apache.org/jira/browse/SQOOP-3463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Toan Nguyen updated SQOOP-3463:
-------------------------------
Description:
Here is my script:
sqoop-import --connect $DATASOURCE --username $USERNAME --password $PASSWORD --driver com.mysql.jdbc.Driver \
  --query "SELECT * FROM sales_order_item WHERE item_id > 0 AND \$CONDITIONS LIMIT $ITEM_ID, 10000" \
  --target-dir /user/raw/magento/$MERCHANT_ID/sales_order_item -m 8 \
  --split-by item_id \
  --fields-terminated-by "|" \
  --merge-key item_id \
  --hive-import \
  --hive-table raw_magento_$MERCHANT_ID.sales_order_item \
  --verbose \
  --direct
After the process completed, I got only 9992 records. The same thing happens with n mappers: when I increase n, n records are lost.
Everything is fine with only 1 mapper, but to import a large number of records I need more mappers. What should I do? Please advise. Thanks in advance.
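A plausible mechanism for the loss (a hedged sketch, not a confirmed diagnosis): with a free-form --query import, Sqoop substitutes each mapper's item_id range for \$CONDITIONS and runs the resulting query per mapper, so the "LIMIT $ITEM_ID, 10000" offset is applied once per split rather than once overall, skipping rows in every split. The simulation below illustrates this; the table size (10001 rows) and offset (1) are illustrative assumptions, not values taken from the report.

```python
# Simulate splitting a query that carries "LIMIT offset, count"
# across several mappers, the way a parallel import would.

def run_query(rows, offset, limit):
    """Apply a MySQL-style LIMIT offset, count to an ordered row set."""
    return rows[offset:offset + limit]

def split_ranges(lo, hi, num_splits):
    """Rough integer splitter: contiguous [lo, hi) ranges over the key."""
    step = (hi - lo) // num_splits
    bounds = [lo + i * step for i in range(num_splits)] + [hi]
    return list(zip(bounds[:-1], bounds[1:]))

rows = list(range(1, 10002))          # item_id 1..10001 (assumed table)
offset, limit, mappers = 1, 10000, 8  # offset=1 is an assumed $ITEM_ID

# Single mapper: the LIMIT offset is applied exactly once.
single = run_query(rows, offset, limit)

# Multiple mappers: each mapper applies the same LIMIT to its own
# split, so the offset skips rows once per mapper.
multi = []
for lo, hi in split_ranges(rows[0], rows[-1] + 1, mappers):
    split_rows = [r for r in rows if lo <= r < hi]
    multi.extend(run_query(split_rows, offset, limit))

print(len(single), len(multi))  # multi-mapper run returns fewer rows
```

If this is what is happening, the per-mapper LIMIT clause, not the splitter itself, is the likely culprit, which would explain why a single mapper imports the expected count.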
> Sqoop import from MySQL to Hive loses records when the mapper count is increased
> ---------------------------------------------------------------------------------
>
> Key: SQOOP-3463
> URL: https://issues.apache.org/jira/browse/SQOOP-3463
> Project: Sqoop
> Issue Type: Bug
> Reporter: Toan Nguyen
> Priority: Major
>
--
This message was sent by Atlassian Jira
(v8.3.4#803005)