Posted to user@sqoop.apache.org by chelikani narasimharao <na...@gmail.com> on 2012/01/20 16:28:47 UTC

need help on import tool

Hi,
      I am using Sqoop to extract data from an HP Neoview database and
write it to the Hadoop file system. When I run the following command I get
the error below; the full error output I received is included after the
command.

sqoop import --connect jdbc:hpt4jdbc://
g4n0601a.houston.hp.com:18650/chema=EDW_INT --driver
com.hp.t4jdbc.HPT4Driver --username boyapatr_write -P --verbose --table
EDW_INT.OPPTY_ALNC_PTNR_F  --split-by SRC_SYS_KY --target-dir
/home/narasimharao/opptyfact

DEBUG manager.SqlManager: Using fetchSize for next query: 1000
12/01/20 19:32:31 INFO manager.SqlManager: Executing SQL statement: SELECT
t.* FROM EDW_INT.OPPTY_ALNC_PTNR_F AS t WHERE 1=0
12/01/20 19:32:40 DEBUG manager.SqlManager: Using fetchSize for next query:
1000
12/01/20 19:32:40 INFO manager.SqlManager: Executing SQL statement: SELECT
t.* FROM EDW_INT.OPPTY_ALNC_PTNR_F AS t WHERE 1=0
12/01/20 19:32:47 DEBUG orm.ClassWriter: selected columns:
12/01/20 19:32:47 DEBUG orm.ClassWriter:   OPPTY_ALNC_PTNR_ID
12/01/20 19:32:47 DEBUG orm.ClassWriter:   SRC_SYS_KY
12/01/20 19:32:47 DEBUG orm.ClassWriter:   PTNR_SRC_SYS_KY
12/01/20 19:32:47 DEBUG orm.ClassWriter:   OPPTY_ID
12/01/20 19:32:47 DEBUG orm.ClassWriter:   PTNR_ID
12/01/20 19:32:47 DEBUG orm.ClassWriter:   PTNR_ROLE_CD
12/01/20 19:32:47 DEBUG orm.ClassWriter:   PRIM_PTNR_FG
12/01/20 19:32:47 DEBUG orm.ClassWriter:   INS_GMT_TS
12/01/20 19:32:47 DEBUG orm.ClassWriter:   UPD_GMT_TS
12/01/20 19:32:47 DEBUG orm.ClassWriter:   LOAD_JOB_NR
12/01/20 19:32:47 DEBUG orm.ClassWriter:   REC_ST_NR
12/01/20 19:32:47 DEBUG orm.ClassWriter:   HPQ_RATING_CD
12/01/20 19:32:47 DEBUG orm.ClassWriter:   SRC_SYS_UPD_TS
12/01/20 19:32:47 DEBUG orm.ClassWriter: Writing source file:
/tmp/sqoop-narasimharao/compile/eb164bb8e2c0031416457bda6b86dd19/EDW_INT_OPPTY_ALNC_PTNR_F.java
12/01/20 19:32:47 DEBUG orm.ClassWriter: Table name:
EDW_INT.OPPTY_ALNC_PTNR_F
12/01/20 19:32:47 DEBUG orm.ClassWriter: Columns: OPPTY_ALNC_PTNR_ID:1,
SRC_SYS_KY:-5, PTNR_SRC_SYS_KY:-5, OPPTY_ID:1, PTNR_ID:1, PTNR_ROLE_CD:1,
PRIM_PTNR_FG:1, INS_GMT_TS:93, UPD_GMT_TS:93, LOAD_JOB_NR:2, REC_ST_NR:5,
HPQ_RATING_CD:1, SRC_SYS_UPD_TS:93,
12/01/20 19:32:47 DEBUG orm.ClassWriter: sourceFilename is
EDW_INT_OPPTY_ALNC_PTNR_F.java
12/01/20 19:32:47 DEBUG orm.CompilationManager: Found existing
/tmp/sqoop-narasimharao/compile/eb164bb8e2c0031416457bda6b86dd19/
12/01/20 19:32:47 INFO orm.CompilationManager: HADOOP_HOME is
/usr/lib/hadoop
12/01/20 19:32:47 INFO orm.CompilationManager: Found hadoop core jar at:
/usr/lib/hadoop/hadoop-core.jar
12/01/20 19:32:48 DEBUG orm.CompilationManager: Adding source file:
/tmp/sqoop-narasimharao/compile/eb164bb8e2c0031416457bda6b86dd19/EDW_INT_OPPTY_ALNC_PTNR_F.java
12/01/20 19:32:48 DEBUG orm.CompilationManager: Invoking javac with args:
12/01/20 19:32:48 DEBUG orm.CompilationManager:   -sourcepath
12/01/20 19:32:48 DEBUG orm.CompilationManager:
/tmp/sqoop-narasimharao/compile/eb164bb8e2c0031416457bda6b86dd19/
12/01/20 19:32:48 DEBUG orm.CompilationManager:   -d
12/01/20 19:32:48 DEBUG orm.CompilationManager:
/tmp/sqoop-narasimharao/compile/eb164bb8e2c0031416457bda6b86dd19/
12/01/20 19:32:48 DEBUG orm.CompilationManager:   -classpath
narasimharao/compile/eb164bb8e2c0031416457bda6b86dd19/EDW_INT.OPPTY_ALNC_PTNR_F.jar
12/01/20 19:32:52 DEBUG orm.CompilationManager: Scanning for .class files
in directory:
/tmp/sqoop-narasimharao/compile/eb164bb8e2c0031416457bda6b86dd19
12/01/20 19:32:52 DEBUG orm.CompilationManager: Got classfile:
/tmp/sqoop-narasimharao/compile/eb164bb8e2c0031416457bda6b86dd19/EDW_INT_OPPTY_ALNC_PTNR_F.class
-> EDW_INT_OPPTY_ALNC_PTNR_F.class
12/01/20 19:32:52 DEBUG orm.CompilationManager: Finished writing jar file
/tmp/sqoop-narasimharao/compile/eb164bb8e2c0031416457bda6b86dd19/EDW_INT.OPPTY_ALNC_PTNR_F.jar
12/01/20 19:32:52 INFO mapreduce.ImportJobBase: Beginning import of
EDW_INT.OPPTY_ALNC_PTNR_F
12/01/20 19:32:55 DEBUG manager.SqlManager: Using fetchSize for next query:
1000
12/01/20 19:32:55 INFO manager.SqlManager: Executing SQL statement: SELECT
t.* FROM EDW_INT.OPPTY_ALNC_PTNR_F AS t WHERE 1=0
12/01/20 19:33:05 DEBUG mapreduce.DataDrivenImportJob: Using table class:
EDW_INT_OPPTY_ALNC_PTNR_F
12/01/20 19:33:05 DEBUG mapreduce.DataDrivenImportJob: Using InputFormat:
class com.cloudera.sqoop.mapreduce.db.DataDrivenDBInputFormat
12/01/20 19:33:06 DEBUG mapreduce.JobBase: Adding to job classpath:
file:/usr/lib/sqoop/sqoop-1.3.0-cdh3u2.jar
12/01/20 19:33:06 DEBUG mapreduce.JobBase: Adding to job classpath:
file:/usr/lib/sqoop/lib/hpt4jdbc.jar
12/01/20 19:33:06 DEBUG mapreduce.JobBase: Adding to job classpath:
file:/usr/lib/sqoop/sqoop-1.3.0-cdh3u2.jar
12/01/20 19:33:06 DEBUG mapreduce.JobBase: Adding to job classpath:
file:/usr/lib/sqoop/sqoop-1.3.0-cdh3u2.jar
12/01/20 19:33:06 DEBUG mapreduce.JobBase: Adding to job classpath:
file:/usr/lib/sqoop/lib/hadoop-mrunit-0.20.2-CDH3b2-SNAPSHOT.jar
12/01/20 19:33:06 DEBUG mapreduce.JobBase: Adding to job classpath:
file:/usr/lib/sqoop/lib/hpt4jdbc.jar
12/01/20 19:33:06 DEBUG mapreduce.JobBase: Adding to job classpath:
file:/usr/lib/sqoop/lib/jackson-mapper-asl-1.7.3.jar
12/01/20 19:33:06 DEBUG mapreduce.JobBase: Adding to job classpath:
file:/usr/lib/sqoop/lib/paranamer-2.3.jar
12/01/20 19:33:06 DEBUG mapreduce.JobBase: Adding to job classpath:
file:/usr/lib/sqoop/lib/jackson-core-asl-1.7.3.jar
12/01/20 19:33:06 DEBUG mapreduce.JobBase: Adding to job classpath:
file:/usr/lib/sqoop/lib/avro-mapred-1.5.4.jar
12/01/20 19:33:06 DEBUG mapreduce.JobBase: Adding to job classpath:
file:/usr/lib/sqoop/lib/ant-contrib-1.0b3.jar
12/01/20 19:33:06 DEBUG mapreduce.JobBase: Adding to job classpath:
file:/usr/lib/sqoop/lib/commons-io-1.4.jar
12/01/20 19:33:06 DEBUG mapreduce.JobBase: Adding to job classpath:
file:/usr/lib/sqoop/lib/jopt-simple-3.2.jar
12/01/20 19:33:06 DEBUG mapreduce.JobBase: Adding to job classpath:
file:/usr/lib/sqoop/lib/avro-ipc-1.5.4.jar
12/01/20 19:33:06 DEBUG mapreduce.JobBase: Adding to job classpath:
file:/usr/lib/sqoop/lib/snappy-java-1.0.3.2.jar
12/01/20 19:33:06 DEBUG mapreduce.JobBase: Adding to job classpath:
file:/usr/lib/sqoop/lib/ant-eclipse-1.0-jvm1.2.jar
12/01/20 19:33:06 DEBUG mapreduce.JobBase: Adding to job classpath:
file:/usr/lib/sqoop/lib/avro-1.5.4.jar
12/01/20 19:33:28 INFO db.DataDrivenDBInputFormat: BoundingValsQuery:
SELECT MIN(SRC_SYS_KY), MAX(SRC_SYS_KY) FROM EDW_INT.OPPTY_ALNC_PTNR_F
12/01/20 19:33:29 DEBUG db.IntegerSplitter: Splits: [
  113 to                          126] into 4 parts
12/01/20 19:33:29 DEBUG db.IntegerSplitter:                          113
12/01/20 19:33:29 DEBUG db.IntegerSplitter:                          117
12/01/20 19:33:29 DEBUG db.IntegerSplitter:                          120
12/01/20 19:33:29 DEBUG db.IntegerSplitter:                          123
12/01/20 19:33:29 DEBUG db.IntegerSplitter:                          126
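
As a side note, the boundary values above (113, 117, 120, 123, 126) follow simple integer arithmetic. The sketch below is purely illustrative, not Sqoop's actual IntegerSplitter source: it reproduces those boundaries by spreading the remainder of (max - min) / numSplits over the first splits.

```python
def split_points(min_val, max_val, num_splits):
    # Illustrative only (not Sqoop's IntegerSplitter code): divide the span
    # into num_splits ranges, giving the first `rem` ranges one extra unit.
    span = max_val - min_val                  # 126 - 113 = 13
    base, rem = divmod(span, num_splits)      # base = 3, rem = 1
    points = [min_val]
    cur = min_val
    for i in range(num_splits):
        cur += base + (1 if i < rem else 0)   # widths: 4, 3, 3, 3
        points.append(cur)
    return points

print(split_points(113, 126, 4))  # [113, 117, 120, 123, 126]
```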
12/01/20 19:33:35 INFO mapred.JobClient: Running job: job_201201201838_0001
12/01/20 19:33:36 INFO mapred.JobClient:  map 0% reduce 0%
12/01/20 19:34:18 INFO mapred.JobClient: Task Id :
attempt_201201201838_0001_m_000000_0, Status : FAILED
java.io.IOException: SQLException in nextKeyValue
at
com.cloudera.sqoop.mapreduce.db.DBRecordReader.nextKeyValue(DBRecordReader.java:251)
at
org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:456)
at org.apache.hadoop.mapreduce.MapContext.nextKeyValue(MapContext.java:67)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143)
at
com.cloudera.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:189)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:647)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:323)
at org.apache.hadoop.mapred.Child$4.run(Child.java:270)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
at org.apache.hadoop.mapred.Child.main(Child.java:264)
Caused by: com.hp.t4jdbc.HPT4Exception: *** ERROR[15001] A syntax error
occurred at or before:
SELECT OPPTY_ALNC_PTNR_ID, SR
attempt_201201201838_0001_m_000000_0: log4j:WARN No appenders could be
found for logger (org.apache.hadoop.hdfs.DFSClient).
attempt_201201201838_0001_m_000000_0: log4j:WARN Please initialize the
log4j system properly.
12/01/20 19:34:28 INFO mapred.JobClient: Task Id :
attempt_201201201838_0001_m_000001_0, Status : FAILED
java.io.IOException: SQLException in nextKeyValue
at
com.cloudera.sqoop.mapreduce.db.DBRecordReader.nextKeyValue(DBRecordReader.java:251)
at
org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:456)
at org.apache.hadoop.mapreduce.MapContext.nextKeyValue(MapContext.java:67)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143)
at
com.cloudera.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:189)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:647)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:323)
at org.apache.hadoop.mapred.Child$4.run(Child.java:270)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
at org.apache.hadoop.mapred.Child.main(Child.java:264)
Caused by: com.hp.t4jdbc.HPT4Exception: *** ERROR[15001] A syntax error
occurred at or before:
SELECT OPPTY_ALNC_PTNR_ID, SR
attempt_201201201838_0001_m_000001_0: log4j:WARN No appenders could be
found for logger (org.apache.hadoop.hdfs.DFSClient).
attempt_201201201838_0001_m_000001_0: log4j:WARN Please initialize the
log4j system properly.
12/01/20 19:35:13 INFO mapred.JobClient: Task Id :
attempt_201201201838_0001_m_000000_1, Status : FAILED
java.io.IOException: SQLException in nextKeyValue
at
com.cloudera.sqoop.mapreduce.db.DBRecordReader.nextKeyValue(DBRecordReader.java:251)
at
org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:456)
at org.apache.hadoop.mapreduce.MapContext.nextKeyValue(MapContext.java:67)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143)
at
com.cloudera.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:189)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:647)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:323)
at org.apache.hadoop.mapred.Child$4.run(Child.java:270)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
at org.apache.hadoop.mapred.Child.main(Child.java:264)
Caused by: com.hp.t4jdbc.HPT4Exception: *** ERROR[15001] A syntax error
occurred at or before:
SELECT OPPTY_ALNC_PTNR_ID, SR
attempt_201201201838_0001_m_000000_1: log4j:WARN No appenders could be
found for logger (org.apache.hadoop.hdfs.DFSClient).
attempt_201201201838_0001_m_000000_1: log4j:WARN Please initialize the
log4j system properly.
12/01/20 19:35:32 INFO mapred.JobClient: Task Id :
attempt_201201201838_0001_m_000000_2, Status : FAILED
java.io.IOException: SQLException in nextKeyValue
at
com.cloudera.sqoop.mapreduce.db.DBRecordReader.nextKeyValue(DBRecordReader.java:251)
at
org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:456)
at org.apache.hadoop.mapreduce.MapContext.nextKeyValue(MapContext.java:67)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143)
at
com.cloudera.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:189)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:647)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:323)
at org.apache.hadoop.mapred.Child$4.run(Child.java:270)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
at org.apache.hadoop.mapred.Child.main(Child.java:264)
Caused by: com.hp.t4jdbc.HPT4Exception: *** ERROR[15001] A syntax error
occurred at or before:
SELECT OPPTY_ALNC_PTNR_ID, SR
attempt_201201201838_0001_m_000000_2: log4j:WARN No appenders could be
found for logger (org.apache.hadoop.hdfs.DFSClient).
attempt_201201201838_0001_m_000000_2: log4j:WARN Please initialize the
log4j system properly.
12/01/20 19:35:52 INFO mapred.JobClient: Job complete: job_201201201838_0001
12/01/20 19:35:52 INFO mapred.JobClient: Counters: 6
12/01/20 19:35:52 INFO mapred.JobClient:   Job Counters
12/01/20 19:35:52 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=245116
12/01/20 19:35:52 INFO mapred.JobClient:     Total time spent by all
reduces waiting after reserving slots (ms)=0
12/01/20 19:35:52 INFO mapred.JobClient:     Total time spent by all maps
waiting after reserving slots (ms)=0
12/01/20 19:35:52 INFO mapred.JobClient:     Launched map tasks=6
12/01/20 19:35:52 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=0
12/01/20 19:35:52 INFO mapred.JobClient:     Failed map tasks=1
12/01/20 19:35:52 INFO mapreduce.ImportJobBase: Transferred 0 bytes in
165.7817 seconds (0 bytes/sec)
12/01/20 19:35:52 INFO mapreduce.ImportJobBase: Retrieved 0 records.
12/01/20 19:35:52 ERROR tool.ImportTool: Error during import: Import job
failed

Please help me if you have any thoughts.

Thanks in advance,
Narasimharao

Re: need help on import tool

Posted by Arvind Prabhakar <ar...@apache.org>.
Hi Narasimharao,

From the trace in your mail, it seems that the query generated by
DBRecordReader is invalid. There are no obvious reasons why this may happen
since the other parts of the system seem to be working well. One thing to
check would be to run the import using a single mapper (-m 1) and see if
that works. Also, look at the task logs to see if they have any more
information as to why this has failed.
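
To make the single-mapper suggestion concrete: the sketch below (illustrative Python, not code taken from Sqoop; the query shape only approximates what DataDrivenDBRecordReader builds) shows how the per-mapper query changes with -m 1.

```python
def split_query(columns, table, lower_cond, upper_cond):
    # Approximate shape of the per-split SELECT that Sqoop's
    # DataDrivenDBRecordReader issues (illustrative, not verbatim).
    return ("SELECT " + ", ".join(columns)
            + " FROM " + table
            + " WHERE ( " + lower_cond + " ) AND ( " + upper_cond + " )")

cols = ["OPPTY_ALNC_PTNR_ID", "SRC_SYS_KY"]  # first two columns from the log

# With four mappers, each task appends range conditions on the split column:
print(split_query(cols, "EDW_INT.OPPTY_ALNC_PTNR_F",
                  "SRC_SYS_KY >= 113", "SRC_SYS_KY < 117"))

# With -m 1 there is a single split whose bounding conditions are trivially
# true, which takes the generated range predicates out of the picture:
print(split_query(cols, "EDW_INT.OPPTY_ALNC_PTNR_F", "1=1", "1=1"))
```

If the -m 1 run succeeds, that would point at the generated range predicates, rather than the column list, as what the Neoview SQL parser is rejecting.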

Alternatively, you could build the code locally and instrument it to
produce more debugging output in DBRecordReader to see exactly why it is
behaving the way it is.

Thanks,
Arvind

On Mon, Jan 23, 2012 at 5:31 AM, chelikani narasimharao <
narasimharaoc@gmail.com> wrote:

> hi guys,
>                Can any one help me on this..I got struck up with this
> error.
>
> Appreciate Your Help
>
> Narasimharao
>
>
> On Fri, Jan 20, 2012 at 8:58 PM, chelikani narasimharao <
> narasimharaoc@gmail.com> wrote:
>
>> Hi ,
>>       I am using sqoop to get extract the data from HPNeoView database
>> and then want to write onto Hadoop file system.When i am using the
>> following command i am getting the below error. Here i am providing full
>> error details what i was received when i am running the below command
>>
>> [original command and full log snipped; unchanged from the first message
>> in this thread]
>>
>
>

Re: need help on import tool

Posted by chelikani narasimharao <na...@gmail.com>.
Hi guys,
               Can anyone help me with this? I am stuck on this error.

I appreciate your help.

Narasimharao

On Fri, Jan 20, 2012 at 8:58 PM, chelikani narasimharao <
narasimharaoc@gmail.com> wrote:

> Hi ,
>       I am using sqoop to get extract the data from HPNeoView database and
> then want to write onto Hadoop file system.When i am using the following
> command i am getting the below error. Here i am providing full error
> details what i was received when i am running the below command
>
> [original command and full log snipped; unchanged from the first message
> in this thread]
> 12/01/20 19:33:06 DEBUG mapreduce.JobBase: Adding to job classpath:
> file:/usr/lib/sqoop/sqoop-1.3.0-cdh3u2.jar
> 12/01/20 19:33:06 DEBUG mapreduce.JobBase: Adding to job classpath:
> file:/usr/lib/sqoop/lib/hadoop-mrunit-0.20.2-CDH3b2-SNAPSHOT.jar
> 12/01/20 19:33:06 DEBUG mapreduce.JobBase: Adding to job classpath:
> file:/usr/lib/sqoop/lib/hpt4jdbc.jar
> 12/01/20 19:33:06 DEBUG mapreduce.JobBase: Adding to job classpath:
> file:/usr/lib/sqoop/lib/jackson-mapper-asl-1.7.3.jar
> 12/01/20 19:33:06 DEBUG mapreduce.JobBase: Adding to job classpath:
> file:/usr/lib/sqoop/lib/paranamer-2.3.jar
> 12/01/20 19:33:06 DEBUG mapreduce.JobBase: Adding to job classpath:
> file:/usr/lib/sqoop/lib/jackson-core-asl-1.7.3.jar
> 12/01/20 19:33:06 DEBUG mapreduce.JobBase: Adding to job classpath:
> file:/usr/lib/sqoop/lib/avro-mapred-1.5.4.jar
> 12/01/20 19:33:06 DEBUG mapreduce.JobBase: Adding to job classpath:
> file:/usr/lib/sqoop/lib/ant-contrib-1.0b3.jar
> 12/01/20 19:33:06 DEBUG mapreduce.JobBase: Adding to job classpath:
> file:/usr/lib/sqoop/lib/commons-io-1.4.jar
> 12/01/20 19:33:06 DEBUG mapreduce.JobBase: Adding to job classpath:
> file:/usr/lib/sqoop/lib/jopt-simple-3.2.jar
> 12/01/20 19:33:06 DEBUG mapreduce.JobBase: Adding to job classpath:
> file:/usr/lib/sqoop/lib/avro-ipc-1.5.4.jar
> 12/01/20 19:33:06 DEBUG mapreduce.JobBase: Adding to job classpath:
> file:/usr/lib/sqoop/lib/snappy-java-1.0.3.2.jar
> 12/01/20 19:33:06 DEBUG mapreduce.JobBase: Adding to job classpath:
> file:/usr/lib/sqoop/lib/ant-eclipse-1.0-jvm1.2.jar
> 12/01/20 19:33:06 DEBUG mapreduce.JobBase: Adding to job classpath:
> file:/usr/lib/sqoop/lib/avro-1.5.4.jar
> 12/01/20 19:33:28 INFO db.DataDrivenDBInputFormat: BoundingValsQuery:
> SELECT MIN(SRC_SYS_KY), MAX(SRC_SYS_KY) FROM EDW_INT.OPPTY_ALNC_PTNR_F
> 12/01/20 19:33:29 DEBUG db.IntegerSplitter: Splits: [
>     113 to                          126] into 4 parts
> 12/01/20 19:33:29 DEBUG db.IntegerSplitter:                          113
> 12/01/20 19:33:29 DEBUG db.IntegerSplitter:                          117
> 12/01/20 19:33:29 DEBUG db.IntegerSplitter:                          120
> 12/01/20 19:33:29 DEBUG db.IntegerSplitter:                          123
> 12/01/20 19:33:29 DEBUG db.IntegerSplitter:                          126
> 12/01/20 19:33:35 INFO mapred.JobClient: Running job: job_201201201838_0001
> 12/01/20 19:33:36 INFO mapred.JobClient:  map 0% reduce 0%
> 12/01/20 19:34:18 INFO mapred.JobClient: Task Id :
> attempt_201201201838_0001_m_000000_0, Status : FAILED
> java.io.IOException: SQLException in nextKeyValue
> at
> com.cloudera.sqoop.mapreduce.db.DBRecordReader.nextKeyValue(DBRecordReader.java:251)
> at
> org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:456)
> at org.apache.hadoop.mapreduce.MapContext.nextKeyValue(MapContext.java:67)
> at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143)
> at
> com.cloudera.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:189)
> at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:647)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:323)
> at org.apache.hadoop.mapred.Child$4.run(Child.java:270)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
> at org.apache.hadoop.mapred.Child.main(Child.java:264)
> Caused by: com.hp.t4jdbc.HPT4Exception: *** ERROR[15001] A syntax error
> occurred at or before:
> SELECT OPPTY_ALNC_PTNR_ID, SR
> attempt_201201201838_0001_m_000000_0: log4j:WARN No appenders could be
> found for logger (org.apache.hadoop.hdfs.DFSClient).
> attempt_201201201838_0001_m_000000_0: log4j:WARN Please initialize the
> log4j system properly.
> 12/01/20 19:34:28 INFO mapred.JobClient: Task Id :
> attempt_201201201838_0001_m_000001_0, Status : FAILED
> 12/01/20 19:35:13 INFO mapred.JobClient: Task Id :
> attempt_201201201838_0001_m_000000_1, Status : FAILED
> 12/01/20 19:35:32 INFO mapred.JobClient: Task Id :
> attempt_201201201838_0001_m_000000_2, Status : FAILED
> (each retry failed with the same "java.io.IOException: SQLException in
> nextKeyValue" stack trace and HPT4 ERROR[15001] syntax error shown above,
> plus the same log4j "No appenders" warnings)
> 12/01/20 19:35:52 INFO mapred.JobClient: Job complete:
> job_201201201838_0001
> 12/01/20 19:35:52 INFO mapred.JobClient: Counters: 6
> 12/01/20 19:35:52 INFO mapred.JobClient:   Job Counters
> 12/01/20 19:35:52 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=245116
> 12/01/20 19:35:52 INFO mapred.JobClient:     Total time spent by all
> reduces waiting after reserving slots (ms)=0
> 12/01/20 19:35:52 INFO mapred.JobClient:     Total time spent by all maps
> waiting after reserving slots (ms)=0
> 12/01/20 19:35:52 INFO mapred.JobClient:     Launched map tasks=6
> 12/01/20 19:35:52 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=0
> 12/01/20 19:35:52 INFO mapred.JobClient:     Failed map tasks=1
> 12/01/20 19:35:52 INFO mapreduce.ImportJobBase: Transferred 0 bytes in
> 165.7817 seconds (0 bytes/sec)
> 12/01/20 19:35:52 INFO mapreduce.ImportJobBase: Retrieved 0 records.
> 12/01/20 19:35:52 ERROR tool.ImportTool: Error during import: Import job
> failed
>
> Please help me if you have any thoughts.
>
> Thanks in advance,
> Narasimharao
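>
> P.S. Since the HPT4 ERROR[15001] is raised on the per-split statement Sqoop
> generates ("SELECT OPPTY_ALNC_PTNR_ID, SR..."), one thing I am planning to
> try is a free-form query import so I control the exact SQL sent to NeoView.
> This is only a sketch reusing the connect string and column list from above;
> I have not yet confirmed it works against the HPT4 driver:
>
> ```shell
> # Sketch: replace Sqoop's generated "SELECT t.* FROM ... AS t" statement
> # with an explicit free-form query. $CONDITIONS must appear literally in
> # the query (single quotes keep the shell from expanding it); Sqoop
> # substitutes each split's WHERE clause for it at run time.
> sqoop import \
>   --connect "jdbc:hpt4jdbc://g4n0601a.houston.hp.com:18650/chema=EDW_INT" \
>   --driver com.hp.t4jdbc.HPT4Driver \
>   --username boyapatr_write -P --verbose \
>   --query 'SELECT OPPTY_ALNC_PTNR_ID, SRC_SYS_KY, PTNR_SRC_SYS_KY,
>            OPPTY_ID, PTNR_ID, PTNR_ROLE_CD, PRIM_PTNR_FG, INS_GMT_TS,
>            UPD_GMT_TS, LOAD_JOB_NR, REC_ST_NR, HPQ_RATING_CD,
>            SRC_SYS_UPD_TS
>            FROM EDW_INT.OPPTY_ALNC_PTNR_F WHERE $CONDITIONS' \
>   --split-by SRC_SYS_KY \
>   --target-dir /home/narasimharao/opptyfact
> ```
>
> If even this fails, running the emitted query directly through a plain JDBC
> client should show whether NeoView rejects the syntax itself or the driver
> mangles it.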
>