Posted to user@sqoop.apache.org by Jarek Jarcec Cecho <ja...@apache.org> on 2014/05/06 18:32:33 UTC

Re: wrong string created after sqoop import for timestamp

It seems that Pig is more strict than necessary, but you should be able to overcome it by changing the expected format, right?
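
For example, ToDate(chararray, pattern) uses Joda-Time format patterns, so a pattern that includes the fractional second should accept the Sqoop-imported value. A minimal sketch, reusing the relation and field names from the script quoted below and assuming createdat is a chararray like '2014-01-18 01:01:02.0' (note that in Joda-Time patterns day-of-month is 'dd' and calendar year is 'yyyy'; 'DD' and 'YYYY' mean day-of-year and week-year):

-- parse the imported timestamp, including the trailing '.0' fraction
scoreDates1 = FOREACH scoreattributevalue GENERATE
    slot,
    createdat,
    ToDate(createdat, 'yyyy-MM-dd HH:mm:ss.S');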

Jarcec

On Wed, Apr 16, 2014 at 07:52:46PM +0530, Prabhakar wrote:
> Sorry, I forgot to add the Pig script line below:
> 
> scoreDates1 = foreach scoreattributevalue generate slot,
> createdat,ToDate(slot),ToDate(createdat, 'YYYY-MM-DD HH:mm:ss');
> I am getting that error when I try to DUMP it.
> 
> Thanks,
> Prabhakar
> 
> 
> On Wed, Apr 16, 2014 at 7:50 PM, Prabhakar <o....@gmail.com> wrote:
> 
> > Thanks for your mail.
> > Actually, I am loading that value into a Pig script.
> > When I reference that column value, I get the error below:
> >
> > 2014-04-16 19:47:37,227 [main] ERROR
> > org.apache.pig.tools.pigstats.SimplePigStats - ERROR: Invalid format:
> > "2014-01-18 01:01:02.0" is malformed at ".0"
> > 2014-04-16 19:47:37,227 [main] ERROR
> > org.apache.pig.tools.pigstats.PigStatsUtil - 1 map reduce job(s) failed!
> >
> >
> > Thanks,
> > Prabhakar
> >
> >
> > On Wed, Apr 16, 2014 at 7:46 PM, Jarek Jarcec Cecho <ja...@apache.org> wrote:
> >
> >> I'm wondering why you see that as an issue, Sandipan. Timestamp values
> >> "1903-05-28 16:13:52" and "1903-05-28 16:13:52.0" are equivalent, in the
> >> same way that the double numbers 1.0 and 1.00 are, so I'm wondering how
> >> it is affecting you.
> >>
> >> Jarcec
> >>
> >> On Wed, Apr 16, 2014 at 07:02:03PM +0530, Prabhakar wrote:
> >> > Thank you very much Sandipan, it's working.
> >> >
> >> > I used DATE_FORMAT(FROM_UNIXTIME(unix_timestamp(sav.updatedAt)),
> >> > '%Y-%m-%d %H:%i:%s') to get the correct value into HDFS.
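> >> >
> >> > A simpler form that seems to work is applying DATE_FORMAT to the
> >> > TIMESTAMP column directly, skipping the unix_timestamp()/FROM_UNIXTIME()
> >> > round trip; a minimal sketch, where the table name is a made-up
> >> > placeholder:
> >> >
> >> > -- 'score_attribute_value' is hypothetical; substitute the real table
> >> > SELECT DATE_FORMAT(sav.updatedAt, '%Y-%m-%d %H:%i:%s') AS updatedAt
> >> > FROM score_attribute_value sav;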
> >> > Is there an even better function, so that we can improve the performance?
> >> > Thanks anyway, it's working for me.
> >> >
> >> > Thanks,
> >> > Prabhakar
> >> >
> >> >
> >> > On Wed, Apr 16, 2014 at 5:14 PM, Sandipan.Ghosh
> >> > <Sa...@target.com> wrote:
> >> >
> >> > > This is because the import converts it to a Java timestamp on the
> >> > > Hadoop side, which carries milliseconds as well. If you don't want
> >> > > this, cast the date column to a string in MySQL and import it as a
> >> > > string into HDFS.
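> >> > >
> >> > > For example, a free-form query import can do the cast on the MySQL
> >> > > side; a rough sketch, where the connect string, table, and column
> >> > > names are placeholders:
> >> > >
> >> > > sqoop import \
> >> > >   --connect jdbc:mysql://dbhost/mydb \
> >> > >   --username myuser -P \
> >> > >   --query 'SELECT id, CAST(updatedat AS CHAR) AS updatedat FROM mytable WHERE $CONDITIONS' \
> >> > >   --split-by id \
> >> > >   --target-dir /user/hadoop/mytable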
> >> > >
> >> > >
> >> > >
> >> > > Hope this helps.
> >> > >
> >> > > Thanks
> >> > >
> >> > > sandipan
> >> > >
> >> > >
> >> > >
> >> > > *From:* Prabhakar [mailto:o.prabhakar@gmail.com]
> >> > > *Sent:* Wednesday, April 16, 2014 5:09 PM
> >> > > *To:* user@sqoop.apache.org
> >> > >
> >> > > *Subject:* wrong string created after sqoop import for timestamp
> >> > >
> >> > >
> >> > >
> >> > > Hi,
> >> > >
> >> > > I am trying to import a MySQL table to HDFS. One of the columns in
> >> > > MySQL is a timestamp, and after importing the table I am seeing an
> >> > > extra value for that column.
> >> > >
> >> > > In MySQL the time is:        1903-05-28 16:13:52
> >> > > After import, in HDFS it is: 1903-05-28 16:13:52.0
> >> > >
> >> > > An extra ".0" is added at the end. Please help me with this.
> >> > >
> >> > > Thanks,
> >> > > Prabhakar
> >> > >
> >>
> >
> >