Posted to user@trafodion.apache.org by Eric Owhadi <er...@esgyn.com> on 2016/02/02 23:36:21 UTC

fixing/checking corrupted metadata?

I have been playing in my dev environment with this DDL:

create table Customer
(
    c_customer_sk           int not null,
    c_customer_id           char(16)     CHARACTER SET UTF8 not null,
    c_current_cdemo_sk      int,
    c_current_hdemo_sk      int,
    c_current_addr_sk       int,
    c_first_shipto_date_sk  int,
    c_first_sales_date_sk   int,
    c_salutation            char(10) CHARACTER SET UTF8,
    c_first_name            char(20) CHARACTER SET UTF8,
    c_last_name             char(30) CHARACTER SET UTF8,
    c_preferred_cust_flag   char(1),
    c_birth_day             integer,
    c_birth_month           integer,
    c_birth_year            integer,
    c_birth_country         varchar(20) CHARACTER SET UTF8,
    c_login                 char(13) CHARACTER SET UTF8,
    c_email_address         char(50) CHARACTER SET UTF8,
    c_last_review_date_sk   int,
    primary key (c_customer_sk)
) SALT USING 2 PARTITIONS
  HBASE_OPTIONS
  (
    DATA_BLOCK_ENCODING = 'FAST_DIFF',
    COMPRESSION = 'SNAPPY'
  );



After a long time and supposedly 35 retries, it complained about the lack
of SNAPPY compression support in local_hadoop.
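
For reference, a quick way to confirm codec support up front (a sketch, assuming the stock Hadoop and HBase command-line tools are on the PATH; the scratch file path is just an example):

  # list the native codecs this Hadoop build can actually load (snappy, zlib, lz4, ...)
  hadoop checknative -a

  # ask HBase to write and read back a test HFile through the snappy codec
  hbase org.apache.hadoop.hbase.util.CompressionTest file:///tmp/snappy_probe snappy

If either command reports snappy as unavailable, a CREATE TABLE with COMPRESSION = 'SNAPPY' cannot succeed.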



That’s fine, so I decided to retry with:

create table Customer
(
    c_customer_sk           int not null,
    c_customer_id           char(16)     CHARACTER SET UTF8 not null,
    c_current_cdemo_sk      int,
    c_current_hdemo_sk      int,
    c_current_addr_sk       int,
    c_first_shipto_date_sk  int,
    c_first_sales_date_sk   int,
    c_salutation            char(10) CHARACTER SET UTF8,
    c_first_name            char(20) CHARACTER SET UTF8,
    c_last_name             char(30) CHARACTER SET UTF8,
    c_preferred_cust_flag   char(1),
    c_birth_day             integer,
    c_birth_month           integer,
    c_birth_year            integer,
    c_birth_country         varchar(20) CHARACTER SET UTF8,
    c_login                 char(13) CHARACTER SET UTF8,
    c_email_address         char(50) CHARACTER SET UTF8,
    c_last_review_date_sk   int,
    primary key (c_customer_sk)
) SALT USING 2 PARTITIONS
  HBASE_OPTIONS
  (
    DATA_BLOCK_ENCODING = 'FAST_DIFF'
    -- not available in local_hadoop: COMPRESSION = 'SNAPPY'
  );



And this time it took forever and never completed (I waited 20 minutes, then
killed it).



I am assuming that the hang on the second attempt is a consequence of the
first failure, which must have left things half done.



I know that I can do a full uninstall/re-install of local Hadoop, but I was
wondering if there is a metadata cleanup utility that I could try before
applying the bazooka?



Thanks in advance for the help,
Eric

RE: fixing/checking corrupted metadata?

Posted by Dave Birdsall <da...@esgyn.com>.
Hi Eric,



There might be hung transactions. Run “dtmci” and then “status trans” to
check. If so, you can get rid of them by doing sqstop + sqstart, though you
might need to do ckillall along with sqstop, because sqstop sometimes hangs
on these.
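
A minimal sketch of that sequence from a shell (assuming a standard instance where the Trafodion scripts are on the PATH; the exact dtmci prompt commands may vary by release):

  dtmci               # opens the DTM command interface
    status trans      # typed at the dtmci prompt; lists transactions
    quit              # exit dtmci (assumed command name)
  sqstop              # stop the Trafodion instance
  ckillall            # only if sqstop itself hangs
  sqstart             # restart the instance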



There is likely messed-up metadata. To clean that up, you can do “cleanup
table customer”.
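
For example, from an sqlci session (a sketch; “customer” here is the table from the DDL above):

  >> cleanup table customer;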



Actually, you can do this without cleaning up the hung transactions. Those
can just stay there.



Dave

RE: fixing/checking corrupted metadata?

Posted by Roberta Marton <ro...@esgyn.com>.
Cleanup works well for me.



     Roberta

RE: fixing/checking corrupted metadata?

Posted by Sean Broeder <se...@esgyn.com>.
+1

RE: fixing/checking corrupted metadata?

Posted by Roberta Marton <ro...@esgyn.com>.
We may want to add this to our knowledgeware as an FAQ.



   Roberta

RE: fixing/checking corrupted metadata?

Posted by Eric Owhadi <er...@esgyn.com>.
Awesome, bookmarked :-). And yes, it solved the problem.

Thanks all for the precious help,
Eric

Re: fixing/checking corrupted metadata?

Posted by Suresh Subbiah <su...@gmail.com>.
Here is the syntax for cleanup.
https://cwiki.apache.org/confluence/display/TRAFODION/Metadata+Cleanup

We need to add this to the manual that Gunnar created. I will file a JIRA
to raise an error and exit early if the requested compression type is not
available.

Thanks
Suresh

RE: fixing/checking corrupted metadata?

Posted by Eric Owhadi <er...@esgyn.com>.
Great, thanks for the info, very helpful.

You mention the Trafodion documentation; in which document is it described?
I looked for it in the Trafodion Command Interface Guide and the Trafodion
SQL Reference Manual with no luck, and the other doc titles did not look
promising.

Eric

RE: fixing/checking corrupted metadata?

Posted by Anoop Sharma <an...@esgyn.com>.
Dave mentioned ‘cleanup table customer’. You can use that if you know which
table is messed up in metadata.



Or one can use:

  cleanup metadata, check, return details;

to find out all the entries which may be corrupt, and then:

  cleanup metadata, return details;



The cleanup command is also documented in the Trafodion documentation,
which is a good place to check.
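
Put together, a typical repair session from sqlci might look like this (a sketch; the exact report format depends on the release):

  >> cleanup metadata, check, return details;
  ...lists the metadata entries that appear corrupt, without changing anything...
  >> cleanup metadata, return details;
  ...removes the corrupt entries and reports what was cleaned...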



anoop

RE: fixing/checking corrupted metadata?

Posted by Sean Broeder <se...@esgyn.com>.
Right.  I mentioned this only because reinstalling local_hadoop was
mentioned.  Reinitializing Trafodion would be quicker, but just as fatal
for existing data.

RE: fixing/checking corrupted metadata?

Posted by Dave Birdsall <da...@esgyn.com>.
Only do that if you’re willing to get rid of your entire database.

RE: fixing/checking corrupted metadata?

Posted by Sean Broeder <se...@esgyn.com>.
You might want to try, from sqlci: initialize trafodion, drop; followed by
initialize trafodion;
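
Spelled out as an sqlci session (a sketch; as Dave warns just above, this gets rid of the entire database, so only use it on a disposable dev instance):

  >> initialize trafodion, drop;   -- drops all Trafodion metadata, and the database with it
  >> initialize trafodion;         -- recreates empty metadata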






