Posted to commits@cassandra.apache.org by "Stefania (JIRA)" <ji...@apache.org> on 2016/03/10 02:46:40 UTC
[jira] [Updated] (CASSANDRA-11333) cqlsh: COPY FROM should check
that explicit column names are valid
[ https://issues.apache.org/jira/browse/CASSANDRA-11333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Stefania updated CASSANDRA-11333:
---------------------------------
Description:
If an invalid column name is specified in a COPY FROM command, the command fails without a meaningful error message.
For example using this schema:
{code}
CREATE TABLE bulk_read.value500k_cluster1 (
pk int,
c1 int,
v1 text,
v2 text,
PRIMARY KEY (pk, c1)
);
{code}
and this COPY FROM command (note that the third column name is wrong):
{code}
COPY bulk_read.value500k_cluster1 (pk, c1, vv, v2) FROM 'test.csv';
{code}
we get the following error:
{code}
Starting copy of bulk_read.value500k_cluster1 with columns ['pk', 'c1', 'vv', 'v2'].
1 child process(es) died unexpectedly, aborting
Processed: 0 rows; Rate: 0 rows/s; Avg. rate: 0 rows/s
0 rows imported from 0 files in 0.109 seconds (0 skipped).
{code}
Running cqlsh with {{--debug}} reveals where the problem is:
{code}
Starting copy of bulk_read.value500k_cluster1 with columns ['pk', 'c1', 'vv', 'v2'].
Traceback (most recent call last):
File "/home/automaton/cassandra-src/bin/../pylib/cqlshlib/copyutil.py", line 2005, in run
self.inner_run(*self.make_params())
File "/home/automaton/cassandra-src/bin/../pylib/cqlshlib/copyutil.py", line 2027, in make_params
is_counter = ("counter" in [table_meta.columns[name].cql_type for name in self.valid_columns])
{code}
The parent process should check that all column names are valid and output an appropriate error message rather than letting worker processes crash.
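The proposed check is straightforward to sketch. The following is a hypothetical illustration, not the actual copyutil.py code: the helper name is invented and a plain set stands in for the driver's table metadata. The idea is simply that the parent process compares the user-supplied column names against the table's columns and reports any unknown names before spawning worker processes.

```python
def check_columns(columns, table_columns):
    """Raise ValueError naming any columns not present in the table."""
    invalid = [name for name in columns if name not in table_columns]
    if invalid:
        raise ValueError('Invalid column name(s): %s' % ', '.join(invalid))

# With the schema above, 'vv' is not a valid column name:
table_columns = {'pk', 'c1', 'v1', 'v2'}
try:
    check_columns(['pk', 'c1', 'vv', 'v2'], table_columns)
except ValueError as e:
    print(e)  # Invalid column name(s): vv
```

Validating in the parent turns the opaque "child process(es) died unexpectedly" failure into a clear error emitted before any worker is started.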
was:
If an invalid column name is specified in a COPY FROM command, the command fails without a meaningful error message.
For example using this schema:
{code}
CREATE TABLE bulk_read.value500k_cluster1 (
pk int,
c1 int,
v1 text,
v2 text,
PRIMARY KEY (pk, c1)
);
{code}
and this COPY FROM command (note that the third column name is wrong):
{code}
COPY bulk_read.value500k_cluster1 (pk, c1, vv, v2) FROM 'test.csv';
{code}
we get the following error:
{code}
Starting copy of bulk_read.value500k_cluster1 with columns ['pk', 'c1', 'vv', 'v2'].
1 child process(es) died unexpectedly, aborting
Processed: 0 rows; Rate: 0 rows/s; Avg. rate: 0 rows/s
0 rows imported from 0 files in 0.109 seconds (0 skipped).
{code}
Running cqlsh with {{--debug}} reveals where the problem is:
{code}
Starting copy of bulk_read.value500k_cluster1 with columns ['pk', 'c1', 'vv', 'v2'].
Traceback (most recent call last):
File "/home/automaton/cassandra-src/bin/../pylib/cqlshlib/copyutil.py", line 2005, in run
self.inner_run(*self.make_params())
File "/home/automaton/cassandra-src/bin/../pylib/cqlshlib/copyutil.py", line 2027, in make_params
is_counter = ("counter" in [table_meta.columns[name].cql_type for name in self.valid_columns])
{code}
The parent process should check that all column names are valid and output an appropriate error message rather than letting worker processes die.
> cqlsh: COPY FROM should check that explicit column names are valid
> ------------------------------------------------------------------
>
> Key: CASSANDRA-11333
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11333
> Project: Cassandra
> Issue Type: Bug
> Components: Tools
> Reporter: Stefania
> Assignee: Stefania
> Priority: Minor
> Fix For: 2.2.x, 3.0.x, 3.x
>
>
> If an invalid column name is specified in a COPY FROM command, the command fails without a meaningful error message.
> For example using this schema:
> {code}
> CREATE TABLE bulk_read.value500k_cluster1 (
> pk int,
> c1 int,
> v1 text,
> v2 text,
> PRIMARY KEY (pk, c1)
> );
> {code}
> and this COPY FROM command (note that the third column name is wrong):
> {code}
> COPY bulk_read.value500k_cluster1 (pk, c1, vv, v2) FROM 'test.csv';
> {code}
> we get the following error:
> {code}
> Starting copy of bulk_read.value500k_cluster1 with columns ['pk', 'c1', 'vv', 'v2'].
> 1 child process(es) died unexpectedly, aborting
> Processed: 0 rows; Rate: 0 rows/s; Avg. rate: 0 rows/s
> 0 rows imported from 0 files in 0.109 seconds (0 skipped).
> {code}
> Running cqlsh with {{--debug}} reveals where the problem is:
> {code}
> Starting copy of bulk_read.value500k_cluster1 with columns ['pk', 'c1', 'vv', 'v2'].
> Traceback (most recent call last):
> File "/home/automaton/cassandra-src/bin/../pylib/cqlshlib/copyutil.py", line 2005, in run
> self.inner_run(*self.make_params())
> File "/home/automaton/cassandra-src/bin/../pylib/cqlshlib/copyutil.py", line 2027, in make_params
> is_counter = ("counter" in [table_meta.columns[name].cql_type for name in self.valid_columns])
> {code}
> The parent process should check that all column names are valid and output an appropriate error message rather than letting worker processes crash.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)