Posted to user@cassandra.apache.org by CPC <ac...@gmail.com> on 2018/06/22 18:43:45 UTC
cassandra getendpoints do not match with tracing
Hi all,
Recently we added some nodes to our cluster. After adding the nodes we noticed
that when we run nodetool getendpoints tims "MESSAGE_HISTORY" partitionkey1,
it reports three nodes per DC, six nodes in total, which is expected since RF
is 3. But when we run a query with LOCAL_ONE and tracing on, we see some
nodes in the tracing that are not reported by getendpoints, and the tracing
says that these nodes, which getendpoints does not report, are reading some
sstables. Are these nodes coordinators, or is something wrong?
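For context, getendpoints resolves replicas purely from the partition key's token on the ring. Below is a minimal illustrative sketch of that lookup, with hypothetical tokens and a stand-in hash; it is not Cassandra's actual code (which uses Murmur3Partitioner and the keyspace's replication strategy):

```python
import bisect
import hashlib

# Hypothetical ring for one DC: sorted (token, node) pairs.
RING = sorted([
    (100, "172.16.5.223"),
    (300, "172.16.5.228"),
    (500, "172.16.5.229"),
    (700, "172.16.5.234"),
    (850, "172.16.5.235"),
    (950, "172.16.5.241"),
])

def token_for(key: str) -> int:
    # Stand-in hash; real Cassandra uses the Murmur3 partitioner.
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % 1000

def replicas(key: str, rf: int = 3) -> list:
    """SimpleStrategy-style lookup: walk the ring clockwise from the
    key's token and take the next rf distinct nodes."""
    tokens = [t for t, _ in RING]
    i = bisect.bisect_right(tokens, token_for(key)) % len(RING)
    out = []
    while len(out) < rf:
        node = RING[i % len(RING)][1]
        if node not in out:
            out.append(node)
        i += 1
    return out
```

Any node outside this set would normally appear in a trace only as the coordinator, or for reads that are not routed by the partition key.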
Re: cassandra getendpoints do not match with tracing
Posted by CPC <ac...@gmail.com>.
Any ideas? Below is the getendpoints result for a specific pk:
172.16.5.235
172.16.5.229
172.16.5.228
172.16.5.223
172.16.5.234
172.16.5.241
and below is a trace with the same pk:
Preparing statement [Native-Transport-Requests-2] | 2018-06-22 16:33:21.118000 | 172.16.5.242 | 6757 | 10.201.165.77
reading data from /172.16.5.243 [Native-Transport-Requests-2] | 2018-06-22 16:33:21.119000 | 172.16.5.242 | 7143 | 10.201.165.77
Sending READ message to /172.16.5.243 message size 185 bytes [MessagingService-Outgoing-/172.16.5.243-Small] | 2018-06-22 16:33:21.119000 | 172.16.5.242 | 7511 | 10.201.165.77
speculating read retry on /172.16.5.236 [Native-Transport-Requests-2] | 2018-06-22 16:33:21.126000 | 172.16.5.242 | 14149 | 10.201.165.77
Sending READ message to /172.16.5.236 message size 185 bytes [MessagingService-Outgoing-/172.16.5.236-Small] | 2018-06-22 16:33:21.126000 | 172.16.5.242 | 14286 | 10.201.165.77
And at the end of the tracing:
Submitting range requests on 1 ranges with a concurrency of 1 (5.859375E-4 rows per range expected) [Native-Transport-Requests-1] | 2018-06-22 20:53:20.594000 | 172.16.5.242 | 2764 | 10.201.165.77
Enqueuing request to /172.16.5.234 [Native-Transport-Requests-1] | 2018-06-22 20:53:20.594000 | 172.16.5.242 | 2871 | 10.201.165.77
Submitted 1 concurrent range requests [Native-Transport-Requests-1] | 2018-06-22 20:53:20.594000 | 172.16.5.242 | 2939 | 10.201.165.77
Sending RANGE_SLICE message to /172.16.5.234 message size 252 bytes [MessagingService-Outgoing-/172.16.5.234-Small] | 2018-06-22 20:53:20.594000 | 172.16.5.242 | 2967 | 10.201.165.77
RANGE_SLICE message received from /172.16.5.242 [MessagingService-Incoming-/172.16.5.242] | 2018-06-22 20:53:20.600000 | 172.16.5.234 | 28 | 10.201.165.77
Executing read on tims.MESSAGE_HISTORY using index msgididx [ReadStage-1] | 2018-06-22 20:53:20.601000 | 172.16.5.234 | 360 | 10.201.165.77
Executing single-partition query on MESSAGE_HISTORY.msgididx [ReadStage-1] | 2018-06-22 20:53:20.601000 | 172.16.5.234 | 468 | 10.201.165.77
REQUEST_RESPONSE message received from /172.16.5.234 [MessagingService-Incoming-/172.16.5.234] | 2018-06-22 20:53:20.612000 | 172.16.5.242 | 21179 | 10.201.165.77
Processing response from /172.16.5.234 [RequestResponseStage-6] | 2018-06-22 20:53:20.612000 | 172.16.5.242 | 21238 | 10.201.165.77
Request complete | 2018-06-22 20:53:20.612342 | 172.16.5.242 | 21342 | 10.201.165.77
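Worth noting: this second trace is a secondary-index read ("using index msgididx") carried out as RANGE_SLICE requests, and range requests are routed per token range rather than to the queried partition's replica set. A toy sketch of that ownership mapping, with the same hypothetical tokens as above (illustrative only, not Cassandra's code):

```python
# Hypothetical ring for one DC: sorted (token, node) pairs.
RING = [(100, "172.16.5.223"), (300, "172.16.5.228"), (500, "172.16.5.229"),
        (700, "172.16.5.234"), (850, "172.16.5.235"), (950, "172.16.5.241")]

def ranges_and_owners(ring):
    """Each node primarily owns the token range ending at its own token
    (the ring wraps around). A range/index scan walks these ranges and,
    at CL=ONE/LOCAL_ONE, contacts one replica per range, so the nodes
    contacted depend on the ranges scanned, not on one partition key."""
    out = []
    prev = ring[-1][0]  # wrap-around: last token precedes the first
    for token, node in ring:
        out.append(((prev, token), node))
        prev = token
    return out
```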
I understand that 234 is an endpoint, so the coordinator should communicate
with it. But I don't understand why the tracing includes the 243 and 236 IPs:
they are not endpoints, and we are submitting this query through 172.16.5.242.
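One mechanism visible in the first trace is speculative retry ("speculating read retry on /172.16.5.236"): when the replica first asked is slower than the table's speculative_retry threshold, the coordinator sends a backup read to another node and uses whichever response arrives first. A toy model of that decision, with hypothetical latencies and threshold (not Cassandra's implementation):

```python
def read_with_speculation(replicas, latency_ms, threshold_ms=10.0):
    """Send the read to the first replica; if its observed latency
    exceeds the threshold, also send a backup read to the next replica
    and return whichever would answer first, plus all nodes contacted
    (every contacted node shows up in the trace)."""
    contacted = [replicas[0]]
    if latency_ms[replicas[0]] <= threshold_ms or len(replicas) == 1:
        return replicas[0], contacted
    # First replica is slow: speculate to a second one. Its response
    # can only arrive after the threshold has already elapsed.
    contacted.append(replicas[1])
    backup_done = threshold_ms + latency_ms[replicas[1]]
    winner = replicas[0] if latency_ms[replicas[0]] <= backup_done else replicas[1]
    return winner, contacted
```

This explains why more nodes can appear in a trace than actually contribute the returned data, although it does not by itself explain why the contacted nodes would fall outside the getendpoints set.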