Posted to user@flink.apache.org by Fabian Wollert <fa...@zalando.de> on 2017/07/13 11:26:01 UTC

Flink Elasticsearch Connector: Lucene Error message

Hi everyone,

I'm trying to make use of the new Elasticsearch Connector
<https://ci.apache.org/projects/flink/flink-docs-release-1.3/dev/connectors/elasticsearch.html>.
I got a version running locally (with ssh tunnels to my Elasticsearch
cluster in AWS) in my IDE, I see the data in Elasticsearch written
perfectly, as I want it. As soon as I try to run this on our dev cluster
(Flink 1.3.0, running in the same VPC) though, I get the following error
message (in the sink):

java.lang.NoSuchFieldError: LUCENE_5_5_0
    at org.elasticsearch.Version.<clinit>(Version.java:295)
    at org.elasticsearch.client.transport.TransportClient$Builder.build(TransportClient.java:129)
    at org.apache.flink.streaming.connectors.elasticsearch2.Elasticsearch2ApiCallBridge.createClient(Elasticsearch2ApiCallBridge.java:65)
    at org.apache.flink.streaming.connectors.elasticsearch.ElasticsearchSinkBase.open(ElasticsearchSinkBase.java:272)
    at org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:36)
    at org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.open(AbstractUdfStreamOperator.java:111)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.openAllOperators(StreamTask.java:375)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:252)
    at org.apache.flink.runtime.taskmanager.Task.run(Task.java:702)
    at java.lang.Thread.run(Thread.java:748)

I first thought that this has something to do with mismatched versions, but
it happens to me with Elasticsearch 2.2.2 (bundled with Lucene 5.4.1) and
Elasticsearch 2.3 (bundled with Lucene 5.5.0).

Can someone point to the exact version conflict happening here (or where to
investigate further)? Currently my setup looks like everything is actually
running with Lucene 5.5.0, so I'm wondering where exactly that error message
is coming from, and also why it runs locally but not in the cluster. I'm
still investigating whether this is a general connection issue from the
Flink cluster to the ES cluster, but that would be surprising, and the
error message would then be misleading ...
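[Editor's note: one way to answer "where is that class coming from" is to ask a class for its code source. A minimal sketch, not from the thread; the class names to probe (e.g. "org.elasticsearch.Version" or "org.apache.lucene.util.Version") are illustrative, and on a cluster you would run the probe inside the job, e.g. in a sink's open() method:]

```java
import java.security.CodeSource;

// Sketch: print which jar (code source) a class was loaded from, to spot a
// conflicting copy on the classpath. Classes loaded by the bootstrap
// classloader (plain JDK classes) have no code source.
public class WhichJar {
    static String locate(String className) throws ClassNotFoundException {
        Class<?> clazz = Class.forName(className);
        CodeSource src = clazz.getProtectionDomain().getCodeSource();
        return src == null ? "bootstrap classloader" : src.getLocation().toString();
    }

    public static void main(String[] args) throws Exception {
        // Illustrative default; pass the suspect class name as an argument.
        String target = args.length > 0 ? args[0] : "java.lang.String";
        System.out.println(target + " -> " + locate(target));
    }
}
```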

Cheers
Fabian

--
*Fabian Wollert*
*Senior Data Engineer*

*POSTAL ADDRESS*
*Zalando SE*
*11501 Berlin*

*OFFICE*
*Zalando SE*
*Charlottenstraße 4*
*10969 Berlin*
*Germany*

*Email: fabian.wollert@zalando.de <fa...@zalando.de>*
*Web: corporate.zalando.com <http://corporate.zalando.com>*
*Jobs: jobs.zalando.de <http://jobs.zalando.de>*

*Zalando SE, Tamara-Danz-Straße 1, 10243 Berlin*
*Company registration: Amtsgericht Charlottenburg, HRB 158855 B*
*VAT registration number: DE 260543043*
*Management Board: Robert Gentz, David Schneider, Rubin Ritter*
*Chairperson of the Supervisory Board: Lothar Lanz*
*Registered office: Berlin*

Re: Flink Elasticsearch Connector: Lucene Error message

Posted by "Tzu-Li (Gordon) Tai" <tz...@apache.org>.
Glad to hear it’s working!

Yes, normally you should avoid using the lib folder to resolve these dependency issues and rely only on user jar packaging when working with Flink connectors.

- Gordon


On 17 July 2017 at 9:44:20 PM, Fabian Wollert (fabian.wollert@zalando.de) wrote:

TL;DR: remove all Lucene and Elasticsearch libs from your Flink env and just use Maven to manage dependencies when working with the Flink Elasticsearch connector.

So in the first place I deleted the libs in the folder to see if it works, but it did not. Then we thought that maybe Flink already loads the libs at startup, so I packaged our Flink appliance again, without the old Lucene lib which was still loaded, redeployed, and voilà, it worked!
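[Editor's note: in Maven terms, the "just use Maven" approach means declaring the connector as a normal dependency of the job so the fat-jar build bundles it. A sketch; the artifact's Scala suffix and version are illustrative for Flink 1.3 / ES 2.x, and a shade/assembly plugin configuration is assumed to exist in the project:]

```xml
<!-- Sketch: declare the ES 2.x connector in the job's pom.xml so that it
     (and the Lucene 5.5.0 it pulls in) ends up inside the user jar,
     instead of dropping jars into Flink's lib/ folder.
     Scala suffix and version are illustrative. -->
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-connector-elasticsearch2_2.10</artifactId>
  <version>1.3.0</version>
</dependency>
```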

thanks guys for the investigation help!

Cheers


--
Fabian Wollert
Zalando SE

E-Mail: fabian.wollert@zalando.de
Location: ZMAP

2017-07-17 9:58 GMT+02:00 Tzu-Li (Gordon) Tai <tz...@apache.org>:
Hi,

I would also recommend checking the `lib/` folder of your Flink installation to see if there are any dangling old-version jars that you added there.
I did a quick dependency check on the Elasticsearch 2 connector; it correctly pulls in Lucene 5.5.0 only, so this dependency should not pop up given that the user code is packaged properly.
As of now, I would guess it is a dependency conflict caused either by the reasons mentioned above, or by some other dependency in the user jar pulling in a conflicting Lucene version.
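[Editor's note: a sketch of that `lib/` check; the Flink installation path is illustrative:]

```shell
# Sketch: list any Lucene or Elasticsearch jars that ended up in a Flink
# installation's lib/ folder; jars there are loaded at startup and shadow
# whatever versions the user jar bundles.
find_stray_jars() {
    ls "$1" 2>/dev/null | grep -Ei 'lucene|elasticsearch'
}

# Illustrative path; adjust to your installation.
find_stray_jars /opt/flink/lib || echo "no stray Lucene/ES jars found"
```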

Of course, if you doubt otherwise and that isn’t the case, let us know the result of your checks so we can investigate further! Thanks.

Cheers,
Gordon

On 17 July 2017 at 3:38:17 PM, Fabian Wollert (fabian.wollert@zalando.de) wrote:

1.3.0, but I only need the ES 2.x connector working right now, since that's the Elasticsearch version we're using. Another option would be to upgrade to ES 5 (at least on dev) to see if it works as well, but that doesn't sound like fixing the problem to me :-D

Cheers
Fabian


--
Fabian Wollert
Zalando SE

E-Mail: fabian.wollert@zalando.de
Location: ZMAP

2017-07-16 15:47 GMT+02:00 Aljoscha Krettek <al...@apache.org>:
Hi,

There was also a problem in releasing the ES 5 connector with Flink 1.3.0. You only said you’re using Flink 1.3, would that be 1.3.0 or 1.3.1?

Best,
Aljoscha

On 16. Jul 2017, at 13:42, Fabian Wollert <fa...@zalando.de> wrote:

Hi Aljoscha,

we are running Flink in standalone mode, inside Docker in AWS. I will check the dependencies tomorrow, although I'm wondering: I'm running Flink 1.3 everywhere and the appropriate ES connector which was only released with 1.3, so it's weird where this dependency mix-up comes from ... let's see ...

Cheers
Fabian


--
Fabian Wollert
Zalando SE

E-Mail: fabian.wollert@zalando.de
Location: ZMAP

2017-07-14 11:15 GMT+02:00 Aljoscha Krettek <al...@apache.org>:
This kind of error almost always hints at a dependency clash, i.e. there is some version of this code on the class path that clashes with the version the Flink program uses. That's why it works in local mode, where there are probably not many other dependencies, but not in cluster mode.
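[Editor's note: concretely, `NoSuchFieldError: LUCENE_5_5_0` means the Elasticsearch client was compiled against a Lucene whose `org.apache.lucene.util.Version` class has a `LUCENE_5_5_0` constant, but at runtime an older Lucene without that field was linked. A reflective probe can check for the field without triggering the error; a sketch, with illustrative class/field names:]

```java
import java.lang.reflect.Field;

// Sketch: check via reflection whether a class on the classpath exposes a
// given public field, without linking against it (linking is what throws
// NoSuchFieldError).
public class FieldProbe {
    static boolean hasField(String className, String fieldName) {
        try {
            for (Field f : Class.forName(className).getFields()) {
                if (f.getName().equals(fieldName)) {
                    return true;
                }
            }
        } catch (ClassNotFoundException e) {
            // class not on the classpath at all
        }
        return false;
    }

    public static void main(String[] args) {
        // On the cluster one would probe, e.g.:
        //   hasField("org.apache.lucene.util.Version", "LUCENE_5_5_0")
        System.out.println(hasField("java.lang.Integer", "MAX_VALUE"));
    }
}
```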

How are you running it on the cluster? Standalone, YARN?

Best,
Aljoscha

On 13. Jul 2017, at 13:56, Fabian Wollert <fa...@zalando.de> wrote:

Hi Timo, Hi Gordon,

Thx for the reply! I checked the connection from both clusters to each other, and I can telnet to port 9300 from the Flink cluster, so I think the connection is not an issue here.

We are currently using a custom Elasticsearch connector in our live env, which used some extra libs deployed on the cluster. I found one Lucene lib and deleted it (since all dependencies should be in the Flink job jar), but unfortunately that did not help either ...

Cheers
Fabian


--
Fabian Wollert
Data Engineering
Technology

E-Mail: fabian.wollert@zalando.de
Location: ZMAP

2017-07-13 13:46 GMT+02:00 Timo Walther <tw...@apache.org>:
Hi Fabian,

I loop in Gordon. Maybe he knows what's happening here.

Regards,
Timo



Re: Flink Elasticsearch Connector: Lucene Error message

Posted by Fabian Wollert <fa...@zalando.de>.
Hi Aljoscha,

we are running Flink in standalone mode, inside Docker on AWS. I will
check the dependencies tomorrow, although I'm wondering: I'm running Flink
1.3 everywhere, with the matching ES connector that was only released with
1.3, so it's odd where this dependency mix-up comes from ... let's see ...

Cheers
Fabian


--

Fabian Wollert
Zalando SE

E-Mail: fabian.wollert@zalando.de
Location: ZMAP <ht...@zalando.de>

2017-07-14 11:15 GMT+02:00 Aljoscha Krettek <al...@apache.org>:


Re: Flink Elasticsearch Connector: Lucene Error message

Posted by Aljoscha Krettek <al...@apache.org>.
This kind of error almost always hints at a dependency clash, i.e. some version of this code on the classpath clashes with the version that the Flink program uses. That's why it works in local mode, where there are probably few other dependencies on the classpath, but not in cluster mode.
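
If it is such a clash, a common workaround is to relocate the conflicting packages inside the job jar, e.g. with the Maven Shade plugin. A sketch (the plugin version and the relocation pattern are illustrative assumptions, not taken from this thread):

```xml
<!-- pom.xml fragment: shade the job jar and relocate Lucene so the
     bundled version cannot clash with another Lucene on the cluster -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>2.4.3</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
      <configuration>
        <relocations>
          <relocation>
            <pattern>org.apache.lucene</pattern>
            <shadedPattern>shaded.org.apache.lucene</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
```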

How are you running it on the cluster? Standalone, YARN?

Best,
Aljoscha
> On 13. Jul 2017, at 13:56, Fabian Wollert <fa...@zalando.de> wrote:


Re: Flink Elasticsearch Connector: Lucene Error message

Posted by Fabian Wollert <fa...@zalando.de>.
Hi Timo, Hi Gordon,

thx for the reply! I checked the connection from both clusters to each
other, and I can telnet to port 9300, so I think the connection is not an
issue here.

We are currently using a custom Elasticsearch connector in our live
environment, which relied on some extra libs deployed on the cluster. I
found one Lucene lib and deleted it (since all dependencies should be in
the Flink job jar), but unfortunately that did not help either ...
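
To see where the stray version is loaded from, a tiny diagnostic along these lines could be run in the cluster JVM (an illustrative sketch; the class name to probe, e.g. org.apache.lucene.util.Version, is passed as the argument):

```java
// Prints which jar (or directory) a given class was actually loaded from,
// which helps pin down the source of a classpath conflict.
public class WhichJar {

    /** Code-source location of a class, or a marker for bootstrap classes. */
    public static String locate(Class<?> clazz) {
        java.security.CodeSource src = clazz.getProtectionDomain().getCodeSource();
        return src == null ? "<bootstrap classpath>" : src.getLocation().toString();
    }

    public static void main(String[] args) throws ClassNotFoundException {
        // On the cluster one would pass e.g. org.apache.lucene.util.Version here:
        String name = args.length > 0 ? args[0] : WhichJar.class.getName();
        System.out.println(name + " loaded from: " + locate(Class.forName(name)));
    }
}
```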

Cheers
Fabian


--

Fabian Wollert
Data Engineering
Technology

E-Mail: fabian.wollert@zalando.de
Location: ZMAP <ht...@zalando.de>

2017-07-13 13:46 GMT+02:00 Timo Walther <tw...@apache.org>:


Re: Flink Elasticsearch Connector: Lucene Error message

Posted by Timo Walther <tw...@apache.org>.
Hi Fabian,

I'm looping in Gordon. Maybe he knows what's happening here.

Regards,
Timo


On 13.07.17 at 13:26, Fabian Wollert wrote: