Posted to users@kafka.apache.org by "Marco B." <ma...@gmail.com> on 2016/06/01 12:54:03 UTC

ClosedChannelException when trying to read from remote Kafka in AWS

Hello everyone,

I am trying to set up MirrorMaker between my company's local cluster and
another cluster in AWS, for cross-cluster replication. We have set up a
VPN between these two clusters, and as far as I can see, everything works
correctly: I can ping the nodes and telnet into them without any issues.

Now, when I run the following command from the local cluster, pointing at a
Zookeeper instance located in AWS (10.1.83.6:2181), in order to read the
topic "test":

~/kafka_2.11-0.8.2.2$ ./bin/kafka-console-consumer.sh --zookeeper
10.1.83.6:2181 --topic test --from-beginning

A bunch of errors comes up:

WARN Fetching topic metadata with correlation id 1 for topics [Set(test)]
from broker [id:2,host:ip-10-1-83-5.ec2.internal,port:9092] failed
(kafka.client.ClientUtils$)
java.nio.channels.ClosedChannelException
    at kafka.network.BlockingChannel.send(BlockingChannel.scala:100)
    at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:73)
    at
kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:72)
    at kafka.producer.SyncProducer.send(SyncProducer.scala:113)
    at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:58)
    at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:93)
    at
kafka.consumer.ConsumerFetcherManager$LeaderFinderThread.doWork(ConsumerFetcherManager.scala:66)
    at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
WARN Fetching topic metadata with correlation id 1 for topics [Set(test)]
from broker [id:3,host:ip-10-1-83-6.ec2.internal,port:9092] failed
(kafka.client.ClientUtils$)

As far as I know, this is because Zookeeper has registered a host/port for
each Kafka broker, and these advertised addresses need to be consistent
with what clients use to connect, as described here (
https://cwiki.apache.org/confluence/display/KAFKA/FAQ).
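For readers hitting the same wall: the hostnames in the error log are EC2-internal names, and those names simply encode the node's private IP. A small Python sketch (the helper name is mine, not part of any Kafka tooling) illustrates the mapping:

```python
def ec2_internal_to_ip(hostname: str) -> str:
    """Map an EC2-internal hostname like 'ip-10-1-83-5.ec2.internal'
    to the private IP it encodes ('10.1.83.5')."""
    label = hostname.split(".", 1)[0]        # 'ip-10-1-83-5'
    return ".".join(label.split("-")[1:])    # drop the 'ip-' prefix, rejoin octets

print(ec2_internal_to_ip("ip-10-1-83-5.ec2.internal"))  # -> 10.1.83.5
```

So when a consumer outside AWS receives "ip-10-1-83-5.ec2.internal" in the metadata response, it cannot connect unless that name resolves (via DNS or /etc/hosts) to the VPN-reachable 10.1.83.5.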

I searched the web, and some people recommended changing the setting
"advertised.host.name" to either the public IP address from AWS (which we
cannot use) or a specific hostname. Considering that we have a VPN between
the clusters, the only choice left seems to be setting the hostname.

What should this value be? Is there anything else I need to know for this
kind of setup? Any suggestions?

Thanks in advance.

Kind regards,
Marco

Re: ClosedChannelException when trying to read from remote Kafka in AWS

Posted by Mudit Kumar <mu...@askme.in>.
Glad to hear that your issue is fixed now!




On 6/2/16, 2:11 PM, "Marco B." <ma...@gmail.com> wrote:

> [snip]


Re: ClosedChannelException when trying to read from remote Kafka in AWS

Posted by "Marco B." <ma...@gmail.com>.
Hi Mudit,

Thanks a lot for your answer.

However, today we set "advertised.host.name" on each Kafka broker to that
node's specific IP address. By default Kafka advertises the machine's
hostname, e.g. ip-10-1-83-5.ec2.internal; now it is "10.1.83.5" (of
course, we had to do this on each node).
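For reference, on a 0.8.x broker that change is a single line in config/server.properties; the values below are this thread's example IPs, so adjust per node:

```properties
# config/server.properties on the broker at 10.1.83.5
advertised.host.name=10.1.83.5
# advertised.port falls back to "port" (9092 here) if unset
```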

I hope that all these solutions will help others with the same issue.

Thanks a lot for your support!

Kind regards,
Marco


2016-06-02 5:40 GMT+02:00 Mudit Kumar <mu...@askme.in>:

> [snip]

Re: ClosedChannelException when trying to read from remote Kafka in AWS

Posted by Mudit Kumar <mu...@askme.in>.
I don't think you need a public hostname. I have a similar setup and it works perfectly fine.
What I would suggest is to change the hostname, make it persistent, and use the FQDN everywhere, with matching /etc/hosts entries both locally and on the AWS machines. That should fix your problem.
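Concretely, that means entries like the following on every machine that needs to reach the brokers (names and IPs taken from the error log earlier in this thread; substitute your own nodes):

```
# /etc/hosts: make the advertised EC2-internal names resolve over the VPN
10.1.83.5   ip-10-1-83-5.ec2.internal
10.1.83.6   ip-10-1-83-6.ec2.internal
```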




On 6/1/16, 8:54 PM, "Marco B." <ma...@gmail.com> wrote:

> [snip]


Re: ClosedChannelException when trying to read from remote Kafka in AWS

Posted by "Marco B." <ma...@gmail.com>.
Hi Ben,

Thanks for your answer. What if the instance does not have a public DNS
hostname?
These are all private nodes without public or Elastic IPs, so I don't know
what to set.

Marco

2016-06-01 15:09 GMT+02:00 Ben Davison <be...@7digital.com>:

> [snip]

Re: ClosedChannelException when trying to read from remote Kafka in AWS

Posted by Ben Davison <be...@7digital.com>.
Hi Marco,

We use the public DNS hostname that you can get from the AWS metadata
service.

Thanks,

Ben

On Wed, Jun 1, 2016 at 1:54 PM, Marco B. <ma...@gmail.com> wrote:

> [snip]

-- 


This email, including attachments, is private and confidential. If you have 
received this email in error please notify the sender and delete it from 
your system. Emails are not secure and may contain viruses. No liability 
can be accepted for viruses that might be transferred by this email or any 
attachment. Any unauthorised copying of this message or unauthorised 
distribution and publication of the information contained herein are 
prohibited.

7digital Limited. Registered office: 69 Wilson Street, London EC2A 2BB.
Registered in England and Wales. Registered No. 04843573.