Posted to hdfs-user@hadoop.apache.org by Stuti Awasthi <st...@hcl.com> on 2012/01/18 13:25:52 UTC

Apply ACL on file level in Hadoop Cluster

Hi All,
I want to apply ACLs at the file level in my Hadoop cluster, i.e. on every file present in HDFS.
I tried mounting HDFS using fuse_dfs, and that works fine. Since HDFS is now a mounted drive, I thought it would be easy to apply ACLs the way we do on a normal directory in Linux, but I was wrong. It seems FUSE does not support ACLs.

How can I achieve this? Either approach works for me: a configuration file setting, or applying ACLs on the mounted drive.

Regards,
Stuti Awasthi

::DISCLAIMER::
-----------------------------------------------------------------------------------------------------------------------

The contents of this e-mail and any attachment(s) are confidential and intended for the named recipient(s) only.
It shall not attach any liability on the originator or HCL or its affiliates. Any views or opinions presented in
this email are solely those of the author and may not necessarily reflect the opinions of HCL or its affiliates.
Any form of reproduction, dissemination, copying, disclosure, modification, distribution and / or publication of
this message without the prior written consent of the author of this e-mail is strictly prohibited. If you have
received this email in error please delete it and notify the sender immediately. Before opening any mail and
attachments please check them for viruses and defect.

-----------------------------------------------------------------------------------------------------------------------

RE: Apply ACL on file level in Hadoop Cluster

Posted by Stuti Awasthi <st...@hcl.com>.
Any ideas??


Re: Apply ACL on file level in Hadoop Cluster

Posted by Tomer Shiran <ts...@maprtech.com>.
I would encourage you to take a look at the MapR distribution (www.mapr.com).
In addition to the standard Hadoop FileSystem API, you will be able to
mount the cluster over NFS without installing any software on the client.
It supports standard POSIX/UNIX ACLs, including the ability to do what you
described (chmod, chown, etc.).




-- 
Tomer Shiran
Director of Product Management | MapR Technologies | 650-804-8657

Re: Apply ACL on file level in Hadoop Cluster

Posted by Joey Echeverria <jo...@cloudera.com>.
I'm pretty sure standard FS ACLs won't work because fuse_dfs doesn't
provide xattr support. The way I would probably handle this is with
Hoop (httpfs) or webhdfs: put another web server in front of them to
proxy requests and implement the ACL filtering rules there. It wouldn't
support WebDAV out of the box, but if you really needed that, you could
probably make it work. As an FYI, libhdfs (which fuse_dfs is based on)
is likely to be rewritten to use webhdfs instead of JNI. There's an
open JIRA for it, but the number escapes me at the moment.

-Joey
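[Editor's note: the filtering step of the proxy approach described above can be sketched roughly as follows. This is an illustration, not Hadoop code: the ACL table, paths, and user names are hypothetical, and a real deployment would run as an HTTP proxy between clients and webhdfs, applying this filter to each upstream `GET /webhdfs/v1/<dir>?op=LISTSTATUS` response before returning it.]

```python
# Hypothetical per-path ACL table: path -> set of users allowed to see it.
# In a real proxy this would come from LDAP or a policy store.
ACLS = {
    "/data/A.txt": {"john"},
    "/data/B.txt": {"bella"},
}

def filter_liststatus(user, directory, response):
    """Drop FileStatus entries the user has no ACL entry for.

    `response` follows the WebHDFS LISTSTATUS JSON layout:
    {"FileStatuses": {"FileStatus": [{"pathSuffix": ...}, ...]}}
    """
    statuses = response["FileStatuses"]["FileStatus"]
    visible = [
        st for st in statuses
        if user in ACLS.get(directory.rstrip("/") + "/" + st["pathSuffix"], set())
    ]
    return {"FileStatuses": {"FileStatus": visible}}

# Simulated upstream webhdfs response for /data: both files listed.
upstream = {"FileStatuses": {"FileStatus": [
    {"pathSuffix": "A.txt"}, {"pathSuffix": "B.txt"}]}}

# Each user sees only the entries their ACL grants.
print(filter_liststatus("john", "/data", upstream))
```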




-- 
Joseph Echeverria
Cloudera, Inc.
443.305.9434

RE: Apply ACL on file level in Hadoop Cluster

Posted by Stuti Awasthi <st...@hcl.com>.
Hi Joey,

Let me explain my use case in detail. I will be storing files in HDFS in different directory structures, and multiple users will be able to access those files.
My initial plan was to mount HDFS, apply ACLs and LDAP on the mounted drive, and expose the URLs to the users via WebDAV, so users could access HDFS either as a mounted drive or through an HTTP URL.

For example, suppose a directory in HDFS contains two files, A.txt and B.txt, and there are two users, John and Bella. Say John has access permission on A.txt and Bella on B.txt. Each user will have an HTTP URL to access the HDFS directory. When John and Bella access the WebDAV URL or the mounted drive, I want them to see only the files they have access to.

I do not want users who have directory-level access to see all the contents inside if they lack permission on the individual items.

I thought of attaining this with ACLs. Is there any other way to achieve this goal?
Any ideas and suggestions are welcome.
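[Editor's note: a quick model of why this requirement is hard with plain Unix mode bits, which is all HDFS enforces. A classic permission check only distinguishes owner, group, and other, so per-user grants like "John reads A.txt, Bella reads B.txt" need a dedicated group (or owner) per file. A rough sketch, with hypothetical user and group names:]

```python
import stat

def may_read(user, file_owner, file_group, mode, user_groups):
    """Classic Unix read-permission check, as HDFS applies it:
    owner bits, then group bits, then other bits."""
    if user == file_owner:
        return bool(mode & stat.S_IRUSR)
    if file_group in user_groups.get(user, set()):
        return bool(mode & stat.S_IRGRP)
    return bool(mode & stat.S_IROTH)

# Workaround within plain mode bits: give each file its own group.
# john is in "grp_a" (owns access to A.txt), bella in "grp_b" (B.txt);
# both group names are hypothetical.
groups = {"john": {"grp_a"}, "bella": {"grp_b"}}
mode = 0o640  # rw-r----- : owner and group may read, others may not

print(may_read("john", "hdfs", "grp_a", mode, groups))   # True
print(may_read("bella", "hdfs", "grp_a", mode, groups))  # False
```

This scales poorly (one group per file), which is why the thread turns to filtering in a layer above HDFS instead.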


Re: Apply ACL on file level in Hadoop Cluster

Posted by Joey Echeverria <jo...@cloudera.com>.
HDFS only supports Unix-style read, write, and execute permissions. What
style of ACLs do you want to apply?

-Joey




-- 
Joseph Echeverria
Cloudera, Inc.
443.305.9434

RE: Apply ACL on file level in Hadoop Cluster

Posted by Stuti Awasthi <st...@hcl.com>.
Thanks Alex,
Yes, I want to apply ACLs on every file/directory created in HDFS. Is there absolutely no way to achieve that, either through conf files or on the mounted drive?





Re: Apply ACL on file level in Hadoop Cluster

Posted by alo alt <wg...@googlemail.com>.
Stuti,

HDFS does not support ACLs. I assume you mean an ACL per file / directory / dataset? The only project I know of that supports that is Accumulo (http://wiki.apache.org/incubator/AccumuloProposal).

- Alex 

--
Alexander Lorenz
http://mapredit.blogspot.com
