Posted to dev@spark.apache.org by Martin Grund <ma...@databricks.com.INVALID> on 2022/06/03 17:45:17 UTC

[DISCUSS] SPIP: Spark Connect - A client and server interface for Apache Spark.

Hi Everyone,

We would like to start a discussion on the "Spark Connect" proposal. Please
find the links below:

*JIRA* - https://issues.apache.org/jira/browse/SPARK-39375
*SPIP Document* -
https://docs.google.com/document/d/1Mnl6jmGszixLW4KcJU5j9IgpG9-UabS0dcM6PM2XGDc/edit#heading=h.wmsrrfealhrj

*Excerpt from the document:*

We propose to extend Apache Spark by building on the DataFrame API and the
underlying unresolved logical plans. The DataFrame API is widely used and
makes it very easy to iteratively express complex logic. We will introduce
Spark Connect, a remote option of the DataFrame API that separates the
client from the Spark server. With Spark Connect, Spark will become
decoupled, allowing for built-in remote connectivity: The decoupled client
SDK can be used to run interactive data exploration and connect to the
server for DataFrame operations.

Spark Connect will benefit Spark developers in different ways: The
decoupled architecture will result in improved stability, as clients are
separated from the driver. From the Spark Connect client perspective, Spark
will be (almost) versionless, and thus enable seamless upgradability, as
server APIs can evolve without affecting the client API. The decoupled
client-server architecture can be leveraged to build close integrations
with local developer tooling. Finally, separating the client process from
the Spark server process will improve Spark’s overall security posture by
avoiding the tight coupling of the client inside the Spark runtime
environment.

Spark Connect will strengthen Spark’s position as the modern unified engine
for large-scale data analytics and expand applicability to use cases and
developers we could not reach with the current setup: Spark will become
ubiquitously usable as the DataFrame API can be used with (almost) any
programming language.
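To make the decoupling concrete, here is a hypothetical sketch of how such a client SDK could build up a query description locally and defer all execution to the server. The class and method names here are illustrative only, not part of the proposal:

```python
# Hypothetical sketch (RemoteDataFrame and its methods are invented for
# illustration, not the proposed Spark Connect API): the client builds a
# description of the query locally and would only ship that description
# to the Spark server when an action runs.
from dataclasses import dataclass, field


@dataclass
class RemoteDataFrame:
    """Client-side handle: holds a plan description, no data, no JVM."""
    plan: list = field(default_factory=list)

    def filter(self, condition: str) -> "RemoteDataFrame":
        # Each transformation returns a new handle with an extended plan.
        return RemoteDataFrame(self.plan + [("filter", condition)])

    def select(self, *columns: str) -> "RemoteDataFrame":
        return RemoteDataFrame(self.plan + [("select", columns)])

    def explain(self) -> str:
        # A real client would send this payload over the wire; here we
        # just render the accumulated plan.
        return " -> ".join(op for op, _ in self.plan)


df = RemoteDataFrame([("read", "events")])
result = df.filter("ts > '2022-06-01'").select("user", "ts")
print(result.explain())  # read -> filter -> select
```

In a real client only actions would trigger an RPC; transformations stay purely client-side, which is what makes the thin, versionable client possible.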

We would like to start a discussion on the document and any feedback is
welcome!

Thanks a lot in advance,
Martin

Re: [DISCUSS] SPIP: Spark Connect - A client and server interface for Apache Spark.

Posted by Martin Grund <ma...@databricks.com.INVALID>.
On Tue, Jun 7, 2022 at 3:54 PM Steve Loughran <st...@cloudera.com.invalid>
wrote:

>
>
> On Fri, 3 Jun 2022 at 18:46, Martin Grund
> <ma...@databricks.com.invalid> wrote:
>
>> [...]
>>
>
> one key finding on distributed systems, ever since Nelson first did RPC in
> 1981, is that "seamless upgradability" is usually an unrealised vision,
> especially if things like serialized Java/Spark objects are part of the
> payload.
>
> if it is a goal, then the tests to validate the versioning would have to
> be a key deliverable: for example, test modules that run old client
> versions against the current server.
>
> This is particularly a risk with a design which proposes serialising
> logical plans; it may be hard to change planning in future.
>
> Will the protocol include something similar to the DXL plan language
> implemented in Greenplum's Orca query optimizer? That's an
> under-appreciated piece of work. If the goal of the protocol is to be long
> lived, it is a design worth considering, not just for its portability but
> because it lets people work on query optimisation as a service.
>
>
In the prototype I've built, I'm not actually using the fully specified
logical plans that Spark uses for query execution before optimization, but
rather something closer to the parse plans of a SQL query. The parse plans
follow the relational algebra more closely and are much less likely to
change than the actual underlying logical plan operators. The goal is not
to build an endpoint that can receive optimized plans and directly execute
them.

For example, all attributes in the plans are referenced as unresolved
attributes, and the same is true for functions. This delegates the
responsibility for name resolution etc. to the existing implementation,
which we are not going to touch, instead of trying to replicate it. It is
still possible to provide early feedback to the user because one can always
analyze the specific sub-plan.
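A toy illustration of the point above (the dict-based plan and catalog are invented for this sketch, not the prototype's actual wire format): the client ships only bare, unresolved names, and resolution happens against the server's catalog.

```python
# Illustrative sketch: the client sends unresolved attribute names only;
# the server resolves them against its own catalog, so the client needs
# no knowledge of schemas or resolution rules.
def resolve(plan, catalog):
    """Server side: map unresolved attribute names to concrete columns."""
    table = plan["table"]
    schema = catalog[table]  # raises KeyError for an unknown table
    resolved = []
    for name in plan["attributes"]:
        if name not in schema:
            raise ValueError(f"cannot resolve '{name}' in table '{table}'")
        resolved.append((name, schema[name]))
    return resolved


catalog = {"events": {"user": "string", "ts": "timestamp"}}
client_plan = {"table": "events", "attributes": ["user", "ts"]}  # names only
print(resolve(client_plan, catalog))
```

The early-feedback point corresponds to calling such a resolution step on a sub-plan before the full query is submitted.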

Please let me know what you think.


>
> [1]. Orca: A Modular Query Optimizer Architecture for Big Data
>
>  https://15721.courses.cs.cmu.edu/spring2017/papers/15-optimizer2/p337-soliman.pdf
>
>
>> Spark Connect will strengthen Spark’s position as the modern unified
>> engine for large-scale data analytics and expand applicability to use cases
>> and developers we could not reach with the current setup: Spark will become
>> ubiquitously usable as the DataFrame API can be used with (almost) any
>> programming language.
>>
> That's a marketing comment, not a technical one. Best left out of ASF
> docs.
>

Re: [DISCUSS] SPIP: Spark Connect - A client and server interface for Apache Spark.

Posted by Steve Loughran <st...@cloudera.com.INVALID>.
On Fri, 3 Jun 2022 at 18:46, Martin Grund
<ma...@databricks.com.invalid> wrote:

> [...]
>

one key finding on distributed systems, ever since Nelson first did RPC in
1981, is that "seamless upgradability" is usually an unrealised vision,
especially if things like serialized Java/Spark objects are part of the
payload.

if it is a goal, then the tests to validate the versioning would have to be
a key deliverable: for example, test modules that run old client versions
against the current server.

This is particularly a risk with a design which proposes serialising
logical plans; it may be hard to change planning in future.
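One minimal shape such a versioning test could take, sketched in plain Python rather than Spark's actual stack (the payloads and field names are invented): the server decodes only the fields it knows, so payloads from older and newer clients are accepted rather than rejected.

```python
# Sketch of a forward/backward-compatibility check: a decoder that keeps
# known fields and silently skips unknown ones, the way protobuf-style
# protocols tolerate schema evolution.
def decode_plan(payload: dict) -> dict:
    """Server decoder: read the fields it knows, skip the rest."""
    known = {"operation", "table"}
    return {k: v for k, v in payload.items() if k in known}


old_client = {"operation": "read", "table": "events"}
new_client = {"operation": "read", "table": "events",
              "push_down_hint": True}  # field added by a newer client

# Both decode to the same known subset; the unknown field is not fatal.
assert decode_plan(old_client) == decode_plan(new_client)
print("cross-version payloads decode identically")
```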

Will the protocol include something similar to the DXL plan language
implemented in Greenplum's Orca query optimizer? That's an
under-appreciated piece of work. If the goal of the protocol is to be long
lived, it is a design worth considering, not just for its portability but
because it lets people work on query optimisation as a service.


[1]. Orca: A Modular Query Optimizer Architecture for Big Data
 https://15721.courses.cs.cmu.edu/spring2017/papers/15-optimizer2/p337-soliman.pdf


> Spark Connect will strengthen Spark’s position as the modern unified
> engine for large-scale data analytics and expand applicability to use cases
> and developers we could not reach with the current setup: Spark will become
> ubiquitously usable as the DataFrame API can be used with (almost) any
> programming language.
That's a marketing comment, not a technical one. Best left out of ASF docs.

Re: [DISCUSS] SPIP: Spark Connect - A client and server interface for Apache Spark.

Posted by Martin Grund <ma...@databricks.com.INVALID>.
Support for UDFs would work in the same way as they work today. The
closures are serialized on the client and sent via the driver to the worker.

While there is no difference in the execution of the UDF, there can be
potential challenges with the dependencies required for execution. This is
true both for Python and Scala. I would like to avoid bringing dependency
management into this SPIP, and I believe this can be solved in principle by
explicitly adding the JARs for the dependency so that they are available on
the classpath.

In its current form, the SPIP does not propose to add new language support
for UDFs, but in theory it becomes possible to do so as long as closures
can be serialized either as code or binary and dynamically loaded on the
other side.
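The closure path described above can be illustrated in plain Python. Note that the standard pickle module cannot serialize arbitrary nested closures (which is why Spark uses cloudpickle for Python UDFs); as a stand-in, this sketch closes over a value with functools.partial on a top-level function, which pickles fine:

```python
# Sketch of the serialize-on-client / execute-on-worker flow, in one
# process. Real Spark ships cloudpickle-serialized bytes via the driver
# to the executors; here both sides are simulated locally.
import pickle
from functools import partial


def over(threshold, x):
    """Top-level function so the serialized reference is resolvable."""
    return x > threshold


payload = pickle.dumps(partial(over, 10))  # "client": serialize the closure
udf = pickle.loads(payload)                # "worker": deserialize and run it
print([x for x in [5, 10, 15] if udf(x)])  # [15]
```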

I hope this answers the question.

Thanks
Martin

On Sat 4. Jun 2022 at 05:04 Koert Kuipers <ko...@tresata.com> wrote:

> how would scala udfs be supported in this?
>
> On Fri, Jun 3, 2022 at 1:52 PM Martin Grund
> <ma...@databricks.com.invalid> wrote:
>
>> [...]

Re: [DISCUSS] SPIP: Spark Connect - A client and server interface for Apache Spark.

Posted by Koert Kuipers <ko...@tresata.com>.
how would scala udfs be supported in this?

On Fri, Jun 3, 2022 at 1:52 PM Martin Grund
<ma...@databricks.com.invalid> wrote:

> [...]

-- 
CONFIDENTIALITY NOTICE: This electronic communication and any files 
transmitted with it are confidential, privileged and intended solely for 
the use of the individual or entity to whom they are addressed. If you are 
not the intended recipient, you are hereby notified that any disclosure, 
copying, distribution (electronic or otherwise) or forwarding of, or the 
taking of any action in reliance on the contents of this transmission is 
strictly prohibited. Please notify the sender immediately by e-mail if you 
have received this email by mistake and delete this email from your system.


Is it necessary to print this email? If you care about the environment 
like we do, please refrain from printing emails. It helps to keep the 
environment forested and litter-free.

Re: [DISCUSS] SPIP: Spark Connect - A client and server interface for Apache Spark.

Posted by Hyukjin Kwon <gu...@gmail.com>.
What I like most about this SPIP:
1. We could leverage it to dispatch the driver to the cluster (e.g.,
yarn-cluster or K8s cluster mode) with an interactive shell, which Spark
currently doesn't support.
2. It makes it easier for other languages to add support, especially given
that we have talked about other languages like Go or .NET in the past.

While for 1. I don't think we can (or should) implement all of the API, and
for 2. the details would have to be discussed thoroughly, I think it is a
good idea to have this layer.



On Mon, 6 Jun 2022 at 17:47, Martin Grund
<ma...@databricks.com.invalid> wrote:

> Hi Mich,
>
> I think I was not clear enough in the document. The proposal is not about
> connecting Spark to other engines but about connecting to Spark from
> other clients remotely (without using SQL).
>
> Please let me know if that clarifies things or if I can provide additional
> context.
>
> Thanks
> Martin
>
> On Sun 5. Jun 2022 at 16:38 Mich Talebzadeh <mi...@gmail.com>
> wrote:
>
>> [...]
>

Re: [DISCUSS] SPIP: Spark Connect - A client and server interface for Apache Spark.

Posted by Martin Grund <ma...@databricks.com.INVALID>.
Hi Mich,

I think I was not clear enough in the document. The proposal is not about
connecting Spark to other engines but about connecting to Spark from other
clients remotely (without using SQL).

Please let me know if that clarifies things or if I can provide additional
context.

Thanks
Martin

On Sun 5. Jun 2022 at 16:38 Mich Talebzadeh <mi...@gmail.com>
wrote:

> Hi,
>
> Whilst I concur that there is a need for a client-server architecture,
> that technology has been around for over 30 years. Moreover, the current
> Spark has very efficient connections via JDBC to various databases. In
> some cases the API to various databases, for example Google BigQuery, is
> very efficient. I am not sure what this proposal is trying to address?
>
> HTH
>
> On Fri, 3 Jun 2022 at 18:46, Martin Grund
> <ma...@dd.com.invalid> wrote:
>
>> [...]
>

Re: [DISCUSS] SPIP: Spark Connect - A client and server interface for Apache Spark.

Posted by Mich Talebzadeh <mi...@gmail.com>.
Hi,

Whilst I concur that there is a need for a client-server architecture, that
technology has been around for over 30 years. Moreover, the current Spark
has very efficient connections via JDBC to various databases. In some cases
the API to various databases, for example Google BigQuery, is very
efficient. I am not sure what this proposal is trying to address?

HTH

On Fri, 3 Jun 2022 at 18:46, Martin Grund
<ma...@dd.com.invalid> wrote:

> [...]
-- 



   view my Linkedin profile
<https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>


 https://en.everybodywiki.com/Mich_Talebzadeh



*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.