Posted to users@nifi.apache.org by Mike Sofen <ms...@runbox.com> on 2020/06/26 01:45:26 UTC

initiating a machine learning script on a remote server

I've been prototyping various functionality on NiFi, initially on a Windows
laptop and now on a single GCP Linux instance (for now), using the more basic
processors for files and databases.  It's really a superb platform.


What I now need to solve is how to fire a Python machine learning script that
lives on another CPU/GPU-equipped instance, as part of a pipeline that
detects a new file to process, sends the file name/location to the remote
server, and receives the results of the processing back from the server for
further action.  We need maximum performance and robustness from this step
of the processing.


I've read a number of posts on this, and they point to using the
ExecuteStreamCommand processor (rather than ExecuteProcess, since it accepts
incoming flowfiles), but none seem to show how to configure the processor to
point to a remote server and execute a script that exists on that server,
with arguments/variables passed in with the call.  These servers will all be
GCP instances. To keep things simple, let's ignore security for the moment
and assume I own both servers.


Can someone point me in the right direction? Many thanks!


Mike Sofen


Re: initiating a machine learning script on a remote server

Posted by Darren Govoni <da...@ontrenet.com>.
The quick answer is that you could just run an ssh command from NiFi to execute the script on the remote machine.
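[Editor's note: as a sketch of that approach, the helper below builds and runs the ssh invocation. The user, host, and script path are hypothetical placeholders, and it assumes passwordless key-based auth is already configured between the two instances:]

```python
import subprocess

def build_ssh_command(host, script, *args, user="nifi"):
    """Build the argv for running a remote Python script over ssh.

    Assumes key-based auth is already set up for user@host.
    """
    # ssh passes everything after the host to the remote shell as one command
    remote_cmd = " ".join(["python3", script, *args])
    return ["ssh", f"{user}@{host}", remote_cmd]

def run_remote(host, script, *args, user="nifi"):
    """Run the remote script and return its stdout; raises on nonzero exit."""
    cmd = build_ssh_command(host, script, *args, user=user)
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout
```

The same effect inside NiFi comes from putting ssh in ExecuteStreamCommand's Command Path and the `user@host python3 script args` pieces in its Command Arguments, so the remote script's stdout becomes the flowfile content.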

If you need the flowfiles themselves to go to the remote machine, NiFi supports Remote Process Groups (site-to-site).
