Posted to user@predictionio.apache.org by infoquest india <in...@gmail.com> on 2017/04/03 11:31:15 UTC

Error while pio status

Hi

I am running pio status and I am getting this error:


SLF4J: Class path contains multiple SLF4J bindings.

SLF4J: Found binding in
[jar:file:/home/aml/pio/PredictionIO-0.11.0-SNAPSHOT/lib/spark/pio-data-hdfs-assembly-0.11.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: Found binding in
[jar:file:/home/aml/pio/PredictionIO-0.11.0-SNAPSHOT/lib/pio-assembly-0.11.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
explanation.

SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]

[INFO] [Management$] Inspecting PredictionIO...

[INFO] [Management$] PredictionIO 0.11.0-SNAPSHOT is installed at
/home/aml/pio/PredictionIO-0.11.0-SNAPSHOT

[INFO] [Management$] Inspecting Apache Spark...

[INFO] [Management$] Apache Spark is installed at None

[INFO] [Management$] Apache Spark 1.6.3 detected (meets minimum requirement
of 1.3.0)

[INFO] [Management$] Inspecting storage backend connections...

[INFO] [Storage$] Verifying Meta Data Backend (Source: ELASTICSEARCH)...

[INFO] [Storage$] Verifying Model Data Backend (Source: HDFS)...

[ERROR] [Storage$] Error initializing storage client for source HDFS

[ERROR] [Management$] Unable to connect to all storage backends
successfully.

The following shows the error message from the storage backend.


Data source HDFS was not properly initialized.
(org.apache.predictionio.data.storage.StorageClientException)


Dumping configuration of initialized storage backend sources.

Please make sure they are correct.


Source Name: ELASTICSEARCH; Type: elasticsearch; Configuration: HOME ->
/usr/local/elasticsearch, HOSTS -> some-master, PORTS -> 9300, CLUSTERNAME
-> infoquest, TYPE -> elasticsearch

Source Name: HDFS; Type: (error); Configuration: (error)
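
For reference, the model-data settings in pio-env.sh that select this HDFS source look roughly like the following (some-master is the unreplaced placeholder from the config shown later in this thread):

PIO_STORAGE_REPOSITORIES_MODELDATA_NAME=pio_model
PIO_STORAGE_REPOSITORIES_MODELDATA_SOURCE=HDFS
PIO_STORAGE_SOURCES_HDFS_TYPE=hdfs
PIO_STORAGE_SOURCES_HDFS_PATH=hdfs://some-master:9000/models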


Thanks
Gaurav

Re: Error while pio status

Posted by Pat Ferrel <pa...@occamsmachete.com>.
What version of PIO are you using?


On Apr 3, 2017, at 2:07 PM, infoquest india <in...@gmail.com> wrote:

I have made the corrections and it is giving the errors below:


[ERROR] [Console$] No storage backend implementation can be found (tried both org.apache.predictionio.data.storage.elasticsearch.ESModels and elasticsearch.ESModels) (org.apache.predictionio.data.storage.StorageClientException)

[ERROR] [Console$] Dumping configuration of initialized storage backend sources. Please make sure they are correct.

[ERROR] [Console$] Source Name: ELASTICSEARCH; Type: elasticsearch; Configuration: HOME -> /usr/local/elasticsearch, HOSTS ->PUBLIC IP(Can't reveal), PORTS -> 9300, CLUSTERNAME -> infoquest, TYPE -> elasticsearch



On checking port 9200, all seems OK:


{
  "status" : 200,
  "name" : "Dusk",
  "cluster_name" : "infoquest",
  "version" : {
    "number" : "1.7.5",
    "build_hash" : "00f95f4ffca6de89d68b7ccaf80d148f1f70e4d4",
    "build_timestamp" : "2016-02-02T09:55:30Z",
    "build_snapshot" : false,
    "lucene_version" : "4.10.4"
  },
  "tagline" : "You Know, for Search"
}
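
For reference, a check like the one above can be made with a plain HTTP request to the REST port, for example (the host is a placeholder):

curl http://your-es-host:9200/

Note that 9200 is Elasticsearch's REST port; the PORTS -> 9300 value in the PredictionIO config refers to the transport protocol port, which is a different listener.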

Thanks
Gaurav
http://www.infoquestsolutions.com
Turning Imagination To Reality
Skype:- infoquestsolutions
Gtalk:- infoquestindia

On Mon, Apr 3, 2017 at 10:34 PM, Pat Ferrel <pat@occamsmachete.com> wrote:
No, your pio-env.sh is changed and needs correction as shown below. I will correct any mention of aml PIO, which has been merged with Apache PIO since 0.10.0


On Apr 3, 2017, at 8:38 AM, infoquest india <infoquestindia@gmail.com> wrote:

Please check the instructions on actionml.com here:

http://actionml.com/docs/single_machine

That is all I have copied from. It also suggests the aml version of PredictionIO, which is also incorrect.


On Mon, 3 Apr 2017 at 8:41 PM, Pat Ferrel <pat@occamsmachete.com> wrote:
The data will come from HBase (or possibly JDBC, but that is not recommended); the model is always stored in Elasticsearch. The reason for storing the model in Elasticsearch is that the last step in the algorithm is performed by the ES query, which gives k-nearest neighbors based on cosine similarity. This is not possible with HDFS. We are not fetching things by ID; we are performing a mathematical operation on the model that fetches special things.

HDFS may be used for import/export but is not needed by the UR explicitly.

If you are using the setup instructions on actionml.com, I suggest you look through them again. It looks like you have tried things that were outside of those instructions.


#!/usr/bin/env bash

# PredictionIO Main Configuration
#
# This section controls core behavior of PredictionIO. It is very likely that
# you need to change these to fit your site.

# Safe config that will work if you expand your cluster later
SPARK_HOME=/usr/local/spark
ES_CONF_DIR=/usr/local/elasticsearch
HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop
HBASE_CONF_DIR=/usr/local/hbase/conf

# Filesystem paths where PredictionIO uses as block storage.
PIO_FS_BASEDIR=$HOME/.pio_store
PIO_FS_ENGINESDIR=$PIO_FS_BASEDIR/engines
PIO_FS_TMPDIR=$PIO_FS_BASEDIR/tmp

# PredictionIO Storage Configuration
#
# This section controls programs that make use of PredictionIO's built-in
# storage facilities.

# Storage Repositories

PIO_STORAGE_REPOSITORIES_METADATA_NAME=pio_meta
PIO_STORAGE_REPOSITORIES_METADATA_SOURCE=ELASTICSEARCH


PIO_STORAGE_REPOSITORIES_EVENTDATA_NAME=pio_event
PIO_STORAGE_REPOSITORIES_EVENTDATA_SOURCE=HBASE

# Need to use HDFS here instead of LOCALFS to account for future expansion
PIO_STORAGE_REPOSITORIES_MODELDATA_NAME=pio_model
# PIO_STORAGE_REPOSITORIES_MODELDATA_SOURCE=HDFS
PIO_STORAGE_REPOSITORIES_MODELDATA_SOURCE=ELASTICSEARCH


# Storage Data Sources, lower level than repos above, just a simple storage API
# to use

# Elasticsearch Example
PIO_STORAGE_SOURCES_ELASTICSEARCH_TYPE=elasticsearch
PIO_STORAGE_SOURCES_ELASTICSEARCH_HOME=/usr/local/elasticsearch
# the next line should match the cluster.name in elasticsearch.yml
PIO_STORAGE_SOURCES_ELASTICSEARCH_CLUSTERNAME=infoquest

# For single host Elasticsearch, may add hosts and ports later
# PIO_STORAGE_SOURCES_ELASTICSEARCH_HOSTS=some-master
PIO_STORAGE_SOURCES_ELASTICSEARCH_HOSTS=some-master  <—— put your DNS name or IP address for ES here
PIO_STORAGE_SOURCES_ELASTICSEARCH_PORTS=9300

# dummy models are stored here so use HDFS in case you later want to
# expand the Event and PredictionServers
PIO_STORAGE_SOURCES_HDFS_TYPE=hdfs
PIO_STORAGE_SOURCES_HDFS_PATH=hdfs://some-master:9000/models

# HBase Source config
PIO_STORAGE_SOURCES_HBASE_TYPE=hbase
PIO_STORAGE_SOURCES_HBASE_HOME=/usr/local/hbase
# Hbase single master config
# PIO_STORAGE_SOURCES_HBASE_HOSTS=some-master
PIO_STORAGE_SOURCES_HBASE_HOSTS=some-master   <—— put your DNS name or IP address for HBase here
PIO_STORAGE_SOURCES_HBASE_PORTS=0

# I don’t think this is used
PIO_STORAGE_SOURCES_LOCALFS_TYPE=localfs
PIO_STORAGE_SOURCES_FS_PATH=/mymodels <—— really? /mymodels at the root of the local disk?
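
For illustration only, here is a minimal sketch of how the placeholder lines above might look on a single machine once filled in (pio-master.example.com is a made-up hostname, not from this thread):

PIO_STORAGE_SOURCES_ELASTICSEARCH_HOSTS=pio-master.example.com
PIO_STORAGE_SOURCES_HBASE_HOSTS=pio-master.example.com
PIO_STORAGE_SOURCES_HDFS_PATH=hdfs://pio-master.example.com:9000/models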




On Apr 3, 2017, at 7:01 AM, infoquest india <infoquestindia@gmail.com> wrote:

Can we use HDFS or LocalFileSystem for the UR?

I am using a single-machine setup and changed my /etc/hosts file to point to the internal IP.
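
For context, an /etc/hosts entry of the kind described above typically looks like this (the address and name are example values only):

10.0.0.5   some-master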

Please find attached pio-env.sh.

One thing I am not clear about: is HDFS or Elasticsearch creating the issue?


Thanks
Gaurav
http://www.infoquestsolutions.com
Turning Imagination To Reality
Skype:- infoquestsolutions
Gtalk:- infoquestindia

On Mon, Apr 3, 2017 at 6:52 PM, Pat Ferrel <pat@occamsmachete.com> wrote:
If you are still using the UR you don’t need HDFS as a storage backend.

In the setup instructions, “some-master” is a placeholder where you enter the DNS name or IP address of your actual master machine running Elasticsearch. This can be a comma-separated list, with no spaces.
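
For example, a multi-host value would be written like this (the hostnames are made up):

PIO_STORAGE_SOURCES_ELASTICSEARCH_HOSTS=es1.example.com,es2.example.com,es3.example.com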

Can you share your pio-env.sh?


On Apr 3, 2017, at 4:31 AM, infoquest india <infoquestindia@gmail.com> wrote:

Hi 

I am using pio status i am getting error 


SLF4J: Class path contains multiple SLF4J bindings.

SLF4J: Found binding in [jar:file:/home/aml/pio/PredictionIO-0.11.0-SNAPSHOT/lib/spark/pio-data-hdfs-assembly-0.11.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: Found binding in [jar:file:/home/aml/pio/PredictionIO-0.11.0-SNAPSHOT/lib/pio-assembly-0.11.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.

SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]

[INFO] [Management$] Inspecting PredictionIO...

[INFO] [Management$] PredictionIO 0.11.0-SNAPSHOT is installed at /home/aml/pio/PredictionIO-0.11.0-SNAPSHOT

[INFO] [Management$] Inspecting Apache Spark...

[INFO] [Management$] Apache Spark is installed at None

[INFO] [Management$] Apache Spark 1.6.3 detected (meets minimum requirement of 1.3.0)

[INFO] [Management$] Inspecting storage backend connections...

[INFO] [Storage$] Verifying Meta Data Backend (Source: ELASTICSEARCH)...

[INFO] [Storage$] Verifying Model Data Backend (Source: HDFS)...

[ERROR] [Storage$] Error initializing storage client for source HDFS

[ERROR] [Management$] Unable to connect to all storage backends successfully.

The following shows the error message from the storage backend.



Data source HDFS was not properly initialized. (org.apache.predictionio.data.storage.StorageClientException)



Dumping configuration of initialized storage backend sources.

Please make sure they are correct.



Source Name: ELASTICSEARCH; Type: elasticsearch; Configuration: HOME -> /usr/local/elasticsearch, HOSTS -> some-master, PORTS -> 9300, CLUSTERNAME -> infoquest, TYPE -> elasticsearch

Source Name: HDFS; Type: (error); Configuration: (error)



Thanks
Gaurav



<pio-env.sh.rtf>

-- 
Thanks
Gaurav
http://www.infoquestsolutions.com
Turning Imagination To Reality
Skype:- infoquestsolutions
Gtalk:- infoquestindia



Re: Error while pio status

Posted by infoquest india <in...@gmail.com>.
I have made the corrections and it is giving the errors below:


[ERROR] [Console$] No storage backend implementation can be found (tried
both org.apache.predictionio.data.storage.elasticsearch.ESModels and
elasticsearch.ESModels)
(org.apache.predictionio.data.storage.StorageClientException)

[ERROR] [Console$] Dumping configuration of initialized storage backend
sources. Please make sure they are correct.

[ERROR] [Console$] Source Name: ELASTICSEARCH; Type: elasticsearch;
Configuration: HOME -> /usr/local/elasticsearch, HOSTS ->PUBLIC IP(Can't
reveal), PORTS -> 9300, CLUSTERNAME -> infoquest, TYPE -> elasticsearch


On checking port 9200, all seems OK:

{
  "status" : 200,
  "name" : "Dusk",
  "cluster_name" : "infoquest",
  "version" : {
    "number" : "1.7.5",
    "build_hash" : "00f95f4ffca6de89d68b7ccaf80d148f1f70e4d4",
    "build_timestamp" : "2016-02-02T09:55:30Z",
    "build_snapshot" : false,
    "lucene_version" : "4.10.4"
  },
  "tagline" : "You Know, for Search"
}


Thanks
Gaurav
http://www.infoquestsolutions.com
Turning Imagination To Reality
Skype:- infoquestsolutions
Gtalk:- infoquestindia

On Mon, Apr 3, 2017 at 10:34 PM, Pat Ferrel <pa...@occamsmachete.com> wrote:

> No, your pio-env.sh is changed and needs correction as shown below. I will
> correct any mention of aml PIO, which has been merged with Apache PIO since
> 0.10.0
>
>
> On Apr 3, 2017, at 8:38 AM, infoquest india <in...@gmail.com>
> wrote:
>
> Please check instructions on actionml.com here
>
> http://actionml.com/docs/single_machine
>
> Its all i have copied from. And it also suggest aml version of prediction
> which is also incorrect.
>
>
> On Mon, 3 Apr 2017 at 8:41 PM, Pat Ferrel <pa...@occamsmachete.com> wrote:
>
>> The data will come from HBase (or possibly JDBC but not recommended) the
>> model is always stored in Elasticsearch. The reason for storage in
>> Elasticsearch is that the last step in the algorithm is performed by the ES
>> query, with gives k-nearest neighbors based on cosine similarity. This is
>> not possible with HDFS. We are not fetching things by ID, we are performing
>> a mathematical operation on the model that fetches special things.
>>
>> HDFS may be used for import/export but is not needed by the UR explicitly.
>>
>> If you are using the setup instructions on actionml.com I suggest you
>> look through that again. It looks like you have tried things that were
>> outside of those instructions.
>>
>>
>> #!/usr/bin/env bash
>>
>> # PredictionIO Main Configuration
>> #
>> # This section controls core behavior of PredictionIO. It is very likely
>> that
>> # you need to change these to fit your site.
>>
>> # Safe config that will work if you expand your cluster later
>> SPARK_HOME=/usr/local/spark
>> ES_CONF_DIR=/usr/local/elasticsearch
>> HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop
>> HBASE_CONF_DIR=/usr/local/hbase/conf
>>
>> # Filesystem paths where PredictionIO uses as block storage.
>> PIO_FS_BASEDIR=*$HOME*/.pio_store
>> PIO_FS_ENGINESDIR=*$PIO_FS_BASEDIR*/engines
>> PIO_FS_TMPDIR=*$PIO_FS_BASEDIR*/tmp
>>
>> # PredictionIO Storage Configuration
>> #
>> # This section controls programs that make use of PredictionIO's built-in
>> # storage facilities.
>>
>> # Storage Repositories
>>
>> PIO_STORAGE_REPOSITORIES_METADATA_NAME=pio_meta
>> PIO_STORAGE_REPOSITORIES_METADATA_SOURCE=ELASTICSEARCH
>>
>>
>> PIO_STORAGE_REPOSITORIES_EVENTDATA_NAME=pio_event
>> PIO_STORAGE_REPOSITORIES_EVENTDATA_SOURCE=HBASE
>>
>> # Need to use HDFS here instead of LOCALFS to account for future expansion
>> PIO_STORAGE_REPOSITORIES_MODELDATA_NAME=pio_model
>> # PIO_STORAGE_REPOSITORIES_MODELDATA_SOURCE=HDFS
>> PIO_STORAGE_REPOSITORIES_MODELDATA_SOURCE= ELASTICSEARCH
>>
>>
>> # Storage Data Sources, lower level that repos above, just a simple
>> storage API
>> # to use
>>
>> # Elasticsearch Example
>> PIO_STORAGE_SOURCES_ELASTICSEARCH_TYPE=elasticsearch
>> PIO_STORAGE_SOURCES_ELASTICSEARCH_HOME=/usr/local/elasticsearch
>> # the next line should match the cluster.name in elasticsearch.yml
>> PIO_STORAGE_SOURCES_ELASTICSEARCH_CLUSTERNAME=infoquest
>>
>> # For single host Elasticsearch, may add hosts and ports later
>> # PIO_STORAGE_SOURCES_ELASTICSEARCH_HOSTS=some-master
>> PIO_STORAGE_SOURCES_ELASTICSEARCH_HOSTS=*some-master*  <—— put your DNS
>> name or IP address for ES here
>> PIO_STORAGE_SOURCES_ELASTICSEARCH_PORTS=9300
>>
>> # dummy models are stored here so use HDFS in case you later want to
>> # expand the Event and PredictionServers
>> PIO_STORAGE_SOURCES_HDFS_TYPE=hdfs
>> PIO_STORAGE_SOURCES_HDFS_PATH=hdfs://some-master:9000/models
>>
>> # HBase Source config
>> PIO_STORAGE_SOURCES_HBASE_TYPE=hbase
>> PIO_STORAGE_SOURCES_HBASE_HOME=/usr/local/hbase
>> # Hbase single master config
>> # PIO_STORAGE_SOURCES_HBASE_HOSTS=some-master
>> PIO_STORAGE_SOURCES_HBASE_HOSTS=*some-master*   <—— put your DNS name or
>> IP address for HBase here
>> PIO_STORAGE_SOURCES_HBASE_PORTS=0
>>
>> # I don’t think this is used
>> PIO_STORAGE_SOURCES_LOCALFS_TYPE=localfs
>> PIO_STORAGE_SOURCES_FS_PATH=/mymodels <—— really? /mymodels at the root
>> of the local disk?
>>
>>
>>
>>
>> On Apr 3, 2017, at 7:01 AM, infoquest india <in...@gmail.com>
>> wrote:
>>
>> Can we use HDFS or LocalFileSystem for UR ?
>>
>> I am using single machine setup and changed my /etc/hosts file to point
>> to internal IP.
>>
>> Please find attached pio-env,sh.
>>
>> One thing i am not clear what is creating issue HDFS or ElasticSearch ?
>>
>>
>> Thanks
>> Gaurav
>> http://www.infoquestsolutions.com
>> Turning Imagination To Reality
>> Skype:- infoquestsolutions
>> Gtalk:- infoquestindia
>>
>> On Mon, Apr 3, 2017 at 6:52 PM, Pat Ferrel <pa...@occamsmachete.com> wrote:
>>
>> If you are still using the UR you don’t need HDFS as a storage backend.
>>
>> In setup instructions, “some-master” is a placeholder where you actually
>> enter the DNS name or IP address of your actual master machine running
>> Elasticsearch. This can be a list comma separated, no spaces.
>>
>> Can you share your pio-env.sh
>>
>>
>> On Apr 3, 2017, at 4:31 AM, infoquest india <in...@gmail.com>
>> wrote:
>>
>> Hi
>>
>> I am using pio status i am getting error
>>
>>
>> SLF4J: Class path contains multiple SLF4J bindings.
>>
>> SLF4J: Found binding in [jar:file:/home/aml/pio/
>> PredictionIO-0.11.0-SNAPSHOT/lib/spark/pio-data-hdfs-
>> assembly-0.11.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>
>> SLF4J: Found binding in [jar:file:/home/aml/pio/
>> PredictionIO-0.11.0-SNAPSHOT/lib/pio-assembly-0.11.0-
>> SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>
>> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
>> explanation.
>>
>> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
>>
>> [INFO] [Management$] Inspecting PredictionIO...
>>
>> [INFO] [Management$] PredictionIO 0.11.0-SNAPSHOT is installed at
>> /home/aml/pio/PredictionIO-0.11.0-SNAPSHOT
>>
>> [INFO] [Management$] Inspecting Apache Spark...
>>
>> [INFO] [Management$] Apache Spark is installed at None
>>
>> [INFO] [Management$] Apache Spark 1.6.3 detected (meets minimum
>> requirement of 1.3.0)
>>
>> [INFO] [Management$] Inspecting storage backend connections...
>>
>> [INFO] [Storage$] Verifying Meta Data Backend (Source: ELASTICSEARCH)...
>>
>> [INFO] [Storage$] Verifying Model Data Backend (Source: HDFS)...
>>
>> [ERROR] [Storage$] Error initializing storage client for source HDFS
>>
>> [ERROR] [Management$] Unable to connect to all storage backends
>> successfully.
>>
>> The following shows the error message from the storage backend.
>>
>>
>> Data source HDFS was not properly initialized.
>> (org.apache.predictionio.data.storage.StorageClientException)
>>
>>
>> Dumping configuration of initialized storage backend sources.
>>
>> Please make sure they are correct.
>>
>>
>> Source Name: ELASTICSEARCH; Type: elasticsearch; Configuration: HOME ->
>> /usr/local/elasticsearch, HOSTS -> some-master, PORTS -> 9300, CLUSTERNAME
>> -> infoquest, TYPE -> elasticsearch
>>
>> Source Name: HDFS; Type: (error); Configuration: (error)
>>
>>
>> Thanks
>> Gaurav
>>
>>
>>
>> <pio-env.sh.rtf>
>>
>> --
> Thanks
> Gaurav
> http://www.infoquestsolutions.com
> Turning Imagination To Reality
> Skype:- infoquestsolutions
> Gtalk:- infoquestindia
>
>
>

Re: Error while pio status

Posted by Pat Ferrel <pa...@occamsmachete.com>.
No, your pio-env.sh is changed and needs correction as shown below. I will correct any mention of aml PIO, which has been merged with Apache PIO since 0.10.0


On Apr 3, 2017, at 8:38 AM, infoquest india <in...@gmail.com> wrote:

Please check the instructions on actionml.com here:

http://actionml.com/docs/single_machine

That is all I have copied from. It also suggests the aml version of PredictionIO, which is also incorrect.


On Mon, 3 Apr 2017 at 8:41 PM, Pat Ferrel <pat@occamsmachete.com> wrote:
The data will come from HBase (or possibly JDBC, but that is not recommended); the model is always stored in Elasticsearch. The reason for storing the model in Elasticsearch is that the last step in the algorithm is performed by the ES query, which gives k-nearest neighbors based on cosine similarity. This is not possible with HDFS. We are not fetching things by ID; we are performing a mathematical operation on the model that fetches special things.

HDFS may be used for import/export but is not needed by the UR explicitly.

If you are using the setup instructions on actionml.com, I suggest you look through them again. It looks like you have tried things that were outside of those instructions.


#!/usr/bin/env bash

# PredictionIO Main Configuration
#
# This section controls core behavior of PredictionIO. It is very likely that
# you need to change these to fit your site.

# Safe config that will work if you expand your cluster later
SPARK_HOME=/usr/local/spark
ES_CONF_DIR=/usr/local/elasticsearch
HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop
HBASE_CONF_DIR=/usr/local/hbase/conf

# Filesystem paths where PredictionIO uses as block storage.
PIO_FS_BASEDIR=$HOME/.pio_store
PIO_FS_ENGINESDIR=$PIO_FS_BASEDIR/engines
PIO_FS_TMPDIR=$PIO_FS_BASEDIR/tmp

# PredictionIO Storage Configuration
#
# This section controls programs that make use of PredictionIO's built-in
# storage facilities.

# Storage Repositories

PIO_STORAGE_REPOSITORIES_METADATA_NAME=pio_meta
PIO_STORAGE_REPOSITORIES_METADATA_SOURCE=ELASTICSEARCH


PIO_STORAGE_REPOSITORIES_EVENTDATA_NAME=pio_event
PIO_STORAGE_REPOSITORIES_EVENTDATA_SOURCE=HBASE

# Need to use HDFS here instead of LOCALFS to account for future expansion
PIO_STORAGE_REPOSITORIES_MODELDATA_NAME=pio_model
# PIO_STORAGE_REPOSITORIES_MODELDATA_SOURCE=HDFS
PIO_STORAGE_REPOSITORIES_MODELDATA_SOURCE=ELASTICSEARCH


# Storage Data Sources, lower level than repos above, just a simple storage API
# to use

# Elasticsearch Example
PIO_STORAGE_SOURCES_ELASTICSEARCH_TYPE=elasticsearch
PIO_STORAGE_SOURCES_ELASTICSEARCH_HOME=/usr/local/elasticsearch
# the next line should match the cluster.name in elasticsearch.yml
PIO_STORAGE_SOURCES_ELASTICSEARCH_CLUSTERNAME=infoquest

# For single host Elasticsearch, may add hosts and ports later
# PIO_STORAGE_SOURCES_ELASTICSEARCH_HOSTS=some-master
PIO_STORAGE_SOURCES_ELASTICSEARCH_HOSTS=some-master  <—— put your DNS name or IP address for ES here
PIO_STORAGE_SOURCES_ELASTICSEARCH_PORTS=9300

# dummy models are stored here so use HDFS in case you later want to
# expand the Event and PredictionServers
PIO_STORAGE_SOURCES_HDFS_TYPE=hdfs
PIO_STORAGE_SOURCES_HDFS_PATH=hdfs://some-master:9000/models

# HBase Source config
PIO_STORAGE_SOURCES_HBASE_TYPE=hbase
PIO_STORAGE_SOURCES_HBASE_HOME=/usr/local/hbase
# Hbase single master config
# PIO_STORAGE_SOURCES_HBASE_HOSTS=some-master
PIO_STORAGE_SOURCES_HBASE_HOSTS=some-master   <—— put your DNS name or IP address for HBase here
PIO_STORAGE_SOURCES_HBASE_PORTS=0

# I don’t think this is used
PIO_STORAGE_SOURCES_LOCALFS_TYPE=localfs
PIO_STORAGE_SOURCES_FS_PATH=/mymodels <—— really? /mymodels at the root of the local disk?




On Apr 3, 2017, at 7:01 AM, infoquest india <infoquestindia@gmail.com> wrote:

Can we use HDFS or LocalFileSystem for the UR?

I am using a single-machine setup and changed my /etc/hosts file to point to the internal IP.

Please find attached pio-env.sh.

One thing I am not clear about: is HDFS or Elasticsearch creating the issue?


Thanks
Gaurav
http://www.infoquestsolutions.com
Turning Imagination To Reality
Skype:- infoquestsolutions
Gtalk:- infoquestindia

On Mon, Apr 3, 2017 at 6:52 PM, Pat Ferrel <pat@occamsmachete.com> wrote:
If you are still using the UR you don’t need HDFS as a storage backend.

In setup instructions, “some-master” is a placeholder where you actually enter the DNS name or IP address of your actual master machine running Elasticsearch. This can be a list comma separated, no spaces.

Can you share your pio-env.sh


On Apr 3, 2017, at 4:31 AM, infoquest india <infoquestindia@gmail.com> wrote:

Hi 

I am using pio status i am getting error 


SLF4J: Class path contains multiple SLF4J bindings.

SLF4J: Found binding in [jar:file:/home/aml/pio/PredictionIO-0.11.0-SNAPSHOT/lib/spark/pio-data-hdfs-assembly-0.11.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: Found binding in [jar:file:/home/aml/pio/PredictionIO-0.11.0-SNAPSHOT/lib/pio-assembly-0.11.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.

SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]

[INFO] [Management$] Inspecting PredictionIO...

[INFO] [Management$] PredictionIO 0.11.0-SNAPSHOT is installed at /home/aml/pio/PredictionIO-0.11.0-SNAPSHOT

[INFO] [Management$] Inspecting Apache Spark...

[INFO] [Management$] Apache Spark is installed at None

[INFO] [Management$] Apache Spark 1.6.3 detected (meets minimum requirement of 1.3.0)

[INFO] [Management$] Inspecting storage backend connections...

[INFO] [Storage$] Verifying Meta Data Backend (Source: ELASTICSEARCH)...

[INFO] [Storage$] Verifying Model Data Backend (Source: HDFS)...

[ERROR] [Storage$] Error initializing storage client for source HDFS

[ERROR] [Management$] Unable to connect to all storage backends successfully.

The following shows the error message from the storage backend.



Data source HDFS was not properly initialized. (org.apache.predictionio.data.storage.StorageClientException)



Dumping configuration of initialized storage backend sources.

Please make sure they are correct.



Source Name: ELASTICSEARCH; Type: elasticsearch; Configuration: HOME -> /usr/local/elasticsearch, HOSTS -> some-master, PORTS -> 9300, CLUSTERNAME -> infoquest, TYPE -> elasticsearch

Source Name: HDFS; Type: (error); Configuration: (error)



Thanks
Gaurav



<pio-env.sh.rtf>

-- 
Thanks
Gaurav
http://www.infoquestsolutions.com
Turning Imagination To Reality
Skype:- infoquestsolutions
Gtalk:- infoquestindia



Re: Error while pio status

Posted by infoquest india <in...@gmail.com>.
Please check the instructions on actionml.com here:

http://actionml.com/docs/single_machine

That is all I have copied from. It also suggests the aml version of PredictionIO,
which is also incorrect.


On Mon, 3 Apr 2017 at 8:41 PM, Pat Ferrel <pa...@occamsmachete.com> wrote:

> The data will come from HBase (or possibly JDBC but not recommended) the
> model is always stored in Elasticsearch. The reason for storage in
> Elasticsearch is that the last step in the algorithm is performed by the ES
> query, with gives k-nearest neighbors based on cosine similarity. This is
> not possible with HDFS. We are not fetching things by ID, we are performing
> a mathematical operation on the model that fetches special things.
>
> HDFS may be used for import/export but is not needed by the UR explicitly.
>
> If you are using the setup instructions on actionml.com I suggest you
> look through that again. It looks like you have tried things that were
> outside of those instructions.
>
>
> #!/usr/bin/env bash
>
> # PredictionIO Main Configuration
> #
> # This section controls core behavior of PredictionIO. It is very likely
> that
> # you need to change these to fit your site.
>
> # Safe config that will work if you expand your cluster later
> SPARK_HOME=/usr/local/spark
> ES_CONF_DIR=/usr/local/elasticsearch
> HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop
> HBASE_CONF_DIR=/usr/local/hbase/conf
>
> # Filesystem paths where PredictionIO uses as block storage.
> PIO_FS_BASEDIR=*$HOME*/.pio_store
> PIO_FS_ENGINESDIR=*$PIO_FS_BASEDIR*/engines
> PIO_FS_TMPDIR=*$PIO_FS_BASEDIR*/tmp
>
> # PredictionIO Storage Configuration
> #
> # This section controls programs that make use of PredictionIO's built-in
> # storage facilities.
>
> # Storage Repositories
>
> PIO_STORAGE_REPOSITORIES_METADATA_NAME=pio_meta
> PIO_STORAGE_REPOSITORIES_METADATA_SOURCE=ELASTICSEARCH
>
>
> PIO_STORAGE_REPOSITORIES_EVENTDATA_NAME=pio_event
> PIO_STORAGE_REPOSITORIES_EVENTDATA_SOURCE=HBASE
>
> # Need to use HDFS here instead of LOCALFS to account for future expansion
> PIO_STORAGE_REPOSITORIES_MODELDATA_NAME=pio_model
> # PIO_STORAGE_REPOSITORIES_MODELDATA_SOURCE=HDFS
> PIO_STORAGE_REPOSITORIES_MODELDATA_SOURCE= ELASTICSEARCH
>
>
> # Storage Data Sources, lower level that repos above, just a simple
> storage API
> # to use
>
> # Elasticsearch Example
> PIO_STORAGE_SOURCES_ELASTICSEARCH_TYPE=elasticsearch
> PIO_STORAGE_SOURCES_ELASTICSEARCH_HOME=/usr/local/elasticsearch
> # the next line should match the cluster.name in elasticsearch.yml
> PIO_STORAGE_SOURCES_ELASTICSEARCH_CLUSTERNAME=infoquest
>
> # For single host Elasticsearch, may add hosts and ports later
> # PIO_STORAGE_SOURCES_ELASTICSEARCH_HOSTS=some-master
> PIO_STORAGE_SOURCES_ELASTICSEARCH_HOSTS=*some-master*  <—— put your DNS
> name or IP address for ES here
> PIO_STORAGE_SOURCES_ELASTICSEARCH_PORTS=9300
>
> # dummy models are stored here so use HDFS in case you later want to
> # expand the Event and PredictionServers
> PIO_STORAGE_SOURCES_HDFS_TYPE=hdfs
> PIO_STORAGE_SOURCES_HDFS_PATH=hdfs://some-master:9000/models
>
> # HBase Source config
> PIO_STORAGE_SOURCES_HBASE_TYPE=hbase
> PIO_STORAGE_SOURCES_HBASE_HOME=/usr/local/hbase
> # Hbase single master config
> # PIO_STORAGE_SOURCES_HBASE_HOSTS=some-master
> PIO_STORAGE_SOURCES_HBASE_HOSTS=*some-master*   <—— put your DNS name or
> IP address for HBase here
> PIO_STORAGE_SOURCES_HBASE_PORTS=0
>
> # I don’t think this is used
> PIO_STORAGE_SOURCES_LOCALFS_TYPE=localfs
> PIO_STORAGE_SOURCES_FS_PATH=/mymodels <—— really? /mymodels at the root
> of the local disk?
>
>
>
>
> On Apr 3, 2017, at 7:01 AM, infoquest india <in...@gmail.com>
> wrote:
>
> Can we use HDFS or LocalFileSystem for UR ?
>
> I am using single machine setup and changed my /etc/hosts file to point to
> internal IP.
>
> Please find attached pio-env,sh.
>
> One thing i am not clear what is creating issue HDFS or ElasticSearch ?
>
>
> Thanks
> Gaurav
> http://www.infoquestsolutions.com
> Turning Imagination To Reality
> Skype:- infoquestsolutions
> Gtalk:- infoquestindia
>
> On Mon, Apr 3, 2017 at 6:52 PM, Pat Ferrel <pa...@occamsmachete.com> wrote:
>
> If you are still using the UR you don’t need HDFS as a storage backend.
>
> In setup instructions, “some-master” is a placeholder where you actually
> enter the DNS name or IP address of your actual master machine running
> Elasticsearch. This can be a list comma separated, no spaces.
>
> Can you share your pio-env.sh
>
>
> On Apr 3, 2017, at 4:31 AM, infoquest india <in...@gmail.com>
> wrote:
>
> Hi
>
> I am using pio status i am getting error
>
>
> SLF4J: Class path contains multiple SLF4J bindings.
>
> SLF4J: Found binding in
> [jar:file:/home/aml/pio/PredictionIO-0.11.0-SNAPSHOT/lib/spark/pio-data-hdfs-assembly-0.11.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>
> SLF4J: Found binding in
> [jar:file:/home/aml/pio/PredictionIO-0.11.0-SNAPSHOT/lib/pio-assembly-0.11.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
> explanation.
>
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
>
> [INFO] [Management$] Inspecting PredictionIO...
>
> [INFO] [Management$] PredictionIO 0.11.0-SNAPSHOT is installed at
> /home/aml/pio/PredictionIO-0.11.0-SNAPSHOT
>
> [INFO] [Management$] Inspecting Apache Spark...
>
> [INFO] [Management$] Apache Spark is installed at None
>
> [INFO] [Management$] Apache Spark 1.6.3 detected (meets minimum
> requirement of 1.3.0)
>
> [INFO] [Management$] Inspecting storage backend connections...
>
> [INFO] [Storage$] Verifying Meta Data Backend (Source: ELASTICSEARCH)...
>
> [INFO] [Storage$] Verifying Model Data Backend (Source: HDFS)...
>
> [ERROR] [Storage$] Error initializing storage client for source HDFS
>
> [ERROR] [Management$] Unable to connect to all storage backends
> successfully.
>
> The following shows the error message from the storage backend.
>
>
> Data source HDFS was not properly initialized.
> (org.apache.predictionio.data.storage.StorageClientException)
>
>
> Dumping configuration of initialized storage backend sources.
>
> Please make sure they are correct.
>
>
> Source Name: ELASTICSEARCH; Type: elasticsearch; Configuration: HOME ->
> /usr/local/elasticsearch, HOSTS -> some-master, PORTS -> 9300, CLUSTERNAME
> -> infoquest, TYPE -> elasticsearch
>
> Source Name: HDFS; Type: (error); Configuration: (error)
>
>
> Thanks
> Gaurav
>
>
>
> <pio-env.sh.rtf>
>
> --
Thanks
Gaurav
http://www.infoquestsolutions.com
Turning Imagination To Reality
Skype:- infoquestsolutions
Gtalk:- infoquestindia

Re: Error while pio status

Posted by Pat Ferrel <pa...@occamsmachete.com>.
The data will come from HBase (or possibly JDBC, but that is not recommended); the model is always stored in Elasticsearch. The reason for storing the model in Elasticsearch is that the last step in the algorithm is performed by the ES query, which gives k-nearest neighbors based on cosine similarity. This is not possible with HDFS. We are not fetching things by ID; we are performing a mathematical operation on the model that fetches special things.

HDFS may be used for import/export but is not needed by the UR explicitly.

If you are using the setup instructions on actionml.com, I suggest you look through them again. It looks like you have tried things that were outside of those instructions.


#!/usr/bin/env bash

# PredictionIO Main Configuration
#
# This section controls core behavior of PredictionIO. It is very likely that
# you need to change these to fit your site.

# Safe config that will work if you expand your cluster later
SPARK_HOME=/usr/local/spark
ES_CONF_DIR=/usr/local/elasticsearch
HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop
HBASE_CONF_DIR=/usr/local/hbase/conf

# Filesystem paths where PredictionIO uses as block storage.
PIO_FS_BASEDIR=$HOME/.pio_store
PIO_FS_ENGINESDIR=$PIO_FS_BASEDIR/engines
PIO_FS_TMPDIR=$PIO_FS_BASEDIR/tmp

# PredictionIO Storage Configuration
#
# This section controls programs that make use of PredictionIO's built-in
# storage facilities.

# Storage Repositories

PIO_STORAGE_REPOSITORIES_METADATA_NAME=pio_meta
PIO_STORAGE_REPOSITORIES_METADATA_SOURCE=ELASTICSEARCH


PIO_STORAGE_REPOSITORIES_EVENTDATA_NAME=pio_event
PIO_STORAGE_REPOSITORIES_EVENTDATA_SOURCE=HBASE

# Need to use HDFS here instead of LOCALFS to account for future expansion
PIO_STORAGE_REPOSITORIES_MODELDATA_NAME=pio_model
# PIO_STORAGE_REPOSITORIES_MODELDATA_SOURCE=HDFS
PIO_STORAGE_REPOSITORIES_MODELDATA_SOURCE=ELASTICSEARCH


# Storage Data Sources, lower level than repos above, just a simple storage API
# to use

# Elasticsearch Example
PIO_STORAGE_SOURCES_ELASTICSEARCH_TYPE=elasticsearch
PIO_STORAGE_SOURCES_ELASTICSEARCH_HOME=/usr/local/elasticsearch
# the next line should match the cluster.name in elasticsearch.yml
PIO_STORAGE_SOURCES_ELASTICSEARCH_CLUSTERNAME=infoquest

# For single host Elasticsearch, may add hosts and ports later
# PIO_STORAGE_SOURCES_ELASTICSEARCH_HOSTS=some-master
PIO_STORAGE_SOURCES_ELASTICSEARCH_HOSTS=some-master  <—— put your DNS name or IP address for ES here
PIO_STORAGE_SOURCES_ELASTICSEARCH_PORTS=9300

# dummy models are stored here so use HDFS in case you later want to
# expand the Event and PredictionServers
PIO_STORAGE_SOURCES_HDFS_TYPE=hdfs
PIO_STORAGE_SOURCES_HDFS_PATH=hdfs://some-master:9000/models

# HBase Source config
PIO_STORAGE_SOURCES_HBASE_TYPE=hbase
PIO_STORAGE_SOURCES_HBASE_HOME=/usr/local/hbase
# Hbase single master config
# PIO_STORAGE_SOURCES_HBASE_HOSTS=some-master
PIO_STORAGE_SOURCES_HBASE_HOSTS=some-master   <—— put your DNS name or IP address for HBase here
PIO_STORAGE_SOURCES_HBASE_PORTS=0

# I don’t think this is used
PIO_STORAGE_SOURCES_LOCALFS_TYPE=localfs
PIO_STORAGE_SOURCES_FS_PATH=/mymodels <—— really? /mymodels at the root of the local disk?




On Apr 3, 2017, at 7:01 AM, infoquest india <in...@gmail.com> wrote:

Can we use HDFS or LocalFileSystem for UR ?

I am using single machine setup and changed my /etc/hosts file to point to internal IP.

Please find attached pio-env,sh.

One thing i am not clear what is creating issue HDFS or ElasticSearch ?


Thanks
Gaurav
http://www.infoquestsolutions.com
Turning Imagination To Reality
Skype:- infoquestsolutions
Gtalk:- infoquestindia

On Mon, Apr 3, 2017 at 6:52 PM, Pat Ferrel <pat@occamsmachete.com> wrote:
If you are still using the UR you don’t need HDFS as a storage backend.

In setup instructions, “some-master” is a placeholder where you actually enter the DNS name or IP address of your actual master machine running Elasticsearch. This can be a list comma separated, no spaces.

Can you share your pio-env.sh


On Apr 3, 2017, at 4:31 AM, infoquest india <infoquestindia@gmail.com> wrote:

Hi 

I am using pio status i am getting error 


SLF4J: Class path contains multiple SLF4J bindings.

SLF4J: Found binding in [jar:file:/home/aml/pio/PredictionIO-0.11.0-SNAPSHOT/lib/spark/pio-data-hdfs-assembly-0.11.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: Found binding in [jar:file:/home/aml/pio/PredictionIO-0.11.0-SNAPSHOT/lib/pio-assembly-0.11.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.

SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]

[INFO] [Management$] Inspecting PredictionIO...

[INFO] [Management$] PredictionIO 0.11.0-SNAPSHOT is installed at /home/aml/pio/PredictionIO-0.11.0-SNAPSHOT

[INFO] [Management$] Inspecting Apache Spark...

[INFO] [Management$] Apache Spark is installed at None

[INFO] [Management$] Apache Spark 1.6.3 detected (meets minimum requirement of 1.3.0)

[INFO] [Management$] Inspecting storage backend connections...

[INFO] [Storage$] Verifying Meta Data Backend (Source: ELASTICSEARCH)...

[INFO] [Storage$] Verifying Model Data Backend (Source: HDFS)...

[ERROR] [Storage$] Error initializing storage client for source HDFS

[ERROR] [Management$] Unable to connect to all storage backends successfully.

The following shows the error message from the storage backend.



Data source HDFS was not properly initialized. (org.apache.predictionio.data.storage.StorageClientException)



Dumping configuration of initialized storage backend sources.

Please make sure they are correct.



Source Name: ELASTICSEARCH; Type: elasticsearch; Configuration: HOME -> /usr/local/elasticsearch, HOSTS -> some-master, PORTS -> 9300, CLUSTERNAME -> infoquest, TYPE -> elasticsearch

Source Name: HDFS; Type: (error); Configuration: (error)



Thanks
Gaurav



<pio-env.sh.rtf>


Re: Error while pio status

Posted by infoquest india <in...@gmail.com>.
Can we use HDFS or LocalFileSystem for the UR?

I am using a single-machine setup and changed my /etc/hosts file to point to
the internal IP.

Please find attached pio-env.sh.

One thing I am not clear about: is HDFS or Elasticsearch creating the issue?


Thanks
Gaurav
http://www.infoquestsolutions.com
Turning Imagination To Reality
Skype:- infoquestsolutions
Gtalk:- infoquestindia

On Mon, Apr 3, 2017 at 6:52 PM, Pat Ferrel <pa...@occamsmachete.com> wrote:

> If you are still using the UR you don’t need HDFS as a storage backend.
>
> In setup instructions, “some-master” is a placeholder where you actually
> enter the DNS name or IP address of your actual master machine running
> Elasticsearch. This can be a list comma separated, no spaces.
>
> Can you share your pio-env.sh
>
>
> On Apr 3, 2017, at 4:31 AM, infoquest india <in...@gmail.com>
> wrote:
>
> Hi
>
> I am using pio status i am getting error
>
>
> SLF4J: Class path contains multiple SLF4J bindings.
>
> SLF4J: Found binding in [jar:file:/home/aml/pio/
> PredictionIO-0.11.0-SNAPSHOT/lib/spark/pio-data-hdfs-
> assembly-0.11.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>
> SLF4J: Found binding in [jar:file:/home/aml/pio/
> PredictionIO-0.11.0-SNAPSHOT/lib/pio-assembly-0.11.0-
> SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
> explanation.
>
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
>
> [INFO] [Management$] Inspecting PredictionIO...
>
> [INFO] [Management$] PredictionIO 0.11.0-SNAPSHOT is installed at
> /home/aml/pio/PredictionIO-0.11.0-SNAPSHOT
>
> [INFO] [Management$] Inspecting Apache Spark...
>
> [INFO] [Management$] Apache Spark is installed at None
>
> [INFO] [Management$] Apache Spark 1.6.3 detected (meets minimum
> requirement of 1.3.0)
>
> [INFO] [Management$] Inspecting storage backend connections...
>
> [INFO] [Storage$] Verifying Meta Data Backend (Source: ELASTICSEARCH)...
>
> [INFO] [Storage$] Verifying Model Data Backend (Source: HDFS)...
>
> [ERROR] [Storage$] Error initializing storage client for source HDFS
>
> [ERROR] [Management$] Unable to connect to all storage backends
> successfully.
>
> The following shows the error message from the storage backend.
>
>
> Data source HDFS was not properly initialized.
> (org.apache.predictionio.data.storage.StorageClientException)
>
>
> Dumping configuration of initialized storage backend sources.
>
> Please make sure they are correct.
>
>
> Source Name: ELASTICSEARCH; Type: elasticsearch; Configuration: HOME ->
> /usr/local/elasticsearch, HOSTS -> some-master, PORTS -> 9300, CLUSTERNAME
> -> infoquest, TYPE -> elasticsearch
>
> Source Name: HDFS; Type: (error); Configuration: (error)
>
>
> Thanks
> Gaurav
>
>
>

Re: Error while pio status

Posted by Pat Ferrel <pa...@occamsmachete.com>.
If you are still using the UR you don’t need HDFS as a storage backend.

In the setup instructions, “some-master” is a placeholder where you enter the DNS name or IP address of your actual master machine running Elasticsearch. This can be a comma-separated list, with no spaces.

Can you share your pio-env.sh?


On Apr 3, 2017, at 4:31 AM, infoquest india <in...@gmail.com> wrote:

Hi 

I am using pio status i am getting error 


SLF4J: Class path contains multiple SLF4J bindings.

SLF4J: Found binding in [jar:file:/home/aml/pio/PredictionIO-0.11.0-SNAPSHOT/lib/spark/pio-data-hdfs-assembly-0.11.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: Found binding in [jar:file:/home/aml/pio/PredictionIO-0.11.0-SNAPSHOT/lib/pio-assembly-0.11.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.

SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]

[INFO] [Management$] Inspecting PredictionIO...

[INFO] [Management$] PredictionIO 0.11.0-SNAPSHOT is installed at /home/aml/pio/PredictionIO-0.11.0-SNAPSHOT

[INFO] [Management$] Inspecting Apache Spark...

[INFO] [Management$] Apache Spark is installed at None

[INFO] [Management$] Apache Spark 1.6.3 detected (meets minimum requirement of 1.3.0)

[INFO] [Management$] Inspecting storage backend connections...

[INFO] [Storage$] Verifying Meta Data Backend (Source: ELASTICSEARCH)...

[INFO] [Storage$] Verifying Model Data Backend (Source: HDFS)...

[ERROR] [Storage$] Error initializing storage client for source HDFS

[ERROR] [Management$] Unable to connect to all storage backends successfully.

The following shows the error message from the storage backend.



Data source HDFS was not properly initialized. (org.apache.predictionio.data.storage.StorageClientException)



Dumping configuration of initialized storage backend sources.

Please make sure they are correct.



Source Name: ELASTICSEARCH; Type: elasticsearch; Configuration: HOME -> /usr/local/elasticsearch, HOSTS -> some-master, PORTS -> 9300, CLUSTERNAME -> infoquest, TYPE -> elasticsearch

Source Name: HDFS; Type: (error); Configuration: (error)



Thanks
Gaurav