Posted to dev@datalab.apache.org by "Leonid Frolov (Jira)" <ji...@apache.org> on 2021/06/30 07:26:00 UTC

[jira] [Updated] (DATALAB-2433) [GCP]: Investigate Dataproc performance and libraries usage

     [ https://issues.apache.org/jira/browse/DATALAB-2433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Leonid Frolov updated DATALAB-2433:
-----------------------------------
    Story Points: 8

> [GCP]: Investigate Dataproc performance and libraries usage
> -----------------------------------------------------------
>
>                 Key: DATALAB-2433
>                 URL: https://issues.apache.org/jira/browse/DATALAB-2433
>             Project: Apache DataLab
>          Issue Type: Task
>      Security Level: Public(Regular Issues) 
>          Components: DataLab Main
>            Reporter: Vira Vitanska
>            Assignee: Leonid Frolov
>            Priority: Major
>              Labels: Debian, DevOps, GCP
>             Fix For: v.2.5
>
>         Attachments: Playbook which was running previuosly successfully.png, Reshape2 installation.png, ggplo2 Jupyter UI.png, ggplot installation via Jupyter UI.txt, ggplot2 ssh.png
>
>
> *Preconditions:*
>  # A CPU-based Dataproc cluster is created from the Jupyter notebook
>  # The Dataproc master/slave nodes are n1-highmem-4
> *Case1:* The Flights data Visualization R playbook for Dataproc v.2.0.0-RC22-ubuntu18 requires the R packages ggplot2 and reshape2. ggplot2 was installed on the Dataproc master/slaves via ssh successfully; however, running the Flights data Visualization R playbook does not find ggplot2.
> If ggplot2 is installed via the Jupyter UI, running the Flights data Visualization R playbook does find ggplot2.
> *Questions:* 
>  # Is the library installed in different locations in the two cases?
>  # Why is the library not found when it is installed via the Dataproc ssh terminal? (See the diagnostic sketch below.)
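> A quick way to check both questions is to compare the R library search paths seen by an ssh R session on the Dataproc master with those seen by the Jupyter R kernel. The sketch below is only a diagnostic suggestion, assuming the same commands are run in both places; the site-library path in the final commented line is an assumption, not a verified DataLab/Dataproc location.
> {code}
> # Run this both in an ssh R session on the Dataproc master and in a Jupyter
> # R notebook cell, then compare the output of the two runs.
>
> # 1. Which directories does this R session search for installed packages?
> print(.libPaths())
>
> # 2. Is ggplot2 visible here, and from which library would it be loaded?
> if (requireNamespace("ggplot2", quietly = TRUE)) {
>   print(find.package("ggplot2"))
> } else {
>   message("ggplot2 is not on this session's .libPaths()")
> }
>
> # 3. If the ssh install landed in a per-user library (e.g. under ~/R/),
> #    installing into a library that is on the Jupyter kernel's .libPaths()
> #    should make it visible. The path below is an assumption; adjust it to
> #    the actual cluster layout.
> # install.packages("ggplot2", lib = "/usr/lib/R/site-library")
> {code}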
> *Case2:* Installing reshape2 via the Jupyter UI took more than 40 minutes, after which I restarted the kernel. Any subsequent attempt to install reshape2, or to run a previously successful playbook, causes a new error:
> {code}
> The code failed because of a fatal error:
> 	Invalid status code '500' from http://172.31.16.13:8998/sessions with error payload: "java.lang.NullPointerException".
> Some things to try:
> a) Make sure Spark has enough available resources for Jupyter to create a Spark context.
> b) Contact your Jupyter administrator to make sure the Spark magics library is configured correctly.
> c) Restart the kernel.
> {code}
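> The 500 status comes from the Livy endpoint that the Spark magics (sparkmagic) kernel talks to on port 8998. As a hedged diagnostic, the Livy sessions endpoint can be queried directly to check whether the server itself is healthy after the kernel restart; the host and port below are copied from the error payload, and availability of the httr package is an assumption.
> {code}
> # Hedged diagnostic sketch: query the Livy REST API directly to see whether
> # the server is reachable and which sessions it currently holds.
> library(httr)
>
> resp <- GET("http://172.31.16.13:8998/sessions")
> print(status_code(resp))            # 200 means Livy itself is reachable
> print(content(resp, as = "text"))   # JSON describing current Livy sessions
> {code}
> If Livy responds normally here, a stale or broken session left behind by the interrupted reshape2 installation would be one possible cause to rule out before looking at cluster resources.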
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@datalab.apache.org
For additional commands, e-mail: dev-help@datalab.apache.org