Posted to commits@cloudstack.apache.org by GitBox <gi...@apache.org> on 2019/01/16 23:24:17 UTC

[GitHub] kiwiflyer commented on issue #3137: Integrating Ceph Status into CloudStack Dashboard

URL: https://github.com/apache/cloudstack/issues/3137#issuecomment-454982730
 
 
   So Mimic introduces a new management layer and, as part of that, provides
   a set of APIs. It would be nice to integrate those features and also
   potentially use those endpoints to introduce a managed storage driver for
   Ceph.
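   As a rough sketch of what polling one of those manager endpoints from
   the management server side might look like (the /api/health path, the
   bearer-token auth, and the response shape here are assumptions modeled
   on the Mimic dashboard module, not verified against a live cluster):

```python
import json
import urllib.request


def fetch_mgr_health(base_url: str, token: str) -> dict:
    """Query a Ceph mgr REST endpoint for cluster health.

    base_url and the /api/health path are assumptions; adjust both
    for the actual Mimic deployment being polled.
    """
    req = urllib.request.Request(
        base_url.rstrip("/") + "/api/health",
        headers={
            "Authorization": "Bearer " + token,
            "Accept": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def health_status(payload: dict) -> str:
    """Pull the overall status string (e.g. HEALTH_OK) out of the
    assumed response payload, falling back to UNKNOWN."""
    return payload.get("health", {}).get("status", "UNKNOWN")
```

   Polling over HTTP like this would avoid SSH entirely, which is one of
   the questions raised in the quoted message below.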
   
   On Wed, Jan 16, 2019, 5:15 PM James McClune <notifications@github.com> wrote:
   
   > ISSUE TYPE
   >
   >    - Feature Idea
   >
   > COMPONENT NAME
   >
   > CloudStack UI
   >
   > CLOUDSTACK VERSION
   >
   > 4.11.1+
   >
   > CONFIGURATION
   >
   > I have an idea for a new CloudStack feature. As someone who relies
   > heavily on Ceph in their environment, I thought it would be pretty cool if
   > CloudStack fetched the health status of a Ceph cluster and displayed it in
   > the ACS dashboard (under each instance of Primary Storage (RBD)). I wrote a
   > Python script that opens an SSH connection to a Ceph node and runs the ceph
   > health command. I'm not too familiar with the "under-the-hood" workings
   > of CloudStack, which is why I would love some advice on how to go about
   > doing this. Some things I brainstormed:
   >
   >    - Where would I put this script, in accordance with the CloudStack
   >    code structure? (e.g. cloudstack/scripts/storage)
   >    - Where would I reference the SSH authentication for the Ceph storage
   >    node? Could we add a passwordless SSH auth to the storage node (via
   >    CloudStack)? Is there another way to fetch ceph health without SSH?
   >    (maybe via API).
   >    - I'm guessing a new column would be created under the storage_pool
   >    table. You could call it ceph_health and insert the health status
   >    within this column, for each row of RBD storage specified. If the storage
   >    type is not RBD, the value would be null.
   >    - For scheduling execution, I'm thinking you could add a reference
   >    within the management server to run the script. The script would open a
   >    connection to the cloud database, query all storage types that are
   >    RBD, authenticate to the RADOS monitor IP specified, run the ceph
   >    health command, report the result back to the CloudStack management
   >    server, and store the result in ceph_health (for each instance).
   >
   > The Ceph health status would be placed under the storage state. For
   > example,
   >
   > State: *Up*
   >
   > Ceph Health: *HEALTH_OK*
   >
   > If anyone needs further clarification, please let me know. Again, I just
   > thought of this idea and it seemed like a pretty good one. I know there are
   > many different Ceph dashboards available, including the dashboard that
   > comes with Ceph (starting in the Luminous release). I thought this feature
   > would be useful and it wouldn't take too much time to implement. If there
   > are any errors in my idea or if I'm misinterpreting something, please let
   > me know. Thanks! :)
   >
   > —
   > You are receiving this because you are subscribed to this thread.
   > Reply to this email directly, view it on GitHub
   > <https://github.com/apache/cloudstack/issues/3137>, or mute the thread
   > <https://github.com/notifications/unsubscribe-auth/AQek8pUUYiLaGhU72dOBNyF608-n9Q_pks5vD7J5gaJpZM4aEJGY>
   > .
   >
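   A minimal sketch of the SSH-based poller described in the quoted message
   above might look like the following. It assumes passwordless (key-based)
   SSH to a monitor host, as the message suggests; the host and user are
   placeholders, and the JSON shape of `ceph health --format json` (an
   overall "status" field of HEALTH_OK / HEALTH_WARN / HEALTH_ERR) is an
   assumption to verify against the target Ceph release:

```python
import json
import subprocess


def fetch_ceph_health(host: str, user: str = "root") -> str:
    """Run `ceph health --format json` on a monitor host over SSH.

    host and user are placeholders; key-based SSH auth is assumed,
    as proposed in the brainstorm above.
    """
    out = subprocess.run(
        ["ssh", f"{user}@{host}", "ceph", "health", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_ceph_health(out)


def parse_ceph_health(raw: str) -> str:
    """Map the raw JSON output to the value that would be stored in
    the proposed ceph_health column; UNKNOWN if it can't be parsed."""
    try:
        return json.loads(raw)["status"]
    except (ValueError, KeyError):
        return "UNKNOWN"
```

   For non-RBD storage pools the scheduler would simply skip the fetch and
   leave the proposed ceph_health column null, matching the brainstorm.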
   

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services