Posted to notifications@superset.apache.org by GitBox <gi...@apache.org> on 2022/01/21 10:45:02 UTC

[GitHub] [superset] ad-m opened a new issue #18127: Testing strategy of Helm Charts

ad-m opened a new issue #18127:
URL: https://github.com/apache/superset/issues/18127


   **Is your feature request related to a problem? Please describe.**
   
   I would like to discuss the testing strategy for the Helm chart for Apache Superset, to improve the developer experience and avoid issues like #17920.
   
   I think DevEx tweaks in this area could improve development efficiency, which will allow us to address [the numerous issues](https://github.com/apache/superset/issues?q=is%3Aissue+is%3Aopen+helm) that exist for the Helm chart. Then we will be able to obtain the confidence necessary for publication on [Artifact Hub](https://artifacthub.io/packages/search?ts_query_web=superset&sort=relevance&page=1) as an official artifact.
   
   **Describe the solution you'd like**
   
   I see numerous solutions in this area. I am afraid that I do not have enough experience in the project to see all the required aspects, so I would greatly appreciate input from experienced people, both users and developers. Comments and additional requirements are welcome!
   
   First of all, it would be great to provide `values.schema.json` ( https://helm.sh/docs/topics/charts/#schema-files ). It's about writing a JSON Schema for the `values.yaml` file. I already have a draft in that area. This will improve the experience for developers, but most of all, end users will no longer be able to install with a wrongly formatted or mis-indented `values.yaml` file.
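
   For illustration, a minimal sketch of what such a schema could look like. This is only an assumption of mine, not the actual draft; `supersetNode.replicaCount` is just an example key:

   ```json
   {
     "$schema": "http://json-schema.org/draft-07/schema#",
     "type": "object",
     "properties": {
       "supersetNode": {
         "type": "object",
         "properties": {
           "replicaCount": {
             "type": "integer",
             "minimum": 1
           }
         }
       }
     }
   }
   ```

   With such a file next to `Chart.yaml`, `helm lint`, `helm template`, `helm install` and `helm upgrade` all validate the merged values and reject, e.g., a string passed as `replicaCount`.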
   
   Secondly, I am thinking about providing unit tests that render the Helm chart, in order to:

   * verify the generated manifests against the expected Kubernetes schema (I think this will avoid situations like #17920)
   * use `pytest` to render the Helm manifests and serve as the test framework – e.g. Apache Airflow uses that approach (see the sketch below)
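
   As a rough illustration of the pytest approach (a sketch only – the `render_chart` helper, the chart path `helm/superset`, and the asserted field are my assumptions, not the Airflow implementation):

   ```python
   # Sketch: render the chart with `helm template` and assert on the output.
   # Assumes `helm` is on PATH and the chart lives at helm/superset.
   import subprocess
   import yaml


   def render_chart(values=None, chart_dir="helm/superset"):
       """Render the chart and return the parsed manifest documents."""
       cmd = ["helm", "template", "release-name", chart_dir]
       for key, value in (values or {}).items():
           cmd += ["--set", f"{key}={value}"]
       output = subprocess.check_output(cmd)
       return [doc for doc in yaml.safe_load_all(output) if doc]


   def test_chart_renders_a_deployment():
       kinds = {doc["kind"] for doc in render_chart()}
       assert "Deployment" in kinds
   ```

   Because the assertions are plain Python over parsed YAML, any contributor who can write a pytest test can add a chart test.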
   
   **Describe alternatives you've considered**
   
   We might also use kubeval for testing the schema format, but that would be yet another tool when we already have a few testing frameworks in place. A large number of different frameworks raises the entry threshold and lowers DevEx by forcing constant context switching.
   
   We might also use conftest, but again – one more testing framework which does not bring much more value than `pytest`.
   
   We might also run integration tests in CI, e.g. in minikube or – like the test environments – on AWS. This could be a great topic, but such tests are long, slow, and thus cover a limited number of scenarios (though they provide very realistic validation of correctness). I think we should start with something smaller.
   
   **Additional context**
   
    @wiktor2200 @nytai @mvoitko might be interested in this area, as they are involved in the development of our Helm chart.
    @mik-laj @potiuk may want to share their thoughts based on the experience of Apache Airflow (another Apache Software Foundation project).




[GitHub] [superset] wiktor2200 commented on issue #18127: Testing strategy of Helm Chart

Posted by GitBox <gi...@apache.org>.
wiktor2200 commented on issue #18127:
URL: https://github.com/apache/superset/issues/18127#issuecomment-1018400891


   Sure, great initiative. I can help a little bit :)
   This could help avoid many more errors. Issue #17920 which you mentioned is only one of them; another is #17712, where I left a comment about Helm chart testing today (an hour ago) :) That one could also have been avoided with proper testing rules.




[GitHub] [superset] potiuk edited a comment on issue #18127: Testing strategy of Helm Chart

Posted by GitBox <gi...@apache.org>.
potiuk edited a comment on issue #18127:
URL: https://github.com/apache/superset/issues/18127#issuecomment-1018404144


   Comment from my/Airflow side (and maybe more from @ephraimbuddy, @kaxil and @jedcunningham, who implemented most of the tests there):

   Yes. Definitely. Both a values schema and unit tests on rendering are super-helpful for keeping our Helm chart in check.

   Re unit tests: we currently have > 350 of them. One thing that I want to look at shortly is speeding them up a bit (the only problem we have now is that they are rather slow, but I believe it should be possible to speed them up a lot). And knowing that other – friendly :) – projects might want to use them, we might prioritise this a bit and share our experience.

   We use pytest for all kinds of tests and we think pytest is a great test harness that can be used in various contexts. We tried other solutions for the Helm rendering before (helm-unittest), but it was extremely unfriendly – the declarative way of describing and asserting the tests was very difficult to maintain, and very few people were even somewhat fluent in it.

   One thing we get with pytest for these tests: most of our contributors are Pythonistas, and the way we do the rendering tests, it's very much like any other Python test. It's extremely easy to ask even casual contributors to add tests for any contribution they provide, and they are able to do it very quickly without too much learning.
   




[GitHub] [superset] wiktor2200 commented on issue #18127: Testing strategy of Helm Chart

Posted by GitBox <gi...@apache.org>.
wiktor2200 commented on issue #18127:
URL: https://github.com/apache/superset/issues/18127#issuecomment-1018428297


   For a good start it would be great to have:
   * a syntax check, to avoid Helm templating errors (see the sketch below)
   * a check for fail-safe values in the `values.yaml` file
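
   For the syntax check, a minimal sketch in pytest – assuming the chart path `helm/superset`; `helm lint` already catches templating and syntax errors:

   ```python
   # Sketch: fail the suite when the chart does not lint cleanly.
   import subprocess


   def test_chart_passes_helm_lint():
       # `helm lint` renders the templates and reports syntax/templating errors.
       subprocess.run(["helm", "lint", "helm/superset"], check=True)
   ```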




[GitHub] [superset] potiuk edited a comment on issue #18127: Testing strategy of Helm Chart

Posted by GitBox <gi...@apache.org>.
potiuk edited a comment on issue #18127:
URL: https://github.com/apache/superset/issues/18127#issuecomment-1018468792


   > To protect against a misconfiguration like #17712 in pytest, some kind of expected-values list would be needed (which would probably need updating) – I don't know about the actual implementation; it will probably become clearer while planning this :) To be sure that none of those errors occur, some kind of dry run would be needed, e.g. running Helm on minikube would show the result and the ability to successfully deploy the app. On a cluster it's quite easy though: https://helm.sh/docs/topics/chart_tests/
   
   Another comment on that: in Airflow we have > 350 unit tests, but we also have "Kubernetes integration tests". Those also run regularly in our CI, and they consist of:

   * Building the production image from the latest sources
   * Extending the image to include test DAGs for Airflow
   * Deploying the chart + image in a few basic configurations to a kind (https://kind.sigs.k8s.io/docs/user/quick-start/) cluster which is set up during testing
   * Running a small number of pytest "integration" tests against the running application (very basic, literally "smoke" testing)

   We also run separate "upgrade" tests: we update the chart and upgrade it after installing, just to make sure that the upgrade scenario also works (we had some problems with hooks and migration jobs in the past). Those are a really small number of tests, but they give us confidence that we have not done something disastrous by accident.

   I can heartily recommend kind for this kind of testing; it's also great as a development environment when you would like to reproduce those kinds of tests locally.
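
   For a rough picture, the skeleton of such a CI job might look like the following. This is a sketch under assumptions – the chart path `helm/superset` and the `tests/smoke` directory are hypothetical, not Airflow's actual scripts:

   ```sh
   # Sketch: spin up a throwaway cluster, install the chart, run smoke tests.
   kind create cluster --name superset-test

   # Install the chart and wait for readiness; re-running this same command
   # against an existing release also exercises the upgrade scenario.
   helm upgrade --install superset ./helm/superset --wait --timeout 10m

   # A handful of pytest smoke tests against the running application.
   pytest tests/smoke

   kind delete cluster --name superset-test
   ```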




[GitHub] [superset] mik-laj edited a comment on issue #18127: Testing strategy of Helm Chart

Posted by GitBox <gi...@apache.org>.
mik-laj edited a comment on issue #18127:
URL: https://github.com/apache/superset/issues/18127#issuecomment-1018596225


   > We (in Superset) already have an ephemeral test environment on top of AWS ECS & AWS Fargate ( https://preset.io/blog/2021-4-28-ephemeral-test-environments-in-superset/ )

   I don't think we need to use external services, as we can use kind on GitHub Actions. We have test environments running on Fargate because we want them to be available for longer than the duration of one build.

   > Like Jarek pointed out, we will share our findings and optimizations here, as I think more projects can benefit from a pytest-backed Helm unit-test framework.

   I agree. The Helm chart testing framework written in Python is very convenient and has a low barrier to entry for projects that already use Python. For Golang projects, [`terratest`](https://terratest.gruntwork.io/docs/getting-started/quick-start/) is also an option.

   It is also worth mentioning the discussion we had in Airflow when we decided to migrate from the tests written in YAML to the new Python framework: https://github.com/apache/airflow/issues/11657
   




[GitHub] [superset] wiktor2200 commented on issue #18127: Testing strategy of Helm Chart

Posted by GitBox <gi...@apache.org>.
wiktor2200 commented on issue #18127:
URL: https://github.com/apache/superset/issues/18127#issuecomment-1018449669


   To protect against a misconfiguration like #17712 in pytest, some kind of expected-values list would be needed (which would probably need updating) – I don't know about the actual implementation; it will probably become clearer while planning this :)
   To be sure that none of those errors occur, some kind of dry run would be needed, e.g. running Helm on minikube would show the result and the ability to successfully deploy the app. On a cluster it's quite easy though: https://helm.sh/docs/topics/chart_tests/
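
   For reference, the chart tests behind that link are ordinary manifests carrying a test hook annotation. A minimal sketch – the service name and port 8088 are assumptions based on Superset's defaults, not the chart's actual templates:

   ```yaml
   # templates/tests/test-connection.yaml – sketch of a Helm chart test,
   # executed by `helm test <release>` after the chart is installed.
   apiVersion: v1
   kind: Pod
   metadata:
     name: "{{ .Release.Name }}-test-connection"
     annotations:
       "helm.sh/hook": test
   spec:
     containers:
       - name: wget
         image: busybox
         command: ["wget"]
         args: ["{{ .Release.Name }}-superset:8088"]
     restartPolicy: Never
   ```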




[GitHub] [superset] kaxil commented on issue #18127: Testing strategy of Helm Chart

Posted by GitBox <gi...@apache.org>.
kaxil commented on issue #18127:
URL: https://github.com/apache/superset/issues/18127#issuecomment-1018511129


   Yeah, I strongly suggest a combination of both unit tests and integration tests for the Helm chart. Jed and I have talked about speeding up the pytest unit-test framework we have by grouping/caching the rendered manifest files for the same input; however, the caveat is that if we run the tests in parallel, that cache will be per thread.

   Like Jarek pointed out, we will share our findings and optimizations here, as I think more projects can benefit from a pytest-backed Helm unit-test framework.
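
   A sketch of what that caching could look like, built on the `render_chart` helper sketched earlier in this thread (the module path and the tuple-of-overrides encoding are my assumptions; with parallel runners, each worker process holds its own cache):

   ```python
   # Sketch: memoize rendered manifests per unique set of value overrides.
   from functools import lru_cache

   # `render_chart` is the rendering helper from the earlier sketch
   # (assumed importable from a hypothetical tests.helm_template module).
   from tests.helm_template import render_chart


   @lru_cache(maxsize=None)
   def cached_render(overrides=()):
       # `overrides` is a hashable tuple such as ("supersetNode.replicaCount=2",),
       # so identical inputs across tests reuse a single `helm template` run.
       values = dict(item.split("=", 1) for item in overrides)
       return render_chart(values=values)
   ```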
   
   




[GitHub] [superset] ad-m commented on issue #18127: Testing strategy of Helm Chart

Posted by GitBox <gi...@apache.org>.
ad-m commented on issue #18127:
URL: https://github.com/apache/superset/issues/18127#issuecomment-1019322520


   From the current discussion I see:
   - consensus on the use of `values.schema.json` as validation of the user data in `values.yaml`
   - agreement on the value of unit tests – several solutions are visible, and the one used in Apache Airflow generates an enthusiastic attitude among Python developers
   - agreement on implementing integration tests as a next step – several solutions are possible

   Given the above, I think I can start working on `values.schema.json`, as it will probably be useful regardless. I will try to present something in the coming days.

   However, I am still very eager to hear all the other points of view, even arguments against `values.schema.json`.




[GitHub] [superset] ad-m commented on issue #18127: Testing strategy of Helm Chart

Posted by GitBox <gi...@apache.org>.
ad-m commented on issue #18127:
URL: https://github.com/apache/superset/issues/18127#issuecomment-1018430487


   Great, this is the approach I was thinking about from the beginning – a schema for `values.yaml` + pytest for a syntax check of the templating. In your opinion, will this protect against issues like #17712?




[GitHub] [superset] ad-m commented on issue #18127: Testing strategy of Helm Chart

Posted by GitBox <gi...@apache.org>.
ad-m commented on issue #18127:
URL: https://github.com/apache/superset/issues/18127#issuecomment-1021711619


   @craig-rueda As the maintainer of the chart, can you give input here based on your experience and previous problems, so that we address them well? In fact, this is a question of how to make the maintainer's life easier. I will have some time to work on this once we agree on the direction.






[GitHub] [superset] ad-m commented on issue #18127: Testing strategy of Helm Chart

Posted by GitBox <gi...@apache.org>.
ad-m commented on issue #18127:
URL: https://github.com/apache/superset/issues/18127#issuecomment-1018492841


   I like the idea of integration testing via e.g. `kind` / `minikube`. This is the approach I was considering for the next steps, once local testing is effective. In my experience, using integration tests without unit tests gives weak DevEx, because the test feedback loop is very long, even for a syntax error.

   We (in Superset) already have an ephemeral test environment on top of AWS ECS & AWS Fargate ( https://preset.io/blog/2021-4-28-ephemeral-test-environments-in-superset/ ). So I am thinking about testing on an AWS EKS cluster, creating a dedicated namespace for each PR. That raises the issue of funding, but it is in line with how the test environments are sponsored. Do you have opinions on which approach is better in this regard? @robdiciuccio, as a contributor to #13189, could you provide some feedback on that?




[GitHub] [superset] potiuk commented on issue #18127: Testing strategy of Helm Chart

Posted by GitBox <gi...@apache.org>.
potiuk commented on issue #18127:
URL: https://github.com/apache/superset/issues/18127#issuecomment-1018494730


   > I like the idea of integration testing via e.g. `kind` / `minikube`. This is the approach I was considering for the next steps, once local testing is effective. In my experience, using integration tests without unit tests gives weak DevEx, because the test feedback loop is very long, even for a syntax error.
   
   Fully agree.




[GitHub] [superset] kaxil edited a comment on issue #18127: Testing strategy of Helm Chart

Posted by GitBox <gi...@apache.org>.
kaxil edited a comment on issue #18127:
URL: https://github.com/apache/superset/issues/18127#issuecomment-1019404038


   Beware of issues like https://github.com/apache/airflow/issues/20539 when using external schema definitions, with which offline or air-gapped installations don't work.

   We solved it by bringing those definitions into the repo itself – https://github.com/apache/airflow/pull/20544
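
   To illustrate the failure mode with a hypothetical schema fragment (not the actual Airflow schema): a remote `$ref`, as in the first property below, forces a network fetch at validation time, while a definition vendored into the chart, as in the second, keeps air-gapped installs working:

   ```json
   {
     "properties": {
       "resources": {
         "$comment": "Remote $ref – breaks offline/air-gapped installs",
         "$ref": "https://example.com/k8s-schemas/resourcerequirements.json"
       },
       "nodeSelector": {
         "$comment": "Definition shipped inside the chart instead",
         "$ref": "#/definitions/io.k8s.api.core.v1.NodeSelector"
       }
     }
   }
   ```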






[GitHub] [superset] ad-m commented on issue #18127: Testing strategy of Helm Chart

Posted by GitBox <gi...@apache.org>.
ad-m commented on issue #18127:
URL: https://github.com/apache/superset/issues/18127#issuecomment-1018419740


   @potiuk It would be great to share a solution and achieve a synergistic effect in developing the test mechanisms in this area! Thank you very much for sharing your in-depth test-management experience here. We don't have to step into the same puddle twice if we can avoid it.

   > That one could also have been avoided with proper testing rules.

   @wiktor2200 What kind of proper testing rules do you mean? Could you give examples, so that we understand each other well regarding the approach to testing?



