Posted to dev@opennlp.apache.org by GitBox <gi...@apache.org> on 2022/08/05 22:41:51 UTC

[GitHub] [opennlp] jzonthemtn opened a new pull request, #421: OPENNLP-1375: Adding option for GPU inference.

jzonthemtn opened a new pull request, #421:
URL: https://github.com/apache/opennlp/pull/421

   Thank you for contributing to Apache OpenNLP.
   
   To streamline the review of your contribution, we ask that you
   ensure the following steps have been taken:
   
   ### For all changes:
   - [X] Is there a JIRA ticket associated with this PR? Is it referenced 
        in the commit message?
   
   - [X] Does your PR title start with OPENNLP-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character.
   
   - [X] Has your PR been rebased against the latest commit within the target branch (typically master)?
   
   - [X] Is your initial contribution a single, squashed commit?
   
   ### For code changes:
   - [X] Have you ensured that the full suite of tests is executed via `mvn clean install` at the root opennlp folder?
   - [X] Have you written or updated unit tests to verify your changes?
   - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the LICENSE file, including the main LICENSE file in opennlp folder?
   - [ ] If applicable, have you updated the NOTICE file, including the main NOTICE file found in opennlp folder?
   
   ### For documentation related changes:
   - [ ] Have you ensured that the format looks appropriate for the output in which it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions for build issues and submit an update to your PR as soon as possible.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscribe@opennlp.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


[GitHub] [opennlp] jzonthemtn merged pull request #421: OPENNLP-1375: Adding option for GPU inference.

Posted by GitBox <gi...@apache.org>.
jzonthemtn merged PR #421:
URL: https://github.com/apache/opennlp/pull/421




[GitHub] [opennlp] kinow commented on a diff in pull request #421: OPENNLP-1375: Adding option for GPU inference.

Posted by GitBox <gi...@apache.org>.
kinow commented on code in PR #421:
URL: https://github.com/apache/opennlp/pull/421#discussion_r939436371


##########
opennlp-dl/pom.xml:
##########
@@ -38,7 +38,7 @@
     </dependency>
     <dependency>
       <groupId>com.microsoft.onnxruntime</groupId>
-      <artifactId>onnxruntime</artifactId>
+      <artifactId>onnxruntime_gpu</artifactId>

Review Comment:
   :+1: If you think it's helpful, maybe this note can be added as a comment to the `pom.xml`.
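   One way this suggestion might look in practice (a hypothetical sketch; the exact comment wording is not part of the PR):

   ```xml
   <!-- onnxruntime_gpu supports both CPU and GPU execution, so the
        CPU-only onnxruntime artifact is not needed alongside it. -->
   <dependency>
     <groupId>com.microsoft.onnxruntime</groupId>
     <artifactId>onnxruntime_gpu</artifactId>
   </dependency>
   ```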





[GitHub] [opennlp] jzonthemtn commented on a diff in pull request #421: OPENNLP-1375: Adding option for GPU inference.

Posted by GitBox <gi...@apache.org>.
jzonthemtn commented on code in PR #421:
URL: https://github.com/apache/opennlp/pull/421#discussion_r939431517


##########
opennlp-dl/pom.xml:
##########
@@ -38,7 +38,7 @@
     </dependency>
     <dependency>
       <groupId>com.microsoft.onnxruntime</groupId>
-      <artifactId>onnxruntime</artifactId>
+      <artifactId>onnxruntime_gpu</artifactId>

Review Comment:
   This dependency works for both CPU and GPU.
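   For context, with `onnxruntime_gpu` on the classpath, GPU execution is typically opted into per session via the ONNX Runtime Java API; a minimal sketch, assuming a CUDA-capable machine and a hypothetical `model.onnx` path:

   ```java
   import ai.onnxruntime.OrtEnvironment;
   import ai.onnxruntime.OrtSession;

   public class GpuInferenceSketch {
     public static void main(String[] args) throws Exception {
       OrtEnvironment env = OrtEnvironment.getEnvironment();
       OrtSession.SessionOptions opts = new OrtSession.SessionOptions();
       // Register the CUDA execution provider for GPU device 0;
       // omit this call and the same artifact runs on CPU.
       opts.addCUDA(0);
       try (OrtSession session = env.createSession("model.onnx", opts)) {
         // run inference with session.run(...)
       }
     }
   }
   ```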


