Posted to issues@nifi.apache.org by GitBox <gi...@apache.org> on 2020/09/22 17:34:36 UTC

[GitHub] [nifi] pvillard31 opened a new pull request #4545: NIFI-7833 - Add Ozone support in Hadoop components where appropriate

pvillard31 opened a new pull request #4545:
URL: https://github.com/apache/nifi/pull/4545


   Thank you for submitting a contribution to Apache NiFi.
   
   Please provide a short description of the PR here:
   
   #### Description of PR
   
   _Enables X functionality; fixes bug NIFI-YYYY._
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message?
   
   - [ ] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character.
   
   - [ ] Has your PR been rebased against the latest commit within the target branch (typically `main`)?
   
   - [ ] Is your initial contribution a single, squashed commit? _Additional commits in response to PR reviewer feedback should be made on this branch and pushed to allow change tracking. Do not `squash` or use `--force` when pushing to allow for clean monitoring of changes._
   
   ### For code changes:
   - [ ] Have you ensured that the full suite of tests is executed via `mvn -Pcontrib-check clean install` at the root `nifi` folder?
   - [ ] Have you written or updated unit tests to verify your changes?
   - [ ] Have you verified that the full build is successful on JDK 8?
   - [ ] Have you verified that the full build is successful on JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main `LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main `NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to `.name` (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that the format looks appropriate for the output in which it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI for build issues and submit an update to your PR as soon as possible.
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [nifi] pvillard31 edited a comment on pull request #4545: NIFI-7833 - Add Ozone support in Hadoop components where appropriate

Posted by GitBox <gi...@apache.org>.
pvillard31 edited a comment on pull request #4545:
URL: https://github.com/apache/nifi/pull/4545#issuecomment-698894620


   Tested against a cluster where Ozone 1.0.0 was installed with Kerberos enabled.
   
   - Added the core-site and ozone-site XML files as configuration files
   - Ensured that the [proper properties](https://hadoop.apache.org/ozone/docs/1.0.0/interface/o3fs.html) are set in core-site.xml
   ````
   <property>
     <name>fs.AbstractFileSystem.o3fs.impl</name>
     <value>org.apache.hadoop.fs.ozone.OzFs</value>
   </property>
   <property>
     <name>fs.defaultFS</name>
     <value>o3fs://bucket.volume</value>
   </property>
   ````
   
   Note that for the ``FetchHDFS`` processor you must specify the fully qualified path, not just the path relative to the bucket. In other words, ``ListHDFS`` will list a file with a path of ``/`` and a name of ``foo.txt``; if you leave the default property on ``FetchHDFS``, you might get this error:
   
   ````
   2020-09-25 11:16:24,155 WARN org.apache.nifi.controller.tasks.ConnectableTask: Administratively Yielding FetchHDFS[id=c04a869b-0174-1000-0000-00003e0aa21b] due to uncaught Exception: java.lang.StringIndexOutOfBoundsException: String index out of range: -1
   java.lang.StringIndexOutOfBoundsException: String index out of range: -1
   	at java.lang.String.substring(String.java:1931)
   	at org.apache.hadoop.fs.ozone.BasicOzoneFileSystem.pathToKey(BasicOzoneFileSystem.java:748)
   	at org.apache.hadoop.fs.ozone.BasicOzoneFileSystem.open(BasicOzoneFileSystem.java:212)
   	at org.apache.nifi.processors.hadoop.FetchHDFS$1.run(FetchHDFS.java:161)
   	at java.security.AccessController.doPrivileged(Native Method)
   	at javax.security.auth.Subject.doAs(Subject.java:360)
   	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1710)
   	at org.apache.nifi.processors.hadoop.FetchHDFS.onTrigger(FetchHDFS.java:140)
   	at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
   	at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1176)
   	at org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:213)
   	at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
   	at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110)
   	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
   	at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
   	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
   	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
   	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
   	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
   	at java.lang.Thread.run(Thread.java:748)
   ````
   
   You'll need to change the property in the ``FetchHDFS`` processor to use ``o3fs://bucket.volume/${path}/${name}`` instead of the default ``${path}/${name}``.
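   The stack trace above ends in ``BasicOzoneFileSystem.pathToKey``, which derives the Ozone object key from the part of the path that follows the o3fs root URI. The sketch below is illustrative only, not the actual Ozone source (``toKey`` is a hypothetical helper), but it shows why a bucket-relative ``${path}/${name}`` value fails while the fully qualified form resolves:

````java
public class O3fsPathSketch {

    // Hypothetical pathToKey-style helper: the object key is whatever
    // follows the o3fs root URI in the supplied path.
    static String toKey(String path, String rootUri) {
        int idx = path.indexOf(rootUri);
        // For a bucket-relative path such as "/foo.txt", idx is -1, so the
        // substring arithmetic goes out of range -- analogous to the
        // StringIndexOutOfBoundsException reported in the log above.
        return path.substring(idx + rootUri.length());
    }

    public static void main(String[] args) {
        String root = "o3fs://bucket.volume/";
        // Fully qualified path, as recommended: resolves cleanly.
        System.out.println(toKey(root + "foo.txt", root)); // foo.txt
        // Bucket-relative path, as produced by the default ${path}/${name}:
        try {
            toKey("/foo.txt", root);
        } catch (StringIndexOutOfBoundsException e) {
            System.out.println("out of range, as in the stack trace");
        }
    }
}
````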





[GitHub] [nifi] bbende commented on pull request #4545: NIFI-7833 - Add Ozone support in Hadoop components where appropriate

Posted by GitBox <gi...@apache.org>.
bbende commented on pull request #4545:
URL: https://github.com/apache/nifi/pull/4545#issuecomment-700689011


   Looks good, will merge, thanks!





[GitHub] [nifi] bbende merged pull request #4545: NIFI-7833 - Add Ozone support in Hadoop components where appropriate

Posted by GitBox <gi...@apache.org>.
bbende merged pull request #4545:
URL: https://github.com/apache/nifi/pull/4545


   




