Posted to dev@tika.apache.org by "Hudson (Jira)" <ji...@apache.org> on 2021/09/14 16:18:00 UTC

[jira] [Commented] (TIKA-3552) Improve robustness of S3 fetcher and emitter

    [ https://issues.apache.org/jira/browse/TIKA-3552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17415030#comment-17415030 ] 

Hudson commented on TIKA-3552:
------------------------------

SUCCESS: Integrated in Jenkins build Tika » tika-main-jdk8 #328 (See [https://ci-builds.apache.org/job/Tika/job/tika-main-jdk8/328/])
TIKA-3552 -- improve robustness of s3 emitter and fetcher (tallison: [https://github.com/apache/tika/commit/41cac8b6bc2b7f480281e3aee2fad050a84477c3])
* (edit) tika-pipes/tika-emitters/tika-emitter-s3/src/main/java/org/apache/tika/pipes/emitter/s3/S3Emitter.java
* (edit) tika-pipes/tika-fetchers/tika-fetcher-s3/src/main/java/org/apache/tika/pipes/fetcher/s3/S3Fetcher.java


> Improve robustness of S3 fetcher and emitter
> --------------------------------------------
>
>                 Key: TIKA-3552
>                 URL: https://issues.apache.org/jira/browse/TIKA-3552
>             Project: Tika
>          Issue Type: Task
>            Reporter: Tim Allison
>            Priority: Major
>             Fix For: 2.1.1
>
>
> We should allow users to set maxConnections for both the S3 fetcher and the S3 emitter.
> For the emitter, if there is an underlying file, we should pass that file to the putObject call rather than a stream. In practice, I was getting a number of InputStream reset exceptions while the AWS client was uploading the stream, specifically while it was digesting the stream before the upload. Those exceptions went away when I used the client to upload the underlying file. See the sketch after this description.
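
The following is a minimal, self-contained sketch of the two changes described above, not the actual S3Emitter/S3Fetcher code from the commit. It assumes the AWS SDK for Java v1 (com.amazonaws); the class name, method names, bucket, key, and maxConnections value are placeholders.

    import java.io.File;
    import java.io.InputStream;

    import com.amazonaws.ClientConfiguration;
    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.AmazonS3ClientBuilder;
    import com.amazonaws.services.s3.model.ObjectMetadata;

    public class S3RobustnessSketch {

        // 1) Let users configure maxConnections on the underlying HTTP client.
        static AmazonS3 buildClient(int maxConnections) {
            ClientConfiguration clientConfig = new ClientConfiguration();
            clientConfig.setMaxConnections(maxConnections);
            return AmazonS3ClientBuilder.standard()
                    .withClientConfiguration(clientConfig)
                    .build();
        }

        // 2) Prefer uploading the underlying file when one exists; the SDK can
        //    re-read a File when it digests or retries the upload, whereas a
        //    plain InputStream may fail with a reset exception.
        static void emit(AmazonS3 s3, String bucket, String key,
                         File file, InputStream stream, long length) {
            if (file != null) {
                s3.putObject(bucket, key, file);
            } else {
                ObjectMetadata metadata = new ObjectMetadata();
                metadata.setContentLength(length);
                s3.putObject(bucket, key, stream, metadata);
            }
        }
    }

In the actual emitter, the File would presumably be the temporary file already spooled to disk behind the stream being emitted, but that detail is not spelled out in this message.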



--
This message was sent by Atlassian Jira
(v8.3.4#803005)