Posted to dev@flume.apache.org by "adarsh (JIRA)" <ji...@apache.org> on 2015/01/08 11:20:34 UTC
[jira] [Commented] (FLUME-2019)
(SinkRunner-PollingRunner-DefaultSinkProcessor) [ERROR -
org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:460)
[ https://issues.apache.org/jira/browse/FLUME-2019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14269138#comment-14269138 ]
adarsh commented on FLUME-2019:
-------------------------------
Hi,
I am getting the same error with Flume 1.5.2:
2015-01-08 05:15:21,941 (SinkRunner-PollingRunner-DefaultSinkProcessor) [WARN - org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:463)] HDFS IO error
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/hdfs/gy/lpn1/FlumeData.1420710148522.tmp could only be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and no node(s) are excluded in this operation.
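The "could only be replicated to 0 nodes instead of minReplication (=1)" message is raised by HDFS, not Flume: the NameNode could not find any DataNode willing to accept the block, most often because the single DataNode is out of disk space, dead, or unreachable from the Flume host. A few standard checks (a sketch only; run these on the cluster, and note the host/port is taken from the hdfs.path in the config):

```shell
# Confirm the DataNode is live and reporting non-zero remaining capacity.
hdfs dfsadmin -report

# Check free space on the DataNode's data directories.
df -h

# Verify the NameNode RPC port in hdfs.path is reachable from the Flume host
# (11.120.93.20:8020 comes from the config below; substitute your own).
nc -zv 11.120.93.20 8020
```

If the DataNode reports healthy and reachable, the NameNode log usually names the concrete reason the node was skipped.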
Conf file:
agent.sources = Monitor
agent.channels = memoryChannel
agent.sinks = hdfsSink
# The channel can be defined as follows.
agent.sources.Monitor.channels = memoryChannel
agent.sources.Monitor.type = exec
agent.sources.Monitor.command = cat /apps/scope/alerts/logs/monitor.log
# Each sink's type must be defined
agent.sinks.hdfsSink.type = hdfs
agent.sinks.hdfsSink.hdfs.path = hdfs://11.120.93.20:8020/user/hdfs/abc
agent.sinks.hdfsSink.channel = memoryChannel
agent.sinks.hdfsSink.rollCount = 6000
agent.sinks.hdfsSink.rollInterval = 15
agent.sinks.hdfsSink.rollSize = 209715200
agent.sinks.hdfsSink.batchSize = 1000
agent.sinks.hdfsSink.fileType = DataStream
agent.sinks.hdfsSink.callTimeout = 3600000
# Each channel's type is defined.
agent.channels.memoryChannel.type = memory
# Other config values specific to each type of channel(sink or source)
# can be defined as well
# In this case, it specifies the capacity of the memory channel
agent.channels.memoryChannel.capacity = 100
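Two things in the config above are worth flagging (editorial suggestions, not part of the original post): the HDFS sink only reads roll/batch settings under the `hdfs.` prefix, so the un-prefixed lines above are silently ignored and the sink falls back to its defaults; and the memory channel's capacity (100) is smaller than the sink's batchSize (1000), so the sink can never take a full batch from the channel. A corrected sketch, keeping the posted values (the capacity and transactionCapacity figures are assumptions, sized to exceed the batch):

```
agent.sinks.hdfsSink.hdfs.rollCount = 6000
agent.sinks.hdfsSink.hdfs.rollInterval = 15
agent.sinks.hdfsSink.hdfs.rollSize = 209715200
agent.sinks.hdfsSink.hdfs.batchSize = 1000
agent.sinks.hdfsSink.hdfs.fileType = DataStream
agent.sinks.hdfsSink.hdfs.callTimeout = 3600000
agent.channels.memoryChannel.capacity = 10000
agent.channels.memoryChannel.transactionCapacity = 1000
```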
Please suggest a solution.
Thanks.
> (SinkRunner-PollingRunner-DefaultSinkProcessor) [ERROR - org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:460)
> ---------------------------------------------------------------------------------------------------------------------------------
>
> Key: FLUME-2019
> URL: https://issues.apache.org/jira/browse/FLUME-2019
> Project: Flume
> Issue Type: Question
> Components: Sinks+Sources
> Affects Versions: v1.3.1
> Environment: Ubuntu 12.04
> Reporter: Kanikkannan
> Priority: Minor
> Labels: hadoop, newbie
> Fix For: v1.3.1
>
>
> I am getting the below error when I try to upload a file into Hadoop HDFS.
> 2013-04-23 12:06:39,141 (SinkRunner-PollingRunner-DefaultSinkProcessor) [ERROR - org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:460)] process failed
> java.lang.NoSuchMethodError: com.google.common.cache.CacheBuilder.build()Lcom/google/common/cache/Cache;
> Flume.conf is as below
> ----------------------
> a1.sources = r1
> a1.sinks = k1
> a1.channels = c1
> a1.sources.r1.type = netcat
> a1.sources.r1.bind = localhost
> a1.sources.r1.port = 44444
> a1.sinks.k1.type = hdfs
> a1.sinks.k1.channel = c1
> a1.sinks.k1.hdfs.path = hdfs://localhost:8020/projects
> a1.sinks.k1.hdfs.maxOpenFiles = 10000
> a1.channels.c1.type = memory
> a1.channels.c1.capacity = 100000
> a1.channels.c1.transactionCapacity = 100
> a1.sources.r1.channels = c1
> a1.sinks.k1.channel = c1
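On the quoted NoSuchMethodError itself: a `CacheBuilder.build()` returning `Cache` is present only in newer Guava releases, so this error usually means two different Guava versions sit on the classpath, with Hadoop's older copy winning over the one Flume was built against. A sketch for locating the conflicting jars (the install paths are assumptions for a typical package layout; adjust to yours):

```shell
# More than one distinct guava version in this output usually explains
# the NoSuchMethodError: the older jar is loaded first at runtime.
find /usr/lib/flume-ng /usr/lib/hadoop -name 'guava-*.jar' 2>/dev/null | sort

# A common fix is to keep only the guava jar that ships with Flume,
# removing or upgrading the older copy found above.
```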
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)