Posted to common-dev@hadoop.apache.org by "Hao Zhong (JIRA)" <ji...@apache.org> on 2013/02/22 02:13:11 UTC

[jira] [Created] (HADOOP-9324) Out of date API document

Hao Zhong created HADOOP-9324:
---------------------------------

             Summary: Out of date API document
                 Key: HADOOP-9324
                 URL: https://issues.apache.org/jira/browse/HADOOP-9324
             Project: Hadoop Common
          Issue Type: Bug
    Affects Versions: 2.0.3-alpha
            Reporter: Hao Zhong


The documentation is out of date, and a number of its code references are broken:
1. http://hadoop.apache.org/docs/current/api/org/apache/hadoop/fs/FSDataInputStream.html
"All Implemented Interfaces:
    Closeable, DataInput, *org.apache.hadoop.fs.ByteBufferReadable*, *org.apache.hadoop.fs.HasFileDescriptor*, PositionedReadable, Seekable "

2. http://hadoop.apache.org/docs/current/api/org/apache/hadoop/mapreduce/Cluster.html
renewDelegationToken(*org.apache.hadoop.security.token.Token<org.apache.hadoop.mapreduce.security.token.delegation.DelegationTokenIdentifier>* token)
          Deprecated. Use Token.renew(*org.apache.hadoop.conf.Configuration*) instead
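
For reference, a minimal sketch of what the deprecation note appears to recommend, assuming a delegation token obtained elsewhere (for example from Cluster#getDelegationToken); the class and helper names are illustrative only:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.security.token.Token;

    public class RenewTokenSketch {
      // Renew via Token#renew(Configuration), the replacement the deprecation
      // note points to, instead of Cluster#renewDelegationToken(Token).
      public static long renewJobToken(Token<?> token, Configuration conf)
          throws java.io.IOException, InterruptedException {
        return token.renew(conf); // returns the new expiry time in milliseconds
      }
    }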

3. http://hadoop.apache.org/docs/current/api/org/apache/hadoop/mapred/JobConf.html
"Use MRAsyncDiskService.moveAndDeleteAllVolumes instead. "
I cannot find the MRAsyncDiskService class anywhere in the 2.0.3 documentation.

4. http://hadoop.apache.org/docs/current/api/org/apache/hadoop/mapred/join/CompositeRecordReader.html
 "protected *org.apache.hadoop.mapred.join.CompositeRecordReader.JoinCollector* 	jc"
Please search globally for JoinCollector: it no longer appears in the documentation, yet it is still mentioned many times.

5. http://hadoop.apache.org/docs/current/api/org/apache/hadoop/mapred/OutputCommitter.html
"abortJob(JobContext context, *org.apache.hadoop.mapreduce.JobStatus.State runState*)"  
http://hadoop.apache.org/docs/current/api/org/apache/hadoop/mapreduce/Job.html
"public *org.apache.hadoop.mapreduce.JobStatus.State* getJobState()"

6. http://hadoop.apache.org/docs/current/api/org/apache/hadoop/mapred/SequenceFileOutputFormat.html
" static *org.apache.hadoop.io.SequenceFile.CompressionType* getOutputCompressionType"
" static *org.apache.hadoop.io.SequenceFile.Reader[]* 	getReaders"

7. http://hadoop.apache.org/docs/current/api/org/apache/hadoop/mapred/TaskCompletionEvent.html
"Returns enum Status.SUCESS or Status.FAILURE." Should this read Status.SUCCEEDED?

8. http://hadoop.apache.org/docs/current/api/org/apache/hadoop/mapreduce/Job.html
" static *org.apache.hadoop.mapreduce.Job.TaskStatusFilter* 	getTaskOutputFilter"
"  org.apache.hadoop.mapreduce.TaskReport[] 	getTaskReports(TaskType type) "

9. http://hadoop.apache.org/docs/current/api/org/apache/hadoop/mapreduce/Reducer.html
"cleanup(*org.apache.hadoop.mapreduce.Reducer.Context* context) "

10. http://hadoop.apache.org/docs/current/api/org/apache/hadoop/mapreduce/lib/partition/InputSampler.html
"writePartitionFile(Job job, *org.apache.hadoop.mapreduce.lib.partition.InputSampler.Sampler<K,V>* sampler) "

11. http://hadoop.apache.org/docs/current/api/org/apache/hadoop/mapreduce/lib/partition/TotalOrderPartitioner.html
contain JobContextImpl.getNumReduceTasks() - 1 keys. 
The JobContextImpl class no longer appears in the documentation (the accessor itself is still reachable through the JobContext interface as getNumReduceTasks()).
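
A hedged sketch of how items 10 and 11 fit together; the partition file path and sampling parameters are arbitrary, and the class name is illustrative:

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.partition.InputSampler;
    import org.apache.hadoop.mapreduce.lib.partition.TotalOrderPartitioner;

    public class TotalOrderSetup {
      // Sample the input and write the partition file that TotalOrderPartitioner
      // reads; the file holds getNumReduceTasks() - 1 split keys.
      public static void configure(Job job) throws Exception {
        job.setPartitionerClass(TotalOrderPartitioner.class);
        TotalOrderPartitioner.setPartitionFile(job.getConfiguration(),
            new Path("/tmp/partition-keys"));
        InputSampler.Sampler<Text, Text> sampler =
            new InputSampler.RandomSampler<Text, Text>(0.1, 1000);
        InputSampler.writePartitionFile(job, sampler);
      }
    }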

12. http://hadoop.apache.org/docs/current/api/org/apache/hadoop/mapreduce/OutputCommitter.html
"Note that this is invoked for jobs with final runstate as JobStatus.State.FAILED or JobStatus.State.KILLED." Should these be JobStatus.FAILED and JobStatus.KILLED?

13. http://hadoop.apache.org/docs/current/api/org/apache/hadoop/mapred/TaskAttemptContext.html
"All Superinterfaces:
    JobContext, *org.apache.hadoop.mapreduce.MRJobConfig*, Progressable, TaskAttemptContext "

14. http://hadoop.apache.org/docs/current/api/org/apache/hadoop/metrics/file/FileContext.html
"All Implemented Interfaces:
    *org.apache.hadoop.metrics.MetricsContext*"

15. http://hadoop.apache.org/docs/current/api/org/apache/hadoop/metrics/spi/AbstractMetricsContext.html
"*org.apache.hadoop.metrics.MetricsRecord* 	createRecord(String recordName)"

16. http://hadoop.apache.org/docs/current/api/org/apache/hadoop/net/DNSToSwitchMapping.html
"If a name cannot be resolved to a rack, the implementation should return NetworkTopology.DEFAULT_RACK."
The NetworkTopology class no longer appears in the documentation.
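
For context, the constant the javadoc text refers to is NetworkTopology.DEFAULT_RACK ("/default-rack"). A minimal sketch of the fallback behaviour described, assuming NetworkTopology is still on the classpath even though it is not in the public javadoc; lookupRack and the class name are made-up placeholders:

    import java.util.ArrayList;
    import java.util.List;
    import org.apache.hadoop.net.NetworkTopology;

    public class RackResolverSketch {
      // Resolve each host to a rack, falling back to the default rack when no
      // mapping is known, as the DNSToSwitchMapping javadoc describes.
      public static List<String> resolveOrDefault(List<String> names) {
        List<String> racks = new ArrayList<String>(names.size());
        for (String name : names) {
          String rack = lookupRack(name); // placeholder for a site-specific lookup
          racks.add(rack != null ? rack : NetworkTopology.DEFAULT_RACK);
        }
        return racks;
      }

      private static String lookupRack(String name) {
        return null; // hypothetical: consult a topology table or script
      }
    }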

17. http://hadoop.apache.org/docs/current/api/org/apache/hadoop/metrics2/package-summary.html
"myprefix.sink.file.class=org.hadoop.metrics2.sink.FileSink" -> should the class be org.apache.hadoop.metrics2.sink.FileSink?
"org.apache.hadoop.metrics2.impl" -> this package cannot be found in the documentation.

18. http://hadoop.apache.org/docs/current/api/org/apache/hadoop/ha/HAServiceTarget.html
" abstract  *org.apache.hadoop.ha.NodeFencer* 	getFencer() "

19. http://hadoop.apache.org/docs/current/api/org/apache/hadoop/mapreduce/MarkableIterator.html
"MarkableIterator is a wrapper iterator class that implements the MarkableIteratorInterface. "
MarkableIteratorInterface no longer appears in the documentation.

20. http://hadoop.apache.org/docs/current/api/org/apache/hadoop/metrics/spi/NoEmitMetricsContext.html
 "A MetricsContext that does not emit data, but, unlike NullContextWithUpdate"
NullContextWithUpdate no longer appears in the documentation.

21. http://hadoop.apache.org/docs/current/api/org/apache/hadoop/net/ConnectTimeoutException.html
"Thrown by NetUtils.connect(java.net.Socket, java.net.SocketAddress, int) "
The NetUtils class no longer appears in the documentation.
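
A minimal sketch of the call that the ConnectTimeoutException javadoc refers to, assuming NetUtils is still available on the classpath even though it is not in the public javadoc; the host, port, timeout and class name are arbitrary:

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.net.Socket;
    import org.apache.hadoop.net.ConnectTimeoutException;
    import org.apache.hadoop.net.NetUtils;

    public class ConnectSketch {
      // NetUtils.connect(Socket, SocketAddress, int) throws
      // ConnectTimeoutException when the connect timeout expires.
      public static Socket open(String host, int port, int timeoutMs) throws IOException {
        Socket socket = new Socket();
        try {
          NetUtils.connect(socket, new InetSocketAddress(host, port), timeoutMs);
          return socket;
        } catch (ConnectTimeoutException e) {
          socket.close();
          throw e;
        }
      }
    }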

Please revise the documentation.


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira