Posted to java-dev@axis.apache.org by "Davanum Srinivas (JIRA)" <ax...@ws.apache.org> on 2005/07/12 16:08:11 UTC

[jira] Resolved: (AXIS-2084) DIME attachments: Type_Length of the final record chunk must be zero

     [ http://issues.apache.org/jira/browse/AXIS-2084?page=all ]
     
Davanum Srinivas resolved AXIS-2084:
------------------------------------

    Resolution: Fixed

Please try the latest CVS. I've checked in your fix. I have no clue how to test it, so please confirm that it works correctly.
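
One low-tech way to check it (a sketch only; the DimeDump class below is a hypothetical helper, not part of Axis) is to send a call carrying a large DIME attachment, capture the raw HTTP request with tcpmon, save the body minus the HTTP headers to a file, and then walk the DIME record headers in that file. Per the chunking rules quoted in the report below, every chunk record after the first should show TYPE_T and TYPE_LENGTH of zero, and only the final chunk should have the CF bit clear:

    import java.io.BufferedInputStream;
    import java.io.DataInputStream;
    import java.io.EOFException;
    import java.io.FileInputStream;

    // Hypothetical helper, not part of Axis: dumps the record headers of a
    // DIME stream so the chunk flags and TYPE_LENGTH fields can be checked.
    public class DimeDump {

        public static void main(String[] args) throws Exception {
            DataInputStream in = new DataInputStream(
                    new BufferedInputStream(new FileInputStream(args[0])));
            int record = 0;
            while (true) {
                int b0;
                try {
                    b0 = in.readUnsignedByte();            // VERSION(5) MB ME CF
                } catch (EOFException end) {
                    break;                                 // clean end of stream
                }
                int b1 = in.readUnsignedByte();            // TYPE_T(4) RESRVD(4)
                int optsLen  = in.readUnsignedShort();     // OPTIONS_LENGTH
                int idLen    = in.readUnsignedShort();     // ID_LENGTH
                int typeLen  = in.readUnsignedShort();     // TYPE_LENGTH
                long dataLen = in.readInt() & 0xFFFFFFFFL; // DATA_LENGTH (unsigned)

                System.out.println("record " + (record++)
                        + " MB=" + ((b0 >> 2) & 1)
                        + " ME=" + ((b0 >> 1) & 1)
                        + " CF=" + (b0 & 1)
                        + " TYPE_T=" + ((b1 >> 4) & 0x0F)
                        + " TYPE_LENGTH=" + typeLen
                        + " DATA_LENGTH=" + dataLen);

                // OPTIONS, ID, TYPE and DATA are each padded to a 4-byte boundary.
                skipPadded(in, optsLen);
                skipPadded(in, idLen);
                skipPadded(in, typeLen);
                skipPadded(in, dataLen);
            }
            in.close();
        }

        private static void skipPadded(DataInputStream in, long len) throws Exception {
            long remaining = (len + 3) & ~3L;              // round up to a multiple of 4
            while (remaining > 0) {
                long skipped = in.skip(remaining);
                if (skipped <= 0) {                        // skip() may return 0; force a read
                    if (in.read() < 0) {
                        throw new EOFException("truncated DIME stream");
                    }
                    skipped = 1;
                }
                remaining -= skipped;
            }
        }
    }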

-- dims

> DIME attachments: Type_Length of the final record chunk must be zero
> ---------------------------------------------------------------------
>
>          Key: AXIS-2084
>          URL: http://issues.apache.org/jira/browse/AXIS-2084
>      Project: Apache Axis
>         Type: Bug
>   Components: Serialization/Deserialization
>     Versions: 1.2, 1.2.1
>  Environment: Microsoft XP
>     Reporter: Coralia Silvana Popa

>
> Large files sent as DIME attachments are not correctly serialized.
> When reading a series of chunked records, the parser assumes that the first record without the CF flag is the final record of the chunk; in this case, that is the last record in my sample. The record type is specified only in the first record chunk, and all remaining chunks must have the TYPE_T field and all remaining header fields (except for the DATA_LENGTH field) set to zero.
> It seems that Type_Length (and possibly other header fields) is not set to zero for the last chunk. The code works correctly when there is only one chunk.
> The problem is in the class org.apache.axis.attachments.DimeBodyPart, in the method void send(java.io.OutputStream os, byte position, DynamicContentDataHandler dh, final long maxchunk).
> I suggest the following code to fix this problem:
> void send(java.io.OutputStream os, byte position, DynamicContentDataHandler dh,
>           final long maxchunk)
>         throws java.io.IOException {
>
>     BufferedInputStream in = new BufferedInputStream(dh.getInputStream());
>
>     final int myChunkSize = dh.getChunkSize();
>
>     // Two buffers, so that before writing a chunk we already know whether
>     // another chunk follows it.
>     byte[] buffer1 = new byte[myChunkSize];
>     byte[] buffer2 = new byte[myChunkSize];
>
>     int bytesRead1 = 0, bytesRead2 = 0;
>     bytesRead1 = in.read(buffer1);
>
>     if (bytesRead1 < 0) {
>         // Empty stream: send a single record with no data.
>         sendHeader(os, position, 0, (byte) 0);
>         os.write(pad, 0, dimePadding(0));
>         return;
>     }
>
>     byte chunknext = 0;
>     do {
>         // Read ahead to find out whether the chunk in buffer1 is the last one.
>         bytesRead2 = in.read(buffer2);
>
>         if (bytesRead2 < 0) {
>             // Last record: do not set the chunk (CF) bit.
>             // buffer1 contains the final chunked record.
>             sendChunk(os, position, buffer1, 0, bytesRead1, chunknext);
>             break;
>         }
>
>         sendChunk(os, position, buffer1, 0, bytesRead1, (byte) (CHUNK | chunknext));
>         chunknext = CHUNK_NEXT;
>
>         // buffer1 has been written; buffer2 becomes the next chunk to send.
>         System.arraycopy(buffer2, 0, buffer1, 0, myChunkSize);
>         bytesRead1 = bytesRead2;
>
>     } while (bytesRead2 > 0);
> }
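
For reference, a client along the lines of samples/attachments should produce a request that drives this code path, assuming an echo-style attachment service is deployed. This is a sketch under those assumptions: the endpoint, namespace and operation name are placeholders, the input file needs to be larger than the chunk size, and whether the chunked send() above is actually taken depends on the attachment being handled as a DynamicContentDataHandler.

    import javax.activation.DataHandler;
    import javax.activation.FileDataSource;
    import javax.xml.namespace.QName;
    import javax.xml.rpc.ParameterMode;

    import org.apache.axis.client.Call;
    import org.apache.axis.client.Service;
    import org.apache.axis.encoding.ser.JAFDataHandlerDeserializerFactory;
    import org.apache.axis.encoding.ser.JAFDataHandlerSerializerFactory;

    public class SendLargeDimeAttachment {
        public static void main(String[] args) throws Exception {
            // args[0] = endpoint URL, args[1] = a file larger than the DIME chunk size
            DataHandler dh = new DataHandler(new FileDataSource(args[1]));

            Call call = (Call) new Service().createCall();
            call.setTargetEndpointAddress(new java.net.URL(args[0]));
            call.setOperationName(new QName("urn:EchoAttachmentsService", "echo"));

            // Map DataHandler to an attachment, as in the Axis attachments sample.
            QName qname = new QName("urn:EchoAttachmentsService", "DataHandler");
            call.registerTypeMapping(DataHandler.class, qname,
                    JAFDataHandlerSerializerFactory.class,
                    JAFDataHandlerDeserializerFactory.class);
            call.addParameter("source", qname, ParameterMode.IN);
            call.setReturnType(qname);

            // Force DIME (rather than MIME) encapsulation so DimeBodyPart is used.
            call.setProperty(Call.ATTACHMENT_ENCAPSULATION_FORMAT,
                    Call.ATTACHMENT_ENCAPSULATION_FORMAT_DIME);

            call.invoke(new Object[] { dh });
        }
    }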

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira