public class ChunkedConsumer extends UpdateConsumer

The ChunkedConsumer object reads and decodes a stream using the
chunked transfer coding. This is used so that any data sent in the
chunked transfer coding can be decoded. All bytes are appended to an
internal buffer so that they can be read without having to parse the
encoding.
    length := 0
    read chunk-size, chunk-extension (if any) and CRLF
    while (chunk-size > 0) {
       read chunk-data and CRLF
       append chunk-data to entity-body
       length := length + chunk-size
       read chunk-size and CRLF
    }
    read entity-header
    while (entity-header not empty) {
       append entity-header to existing header fields
       read entity-header
    }
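As an illustration, the steps above can be implemented as a stand-alone blocking decoder in plain Java. This is a hedged sketch of the RFC algorithm, not the ChunkedConsumer implementation; the class and method names are hypothetical, and trailing entity-headers are read but discarded:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

// Illustrative decoder for the chunked transfer coding (hypothetical class,
// not the ChunkedConsumer implementation).
public class ChunkedDecoder {

    public static byte[] decode(InputStream in) throws IOException {
        ByteArrayOutputStream body = new ByteArrayOutputStream();
        int size;
        while ((size = readChunkSize(in)) > 0) {
            for (int i = 0; i < size; i++) {
                body.write(in.read());  // append chunk-data to entity-body
            }
            readLine(in);               // consume the CRLF after chunk-data
        }
        while (!readLine(in).isEmpty()) {
            // a real consumer would append each trailing entity-header to
            // the existing header fields; this sketch discards them
        }
        return body.toByteArray();
    }

    // Reads the chunk-size line, dropping any chunk-extension after ';'
    private static int readChunkSize(InputStream in) throws IOException {
        String line = readLine(in);
        int semi = line.indexOf(';');
        if (semi >= 0) {
            line = line.substring(0, semi);
        }
        return Integer.parseInt(line.trim(), 16); // chunk-size is hexadecimal
    }

    // Reads up to and including LF, returning the line without its CRLF
    private static String readLine(InputStream in) throws IOException {
        StringBuilder text = new StringBuilder();
        int ch;
        while ((ch = in.read()) != -1 && ch != '\n') {
            if (ch != '\r') {
                text.append((char) ch);
            }
        }
        return text.toString();
    }

    public static void main(String[] args) throws IOException {
        String encoded = "5\r\nHello\r\n7\r\n, world\r\n0\r\n\r\n";
        InputStream in = new ByteArrayInputStream(encoded.getBytes(StandardCharsets.ISO_8859_1));
        System.out.println(new String(decode(in), StandardCharsets.ISO_8859_1)); // prints "Hello, world"
    }
}
```

Note that the chunk-size line is hexadecimal, while the pseudocode's `length` accumulator simply tracks the total decoded size.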
The above algorithm is taken from RFC 2616 section 19.4.6. This
coding scheme is used in HTTP pipelines so that dynamic content,
that is, content for which a length cannot be determined in advance,
does not require a connection close to delimit the message body.

Fields inherited from class UpdateConsumer: array, finished

| Constructor and Description |
|---|
| ChunkedConsumer(Allocator allocator) Constructor for the ChunkedConsumer object. |
| Modifier and Type | Method and Description |
|---|---|
| Body | getBody() This is used to acquire the body that has been consumed. |
| protected int | update(byte[] array, int off, int size) This is used to process the bytes that have been read from the cursor. |
Methods inherited from class UpdateConsumer: commit, consume, isFinished

public ChunkedConsumer(Allocator allocator)
Constructor for the ChunkedConsumer object. This is used to create a
consumer that reads chunked encoded data and appends that data in
decoded form to an internal buffer so that it can be read in a clean
decoded format.

Parameters:
allocator - this is used to allocate the internal buffer

public Body getBody()
This is used to acquire the body that has been consumed. Parts of
the body, such as attachments, can be acquired as Attachment
objects. Each part can then be read as an individual message.

protected int update(byte[] array,
int off,
int size)
throws IOException
Overrides:
update in class UpdateConsumer

Parameters:
array - this is a chunk read from the cursor
off - this is the offset within the array the chunk starts
size - this is the number of bytes within the array

Throws:
IOException

Copyright © 2025. All rights reserved.
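For illustration, the slice-oriented update contract (an array, an offset, and a size, where slices may split chunk boundaries arbitrarily) can be sketched as a small state machine. All names here are hypothetical and this is not the actual ChunkedConsumer internals; in particular, the return value (bytes consumed from the slice) and the discarding of trailer headers are assumptions of this sketch:

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;

// Hypothetical incremental consumer illustrating an update(byte[], int, int)
// style contract for decoding the chunked transfer coding.
public class IncrementalChunkedConsumer {

    private static final int SIZE = 0, DATA = 1, DATA_END = 2, TRAILER = 3;

    private final ByteArrayOutputStream body = new ByteArrayOutputStream();
    private final StringBuilder line = new StringBuilder();
    private int remaining;        // bytes left in the current chunk
    private int state = SIZE;
    private boolean finished;

    // Processes one slice of bytes read from the cursor; slices may arrive
    // in any size and alignment relative to chunk boundaries.
    public int update(byte[] array, int off, int size) {
        int pos = off;
        int end = off + size;
        while (pos < end && !finished) {
            if (state == DATA) {
                int count = Math.min(remaining, end - pos);
                body.write(array, pos, count); // append chunk-data to the body
                remaining -= count;
                pos += count;
                if (remaining == 0) {
                    state = DATA_END;
                }
            } else {
                char ch = (char) array[pos++];
                if (ch == '\r') {
                    continue;                  // wait for the LF
                }
                if (ch != '\n') {
                    line.append(ch);
                    continue;
                }
                String text = line.toString();
                line.setLength(0);
                if (state == SIZE) {
                    int semi = text.indexOf(';');
                    if (semi >= 0) {
                        text = text.substring(0, semi); // drop chunk-extension
                    }
                    remaining = Integer.parseInt(text.trim(), 16);
                    state = remaining > 0 ? DATA : TRAILER;
                } else if (state == DATA_END) {
                    state = SIZE;              // CRLF after chunk-data consumed
                } else if (text.isEmpty()) {
                    finished = true;           // blank line ends the trailer
                }
            }
        }
        return pos - off;                      // bytes consumed from this slice
    }

    public boolean isFinished() {
        return finished;
    }

    public byte[] getBody() {
        return body.toByteArray();
    }

    public static void main(String[] args) {
        IncrementalChunkedConsumer consumer = new IncrementalChunkedConsumer();
        byte[] first = "5\r\nHel".getBytes(StandardCharsets.ISO_8859_1);
        byte[] second = "lo\r\n0\r\n\r\n".getBytes(StandardCharsets.ISO_8859_1);
        consumer.update(first, 0, first.length);   // slice ends mid-chunk
        consumer.update(second, 0, second.length); // remainder of the message
        System.out.println(new String(consumer.getBody(), StandardCharsets.ISO_8859_1)); // prints "Hello"
    }
}
```

The point of the sketch is that decoding state must survive between calls, since a single call to update may see only part of a chunk-size line or chunk body.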