Creating a Bitmap ByteBuffer for quantized Tensorflow Lite Model


I would like to use a quantized TensorFlow Lite model, but the ByteBuffer I currently build uses floating point. I need an integer representation instead. Right now the model expects 270000 bytes and I am trying to pass it 1080000 bytes. Is it as simple as casting the float to an int?
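For reference, the mismatch is exactly the factor-of-four difference between byte and float storage. Assuming a 300x300 RGB input (the dimensions are not stated in the question, but they fit the numbers), the arithmetic works out as:

```java
public class SizeCheck {
    public static void main(String[] args) {
        // Hypothetical dimensions: a 300x300 image with 3 channels (R, G, B, no alpha)
        int inputSize = 300;
        int pixelSize = 3;

        // Quantized (uint8) model: 1 byte per channel value
        int quantizedBytes = inputSize * inputSize * pixelSize;
        // Float model: 4 bytes per channel value
        int floatBytes = quantizedBytes * Float.BYTES;

        System.out.println(quantizedBytes); // 270000 -- what the model expects
        System.out.println(floatBytes);     // 1080000 -- what the float buffer holds
    }
}
```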

public ByteBuffer convertBitmapToByteBuffer(Bitmap bitmap) {

    // Preallocate memory for bytebuffer (4 bytes per float value)
    ByteBuffer byteBuffer = ByteBuffer.allocate(inputSize*inputSize*pixelSize*4);
    byteBuffer.order(ByteOrder.nativeOrder());

    // Initialize pixel data array and populate from bitmap
    int [] intArray = new int[inputSize*inputSize];
    bitmap.getPixels(intArray, 0, bitmap.getWidth(), 0 , 0,
            bitmap.getWidth(), bitmap.getHeight());

    int pixel = 0;      // pixel indexer
    for (int i=0; i<inputSize; i++) {
        for (int j=0; j<inputSize; j++) {
            int input = intArray[pixel++];

            byteBuffer.putFloat((((input >> 16 & 0x000000FF) - imageMean) / imageStd));
            byteBuffer.putFloat((((input >> 8 & 0x000000FF) - imageMean) / imageStd));
            byteBuffer.putFloat((((input & 0x000000FF) - imageMean) / imageStd));
        }
    }
    return byteBuffer;
}

Thanks for any tips you can provide.

java
floating-point
pixel
tensorflow-lite
bytebuffer
asked on Stack Overflow May 28, 2020 by Kyle Marshall

1 Answer


Casting float to int is not the correct approach. The good news is that the quantized input the model expects (8-bit R, G, B values in sequence) matches the Bitmap pixel representation exactly, except that the model doesn't expect an alpha channel. So the conversion is actually simpler than with float inputs.

Here's what you might try instead (assuming pixelSize is 3):

int pixel = 0;      // pixel indexer
for (int i=0; i<inputSize; i++) {
    for (int j=0; j<inputSize; j++) {
        int input = intArray[pixel++];   // pixel containing ARGB.
        byteBuffer
            .put((byte)((input >> 16) & 0xFF))    // R
            .put((byte)((input >>  8) & 0xFF))    // G
            .put((byte)((input      ) & 0xFF));   // B
    }
}
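Putting it together, the full method might look like the sketch below. It keeps the inputSize and pixelSize fields from the question (assuming pixelSize is 3) and uses allocateDirect, since the TensorFlow Lite Interpreter requires a direct buffer for input:

```java
import android.graphics.Bitmap;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public ByteBuffer convertBitmapToByteBuffer(Bitmap bitmap) {
    // 1 byte per channel for a quantized (uint8) model: inputSize * inputSize * 3
    ByteBuffer byteBuffer = ByteBuffer.allocateDirect(inputSize * inputSize * pixelSize);
    byteBuffer.order(ByteOrder.nativeOrder());

    // Copy packed ARGB pixels out of the bitmap
    int[] intArray = new int[inputSize * inputSize];
    bitmap.getPixels(intArray, 0, bitmap.getWidth(), 0, 0,
            bitmap.getWidth(), bitmap.getHeight());

    int pixel = 0;      // pixel indexer
    for (int i = 0; i < inputSize; i++) {
        for (int j = 0; j < inputSize; j++) {
            int input = intArray[pixel++];   // pixel containing ARGB
            byteBuffer.put((byte) ((input >> 16) & 0xFF));  // R
            byteBuffer.put((byte) ((input >> 8) & 0xFF));   // G
            byteBuffer.put((byte) (input & 0xFF));          // B
        }
    }
    return byteBuffer;
}
```

Note that the alpha byte (bits 24-31 of each packed pixel) is simply skipped, which is what drops the buffer from 4 to 3 bytes per pixel.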
answered on Stack Overflow May 29, 2020 by yyoon

User contributions licensed under CC BY-SA 3.0