TFLite Android: Converting a ByteBuffer to a segmentation map


I have a pretrained semantic segmentation model in TFLite. It takes an input image of shape 1x224x224x3 and produces a softmax output of shape 1x50176x3 for three classes. It works as intended in Python: I take the argmax over the last dimension, reshape the 50176-element vector to 224x224, and then map each value in the matrix to an entry of the colormap to get the segmentation map, which I show as an overlay on the original image. I have ported the tflite model to an Android app and am now feeding the input and reading the output as follows:

// ARGB colors; the 0x99 alpha makes the overlay semi-transparent
private static final int[] colormap = {
        0x00000000,     // background (fully transparent)
        0x99ffe119,     // healthy
        0x993cb44b,     // disease
};

// 4 bytes per float for the 1x224x224x3 input
imgData = ByteBuffer.allocateDirect(4 * 1 * 224 * 224 * 3);
imgData.order(ByteOrder.nativeOrder());
// 4 bytes per float for the 1x50176x3 softmax output
outputBuffer = ByteBuffer.allocateDirect(1 * 50176 * 3 * 4);
outputBuffer.order(ByteOrder.nativeOrder());
convertBitmapToByteBuffer(bitmap);

// Interpreter.run() returns void and fills outputBuffer in place,
// so there is nothing to assign to a results variable.
tflite.run(imgData, outputBuffer);

// To Do: Take argmax of outputBuffer. Reshape it to 224x224 and map to the colors to get segmentation map. 
// Show it on original image as overlay.
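
To make the goal concrete, here is a rough, untested sketch of the post-processing I am trying to write, translated from my Python logic. It assumes the buffer has to be rewound after tflite.run(), and `original` is a placeholder name for the full-size source Bitmap (it needs java.nio.FloatBuffer and android.graphics.Bitmap/Canvas/Rect):

// Sketch only: argmax over the 3 classes per pixel, then map to colors.
outputBuffer.rewind();                            // reset position before reading
FloatBuffer probs = outputBuffer.asFloatBuffer(); // view over 50176 * 3 floats

int[] pixels = new int[224 * 224];
for (int i = 0; i < 224 * 224; i++) {
    int best = 0;
    float bestScore = probs.get(i * 3);
    for (int c = 1; c < 3; c++) {                 // argmax over the class axis
        float score = probs.get(i * 3 + c);
        if (score > bestScore) {
            bestScore = score;
            best = c;
        }
    }
    pixels[i] = colormap[best];                   // class index -> ARGB color
}

// The row-major 50176 vector reshapes naturally into a 224x224 bitmap.
Bitmap mask = Bitmap.createBitmap(pixels, 224, 224, Bitmap.Config.ARGB_8888);

// Overlay: scale the mask up to the original image size and draw it on top.
// `original` stands in for the full-resolution source bitmap.
Bitmap overlay = original.copy(Bitmap.Config.ARGB_8888, true);
Canvas canvas = new Canvas(overlay);
canvas.drawBitmap(mask, null,
        new Rect(0, 0, overlay.getWidth(), overlay.getHeight()), null);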

I have two questions. First, is allocating ByteBuffers for both the input image and the output the right approach?

Secondly, how do I map the output ByteBuffer to the segmentation map, similar to the Python counterpart?

I am somewhat comfortable working with arrays, but when I convert the outputBuffer to an array, I get unexpected random values! Maybe there is a better way to achieve this. I have no clue how to get a result similar to the attached image, which is taken from my Python implementation.

[Image: segmentation overlay produced by the Python implementation]
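
One thing I suspect, though I have not confirmed it, is the buffer position: after tflite.run() the outputBuffer's position sits at its limit, so reading it from there without resetting gives empty or garbage data. Something like this might be the correct way to pull the raw scores into an array:

outputBuffer.rewind();                    // reset position to 0 before reading
float[] flat = new float[50176 * 3];
outputBuffer.asFloatBuffer().get(flat);   // copy all softmax scores out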

Tags: java, android, image-segmentation, semantic-segmentation, tf-lite
asked on Stack Overflow Apr 10, 2020 by Haroon S. • edited Apr 11, 2020 by Haroon S.

0 Answers

Nobody has answered this question yet.

