Hello all,
I've been working a bit with 3D textures lately, to display MRI data, which more often than not is encoded as Float32. So far I was converting the data to uint8 on a single channel (LUMINANCE), but then I thought it would be nice to send my data to the GPU directly as Float32 (single channel: gl.R32F). And I found a bug! Or rather, something I suspect was overlooked, because not many people use 3D textures and even fewer want floating-point precision.
The guilty line of code lies in the Engine at L5926 (link). When creating the 3D texture, something like this happens:
this._gl.texImage3D(this._gl.TEXTURE_3D, 0, internalFormat, texture.width, texture.height, texture.depth, 0, internalFormat, this._gl.UNSIGNED_BYTE, data);
and reading the docs on MDN, we have a prototype like this (link):
void gl.texImage3D(target, level, internalformat, width, height, depth, border, format, type, ImageData source);
The mistake is the confusion between 'internalformat' and 'format', plus the fact that the type is hardcoded to UNSIGNED_BYTE in the Engine. Somewhere in the middle of that page, we can find a table of the valid internalformat/format/type combinations:
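Here are a few of the relevant rows (as given by the WebGL2 / OpenGL ES 3.0 combinations):

internalformat | format    | type
LUMINANCE      | LUMINANCE | UNSIGNED_BYTE
RGB8           | RGB       | UNSIGNED_BYTE
RGBA8          | RGBA      | UNSIGNED_BYTE
R32F           | RED       | FLOAT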
In the case of an RGB or LUMINANCE image in UNSIGNED_BYTE, the piece of code in the Engine works fine (and it's how I used it until now), but only because 'internalformat' and 'format' happen to take the same value. In the case of single-channel Float32, 'internalformat' and 'format' take different values, respectively gl.R32F and gl.RED, while the type becomes gl.FLOAT.
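To make the contrast concrete, here are the two calls side by side (w, h, d and the data arrays are placeholders):

// Unsized LUMINANCE: internalformat and format are both gl.LUMINANCE, so reusing one value works
gl.texImage3D(gl.TEXTURE_3D, 0, gl.LUMINANCE, w, h, d, 0, gl.LUMINANCE, gl.UNSIGNED_BYTE, u8Data);
// Sized R32F: internalformat (gl.R32F) and format (gl.RED) differ, and the type must be gl.FLOAT
gl.texImage3D(gl.TEXTURE_3D, 0, gl.R32F, w, h, d, 0, gl.RED, gl.FLOAT, f32Data);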
So in my case, I replaced the line of code in the Engine with this dirty hardcoded version:
this._gl.texImage3D(this._gl.TEXTURE_3D, 0, this._gl.R32F, texture.width, texture.height, texture.depth, 0, this._gl.RED, this._gl.FLOAT, data);
As well as creating the texture on my side like this:
let myDummyTexture = new BABYLON.RawTexture3D(new Float32Array(1), 1, 1, 1, BABYLON.Engine.TEXTUREFORMAT_R32F, this._scene);
And it works! (Later on, I replaced that with an actual brain MRI texture, and it works too!)
In the method Engine._getInternalFormat(), the internalFormat returned for Engine.TEXTUREFORMAT_R32F is gl.RED, but it should be gl.R32F. We should then have a second lookup method (say Engine._getFormat()) that returns gl.RED when given Engine.TEXTUREFORMAT_R32F, and a third one (say Engine._getType()) that returns gl.FLOAT for the same input. Then every possible combination of settings would be available!
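A rough sketch of what I have in mind (the method names, and everything outside the R32F case, are just illustrative, not actual engine code):

// Hypothetical lookup for the pixel 'format' parameter:
Engine.prototype._getFormat = function (format) {
    switch (format) {
        case Engine.TEXTUREFORMAT_R32F:
            return this._gl.RED;
        default:
            return this._gl.RGBA;
    }
};

// Hypothetical lookup for the pixel 'type' parameter:
Engine.prototype._getType = function (format) {
    switch (format) {
        case Engine.TEXTUREFORMAT_R32F:
            return this._gl.FLOAT;
        default:
            return this._gl.UNSIGNED_BYTE;
    }
};

// The texImage3D call would then look up all three values instead of reusing internalFormat:
this._gl.texImage3D(this._gl.TEXTURE_3D, 0,
    this._getInternalFormat(texture.format), // e.g. gl.R32F
    texture.width, texture.height, texture.depth, 0,
    this._getFormat(texture.format),         // e.g. gl.RED
    this._getType(texture.format),           // e.g. gl.FLOAT
    data);

Do you think it's an update you could add to the core?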
Cheers