To be fair to TM, the sensor they use, the MLX90333, has a 14-bit front-end ADC and a 12-bit output DAC. The front-end ADC digitizes the Hall-effect voltage, the DSP then filters and scales the samples and computes X, Y, Z, etc., and if you program the chip for analog output, the 12-bit DAC drives a rail-to-rail 0 to 5 V signal.
However, AFAIK the Warthog uses the SPI digital mode, so it could potentially output the full 14-bit resolution. In principle you can oversample to increase resolution, but the MLX90333 doesn't have the sampling rate for it: each extra bit of resolution costs 4x the samples, so boosting 14-bit to 16-bit requires 4^2 = 16 times the sampling rate, and the MLX90333 only samples at about 1 kHz. If they oversampled to 16-bit, the Warthog's max USB report rate would drop below 62.5 Hz. That would be a horrible trade: 14-bit resolution is already sufficient, and sacrificing sampling rate for resolution you can't use is not a smart compromise.
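The rate/resolution trade-off above is just arithmetic, and can be sketched like this (the ~1 kHz base rate is the figure assumed in this post, not a datasheet quote):

```python
def oversampling_tradeoff(base_rate_hz: float, extra_bits: int) -> float:
    """Each extra bit of resolution gained by oversampling costs 4x the
    samples, so n extra bits divide the effective output rate by 4**n."""
    return base_rate_hz / (4 ** extra_bits)

# Assumed ~1 kHz internal sampling rate; 14-bit -> 16-bit is 2 extra bits.
print(oversampling_tradeoff(1000.0, 2))  # -> 62.5 (Hz)
```

That 62.5 Hz ceiling is where the "less than 62.5 Hz" figure comes from, since USB framing and processing overhead would eat into it further.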
So my guess is that because the MLX90333's SPI report is 2 bytes (16 bits) per axis, TM claims it's 16-bit. Internally, the MLX90333 probably just takes the 14-bit value and shifts it left by two to fill the 16-bit output field.
Case in point: the T16000M uses the exact same MLX90333 chip, but its marketing claims the following.
Offer precision levels 256 times greater than current systems (i.e. a resolution attaining 16,000 x 16,000 values!)
16,000 values is just under 2^14 = 16,384, i.e. about 14-bit.
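A quick sanity check on that claim:

```python
import math

def bits_for_values(n_values: int) -> float:
    """Number of bits needed to distinguish n_values distinct levels."""
    return math.log2(n_values)

print(round(bits_for_values(16000), 2))  # -> 13.97, essentially 14-bit
```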