I’ve been trying to convert this ONNX model to TensorFlow so I can then convert it to TFLite and use it in my Flutter app:
```python
import onnx
from onnx_tf.backend import prepare
import tensorflow

onnx_model = onnx.load('vit_trashnet.onnx')
tf_rep = prepare(onnx_model)
tf_rep.export_graph('vit_trashnet_tf')
```
but I keep getting this error:

```
AttributeError: module 'keras._tf_keras.keras' has no attribute '__internal__'
```
It’s not just this one error; different errors appear depending on which dependency version I pin. Is there any workaround? I’m trying to convert to TFLite because the Flutter library works best with TFLite models (at least as far as I know). Also, the model is about 340 MB, which is quite big if I use it directly in my app, since I want the model to work offline.
You’re encountering a common issue: the ONNX → TensorFlow conversion via `onnx-tf` can be fragile and often breaks on TensorFlow/Keras version mismatches, especially with newer versions (TF 2.13+).
Best Practice to Convert ONNX to TensorFlow Lite (TFLite)
Here’s a reliable, version-stable pathway to convert your `.onnx` model to `.tflite`:
1. ONNX → TensorFlow SavedModel (use `onnx-tf`)
But use compatible versions:

- `onnx==1.14.0`
- `onnx-tf==1.10.0`
- `tensorflow==2.11.0` or `2.10.1` (works best)

Avoid TensorFlow 2.13+ with `onnx-tf`, since the `__internal__` errors come from internal Keras API changes.
Tip: Use a virtual environment to isolate versions.
```shell
pip install onnx==1.14.0 onnx-tf==1.10.0 tensorflow==2.11.0
```
Then your code should work:
```python
import onnx
from onnx_tf.backend import prepare

onnx_model = onnx.load("vit_trashnet.onnx")
tf_rep = prepare(onnx_model)
tf_rep.export_graph("vit_trashnet_tf")  # exports a SavedModel directory
```
2. SavedModel → TFLite
Now convert to TFLite:
```python
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("vit_trashnet_tf")
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enables quantization
tflite_model = converter.convert()

with open("vit_trashnet.tflite", "wb") as f:
    f.write(tflite_model)
```
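Once you have the `.tflite` file, it’s worth a quick sanity check before wiring it into Flutter. A minimal sketch using `tf.lite.Interpreter` — the `run_tflite` helper is mine, and the `1×3×224×224` input shape is only a guess for a ViT exported from PyTorch; check `get_input_details()` for your model’s real shape:

```python
import numpy as np
import tensorflow as tf

def run_tflite(x, model_path=None, model_content=None):
    """Run one inference on a .tflite model and return its first output."""
    interpreter = tf.lite.Interpreter(model_path=model_path,
                                      model_content=model_content)
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]
    interpreter.set_tensor(inp["index"], x.astype(inp["dtype"]))
    interpreter.invoke()
    return interpreter.get_tensor(out["index"])

# Hypothetical usage -- adjust the shape to what get_input_details() reports:
# logits = run_tflite(np.random.rand(1, 3, 224, 224).astype(np.float32),
#                     model_path="vit_trashnet.tflite")
```

If the output shape and class scores look sane here, the conversion itself is fine and any remaining problems are on the Flutter side.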
Optional: Quantization for Size Reduction
To shrink the model from ~340 MB:

```python
converter.optimizations = [tf.lite.Optimize.DEFAULT]

# Optional: representative dataset, required for full int8 quantization
# converter.representative_dataset = your_data_gen

# Optional: allow float16 weights instead of float32
# converter.target_spec.supported_types = [tf.float16]
```
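For full int8 quantization, `TFLiteConverter` expects `representative_dataset` to be a callable that yields lists of input arrays (one array per model input) so it can calibrate activation ranges. A sketch with random data standing in for real samples — in practice feed ~100 real preprocessed images, and the `1×3×224×224` shape is my assumption for this ViT:

```python
import numpy as np

# Stand-in calibration data; replace with real preprocessed images
# (around 100 samples is typical for calibration).
calibration_images = np.random.rand(8, 1, 3, 224, 224).astype(np.float32)

def representative_dataset():
    # The converter calls this to observe activation ranges for int8 calibration.
    for image in calibration_images:
        yield [image]  # one array per model input

# Then: converter.representative_dataset = representative_dataset
```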
Quantization can reduce model size 4–8× with little accuracy loss.
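The 4–8× figure is simple arithmetic: float32 weights take 4 bytes each, so int8 storage is 4× smaller (float16 is 2×), and the converter’s graph optimizations can save a bit more. A toy numpy sketch of symmetric per-tensor int8 quantization, just to illustrate the size/error trade-off (not the converter’s actual internals):

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.standard_normal(10_000).astype(np.float32)

# Symmetric per-tensor int8: map [-max|w|, +max|w|] onto [-127, 127].
scale = np.abs(weights).max() / 127.0
quantized = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
dequantized = quantized.astype(np.float32) * scale

size_ratio = weights.nbytes / quantized.nbytes   # 4.0: int8 uses 1 byte vs 4
max_error = np.abs(weights - dequantized).max()  # at most about scale / 2
```

Applied to a ~340 MB float32 model, int8 would land around 85 MB, which is far more reasonable to bundle for offline use.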
Alternative: Use `onnx2tf`

If the `onnx-tf` route is still problematic, try `onnx2tf` – it’s actively maintained and often succeeds where `onnx-tf` fails.
```shell
pip install onnx2tf
onnx2tf -i vit_trashnet.onnx -o vit_trashnet_tf
```
Then convert the exported directory to `.tflite` using `TFLiteConverter`.
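That final step is the same `TFLiteConverter` call from step 2, just pointed at whatever directory `onnx2tf` produced. A small helper sketch — the `convert_saved_model` name and the `float16` switch are mine, not part of either tool:

```python
import tensorflow as tf

def convert_saved_model(saved_model_dir, float16=False):
    """Convert a SavedModel directory to TFLite flatbuffer bytes."""
    converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    if float16:
        # Store weights as float16 (~2x smaller) while keeping float I/O.
        converter.target_spec.supported_types = [tf.float16]
    return converter.convert()

# Hypothetical usage:
# tflite_bytes = convert_saved_model("vit_trashnet_tf", float16=True)
# with open("vit_trashnet.tflite", "wb") as f:
#     f.write(tflite_bytes)
```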