I am trying to use a TensorFlow Lite model to make predictions on input data. The neural network was trained with Keras and then converted to a standard TFLite model as follows:
keras_file = "pruned_model.h5"
converter = lite.TocoConverter.from_keras_model_file(keras_file)
tflite_model = converter.convert()
And saved it locally:
TFLITE_MODEL = "dts-model.tflite"
with open(TFLITE_MODEL, "wb") as f:
    f.write(tflite_model)
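To rule out a corrupted export, one quick sanity check is that a valid `.tflite` file is a FlatBuffer carrying the file identifier `TFL3` at byte offset 4. A minimal sketch (the helper name is mine):

```python
def looks_like_tflite(path):
    """Return True if the file carries the TFLite FlatBuffer identifier.

    A serialized TFLite model stores the FlatBuffer file_identifier
    "TFL3" at byte offset 4; anything else suggests a bad export.
    """
    with open(path, "rb") as f:
        header = f.read(8)
    return len(header) == 8 and header[4:8] == b"TFL3"
```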
Then I loaded the model into an Interpreter so that inference can run on it:
tflite_interpreter = tf.lite.Interpreter(model_path=TFLITE_MODEL)
# extracted input and output details
input_details = tflite_interpreter.get_input_details()
output_details = tflite_interpreter.get_output_details()
# created random sample data matching the input shape
input_shape = input_details[0]['shape']
input_data = np.array(np.random.random_sample(input_shape), dtype=np.float32)
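Before calling `set_tensor`, it may be worth verifying that `input_data` matches what `input_details` reports, since a shape or dtype mismatch hits native code and can take the whole process down rather than raising a Python error. A numpy-only sketch (the helper name is mine; the details dict in the test mimics one entry of `get_input_details()`):

```python
import numpy as np

def check_input(input_detail, input_data):
    """Raise a readable error instead of letting a mismatch crash the kernel."""
    expected_shape = tuple(input_detail['shape'])
    expected_dtype = input_detail['dtype']
    if tuple(input_data.shape) != expected_shape:
        raise ValueError(f"shape mismatch: got {input_data.shape}, "
                         f"expected {expected_shape}")
    if input_data.dtype != expected_dtype:
        raise ValueError(f"dtype mismatch: got {input_data.dtype}, "
                         f"expected {np.dtype(expected_dtype)}")
```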
When I try to run the following line, the notebook kernel keeps crashing:
tflite_interpreter.set_tensor(input_details[0]['index'], input_data)
[Kernel crash/restart screenshot]
EDIT: Here's the terminal output from the kernel. Before executing the line above, the interpreter initializes fine; after executing it, the kernel restarts.
INFO: Initialized TensorFlow Lite runtime.
[I 09:04:00.289 LabApp] KernelRestarter: restarting kernel (1/5), keep random ports
kernel d960c8d0-d14d-416a-bee3-3549c6a11c76 restarted
And here's the list of kernels for Jupyter:
jupyter kernelspec list
>> Available kernels:
>> python3 /home/username/anaconda3/envs/LearningML/share/jupyter/kernels/python3
I am not exactly sure why this is happening.
1) Why do Jupyter kernels restart in general, and what is causing my kernel to restart here?
2) Is there a better way to run TFLite model inference and get the outputs?
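For reference, my understanding of the required call order from the docs is roughly the following. This is a sketch, not verified against my setup; note that it calls `allocate_tensors()` before `set_tensor()`, which my code above does not, and I suspect calling `set_tensor()` on unallocated tensors is what crashes the kernel:

```python
import numpy as np

def run_inference(interpreter, input_data):
    """One forward pass through a tf.lite.Interpreter-style object.

    allocate_tensors() must run before set_tensor(); skipping it
    leaves the tensor buffers unallocated.
    """
    interpreter.allocate_tensors()
    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()
    interpreter.set_tensor(input_details[0]['index'], input_data)
    interpreter.invoke()
    return interpreter.get_tensor(output_details[0]['index'])
```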
Thanks!