
Fig. 4.

Operation of TensorFlow Lite Micro (TFLM), an interpreter-based inference engine. (a) The training graph is frozen, optimized, and converted into a FlatBuffer-serialized model, suitable for deployment on embedded devices. (b) The TFLM runtime preallocates a region of SRAM (called the arena) and bin-packs tensor lifetimes into it at runtime to minimize memory usage (figure adapted from [167]).
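The arena mechanism in (b) is visible directly in the TFLM C++ API: the developer reserves a fixed buffer, and AllocateTensors() runs the memory planner that packs all intermediate tensors into it, so no heap allocation occurs during inference. The following is a minimal sketch modeled on the TFLM hello_world example, assuming the FlatBuffer from step (a) has been embedded in the binary as a byte array g_model_data (a hypothetical symbol, e.g., produced with `xxd -i model.tflite`), that the model takes and returns a single float, and that it uses only fully-connected and softmax kernels; the MicroInterpreter constructor signature has varied slightly across TFLM releases (older versions also took an ErrorReporter argument).

```cpp
#include <cstdint>

#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"

// Hypothetical symbol: the FlatBuffer-serialized model from step (a),
// compiled into the binary as a C array.
extern const unsigned char g_model_data[];

// Step (b): the arena is a developer-sized, statically reserved block of
// SRAM. AllocateTensors() fails if the planner cannot fit the model into it.
constexpr int kTensorArenaSize = 10 * 1024;
alignas(16) static uint8_t tensor_arena[kTensorArenaSize];

static tflite::MicroInterpreter* interpreter = nullptr;

void setup() {
  // GetModel() maps the FlatBuffer in place; nothing is parsed or unpacked.
  const tflite::Model* model = tflite::GetModel(g_model_data);

  // Register only the kernels this (assumed) model needs, keeping flash
  // footprint small.
  static tflite::MicroMutableOpResolver<2> resolver;
  resolver.AddFullyConnected();
  resolver.AddSoftmax();

  static tflite::MicroInterpreter static_interpreter(
      model, resolver, tensor_arena, kTensorArenaSize);
  interpreter = &static_interpreter;

  // Runs the memory planner: tensor lifetimes are bin-packed into the arena,
  // so no dynamic allocation happens during subsequent Invoke() calls.
  interpreter->AllocateTensors();
}

float infer(float x) {
  // Input and output buffers are slices of the same preallocated arena.
  interpreter->input(0)->data.f[0] = x;
  interpreter->Invoke();
  return interpreter->output(0)->data.f[0];
}
```

If AllocateTensors() fails, the arena is too small; in recent TFLM releases, interpreter->arena_used_bytes() reports how much of the arena a successful plan actually consumed, which is a common way to trim kTensorArenaSize for a given model.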