Building/Using custom models

bkgoodman
Posts: 45
Joined: Fri Feb 17, 2017 12:41 pm

Building/Using custom models

Postby bkgoodman » Tue Oct 26, 2021 6:13 pm

I've been reading:

https://github.com/espressif/esp-dl/tre ... r/tutorial

...and there seems to be either some detail missing (or a huge gap in my own knowledge).

I have built and trained (and can modify, if needed) a very simple categorical-classification TensorFlow model (with Keras). I have converted it to TensorFlow Lite and quantized it down to an unsigned 8-bit model, which works well in a simple command-line bench test with TensorFlow Lite Micro under Linux.

My models (the original, lite, and quantized) are all .h5 files - and in previous examples (and TensorFlow Lite Micro projects) it has simply been a matter of running the quantized file (model, weights, and biases) through xxd or something so I can feed the binary model into TFLite Micro. (This is literally how I use it on the Linux TFLite Micro - i.e. my Lite Micro inference code just "loads" the model.)
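The `xxd -i`-style conversion described above can also be done from Python. This is a minimal sketch, assuming a quantized flatbuffer on disk; the variable name `g_model` and the 12-bytes-per-line layout are arbitrary choices, not anything the project requires:

```python
# Sketch: emulate `xxd -i` in Python, turning quantized model bytes into
# a C array that TFLite Micro can load from flash. Names are placeholders.

def bytes_to_c_array(data: bytes, var_name: str = "g_model") -> str:
    """Render raw model bytes as a C source snippet."""
    lines = []
    for i in range(0, len(data), 12):          # 12 bytes per source line
        chunk = data[i:i + 12]
        lines.append("  " + ", ".join(f"0x{b:02x}" for b in chunk) + ",")
    body = "\n".join(lines)
    return (
        f"const unsigned char {var_name}[] = {{\n{body}\n}};\n"
        f"const unsigned int {var_name}_len = {len(data)};\n"
    )
```

Typical usage would be to read the quantized model file in binary mode and write `bytes_to_c_array(f.read())` out to a `.cc` file that gets compiled into the firmware.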

But this tutorial (if I understand correctly) seems to want me to create a JSON representation of my model, AND rewrite it in C++ code - and then (I'm not exactly sure) take the weights and biases and convert them to numpy arrays? There was no explanation of how to get this data out of a model and into these numpy arrays.

I don't know if you have any more clarity on this - or how I could do this from a real-world trained TensorFlow/Keras model?

Is there some way I could (write a script to) extract the appropriate layer info and coefficients from my .h5 file, and convert them to the numpy arrays and JSON config?
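Extracting per-layer coefficients from a trained Keras model is scriptable. A minimal sketch, assuming you have already pulled `(name, weights)` pairs out of the model (e.g. `[(l.name, l.get_weights()) for l in model.layers]`); the file layout and JSON keys here are placeholders, not esp-dl's required format:

```python
# Sketch: dump each layer's arrays to .npy files plus a JSON config
# describing them. Filenames and config schema are illustrative only.
import json
import os
import numpy as np

def export_coefficients(layers, out_dir):
    """layers: iterable of (layer_name, [kernel, bias, ...]) pairs.
    Writes one .npy per array and a model_config.json describing them."""
    os.makedirs(out_dir, exist_ok=True)
    config = {}
    for name, arrays in layers:
        entries = []
        for i, arr in enumerate(arrays):
            fname = f"{name}_{i}.npy"          # e.g. conv1_0.npy = kernel
            np.save(os.path.join(out_dir, fname), arr)
            entries.append({"file": fname, "shape": list(arr.shape)})
        config[name] = entries
    with open(os.path.join(out_dir, "model_config.json"), "w") as f:
        json.dump(config, f, indent=2)
    return config
```

Whether esp-dl expects exactly this shape of JSON is a separate question; the point is that `layer.get_weights()` gives you the raw numpy arrays, so nothing is locked inside the .h5 file.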

P.S.

My current project is public, and here: https://github.com/bkgoodman/espcam_training_tools
It is kind of a giant experimental mess right now - but it is a start-to-finish flow that lets you:

1. Acquire images directly from ESP32-CAM (Web GUI)
2. Save and categorize them (on a host computer) (Web GUI)
3. Manually (human) crop/resize, review, recategorize, reject, etc. (Think: Mechanical Turk) (Web GUI)
4. Convert source images into ones ready for training (i.e. take all crop info, crop them, and resize down to 28x28 or whatever for training)
5. Run training
6. Manually test inference on new or existing images (Web GUI)
7. Generate confusion matrix for verification
8. Convert to TFLite
9. Quantize to 8-bit
10. Test Inference w/ 8-bit TFlite model (on Linux)
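The 8-bit quantization in step 9 boils down to an affine mapping per tensor. This is a sketch of that math only (what TFLite's full-integer quantization applies internally), not the converter itself:

```python
# Minimal illustration of affine uint8 quantization: each float tensor is
# mapped to uint8 via a scale and zero point, with the real range forced
# to include 0 so that 0.0 is exactly representable.
import numpy as np

def quantize_uint8(x):
    """Return (quantized array, scale, zero_point) for float array x."""
    lo, hi = float(x.min()), float(x.max())
    lo, hi = min(lo, 0.0), max(hi, 0.0)       # range must contain 0
    scale = (hi - lo) / 255.0 or 1.0          # avoid divide-by-zero
    zero_point = int(round(-lo / scale))
    q = np.clip(np.round(x / scale) + zero_point, 0, 255).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float values from the quantized array."""
    return (q.astype(np.float32) - zero_point) * scale
```

Round-tripping a tensor through these two functions shows the quantization error is bounded by roughly one scale step, which is why the 8-bit model in step 10 can still match the float model's predictions.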

So - I am very close to having a complete "A-to-Z" workflow for collecting and training against new images - with the resulting model ready to use on the ESP32 for inference... just trying to get the last piece working!

bkgoodman
Posts: 45
Joined: Fri Feb 17, 2017 12:41 pm

Re: Building/Using custom models

Postby bkgoodman » Mon Nov 01, 2021 12:47 pm

.....I guess that's a "no" then?? :|

yehangyang
Posts: 6
Joined: Wed Feb 27, 2019 12:24 pm

Re: Building/Using custom models

Postby yehangyang » Tue Nov 02, 2021 8:06 am

Hi bkgoodman,

If you don't know how to save the coefficients into .npy files, you can replace steps 1-3 here https://github.com/espressif/esp-dl/tre ... r/tutorial with our quantization tool https://github.com/espressif/esp-dl/blo ... /README.md. The Calibrator can generate coefficient.cpp and coefficient.hpp for you. For example, after https://github.com/espressif/esp-dl/blo ... ple.py#L38 runs, coefficient.cpp and coefficient.hpp will be generated.
