This is a computer vision project focused on blood vessel segmentation in medical images. A U-Net architecture is trained with TensorFlow/Keras in Python, and the trained model is then converted to TensorFlow Lite for efficient inference in a C++ application.
The goal is to build a complete and optimized pipeline, from data preprocessing to embedded inference using TFLite.
📌 Objectives
- Train a U-Net neural network for blood vessel segmentation
- Export the trained model to .tflite format
- Integrate and run inference with TensorFlow Lite in C++
- Ensure performance and portability for embedded systems
⚙️ Technologies Used
- Python
- TensorFlow / Keras
- U-Net (segmentation architecture)
- TensorFlow Lite
- C++ (with CMake)
- OpenCV (for image handling in C++)
The dataset used in this project is the retinal blood vessel segmentation dataset available on Kaggle. It contains 50 fundus images of the eye along with their corresponding vessel annotations.
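Before training, each image/annotation pair is typically normalized. A minimal sketch of such a preprocessing step (the function name and the binarization threshold are illustrative assumptions, not taken from this repository's code):

```python
import numpy as np

def preprocess_pair(image: np.ndarray, mask: np.ndarray):
    """Scale a fundus image to [0, 1] and binarize its vessel annotation."""
    image = image.astype(np.float32) / 255.0
    # Annotations are grayscale; threshold them to vessel = 1, background = 0
    mask = (mask > 127).astype(np.float32)
    return image, mask
```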
- Create a folder called `models` in the project root.
- Then, from this link, you can download the original `.keras` model and the `.tflite` model (we will use the latter to perform the inference).
TensorFlow Lite is a lightweight version of TensorFlow designed to run machine learning models on devices with limited resources, such as smartphones, microcontrollers, and embedded systems. It enables on-device inference, which improves speed, ensures data privacy, reduces internet usage, and allows offline operation.
Models are first trained in standard TensorFlow and then converted to the .tflite format. TFLite supports optimizations like quantization to reduce model size and improve performance. It also offers hardware acceleration through GPU, NNAPI, and Edge TPU, and supports programming in Python, Java, C++, and Swift.
Common applications include image classification, object detection, speech recognition, and offline translation. TFLite is ideal for deploying efficient and fast machine learning solutions directly on edge devices.
- This step is only necessary if you retrain the model or want to run the conversion step on its own.
- To convert the trained TensorFlow model into a `.tflite` model, simply run the `create_tf_lite_model.py` file with the following command:

```bash
python create_tf_lite_model.py
```
- The `.tflite` file will be saved in the `models` folder with the name `segment_model.tflite`.
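The conversion script follows the standard TFLite converter flow. A hedged sketch of what such a script typically does (the helper name and the quantization flag are illustrative, not necessarily this repository's exact code):

```python
import tensorflow as tf

def convert_to_tflite(model: tf.keras.Model) -> bytes:
    """Convert a trained Keras model into a TFLite flatbuffer."""
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    # Optional: post-training quantization to shrink the model for embedded use
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    return converter.convert()

# Usage sketch (paths are assumptions based on this repository's layout):
# model = tf.keras.models.load_model("models/model.keras")
# with open("models/segment_model.tflite", "wb") as f:
#     f.write(convert_to_tflite(model))
```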
- Clone the repository:

```bash
git clone https://github.com/colaresm/blood-vessel-segmentation.git
cd blood-vessel-segmentation
```
- This repository already contains a trained model ready to perform inference. If you want to change any parameter (see `train.py`) and retrain, follow the instructions below; otherwise, skip to the next step.
- To perform a new training, make sure to download the dataset and place it inside a folder called `data`.
- To run the training, simply use the following command:

```bash
python train.py
```
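The actual architecture lives in `train.py`; for orientation, a small U-Net with the usual encoder/decoder shape and skip connections might look like this (depth, filter counts, and input size are assumptions, not this repository's exact configuration):

```python
import tensorflow as tf
from tensorflow.keras import layers

def conv_block(x, filters):
    """Two 3x3 convolutions with ReLU, the basic U-Net building block."""
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def build_unet(input_shape=(128, 128, 3)):
    inputs = tf.keras.Input(shape=input_shape)
    # Encoder: downsample while widening feature maps
    c1 = conv_block(inputs, 16)
    p1 = layers.MaxPooling2D()(c1)
    c2 = conv_block(p1, 32)
    p2 = layers.MaxPooling2D()(c2)
    # Bottleneck
    b = conv_block(p2, 64)
    # Decoder: upsample and concatenate the matching encoder output
    u2 = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(b)
    c3 = conv_block(layers.Concatenate()([u2, c2]), 32)
    u1 = layers.Conv2DTranspose(16, 2, strides=2, padding="same")(c3)
    c4 = conv_block(layers.Concatenate()([u1, c1]), 16)
    # One sigmoid channel: per-pixel vessel probability
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(c4)
    return tf.keras.Model(inputs, outputs)
```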
- The file responsible for reading the `.tflite` model and executing the entire inference process is located in `inference/main.cpp`.
- If you are using Windows:
```bash
cd inference
mkdir build && cd build
cmake ..
# Open TFLiteCheck.sln in Visual Studio and build Release x64
cd Release
.\SegmentBloodVessels path_of_image
```
- If you are using macOS:

```bash
cd inference
mkdir build && cd build
cmake ..
make
./SegmentBloodVessels path_of_image
```
- The following result will be displayed:
- The `images` folder has some examples that you can use for testing.
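For reference, the C++ inference in `inference/main.cpp` follows the standard TFLite interpreter flow: load the model, allocate tensors, fill the input tensor, invoke, and read the output. The same steps expressed in Python (the function name and preprocessing details are assumptions, not a transcription of the C++ code):

```python
import numpy as np
import tensorflow as tf

def segment(model_path: str, image: np.ndarray) -> np.ndarray:
    """Run a .tflite segmentation model on one preprocessed float image."""
    interpreter = tf.lite.Interpreter(model_path=model_path)
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]
    # Add the batch dimension the model expects, then run inference
    interpreter.set_tensor(inp["index"], image[np.newaxis].astype(np.float32))
    interpreter.invoke()
    # Drop the batch dimension: result is a per-pixel vessel probability map
    return interpreter.get_tensor(out["index"])[0]
```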
- Fork the repository.
- Create a branch for your changes (`git checkout -b feature/new-feature`).
- Commit your changes (`git commit -am 'Add new feature'`).
- Push to the branch (`git push origin feature/new-feature`).
- Open a pull request.
- Email: colaresmarcelo2022@gmail.com
- LinkedIn: engmarcelocolares


