Diving into Machine Learning
Machine learning has already transformed entire industries, and it will continue to do so as the field advances. Systems that learn from data to make informed decisions enable enormous flexibility and customization.
Image Recommendation System
My goal for this application was to simulate the recommendation algorithms social media platforms use to drive engagement: surfacing relevant images based on each user's preferences and interactions.
Technical details: I collected user interaction data, including likes, comments, and hover times, then preprocessed 5,000+ images with Python and OpenCV, resizing them to consistent input dimensions and normalizing pixel values.
For feature extraction I used a pretrained ResNet50. This convolutional neural network (CNN) was important for generating an embedding for each image.
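A sketch of using ResNet50 as a frozen feature extractor: with `include_top=False` and `pooling="avg"`, each image maps to a 2048-dimensional embedding. I pass `weights=None` here only to keep the sketch runnable offline; the setup described in the post would use the pretrained weights (`weights="imagenet"`).

```python
import numpy as np
from tensorflow.keras.applications import ResNet50

# weights=None keeps this sketch offline; use weights="imagenet" in practice
extractor = ResNet50(include_top=False, pooling="avg",
                     weights=None, input_shape=(224, 224, 3))
extractor.trainable = False  # feature extraction only, no fine-tuning

batch = np.random.rand(1, 224, 224, 3).astype(np.float32)
embedding = extractor.predict(batch, verbose=0)
print(embedding.shape)  # (1, 2048)
```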
I implemented the recommendation step with K-Nearest Neighbors (KNN), which I felt was the best fit for computing similarity scores between image embeddings.
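One way to sketch that KNN step: given per-image embeddings, find the k images most similar to one a user engaged with. The cosine metric, k=5, and the random stand-in embeddings are all my assumptions for illustration.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
embeddings = rng.random((100, 2048))  # stand-in for real image embeddings

# Fit KNN over the catalog; cosine distance compares embedding directions
knn = NearestNeighbors(n_neighbors=5, metric="cosine")
knn.fit(embeddings)

liked_image = embeddings[42:43]  # an image the user interacted with
distances, indices = knn.kneighbors(liked_image)
print(indices[0])  # the 5 most similar images (the query itself ranks first)
```

The nearest neighbor of a catalog image is always itself, so in practice the query index would be filtered out before recommending.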
Overall, the system improved recommendation accuracy by 25% and cut data preparation time by a drastic 60%.
Guava Health Detector
My second project opened the door to training my own custom model. After a lot of reading through the TensorFlow and Keras documentation, I built a model that could classify the health of a guava with high confidence from a user-supplied image.
Technical details: I normalized all pixel values to the range [0, 1]. For augmentation, I built a custom pipeline to introduce variability into the training data, which substantially reduced overfitting.
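An augmentation pipeline along those lines can be sketched with Keras preprocessing layers that apply random transforms on the fly during training. The specific transforms and their ranges are my assumptions; the post does not list which ones the custom pipeline used.

```python
import numpy as np
import tensorflow as tf

# Assumed transforms: horizontal flips, small rotations, and mild zooms
augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.1),  # factor is a fraction of 2*pi
    tf.keras.layers.RandomZoom(0.1),
])

# Pixel values already normalized to [0, 1], as described above
images = np.random.rand(4, 224, 224, 3).astype(np.float32)
augmented = augment(images, training=True)
print(augmented.shape)  # (4, 224, 224, 3)
```

Because the layers only activate when `training=True`, the same model sees clean images at inference time.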
The model architecture consisted of a fine-tuned MobileNetV2, a pretrained convolutional neural network (CNN). It offered two key advantages, efficiency and high-quality feature extraction, which were vital given the size of the dataset, the length of training, and the need for confident predictions on user-supplied images.
I compiled the model with the Adam optimizer, categorical crossentropy loss (well suited to multi-class classification), and accuracy as the evaluation metric.
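Putting those pieces together, a minimal sketch of the build-and-compile step might look like this. The number of classes (3) and the small classification head are my assumptions, and `weights=None` keeps the sketch runnable offline where the real model would load `weights="imagenet"`.

```python
import tensorflow as tf

NUM_CLASSES = 3  # assumed number of guava health categories

# weights=None keeps this sketch offline; use weights="imagenet" in practice
base = tf.keras.applications.MobileNetV2(
    include_top=False, weights=None, input_shape=(224, 224, 3), pooling="avg")
base.trainable = False  # freeze the base before fine-tuning the head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
print(model.output_shape)  # (None, 3)
```

Freezing the base first and unfreezing its top layers later (with a lower learning rate) is the usual two-phase fine-tuning recipe for a pretrained backbone like this.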
Image Colorizer
This project was hard at every stage: building, training, and testing. The goal was unique: I wanted to convert a grayscale image to color, and vice versa. I found a Kaggle dataset that was perfect for training, and I planned my approach carefully and thoroughly ahead of time.
Technical details:
Generative Adversarial Networks (GANs): Well suited to producing realistic outputs, since a generator creates candidate colorizations (which was extremely handy) while a discriminator evaluates their realism.
U-Net for the generator: Its skip connections carry low-level details from the input through to the output, preserving textures and edges.
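The U-Net idea above can be sketched as a tiny encoder-decoder where each decoder stage concatenates the matching encoder feature map — the skip connection that preserves edges and textures. The depths, filter counts, and 128x128 input size are my assumptions, far smaller than a real colorization generator.

```python
import tensorflow as tf
from tensorflow.keras import layers

def tiny_unet(input_shape=(128, 128, 1)):
    inp = layers.Input(shape=input_shape)  # grayscale input
    # Encoder: downsample while growing the channel count
    e1 = layers.Conv2D(16, 3, padding="same", activation="relu")(inp)
    p1 = layers.MaxPooling2D()(e1)
    e2 = layers.Conv2D(32, 3, padding="same", activation="relu")(p1)
    p2 = layers.MaxPooling2D()(e2)
    b = layers.Conv2D(64, 3, padding="same", activation="relu")(p2)
    # Decoder: upsample and concatenate the matching encoder features
    u2 = layers.Concatenate()([layers.UpSampling2D()(b), e2])   # skip
    d2 = layers.Conv2D(32, 3, padding="same", activation="relu")(u2)
    u1 = layers.Concatenate()([layers.UpSampling2D()(d2), e1])  # skip
    d1 = layers.Conv2D(16, 3, padding="same", activation="relu")(u1)
    out = layers.Conv2D(3, 1, activation="sigmoid")(d1)  # RGB output
    return tf.keras.Model(inp, out)

gen = tiny_unet()
print(gen.output_shape)  # (None, 128, 128, 3)
```

In the full GAN this generator would be trained against a discriminator that scores real color images versus the generator's outputs.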
Augmentation provided a simple way to reduce overfitting without requiring additional data.