June 9, 2025
Day 10 – Exploring MobileNet and Keyword Searching
What I Learned
We started the day by briefing Mr. Pelumi on what we did last week. Among other things, I studied transfer learning in depth and experimented with some lightweight deep learning models. Since Mr. Pelumi assigned each of us a deep model to tinker with, I began working with MobileNetV2. The model is generally accurate, but when I fed it a picture of the Danfe, Nepal’s national bird, it couldn’t even classify it as a bird. Perhaps it wasn’t trained on this specific species.

We then met Mr. Pelumi at 11 AM in a conference room beside our lab. There, he briefed us on how to properly search keywords in paper databases to trim down search results. This was new to me: I learned that wrapping the keywords in inverted commas makes the search engine look for them as a single phrase instead of separately. For example, entering [“driver drowsiness detection”] searches for the three words as one contiguous phrase, whereas entering [driver drowsiness detection] looks up the keywords individually.
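The difference between quoted and unquoted searches can be illustrated with a small Python sketch (the paper titles below are made up for demonstration):

```python
# Toy illustration of quoted-phrase vs. individual-keyword search.
titles = [
    "driver drowsiness detection using CNNs",
    "detection of drowsiness in a novice driver",
    "a survey of lane detection methods",
]

def matches_phrase(text, phrase):
    """Quoted search: the exact phrase must appear contiguously."""
    return phrase.lower() in text.lower()

def matches_all_keywords(text, keywords):
    """Unquoted search: every keyword must appear, anywhere, in any order."""
    text = text.lower()
    return all(word.lower() in text for word in keywords)

phrase_hits = [t for t in titles if matches_phrase(t, "driver drowsiness detection")]
keyword_hits = [t for t in titles if matches_all_keywords(t, ["driver", "drowsiness", "detection"])]

print(phrase_hits)   # only the first title
print(keyword_hits)  # first and second titles
```

The quoted query returns only the first title, while the unquoted query also matches the second, where the same words appear scattered in a different order.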
After the lunch break, I returned to experimenting with another model, DenseNet. I watched a YouTube video that walked through the paper on DenseNet.
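The central idea in the DenseNet paper is dense connectivity: each layer receives the concatenated feature maps of all preceding layers as input and contributes a fixed number of new features (the growth rate). A minimal NumPy sketch of that wiring on flat feature vectors (the per-layer transform here is a made-up random projection standing in for the real BN-ReLU-Conv block):

```python
import numpy as np

rng = np.random.default_rng(0)

def dense_block(x, num_layers=4, growth_rate=12):
    """Sketch of DenseNet-style connectivity on flat feature vectors.

    Each 'layer' sees the concatenation of the input and every previous
    layer's output, and adds `growth_rate` new features of its own.
    """
    features = [x]
    for _ in range(num_layers):
        inp = np.concatenate(features)            # all earlier outputs
        w = rng.standard_normal((growth_rate, inp.size))
        out = np.maximum(w @ inp, 0.0)            # stand-in for BN-ReLU-Conv
        features.append(out)
    return np.concatenate(features)

x = rng.standard_normal(16)
y = dense_block(x)
print(y.size)  # 16 input features + 4 layers * 12 new features = 64
```

Because every layer's output is kept and reused, the block's output grows linearly with depth, which is the feature-reuse property the paper emphasizes.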
Blockers
No issues faced.
Reflection
Today, I learned a proper way to search for articles, which will definitely be helpful when I look for resources in the future. I also searched GitHub for ensemble models and found a solid repo on stacking MobileNet and ResNet, similar to what we will be doing. I will try to understand and reverse-engineer that code to fit our project's needs. Tomorrow, I will continue reading architecture papers on other models.
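I don't know yet exactly how that repo combines the two networks, but one common baseline for combining base models like MobileNet and ResNet is soft voting: averaging their class-probability outputs (stacking proper goes a step further and trains a meta-classifier on those outputs). A minimal sketch with made-up softmax outputs for a 3-class problem:

```python
import numpy as np

def soft_vote(prob_a, prob_b, weight_a=0.5):
    """Combine two models' class probabilities by weighted averaging."""
    return weight_a * prob_a + (1.0 - weight_a) * prob_b

# Hypothetical softmax outputs from two base models for one image.
mobilenet_probs = np.array([0.6, 0.3, 0.1])
resnet_probs    = np.array([0.2, 0.7, 0.1])

combined = soft_vote(mobilenet_probs, resnet_probs)
print(combined)           # [0.4 0.5 0.1]
print(combined.argmax())  # predicted class: 1
```

Here the two base models disagree (classes 0 and 1), and the averaged distribution settles on class 1; a trained meta-learner would instead learn how much to trust each model per class.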