Description as a Tweet:

It's a system that helps visually impaired people detect stray animals on the road while walking, and also helps self-driving cars detect strays, saving the animals' lives.

Inspiration:

I wish to protect stray animals and give them a secure future even as self-driving cars arrive on the roads. I also wish to help visually impaired people lead more independent lives.

What it does:

It separates the stray animal from the background in an image, which tells the user that there is an animal in front of them.
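Concretely, a semantic segmentation model assigns every pixel a class label, and "an animal is in front of us" falls out of checking whether any pixel got the animal label. A minimal sketch with hypothetical per-pixel scores (the array values below are made up for illustration, not model output):

```python
import numpy as np

# Hypothetical per-pixel class scores from a segmentation model:
# shape (H, W, num_classes); class 0 = background, class 1 = stray animal.
scores = np.zeros((4, 4, 2))
scores[..., 0] = 0.9            # background wins everywhere...
scores[1:3, 1:3, 1] = 0.95      # ...except a 2x2 patch where "animal" wins

mask = scores.argmax(axis=-1)   # per-pixel class label map
animal_present = bool((mask == 1).any())
animal_pixels = int((mask == 1).sum())

print(animal_present, animal_pixels)  # True 4
```

In practice the mask also tells you *where* the animal is, which is what makes segmentation useful for warning a walking user or steering a vehicle.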

How we built it:

Using a computer vision algorithm: semantic segmentation with a U-Net built on a MobileNet encoder.
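A minimal sketch of this kind of architecture, assuming a tf.keras MobileNetV2 backbone (the tapped layer names below are the standard Keras MobileNetV2 ones; the filter sizes and input resolution are illustrative choices, not the team's exact configuration):

```python
import tensorflow as tf

def build_unet(input_shape=(128, 128, 3), num_classes=2):
    # MobileNetV2 encoder. weights=None keeps the sketch self-contained;
    # a real project would typically start from ImageNet weights.
    base = tf.keras.applications.MobileNetV2(
        input_shape=input_shape, include_top=False, weights=None)

    # Feature maps tapped at decreasing resolutions for U-Net skip connections.
    skip_names = [
        'block_1_expand_relu',   # 64x64
        'block_3_expand_relu',   # 32x32
        'block_6_expand_relu',   # 16x16
        'block_13_expand_relu',  # 8x8
        'block_16_project',      # 4x4 (bottleneck)
    ]
    outputs = [base.get_layer(n).output for n in skip_names]
    encoder = tf.keras.Model(inputs=base.input, outputs=outputs)

    inputs = tf.keras.Input(shape=input_shape)
    *skips, x = encoder(inputs)

    # Decoder: upsample and concatenate with the matching skip connection.
    for filters, skip in zip([512, 256, 128, 64], reversed(skips)):
        x = tf.keras.layers.Conv2DTranspose(filters, 3, strides=2, padding='same')(x)
        x = tf.keras.layers.Concatenate()([x, skip])
        x = tf.keras.layers.Conv2D(filters, 3, padding='same', activation='relu')(x)

    # Final upsample back to input resolution; per-pixel class logits.
    x = tf.keras.layers.Conv2DTranspose(num_classes, 3, strides=2, padding='same')(x)
    return tf.keras.Model(inputs=inputs, outputs=x)

model = build_unet()
print(model.output_shape)  # (None, 128, 128, 2)
```

The output is one logit vector per pixel (background vs. animal here), which is then trained with a per-pixel cross-entropy loss against segmentation masks.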

Technologies we used:

  • Python
  • AI/Machine Learning

Challenges we ran into:

Model training was the toughest part and took the most time.

Accomplishments we're proud of:

Learnt a new algorithm to achieve this.

What we've learned:

What semantic segmentation is, and how U-Net and MobileNet work.

What's next:

This same methodology can be extended to give blind people a full understanding of a scene, with a speech explanation of what is detected.
It can also be used by self-driving cars to understand scenes and take driving actions based on them.

Built with:

Google Colab, TensorFlow, Python, GitHub

Prizes we're going for:

  • Fujifilm Instax Mini 11 Instant Film Camera, Sky Blue
  • Cash prizes: $1,000 total to first place team; $500 total to second place team; $200 to third place team
  • JBL Clip Bluetooth Speakers
  • $500 total to the winning team
  • Meta Portals

Team Members

Shriya Natesan

Table Number

Table TBD