An intelligent fashion recommendation system that combines voice interaction and web scraping to help visually impaired individuals explore and choose clothing.
This project aims to create an accessible fashion assistant that transforms visual fashion information into voice-based interactions, making online fashion shopping more inclusive for visually impaired users.
- Scrapes clothing information from major fashion retailers (e.g., H&M, Prada)
- Collects detailed product information (a minimal scraping sketch follows this list), including:
  - Product names
  - Colors
  - Materials
  - Descriptions
  - Patterns
  - Images
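A minimal scraping sketch, assuming a hypothetical listing URL and CSS selectors (`.product-card`, `.product-name`, and so on); real retailer pages use different markup, often require JavaScript rendering or an official API, and have their own terms of use:

```python
import requests
from bs4 import BeautifulSoup

# Hypothetical URL for illustration only; inspect the target
# retailer's markup (and robots.txt / terms) before scraping.
LISTING_URL = "https://example-retailer.com/women/dresses"

def scrape_products(url: str) -> list[dict]:
    """Fetch a listing page and extract basic product attributes."""
    resp = requests.get(
        url, headers={"User-Agent": "fashion-assistant/0.1"}, timeout=10
    )
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")

    products = []
    for card in soup.select(".product-card"):  # assumed selector
        products.append({
            "name": card.select_one(".product-name").get_text(strip=True),
            "color": card.select_one(".product-color").get_text(strip=True),
            "material": card.select_one(".product-material").get_text(strip=True),
            "description": card.select_one(".product-description").get_text(strip=True),
            "image_url": card.select_one("img")["src"],
        })
    return products
```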
- Utilizes OpenAI's CLIP (Contrastive Language-Image Pre-Training) model
- Converts fashion images into vector embeddings for efficient similarity search
- Enables content-based image retrieval and matching (see the embedding sketch below)
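One way to sketch the embedding step is with Hugging Face's `transformers` implementation of CLIP, which loads OpenAI's released weights; the image paths and query string below are placeholders:

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed_images(paths: list[str]) -> torch.Tensor:
    """Encode catalogue images into unit-normalised CLIP embeddings."""
    images = [Image.open(p).convert("RGB") for p in paths]
    inputs = processor(images=images, return_tensors="pt")
    with torch.no_grad():
        feats = model.get_image_features(**inputs)
    return feats / feats.norm(dim=-1, keepdim=True)

def embed_text(query: str) -> torch.Tensor:
    """Encode a user query into the same embedding space."""
    inputs = processor(text=[query], return_tensors="pt", padding=True)
    with torch.no_grad():
        feats = model.get_text_features(**inputs)
    return feats / feats.norm(dim=-1, keepdim=True)

# With unit-normalised embeddings, cosine similarity is a dot product.
image_embs = embed_images(["dress1.jpg", "dress2.jpg"])  # placeholder files
query_emb = embed_text("a red floral summer dress")
best = (image_embs @ query_emb.T).squeeze(-1).argmax().item()
```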
- Transforms technical product descriptions into natural, conversational recommendations
- Uses ChatGPT to generate human-like fashion advice
- Creates contextual and personalized clothing suggestions (a prompt sketch follows below)
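A sketch of the recommendation step using the `openai` Python client; the model name and prompt wording are illustrative choices, not fixed by this project:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def describe_product(product: dict) -> str:
    """Turn scraped attributes into a conversational recommendation."""
    prompt = (
        "You are a friendly fashion assistant for a visually impaired "
        "shopper. Describe this item conversationally, focusing on "
        "colour, material, and pattern:\n"
        f"{product}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```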
- Input: Google Speech Recognition for converting user voice commands to text
- Output: OpenAI Text-to-Speech for delivering fashion recommendations in a natural voice (see the voice I/O sketch below)
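A voice I/O sketch using the `SpeechRecognition` package for STT and the `openai` client for TTS; the TTS model and voice names are illustrative defaults:

```python
import speech_recognition as sr
from openai import OpenAI

client = OpenAI()
recognizer = sr.Recognizer()

def listen() -> str:
    """Capture one utterance from the microphone and transcribe it
    with Google's speech recognition endpoint."""
    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source)
        audio = recognizer.listen(source)
    return recognizer.recognize_google(audio)

def speak(text: str, path: str = "reply.mp3") -> None:
    """Synthesise the recommendation with OpenAI TTS and save it to
    a file, which can then be played with any audio backend."""
    response = client.audio.speech.create(
        model="tts-1", voice="alloy", input=text  # illustrative defaults
    )
    response.stream_to_file(path)
```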
- Web Scraping: Python web scraping tools
- Image Processing: CLIP (OpenAI)
- Language Model: ChatGPT API
- Speech Processing:
  - Google Speech Recognition (STT)
  - OpenAI Text-to-Speech (TTS)
The primary goal is to bridge the gap in online fashion shopping for visually impaired individuals by providing an intuitive, voice-based interface for exploring and receiving personalized fashion recommendations.
1. User provides voice input describing their fashion preferences
2. The system transcribes the voice command into text
3. Matches the stated preferences against the scraped fashion database
4. Generates a personalized recommendation in natural language
5. Delivers the recommendation through voice output (the full loop is sketched below)
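Stitched together, the workflow could look like the loop below, reusing the hypothetical helpers from the earlier sketches (`listen`, `embed_text`, `describe_product`, `speak`):

```python
def run_assistant(products: list[dict], image_embs) -> None:
    """Voice-driven recommendation loop built from the sketches above."""
    while True:
        query = listen()                           # 1. voice -> text
        if query.lower() in {"stop", "quit"}:
            break
        query_emb = embed_text(query)              # 2. text -> CLIP embedding
        scores = image_embs @ query_emb.T          # 3. match against catalogue
        best = scores.squeeze(-1).argmax().item()
        advice = describe_product(products[best])  # 4. attributes -> advice
        speak(advice)                              # 5. text -> voice
```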
- Expand retailer database
- Implement personalized style learning
- Add multi-language support
- Integrate with e-commerce platforms
- Enhance recommendation algorithms