Merge pull request #1234 from 770navyasharma/main
Issue #1232 Resolved
Showing 4 changed files with 240 additions and 0 deletions.
# Gait Recognition Project

## Description
The Gait Recognition project identifies individuals based on their walking patterns. Gait recognition is a biometric authentication technique that recognizes people by analyzing the unique way they walk. It has a wide range of applications, including security, surveillance, and healthcare, where it can help detect abnormalities in walking patterns.

This project uses OpenCV for video processing and frame extraction, and machine learning for classifying the gait patterns of different individuals.
## Features
- **Video Processing**: Extract frames from video to analyze walking sequences.
- **Pose Estimation**: Track key points of the human body to model the walking pattern.
- **Gait Classification**: Classify individuals based on their walking patterns using machine learning models.
- **Custom Dataset Support**: Can be adapted to different datasets of gait sequences for training and testing.
## Dependencies
To run this project, you need the following libraries installed:

- OpenCV for video and image processing:
  ```
  pip install opencv-python
  ```
- NumPy for numerical operations:
  ```
  pip install numpy
  ```
- scikit-learn for training the classification models:
  ```
  pip install scikit-learn
  ```
- TensorFlow or PyTorch (optional) for deep learning models, if using advanced classification:
  ```
  pip install tensorflow
  ```
  or
  ```
  pip install torch torchvision
  ```
## How to Run
- Install the required dependencies mentioned above.
- Clone the project repository:
  ```
  git clone https://github.com/your-repo/gait-recognition.git
  ```
- Navigate to the project directory:
  ```
  cd gait-recognition
  ```
- Prepare the dataset:
  - Place video files of individuals walking into the `data/` folder.
  - Name the videos consistently for each individual (e.g., `person_1.mp4`, `person_2.mp4`); the dataset-loading sketch after this list assumes that convention.
- Run the script to extract gait features and classify individuals:
  ```
  python gait_recognition.py
  ```
## How It Works
Gait recognition works by extracting frames from a video sequence, detecting the human body in each frame, and then tracking key points such as the head, shoulders, hips, and feet. These key points form a "pose" for each frame, and the sequence of poses over time captures the unique walking pattern (gait) of an individual.

## Step-by-Step Process
- **Frame Extraction**: The video is processed to extract individual frames, and each frame is analyzed to detect the person in the scene.
- **Pose Estimation**: The key points of the human body are detected using a pose estimation model (such as OpenPose or TensorFlow's PoseNet). These key points (like the head, shoulders, and knees) are tracked over time, forming a sequence of body movements.
- **Feature Extraction**: The relative positions of key body points are extracted from each frame to form a feature vector for each step in the walking cycle.
- **Classification**: Machine learning models (such as Support Vector Machines, Random Forests, or Neural Networks) classify the feature vectors based on the unique walking patterns of different individuals (see the sketch after this list).
- **Prediction**: Once the model is trained, it can classify the gait of new individuals based on their walking patterns.
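To make the feature-extraction and classification steps concrete, here is a minimal sketch. It assumes pose keypoints are already available per frame (from OpenPose, PoseNet, or similar); the pairwise-distance features and the SVM are illustrative choices, not this project's fixed design.

```
import numpy as np
from sklearn.svm import SVC

def pose_to_features(keypoints):
    """Turn one frame's keypoints (an N x 2 array of (x, y)) into a feature vector.

    Pairwise distances between keypoints, normalized by the largest
    distance so the features are scale-invariant. This is one simple
    choice; joint angles or trajectories would also work.
    """
    kp = np.asarray(keypoints, dtype=float)
    diffs = kp[:, None, :] - kp[None, :, :]   # (N, N, 2) displacement grid
    dists = np.linalg.norm(diffs, axis=-1)    # (N, N) pairwise distances
    iu = np.triu_indices(len(kp), k=1)        # upper triangle, no diagonal
    feats = dists[iu]
    return feats / (feats.max() + 1e-8)

def sequence_to_descriptor(frames_of_keypoints):
    # Average the per-frame features over a clip to get one descriptor per video
    return np.mean([pose_to_features(kp) for kp in frames_of_keypoints], axis=0)

# With X as descriptors for each training clip and y as the person IDs:
# clf = SVC(kernel="rbf").fit(X, y)
# predicted_person = clf.predict([sequence_to_descriptor(new_clip_keypoints)])
```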
## Project Structure
```
- data/                # Folder for video data
- gait_recognition.py  # Main script for gait recognition
- model/               # Folder to save trained models
- README.md            # Project documentation
```
`gait_recognition.py`:

```
import cv2
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Gait classifier (untrained here; fit it on labeled feature vectors
# before calling predict, as sketched after this script)
model = KNeighborsClassifier(n_neighbors=3)

# Create the background subtractor once so it can accumulate a background
# model across frames (recreating it per frame would defeat the subtraction)
bg_subtractor = cv2.createBackgroundSubtractorMOG2()

# Function to perform background subtraction and silhouette extraction
def extract_silhouette(frame):
    # Convert the frame to grayscale
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Apply background subtraction
    fgmask = bg_subtractor.apply(gray)

    # Threshold to binarize the silhouette
    _, silhouette = cv2.threshold(fgmask, 250, 255, cv2.THRESH_BINARY)

    return silhouette

# Function to extract gait features from the silhouette
def extract_gait_features(silhouette):
    # Example: use the largest contour's area as a (deliberately simple) feature
    contours, _ = cv2.findContours(silhouette, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        largest_contour = max(contours, key=cv2.contourArea)
        return [cv2.contourArea(largest_contour)]
    return [0]  # Return zero if no valid silhouette is found

# Start capturing video (from a webcam or a pre-recorded video)
cap = cv2.VideoCapture('walking_video.mp4')

while True:
    ret, frame = cap.read()
    if not ret:
        break

    # Extract the silhouette from the current frame
    silhouette = extract_silhouette(frame)

    # Extract gait features for this frame (fed to the classifier once trained)
    gait_features = extract_gait_features(silhouette)

    # Display the silhouette
    cv2.imshow("Silhouette", silhouette)

    # Check for user input to exit
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
```
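Note that `model` above is created but never trained or queried in this script. Here is a hedged sketch of how fitting and prediction could be wired in; the stand-in data is purely illustrative, and in practice `X` and `y` would come from the `data/` videos (e.g., via the `load_dataset` sketch in the README):

```
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Illustrative stand-in data: each row is one frame's feature vector,
# each label is the person who produced it
X = np.array([[1200.0], [1250.0], [3400.0], [3300.0]])
y = np.array([1, 1, 2, 2])

model = KNeighborsClassifier(n_neighbors=3)
model.fit(X, y)

# Inside the capture loop, each frame's features can then be classified:
# person_id = model.predict([gait_features])[0]
```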
# Virtual Painter Project

## Description
The Virtual Painter is an interactive project where users can draw on the screen by tracking a colored object (e.g., a red pen or a green marker) via a webcam. As the object moves across the screen, it leaves a virtual trail, creating a painting or drawing effect. The project uses OpenCV to detect the object's color and track its movement, allowing for real-time drawing on the screen.

This fun and engaging project can be used for educational purposes, drawing games, or creative activities by tracking specific colored objects and adjusting the canvas features.
## Features
- **Real-time Color Tracking**: Detect and track an object of a specific color using an HSV color range.
- **Dynamic Drawing**: Draw on the screen by moving the colored object in front of the webcam.
- **Customizable Canvas**: Modify the color range and adjust the drawing features.
- **Noise Filtering**: Ignores smaller, irrelevant contours to prevent noise from interfering with the drawing.
## Dependencies
To run this project, the following Python packages are required:

- OpenCV for image processing:
  ```
  pip install opencv-python
  ```
- NumPy for numerical operations:
  ```
  pip install numpy
  ```
## How to Run
- Install the required dependencies mentioned above.
- Download or clone the project files:
  ```
  git clone https://github.com/your-repo/your-project.git
  ```
- Navigate to the project directory:
  ```
  cd your-project-folder
  ```
- Run the script:
  ```
  python virtual_painter.py
  ```
- **Use a colored object** (like a red or green pen) in front of the webcam to start drawing. Move the object around to see the trail created on the screen.
## How It Works
The project uses OpenCV to capture video from the webcam and detect the movement of an object based on its color. The HSV (Hue, Saturation, Value) color space is used to define a range for detecting specific colors. Once the object is detected, its coordinates are tracked and a trail is drawn on the canvas.

## Color Detection
Color detection is done in the HSV color space, which separates color (Hue) from intensity (Saturation and Value), making it more robust to lighting changes than RGB. The object's color is detected by defining lower and upper bounds in HSV, which are then used to create a mask that highlights the colored object.
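A common way to pick these bounds is to convert a sample of the target color from BGR to HSV and widen it into a range. A minimal sketch follows; the sample color and the ±10 hue margin are illustrative assumptions, not values fixed by this project:

```
import cv2
import numpy as np

# Illustrative target: a pure red object, sampled as one BGR pixel
sample_bgr = np.uint8([[[0, 0, 255]]])
sample_hsv = cv2.cvtColor(sample_bgr, cv2.COLOR_BGR2HSV)[0][0]
h = int(sample_hsv[0])  # Hue of the sample (pure red -> 0)

# Widen the hue by a margin; keep saturation/value thresholds loose
lower_bound = np.array([max(h - 10, 0), 120, 70])
upper_bound = np.array([min(h + 10, 179), 255, 255])

# Note: red wraps around the hue axis (0 and 179 are both red), so a
# very red object may need a second range near 179 as well.

# The mask then keeps only pixels inside the range:
# mask = cv2.inRange(hsv_frame, lower_bound, upper_bound)
```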
## Tracking and Drawing
Once the object is detected, its position is tracked and the coordinates are stored in a list. The `cv2.line()` or `cv2.circle()` function is then used to draw lines or points on the screen at those coordinates, creating a virtual drawing effect. The script below uses `cv2.circle()`; a line-based variant is sketched next.
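For smoother strokes, consecutive points can be joined with `cv2.line()` instead of stamping circles at each point. This is a hedged alternative to the `draw_on_canvas` function in `virtual_painter.py` below, not what the script currently does:

```
import cv2

def draw_strokes(points, img):
    # Connect each detected point to the previous one with a thick red
    # line, filling the gaps left when the object moves quickly
    for prev, curr in zip(points, points[1:]):
        cv2.line(img, prev, curr, (0, 0, 255), 10)
```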
## Project Structure
```
- virtual_painter.py  # Main script for the virtual painter
- README.md           # Documentation for the project
```
`virtual_painter.py`:

```
import cv2
import numpy as np

# HSV color range for detecting the colored object
# (this range covers red hues; adjust it for different colors)
lower_bound = np.array([0, 120, 70])
upper_bound = np.array([10, 255, 255])

# List to store detected points for drawing
my_points = []

# Function to detect the colored object and return its drawing point
def find_color(img, lower, upper):
    hsv_img = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)  # Convert to HSV color space
    mask = cv2.inRange(hsv_img, lower, upper)  # Create mask for the specific color

    # Find contours in the masked image
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    for contour in contours:
        area = cv2.contourArea(contour)
        if area > 500:  # Filter by area to remove noise
            x, y, w, h = cv2.boundingRect(contour)
            cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)  # Outline the object
            return x + w // 2, y  # Horizontal center and top edge: the object's "tip"

    return None  # No sufficiently large object found

# Function to draw on the canvas based on detected points
def draw_on_canvas(points, img):
    for point in points:
        cv2.circle(img, (point[0], point[1]), 10, (0, 0, 255), cv2.FILLED)  # Red dot at each point

# Capture video from the webcam
cap = cv2.VideoCapture(0)

while True:
    success, img = cap.read()  # Read a frame from the webcam
    if not success:
        break

    # Find the object in the current frame
    new_point = find_color(img, lower_bound, upper_bound)

    # If a new point is detected, add it to the list
    if new_point is not None:
        my_points.append(new_point)

    # Redraw all stored points each frame so the trail persists
    draw_on_canvas(my_points, img)

    # Display the frame
    cv2.imshow("Virtual Painter", img)

    # Break the loop on 'q' key press
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# Release the webcam and close windows
cap.release()
cv2.destroyAllWindows()
```