Authors - Soni R Ragho, Vidya R Ghule, Gayatri S Wasulkar, Sanjivani M Jadhav, Aniket T Ghule

Abstract - People with hearing impairments use sign language to communicate. The goal of this project is to use computer vision to capture sign gestures and translate them into text in real time. The system comprises four modules: image capturing, preprocessing, classification, and prediction. Image processing is used to segment the captured gesture, and the OpenCV Python library is used to process sign gestures. After a gesture is captured, it is converted to a grayscale image and noise filtering is applied to improve prediction accuracy. Prediction and classification are performed using a neural network.
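A minimal sketch of the preprocessing step described in the abstract, assuming a webcam at index 0 and a Gaussian filter for noise reduction; the paper does not specify the exact camera source, filter, or classifier, so these choices are illustrative only.

```python
import cv2

# Capture a single gesture frame from the default camera (assumed index 0).
cap = cv2.VideoCapture(0)
ret, frame = cap.read()
cap.release()

if ret:
    # Convert the captured frame to grayscale, as described in the abstract.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Apply noise filtering (here a Gaussian blur) before classification.
    denoised = cv2.GaussianBlur(gray, (5, 5), 0)
    # 'denoised' would then be resized and passed to the neural network
    # for classification and prediction.
```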