Authors - Krisha Zalaria, Nishit Dadhaniya, Priyanka Patel

Abstract - The project aims to recognize sign language from hand gestures accurately in real time. Hand gesture recognition helps bridge the gap between people who are hard of hearing or unable to speak and those who can hear and speak well. The system receives sequential data representing gestures captured with a camera. To extract useful information from hand gestures, pre-processing techniques are applied to improve the quality of the input data, followed by feature extraction. The recognition outcome depends on the quality of the input data, the efficacy of feature extraction, the structure of the recognition model, the richness of the training dataset, and the accuracy of the trained model in real-time scenarios. Using LSTM and MediaPipe Holistic, the model achieves an accuracy of around 97.4% across dynamic signs (600 clips, 15 classes) and static signs (1444 images, 39 classes). This study demonstrates the efficacy of the proposed system in accurately recognizing sign language gestures, thereby facilitating improved communication for individuals with hearing or speech impairments.
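
The abstract describes a feature-extraction step that turns each camera frame into a fixed-length vector before the sequence is fed to an LSTM. A minimal sketch of that step is shown below, assuming the standard MediaPipe Holistic landmark counts (33 pose landmarks with x, y, z, visibility; 468 face and 21 per-hand landmarks with x, y, z); the function name `extract_keypoints` and the zero-filling of undetected parts are illustrative assumptions, not necessarily the authors' exact implementation.

```python
import numpy as np

def extract_keypoints(pose, face, left_hand, right_hand):
    """Flatten one frame's MediaPipe Holistic landmarks into a single
    feature vector; parts that were not detected are filled with zeros.
    Landmark counts assumed: 33 pose (x, y, z, visibility), 468 face and
    21 per hand (x, y, z), giving 33*4 + 468*3 + 2*21*3 = 1662 values."""
    pose = np.asarray(pose).flatten() if pose is not None else np.zeros(33 * 4)
    face = np.asarray(face).flatten() if face is not None else np.zeros(468 * 3)
    lh = np.asarray(left_hand).flatten() if left_hand is not None else np.zeros(21 * 3)
    rh = np.asarray(right_hand).flatten() if right_hand is not None else np.zeros(21 * 3)
    return np.concatenate([pose, face, lh, rh])

# Example frame: pose and left hand detected, face and right hand missing.
frame_vec = extract_keypoints(np.random.rand(33, 4), None,
                              np.random.rand(21, 3), None)
print(frame_vec.shape)  # (1662,)
```

A sequence of such per-frame vectors (e.g. 30 frames per clip) would then form the input tensor for the LSTM classifier over the 15 dynamic and 39 static sign classes.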