Authors - Deepali R. Naglot, Deepa S. Deshpande

Abstract - Sign language serves as the primary mode of communication for individuals with hearing and speech impairments. This paper concentrates on a vision-based system for recognizing and interpreting the Devnagari Sign Language (DSL). It reviews the existing literature on sign language detection and interpretation systems, covering several approaches: glove-based systems, vision-based systems, and depth sensors. For the proposed system, a dataset of the 47 Devnagari Sign Language alphabets was created, and image augmentation, segmentation, and Canny edge detection were applied during preprocessing. The system employs a Convolutional Neural Network (CNN) architecture comprising convolutional layers, max-pooling layers, and fully connected layers for sign language recognition. The study evaluates the performance of the proposed system using precision, recall, F1-score, and support metrics. The proposed model attained an accuracy of 90.43% across the 47 Devnagari alphabets.
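The abstract names the CNN building blocks the system relies on (convolution followed by max-pooling). As an illustration only, the sketch below implements these two operations in plain NumPy on a hypothetical 6x6 image containing a vertical edge; the image, the Sobel-style kernel, and all sizes are assumptions for demonstration and do not reflect the paper's actual architecture or dataset.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution of a grayscale image with a square kernel."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(fmap, size=2):
    """Non-overlapping max-pooling that shrinks each spatial dim by `size`."""
    h, w = fmap.shape
    h, w = h - h % size, w - w % size  # trim so dims divide evenly
    return fmap[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

# Hypothetical 6x6 "image": dark left half, bright right half (a vertical edge).
img = np.zeros((6, 6))
img[:, 3:] = 1.0

# Sobel-style kernel that responds to left-to-right intensity changes.
kernel = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)

fmap = np.maximum(conv2d(img, kernel), 0)  # convolution + ReLU -> 4x4 map
pooled = max_pool(fmap)                    # 2x2 max-pooling -> 2x2 summary
```

Here the convolution highlights the edge columns and the pooling step keeps only the strongest responses per 2x2 block, which is the same down-sampling role max-pooling plays inside the full CNN described in the paper.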