Authors - Prajwal Sangalad, Vijeta Chitragar, Prajwal Bolabandi, Prabha C Nissimagoudar, Kaushik Mallibhat, Nalini C Iyer

Abstract - Navigating safely requires a driver to engage with other road users and forecast their future actions. This work presents a probabilistic future prediction model that uses a Full-Spectrum single camera to produce an aerial-view representation. The model projects dynamic agent motion and instance segmentation into the future and converts them into probabilistic trajectories. Our approach predicts aerial-view representations directly from the surrounding Full-Spectrum single-camera inputs, incorporating the sensing, sensor-fusion, and future-projection units of a conventional autonomous driving stack. Trained end-to-end on video driving data, and with no requirement for HD maps, our model captures the inherently unpredictable dynamics of the future. It outperforms earlier prediction baselines on the NuScenes dataset in forecasting multimodal future trajectories, achieving an IoU of 59.4% for short-range (30 m x 30 m) settings and 36.7% for long-range (100 m x 100 m) settings. Video Panoptic Quality (VPQ) improved as well, reaching 50.2% for short-range instances and 29.8% for long-range instances.
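The IoU figures quoted above measure overlap between predicted and ground-truth aerial-view segmentation grids. A minimal sketch of how such a metric is computed (illustrative only; this is not the paper's evaluation code, and the grid sizes and function name are assumptions):

```python
import numpy as np

def bev_iou(pred, gt):
    """Intersection-over-Union between two binary bird's-eye-view
    occupancy grids (H x W arrays, nonzero = occupied)."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    # Convention: two empty grids agree perfectly.
    return inter / union if union > 0 else 1.0

# Toy 4x4 grids standing in for a rasterized aerial view:
pred = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
gt = np.array([[1, 1, 0, 0],
               [1, 0, 0, 0],
               [0, 0, 0, 0],
               [0, 0, 0, 0]])
print(bev_iou(pred, gt))  # intersection 3 / union 4 = 0.75
```

In practice the grid would cover the 30 m x 30 m or 100 m x 100 m region around the ego vehicle at a fixed resolution, and the score would be averaged over classes and frames.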