An RNN shares parameters across time steps, not just within a layer: the same weight matrices are reused at every step of the input sequence. A feedforward network, by contrast, has a distinct set of weights for each layer.

One answer to the question of why the same weights are reused:

Why Do Recurrent Neural Networks Use The Same Weight Parameter In The Weighted Sum? What Does That Mean?
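The weight sharing described above can be seen directly in a forward pass. This is a minimal sketch (the names `W_xh`, `W_hh`, `b` and all shapes are assumptions for illustration, not from the source): the weights are created once and then applied at every time step of the loop.

```python
import numpy as np

rng = np.random.default_rng(0)
input_size, hidden_size, seq_len = 3, 4, 5

# These parameters are created ONCE ...
W_xh = rng.normal(size=(hidden_size, input_size))   # input-to-hidden weights
W_hh = rng.normal(size=(hidden_size, hidden_size))  # hidden-to-hidden weights
b = np.zeros(hidden_size)

xs = rng.normal(size=(seq_len, input_size))  # a toy input sequence
h = np.zeros(hidden_size)                    # initial hidden state

for x in xs:
    # ... and the SAME W_xh, W_hh, b are reused at every step of the sequence.
    h = np.tanh(W_xh @ x + W_hh @ h + b)

print(h.shape)  # (4,)
```

Because the loop reuses one parameter set, the model size does not grow with sequence length, and the same transition rule is learned for every position in the sequence.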

Working and equations

Understanding Gated Recurrent Unit (GRU) in Deep Learning

Good explanation of how LSTM and GRU work:

https://analyticsindiamag.com/lstm-vs-gru-in-recurrent-neural-network-a-comparative-study/#:~:text=The workflow of the Gated,Update gate and Reset gate.
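The GRU workflow mentioned in the link (an update gate and a reset gate) can be sketched as a single step function. This is an illustrative implementation of the standard GRU equations, with assumed parameter names (`W_z`, `U_z`, etc.) and shapes:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(x, h, params):
    """One GRU time step.

    z : update gate -- how much of the new candidate state to take in
    r : reset gate  -- how much of the previous state feeds the candidate
    """
    W_z, U_z, W_r, U_r, W_h, U_h = params
    z = sigmoid(W_z @ x + U_z @ h)               # update gate, in (0, 1)
    r = sigmoid(W_r @ x + U_r @ h)               # reset gate, in (0, 1)
    h_tilde = np.tanh(W_h @ x + U_h @ (r * h))   # candidate hidden state
    return (1 - z) * h + z * h_tilde             # interpolate old and new

# Toy usage with assumed sizes.
rng = np.random.default_rng(1)
n_in, n_h = 3, 4
params = tuple(rng.normal(size=s) for s in
               [(n_h, n_in), (n_h, n_h)] * 3)  # W_z, U_z, W_r, U_r, W_h, U_h
h = gru_step(rng.normal(size=n_in), np.zeros(n_h), params)
print(h.shape)  # (4,)
```

Note the GRU reuses these six matrices at every time step, just like the vanilla RNN, but the gates let it decide per step how much past state to keep or discard.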