An RNN shares its parameters across time steps. While a feedforward network uses a different weight matrix at each layer, a recurrent neural network applies the same weight matrices at every step of the input sequence.
Why reuse the same weights? Parameter sharing lets the network process sequences of arbitrary length and keeps the number of learned parameters independent of sequence length.
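A minimal NumPy sketch of this idea (all names and dimensions here are illustrative assumptions, not from the source): one set of weight matrices is applied at every time step of the loop.

```python
import numpy as np

# Hypothetical dimensions for illustration
input_size, hidden_size, seq_len = 3, 4, 5
rng = np.random.default_rng(0)

# One shared set of parameters, reused at every time step
W_xh = rng.standard_normal((hidden_size, input_size)) * 0.1  # input-to-hidden
W_hh = rng.standard_normal((hidden_size, hidden_size)) * 0.1  # hidden-to-hidden
b_h = np.zeros(hidden_size)

xs = rng.standard_normal((seq_len, input_size))  # a toy input sequence
h = np.zeros(hidden_size)                        # initial hidden state
for x in xs:
    # Same W_xh, W_hh, b_h at each step -- this is the shared parameter set
    h = np.tanh(W_xh @ x + W_hh @ h + b_h)

print(h.shape)
```

Note that the loop could run over 5 steps or 500: the parameter count never changes, which is exactly what sharing buys.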
Working and equations
Understanding Gated Recurrent Unit (GRU) in Deep Learning
https://analyticsindiamag.com/lstm-vs-gru-in-recurrent-neural-network-a-comparative-study/
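The GRU's workflow combines an update gate and a reset gate. A sketch of one GRU step in NumPy, using the standard formulation (z is the update gate, r the reset gate; all weight names and dimensions here are illustrative assumptions):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, params):
    """One GRU time step: h_t = (1 - z) * h_{t-1} + z * h_tilde."""
    W_z, U_z, b_z, W_r, U_r, b_r, W_h, U_h, b_h = params
    z = sigmoid(W_z @ x + U_z @ h + b_z)               # update gate
    r = sigmoid(W_r @ x + U_r @ h + b_r)               # reset gate
    h_tilde = np.tanh(W_h @ x + U_h @ (r * h) + b_h)   # candidate state
    return (1 - z) * h + z * h_tilde                   # interpolate old/new

# Hypothetical sizes for illustration
input_size, hidden_size = 3, 4
rng = np.random.default_rng(1)
params = []
for _ in range(3):  # one (W, U, b) triple per gate/candidate
    params += [rng.standard_normal((hidden_size, input_size)) * 0.1,
               rng.standard_normal((hidden_size, hidden_size)) * 0.1,
               np.zeros(hidden_size)]

h = np.zeros(hidden_size)
for t in range(5):
    h = gru_step(rng.standard_normal(input_size), h, params)
print(h.shape)
```

Because each new state is a convex combination of the previous state and a tanh candidate, the hidden values stay bounded in (-1, 1), which illustrates how the update gate controls how much of the old state is kept.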