The information that users provide to user interfaces consists of interactions like "click at this pixel position" or "enter this character here". On phones and other devices there may be additional inputs, such as "click and hold on this pixel position".

The user interface provides feedback and information back to the user, e.g. activating a button, turning the cursor into a little glove to indicate that a position can be clicked, or adding a shadow effect when a UI element is hovered over.

When I hover my cursor over a button, I need to verify that the cursor is indeed over the button I want to press. With a traditional computer cursor, the feedback provided to me is a pixel position. The iPadOS cursor snaps to the button, so the feedback provided is instead "you've snapped to this button". The second case requires less mental processing to interpret whether the button I actually want is selected:
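The snapping behavior can be sketched as a small hit-test: instead of reporting a raw pixel position, the pointer reports the nearest interactive target within a snap radius. This is an illustrative model only, not Apple's actual implementation or API; all names (`snapTarget`, `snapRadius`, `Rect`) are hypothetical.

```typescript
// A hypothetical snap model: if the pointer is within `snapRadius`
// of a button's rectangle, report the whole button as the target
// rather than a bare pixel position.
interface Rect { x: number; y: number; w: number; h: number; }

// Distance from a point to the nearest edge of a rectangle
// (0 if the point is inside the rectangle).
function distanceToRect(px: number, py: number, r: Rect): number {
  const dx = Math.max(r.x - px, 0, px - (r.x + r.w));
  const dy = Math.max(r.y - py, 0, py - (r.y + r.h));
  return Math.hypot(dx, dy);
}

// Returns the index of the snapped button, or null if the pointer
// is too far from every target to snap.
function snapTarget(px: number, py: number, buttons: Rect[], snapRadius = 8): number | null {
  let best: number | null = null;
  let bestDist = Infinity;
  buttons.forEach((r, i) => {
    const d = distanceToRect(px, py, r);
    if (d <= snapRadius && d < bestDist) {
      bestDist = d;
      best = i;
    }
  });
  return best;
}

const buttons: Rect[] = [
  { x: 10, y: 10, w: 40, h: 20 },
  { x: 100, y: 10, w: 40, h: 20 },
];
console.log(snapTarget(12, 15, buttons)); // inside the first button → 0
console.log(snapTarget(55, 15, buttons)); // near the first button → snaps to 0
console.log(snapTarget(75, 15, buttons)); // too far from both → null
```

The point of the model is the change in feedback granularity: the return value is a button (or nothing), not a coordinate, so the user never has to judge whether a pixel position falls inside a target.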

Apple's iPadOS pointer matches the granularity of the user's input to the granularity of the UI.