By default, Spark partitions shuffle output with a hash partitioner (the same approach as MapReduce's default HashPartitioner).
The default partitioner can be overridden by passing a custom Partitioner to shuffle operations such as partitionBy() or reduceByKey().
Example of a custom Scala partitioner:
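A minimal sketch, subclassing Spark's Partitioner; the class name FirstCharPartitioner and its routing rule (keys starting with a digit go to partition 0, everything else is hashed across the rest) are illustrative assumptions, not taken from the source.

```scala
import org.apache.spark.Partitioner

// Routes keys whose string form starts with a digit to partition 0;
// all other keys are hash-distributed over the remaining partitions.
class FirstCharPartitioner(partitions: Int) extends Partitioner {
  require(partitions >= 2, "need at least two partitions")

  override def numPartitions: Int = partitions

  override def getPartition(key: Any): Int = {
    val k = key.toString
    if (k.nonEmpty && k.head.isDigit) {
      0
    } else {
      // Non-negative modulo so the index always falls in [1, numPartitions)
      val n = numPartitions - 1
      1 + ((k.hashCode % n) + n) % n
    }
  }

  // Spark compares partitioners with equals() to decide whether two
  // RDDs are already co-partitioned and can avoid a shuffle.
  override def equals(other: Any): Boolean = other match {
    case p: FirstCharPartitioner => p.numPartitions == numPartitions
    case _                       => false
  }

  override def hashCode: Int = numPartitions
}
```

It can be plugged into any shuffle operation that accepts a Partitioner, for example `pairs.reduceByKey(new FirstCharPartitioner(4), _ + _)` or `pairs.partitionBy(new FirstCharPartitioner(4))`.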