Persistent Disagreement and Polarization in a Bayesian Setting

Abstract

For two ideally rational agents, does learning a finite amount of shared evidence necessitate agreement? No. But does it at least guard against belief polarization, the case in which their opinions get further apart? No. OK, but are rational agents guaranteed to avoid polarization if they have access to an infinite, increasing stream of shared evidence? No.
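
As a loose illustration of the phenomenon the abstract gestures at (not the paper's own construction, and with purely made-up numbers), the toy Bayesian calculation below shows two agents who condition on the same piece of evidence E and end up with opinions about a hypothesis H that are further apart than before. The effect here is driven by the agents assigning different likelihoods to E; the paper's results concern more general conditions under which such polarization can persist.

```python
# Toy sketch of belief polarization under shared evidence.
# Two Bayesian agents with different priors and different likelihoods
# both condition on the same evidence E; their credences in H diverge.
# All numbers are illustrative assumptions, not taken from the paper.

def posterior(prior_h, lik_e_given_h, lik_e_given_not_h):
    """Bayes' rule: P(H | E) from a prior on H and the likelihoods of E."""
    joint_h = prior_h * lik_e_given_h
    joint_not_h = (1 - prior_h) * lik_e_given_not_h
    return joint_h / (joint_h + joint_not_h)

# Agent 1 thinks E is more likely if H is true; agent 2 thinks the opposite.
agent1_prior, agent1_post = 0.55, posterior(0.55, 0.9, 0.5)
agent2_prior, agent2_post = 0.45, posterior(0.45, 0.5, 0.9)

print(f"prior gap:     {abs(agent1_prior - agent2_prior):.3f}")  # 0.100
print(f"posterior gap: {abs(agent1_post - agent2_post):.3f}")    # 0.375
```

Despite sharing the evidence, the gap between the agents' credences in H grows from 0.100 to 0.375 in this example, which is the sense of "polarization" at issue.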

Publication
The British Journal for the Philosophy of Science 71 (1)