One of the key issues in speech perception is how listeners accurately categorize linguistic units (e.g., phonemes) from acoustic cues that vary due to multiple overlapping layers of information (Liberman et al., 1967). Over the years, researchers have developed various compensation procedures (e.g., vowel formant normalization) that strive to overcome this variation and increase classification accuracy. Although computationally efficient and widely used, these compensation procedures fall short conceptually: (i) they are not necessarily computational models of compensation, perception, or cognition, and (ii) they do not allow inferences about classification to interact dynamically with inferences about compensation. In this work we outline a Bayesian computational framework for speech perception and compensation, the ideal compensator. Because our listener model infers how to compensate based on a speaker's generative model while simultaneously inferring the linguistic category, we believe our approach is novel: it both increases classification accuracy and addresses the conceptual issues ignored by previous compensation models and procedures.
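To make the joint-inference idea concrete, the sketch below shows one way such an ideal compensator could be realized in a toy one-dimensional setting. All specifics here are illustrative assumptions, not the paper's actual model: the category prototypes, the Gaussian generative model, and the speaker-shift prior are invented for demonstration. The key point is that the listener marginalizes over an unknown speaker-specific shift θ while computing the category posterior, so compensation and classification inform each other rather than being applied as separate stages.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical 1-D example: classify a vowel from a single F1 value
# while jointly inferring an unknown speaker-specific formant shift.
# Category means, SDs, and the shift prior are illustrative, not fitted.
categories = {"i": 300.0, "e": 450.0}   # prototype F1 (Hz) per category
sigma = 40.0                             # within-category acoustic SD (Hz)
shifts = np.linspace(-150.0, 150.0, 301) # grid over speaker shift theta (Hz)
shift_prior = norm.pdf(shifts, loc=0.0, scale=60.0)
shift_prior /= shift_prior.sum()         # discretized prior p(theta)

def ideal_compensator(x):
    """Posterior p(c | x), marginalizing over the speaker shift:
    p(c | x) proportional to sum_theta p(x | c, theta) p(theta) p(c),
    assuming a uniform category prior p(c)."""
    posterior = {}
    for c, mu in categories.items():
        like = norm.pdf(x, loc=mu + shifts, scale=sigma)  # p(x | c, theta)
        posterior[c] = np.sum(like * shift_prior)          # marginalize theta
    z = sum(posterior.values())
    return {c: p / z for c, p in posterior.items()}

# An ambiguous token: the output reflects uncertainty about both the
# category and the speaker's shift, computed in a single inference.
print(ideal_compensator(380.0))
```

Note that nothing in this sketch normalizes the input before classification; compensation emerges from marginalizing the speaker parameter inside the same posterior computation that yields the category, which is the dynamic interaction the abstract contrasts with fixed normalization procedures.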