Dopaminergic action prediction errors serve as a value-free teaching signal

Animals' choice behavior is characterized by two main tendencies: taking actions that led to rewards and repeating past actions. Theory suggests these strategies may be reinforced by different types of dopaminergic teaching signals: reward prediction errors to reinforce value-based associations and movement-based action prediction errors to reinforce value-free repetitive associations. Here we use an auditory-discrimination task in mice to show that movement-related dopamine activity in the tail of the striatum encodes the hypothesized action prediction error signal. Causal manipulations reveal that this prediction error serves as a value-free teaching signal that supports learning by reinforcing repeated associations. Computational modelling and experiments demonstrate that action prediction errors alone cannot support reward-guided learning, but when paired with the reward prediction error circuitry they serve to consolidate stable sound-action associations in a value-free manner. Together, we show that there are two types of dopaminergic prediction errors that work in tandem to support learning, each reinforcing different types of association in different striatal areas.

This is the record of all fiber photometry recording experiments (except those in ED Fig 5O-Y and ED Fig 12). It contains mouse ID, date and experiment type, as well as any annotated notes. It should be used in conjunction with 'Processed striatal dopamine fiber photometry data, required to reproduce all photometry figures (except EDfig5pqrstwvxy and EDfig12dfg)'.
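As a rough illustration of how this log might be combined with the companion processed photometry record, the sketch below assumes both files are exported as CSV tables sharing mouse ID and session date keys; all file names and column labels (photometry_recording_log.csv, mouse_id, experiment_type, and so on) are hypothetical placeholders, not the actual names used in this record.

```python
# Minimal sketch: pair the recording log with the processed photometry data.
# File names and column labels are assumptions for illustration only.
import pandas as pd

# Recording log: one row per fiber photometry session
# (mouse ID, date, experiment type, annotated notes).
log = pd.read_csv("photometry_recording_log.csv")            # assumed file name
log["date"] = pd.to_datetime(log["date"])                    # assumed column name

# Processed photometry data from the companion record,
# keyed by the same mouse ID and session date.
photometry = pd.read_csv("processed_striatal_dopamine_photometry.csv")  # assumed
photometry["date"] = pd.to_datetime(photometry["date"])

# Join the annotated session metadata onto the processed data so sessions
# can be filtered by experiment type or notes before reproducing figures.
sessions = photometry.merge(log, on=["mouse_id", "date"], how="left")

# Example: keep only sessions of one (assumed) experiment type.
task_sessions = sessions[sessions["experiment_type"] == "auditory_discrimination"]
print(task_sessions.head())
```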