Animals’ choice behavior is characterized by two main tendencies: taking actions that previously led to rewards and repeating past actions. Theory suggests these strategies may be reinforced by different types of dopaminergic teaching signals: reward prediction errors to reinforce value-based associations, and movement-based action prediction errors to reinforce value-free repetitive associations. Here we use an auditory-discrimination task in mice to show that movement-related dopamine activity in the tail of the striatum encodes the hypothesized action prediction error signal. Causal manipulations reveal that this prediction error serves as a value-free teaching signal that supports learning by reinforcing repeated associations. Computational modelling and experiments demonstrate that action prediction errors alone cannot support reward-guided learning but, when paired with the reward prediction error circuitry, they consolidate stable sound-action associations in a value-free manner. Together, we show that there are two types of dopaminergic prediction errors that work in tandem to support learning, each reinforcing different types of association in different striatal areas.

This dataset contains the experimental record, the preprocessed fiber photometry data, and the preprocessed behavioral data for the experiments shown in ED Fig 5O-Y and ED Fig 12F,G. The experimental overview provides the mouse ID, recording date, and behavioral protocol used for each recording. With this dataset, ED Fig 5O-Y and ED Fig 12F,G can be reproduced.
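As a rough, purely illustrative sketch of the modelling idea summarised in the abstract above (this is not the authors' published model; the parameter values, the softmax choice rule, and the specific forms of both prediction errors are assumptions introduced here for illustration), a value-based update driven by reward prediction errors can be paired with a value-free, repetition-based update driven by action prediction errors:

```python
# Illustrative toy model only: a two-sound, two-action task in which
# Q is taught by reward prediction errors (RPE) and H is taught by
# action prediction errors (APE). All names and parameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_trials = 2000
alpha_q = 0.1   # assumed learning rate for the value-based (RPE) update
alpha_h = 0.1   # assumed learning rate for the value-free (APE) update
beta = 3.0      # assumed inverse temperature of the softmax choice rule

Q = np.zeros((2, 2))  # value weights Q[sound, action], taught by RPE
H = np.zeros((2, 2))  # habit weights H[sound, action], taught by APE

for _ in range(n_trials):
    s = rng.integers(2)                       # which of the two sounds is presented
    logits = beta * (Q[s] + H[s])             # values and habits jointly drive choice
    p = np.exp(logits - logits.max())
    p /= p.sum()
    a = rng.choice(2, p=p)                    # action taken on this trial

    r = 1.0 if a == s else 0.0                # the matching action for each sound is rewarded

    # Value-based learning: reward prediction error updates Q.
    rpe = r - Q[s, a]
    Q[s, a] += alpha_q * rpe

    # Value-free learning: action prediction error (the action taken minus how
    # strongly it was expected) updates H with no reference to reward, so on its
    # own it only reinforces whatever sound-action pairing is repeated.
    H[s, a] += alpha_h * (1.0 - p[a])
    H[s, 1 - a] += alpha_h * (0.0 - p[1 - a])

print("value weights Q:\n", Q.round(2))
print("habit weights H:\n", H.round(2))
```

In this toy setting, the H update carries no reward information, so removing the Q update leaves the agent free to entrench whichever choice it happens to repeat, whereas the combined system settles into and then consolidates a stable sound-action mapping.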
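As a minimal sketch of how the dataset might be combined for analysis (the file names, directory layout, file format, and column names below are hypothetical assumptions, not documented properties of this dataset), the experimental overview could be used to look up each recording and pair it with the corresponding preprocessed photometry and behavioral data:

```python
# Illustrative sketch only: "experimental_overview.csv", the "photometry/" and
# "behaviour/" folders, and the column names "mouse_id", "recording_date", and
# "protocol" are hypothetical assumptions, not documented fields of this dataset.
from pathlib import Path
import pandas as pd

data_dir = Path("dataset_root")  # hypothetical location of the downloaded dataset

# The experimental overview lists, per recording, the mouse ID, recording date,
# and behavioral protocol used.
overview = pd.read_csv(data_dir / "experimental_overview.csv")

for _, rec in overview.iterrows():
    mouse, date, protocol = rec["mouse_id"], rec["recording_date"], rec["protocol"]
    # Pair each recording with its preprocessed photometry and behavioral data
    # (hypothetical naming scheme shown here).
    photometry = pd.read_csv(data_dir / "photometry" / f"{mouse}_{date}.csv")
    behaviour = pd.read_csv(data_dir / "behaviour" / f"{mouse}_{date}.csv")
    print(mouse, date, protocol, len(photometry), len(behaviour))
```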