Some underwater applications involve deploying multiple Remotely Operated Vehicles (ROVs) in a common area. Such applications require localizing these vehicles, not only with respect to each other but also with respect to a previously unknown environment, hence the interest in multi-agent simultaneous localization and mapping (SLAM) algorithms. Underwater SLAM usually relies on multi-sensor fusion, but some works highlight the potential of purely visual SLAM (VSLAM) for underwater applications. This dataset provides three underwater two-agent sequences for the evaluation of multi-agent monocular VSLAM algorithms in underwater environments: two recorded in a pool and one at sea. Reference trajectories computed using Structure-from-Motion are provided.