Keypoint-MoSeq: parsing behavior by linking point tracking to pose dynamics
DOI: 10.5281/zenodo.10636983
Zenodo record: 10636983
MaRDI QID: Q6725234
Dataset published in the Zenodo repository.
Author name not available.
Publication date: 8 February 2024
Copyright license: No records found.
Raw data for the paper "Keypoint-MoSeq: parsing behavior by linking point tracking to pose dynamics".

open_field_2D.zip: 2D keypoints from open field recordings, used in Fig 1, Fig 2, and Fig 3a-g. The data is formatted as if it were the output of DeepLabCut so that it can be used with the keypoint-MoSeq tutorial.

open_field_3D.h5: 3D keypoints from open field recordings, used in Fig 5g-l. The data is formatted as an h5 file with one dataset per recording. Each dataset is an array with shape (n_frames, n_keypoints, 3).

accelerometry_and_keypoints.h5: 2D keypoints and inertial measurement unit (IMU) readings, used in Fig 3h-i. The keypoints and IMU data can be aligned using their respective timestamps.

dopamine_and_keypoints.h5: 2D keypoints and striatal dopamine signals (measured using dLight), used in Fig 4. The dopamine signal is already synced to the keypoints.
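For accelerometry_and_keypoints.h5, the keypoints and IMU readings must be aligned via their timestamps. A minimal sketch of nearest-timestamp matching with NumPy follows; the function name, sampling rates, and toy arrays are illustrative assumptions, not part of the dataset itself.

```python
import numpy as np

def align_to_frames(frame_ts, imu_ts, imu_vals):
    """For each keypoint frame timestamp, pick the nearest IMU sample.

    frame_ts : sorted 1D array of keypoint frame timestamps (seconds)
    imu_ts   : sorted 1D array of IMU sample timestamps (seconds)
    imu_vals : IMU readings, one row per entry in imu_ts
    """
    # Insertion points of frame timestamps into the IMU timeline
    idx = np.searchsorted(imu_ts, frame_ts)
    idx = np.clip(idx, 1, len(imu_ts) - 1)
    left, right = imu_ts[idx - 1], imu_ts[idx]
    # Step back one index wherever the left neighbor is closer in time
    idx -= (frame_ts - left) < (right - frame_ts)
    return imu_vals[idx]

# Toy example (hypothetical rates): keypoints at 30 Hz, IMU at 100 Hz
frame_ts = np.arange(0, 1, 1 / 30)
imu_ts = np.arange(0, 1, 1 / 100)
imu_vals = imu_ts ** 2          # stand-in for accelerometer readings
aligned = align_to_frames(frame_ts, imu_ts, imu_vals)
print(aligned.shape)            # one IMU reading per keypoint frame
```

The same nearest-neighbor idea generalizes to any pair of sorted timestamp streams; interpolation would be an alternative when the IMU rate is low relative to the frame rate.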