Jeremy Harper
I’m trying to put together a process and structure to ensure that we have reliable, robust longitudinal data showing how we are progressing on accessibility, so I want to make sure I understand the historical data Ally is capable of presenting. Is the following a correct understanding?

- Ally’s institutional reports by term, month, or year are not snapshots of accessibility at that point in time. Rather, they are filters on the current accessibility data, either by term (if terms are associated with courses) or by creation date. So they are proxies for changes in accessibility over time rather than direct measures. Is this correct? Ally isn’t saving snapshots of all accessibility data on some sort of regular basis, is it?
- On the other hand, Ally does associate datetime data with each record in the usage report (instructor feedback launch, instructor fix, alternative format usage, etc.). So those records do represent points in time and thus give a more direct depiction of change over time.
- If we want a more direct measure of changes in Ally scores, total error counts, total files, etc. over time, we just need to routinely export and store the institutional reports. Does that seem like the best approach? Is that how most institutions handle it?
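For the last point, here is a minimal sketch of what a routine export-and-store step could look like. It assumes the institutional report has been exported as a CSV (manually or via whatever export mechanism your Ally instance offers); the column names `files` and `issues` are hypothetical placeholders for whatever the real export contains:

```python
import csv
import shutil
from datetime import date
from pathlib import Path


def snapshot_report(export_csv: str, archive_dir: str) -> Path:
    """Archive an exported report under a dated name and append its
    headline totals to a longitudinal log. Column names are hypothetical."""
    archive = Path(archive_dir)
    archive.mkdir(parents=True, exist_ok=True)

    # Keep the raw export under a date-stamped name so each run
    # becomes a true snapshot, independent of later data changes.
    dest = archive / f"ally_institutional_report_{date.today().isoformat()}.csv"
    shutil.copy2(export_csv, dest)

    # Roll up the per-course rows into one summary row per snapshot.
    total_files = total_issues = 0
    with open(export_csv, newline="") as f:
        for row in csv.DictReader(f):
            total_files += int(row["files"])
            total_issues += int(row["issues"])

    log = archive / "longitudinal_log.csv"
    write_header = not log.exists()
    with open(log, "a", newline="") as f:
        writer = csv.writer(f)
        if write_header:
            writer.writerow(["snapshot_date", "total_files", "total_issues"])
        writer.writerow([date.today().isoformat(), total_files, total_issues])
    return dest
```

Run on a schedule (e.g. a monthly cron job after each export), this builds exactly the kind of point-in-time record the filtered reports don't provide on their own.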