Tuesday, 18 October 2022

Harvard's EZ-MMLA Toolkit

 

Multimodal Learning Analytics

Some time back, I learnt about this toolkit, described in "Multimodal Data Collection Made Easy: Harvard's EZ-MMLA Toolkit":

The EZ-MMLA toolkit has been developed by the Learning, Innovation and Technology lab at the Harvard Graduate School of Education. We are using open-source algorithms created by others to collect multimodal data from video and audio feeds. We did not create these algorithms; the source code can be found on each tool's page. We thank the creators of these models for sharing their work. This website makes it easy to collect multimodal datasets, for researchers and people wanting to learn how to analyze multimodal data.

from https://mmla.gse.harvard.edu/about/ 



The site hosts demos of the different artificial intelligence tools and also links to their source code.




I tried the PoseNet one and was pretty impressed with the data that could be extracted from the tool. I believe that schools and educators can use this for some exciting learning analytics. There are other tools worth checking out too, so please do, and let me know if you decide to use any of them in a project.
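To give a feel for what you can do with the kind of data PoseNet produces, here is a minimal Python sketch that computes how far a single body keypoint moves across frames. The CSV column names (`timestamp`, `nose_x`, `nose_y`, `nose_score`) are my own assumption for illustration, not the EZ-MMLA toolkit's actual export format, and the sample values are made up.

```python
import csv
import io
import math

# Hypothetical sample of PoseNet-style output: one row per frame, with
# (x, y) pixel coordinates and a detection confidence score per keypoint.
# These column names are an assumption, not the toolkit's real schema.
sample_csv = """timestamp,nose_x,nose_y,nose_score
0.0,100,200,0.98
0.5,104,203,0.97
1.0,110,210,0.95
"""

def total_movement(csv_text, part="nose", min_score=0.5):
    """Sum the frame-to-frame Euclidean distance travelled by one keypoint,
    skipping frames where the detection confidence is below min_score."""
    reader = csv.DictReader(io.StringIO(csv_text))
    prev = None
    distance = 0.0
    for row in reader:
        if float(row[f"{part}_score"]) < min_score:
            continue  # low-confidence detection: ignore this frame
        point = (float(row[f"{part}_x"]), float(row[f"{part}_y"]))
        if prev is not None:
            distance += math.dist(prev, point)
        prev = point
    return distance

print(round(total_movement(sample_csv), 2))  # → 14.22
```

A simple aggregate like this (total head movement per learner) is one example of the kind of learning-analytics signal educators could derive from pose data.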



