Learning Analytics in Education and a data dogma?

Back in the good old days, when I was a member of the Glasgow-based supergroup with my then colleagues Lorna Campbell and Sheila MacNeill, we were approached to write a chapter for the soon-to-be-published ‘Reusing Open Resources’. We were tasked with writing something on ‘Analytics for Education’. Prior to print, our chapter, along with four others, has been published in the Journal of Interactive Media in Education (JiME) under a CC-BY license. You can read the full Analytics in Education chapter here, and copied below is the section I had most input on, ‘future developments’.
Given ‘prediction is very hard, especially about the future’, it’s interesting to look back at what we wrote in the summer of 2013. Something we should perhaps have expanded upon was data privacy concerns, particularly in light of the news that the non-profit inBloom is shutting down. I often find myself with conflicted interests between data collection as part of my personal quantified self and data collection for quantifying others. TAGS is a prime example: I initially wanted to collect data to understand the shape of the communities I was in, but it is now used by myself and others to extract data from communities we have no investment in.
And right now I’m developing the next iteration of ocTEL, which, thanks to funding from the MOOC Research Initiative, has helped identify areas where we can improve data collection, in particular, resolving identities across networks. Achieving this personally feels like progress, but I’m sure many others will disagree.
Are we bound by a data dogma?

Given the diversity of research strands that feed into the area of analytics in education, together with the increased ease of data storage, the field is expanding rapidly in a wide range of new directions. Until recently, the focus of most analytics developments to support teaching and learning has been on integrating tools with existing institutional learning management systems (Ferguson, 2013). This is primarily because such integration provides relatively easy access to available student data. However, the increased adoption of third party services such as social network tools and applications, and the emergence of massive open online courses (MOOCs), have created new opportunities for large-scale experimentation with analytics. Recent examples of projects that are seeking to explore the use of data from MOOCs and social networks include Stanford University’s Lytics Lab (Lytics, n.d.) which, amongst other work, runs randomised control trials of MOOC courses offered by Coursera. In addition to using analytics to identify potential “threshold concepts” that might be exposed by tens of thousands of students taking multiple choice question tests, there are opportunities to identify, analyse and define wider engagement patterns within subpopulations of learners (Kizilcec, Piech, & Schneider, 2013).
Large scale analytics initiatives are also taking place at a national level, with varying degrees of success. Launched in early 2013, inBloom is a US non-profit organisation, backed by the Carnegie Corporation and the Bill and Melinda Gates Foundation, that aims to create infrastructure to integrate, analyse and provide solutions to personalise student learning for schools at state and district level. By creating a common interface, inBloom set out to stimulate educational technology providers to develop new tools utilising the growing database and infrastructure around student data, without the cost of having to develop custom connections to existing local infrastructure such as student management systems (inBloom, 2013). Shortly after the project was launched, however, parents and civil liberties organisations began raising concerns about centralising sensitive student data in this manner and asking questions about who would have access to the data (Campbell, 2013). By August 2013 these concerns had resulted in a significant number of states pulling out of the project altogether, leaving only four school districts participating (Nelson, 2013).
Within the UK, the Department for Education launched an Analytical Review looking at the role of research, analysis and data within the Department. The Review focused on two key areas: data systems for the collection, sharing and retrieval of data generated by English schools; and the role of randomised control trials for “building evidence into education” (Department for Education, 2013). Whilst it is still unclear what data exchange models will be adopted by the Department, the announcement, following the publication of two randomised control trials, is a clear indication that analytics is playing an increasingly significant role at all levels of education (Department for Education, 2013).
As analytics initiatives continue developing, it is highly likely that commercial practices will continue transferring into the educational sector. Recommendation systems and targeted advertising are the backbone of commercial giants such as Amazon and Google, but they are increasingly finding their way into learning analytics systems. Emerging products in this area include Talis Aspire (talis aspire, 2013), which offers complete reading-list management solutions based on usage data to provide both staff and students with insights into catalogue use, thus creating opportunities for personalised learning.
Associated with these developments are “analytics as a service” products offered by companies that specialise in providing analytic services for a fee. Companies such as Narrative Science, who specialise in automatically producing text-based summaries of numeric data, have already highlighted opportunities for creating personalised feedback with actionable insights by combining data from test results (Hammond, 2012).