One of the goals of academic assessment is to identify which students need help; the sooner they can be identified, the better. The promise of technology has been that its ability to collect unique data could make this process timelier, more accurate, and less burdensome.
But how might technology actually go about fulfilling this promise? So far, academic research suggests that technological tools can predict outcomes by collecting and analyzing data in three broad ways.
One way of predicting outcomes is simply to measure how much students are using curricular materials—essentially, drawing conclusions from a computerized attendance taker. For example, a new study led by Iowa State's Reynol Junco (who once declared that "Most ed-tech startups suck!") examined whether engagement with online textbooks could predict classroom outcomes. Using data from over 200 students across 11 college courses, he found that the number of days students used the textbook could predict course performance—and that it was actually a better predictor than previous course grades.
A second way to predict student outcomes is to focus on data relating to student engagement. This research, which tends to involve data from learning management systems (LMS), goes beyond whether somebody has opened a book and shows that a variety of specific behaviors (e.g., posting a message) can also be indicative of future course performance.
For example, a 2010 study led by Leah Macfadyen of the University of British Columbia examined activity on Blackboard from five undergraduate classes. The researchers identified 15 different variables, such as the number of discussion messages posted and the number of messages sent, that correlated with final course grades. These data were used to develop a model that accounted for about 30% of the variance in final course grades and identified over 80% of the students who would go on to fail the course.
The most difficult technological (and computational) achievement is predicting student outcomes by actually evaluating their knowledge. The end goal is to have computers that can grade as accurately as humans, or better, which would ultimately allow for more frequent and painless assessment than would otherwise take place (arguably, blended learning systems are approaching this point in mathematics).
While tools built to assess skills are most commonly associated with math and writing, a new study led by Stanford's Paulo Blikstein shows predictive data can be gathered in computer science. Specifically, Blikstein and his colleagues investigated whether machine learning algorithms could predict computer science course grades based on the progression of a student's code in a single assignment. This effort required no specification of what a good piece of code should look like. The algorithm—specifically, a cluster analysis algorithm—simply clustered groups of students together based on how their code changed from one attempt to the next. It's akin to having students write multiple drafts of an essay, and then having a computer group together those students who appeared to make the same kinds of changes from one draft to the next.
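To make the clustering idea concrete, here is a minimal sketch—not Blikstein's actual pipeline—in which each student is reduced to an invented feature vector (mean lines changed per attempt, number of attempts) and a bare-bones 2-means algorithm groups similar coding trajectories:

```python
# Illustrative sketch only: cluster students by how their code changes
# between attempts. The feature vectors are invented; a real system would
# extract richer features from actual code snapshots.
import math

def kmeans2(points, iters=10):
    """Minimal 2-means with deterministic init (first and last points)."""
    centers = [points[0], points[-1]]
    clusters = [[], []]
    for _ in range(iters):
        clusters = [[], []]
        for p in points:
            j = 0 if math.dist(p, centers[0]) <= math.dist(p, centers[1]) else 1
            clusters[j].append(p)
        # Recompute each center as the mean of its assigned points
        centers = [tuple(sum(x) / len(cl) for x in zip(*cl)) for cl in clusters]
    return clusters

# Two invented "styles": small incremental edits vs. large rewrites
students = [(3, 20), (4, 18), (2, 22), (30, 5), (28, 6), (35, 4)]
clusters = kmeans2(students)
print([len(c) for c in clusters])  # two groups of three students each
```

The deterministic initialization keeps the sketch reproducible; production clustering code would typically use random restarts and a data-driven choice of the number of clusters.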
For these studies to be meaningful, final grades must effectively stand in for learning, and the two are clearly not the same. But by linking quantifiable student actions—whether it's opening an online textbook or typing a piece of code—to actual outcomes, each of these studies demonstrates a different way that technology can give teachers a better sense of which students are learning without additional formal assessments.
Read the entire article by Eric Horowitz on edSurge at https://www.edsurge.com/n/2015-07-13-3-ways-educational-technology-tools-predict-student-success