
Top Ed-Tech Trends of 2012

A Hack Education Project

Automation and Artificial Intelligence


This post first appeared on Hack Education on December 18, 2012. Part 9 of my Top Ed-Tech Trends of 2012 series.

On Saturday, I took a ride in one of Google’s self-driving cars. “This will be the most incredibly boring drive ever,” joked the car’s developer Sebastian Thrun.

Incredibly boring indeed. I mean, sure, the trip was utterly uneventful: the car cruised along I-280 without incident, just like any Saturday morning drive should go. The other cars did slow and swerve when they saw the Lexus with the Google logo, the little camera on the top, and the words “self-driving car.” Drivers and passengers turned and stared. Amazed. I was amazed. Yes, uneventful, but it was also the most incredible drive I’ve ever taken (beating out that time when I was sixteen and a friend and I “borrowed” her stepdad’s Corvette). There was Thrun with his hands off the wheel, feet off the pedals, eyes not on the road, explaining how the car (and Google) collected massive amounts of data in order to map the road and move along it. The car does have lots of cameras and sensors, but the technology (the hardware at least) wasn’t really that overwhelming — the car’s computer was quite small, tucked away in the corner of the trunk. It all worked flawlessly. Just another passenger vehicle on the road — how banal. Except that it was driving itself — how friggin’ incredible.

I saw the future, in one of those weird William Gibson “the future is already here — it’s just not evenly distributed” moments. I hate to drive and I moved to LA this year: the self-driving car is a future whose more widespread distribution I look forward to.

The limo driver who dropped a handful of us off at Thrun’s house, on the other hand, wasn’t so thrilled.

But wait, you say. What does the Google self-driving car have to do with education?

In 2012? Everything.

AI and MOOCs


The lead on Google’s self-driving car project, Sebastian Thrun, is, of course, also the founder of Udacity, one of the most important education startups of the year and key to 2012’s most important ed-tech trend, MOOCs. It was Thrun’s Artificial Intelligence class, offered in the fall of 2011, that’s often credited with igniting the whole MOOC craze. In January, Thrun announced his departure from Stanford, where he’d been a research professor and the director of SAIL, the Stanford Artificial Intelligence Laboratory.

Now the director of SAIL is Andrew Ng, who, along with fellow Stanford machine learning and AI professor Daphne Koller, founded Coursera.

In March, Anant Agarwal announced that he was stepping down as the director of CSAIL, MIT’s Computer Science and Artificial Intelligence Laboratory, in order to become the president of MITx (now edX).

The year's three major xMOOC initiatives — Udacity, edX, and Coursera — all originated in AI labs. That’s not a coincidence. That’s a trend.

(Sidenote: how interesting that the CEO of the recently-announced UK uni response to these xMOOCs — FutureLearn — comes from the BBC and not from the CS department.)

How will the field of artificial intelligence — "the science of creating intelligent machines" — shape education?

The Long History of Machine Learning and Teaching Machines


The field of artificial intelligence relies in part on machine learning — that is, teaching computers to adapt their behaviors algorithmically (i.e. to learn) — and natural language processing — that is, teaching computers to understand input from humans that isn’t written in code. (Yes, this is a greatly oversimplified explanation. I didn’t last too long in either Thrun’s AI or Ng’s Machine Learning MOOCs.)
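(If you want a concrete picture of what “teaching computers to adapt their behaviors algorithmically” looks like, here is a toy sketch of my own — a tiny perceptron that learns an AND-style rule from labeled examples instead of having the rule hand-coded. It’s an illustration only, not code from either course.)

```python
# A toy illustration of "machine learning": instead of hand-coding rules,
# the program adjusts numeric weights from labeled examples.
# This is my own minimal sketch, not code from any of the courses mentioned.

def train_perceptron(examples, epochs=20, lr=0.1):
    """examples: list of ((x1, x2), label) pairs with label in {0, 1}."""
    w1 = w2 = bias = 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            prediction = 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0
            error = label - prediction          # 0 if correct, +/-1 if wrong
            w1 += lr * error * x1               # nudge the weights toward the answer
            w2 += lr * error * x2
            bias += lr * error
    return w1, w2, bias

# "Learning" an AND-like rule purely from examples:
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b = train_perceptron(data)
print([1 if w1 * x1 + w2 * x2 + b > 0 else 0 for (x1, x2), _ in data])  # [0, 0, 0, 1]
```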

In addition to the innovations surrounding self-driving cars, this branch of computer science has also been actively working on adaptive learning, automated assessment, and “intelligent tutoring” systems for a very long time. We are seeing a lot of breakthroughs now, in part because — much like the self-driving car with its sensors and cameras and knowledge of the Google-mapped-world — we are gathering immense amounts of data via our interactions with hardware, software, websites, and apps. And more data equals better modeling.

Fine-tuning these models and “teaching machines” has been the Holy Grail for education technology – that is, there’s long been a quest to write software that offers personalized feedback, that responds to each individual student’s skills and needs.

What makes these programs “adaptive” is that the AI assesses a student’s answer (typically to a multiple choice question, but in the case of many of the early xMOOCs, the student’s code), then follows up with the “next best” question, aimed at the “right” level of difficulty. This doesn’t have to be a particularly complicated algorithm, and the idea is actually based on “item response theory,” which dates back to the 1950s (and the rise of the psychometrician). Despite the intervening decades, quite honestly, these systems haven’t become terribly sophisticated, in no small part because they tend to rely on multiple choice tests.
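To make that concrete, here is a rough sketch of the kind of logic such a system might run, using the one-parameter (Rasch) version of item response theory: estimate the student’s ability, ask the unanswered question whose difficulty sits closest to that estimate, and nudge the estimate after each response. The item bank, the update rule, and every name in this sketch are my own illustrative assumptions, not any particular vendor’s algorithm.

```python
import math
import random

# A rough sketch of IRT-style adaptive questioning (Rasch / one-parameter model).
# Item difficulties, the crude ability update, and all names here are illustrative
# assumptions, not any particular product's algorithm.

def p_correct(ability, difficulty):
    """Rasch model: probability that a student at `ability` answers an item of
    `difficulty` correctly."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

def next_item(ability, unanswered):
    """Pick the 'next best' question: the one whose difficulty is closest to the
    current ability estimate (roughly the most informative item)."""
    return min(unanswered, key=lambda item: abs(item["difficulty"] - ability))

def update_ability(ability, item, correct, step=0.5):
    """Crude ability update: move up after a correct answer, down after a miss,
    scaled by how surprising the outcome was."""
    surprise = (1 if correct else 0) - p_correct(ability, item["difficulty"])
    return ability + step * surprise

# Toy item bank and a simulated student whose true ability is 1.0
items = [{"id": i, "difficulty": d} for i, d in enumerate([-2, -1, 0, 1, 2])]
ability, true_ability = 0.0, 1.0
while items:
    item = next_item(ability, items)
    items.remove(item)
    correct = random.random() < p_correct(true_ability, item["difficulty"])
    ability = update_ability(ability, item, correct)
    print(f"asked item {item['id']} (difficulty {item['difficulty']:+}), "
          f"correct={correct}, ability estimate now {ability:+.2f}")
```

Even this toy version shows why the approach leans on easily scored items: the loop only works if the software can tell, instantly and unambiguously, whether the answer was right.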

So the search for that ed-tech Holy Grail continues: not only to make software more adaptive — more “personalized,” as the marketing-speak goes — but to expand its capabilities beyond the realm of multiple choice testing.

Automated Essay Graders


In January of this year, the Hewlett Foundation announced it would award a $100,000 prize to software designers who could “reliably automate the grading of essays for state tests.”

“Better tests support better learning,” said Barbara Chow, the foundation’s Education Program Director. “Rapid and accurate automated essay scoring will encourage states to include more writing in their state assessments. And the more we can use essays to assess what students have learned, the greater the likelihood they’ll master important academic content, critical thinking, and effective communication.”

The Hewlett Foundation turned to the machine learning competition site Kaggle to run its automated essay grading contest. And in April, Kaggle data scientist Ben Hamner, along with University of Akron’s Dean of the College of Education Mark Shermis, published a study contending that the software being developed was as good as humans at grading essays. The researchers examined some 22,000 essays administered to junior high and high school students as part of their states’ standardized testing process, comparing the grades given by human graders with those given by automated grading software. They found that “overall, automated essay scoring was capable of producing scores similar to human scores for extended-response writing items with equal performance for both source-based and traditional writing genre.” (PDF)
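Agreement in studies like this is typically measured with a chance-corrected statistic for two raters scoring on the same ordinal scale, rather than raw percent agreement — the Kaggle contest, as I recall, scored entries with quadratic weighted kappa. Here is a rough sketch of that calculation; the tiny set of scores at the bottom is made up purely for illustration.

```python
from collections import Counter

# A rough sketch of quadratic weighted kappa, a chance-corrected agreement
# statistic for two raters scoring on the same ordinal scale. The example
# scores below are made up for illustration.

def quadratic_weighted_kappa(rater_a, rater_b, min_score, max_score):
    n_categories = max_score - min_score + 1
    n = len(rater_a)
    hist_a = Counter(rater_a)                  # marginal counts for rater A
    hist_b = Counter(rater_b)                  # marginal counts for rater B
    observed = Counter(zip(rater_a, rater_b))  # joint counts of (a, b) pairs

    numerator = 0.0    # weighted disagreement actually observed
    denominator = 0.0  # weighted disagreement expected by chance
    for i in range(min_score, max_score + 1):
        for j in range(min_score, max_score + 1):
            weight = ((i - j) ** 2) / ((n_categories - 1) ** 2)
            numerator += weight * observed[(i, j)] / n
            denominator += weight * (hist_a[i] / n) * (hist_b[j] / n)
    return 1.0 - numerator / denominator

# Two "raters" -- say, a human grader and a scoring engine -- on a 1-6 scale:
human = [4, 3, 5, 2, 4, 6, 3, 4]
machine = [4, 3, 4, 2, 5, 6, 3, 4]
print(round(quadratic_weighted_kappa(human, machine, 1, 6), 3))
```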

“The demonstration showed conclusively that automated essay scoring systems are fast, accurate, and cost effective,” said Tom Vander Ark, managing partner at the investment firm Learn Capital and manager of the Kaggle competition via his Open Education Solutions company, in a press release touting the study’s results.

As someone who taught writing for a number of years (See: My CV), I don’t agree at all, as evidenced by the headline I used on my lengthy response to the Kaggle competition, the research, and the whole robot grader hoopla: “Tossing Sabots into the Automated Essay Grading Machine.” I have lingering questions about labor, about gaming the system, about why and how and what we ask students to write and why on earth we’d want to automate that.


Efficiency and Learning


Why would we want to automate it? Why, for the sake of efficiency, of course. We have to scale. Process more students. We have to assess more content. Write more. Grade more. Test more. Cut costs. Etc.

A couple of 20th century theorists who played a part in our thinking this year:

Jacques Ellul (1912–1994): See my Storify on The Technological Society. See also: The Boston Globe, “Jacques Ellul, technology doomsayer before his time.”

William Baumol (1922– ): See Clay Shirky’s popular essay “Napster, Udacity, and the Academy,” which invokes Baumol’s Cost Disease.

Baumol’s Cost Disease posits that salaries rise even in industries that have not seen a rise in productivity or efficiency. “Higher education has a bad case of cost disease,” writes Shirky. And as James Surowiecki wrote in The New Yorker late last year,

“some sectors of the economy, like manufacturing, have rising productivity—they regularly produce more with less, which leads to higher wages and rising living standards. But other sectors, like education, have a harder time increasing productivity. Ford, after all, can make more cars with fewer workers and in less time than it did in 1980. But the average student-teacher ratio in college is sixteen to one, just about what it was thirty years ago. In other words, teachers today aren’t any more productive than they were in 1980. The problem is that colleges can’t pay 1980 salaries, and the only way they can pay 2011 salaries is by raising prices. And the Baumol problem is exacerbated by the arms-race problem: colleges compete to lure students by investing in expensive things, like high-profile faculty members, fancy facilities, and a low student-to-teacher ratio.”

Can technology change all this? Can technology bring down the cost of education? No doubt that’s why talk of this cost disease matters so very much: the increasing cost of education, the record levels of student loan debt. Can technology make education more efficient? What does that entail? Can technology scale education? What does that mean? With automated instruction and automated grading, can we train people more quickly?

“Hell yes,” say the xMOOCs and their AI.

But at what cost? How will the field of artificial intelligence — remember, the origin of these xMOOCs — shape how we’ll think about teaching and learning and technology and efficiency?

These questions are at the core of Jacques Ellul’s dark vision about technology, society, and education in The Technological Society (1964). He writes that this impulse drives human history:

“The human brain must be made to conform to the much more advanced brain of the machine. And education will no longer be an unpredictable and exciting adventure in human enlightenment but an exercise in conformity and an apprenticeship to whatever gadgetry is useful in a technical world.”

So there’s that.

Ethics and the 3 Laws of (Ed-Tech) Robotics


My background is in literature, and when I was asked this summer to speak at the CALI.org conference (the computer-assisted legal instruction conference), I couldn’t help but give a talk that drew on science fiction: on the history of robots and labor and learning. And as it was a law conference, I couldn’t help but invoke Isaac Asimov’s “3 Laws of Robotics.”

I feel like I’m being a bit of a doomsayer in that slide deck. Interestingly, I shared the keynote “stage” at CALI.org with Dave Cormier who spoke about xMOOCs and open learning. “I saw a deadhead sticker on a Cadillac,” were the lyrics he invoked in his talk. #Teameducationapocalypse was the hashtag he used to describe our presentations. And that’s the thing with robots: they seem to contain both our salvation and our doom.

But it’s fairly clear that 2012 was an incredibly important year for robots (check out my Tumblr, the Robot Chronicles, for all the latest news) – and it’s time for us to think about the implications and the ethics of robotics. For self-driving cars, for robot essay graders, and for drones.

So how will we proceed? What matters to us? What values will we instill in our robots?

I think this is why I like Asimov’s Laws of Robotics very much, particularly the Zeroth law, which reminds us about the harm not just to human beings but to humanity.

0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

After all, humanity and learning are deeply deeply intertwined.

The Robots that Replace Us


Duke University professor Cathy Davidson often quips that if teachers “can be replaced by a computer screen, we should be.” In a political climate that seeks to deprofessionalize the teaching profession (more on that in my last Top Ed-Tech Trends post), that’s a controversial statement. Davidson insists she’s not arguing that teachers should be replaced with robots, but “if, as a teacher, at any level you are no more interactive and responsive than a YouTube video, then your institution and your students should save your salary and go with the cheaper–and probably more entertaining–online version instead.”

I suppose the same could go for all the technology bloggers who just copy and paste press releases and who will in the near future probably be replaced by the robot-writers at Narrative Science.

Earlier this month, Paul Wallich posted onto the IEEE Spectrum blog his instructions for building a “DIY Kid-Tracking Drone.” “On school-day mornings,” he wrote, “I walk my grade-school-age son 400 meters down the hill to the bus stop. Last winter, I fantasized about sitting at my computer while a camera-equipped drone followed him overhead. So this year, I set out to build one.” Wallich describes the drone’s mechanics, its electronics, its software, and the tracking beacon that fit “unobtrusively in my child’s backpack.”

“So, did it work? Mostly. The copter is skittish when it’s windy, and GPS guidance is good to 10 meters at best. Because my particular front yard is only about 15 meters across, with a long, tree-edged driveway leading to the street, I either have to follow automatically above the treetops—where I can’t really see what’s going on—or else supplement the autopilot with old-fashioned line-of-sight remote control. Which somewhat defeats the original plan of staying warm and dry while a drone does my parenting.”

Jokes about “helicopter parenting” are pretty easy to make here, and while some folks left comments on the article criticizing Wallich for his innovation, others noted that this was a pretty clever excuse to build a DIY drone.

But clever excuses and clever innovations and clever drones aside, I think there remain many unanswered questions about drones and robots and artificial intelligence and why and how — when it comes to kids and to learners — we want to automate things. What about surveillance? What about data? What about messiness and inefficiency? What do we gain through automation? And what will we lose?

Image credits: Jesse Stommel, Paul Wallich