Machine Learning Software Grades Essays and Gives Students Feedback—Instantly

EdX, a nonprofit enterprise founded by Harvard and the Massachusetts Institute of Technology, will release automated software that uses artificial intelligence to grade student essays and short written answers.
Courtesy of Gretchen Ertl for The New York Times

John Markoff at the New York Times reports on a fast-moving, back-and-forth exchange in which students submit their essays online, receive a grade almost immediately, and improve their grades based on system-generated feedback.

EdX, a nonprofit consortium of Harvard and the Massachusetts Institute of Technology that offers courses on the Internet, has developed the automated essay-scoring software powering this new reality.

Even as controversy rages over whether artificial intelligence can reliably grade essays, EdX is making the software available free to any institution that wants to offer its courses online. So far, a dozen prestigious universities have adopted the program, and it is spreading rapidly worldwide.

Proponents of the software argue that instant feedback is an invaluable learning aid compared with waiting weeks for a professor’s comments. Moreover, students find it engaging in much the same way as a video game and say they learn better from the process.

Critics counter that even with the best machine learning algorithms in place, computers cannot perform the essentials of assessing written communication. Les Perelman, a researcher at MIT, has tricked such grading systems into awarding high grades to nonsensical submissions.

A group of educators to which he belongs, Professionals Against Machine Scoring of Student Essays in High-Stakes Assessment, has collected nearly 2,000 signatures for a petition making the case that “Computers cannot ‘read.’ They cannot measure the essentials of effective written communication: accuracy, reasoning, adequacy of evidence, good sense, ethical stance, convincing argument, meaningful organization, clarity, and veracity, among others.”

The EdX program has human graders assess the first 100 essays or essay questions. From then on, the system uses various machine-learning algorithms to train itself automatically. Once trained, it can grade any number of essays or answers in near real time. The software lets the teacher create the scoring system based on letter grades or numerical rankings.
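To make that workflow concrete, here is a minimal sketch of how a scoring model could be trained from a small batch of human-graded essays using off-the-shelf text features and regression. EdX has not published its algorithms, so the essays, scores, and model choice below are purely illustrative.

```python
# Hypothetical sketch: train a scorer on a handful of human-graded essays,
# then score new submissions instantly. Not EdX's actual pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge

# Placeholder training data standing in for the first 100 graded essays
essays = [
    "The treaty failed because it ignored economic realities ...",
    "An essay with weak evidence and little structure ...",
]
human_scores = [5.0, 2.0]  # numeric grades assigned by the instructor

vectorizer = TfidfVectorizer(ngram_range=(1, 2))  # words and word pairs as features
X = vectorizer.fit_transform(essays)

model = Ridge(alpha=1.0)  # simple regression from text features onto grades
model.fit(X, human_scores)

# Once trained, any number of new answers can be scored in near real time
new_submission = ["The treaty collapsed under economic pressure ..."]
print(model.predict(vectorizer.transform(new_submission)))
```

A real system would use richer features (grammar, organization, vocabulary) and far more training data, but the shape of the workflow is the same: humans grade first, and the model generalizes from there.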

Dr. Anant Agarwal, president of EdX, believes the program is approaching the capability of human graders. Skeptics point out that formal studies comparing the system against qualified human graders have not been done. Nevertheless, Dr. Agarwal claims the quality of EdX grading is as consistent as that found from one instructor to another.

Instant, automated feedback has its adherents elsewhere as well, including the start-ups Coursera and Udacity. Both were founded by Stanford faculty members as part of their mission to create “massive open online courses,” or MOOCs.

Coursera founder Daphne Koller believes instant feedback turns learning into a game students feel compelled to master: they resubmit their work until they achieve a certain level of proficiency.

So, if automated grading is possible in academic settings, the general idea of assessing new written content based on previous human assessments of existing content is sure to explode over the next few years.

Applications that mine blogs, social media and forum postings to understand markets and communities come to mind.

What do you see happening in your field once automated interpretation of extended passages of text goes mainstream?


Can Machine Learning Give Investigative Journalism the Scoop?

While attending the recent NICAR 2013 conference in Louisville, Kentucky, Andrew Trench, Media24 Investigations editor and technology blogger, reported on a fascinating demonstration of machine learning for finding news stories and insights humans would typically overlook.

Machine Learning at a Glance
Courtesy of Andrew Trench

He then shares his vision for how machine learning will change the news as we know it, from how stories are gathered and shaped to the news business itself.

ProPublica’s Jeff Larson was the presenter at NICAR. Through its Message Machine project, the nonprofit investigative reporting group used machine learning to uncover a number of major stories.

The project clustered documents and applied decision trees to comb through vast volumes of crowd-sourced emails from ProPublica’s readers on a given topic.

In this case, the topic was how US political parties raised money by tailoring their pitches to suit the demographics of the email recipients.

Under the hood, algorithms convert every word in every email to a number. Documents then have mathematical properties that allow them to be clustered as similar or different.
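As a rough illustration of that idea (not ProPublica’s actual code), here is how a few emails could be turned into word-weight vectors and grouped by similarity; the sample messages below are invented.

```python
# Illustrative sketch: convert each email to a vector of word weights,
# then cluster similar emails together.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

emails = [
    "Chip in $5 before tonight's fundraising deadline",
    "Your $5 today keeps our campaign on the air",
    "Join us for a town hall on education policy",
    "RSVP: education town hall this Thursday",
]

# Every word becomes a number; every email becomes a vector
X = TfidfVectorizer(stop_words="english").fit_transform(emails)

# Group emails with similar wording into clusters
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.labels_)  # e.g. the two fundraising pitches land in one cluster
```

Decision trees or other classifiers can then be trained on the same vectors to flag which clusters deserve a reporter’s attention.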

Apart from the tedium such clustering and conversion tasks would impose on the human mind, scouring the sheer volume of content collected would be too time-consuming and expensive. All the more so given the ever shorter news cycles we’ve come to accept nowadays.

Trench envisions using machine learning to more accurately predict which stories will yield the most likes, click-thrus, and commentary.

He then ruminates about what it would be like for editors concerned about daily sales to have hard data to go on instead of gut instinct.

Upon reading this, I recalled Scientific Advertising by Claude Hopkins, the timeless bible for direct-response marketers.

Written in the 1920s, the book takes advertisers to task for going with their guts when they could craft ads and calls to action that give them the data they need to continually improve reader response.

In effect, Google AdWords is Scientific Advertising on steroids because it forces all businesses to become better direct marketers in real time.

Meanwhile, chances are, machine learning is already quietly building out Trench’s vision of newspapers organized using prediction engines.

After all, if sites and apps like Flipboard let readers pull together their own personalized magazines, I suppose the big challenge for traditional media online is to push out an engaging product that differentiates itself by expanding readers’ horizons in ways they would not on their own.

Which brings us back to Larson and machine learning as a way to make investigative reporting economically viable again.

Starting in the 1980s, the media business converted news from a loss leader to a profit center amid a flurry of mergers and acquisitions. Along the way, investigative reporting gave way to infotainment because it was seen as anathema to making profits.

Today, many complain that the mainstream media in the US offers too much commentary and too little “hard news.” In turn, news networks from overseas are gaining American viewers by filling this void.

And so perhaps, we’ve come full circle. With the help of machine learning, traditional media can strike the right balance between catering to their audience’s known preferences and wowing them with authentic, hard-hitting stories as a counterweight to ubiquitous fluff.

How do you see machine learning transforming the way you communicate and publish?

Beyond Spam Filters: Machine Learning to Keep Your Inbox Manageable

SaneBox uses machine learning to manage your inbox
Courtesy of SaneBox

Even with the best spam filter, managing an inbox overflowing with legitimate business emails can still gobble up precious time.

Many of us have different ways of coping with this daily onslaught.

Some of us slog through every email and do our best to reply to all of them. Others scan subject lines and senders to prioritize which ones are worth opening.

And still others create new email accounts for specific purposes to keep business, personal and commercial messages separated.

Christina DesMarais, a contributing writer, wrote the article Email Doesn’t Have to Suck about her experience with a new service designed to address overwhelm from legitimate emails, aptly named SaneBox.

Based on her description, it’s clear SaneBox is using machine learning to help categorize and prioritize messages.

The service watches how you engage with senders over time to predict which new messages you’ll consider important.

Messages it considers less important are moved out of your inbox and into an @SaneLater folder you can review whenever you like.

If you notice an important message in your @SaneLater folder, you can move it back to your inbox, and SaneBox will remember; the next time that sender writes, the message stays in your inbox.
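SaneBox hasn’t published its model, but the behavior DesMarais describes could be approximated with a simple per-sender engagement score. The folder names below match hers; the threshold, addresses, and scoring rule are made up for illustration.

```python
# Minimal sketch of sender-based triage (SaneBox's real model is proprietary).
# Assumption: we track per-sender engagement history (messages received, replies sent).
from collections import defaultdict

history = defaultdict(lambda: {"received": 0, "replied": 0})

def record(sender: str, replied: bool) -> None:
    """Update engagement history each time a message from `sender` arrives."""
    history[sender]["received"] += 1
    if replied:
        history[sender]["replied"] += 1

def route(sender: str, threshold: float = 0.3) -> str:
    """Keep mail from senders you usually answer; defer the rest to @SaneLater."""
    stats = history[sender]
    if stats["received"] == 0:
        return "@SaneLater"  # unknown sender: defer by default
    reply_rate = stats["replied"] / stats["received"]
    return "Inbox" if reply_rate >= threshold else "@SaneLater"

record("boss@example.com", replied=True)
record("newsletter@example.com", replied=False)
print(route("boss@example.com"), route("newsletter@example.com"))
```

Moving a message back to the inbox, in this picture, is simply an override that feeds the next round of training.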

The service also equips you with a dashboard so you can track your volume of important versus non-important messages. DesMarais gained insight into just how much time email was sucking out of her workday.

Additional folders include @SaneNews (so all your newsletter subscriptions are in one place) and @SaneBlackHole (for those messages you want sent straight to trash).

The SaneBox reminder feature lets you specify which messages you want replies to and by when. Simply add one of the special reminder addresses as a CC or BCC, and SaneBox keeps an @SaneRemindMe folder with these messages ordered accordingly.

Now machine learning not only keeps spam out of view, it rescues your relationship with your inbox.

Germ Tracker Monitors Spread of Sickness Using Tweets & GPS

Germ Tracker Web App Output
Courtesy of the University of Rochester

Claire Porter, a technology editor, reports on a web app called Germ Tracker that pinpoints where sick people are in public so you can avoid them and stay well.

Created by scientists at the University of Rochester, Germ Tracker analyzes thousands of tweets against pollution sources, public transportation stations, gyms, restaurants, and other GPS-based data.

Most notably, those who tweeted about going to the gym but rarely made it there got sick significantly more often.

According to lead researcher Adam Sadilek, individuals can use this information to take charge of their health by steering clear of subway stations where many sick people may congregate.

Sadilek sees Germ Tracker helping local authorities better handle responses to outbreaks of the flu.

Germ Tracker uses machine learning and natural language processing to distinguish between colloquial uses of “sick” like “cool!” versus “I’m really ill.”

As the app mines more tweets, it learns and gets better at predicting whether a Twitter user is indeed sick. Its classifications include not sick, low risk, sick, and high risk.

The algorithm reveals the tweet that led to the classification and allows users to override it.
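For a sense of how such a classifier might work under the hood, here is a toy sketch that learns to separate the two senses of “sick” from labeled examples. The tweets, labels, and model choice are invented for illustration, not taken from the Rochester system.

```python
# Hedged sketch of a "sick vs. not sick" tweet classifier (example tweets invented).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tweets = [
    "been coughing all night, so sick right now",
    "feeling feverish, staying home sick today",
    "that concert was sick, best night ever",
    "new skate video is sick!",
]
labels = ["sick", "sick", "not_sick", "not_sick"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(tweets, labels)

# With a tiny training set the guesses are rough; a real system would learn
# from many thousands of labeled tweets and improve as more data arrives.
print(clf.predict(["ugh, so sick, can't stop sneezing"]))
print(clf.predict(["these new sneakers are sick"]))
```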

Two Brains Act As One Super Pilot with Machine Learning

EEG machine

Two brains are better than one! At least that’s the case when using a simulator to navigate a virtual spacecraft.

More specifically, two people’s combined thoughts steered the craft far more accurately than either person’s thoughts alone.

Paul Marks at New Scientist reports on how a team at the University of Essex in the UK researching brain-computer interfaces (BCI) arrived at this conclusion. The team will present its findings in March at the Intelligent User Interfaces conference.

Their experiment suggests “collaborative BCI” could flourish in the future in fields like robotics and telepresence.

EEG electrodes were hooked up to the scalps of both navigators. As each person thought about a predefined concept like “left” or “right,” machine learning software identified patterns in their thinking and applied them in real time.

For the participants, the challenge was to steer the craft to the absolute center of a planet using a repertoire of 8 directional thoughts. Machine learning software merged their separate thoughts into continuous actions that directed the craft.
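The article doesn’t describe the Essex team’s decoder in detail, but the fusion step can be pictured as averaging each pilot’s decoded direction probabilities and acting on the combined estimate; the numbers and eight-direction labels below are illustrative.

```python
# Illustrative sketch of "collaborative BCI": combine two pilots' decoded
# direction estimates into a single command (probabilities here are invented).
import numpy as np

DIRECTIONS = ["N", "NE", "E", "SE", "S", "SW", "W", "NW"]  # 8 directional thoughts

def fuse(p_pilot_a: np.ndarray, p_pilot_b: np.ndarray) -> str:
    """Average the two decoders' probability estimates and pick the best direction.

    Averaging tends to cancel independent EEG noise and covers moments when
    one pilot's attention briefly drifts.
    """
    combined = (p_pilot_a + p_pilot_b) / 2.0
    return DIRECTIONS[int(np.argmax(combined))]

# Each decoder outputs a probability for each of the 8 directions
pilot_a = np.array([0.05, 0.10, 0.55, 0.10, 0.05, 0.05, 0.05, 0.05])
pilot_b = np.array([0.05, 0.20, 0.40, 0.15, 0.05, 0.05, 0.05, 0.05])
print(fuse(pilot_a, pilot_b))  # -> "E"
```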

The researchers found simulation flights using collaborative BCI were 90% on target versus 67% for solo pilots.

Even when sudden changes in the planet’s position were introduced, having additional brainpower cut human reaction time in half.

Also, since EEG signals are often fraught with noise, having signals from two brains helped maintain a more usable signal-to-noise ratio. It also compensated for lapses of attention when one person’s mind momentarily strayed.

While remotely controlling an actual spacecraft using collaborative BCI is still a long way off, more modest uses are altogether feasible today.

For example, enabling people with disabilities to steer a wheelchair with their thoughts.

So, how do you see collaborative BCI being used in your world?

Can Big Data Tackle Football with Machine Learning?

Mockup of telestrator diagram. Based on Image:UT Longhorn football – handoff to Melton in Big12 championship game.JPG by Johntex.

In the public mind, the movie Moneyball captures the fusion of sports and statistics for competitive advantage.

MLB and the NBA have for some time embraced computer vision for video analysis and machine learning for data crunching.

Now, the NFL could be next. Derrick Harris at Gigaom offers insights on this trend as reported on by Judy Battista for the New York Times.

Most intriguing is the idea that NFL teams have not embraced these technologies as readily as their pro sports peers because the complexity of the game resists decoupling teamwork into discrete actions that can be attributed to individual players.

Think of offensive linemen working together to protect quarterbacks and running backs. Or defensive linebackers who don’t get many tackles, but make it difficult for the offense to execute.

For this reason, many NFL coaches prefer to assess players based on how they look on film.

Over time, however, this resistance to analytics is likely to fade as machine learning based applications that use computer vision prove their value and become easier to use.

In fact, it could simply be a matter of identifying more subtle metrics to extract and analyze that previously evaded human detection.

For example, during the 2012 playoffs, the Wall Street Journal’s John Letzing reported on how MLB used motion-analysis software from Sportvision Inc. to quantify an outfielder’s ability to get a jump on fielding fly balls.

Given the rich data complexity of football, it’s hard to imagine coaches not eating up algorithm-powered, in-situ forecasts that take player stats, weather and game scenarios into account and identify those variables most likely to influence what happens next.

Or team management not angling for competitive advantage at the lowest possible cost by pinpointing overlooked, game-deciding metrics, such as fourth-down conversions, that don’t correlate with salary levels (much as on-base percentage was the key metric in Moneyball).
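As a speculative sketch of what such a forecast could look like, the snippet below trains a model on invented play data and then ranks which variables drive the prediction. None of the feature names, numbers, or results reflect real NFL data.

```python
# Speculative sketch of an in-game forecast: predict whether a 4th-down attempt
# converts, then inspect which variables matter most (all data invented).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
features = ["yards_to_go", "field_position", "wind_mph", "qb_rating", "def_rank"]
X = rng.normal(size=(500, len(features)))  # stand-in for real play-by-play data
y = (X[:, 0] < 0.2).astype(int)            # toy label: shorter yardage converts

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Which variables most influence what happens next?
for name, importance in sorted(zip(features, model.feature_importances_),
                               key=lambda t: -t[1]):
    print(f"{name}: {importance:.2f}")
```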

In other instances, humans can request recalibration of the algorithms so video tracking models ignore what they consider to be noise and add additional factors they view as pivotal.

We reported on how Zest Finance continually improved the accuracy of its credit underwriting assessments in the payday lending market by taking this approach with respect to 70,000 variables.

Part of football’s very appeal is its complexity and the many inter-dependencies that make it tick. And so it’s a natural for the video scrutiny and data mining that computer vision and machine learning make possible.

Are you involved in an activity where many individuals come together to form a whole greater than the sum of its parts?

How could analysis of its finer points of interaction unlock hidden value in your business?

Machine Learning Helps Ford Hybrids Cut Gas Use

Imagine a plug-in hybrid car that could learn your travel patterns. Now imagine it uses that info to keep you driving in electric-only mode more often.

Well, Ford is on it with its C-MAX Energi plug-in hybrid and Fusion Hybrid models.

The company calls its machine learning technology EV+, a patent-pending feature of SmartGauge®, which comes standard with these vehicles.

Ford research shows that hybrid owners prefer to drive in electric-only mode near their homes and frequent destinations.

To learn your often-visited locations, EV+ utilizes the onboard GPS of Ford SYNC® with proprietary software algorithms the company has developed in-house.

As EV+ gets familiar with your home parking spot and frequent destinations, it uses the electric power stored in the car’s high-voltage battery and remains in electric-only mode because you’re likely to charge it up again soon.

When EV+ identifies that your current location is within a radius of 1/8 mile or 200 meters of a frequent stop, the vehicle favors electric-only mode and the dashboard displays an “EV+” light to let you know.
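Ford hasn’t disclosed how EV+ performs this check, but a plausible sketch is a simple geofence test against the learned list of frequent stops; the coordinates and stop list below are placeholders.

```python
# Rough sketch of the kind of proximity check EV+ could perform
# (Ford's actual logic is proprietary; values below are placeholders).
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS coordinates."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(a))

frequent_stops = [(42.3001, -83.2300), (42.3310, -83.0458)]  # learned over time

def prefer_ev_mode(lat, lon, radius_m=200):
    """Favor electric-only mode within ~200 m (about 1/8 mile) of a frequent stop."""
    return any(haversine_m(lat, lon, s_lat, s_lon) <= radius_m
               for (s_lat, s_lon) in frequent_stops)

print(prefer_ev_mode(42.3002, -83.2301))  # near a learned stop -> True
```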

Green Car Congress reports that Ford has big plans for big data and machine learning to transform the driver assistance capabilities of its vehicles. Among them are:

  • “Big data” analysis for smarter decisions using radar and camera sensors to assess outdoor conditions like traffic congestion, so the vehicle could minimize distractions like an incoming phone call and keep the driver focused on the road.
  • Biometrics for personalized driving help where biometric sensors in seats could measure the driver’s stress levels and integrated technologies calibrate what kind of assistance to provide based on the situation and the driver’s skill.

In our last post, we pointed to self-parking cars as an early manifestation of cognitive computing that enables humans to offload their duties to the vehicle entirely.

And in the previous post, we speculated how the Google hire of Ray Kurzweil could signal an acceleration of computers and humans extending one another’s capabilities.

With these trends in machine learning happening in tandem, how do you see driver assistance taking shape in the future?