Machine Learning Predicts Students’ Final Grades As the Course Unfolds

“Can we predict a student’s final grade based on his or her behavior in the course so far?”

Writing for the Wall Street Journal, Don Clark showcases Canadian company Desire2Learn, a provider of cloud-based learning systems to enterprises and academia, which recently announced this very capability.

With 10 million learners over 14 years, the company has collected detailed records on student engagement with instructional materials and their subsequent performance on tests.

Desire2Learn has developed machine learning algorithms it applies to its historical data that make predictions of how students will fare as the course unfolds.
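
The article doesn’t detail Desire2Learn’s actual models, but the general recipe, train a classifier on past students’ engagement features and eventual grades, then score students mid-course, can be sketched briefly. Everything below (the file names, column names, and the choice of a random forest) is an illustrative assumption, not the company’s method:

```python
# Illustrative sketch only -- not Desire2Learn's actual model or data.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Assumed historical data: one row per past student, engagement features
# plus the final letter grade that student eventually earned.
history = pd.read_csv("past_courses.csv")  # hypothetical file
features = ["logins_per_week", "content_views", "quiz_avg", "forum_posts"]

X_train, X_test, y_train, y_test = train_test_split(
    history[features], history["final_grade"], test_size=0.2, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))

# Mid-course, the same features computed so far yield an early prediction
# that can flag at-risk students for the instructor.
current = pd.read_csv("current_course.csv")  # hypothetical file
current["predicted_grade"] = model.predict(current[features])
```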

Such predictive analysis serves as an early warning signal so instructors can give at-risk learners the additional, personalized attention they need, when they need it most.

The company’s CEO, John Baker, claims Desire2Learn’s algorithms yield greater than 90% accuracy at predicting letter grades.

John Baker, CEO Desire2Learn (c) Desire2Learn

Just the same, privacy issues do crop up. For example, giving instructors a student’s individual engagement statistics can expose that student’s general level of effort.

As a safeguard, Desire2Learn anonymizes student activity records by stripping off personally identifiable information when the student finishes the course.

The company makes predictive findings available to instructors through its Student Success System, and it plans to do likewise for students through Degree Compass, a product currently in beta.

So, how would your business change if you had timely predictions of future outcomes?

Machine Learning Startup Skytree Lands $18 Million

In venture capital circles, machine learning startups are about to catch fire. This makes sense, as the size of the data sets that companies and organizations need to use spirals beyond what the human brain can fathom.

As Derrick Harris at Gigaom reports, Skytree landed $18 million in Series A funding from US Venture Partners, United Parcel Service and Scott McNealy, the Sun Microsystems co-founder and former CEO. The company began just over a year earlier with $1.5 million in seed funding.

Skytree co-founder Alexander Gray (second from left) at Structure: Data 2012. (c) Pinar Ozger

As big data gets bigger ever more quickly, machine learning makes it possible to identify, in real time, meaningful patterns that would elude even sharp humans armed with the best query tools.

Still, there’s often a place for human judgment to flesh out the findings of machine learning algorithms.

For example: Netflix recommendations, the ZestFinance credit risk analysis platform, and ProPublica’s Message Machine project, which combs through vast volumes of crowd-sourced emails to find important news stories on a given topic.

The flagship Skytree product, Skytree Server, lets users run advanced machine learning algorithms against their own data sources at speeds much faster than current alternatives. The company claims such rapid and complete processing of large datasets yields extraordinary boosts in accuracy.

Skytree’s new beta product, Adviser, allows novice users to perform machine learning analysis of their data on a laptop and receive guidance about methods and findings.

As the machine learning space becomes more accessible to a wider audience, expect to see more startups get venture funding.

And with DARPA striving to make it easier for machine learning developers to focus more on application design and less on the complexities of statistical inference, this trend could have momentum for some time to come.

Machine Learning Touches All Aspects of Medical Care

Jennifer Barrett
Courtesy of George Mason University

Writing for Mason Research at George Mason University, Michele McDonald reports on how machine learning is helping doctors determine the best course of treatment for their patients. What’s more, machine learning is improving efficiency in medical billing and even predicting patients’ future medical conditions.

According to Janusz Wojtusiak, director of the Machine Learning and Inference Laboratory and the Center for Discovery Science and Health Informatics at Mason’s College of Health and Human Services, individualized medicine becomes possible when complex algorithms mine the data.

Wojtusiak points out how current research and studies focus on the average patient whereas those being treated want personalized care at the lowest risk for the best outcome.

Machine learning can identify patterns in reams of data and place the patient’s conditions and symptoms in context to build an individualized treatment model.

As such, machine learning seeks to support the physician based on the history of the condition as well as the history of the patient.

The data to be mined is vast and detailed. It includes the lab tests, diagnoses, treatments, and qualitative notes of individual patients who, taken together, form large populations.

Machine learning uses algorithms that read the data, identify patterns in it and derive meaningful analyses.

For example, researchers at the Machine Learning and Inference Lab are comparing five different treatment options for patients with prostate cancer.

To determine the best treatment option, machine learning must first categorize prostate cancer patients on the basis of certain commonalities. When a new patient comes in, algorithms can figure out which group he is most similar to. In turn, this guides the direction of treatment for that patient.
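
The article doesn’t name the algorithms involved, but the grouping step it describes maps onto standard clustering: group past patients by clinical features, then assign a new patient to the most similar group. A minimal sketch with k-means follows; the features and values are invented for illustration:

```python
# Illustrative sketch only -- the lab's actual methods are not described here.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Invented clinical features per past patient:
# age, PSA level, Gleason score, tumor stage.
past_patients = np.array([
    [62, 4.1, 6, 1],
    [71, 9.8, 8, 3],
    [58, 5.5, 7, 2],
    [49, 3.2, 6, 1],
    [75, 12.4, 9, 3],
    [66, 7.1, 7, 2],
])  # in practice, many more rows

scaler = StandardScaler()
X = scaler.fit_transform(past_patients)

# Five groups, loosely echoing the five treatment options under comparison.
groups = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)

# A new patient is assigned to the most similar group, which then points
# to the treatment histories most relevant to him.
new_patient = scaler.transform([[66, 6.2, 7, 2]])
print("most similar group:", groups.predict(new_patient)[0])
```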

Given the high-stakes consequences involved in patient care, the complexity that must be sorted out when making diagnoses, and the ongoing monitoring of interventions against outcomes, machine learning development in health care is risk-mitigating and cost-effective.

For more about The Machine Learning and Inference Lab and the health care pilot projects they are working on, see the original article here.

DARPA Sets Stage for Giant Leap Forward in Machine Learning

Probabilistic Programming for Advanced Machine Learning
Courtesy of DARPA.mil

As the new frontier in computing, machine learning brings us software that can make sense of big data, act on its findings and draw insights from ambiguous information.

Spam filters, recommendation systems and driver assistance technology are some of today’s more mainstream uses of machine learning.

As with life on any frontier, creating new machine learning applications, even with the most talented of teams, can be difficult and slow for lack of tools and infrastructure.

DARPA (the Defense Advanced Research Projects Agency) is tackling this problem head-on by launching the Probabilistic Programming for Advanced Machine Learning (PPAML) program.

Probabilistic programming is a programming paradigm for dealing with uncertain information.

In much the same way that high level programming languages spared developers the need to deal with machine level issues, DARPA’s focus on probabilistic programming sets the stage for a quantum leap forward in machine learning.

More specifically, machine learning developers using new programming languages geared for probabilistic inference will be freed up to deliver innovative, effective and efficient applications faster, while relying less on big data than is common today.
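
To make the paradigm concrete, here is a toy sketch in plain Python (not one of the new languages PPAML aims to produce): we declare a generative model with an unknown quantity, condition on observed data, and recover a posterior by crude rejection sampling, the inference machinery a probabilistic programming language would handle for us:

```python
# Toy illustration of probabilistic programming in plain Python.
# Model: a coin of unknown bias. Observe flips, infer the bias.
import random

observed = [1, 1, 0, 1, 1, 0, 1, 1]  # hypothetical flips: 1 = heads

def model():
    """Generative model: draw a bias, then simulate flips under it."""
    bias = random.random()  # prior: bias uniform on [0, 1]
    flips = [1 if random.random() < bias else 0 for _ in observed]
    return bias, flips

# Rejection sampling: keep only runs whose simulated flips match the data.
# A real probabilistic language would generate far smarter inference.
accepted = []
while len(accepted) < 1000:
    bias, flips = model()
    if flips == observed:  # condition on what was actually observed
        accepted.append(bias)

print("posterior mean bias:", sum(accepted) / len(accepted))
```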

For details, see the DARPA Special Notice document describing the specific capabilities sought at http://go.usa.gov/2PhW.

Can Machine Learning Give Investigative Journalism the Scoop?

While attending the recent NICAR 2013 conference in Louisville, Kentucky, Andrew Trench, Media24 Investigations editor and technology blogger, reported on a fascinating demonstration of machine learning for finding news stories and insights humans would typically overlook.

Machine Learning at a Glance
Courtesy of GrubStreet.co.za & Andrew Trench

He then shares his vision for how machine learning will impact news as we know it in terms of gathering and shaping stories as well as the news business itself.

ProPublica’s Jeff Larson was the presenter at NICAR. By way of the Message Machine project, his non-profit investigative reporting group used machine learning to uncover a number of major stories.

The project clustered documents and applied decision trees to comb through vast volumes of crowd-sourced emails from their readers on a given topic.

In this case, the topic was how US political parties raised money by tailoring their pitches to suit the demographics of the email recipients.

Under the hood, algorithms convert every word in every email to a number. Documents then have mathematical properties that allow them to be clustered as similar or different.
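
The exact pipeline isn’t reproduced in this account, but the word-to-number step Larson describes is standard text vectorization. A minimal sketch with scikit-learn, using invented fundraising emails:

```python
# Illustrative sketch only -- not ProPublica's actual Message Machine code.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

emails = [  # invented examples of crowd-sourced fundraising emails
    "Chip in $5 before tonight's deadline to stop the other side",
    "Your $5 gift before midnight keeps our campaign on the air",
    "As a small business owner, you know what's at stake this election",
    "Small business owners like you deserve a champion in Washington",
]

# Every word becomes a number: each email is now a weighted term vector.
vectors = TfidfVectorizer(stop_words="english").fit_transform(emails)

# Similar vectors land in the same cluster -- here, two pitch styles.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
for label, email in zip(labels, emails):
    print(label, email[:45])
```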

Apart from the tedium such clustering and conversion tasks would impose on the human mind, scouring the sheer volume of content collected would be too time-consuming and expensive. All the more so given the ever shorter news cycles we’ve come to accept nowadays.

Trench envisions using machine learning to more accurately predict which stories will yield the most likes, click-thrus, and commentary.

He then ruminates about what it would be like for editors concerned about daily sales to have hard data to go on instead of gut instinct.

Upon reading this, I recalled the timeless bible for direct response marketers known as Scientific Advertising by Claude Hopkins.

Writing in the 1920s, Hopkins takes advertisers to task for going with their guts when they could craft ads and calls to action that would give them the data they need to continually improve reader response.

In effect, Google AdWords is Scientific Advertising on steroids because it forces all businesses to be better direct marketers in real time.

Meanwhile, chances are, machine learning is already quietly building out Trench’s vision of newspapers organized using prediction engines.

After all, if sites and apps like Flipboard allow readers to pull in their own personalized magazines, I suppose the big challenge for traditional media online is to push out an engaging product that differentiates itself by expanding readers’ horizons in ways they would not explore on their own.

Which brings us back to Larson and machine learning as a way to make investigative reporting economically viable again.

Starting in the 1980s, the media business converted news from a loss leader to a profit center, amid a flurry of mergers and acquisitions. Along the way, investigative reporting gave way to infotainment because it was seen as anathema to making profits.

Today, many complain that the mainstream media in the US offers too much commentary and too little “hard news.” In turn, news networks from overseas are gaining American viewers by filling this void.

And so perhaps, we’ve come full circle. With the help of machine learning, traditional media can strike the right balance between catering to their audience’s known preferences and wowing them with authentic, hard-hitting stories as a counterweight to ubiquitous fluff.

How do you see machine learning transforming the way you communicate and publish?

Can Big Data Tackle Football with Machine Learning?

Mockup of telestrator diagram. Based on Image:UT Longhorn football – handoff to Melton in Big12 championship game.JPG by Johntex.

In the public mind, the movie Moneyball captures the fusion of sports and statistics for competitive advantage.

MLB and the NBA have embraced the video analysis of computer vision and the data crunching of machine learning for some time.

Now, the NFL could be next. Derrick Harris at Gigaom offers insights on this trend as reported on by Judy Battista for the New York Times.

Most intriguing is the idea that NFLers have not embraced these technologies as readily as their pro sports peers because the complexity of the game defies decoupling teamwork into discrete actions readily attributable to individual players.

Think of offensive linemen working together to protect quarterbacks and running backs. Or defensive linebackers who don’t get many tackles, but make it difficult for the offense to execute.

For this reason, many NFL coaches prefer to assess players based on how they look on film.

Over time, however, this resistance to analytics is likely to fade as machine learning based applications that use computer vision prove their value and become easier to use.

In fact, it could simply be a matter of identifying more subtle metrics to extract and analyze that previously evaded human detection.

For example, during the 2012 playoffs, the Wall Street Journal’s John Letzing reported on how MLB used motion-analysis software from Sportvision Inc. to quantify an outfielder’s ability to get a jump on fielding fly balls.

Given the rich data complexity of football, it’s hard to imagine coaches not eating up algorithm-powered, in-situ forecasts that take player stats, weather and game scenarios into account and identify those variables most likely to influence what happens next.

Or team management not angling for competitive advantage at the lowest possible cost by pinpointing overlooked, game-deciding metrics, such as fourth-down conversions, that don’t correlate with salary levels (just as on-base percentage was the key focus in Moneyball).
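
No such NFL system exists yet, but the “which variables matter most” question maps onto a standard technique: fit a model on historical plays and rank its feature importances. A hypothetical sketch, with invented file and column names:

```python
# Hypothetical sketch -- no such NFL system is described in the article.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

plays = pd.read_csv("drives.csv")  # invented file of historical drives
features = ["down", "yards_to_go", "field_position", "wind_mph",
            "temperature", "time_remaining", "score_margin"]

# Predict whether a drive ends in a score, then ask the model which
# variables carried the most predictive weight.
model = RandomForestClassifier(n_estimators=300, random_state=0)
model.fit(plays[features], plays["drive_scored"])

for importance, name in sorted(
        zip(model.feature_importances_, features), reverse=True):
    print(f"{name}: {importance:.3f}")
```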

In other instances, humans can request recalibration of the algorithms so video tracking models ignore what they consider to be noise and add additional factors they view as pivotal.

We reported on how ZestFinance continually improved the accuracy of its credit underwriting assessments in the payday lending market by taking this approach with respect to 70,000 variables.

Part of football’s very appeal is its complexity and the many inter-dependencies that make it tick. And so it’s a natural for the video scrutiny and data mining that computer vision and machine learning make possible.

Are you involved in an activity where many individuals come together to form a whole greater than the sum of its parts?

How could analysis of its finer points of interaction unlock hidden value in your business?

Machine Learning Helps Ford Hybrids Cut Gas Use

Imagine a plug-in hybrid car that could learn your travel patterns. Now imagine it uses that info to keep you driving in electric-only mode more often.

Well, Ford is on it with its C-MAX Energi plug-in hybrid and Fusion Hybrid models.

The company calls its machine learning technology EV+, a patent-pending aspect of SmartGauge®, which comes standard with these vehicles.

Ford research shows that hybrid owners prefer to drive in electric-only mode near their homes and frequent destinations.

To learn your often-visited locations, EV+ utilizes the onboard GPS of Ford SYNC® with proprietary software algorithms the company has developed in-house.

As EV+ gets familiar with your home parking spot and frequent destinations, it uses the electric power stored in the car’s high-voltage battery and remains in electric-only mode because you’re likely to charge it up again soon.

When EV+ identifies that your current location is within a radius of 1/8 mile or 200 meters of a frequent stop, the vehicle favors electric-only mode and the dashboard displays an “EV+” light to let you know.
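
Ford hasn’t published EV+’s internals, but the core check it describes, whether the car’s GPS fix falls within roughly 200 meters of a learned frequent stop, is a simple great-circle distance test. A sketch, with invented coordinates:

```python
# Hypothetical sketch of the EV+ proximity check -- not Ford's actual code.
from math import asin, cos, radians, sin, sqrt

EARTH_RADIUS_M = 6_371_000
TRIGGER_RADIUS_M = 200  # roughly 1/8 mile, per the article

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = (sin(dlat / 2) ** 2
         + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2)
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

# Learned frequent stops (invented coordinates: say, home and office).
frequent_stops = [(42.3149, -83.2090), (42.3223, -83.1763)]

def near_frequent_stop(lat, lon):
    return any(haversine_m(lat, lon, s_lat, s_lon) <= TRIGGER_RADIUS_M
               for s_lat, s_lon in frequent_stops)

if near_frequent_stop(42.3151, -83.2085):
    print("EV+ active: favoring electric-only mode")
```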

Green Car Congress reports that Ford has big plans for big data and machine learning to transform the driver assistance capabilities of its vehicles. Among them are:

  • “Big data” analysis for smarter decisions using radar and camera sensors to assess outdoor conditions like traffic congestion, so the vehicle could minimize distractions like an incoming phone call and keep the driver focused on the road.
  • Biometrics for personalized driving help where biometric sensors in seats could measure the driver’s stress levels and integrated technologies calibrate what kind of assistance to provide based on the situation and the driver’s skill.

In our last post, we pointed to self-parking cars as an early manifestation of cognitive computing that enables humans to offload their duties to the vehicle entirely.

And in the previous post, we speculated how the Google hire of Ray Kurzweil could signal an acceleration of computers and humans extending one another’s capabilities.

With these trends in machine learning happening in tandem, how do you see driver assistance taking shape in the future?