Machine learning is the newest thing at BYU, thanks to the work of engineer Dah-Jye Lee, who has created an algorithm that allows computers to learn without human help. According to Lee, his algorithm differs from others in that it doesn’t specify for the computer what it should or shouldn’t look for. Instead, his program simply feeds images to the computer, letting it decide on its own what is what.
Much as children intuitively learn to distinguish the objects in the world around them, Lee's object-recognition approach shows the computer various images without differentiating between them; the computer is tasked with doing this on its own. According to Lee:
“It’s very comparable to other object recognition algorithms for accuracy, but we don’t need humans to be involved. You don’t have to reinvent the wheel each time. You just run it.”
Writing for Mason Research at George Mason University, Michele McDonald reports on how machine learning is helping doctors determine the best course of treatment for their patients. What’s more, machine learning is improving efficiency in medical billing and even predicting patients’ future medical conditions.
Wojtusiak points out that current research and studies focus on the average patient, whereas those being treated want personalized care with the lowest risk and the best outcome.
Machine learning can identify patterns in reams of data and place the patient’s conditions and symptoms in context to build an individualized treatment model.
As such, machine learning seeks to support the physician based on the history of the condition as well as the history of the patient.
The data to be mined is vast and detailed. It includes the lab tests, diagnoses, treatments, and qualitative notes of individual patients who, taken together, form large populations.
Machine learning uses algorithms that recognize the data, identify patterns in it and derive meaningful analyses.
For example, researchers at the Machine Learning and Inference Lab are comparing five different treatment options for patients with prostate cancer.
To determine the best treatment option, machine learning must first categorize prostate cancer patients on the basis of certain commonalities. When a new patient comes in, algorithms can figure out which group he is most similar to. In turn, this guides the direction of treatment for that patient.
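This grouping-and-matching step can be illustrated with a minimal nearest-neighbor sketch. This is not the lab's actual method, and the feature vectors and values below are entirely made up for illustration:

```python
import numpy as np

# Hypothetical patient feature vectors (e.g. age, PSA level, tumor grade),
# one row per previously treated patient -- illustrative values only.
patients = np.array([
    [62, 4.1, 1],
    [65, 4.5, 1],
    [71, 18.0, 3],
    [74, 21.5, 3],
])
# The treatment group each existing patient was assigned to.
groups = np.array([0, 0, 1, 1])

def nearest_group(new_patient, patients, groups):
    """Assign a new patient to the group of the most similar past patient."""
    dists = np.linalg.norm(patients - new_patient, axis=1)
    return groups[np.argmin(dists)]

# A new patient most resembles the higher-risk cluster.
print(nearest_group(np.array([68, 19.0, 3]), patients, groups))  # -> 1
```

In practice the lab would work with far richer features and a proper clustering model, but the principle is the same: similarity in the data guides the direction of treatment.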
Given the high-stakes consequences of patient care, the complexity of making diagnoses, and the ongoing monitoring of interventions against outcomes, machine learning development in health care is both risk-mitigating and cost-effective.
For more about The Machine Learning and Inference Lab and the health care pilot projects they are working on, see the original article here.
While attending the recent NICAR 2013 conference in Louisville, Kentucky, Andrew Trench, Media24 Investigations editor and technology blogger, reported on a fascinating demonstration of machine learning for finding news stories and insights humans would typically overlook.
He then shares his vision for how machine learning will impact news as we know it in terms of gathering and shaping stories as well as the news business itself.
ProPublica’s Jeff Larson was the presenter at NICAR. By way of the Message Machine project, his non-profit investigative reporting group used machine learning to uncover a number of major stories.
The project clustered documents and applied decision trees to comb through vast volumes of crowd-sourced emails from their readers on a given topic.
In this case, the topic was how US political parties raised money by tailoring their pitches to suit the demographics of the email recipients.
Under the hood, algorithms convert every word in every email to a number. Documents then have mathematical properties that allow them to be clustered as similar or different.
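The conversion Larson describes can be sketched in a few lines. This is a toy bag-of-words version with made-up emails, not the Message Machine's actual pipeline, which used richer features and decision trees:

```python
import numpy as np
from collections import Counter

# Toy emails standing in for the crowd-sourced corpus (illustrative text only).
emails = [
    "donate today to help our campaign",
    "your donation helps the campaign",
    "join us at the rally this weekend",
    "rally with us this weekend",
]

# Build a shared vocabulary, then turn each email into a word-count vector:
# every word becomes a number, so each document becomes a point in space.
vocab = sorted({w for e in emails for w in e.split()})
vectors = np.array([[Counter(e.split())[w] for w in vocab] for e in emails], float)

def cosine(a, b):
    """Similarity between two documents; near 1.0 means near-duplicates."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Emails making the same pitch score as more similar than unrelated ones,
# which is what lets a clustering algorithm group them together.
print(cosine(vectors[0], vectors[1]) > cosine(vectors[0], vectors[2]))  # True
```

Once documents have these mathematical properties, standard clustering algorithms can group thousands of similar pitches without anyone reading them all.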
Apart from the tedium such clustering and conversion tasks would impose on the human mind, scouring the sheer volume of content collected would be too time-consuming and expensive, all the more so given the ever-shorter news cycles we’ve come to accept nowadays.
Trench envisions using machine learning to predict more accurately which stories will yield the most likes, click-throughs, and commentary.
He then ruminates about what it would be like for editors concerned about daily sales to have hard data to go on instead of gut instinct.
Upon reading this, I recalled the timeless bible for direct response marketers known as Scientific Advertising by Claude Hopkins.
Writing in the 1920s, Hopkins takes advertisers to task for going with their guts when they could craft ads and calls to action that would give them the data they need to continually improve their response from readers.
In effect, Google Adwords is Scientific Advertising on steroids because it forces all businesses to be better direct marketers in real time.
Meanwhile, chances are, machine learning is already quietly building out Trench’s vision of newspapers organized using prediction engines.
After all, if sites and apps like Flipboard allow readers to pull in their own personalized magazines, I suppose the big challenge for traditional media online is to push out an engaging product that differentiates itself by expanding the reader’s horizons in ways they would not on their own.
Which brings us back to Larson and machine learning as a way to make investigative reporting economically viable again.
Starting in the 1980s, the media business converted news from a loss leader to a profit center, amid a flurry of mergers and acquisitions. Along the way, investigative reporting gave way to infotainment because it was seen as anathema to making profits.
Today, many complain that the mainstream media in the US offers too much commentary and too little “hard news.” In turn, news networks from overseas are gaining American viewers by filling this void.
And so perhaps, we’ve come full circle. With the help of machine learning, traditional media can strike the right balance between catering to their audience’s known preferences and wowing them with authentic, hard-hitting stories as a counterweight to ubiquitous fluff.
How do you see machine learning transforming the way you communicate and publish?
Based on DesMarais’s description, it’s clear SaneBox is using machine learning to help categorize and prioritize messages.
The service watches how you engage with senders over time to predict which new messages you’ll consider important.
Those messages it considers less important it moves out of your inbox and into an @SaneLater folder you can look at whenever you like.
If you notice an important message in your @SaneLater folder, you can move it to your inbox; SaneBox remembers and will leave future messages from that sender in your inbox.
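The behavior described above, scoring senders by past engagement with a manual override, can be sketched in a few lines. This is a hypothetical illustration, not SaneBox's actual algorithm, and the 0.5 threshold is an assumption:

```python
from collections import defaultdict

class InboxFilter:
    """Toy sender-based prioritizer (illustrative, not SaneBox's method)."""

    def __init__(self, threshold=0.5):
        self.opened = defaultdict(int)    # messages from sender you engaged with
        self.received = defaultdict(int)  # total messages from sender
        self.always_inbox = set()         # manual "keep in inbox" overrides
        self.threshold = threshold

    def record(self, sender, engaged):
        """Watch how you engage with each sender over time."""
        self.received[sender] += 1
        if engaged:
            self.opened[sender] += 1

    def mark_important(self, sender):
        """Like moving a message back to the inbox: remember the sender."""
        self.always_inbox.add(sender)

    def folder(self, sender):
        if sender in self.always_inbox:
            return "Inbox"
        if self.received[sender] == 0:
            return "@SaneLater"  # no history yet, defer judgment
        ratio = self.opened[sender] / self.received[sender]
        return "Inbox" if ratio >= self.threshold else "@SaneLater"

f = InboxFilter()
for _ in range(5):
    f.record("boss@work.com", engaged=True)
for _ in range(5):
    f.record("promo@shop.com", engaged=False)
print(f.folder("boss@work.com"))   # Inbox
print(f.folder("promo@shop.com"))  # @SaneLater
f.mark_important("promo@shop.com")
print(f.folder("promo@shop.com"))  # Inbox
```

A production system would weigh many more signals (reply speed, thread depth, address-book membership), but the feedback loop, predict, let the user correct, remember, is the core idea.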
The service also equips you with a dashboard so you can track your volume of important versus non-important messages. DesMarais gained insight into just how much time email was sucking out of her workday.
Additional folders include @SaneNews (so all your newsletter subscriptions are in one place) and @SaneBlackHole (for those messages you want sent straight to trash).
The SaneBox reminder feature lets you specify which messages you want replies to, and by when. Simply add an address like oneday@SaneBox.com or April12@SaneBox.com to the CC or BCC field, and SaneBox keeps an @SaneRemindMe folder with these messages ordered accordingly.
Now machine learning not only keeps spam out of view; it also rescues your relationship with your inbox.