Germ Tracker Monitors Spread of Sickness Using Tweets & GPS

Germ Tracker Web App output (Courtesy of the University of Rochester and News.com.au)

Claire Porter, Technology Editor for News.com.au, reports on a web app called Germ Tracker that pinpoints where sick people are in public so you can avoid them and stay well.

Created by scientists at the University of Rochester, Germ Tracker analyzes thousands of tweets against GPS-based data on pollution sources, public transportation stations, gyms, restaurants, and more.

Most notably, those who tweeted about going to the gym but rarely made it there got sick significantly more often than those who actually went.

According to lead researcher Adam Sadilek, individuals can use this information to take charge of their health by steering clear of subway stations where many sick people may congregate.

Sadilek also sees Germ Tracker helping local authorities respond more effectively to flu outbreaks.

Germ Tracker uses machine learning and natural language processing to distinguish colloquial uses of “sick” (as in “cool!”) from genuine reports of illness like “I’m really ill.”

As the app mines more tweets, it learns and gets better at predicting whether a Twitter user is indeed sick. Its classifications include not sick, low risk, sick, and high risk.
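
To make the idea concrete, here is a minimal sketch of that kind of tweet classifier using scikit-learn. It is not the University of Rochester system, and the training tweets and labels are invented; it just illustrates how labeled examples teach a model to tell slang “sick” from actual illness.

    # Minimal sketch of a Germ Tracker-style tweet classifier (illustrative only).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Tiny, made-up training set: 1 = reporting illness, 0 = slang / not sick
    tweets = [
        "ugh I'm really ill, fever and chills all night",
        "stuck in bed with the flu again",
        "that concert was sick!",
        "sick new sneakers, so cool",
    ]
    labels = [1, 1, 0, 0]

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    model.fit(tweets, labels)

    # Probability that a new tweet is reporting real illness
    print(model.predict_proba(["feeling sick, staying home from work"])[0][1])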

The algorithm reveals the tweet that led to the classification and allows users to override it.

Two Brains Act As One Super Pilot with Machine Learning

EEG machine

Two brains are better than one! At least they are when using a simulator to navigate a virtual spacecraft.

More specifically, two people’s thoughts were far more accurate than either person’s acting alone.

Paul Marks at New Scientist reports on how a team at the University of Essex in the UK researching brain-computer interfaces (BCI) arrived at this conclusion. The team will present its findings in March at the Intelligent User Interfaces conference.

Their experiment suggests “collaborative BCI” could flourish in the future in fields like robotics and telepresence.

EEG electrodes were hooked up to the scalps of both navigators. As each person thought about a predefined concept like “left” or “right,” machine learning software identified patterns in their thinking and applied them in real time.

For the participants, the challenge was to steer the craft to the absolute center of a planet using a repertoire of 8 directional thoughts. Machine learning software merged their separate thoughts into continuous actions that directed the craft.
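
The article doesn’t describe the team’s actual decoding pipeline, but a toy sketch of the fusion step might simply average each pilot’s decoded probabilities over the eight directions and act on the combined result:

    import numpy as np

    # Hypothetical sketch of "collaborative BCI" fusion: each pilot's EEG decoder
    # outputs a probability over 8 directional commands; averaging the two
    # distributions and taking the argmax yields the joint command.
    # (Illustrative only -- not the University of Essex team's pipeline.)
    DIRECTIONS = ["N", "NE", "E", "SE", "S", "SW", "W", "NW"]

    def fuse_commands(probs_pilot_a, probs_pilot_b):
        combined = (np.asarray(probs_pilot_a) + np.asarray(probs_pilot_b)) / 2.0
        return DIRECTIONS[int(np.argmax(combined))]

    # Pilot A leans "E", pilot B leans "NE"; the fused decision follows the stronger evidence.
    a = [0.02, 0.10, 0.60, 0.08, 0.05, 0.05, 0.05, 0.05]
    b = [0.05, 0.55, 0.20, 0.05, 0.05, 0.04, 0.03, 0.03]
    print(fuse_commands(a, b))  # -> "E" for these made-up numbers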

The researchers found simulation flights using collaborative BCI were 90% on target versus 67% for solo pilots.

Even when sudden changes in the planet’s position were introduced, having additional brainpower cut human reaction time in half.

Also, since EEG signals are often fraught with noise, combining signals from two brains helped maintain a more usable signal-to-noise ratio. It also compensated for lapses in attention when one person’s mind momentarily strayed.
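
A quick, generic numpy illustration (not from the study) shows why: averaging two independent noisy readings of the same underlying signal cuts the noise by roughly the square root of two.

    import numpy as np

    # Toy illustration of why two noisy EEG channels beat one.
    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 500)
    signal = np.sin(2 * np.pi * 10 * t)            # the "true" brain signal
    brain_a = signal + rng.normal(0, 1.0, t.size)  # pilot A's noisy EEG
    brain_b = signal + rng.normal(0, 1.0, t.size)  # pilot B's noisy EEG

    combined = (brain_a + brain_b) / 2
    print(np.std(brain_a - signal))    # noise level of one brain, ~1.0
    print(np.std(combined - signal))   # noise level of the average, ~0.7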

While remotely controlling actual spacecraft using collaborative BCI is still a long way off, more modest uses are altogether feasible today.

For example, enabling people with disabilities to steer a wheelchair with their thoughts.

So, how do you see collaborative BCI being used in your world?

Can Big Data Tackle Football with Machine Learning?

Mockup of telestrator diagram. Based on Image:UT Longhorn football – handoff to Melton in Big12 championship game.JPG by Johntex.

In the public mind, the movie Moneyball captures the fusion of sports and statistics for competitive advantage.

MLB and the NBA have embraced the video analysis of computer vision and the data crunching of machine learning for some time.

Now, the NFL could be next. Derrick Harris at Gigaom offers insights on this trend as reported on by Judy Battista for the New York Times.

Most intriguing is the idea that the NFL has not embraced these technologies as readily as its pro sports peers because the complexity of the game resists decomposing teamwork into discrete actions that can be attributed to individual players.

Think of offensive linemen working together to protect quarterbacks and running backs. Or defensive linebackers who don’t get many tackles, but make it difficult for the offense to execute.

For this reason, many NFL coaches prefer to assess players based on how they look on film.

Over time, however, this resistance to analytics is likely to fade as machine learning-based applications that use computer vision prove their value and become easier to use.

In fact, it could simply be a matter of identifying more subtle metrics to extract and analyze that previously evaded human detection.

For example, during the 2012 playoffs, the Wall Street Journal’s John Letzing reported on how MLB used motion-analysis software from Sportvision Inc. to quantify an outfielder’s ability to get a jump on fielding fly balls.

Given the rich data complexity of football, it’s hard to imagine coaches not eating up algorithm-powered, in-situ forecasts that take player stats, weather and game scenarios into account and identify those variables most likely to influence what happens next.
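
As a purely hypothetical sketch, such a forecast could be a classifier trained on situational features, with feature importances surfacing the variables that matter most. The feature names and toy data below are invented for illustration:

    # Hypothetical sketch of an in-game forecast model: given situational features,
    # predict whether the next play moves the chains and rank which variables
    # most influenced the prediction. Not any team's actual system.
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier

    plays = pd.DataFrame({
        "down":        [1, 3, 2, 4, 3, 1],
        "yards_to_go": [10, 7, 4, 1, 12, 10],
        "wind_mph":    [5, 15, 0, 20, 10, 3],
        "qb_rating":   [98, 85, 110, 70, 92, 101],
        "converted":   [1, 0, 1, 1, 0, 1],   # did the play succeed?
    })

    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(plays.drop(columns="converted"), plays["converted"])

    # Rank the variables by how much they drive the prediction
    for name, score in sorted(zip(plays.columns[:-1], model.feature_importances_),
                              key=lambda p: -p[1]):
        print(f"{name}: {score:.2f}")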

Or team management not angling for competitive advantage at the lowest possible cost by pinpointing overlooked, game-deciding metrics, such as fourth-down conversions, that don’t correlate with salary levels (much as on-base percentage was the key insight in Moneyball).

In other instances, humans can request recalibration of the algorithms so video tracking models ignore what they consider to be noise and incorporate factors they view as pivotal.

We reported on how ZestFinance continually improved the accuracy of its credit underwriting assessments in the payday lending market by taking this approach with respect to 70,000 variables.

Part of football’s very appeal is its complexity and the many inter-dependencies that make it tick. And so it’s a natural for the video scrutiny and data mining that computer vision and machine learning make possible.

Are you involved in an activity where many individuals come together to form a whole greater than the sum of its parts?

How could analysis of its finer points of interaction unlock hidden value in your business?

Machine Learning Helps Ford Hybrids Cut Gas Use

Imagine a plug-in hybrid car that could learn your travel patterns. Now imagine it uses that info to keep you driving in electric-only mode more often.

Well, Ford is on it with its C-MAX Energi plug-in hybrid and Fusion Hybrid models.

The company calls its machine learning technology EV+, a patent-pending aspect of SmartGauge®, which comes standard with these vehicles.

Ford research shows that hybrid owners prefer to drive in electric-only mode near their homes and frequent destinations.

To learn your often-visited locations, EV+ uses the onboard GPS of Ford SYNC® together with proprietary software algorithms the company developed in-house.

As EV+ gets familiar with your home parking spot and frequent destinations, it uses the electric power stored in the car’s high-voltage battery and remains in electric-only mode because you’re likely to charge it up again soon.

When EV+ identifies that your current location is within a radius of 1/8 mile or 200 meters of a frequent stop, the vehicle favors electric-only mode and the dashboard displays an “EV+” light to let you know.
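
Ford hasn’t published its code, but a back-of-the-envelope sketch of that geofencing idea, with made-up coordinates, could look like this:

    import math

    # Hypothetical sketch of the EV+ idea: if the car is within ~200 m of a
    # learned frequent stop, prefer electric-only mode. Not Ford's actual code.
    FREQUENT_STOPS = [(42.3145, -83.2132), (42.3601, -83.0700)]  # made-up lat/lon
    RADIUS_M = 200  # roughly 1/8 mile

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance between two points, in meters."""
        r = 6_371_000
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    def prefer_ev_mode(lat, lon):
        return any(haversine_m(lat, lon, s_lat, s_lon) <= RADIUS_M
                   for s_lat, s_lon in FREQUENT_STOPS)

    print(prefer_ev_mode(42.3146, -83.2131))  # True: near a learned frequent stop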

Green Car Congress reports that Ford has big plans for big data and machine learning to transform the driver assistance capabilities of its vehicles. Among them are:

  • “Big data” analysis for smarter decisions using radar and camera sensors to assess outdoor conditions like traffic congestion, so the vehicle could minimize distractions like an incoming phone call and keep the driver focused on the road.
  • Biometrics for personalized driving help where biometric sensors in seats could measure the driver’s stress levels and integrated technologies calibrate what kind of assistance to provide based on the situation and the driver’s skill.

In our last post, we pointed to self-parking cars as an early manifestation of cognitive computing that enables humans to offload their duties to the vehicle entirely.

And in the previous post, we speculated that Google’s hire of Ray Kurzweil could signal an acceleration of computers and humans extending one another’s capabilities.

With these trends in machine learning happening in tandem, how do you see driver assistance taking shape in the future?

IBM: Computers that Think in 5 Senses within 5 Years


Nancy Houser, writing for Digital Journal, surveys the landscape of cognitive computing and provides a lucid view of what to expect over the next 5 years.

Smart phones are ramping up their intelligence (Courtesy of Digital Journal; photo: Espen Irwing Swang)

A key theme is computers moving from glorified calculators to thinking machines that augment all five of your senses.

For example, imagine computers responding to your inquiry with richer, more valuable outputs like the texture of fabric or the aroma of fresh cut grass.

What’s making it possible to enhance and extend our senses with intelligence is the convergence of computer vision, machine learning, big data, speech recognition, natural language processing, smartphones and biometric sensors.

One hurdle to humanizing computers has been making the leap from performing predefined tasks based on programmed logic to handling new tasks autonomously, without dedicated software programs already in place (think unsupervised machine learning).
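
For readers new to the term, here is a tiny, generic illustration of unsupervised machine learning (not from the article): with no labels and no predefined rules, the algorithm discovers structure in the data on its own.

    import numpy as np
    from sklearn.cluster import KMeans

    # Unsupervised learning in miniature: no labels, no predefined rules.
    # KMeans discovers two groupings on its own from the raw points.
    rng = np.random.default_rng(42)
    points = np.vstack([
        rng.normal(loc=[0, 0], scale=0.3, size=(50, 2)),
        rng.normal(loc=[3, 3], scale=0.3, size=(50, 2)),
    ])

    clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(points)
    print(clusters[:5], clusters[-5:])  # the two blobs get separate cluster ids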

Toward bridging that gap, Houser discusses the promising work of Prof. Chris Eliasmith on efforts to reverse-engineer the biology of the mammalian brain.

In the past, human brain structure and operation have been replicated in fine detail to accomplish tasks like handwriting recognition, adding via counting, and even interpreting patterns associated with higher-level intelligence.

Now, by adding interfaces that sense touch, pressure and heat, we’re beginning to see more human-like computers that can park cars and perform biometric security functions. IBM is already at work on cognitive computing applications for retailers.

In some respects, cognitive computing could be the near-term forerunner of the cyborg singularity we speculated about in connection with the Google hire of Ray Kurzweil.

So, how do you see, hear, smell, touch, taste cognitive computing in the future of your business?

Putting Google’s Hire of Kurzweil in Perspective


Colleen Taylor at TechCrunch reports that legendary futurist and inventor Ray Kurzweil has joined Google as a full-time Director of Engineering. Kurzweil will focus on machine learning and language processing.

Futurist Ray Kurzweil joins Google as Director of Engineering

In 1999, Kurzweil predicted self-driving cars and intelligent, voice-driven apps like Siri: two areas in which Google is very active now. At the time, many called his predictions “unrealistic.”

That Google co-founders Sergey Brin and Larry Page are looking at technological singularity as a basis for future innovation makes this appear all the more like an inevitable event whose time has come.

The idea of “technological singularity” envisions a form of intelligence beyond that of humans where humans and technology extend one another’s thinking and processing capabilities.

The literature on singularity speaks of frequent quantum leaps in the pace of change, with unpredictable consequences and human affairs forever altered as a result.

Possible accelerators of singularity include artificial intelligence, human biological enhancement and brain-computer interfaces.

In the foreword to the John von Neumann book, “The Computer and the Brain,” Kurzweil cites von Neumann’s use of the term “singularity.”

Announcing his decision on his website, Kurzweil writes:

“I’m thrilled to be teaming up with Google to work on some of the hardest problems in computer science so we can turn the next decade’s ‘unrealistic’ visions into reality.”

So stay tuned and share your thoughts below…

Why Humans Won’t Be Replaced by Machine Learning

Courtesy of VentureBeat

Laura Teller, guest-posting at VentureBeat, makes the case that human domain expertise will never be replaced by smart machines.

For starters, smart machines need human experts to correct their mistakes and provide the closed loop feedback that makes machine learning possible.

But the bigger reasons why Teller believes humans will always remain preeminent have to do with the three levels of cognition developed by Prof. Terrence Deacon, Ph.D., Chair of the Department of Anthropology at the University of California, Berkeley.

The three levels are iconic, indexic and symbolic.

Iconic cognition happens when a computer identifies something. For example, facial recognition software can distinguish human faces from everything else.

Indexic cognition takes place when we make associations. Pointing your thumb at your chest means you’re talking about yourself.

Iconic and indexic cognition rely on patterns and rules, and both scale readily.

Symbolic thought focuses on abstractions and the human tendency to “complete the picture.”

We think symbolically when we brainstorm, daydream, listen to our instincts, and question the status quo. In so doing, we rely on emotion, experience, visions, and logic… and create the conditions for innovation.

Fans of Star Trek will be familiar with this dichotomy of the iconic and indexic versus the symbolic in the characters of Captain Kirk and Mr. Spock.

(In fact, this analogy is a form of symbolic cognition :))

Mr. Spock thinks logically with the ability to mentally process vast storehouses of data quickly. Captain Kirk, while in need of Mr. Spock’s prodigious powers, always saves the day when forced to “go with his gut.”

Realizing this, imagine you had more of the iconic and indexic tasks of your business covered by machine learning. What would this free you up to do? Share your thoughts below…

Machine Learning Makes Finding Web Images Easier

Writing for the New York Times Science Desk, John Markoff reports on how computer vision and machine learning will create the next generation Internet where search engines find images and videos with the same degree of relevance as they do now with text.

And the need is crushing… in the next 60 seconds, 72 hours of video will be uploaded to YouTube.

Today, unless images and videos are labeled, search engines have no way to match them against your query.  Even then, labels can be unreliable (e.g., “junk” versus the objects that comprise it).

To give search engines something akin to human sight, Stanford’s Dr. Fei-Fei Li has teamed up with fellow computer scientists at Princeton to develop ImageNet, the world’s biggest image database.

ImageNet logo (Courtesy of ImageNet.org)

Given the enormity of the task and limited budget, Dr. Li connected with Mechanical Turk, the Amazon.com crowdsourcing system where, for a small payment per task, humans label photos. The database now has over 14 million images in over 21,000 categories thanks to the efforts of nearly 30,000 participants a year.

As the database of labeled images grows, machine learning algorithms enable software to recognize similar, unlabeled images. Over time, accuracy rates improve dramatically.
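
Here is a minimal, generic sketch of that idea using scikit-learn’s small built-in digits dataset rather than ImageNet: labeled images train a model that then recognizes images it was never given labels for.

    # Illustrative only: labeled images let an algorithm classify new, unlabeled ones.
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier

    digits = load_digits()                       # 8x8 grayscale images + labels
    X_train, X_test, y_train, y_test = train_test_split(
        digits.data, digits.target, test_size=0.25, random_state=0)

    clf = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
    print(f"accuracy on held-out images: {clf.score(X_test, y_test):.2f}")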

Surprisingly, when tested on a large collection of labeled images by Google computer scientists Andrew Ng and Jeff Dean, the system nearly doubled the accuracy of previous neural network algorithms designed to model human thought processes.

To further improve speed and accuracy, images are classified against WordNet, a hierarchical database of English words. With skillful programming to make educated choices about how to search the hierarchy, the database continues to rise to this growing challenge.
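
For a feel of what searching that hierarchy means, here is a small sketch using NLTK’s WordNet interface (our assumption; the article doesn’t show ImageNet’s own tooling) to climb from a specific concept to ever more general ones.

    # Walking the WordNet hierarchy that ImageNet categories hang off of.
    # Requires the WordNet corpus: run nltk.download('wordnet') once.
    from nltk.corpus import wordnet as wn

    synset = wn.synsets("husky", pos=wn.NOUN)[0]
    # Climb from the specific concept toward more general ones
    while synset.hypernyms():
        print(synset.name(), "->", synset.hypernyms()[0].name())
        synset = synset.hypernyms()[0]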

See the full article here.

Machine Learning Creates Win-Win for Lenders and Consumers

TechCrunch’s Leena Rao reports on ZestFinance, a startup poised to transform the payday loan industry with a new underwriting model that yields more accurate assessments of a borrower’s creditworthiness.

The result? Greater access to credit for more people at more affordable rates and fewer defaults.

Founded by former Google CIO, Douglas Merrill, ZestFinance weaves together traditional credit scoring with big data analysis, machine learning and human expertise.

From the lender’s point of view, the value proposition is the ability to better:

  • Quantify a borrower’s likelihood to repay
  • Manage the risks of their loan portfolios

So far, the model outperforms current industry best practice, with a 54% lower default rate alongside twice the approval rate for loans.

The human element comes in the form of ZestFinance’s team of predictive modelers who are experts in mathematics, computer science and physics.

As machine learning algorithms uncover thousands of variables, the team looks at them in the context of patterns and trends they are seeing before releasing them into multiple big data models that run in parallel.
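
As a hypothetical sketch of what “multiple models running in parallel” can look like, the snippet below trains several different classifiers on invented applicant features and blends their risk scores. It is illustrative only, not ZestFinance’s underwriting model.

    # Several models score the same applicant; their outputs are blended.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
    from sklearn.linear_model import LogisticRegression

    # Made-up features: credit score, debt ratio, number of prior loans
    X = np.array([[620, 0.40, 2], [710, 0.15, 0], [580, 0.55, 3], [690, 0.20, 1]])
    y = np.array([1, 0, 1, 0])   # 1 = defaulted, 0 = repaid (invented outcomes)

    models = [
        LogisticRegression().fit(X, y),
        RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y),
        GradientBoostingClassifier(random_state=0).fit(X, y),
    ]

    applicant = np.array([[640, 0.35, 1]])
    scores = [m.predict_proba(applicant)[0][1] for m in models]
    print("blended default risk:", sum(scores) / len(scores))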

In the process, loan decisions are returned in minutes and ZestFinance continually improves its algorithms.

For more, see the article here.

 

Unsupervised Machine Learning Mines Useful Information from a Multilingual Sea of Text

The challenge of extracting meaningful information from an ever-growing Internet awash in languages, dialects, and knowledge domains is clearly too much for our brains to handle.

Multiple languages (Courtesy of Phys.org)

And traditional approaches are simply not up to the task.

However, a combination of statistical methods, data mining and machine learning could help change all that.

Mari-Sanna Paukkeri, a doctoral candidate at the Aalto University Department of Information and Computer Science in Finland, has developed computational methods of text processing that are independent of any language or knowledge domain.

Languages share certain building blocks: symbols form words and words aggregate into sentences. Algorithms developed by Paukkeri analyze massive bodies of text and discover patterns in the occurrence of words and the structure of sentences. As a result, the meaning of specific words and sentences can be inferred.

To date, computational approaches to natural language processing have typically relied on rules defined in advance. Instead, Paukkeri’s algorithms use unsupervised machine learning to uncover meaning from statistical dependencies and structures that exist in the dataset with no help from data pre-processing or human intervention of any kind.

A familiar use of unsupervised machine learning and natural language processing is the ability of Google News to bundle related news stories on any topic the user requests.
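
A generic sketch of that kind of unsupervised bundling, with no language-specific preprocessing at all, might vectorize raw text and cluster it. This is illustrative only, not Paukkeri’s methods or Google News’s pipeline.

    # Unsupervised story bundling: vectorize raw text, then cluster.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans

    docs = [
        "Central bank raises interest rates to curb inflation",
        "Inflation fears push the central bank to hike rates",
        "Local team wins championship after dramatic final",
        "Fans celebrate as the team clinches the title",
    ]

    X = TfidfVectorizer().fit_transform(docs)  # no stemming, no stopword lists
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    print(labels)  # the finance stories and the sports stories group separately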

As such, Paukkeri’s methods have the potential to serve global corporations particularly well because they can glean meaningful insights from vast storehouses of data across multiple languages and knowledge domains.

Paukkeri has even studied how a search engine could ascertain if the user is an expert or a layperson and return suitable results by automatically assessing the difficulty of comprehension in the text it finds.

For more, see the original article here.