BYU Engineer Creates Unsupervised Machine Learning Algorithm

Machine learning is the newest thing at BYU, thanks to the work of engineer Dah-Jye Lee, who has created an algorithm that allows computers to learn without human help. According to Lee, his algorithm differs from others in that it doesn’t specify for the computer what it should or shouldn’t look for. Instead, his program simply feeds images to the computer, letting it decide on its own what is what.

Photo courtesy of BYU Photo.

Much as children intuitively learn the differences between objects in the world around them, Lee's approach shows the computer a variety of images without differentiating between them. Instead, the computer is tasked with sorting them out on its own. According to Lee:

“It’s very comparable to other object recognition algorithms for accuracy, but, we don’t need humans to be involved. You don’t have to reinvent the wheel each time. You just run it.”
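Lee's algorithm itself isn't spelled out in the article, but the core idea it describes, grouping unlabeled images without being told what to look for, can be sketched in a few lines. The color-histogram features and the k-means clustering step below are illustrative assumptions, not Lee's method.

```python
# Illustrative only: cluster unlabeled "images" by simple color-histogram
# features using k-means. This is NOT Lee's algorithm, just a minimal
# example of letting a program group pictures without human labels.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Stand-in data: 60 tiny random RGB images drawn from two color palettes,
# so an unsupervised method has something to discover.
dark = rng.integers(0, 100, size=(30, 16, 16, 3))
bright = rng.integers(150, 256, size=(30, 16, 16, 3))
images = np.concatenate([dark, bright])

def color_histogram(img, bins=8):
    """A crude unsupervised-friendly feature: per-channel color histogram."""
    hist = [np.histogram(img[..., c], bins=bins, range=(0, 256))[0] for c in range(3)]
    return np.concatenate(hist).astype(float)

features = np.array([color_histogram(img) for img in images])

# No labels are ever provided; k-means decides the groupings on its own.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(clusters)
```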

Two Brains Act As One Super Pilot with Machine Learning

Two brains are better than one! At least they are when using a simulator to navigate a virtual spacecraft.

More specifically, two people's combined thoughts were far more accurate than either person's thoughts alone.

Paul Marks at New Scientist reports on how a team researching brain-computer interfaces (BCI) at the University of Essex in the UK arrived at this conclusion. The team will present its findings in March at the Intelligent User Interfaces conference.

Their experiment suggests “collaborative BCI” could flourish in the future in fields like robotics and telepresence.

EEG electrodes were hooked up to the scalps of both navigators. As each person thought about a predefined concept like “left” or “right,” machine learning software identified patterns in their thinking and applied them in real time.

For the participants, the challenge was to steer the craft to the absolute center of a planet using a repertoire of 8 directional thoughts. Machine learning software merged their separate thoughts into continuous actions that directed the craft.
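The team's decoding pipeline isn't detailed here, but the fusion step it describes, turning two pilots' classified directional thoughts into a single continuous command, might look roughly like the sketch below. The 8-direction encoding and the simple averaging rule are assumptions for illustration, not the Essex team's actual code.

```python
# Illustrative sketch of "collaborative BCI" fusion, not the Essex team's code:
# each pilot's decoder outputs a confidence score over 8 directional thoughts,
# and the two outputs are merged (here, simply averaged) into one command.
import numpy as np

DIRECTIONS = np.array([
    (np.cos(a), np.sin(a)) for a in np.linspace(0, 2 * np.pi, 8, endpoint=False)
])

def merge_commands(pilot_a_scores, pilot_b_scores):
    """Average two 8-way confidence vectors and return a steering vector."""
    combined = (np.asarray(pilot_a_scores) + np.asarray(pilot_b_scores)) / 2.0
    weights = combined / combined.sum()          # normalize to a distribution
    return weights @ DIRECTIONS                  # weighted sum of unit vectors

# Hypothetical decoder outputs: both pilots lean "up-right", with some noise.
a = [0.05, 0.40, 0.30, 0.05, 0.05, 0.05, 0.05, 0.05]
b = [0.10, 0.30, 0.35, 0.10, 0.05, 0.04, 0.03, 0.03]
print(merge_commands(a, b))   # continuous (x, y) steering command
```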

The researchers found simulation flights using collaborative BCI were 90% on target versus 67% for solo pilots.

Even when sudden changes in the planet’s position were introduced, having additional brainpower cut human reaction time in half.

Also, since EEG signals are often fraught with noise, having signals from two brains helped maintain a more usable signal-to-noise ratio. It also compensated for lapses of attention when one person's mind momentarily strayed.
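A toy calculation (not the researchers' analysis) shows why two noisy brains beat one: averaging two independent noisy recordings of the same underlying signal cuts the noise power roughly in half, a gain of about 3 dB in signal-to-noise ratio.

```python
# Toy illustration (not the study's analysis): averaging two independent
# noisy copies of the same underlying signal raises the signal-to-noise ratio.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 1000)
signal = np.sin(2 * np.pi * 10 * t)              # the "true" brain signal

def snr(measured):
    noise = measured - signal
    return 10 * np.log10(np.mean(signal**2) / np.mean(noise**2))

brain_a = signal + rng.normal(0, 1.0, t.size)    # pilot A's noisy EEG
brain_b = signal + rng.normal(0, 1.0, t.size)    # pilot B's noisy EEG
combined = (brain_a + brain_b) / 2.0             # two heads, one signal

print(f"single brain SNR: {snr(brain_a):.1f} dB")
print(f"combined SNR:     {snr(combined):.1f} dB")  # ~3 dB better on average
```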

While remotely controlling actual spacecraft using collaborative BCI is still a long way off, more modest uses are altogether feasible today.

For example, it could enable people with disabilities to steer a wheelchair with their thoughts.

So, how do you see collaborative BCI being used in your world?

Machine Learning Helps Ford Hybrids Cut Gas Use

Imagine a plug-in hybrid car that could learn your travel patterns. Now imagine it uses that info to keep you driving in electric-only mode more often.

Well, Ford is on it with its C-MAX Energi plug-in hybrid and Fusion Hybrid models.

The company calls its machine learning technology EV+, a patent-pending aspect of SmartGauge®, which comes standard with these vehicles.

Ford research shows that hybrid owners prefer to drive in electric-only mode near their homes and frequent destinations.

To learn your often-visited locations, EV+ combines the onboard GPS of Ford SYNC® with proprietary software algorithms the company has developed in-house.

As EV+ gets familiar with your home parking spot and frequent destinations, it draws on the electric power stored in the car’s high-voltage battery and stays in electric-only mode, because you’re likely to charge it up again soon.

When EV+ identifies that your current location is within about 1/8 mile (roughly 200 meters) of a frequent stop, the vehicle favors electric-only mode and the dashboard displays an “EV+” light to let you know.
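Ford hasn't published how EV+ makes this call, but the behavior described above amounts to a geofence check against learned frequent stops. The sketch below assumes GPS coordinates and a haversine distance test; the function names and data layout are hypothetical, with only the roughly 200-meter radius taken from Ford's description.

```python
# Hypothetical sketch of the geofence logic EV+ is described as using:
# if the car is within ~200 m (about 1/8 mile) of a learned frequent stop,
# prefer electric-only mode. Not Ford's actual implementation.
from math import radians, sin, cos, asin, sqrt

EV_PLUS_RADIUS_M = 200  # roughly 1/8 mile, per Ford's description

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS points, in meters."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

def prefer_ev_mode(current, frequent_stops):
    """Return True if the current GPS fix is near any learned frequent stop."""
    return any(
        haversine_m(current[0], current[1], stop[0], stop[1]) <= EV_PLUS_RADIUS_M
        for stop in frequent_stops
    )

# Example: home and workplace learned from past trips (made-up coordinates).
stops = [(40.3334, -111.7334), (40.2969, -111.6946)]
print(prefer_ev_mode((40.3340, -111.7329), stops))  # True: close to "home"
```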

Green Car Congress reports that Ford has big plans for big data and machine learning to transform the driver assistance capabilities of its vehicles. Among them are:

  • “Big data” analysis for smarter decisions using radar and camera sensors to assess outdoor conditions like traffic congestion, so the vehicle could minimize distractions like an incoming phone call and keep the driver focused on the road.
  • Biometrics for personalized driving help, where biometric sensors in seats could measure the driver’s stress levels and integrated technologies would calibrate what kind of assistance to provide based on the situation and the driver’s skill.

In our last post, we pointed to self-parking cars as an early manifestation of cognitive computing that enables humans to offload their duties to the vehicle entirely.

And in the previous post, we speculated how the Google hire of Ray Kurzweil could signal an acceleration of computers and humans extending one another’s capabilities.

With these trends in machine learning happening in tandem, how do you see driver assistance taking shape in the future?

IBM: Computers that Think in 5 Senses within 5 Years

Nancy Houser, writing for Digital Journal, surveys the landscape of cognitive computing and provides a lucid view of what to expect over the next 5 years.

Smart phones are ramping up their intelligence (Courtesy of Digital Journal, Photo: Espen Irwing Swang)

A key theme is that of computers moving from glorified calculators to thinking machines that augment all 5 of your senses.

For example, imagine computers responding to your inquiry with richer, more valuable outputs like the texture of fabric or the aroma of fresh cut grass.

What’s making it possible to enhance and extend our senses with intelligence is the convergence of computer vision, machine learning, big data, speech recognition, natural language processing, smartphones and biometric sensors.

One hurdle to humanizing computers has been making the leap from performing predefined tasks based on programmed logic to handling new tasks autonomously, without dedicated software programs already in place (e.g., unsupervised machine learning).

Toward bridging that gap, Houser discusses the promising work of Prof. Chris Eliasmith on computational efforts to reverse-engineer the biology of the mammalian brain.

In the past, human brain structure and operation have been replicated in fine detail to accomplish tasks like handwriting recognition, adding via counting, and even interpreting patterns associated with higher-level intelligence.

Now, by adding interfaces that sense touch, pressure and heat, we’re beginning to see more human-like computers that can park cars and perform biometric security functions. IBM is already at work on cognitive computing applications for retailers.

In some respects, cognitive computing could be the near-term forerunner of the cyborg singularity we speculated about in connection with the Google hire of Ray Kurzweil.

So, how do you see, hear, smell, touch, and taste cognitive computing in the future of your business?