“We’re kind of in an AI Spring,” says John Giannandrea, Google’s head of machine learning.

Giannandrea’s statement — a reference to the term AI winter — sums up how interest, research and real-world uses of artificial intelligence are taking off.

During a panel on Friday at Google I/O, the company’s developer conference, Giannandrea was joined by Google SVP of Product Aparna Chennapragada and Google Brain lead Jeffrey Dean to talk about how AI and machine learning are changing products.

Giannandrea cited the recent success in the areas of speech recognition and image understanding as two reasons AI and machine learning are suddenly so hot.

Even though Google has invested in these areas for more than a decade, Giannandrea said the company only got really serious about them four years ago.

Chennapragada, who led the Google Now team, says she believes that machine learning changes the game when it comes to building new products.

For example, take something like a voice-enabled assistant. Chennapragada says that as error rates decrease, usage of those types of products increases. “The product gets more usable as machine learning improves the underlying engine.”

She added that machine learning can also unlock new use cases. “Thanks to mobile, a lot of the real world problems – transportation and health – can become AI problems.”

To help developers get a handle on machine learning, Google has open-sourced some of its platforms, including TensorFlow, an open source library for machine intelligence.

Dean, who was heavily involved with the development of TensorFlow, said Google decided to open-source the library because it wanted to be able to help accelerate the free exchange of ideas.

Last week, Google Research also open-sourced a neural network framework for TensorFlow, known as SyntaxNet. Part of that release was an English parser, adorably named Parsey McParseface, that has been trained to analyze English text.

Although many people using TensorFlow are already familiar with machine learning, the goal is to get even non-machine learning experts using the libraries and models in their projects.

Paying attention to the wow to WTH ratio

Of course, as these platforms and technologies evolve, the products that come out of them don’t always work the way you would expect.

Chennapragada said that while working on Google Now, the team paid close attention to what is internally dubbed the “wow to WTH ratio.” In other words, there are cases where getting an assumption right can be delightful and magical, but getting something wrong can lead to “a high cost to the user.”

And this is especially true when it comes to the domain of the product. As an example, Chennapragada said that if you were to do a search for Justin Timberlake and got back a result that wasn’t quite as relevant as it needed to be, that wouldn’t be a big deal. But if an AI assistant tells you to drive to the airport too late and you end up missing your flight, it would be a total ‘what the hell’ moment.

Ensuring that ratio is right is really important, especially in the early stages of a new platform or product, she said.

And this is true: five years later, I know people who still don’t use Siri because of its early stumbles.

Chennapragada added that it’s also important to build trust with the user. “You don’t want to be unpredictable and inscrutable,” she said, noting that this is why it’s best to apply machine learning to the problems it’s most adept at solving – things that are easy for machines but hard for humans, like repetitive tasks.

Should we fear the Borg?

In a question and answer session, Giannandrea was asked about Elon Musk’s fear of Larry Page’s mythical robot army.

As a refresher: Musk has frequently expressed his fears about AI, stating that “something seriously dangerous” may come from artificial intelligence in the next five to 10 years.

But Giannandrea says he thinks the kind of “superintelligence” Musk fears is “decades away.”

That said, he did concede that some of what AI and machine learning do can be creepy. He believes it’s the job of the people behind the technology (which includes himself and his colleagues) to prove that the tools are genuinely useful.

Earlier, Giannandrea said that language and dialogue are the big unsolved problems of computer science. He doesn’t see an AI summer coming until we can teach a computer to read.