Too much AI today is a novelty without a clear plan to make money

2018's AI landscape looks a lot like a Sharper Image catalog: products are everywhere simply because we can build them and make them sound appealing.

Do you really need this bacon toaster? Do you really think this 3D drawing pen will bring out your inner artist?

Like these products, too much of the AI on the market today is a one-off novelty. No one conducted market research to size the total addressable market for bacon toasters. No one ran focus groups with potential customers. Someone simply built a novelty fun and practical enough to convince a few people to part with a little cash. If that doesn't sound like a lot of the AI being sold these days, I don't know what does.

In half-heartedly defending the industry, AI experts like to remind us that it's "too early." Others explain that the first wave of enterprise AI was "doomed to fail," which was somehow both predetermined and acceptable. Shouldn't AI be held to a higher standard, given its promise and its potential impact on society?

So smart clients are asking: Why is there so much hedging and so little accountability in AI?

Researchers run amok

I love visiting research labs as much as the next nerd, but we need to be careful about AI deployments in business settings dominated by researchers. With a severe shortage of AI talent, many companies are poaching PhDs from universities around the world. Facebook built an AI research team of more than 100 researchers, a luxury few other tech companies can claim, yet its Facebook Messenger AI assistant was shut down shortly after reports that it failed to handle 70% of user requests.

One might say the platform failed despite the massive investment of capital and academic talent, but we need to be honest with ourselves: it failed, at least in part, because of that approach.

Money and talent matter. But the failure rates we see in this industry look more like scientific research than IT implementation. Nature recently reported that "more than 70% of researchers have tried and failed to reproduce another scientist's experiments, and more than half have failed to reproduce their own experiments."

The AI industry has brought in a lot of academic research scientists, and the result is a lot of experimentation on clients' businesses. Don't get me wrong: as a tech entrepreneur, I value research, experimentation, and even failure. But any entrepreneur will agree that it is unacceptable to ask customers to shoulder all the risk.

At the same time, researchers are bound to focus on the technology and its inner workings. They are not trained for, and are generally not very good at, delivering business outcomes. Think about it: AI failures are not the result of a shortage of AI PhDs; they are the result of a shortage of business analysts and customer success specialists on those teams.

When business acumen is that far removed from the work, you hedge.

The collateral damage of navel-gazing

Researchers are an important part of the AI ecosystem. Over the past decade, however, thousands of developers and technologists have also poured into the field. If you've ever spent time on sites like Stack Exchange or Hacker News, you'll find dedicated communities of talented technologists debating the merits of new technologies and arguing the finer points of programming languages, tools, platforms, and standards. This is how the technology industry keeps advancing.

With AI still in a relatively nascent stage, discussions and debates around all of these topics are heating up. As an industry, we are still working to establish best practices and standards, a process that requires our technology leaders to look inward at the technology itself.

The good news is that we've been doing this for decades -- it's how we navigated digital transformation and the transitions to cloud and mobile, and now we're doing it for artificial intelligence.

The bad news is that most people in the industry spend little time or effort understanding their customers and those customers' business needs. Silicon Valley has a long history of building fascinating new technologies that fail because they lack product/market fit. Building the best technology is not the same thing as building the best technology for a given business.

This is exactly the phenomenon we're seeing in AI now, at least among developers who aren't obsessed with their customers.

Customers, customers, customers

The next big breakthrough in artificial intelligence won't come from a Stanford lab. It won't happen in a client's codebase, either. It will happen in HR, where recruiting teams develop strategies to hire people who can bridge the gap between the technology and business outcomes.

We need to be obsessed with the businesses of AI buyers, and we need to be obsessed with their customers. AI is not a one-off technology -- it impacts the entire value chain from start to finish. The technology needs to fit the business, not the other way around.

Since the days of "digital transformation," tech companies have been telling businesses how their IT should work. That will no longer fly. AI reaches too far into the business, and touches too many processes, for any sane executive to let a tech company tell them how to run their company.

We need businesspeople who are good at listening to clients and to those clients' customers -- because that's where AI's real impact lies.

Thanks to AI's ability to transform business, the customer is, once again, always right.