Artificial intelligence and machine learning can give your business decision-making processes a massive upgrade. Here’s what you really need to know.
AI is here, it’s already proving itself in the market, and it’s increasingly being built with business in mind. Those billions of dollars pouring into AI startups are already having an impact, turning out powerful tools built on cutting-edge cognitive technologies that can give your business decision-making a massive upgrade. The tech is easier to use than ever, too. You won’t need an army of Stanford Ph.D.s to manage many of today’s AI tools, and AI won’t kick you out of the corner office, either.
The myths around artificial intelligence can get pretty dense, so we’ve taken some of the biggest and dissected them to help you understand the truth about today’s AI landscape. We’ll address some major misconceptions to set your business on the right path toward success in the world of AI.
Myth #1: AI and ML Are the Same Thing
At its simplest level, AI can be split into two categories: strong AI and weak AI. The names have evolved in recent years, but the terms can generally be thought of in the following ways:
- Weak AI, also sometimes called “narrow AI,” is a collection of technologies that rely on algorithms and programmatic responses to simulate intelligence, generally with a focus on a specific task. When you use a voice recognition system like Alexa to turn on the lights, that’s weak AI in action.
- Strong AI, also called “true AI,” in contrast, is intended to think on its own. These are systems built with the human brain as their archetype. Strong AI is designed to be cognitive, to be aware of context and nuance, and to make decisions that are not programmatic in nature but rather the result of reasoned analysis. Strong AI, in general, is designed to learn and adapt, to make a decision tomorrow that is better than the one it made today.
With this distinction in mind, what then is machine learning (ML)? Machine learning is a specific type of AI put into practice, with the goal of giving a computing device access to a store of data and allowing it to learn from that data. Not all forms of AI are defined as machine learning. When Alexa turns on the lights, it doesn’t learn anything; it just waits to be told to turn the lights off. In contrast, an ML system may be given a data feed — say, temperature and tolerance readings from sensors on a piece of manufacturing equipment — and be asked to draw conclusions from it. This may involve searching that data for trends, patterns, and anomalies that might not be obvious to a human observer. Ultimately, the ML system may conclude that a machine needs to be repaired because it is about to fail, or that it needs to be run at a lower speed. As the algorithm continues to learn from the data, it becomes progressively easier for it to generate additional insights, and those insights become more accurate.
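To make the manufacturing example concrete, here’s a minimal sketch of the idea in Python using scikit-learn. Nothing here comes from a real factory; the model choice (an isolation forest) and every sensor value are assumptions made purely for illustration.

```python
# Minimal sketch: learn what "normal" sensor readings look like, then flag
# readings that don't fit. All values are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Simulated history: [temperature_C, vibration_mm_s] from a healthy machine.
normal_readings = rng.normal(loc=[70.0, 2.0], scale=[2.0, 0.3], size=(500, 2))

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_readings)

# New readings arrive; the last one runs hot and vibrates hard.
new_readings = np.array([[70.5, 2.1], [69.8, 1.9], [88.0, 6.5]])
for reading, flag in zip(new_readings, model.predict(new_readings)):
    print(reading, "anomaly - inspect machine" if flag == -1 else "ok")
```

The more representative history a model like this sees, the better its sense of “normal” becomes, which is the incremental improvement the paragraph above describes.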
ML is just one example of AI put into use. A variety of other terms also tend to get conflated with general AI concepts. Deep learning, for example, is a subset of machine learning that uses layered neural networks modeled loosely on the brain.
The bottom line: AI is hard. AI is complicated. And people throw around AI terms in ways that muddy their meaning. It’s important to understand the distinctions so you know what you’re getting.
Myth #2: AI Is a Magic Wand
As exciting as an AI-enabled hair dryer and AI-powered yoga pants sound (yes, those are real things), there is a time and a place for AI, at least as it stands today.
At its most fundamental level, the key to a successful AI deployment, regardless of the industry it serves or its level of complexity, is training. A spam filter must be trained to distinguish a good email message from a bad one. A voice recognition AI must listen to countless hours of spoken dialogue before it can parse what is being said with any degree of accuracy. AI-enabled factory floor initiatives typically must gather several million gigabytes of data each week in order to have enough analytical data to make reasoned decisions about what might happen in the future.
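To see what “training” means in the spam case, here’s a toy sketch using scikit-learn, with a naive Bayes text classifier standing in for whatever a real mail provider actually uses. The messages and labels are made up.

```python
# Toy spam filter: it "learns" only from the labeled examples it is shown.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "Meeting moved to 3pm, see agenda attached",
    "Quarterly numbers look good, nice work",
    "You have WON a FREE prize, click now",
    "Claim your inheritance, urgent wire transfer",
]
labels = ["ham", "ham", "spam", "spam"]  # the training signal

spam_filter = make_pipeline(CountVectorizer(), MultinomialNB())
spam_filter.fit(messages, labels)

print(spam_filter.predict(["FREE prize inside, click to claim"]))  # ['spam']
```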
The spam filter, the voice assistant, and the factory floor are all examples of training, and it’s not just a game of volume but one of quality, too. Successful AI algorithms must be trained on the right data sources, or they simply won’t be able to make good decisions. If you were to open your email inbox, tag all the messages from your spouse as spam, and tag all the emails from Nigerian princes as good, you’d promptly see for yourself how quickly AI goes off the rails when it’s trained the wrong way. The same is true in a more advanced industrial setting. If a sensor is miscalibrated and feeds inaccurate information to an algorithm tasked with monitoring equipment, all those gigabytes of data will end up being useless or worse, as the AI draws inaccurate conclusions from bad data.
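Flip the labels in that same toy pipeline and the thought experiment’s failure mode falls right out. Again, everything here is invented for illustration:

```python
# The same toy filter, trained on deliberately swapped labels: real mail
# tagged as spam, scams tagged as good.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "Meeting moved to 3pm, see agenda attached",
    "Quarterly numbers look good, nice work",
    "You have WON a FREE prize, click now",
    "Claim your inheritance, urgent wire transfer",
]
flipped_labels = ["spam", "spam", "ham", "ham"]  # ground truth inverted

poisoned_filter = make_pipeline(CountVectorizer(), MultinomialNB())
poisoned_filter.fit(messages, flipped_labels)

# A legitimate work email is now flagged, and a scam sails through.
print(poisoned_filter.predict(["Agenda for the quarterly meeting",
                               "WON a FREE prize, wire transfer now"]))
# -> ['spam' 'ham']
```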
The point of this is that AI is not necessarily a cure-all. There is no “AI switch” or “AI plugin” that can take any old technology and somehow give it cognitive ability. Humans have to define the problem, identify an appropriate AI technology to solve it, train the tool with the correct data, and then verify that the results are valid. Even the most powerful AI tools developed to date have to be carefully managed over time so they don’t run off the rails.
Once an AI tool has generated results, the work isn’t over. Many AI professionals are finding that they learn more when an AI algorithm returns the wrong answer than the right one. This effect is visible at both the consumer and the industrial level. When an AI-based spam filter miscategorizes an incoming message, the user has the chance to retrain the tool by categorizing it properly. This gives the algorithm new insight into what it might have missed the first time around; learning from the error makes the tool incrementally more powerful. If the spam filter had not been retrained, it would be no more accurate the next time around and would likely make the same mistake again.
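Here’s one way that retraining loop might look in code. This is an assumed setup built on scikit-learn’s online-learning API (partial_fit), not a description of how any real spam filter works:

```python
# Toy retraining loop: the filter misfires, the user corrects it, and a
# single corrected example nudges the model. All text is invented.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.naive_bayes import MultinomialNB

# alternate_sign=False and norm=None keep the hashed features as plain,
# non-negative token counts, which MultinomialNB expects.
vec = HashingVectorizer(n_features=2**16, alternate_sign=False, norm=None)
clf = MultinomialNB()

train_msgs = ["free prize click now", "wire transfer urgent",
              "meeting agenda attached", "see you at lunch"]
clf.partial_fit(vec.transform(train_msgs),
                ["spam", "spam", "ham", "ham"], classes=["ham", "spam"])

msg = "free prize wire meeting"
print(clf.predict(vec.transform([msg])))  # -> ['spam']: a false positive

# The user re-labels the message; one call folds the correction back in.
clf.partial_fit(vec.transform([msg]), ["ham"])
print(clf.predict(vec.transform([msg])))  # -> ['ham']: the filter adapts
```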
Similarly, in a manufacturing setting, imagine that an AI directs that a machine be taken offline because a failure in a key part is imminent. If the part does not fail, then what? What happens if a security-focused AI blocks your traveling sales force from accessing the network because it wrongly assumed they were hackers? Because of the logical nature of AI, a developer can determine why the AI made these specific decisions and can work backwards to determine what data it relied on in the process. This may reveal flaws in the data, an error of logic processing, or some other bug that would otherwise go unnoticed.
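One way to “work backwards” is to use a model whose decisions can be read directly. The sketch below trains a small decision tree on made-up sensor data and prints the rules it learned; if the tree leans heavily on a sensor you know is miscalibrated, you’ve found the flaw. Every feature name and number here is hypothetical.

```python
# Interpretable-model sketch: print the rules behind a "take it offline" call.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
feature_names = ["temperature", "vibration", "hours_since_service"]

# Synthetic sensor snapshots; label 1 means "machine was taken offline."
X = rng.normal(size=(200, 3))
y = (X[:, 0] > 0.5).astype(int)  # in this fake data, only temperature matters

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=feature_names))
```

The printed rules will show splits on temperature alone, which is exactly the kind of evidence trail a developer can check against the physical sensors.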
Myth #3: Most Companies Don’t Have the Resources or the Need for AI/ML
Artificial intelligence and machine learning, by their very names, do not exactly convey simplicity. And in truth, these are dazzlingly complex technologies that, under the hood, are far from accessible to the layperson.
So that means you need an army of Ph.D.-carrying data scientists and troves of cash to implement AI, right? No. It’s important to understand the difference between building an artificial intelligence solution from the ground up and implementing existing AI tools within your organization. The first of these is extremely difficult. The second is getting easier every day.
Consider all of the tools you use in the course of a day: an email client, a digital assistant, a spreadsheet. None of them is a simple technology, yet you’ve mastered them without knowing what’s happening behind the curtain. The same thing is happening with AI, as the tools become increasingly accessible.
You may also be under the impression that your business or specific use case is too small or insignificant to merit an investment in AI, that your environment is just not complex enough to benefit from the technology. In reality, there’s no need that’s “too small.” Remember that a tiny improvement in a key business metric can have a huge impact on the bottom line. A system that reduces production mistakes by just 1 percent, or that correctly recommends a price increase of just a few pennies, could equate to millions of dollars in avoided costs or additional profits. The challenge is largely in identifying where those opportunities lie.
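The arithmetic behind that claim is easy to sanity-check. Every figure below is hypothetical:

```python
# Back-of-the-envelope math: small improvements, big dollars.
units_per_year = 50_000_000   # units produced annually (hypothetical)
cost_per_defect = 4.00        # dollars lost per defective unit (hypothetical)

# Defect rate falls by one percentage point (e.g., 2% -> 1%).
savings = units_per_year * 0.01 * cost_per_defect
print(f"Savings from the defect reduction: ${savings:,.0f}")  # $2,000,000

# A three-cent price increase on every unit sold.
extra_profit = units_per_year * 0.03
print(f"Extra profit from +$0.03 per unit: ${extra_profit:,.0f}")  # $1,500,000
```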
Want more? Check out the complete guide: 5 Big Myths of AI and Machine Learning Debunked.