An AI Glossary
Posted October 18, 2018 3:03 p.m. EDT
The term “artificial intelligence” may sound new and futuristic, but it was actually coined back in 1956 for a tech conference at Dartmouth College. Since then, the AI field has progressed in fits and starts as new hardware, software and ideas slowly propelled it forward.
The current boom started in 2012, when a team of researchers won an image recognition competition with an artificial neural network, showing what AI could do with faster computer chips and bigger data sets. The last six years have witnessed breakthroughs in everything from self-driving cars to disease-detecting algorithms, while social networks like Twitter now rely on AI to determine what content appears in our feeds.
Like most technologies, the artificial intelligence world is littered with insider jargon. Here is a non-exhaustive glossary.
Artificial neural network (ANN): An algorithm that attempts to mimic the human brain, with layers of connected “neurons” sending information to each other.
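To make the idea concrete, here is a toy sketch in Python of a few connected "neurons" passing information forward. Everything here (the weights, inputs and layer sizes) is invented for illustration, not taken from any real system:

```python
import math

def neuron(inputs, weights, bias):
    """One artificial 'neuron': a weighted sum of its inputs, squashed
    into the range (0, 1) by a sigmoid activation function."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

def layer(inputs, weight_rows, biases):
    """A layer of neurons: every neuron sees every input."""
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

# Two layers chained together: the first layer's outputs become the
# second layer's inputs. (All numbers below are made up.)
hidden = layer([0.5, -1.2], [[0.8, 0.2], [-0.4, 0.9]], [0.0, 0.1])
output = layer(hidden, [[1.0, -1.0]], [0.0])
```

Real networks work the same way, just with many more neurons, and with the weights adjusted automatically during training rather than typed in by hand.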
Black box algorithms: When an algorithm’s decision-making process or output can’t be easily explained by the computer or the researcher behind it.
Computer vision: The field of AI concerned with teaching machines how to interpret the visual world — aka, how to see.
Deep learning: ANNs that have multiple layers of connected neurons. The extra layers are what make the process “deep” compared with earlier, shallower networks.
Embodied AI: A fancy way of saying “robots with AI capabilities.”
Few-shot learning: Most of the time, computer vision systems need to see hundreds or thousands (or even millions) of examples to figure out how to do something. One-shot and few-shot learning try to create a system that can be taught to do something with far less training. It’s similar to how toddlers might learn a new concept or task.
Generative adversarial networks: Also called GANs, these are two neural networks that are trained on the same data set of photos, videos or sounds. Then, one creates similar content while the other tries to determine whether the new example is part of the original data set, forcing the first to improve its efforts. This approach can create realistic media, including artworks.
Machine learning: Systems that learn from data sets to perform and improve upon a specific task. It’s the current area of AI experiencing the biggest research boom.
Natural language processing: The discipline within AI that deals with written and spoken language.
Reinforcement learning: A process where machines learn to do a new task like humans do — through a system of rewards and punishments — starting as a novice and improving with practice and feedback.
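The reward-and-punishment loop can be sketched in a few lines of Python. This is a deliberately tiny, invented example (a four-square world where the agent learns to walk right toward a reward), not any production algorithm, though the update rule is the standard Q-learning one:

```python
import random

random.seed(0)

# Toy world: states 0..3 in a row; reaching state 3 earns a reward of 1.
# The agent keeps a Q-value ("how good is this move from here?") for
# every (state, action) pair and improves it through trial and error.
ACTIONS = [-1, +1]  # step left or step right
q = {(s, a): 0.0 for s in range(4) for a in ACTIONS}

for episode in range(200):
    state = 0
    while state != 3:
        # Mostly pick the best-known move; occasionally explore at random.
        if random.random() < 0.1:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt = min(max(state + action, 0), 3)
        reward = 1.0 if nxt == 3 else 0.0
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        # Nudge the estimate toward (reward + discounted future value).
        q[(state, action)] += 0.5 * (reward + 0.9 * best_next - q[(state, action)])
        state = nxt
```

After enough practice, stepping right from the start looks more valuable to the agent than stepping left, purely because rewards taught it so.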
Supervised learning: A technique that teaches a machine-learning algorithm to solve a specific task using data that has been carefully labeled by a human. Everyday examples include most weather prediction and spam detection.
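A minimal sketch of the idea, using invented data: a handful of messages labeled “spam” or “ham” by a human, and a nearest-neighbour rule that labels a new message by whichever labeled example it most resembles. Real spam filters are far more sophisticated, but the learn-from-labels principle is the same:

```python
# Hand-labeled training data: each message is reduced to two made-up
# features (exclamation marks, ALL-CAPS words) plus a human label.
labeled = [
    ((5, 4), "spam"),
    ((7, 2), "spam"),
    ((0, 0), "ham"),
    ((1, 0), "ham"),
]

def classify(features):
    """Label a new message by its closest labeled example (1-nearest-neighbour)."""
    def distance(example):
        example_features, _label = example
        return sum((a - b) ** 2 for a, b in zip(example_features, features))
    _, label = min(labeled, key=distance)
    return label
```

A shouty message such as `classify((6, 3))` lands near the spam examples, while a quiet one such as `classify((0, 1))` lands near the ham.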
Transfer learning: This method tries to take training done for one task and reuse it for a new set of tasks, without having to retrain the system from scratch.
Unsupervised learning: An approach that gives an AI unlabeled data, which it has to make sense of without any instruction. In essence, it is when machines “teach themselves.”
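One classic example of machines “teaching themselves” is clustering: handed unlabeled points, the k-means algorithm discovers groups on its own. The data and starting centers below are invented for illustration:

```python
# Six unlabeled 2-D points: no human says which group is which.
points = [(1.0, 1.0), (1.5, 2.0), (1.2, 0.8),
          (8.0, 8.0), (8.5, 7.5), (7.8, 8.2)]

def mean(cluster):
    n = len(cluster)
    return (sum(p[0] for p in cluster) / n, sum(p[1] for p in cluster) / n)

def kmeans(points, centers, rounds=10):
    """Alternate two steps: assign each point to its nearest center,
    then move each center to the middle of its assigned points."""
    for _ in range(rounds):
        clusters = [[] for _ in centers]
        for p in points:
            nearest = min(range(len(centers)),
                          key=lambda i: (p[0] - centers[i][0]) ** 2
                                      + (p[1] - centers[i][1]) ** 2)
            clusters[nearest].append(p)
        centers = [mean(c) if c else centers[i] for i, c in enumerate(clusters)]
    return centers, clusters

centers, clusters = kmeans(points, [(0.0, 0.0), (10.0, 10.0)])
```

Without ever being told “these three points belong together,” the algorithm sorts the data into a low cluster and a high cluster by itself.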
Explainable AI (XAI): AI that can tell or show its human operators how it came to its conclusions.
Weak AI: Our current level of AI, which can do just one thing at a time, like play chess or recognize breeds of cats. The opposite would be strong AI, also known as artificial general intelligence (AGI), which would have the capability to do anything that most humans can do.