What is Artificial Intelligence?

Artificial intelligence, also known as AI, refers to algorithms that simulate intelligence and the way humans think by performing tasks related to reasoning, problem-solving, learning, planning, and understanding language.

Whether or not you realize it, chances are you are surrounded by many different programs powered by AI right now. AI drives customer recommendations on Netflix, Pandora, and Amazon; it is the engine behind Google’s search and language translation technology and Gmail’s spam filters; and more recently, AI comprises the impressive technology behind self-driving cars.


John McCarthy, one of the founders of the discipline of artificial intelligence

Software engineers craft this sophisticated programming by writing complex logical and mathematical formulae that give computers instructions on what to do. The term “artificial intelligence” first appeared in 1956, when Dartmouth College math professor John McCarthy proposed a research project suggesting that a computer could be programmed to think like a human.


The excitement about how AI could transform society for the better was one of the major influences for the classic TV cartoon The Jetsons, which is often used as the de facto example when describing inventions that feel futuristic, like robot housemaids and flying cars.

During the 1950s and 1960s, AI research focused on neural networks that worked similarly to brains and nervous systems. In the 1980s, public excitement waned due to unmet promises. As a result, AI research funding began to disappear. For the next couple of decades, discussions around AI faded away from the public’s consciousness and mostly stayed relegated to university computer science laboratories.

Despite waning public interest between the 1980s and 2010s, computer scientists were working on a new form of AI, machine learning, which would eventually bring about incredible technological breakthroughs.

Below, I will provide a brief overview of what machine learning and deep learning are; if you want a more in-depth look at the two, you can check out this great guide.


What is Machine Learning?

Machine learning (ML) is a unique branch of AI because it trains AI to learn and gain insights on its own. In order to do so, machine learning must access vast amounts of data, like thousands of pictures of four-legged household pets, to eventually detect patterns that reveal insights such as which photos depict cats and which depict dogs.

Machine learning briefly popped into public awareness in 1997, when IBM’s chess-playing computer, Deep Blue, outsmarted world champion Garry Kasparov.

While computer programmers found Deep Blue’s victory over Kasparov to be an impressive technological win for computer science, some critics were not very impressed because they believed there was “nothing intelligent or even interesting about the brute force approach” (Fish).

A brute force strategy refers to the systematic technique in which a computer rapidly checks all the possible options until the correct one is found. As a result, AI began to fade away from the public’s consciousness again.
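To make the idea concrete, here is a minimal sketch of brute force search in Python. The scenario (guessing a hypothetical three-digit code) is my own illustration, not how Deep Blue actually worked, but the principle is the same: enumerate every possibility until one matches.

```python
from itertools import product

def brute_force_guess(secret, symbols="0123456789", length=3):
    """Systematically try every possible combination until one matches."""
    for attempt in product(symbols, repeat=length):
        candidate = "".join(attempt)
        if candidate == secret:
            return candidate
    return None  # exhausted every option without a match

# The search is exhaustive rather than clever: it checks "000", "001", ...
print(brute_force_guess("407"))  # prints 407
```

There is no learning or insight here, which is exactly the critics’ point: the approach succeeds through sheer speed of enumeration.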

In 2011, IBM made another public splash by introducing the world to Watson, IBM’s new artificially intelligent computer powered by a more sophisticated version of machine learning. This time, the performance pitted IBM’s Watson against two human Jeopardy world champions, Ken Jennings, who holds the record for the longest winning streak on the television game show, and Brad Rutter.

The Jeopardy challenge played out over the course of two days and aired over three episodes on national television. Watson, to the surprise of many, ended up sweeping the game with $35,734, compared to Jennings’ $4,800 and Rutter’s $10,400. In contrast to Deep Blue’s chess-playing victory, critics and the public alike were awestruck by Watson’s win.

This feat went beyond pre-programmed logical tasks and brute force strategy. Just like the human players, Watson needed to use real-time reasoning to analyze and understand natural human language. It had to decipher wordplay, puns and verbal subtleties in an effort to extract the actual intent of each question.

Around the same time, in the early 2010s, personalized consumer experiences began to go mainstream when machine learning algorithms appeared on several of the Internet’s most popular websites. Some early examples included movie, music and shopping recommendations on Netflix, Pandora, and Amazon.


Facebook also began recommending friends for users, and the website also began auto-tagging friends when someone uploaded a photo, thanks to facial recognition technology powered by machine learning. Machine learning algorithms also helped financial companies predict whether or not a loan might be a bad risk and helped crediting agencies calculate a person’s credit score.

Companies like Target also used machine learning to personalize their coupons and mailers to shoppers, based on items they had previously bought. For example, Target was able to figure out with high certainty when a customer was pregnant, when the due date week might be, and the gender of the baby based on a combination of products being purchased in a certain time frame.



What is deep learning?

A tremendous leap forward in machine learning came by way of a breakthrough development called deep learning (DL), the current state-of-the-art sub-discipline of machine learning (Marr). Compared to traditional machine learning, deep learning requires even less handholding from a software engineer.

This AI system relies on neural networks fed significant amounts of observational data, allowing it to learn on its own and then make decisions with minimal input from programmers. An example of what a deep learning agent is capable of can be seen in its ability to automatically caption images based on the imagery the system is fed.


A deep learning agent can sift through data about the image and, based on its prior knowledge of what things are, it can generate captions describing the image’s content down to a person, an object, a location, or even a mood. Mostly, what the deep learning agent does is perform a classification task on large amounts of text, speech, images, videos, data signals and audio data sets. These datasets can be as vast as Google’s library of images or Twitter’s database of tweets.

Thanks to sensing and processing technology, AI can “sense” almost anything a human can sense. This means a computer can utilize any data point that can be made digital, which includes imagery, audio, temperature, position, vibration and smells. Going a step further, computers can surpass human sensing by processing such things as air quality, ultrasound, radar, LiDAR, sonar, infrared, radio waves, ultraviolet rays, x-rays, and microwaves.

Supervised learning

Deep learning systems learn primarily in three ways: supervised learning, unsupervised learning, and reinforcement learning (Liikkanen). With supervised learning, the AI system learns from data but needs some human handholding to generate an outcome. You can read more about the three learning systems here, but I will include a brief breakdown of the differences below.

For example, when an AI system is fed numerous images of cats and dogs, the computer must be told by a programmer which images contain cats and which contain dogs. After a while, the AI agent begins to learn patterns associated with cats and can then make the classifications on its own.
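A toy version of this idea can be sketched with a nearest-neighbor classifier. The feature values below (body weight and snout length) are made-up stand-ins for whatever a real system would extract from images; the key point is that a human supplies the labels, and the system classifies new examples by their similarity to labeled ones.

```python
# Hypothetical labeled training data: (body weight kg, snout length cm) -> label.
# A human programmer has told the system which examples are cats and which are dogs.
labeled_examples = [
    ((4.0, 3.0), "cat"),
    ((5.0, 3.5), "cat"),
    ((20.0, 10.0), "dog"),
    ((30.0, 12.0), "dog"),
]

def classify(features):
    """Predict the label of the closest labeled example (1-nearest-neighbor)."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    _, label = min(labeled_examples, key=lambda ex: distance(ex[0], features))
    return label

print(classify((4.5, 3.2)))   # near the cat examples, prints cat
print(classify((25.0, 11.0))) # near the dog examples, prints dog
```

Real deep learning systems learn their own features from raw pixels rather than using hand-picked measurements, but the supervised setup is the same: labeled examples in, predictions out.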

Unsupervised learning

Unsupervised learning occurs when an AI system processes numerous images and automatically sorts them into groups based on patterns it detects. For instance, because an AI system can detect common differences without programmer input, it can automatically process a database of images and sort them into animals with four legs and animals with two.
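The same two-legs-versus-four-legs sorting can be sketched with a minimal k-means clustering loop. The single “legs detected” feature per image is a made-up simplification; note that no labels appear anywhere, and the grouping emerges from the data alone.

```python
# Toy "images" reduced to one hypothetical feature: number of legs detected.
# No labels are provided; the system must group similar items on its own.
observations = [4, 4, 4, 2, 2, 4, 2, 2]

def two_means(data, iterations=10):
    """Minimal k-means with k=2: repeatedly assign each point to the nearest
    cluster center, then move each center to the mean of its assigned points."""
    c0, c1 = min(data), max(data)  # start the two centers far apart
    for _ in range(iterations):
        group0 = [x for x in data if abs(x - c0) <= abs(x - c1)]
        group1 = [x for x in data if abs(x - c0) > abs(x - c1)]
        c0 = sum(group0) / len(group0)
        c1 = sum(group1) / len(group1)
    return sorted(group0), sorted(group1)

print(two_means(observations))  # separates the two-legged and four-legged animals
```

The algorithm never needs to be told what a “bird” or a “dog” is; it only discovers that the data falls into two distinct clumps.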

Reinforcement learning

Finally, reinforcement learning makes deep learning magical because the AI system can be given a goal, such as maximizing the score in a video game. As long as there is some feedback, such as a score counter, it can learn from its actions and keep trying something different until it achieves that stated goal.
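A stripped-down sketch of this feedback loop is tabular Q-learning on a toy environment of my own invention: the agent stands on a line of five squares, only the rightmost square scores, and the only signal it receives is that reward. Through trial, error, and feedback, it learns which action to take in each square.

```python
import random

# Toy environment: 5 squares in a row; only reaching the rightmost one scores,
# like a score counter in a game. Actions: move left (-1) or right (+1).
N_STATES, ACTIONS = 5, (-1, +1)
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}  # learned value estimates

random.seed(0)
for episode in range(300):
    state = 0
    while state != N_STATES - 1:
        # Mostly exploit what has been learned so far, sometimes explore randomly.
        action = (random.choice(ACTIONS) if random.random() < 0.2
                  else max(ACTIONS, key=lambda a: q[(state, a)]))
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += 0.5 * (reward + 0.9 * best_next - q[(state, action)])
        state = next_state

# After training, the greedy policy moves right from every non-goal square.
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)
```

Nobody programs the rule “always go right”; the agent starts out flailing, and the reward signal alone shapes its behavior, which is the same principle behind DeepMind’s game-playing agents described next.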


In 2015, an impressive example of reinforcement learning took place when Google’s DeepMind researchers tasked a deep learning agent with maximizing the score in the classic arcade game Breakout. Breakout requires players to hit a moving ball with a paddle at the bottom of the screen to bounce the ball to the top of the screen and eliminate bricks.

“The easiest way to think about AI is to think of AI as a box with math & code inside. Data goes in that box and decisions about the data come out.” – Chris Nicholson, SkyMind

The more bricks the gamer hits before the clock runs out, the more points the gamer gets. In a time-lapse video posted to YouTube, DeepMind’s deep learning agent can be observed learning to play the game on its own, its only input coming from researchers who tasked it with maximizing the score. In the beginning, the deep learning agent fails miserably. It doesn’t understand what the paddle, ball and bricks do.

However, after several hours, the deep learning agent masters the game. It even surprised the researchers by discovering a trick where the ball can be aimed at a specific spot to trap it above the bricks, letting it bounce around and knock out bricks automatically with little effort from the gamer.

Representation of Tesla's Autopilot

Another deep learning example can be seen in how self-driving cars are no longer a hypothetical futuristic technology (Ng). Companies like Google, Intel, Uber, Lyft, Tesla, and a variety of automakers have poured billions of dollars into developing technology that allows cars to drive without a human driver. These deep learning agents learn to drive by analyzing hours and hours of video footage of humans driving.

Over time, the deep learning agent learns how to recognize and react appropriately to street lines, signs, signals, cars, and obstacles. Self-driving cars are estimated to save thousands of lives each year because AI does not fall victim to fatigue or distraction like human drivers.

The healthcare field is also using deep learning to help doctors in various sectors. One deep learning system was able to identify skin cancer in patients, improving both the speed and the accuracy of this task. Deep learning agents can process images of a patient’s skin disease and analyze them against a large library of skin disease images to find potential causes for concern. Impressively, the deep learning agent was exceptionally accurate, matching the performance of 21 board-certified dermatologists (Kubota).

The last example of deep learning’s capacities can be seen in the remarkable story of a deep learning agent called AlphaGo, which plays Go, one of the world’s oldest games. To play the profoundly complex game of Go, players take turns placing black or white stones on a 19 x 19 grid to capture the opponent’s stones or surround more territory on the board.

Go is said to be 10 to the 100th power more complex than chess (“The story of AlphaGo so far“). To further illustrate the complexity of Go, it is helpful to know that there are more possible move combinations in the game than atoms in the universe (“AlphaGo“). Advancements in deep learning are what finally led researchers at the AI juggernaut DeepMind (which Google acquired in 2014) to try to defeat human Go champions. DeepMind researchers wanted to see whether another milestone for artificial intelligence could be achieved beyond chess and Jeopardy.

To train the DeepMind algorithms, researchers had AlphaGo study moves made by human Go players. Then they had AlphaGo use that information to play millions of games against itself (Lewontin). Over time, AlphaGo became profoundly good at playing Go; the more it played, the better it got.


When researchers at DeepMind decided to challenge human champions to games against AlphaGo, critics doubted AlphaGo would succeed. They predicted that a computer would not defeat a human for a couple of decades because the game requires real-time strategic intuition (Lewontin).

Some consider the game of Go to be the apex of strategic thought. It requires playing spontaneously “in the moment,” relying on “intuition and feel,” qualities that showcase the intellectual depth, beauty and subtlety required to play the game (“The story of AlphaGo so far“).

The AlphaGo documentary

In 2015, AlphaGo got its chance to prove itself when it defeated three-time European champion Fan Hui 5-0. Interestingly, in the documentary about AlphaGo’s prominent rise, Hui is wholly shocked and emotionally distraught over his losses to AlphaGo.

Then in 2016, after a week-long showdown, AlphaGo beat 18-time world champion Lee Sedol 4-1. This feat stunned most fans and critics, as well as Sedol, who had bragged in press conferences leading up to the event that he would win all five games. Curiously, many insights into how AlphaGo “thinks” were revealed by the way it played, and how it reacted to Sedol’s expert moves. DeepMind engineers were particularly surprised by AlphaGo’s performance, which can be seen in the following post DeepMind researchers made on their website:

AlphaGo played a handful of highly inventive winning moves, several of which –including move 37 in game two – were so surprising they overturned hundreds of years of received wisdom, and have since been examined extensively by players of all levels. In the course of winning, AlphaGo somehow taught the world completely new knowledge about perhaps the most studied and contemplated game in history. (“The story of AlphaGo so far“)

AlphaGo’s wins were especially remarkable because DeepMind’s AI system provided never-before-seen insights into one of the world’s oldest games. Go players from all around the world are now strategically analyzing games AlphaGo has played in order to become better players themselves. While AI is typically thought of as simulating human reasoning, the AlphaGo story is a striking example of what AI is capable of doing beyond the programming of its human creators.

In an even more impressive feat, DeepMind unveiled in 2017 a much more powerful version called AlphaGo Zero, which battered AlphaGo, the version Sedol played against, 100 games to none (Simonite). Go fans have already started to learn new ideas and gameplay techniques for the 2,000-year-old game thanks to AlphaGo Zero.


It’s important to note that the original AlphaGo system learned how to play the game by analyzing 160,000 games of data from an online Go community. AlphaGo Zero, on the other hand, bypassed human knowledge by learning how to play the game on its own by just playing itself.

Much like in the classic Breakout arcade example mentioned earlier, programmers just gave the system the goal of knowing whether it won or lost, and as a result AlphaGo Zero learned over time which moves led to victories.


All these deep learning examples show how AI systems are succeeding by matching and surpassing the performance of experts in technical, complex and intuitive-based tasks. The question remains: what happens when AI systems move beyond self-driving cars, healthcare and gaming and enter into creative fields such as graphic design?

This is not a question of whether AI will enter creative fields; it is a question of how AI is already being used in creative fields, and what that means for designers. This should shift the conversation amongst designers toward asking how AI can help them work better, how it can provide never-before-considered insights into their work, and what human design skills are necessary to best complement AI systems.

Let’s be friends!

If you enjoyed this please consider staying up to date by signing up for my email newsletter and then follow CreativeFuture on Twitter and Facebook.



Posted by Dirk Dallas

Dirk Dallas holds an M.F.A in Graphic Design and Visual Experience from Savannah College of Art and Design. In addition to being a designer, he is also a writer, speaker, educator & the founder of CreativeFuture & From Where I Drone. See what he is up to over on Twitter via @dirka.

