
What is Artificial Intelligence?

Kate Dudzik


As I navigate another day of checking for misplaced commas and performance reports, I can’t help but imagine how others feel in a world of misrepresented names and titles, grasping at straws to understand what on earth someone means when they say “A.I.”. In my world and community, with a background in the Cognitive Sciences and Artificial Intelligence, I am well aware of the debates over weak AI, strong AI, optimal models, and questions of consciousness. It is something I often discuss with friends and colleagues over another Americano, in between tasks aimed at developing AGI. We laugh and trade our own jargon (“the Thermostat!”, “Yes, but it’s just an NPC”, “Blocks world, am I right?!”, “It’s a hybrid model”, “Yes, but the Turing test is no longer valid, if it ever was”), and then wonder how the world can be so confused about what exactly it is that we do and debate.

Well.

I am writing this for you as much as for myself. I want you to know, first and foremost: the way many people describe Artificial Intelligence is often misleading, confusing, and/or ego-driven, which makes it very hard to understand from the outside looking in (and, frankly, sometimes from the inside too).

Part 1: The Synthetic Umbrella

Artificial Intelligence is an umbrella term: it describes a whole discipline of study rather than any one specific style, type, or idea. When I tell people outside my field that I work with Artificial Intelligence, they usually get confused in one of two ways:

  • The most recent or most common representation labelled “A.I.” that they have been exposed to shapes their idea of what I do; they assume all A.I. development shares the same methods and goals, forgetting (or not knowing) that Artificial Intelligence is an umbrella term covering many subcategories.
  • The way I describe my work, “I work with A.I.”, does not always convey to people outside of AGI that I design and develop A.I. I build A.I., but I strongly believe I also work with it to improve what it is capable of doing. With the type of systems I develop, we require a relationship, because my work is not about building A.I. tools.

Inside our umbrella, the first split begins with the distinction between “Weak” and “Strong”.

Artificial, meaning produced by humans, not found naturally.

Intelligence, being the ability to acquire and apply knowledge and skills.

Put those together and we end up with systems designed, created, and produced by humans that can acquire and apply knowledge and skills. But like all human creations, they are not made equally: not all types are the same (even two systems of the same model type are not guaranteed to share the same level, style, or capabilities), and as always, the goal determines the scale on which a system is measured and the steps required to complete the task.

Part 2: The Primary Categories

Weak A.I. refers to almost all, if not all, of the artificial intelligence you have ever seen, heard of, or that exists today. It is the category of scripted Artificial Intelligence: systems that may grow, learn, and adapt in some cases, but remain limited to the bounds of pre-set rules and conditions belonging to one, or at most a few, levels of the overall system. Such a system is limited in its ability to create its own goals (if it can create its own goals at all; few systems do). Even when it is designed based on humans, at a biological, cognitive, rational, social, or evolutionary level, it is still only a simulation of human-like behaviour as we believe it to be, and most systems (if not all) do not replicate every level of the human system at once.

Many forget that even something as well known as working memory is still just a theory of memory: it has not been proven true, but it is one of the best theories we have about that specific process and behaviour. When we build our theories of human functionality into a model, or even things we know to be true, such as how a neuron functions, the result is still a form of Weak A.I. unless it is a whole, unscripted representation of every level mentioned above, all working together without a script. This includes most neural networks, production systems, software, decision trees, chatbots, personal assistants, aides, etc. Most A.I. is created as a tool to solve a specific goal or to exhibit a pre-determined type of behaviour; even Siri falls within this category. The amount of guidance a model receives and the structure of its code differ from system to system, depending on the performance goals and the way the developer wants the system to behave.

Strong A.I. refers to artificial intelligence with a mind of its own, capable of making its own decisions and thinking for itself with independence. It goes beyond simulating just one or two levels (such as biological and cognitive, or social and rational). It is a system that is not bound to a script but can evolve as a whole in ways that were not pre-determined or pre-defined. Areas of work such as Artificial General Intelligence strive towards this goal. By working towards Strong A.I., we learn more about how humans function through replication, and in the pursuit we can gain insight into the “black boxes” of what happens inside our own systems. Strong A.I. requires a multi-disciplinary approach, and it is an ongoing effort driven by a community of brilliant minds around the globe.

As you can imagine, this is a very hard problem to solve, and I believe that is why many people want to change the categories or re-define the first split under the umbrella of A.I. I do not think that being at an early phase of working towards Strong A.I. means we can throw away this distinction, nor does the distinction take away from the power and competency of Weak A.I. It is incredible in its own right and can accomplish many impressive tasks. “Weak” may not have been the best term; I do see many researchers feel it diminishes the value of their work, which it does not and should not. It is merely a way of classifying the goals of A.I. development.

Part 3: Machine Learning

Machine learning (ML) is a way to build up and improve the intelligence of a system through data, with as little human input as possible. So, you’ve built a system you’re sure can do the task, but how do you build up its knowledge base, or know that it actually works? We need to work with our system before we can trust it to perform at its best out in the wild.

The machine has to learn.

We train the system to perform a certain task by exposing it to information, and the system can then be “tested” on what it has learned. There are three main forms of ML technique; they do not apply to all types of A.I. and can differ greatly in complexity.

  • Supervised Learning: the dataset used to train the model is pre-classified and labelled, giving you and the model an “answer key” to learn from. This requires a vetted, pre-categorized set of data to teach the model which input belongs to which category.
  • Unsupervised Learning: the information given to the model is handed over without that “answer key”; the model is expected to categorize and parse the data based on similar and distinguishing features.
  • Reinforcement Learning: the model uses what it already believes to be true to navigate a problem state or environment so as to maximize its reward. Some models are capable of learning from their mistakes and using that knowledge to improve future performance.
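To make the supervised case concrete, here is a minimal sketch of the train-then-test cycle described above: a nearest-centroid classifier written in plain Python. The data points, the “low”/“high” labels, and the function names are all invented for illustration; real ML systems are far more sophisticated, but the shape of the process is the same: labelled data in, a learned model out, then predictions on inputs the model has never seen.

```python
# A toy supervised learner: the training set is the "answer key"
# (each point comes with its label), and the model learns one
# centroid (mean position) per label.

def train(points, labels):
    """Learn one centroid per label from labelled 2-D data."""
    sums, counts = {}, {}
    for (x, y), label in zip(points, labels):
        sx, sy = sums.get(label, (0.0, 0.0))
        sums[label] = (sx + x, sy + y)
        counts[label] = counts.get(label, 0) + 1
    return {label: (sx / counts[label], sy / counts[label])
            for label, (sx, sy) in sums.items()}

def predict(centroids, point):
    """Assign a new point the label of its closest centroid."""
    px, py = point
    return min(centroids,
               key=lambda label: (centroids[label][0] - px) ** 2 +
                                 (centroids[label][1] - py) ** 2)

# Training: labelled examples form two obvious clusters.
points = [(1, 1), (2, 1), (1, 2), (8, 8), (9, 8), (8, 9)]
labels = ["low", "low", "low", "high", "high", "high"]
model = train(points, labels)

# Testing: ask the model about points it has never seen.
print(predict(model, (0, 0)))    # near the "low" cluster -> "low"
print(predict(model, (10, 10)))  # near the "high" cluster -> "high"
```

An unsupervised learner would face the same six points but without the label list, and would have to discover the two clusters on its own; a reinforcement learner would instead receive a reward signal after each guess and adjust from there.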

In conclusion, there is a great deal of disagreement across the field of Artificial Intelligence. I hope I have shed some light on the general ways of classifying A.I., and on the goals and pursuits of different developers. Artificial Intelligence can be a powerful tool, a learning agent, a personal aide, a dream of P.K. Dick, hardware and software.

Whichever way it is described, and whichever category or domain in which it belongs, it is intelligence, and hopefully like humans, it continues to evolve as much as it learns.

