How much do you know about artificial intelligence? Over the past six months, chatbots like ChatGPT and image generators like Midjourney have quickly become a cultural phenomenon.
But artificial intelligence (AI) or “machine learning” models have been around for a while.
In this beginner’s guide, we’ll go beyond chatbots to examine different types of AI – and see how they already play a role in our lives.
How does AI learn?
The key to all machine learning is a process called training, in which a computer program is given a large amount of data – sometimes with labels explaining what the data is – and a set of instructions.
The instruction might be something like: “find all images that contain faces” or “categorize these sounds”.
The program will then look for patterns in the data it has received to achieve these goals.
It may take some nudging along the way – like “that’s not a face” or “these two sounds are different” – but what the program learns from the data and the given clues becomes the AI model, and the training material ends up defining its abilities.
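The training idea described above can be sketched in a few lines of toy Python. This is an invented, drastically simplified stand-in for real training: the “model” is just a single numeric threshold learned from labelled examples, where a real AI extracts millions of patterns.

```python
# Toy illustration of supervised training: labelled data in, a learned
# pattern out. All data, scores and names here are invented for illustration.

def train(examples):
    """Learn a threshold separating 'face' from 'not a face' examples.

    Each example is (score, label). The 'model' is the midpoint between
    the average score of each group -- a stand-in for the patterns a
    real AI extracts from millions of labelled images.
    """
    faces = [x for x, label in examples if label == "face"]
    others = [x for x, label in examples if label == "not a face"]
    return (sum(faces) / len(faces) + sum(others) / len(others)) / 2

def predict(model, value):
    return "face" if value > model else "not a face"

# The labelled training data -- the "nudging along the way".
data = [(0.9, "face"), (0.8, "face"), (0.2, "not a face"), (0.1, "not a face")]
model = train(data)
print(predict(model, 0.85))  # -> face
```

The point of the sketch is only that the training material defines the model: change the labelled examples and the learned threshold changes with them.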
One way to see how this training process can create different types of AI is to think about different animals.
Over millions of years, the natural environment has led animals to develop specific skills. Similarly, the millions of cycles an AI makes through its training data will shape the way it develops and lead to specialized AI models.
So what are some examples of how we train AIs to develop different skills?
Chatbots
Think of a chatbot as a parrot. It imitates, and may repeat words it has heard with some understanding of their context, but without a full sense of their meaning.
Chatbots do the same – albeit on a more sophisticated level – and are about to change our relationship with the written word.
But how do these chatbots know how to write?
They are a type of AI known as Large Language Models (LLMs) and are trained on large volumes of text.
An LLM is able to consider not just individual words but entire sentences, and to compare the use of words and phrases in a passage with other examples across all of its training data.
Using these billions of word and sentence comparisons, it’s possible to read a question and generate an answer – like a predictive text message on your phone, but at scale.
The amazing thing about large language models is that they can learn the rules of grammar and figure out the meaning of words, without human help.
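The “predictive text at scale” idea can be shown with a miniature word predictor. This toy counts which word most often follows each word in a tiny invented text; real LLMs compare billions of words and whole passages, not single pairs, but the continue-the-text principle is the same.

```python
from collections import Counter, defaultdict

# A miniature "predictive text" model: for each word in the training text,
# count which word most often comes next. The corpus here is invented.
corpus = "the cat sat on the mat the cat ran on the grass".split()

following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def next_word(word):
    # Pick the continuation seen most often in training.
    return following[word].most_common(1)[0][0]

print(next_word("the"))  # -> cat  ("the cat" appears twice in the corpus)
```

Feed such a model more text and its predictions get richer – which is, in caricature, what training an LLM on large volumes of text achieves.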
Can I chat with artificial intelligence?
If you’ve ever used Alexa, Siri, or any other kind of voice recognition system, you’re using AI.
Imagine a rabbit with its big ears, adapted to pick up small variations in sound. The AI records sounds as you speak, removes background noise, separates your speech into phonetic units – the individual sounds that make up a spoken word – and then compares them against a library of language sounds.
Your speech is then turned into text, where any listening errors can be corrected before a response is given.
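The “compare against a library of language sounds” step can be sketched with string matching. This is a cartoon, not how real acoustic models work: the phonetic spellings below are invented, and the sketch just picks the library word whose sound pattern is closest to the input.

```python
import difflib

# An invented mini "library of language sounds": words mapped to rough
# phonetic units. Real systems use acoustic models, not exact lists.
library = {
    "weather": ["w", "eh", "dh", "er"],
    "water":   ["w", "ao", "t", "er"],
    "wetter":  ["w", "eh", "t", "er"],
}

def recognise(phonemes):
    """Return the library word whose phoneme pattern best matches the input."""
    best, best_score = None, -1.0
    for word, pattern in library.items():
        score = difflib.SequenceMatcher(None, phonemes, pattern).ratio()
        if score > best_score:
            best, best_score = word, score
    return best

print(recognise(["w", "ao", "t", "er"]))  # -> water
```

Near-matches like “weather” and “wetter” score almost as highly – which is why the listening errors mentioned above need correcting before a response is given.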
This type of artificial intelligence is known as natural language processing.
It’s the technology behind everything from saying “yes” to confirming a bank transaction over the phone, to asking your cell phone to tell you the weather for the next few days in a city you’re traveling to.
Can AI understand images?
Has your phone ever collected your photos into folders with names like “at the beach” or “Christmas”?
If so, you’ve been using AI without realizing it. An AI algorithm discovered patterns in your photos and grouped them together for you.
These programs were trained by examining a large number of images, all labeled with a simple description.
If you give an image recognition AI enough examples labeled “bicycle”, eventually it will start to figure out what a bicycle looks like and how it’s different from a boat or car.
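In toy form, “figuring out what a bicycle looks like” from labelled examples might look like the sketch below. The two numeric “features” per image are invented for illustration; a real system learns its own features from raw pixels, but the idea of remembering what each label’s examples have in common is the same.

```python
from math import dist

# Invented training data: each "image" is reduced to two made-up features
# (say, wheel-like shapes and hull-like shapes), labelled by a human.
training = {
    "bicycle": [(2.0, 0.1), (1.9, 0.0), (2.1, 0.2)],
    "boat":    [(0.0, 1.0), (0.1, 0.9), (0.0, 1.1)],
}

# The "model": the average feature values seen for each label.
centroids = {
    label: tuple(sum(v) / len(points) for v in zip(*points))
    for label, points in training.items()
}

def classify(features):
    # A new image gets the label whose average it sits closest to.
    return min(centroids, key=lambda label: dist(features, centroids[label]))

print(classify((1.8, 0.1)))  # -> bicycle
```

Give it enough labelled bicycle examples and the bicycle “average” settles into something new bicycle images reliably land near – and boats and cars don’t.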
AI is sometimes trained to spot small differences in similar images.
That’s how facial recognition works: finding the subtle relationships between the features of your face that make it distinct and unique compared to every other face on the planet.
The same kind of algorithm has been trained on medical exams to identify life-threatening tumors – and can run thousands of investigations in the time it would take a doctor to examine just one patient.
How does AI create new images?
Recently, image recognition has been adapted into AI models that have learned the chameleon-like power of manipulating patterns and colors.
These image-generating AIs can transform the complex visual patterns they collect from millions of photographs and drawings into completely new images.
You can ask the AI to create a photographic image of something that never happened – for example, a photo of a person walking on the surface of Mars.
Or you can creatively direct the style of a picture: “Make a portrait of Brazil’s football team, painted in the style of Picasso.”
Most recent AIs start the process of generating this new image with a collection of randomly colored pixels.
It searches the random dots for any hint of a pattern it learned during training – patterns for building different objects.
These patterns are slowly improved by adding more layers of random dots, keeping the dots that develop the pattern and discarding others, until finally a likeness emerges.
Build up all the necessary patterns – “surface of Mars”, “astronaut” and “walking” – together, and you have a new picture.
As the new image is built from layers of random pixels, the result is something that has never existed before, but is still based on the billions of patterns learned from the original training images.
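The refine-the-randomness loop described above can be caricatured in a few lines. This is a loose sketch only: the “image” is four numbers, and the learned pattern is a fixed target, whereas real image generators denoise with neural networks rather than comparing against any stored picture.

```python
import random

random.seed(0)  # make the illustration repeatable

target = [0.0, 1.0, 0.0, 1.0]              # a "pattern" learned in training
image = [random.random() for _ in target]  # start: randomly coloured pixels

def closeness(img):
    # How strongly does this image show the learned pattern?
    return -sum((a - b) ** 2 for a, b in zip(img, target))

for _ in range(2000):
    i = random.randrange(len(image))
    candidate = list(image)
    candidate[i] += random.uniform(-0.1, 0.1)   # add a little randomness
    if closeness(candidate) > closeness(image):  # keep dots that develop
        image = candidate                        # the pattern, discard others

print([round(p, 1) for p in image])  # pixels have drifted close to the pattern
```

Because the loop starts from random pixels, two runs with different seeds produce different results that both still resemble the learned pattern – which is why generated images are new yet grounded in the training data.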
Society is now starting to grapple with what this means for things like copyright and the ethics of creating artwork trained from the hard work of real artists, designers and photographers.
What about autonomous cars?
Self-driving cars have been part of the AI debate for decades, and science fiction has cemented them in the popular imagination.
The AI in cars of this type is known as autonomous driving and the cars are equipped with range-sensing cameras, radars and lasers.
Think of a dragonfly, with 360-degree vision and sensors on the wings to help it maneuver and make constant adjustments during flight.
Similarly, the AI model uses data from its sensors to identify objects and figure out whether they are moving and, if so, what type of moving object they are – another car, a bicycle, a pedestrian or something else.
Thousands and thousands of hours of training to understand what good driving looks like allow the AI to make decisions and act in the real world, steering the car and avoiding collisions.
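One of those decisions, reduced to a cartoon, might look like this. The rule, margins and numbers below are entirely invented; a real driving system weighs vastly more factors, but it illustrates turning “what the sensors report” into an action.

```python
# An invented, highly simplified decision rule of the kind a driving AI
# applies many times per second: given a detected object, decide whether
# to brake. Margins and thresholds are made up for illustration.

def decide(obj_type, distance_m, closing_speed_mps):
    # Pedestrians get a larger safety margin than vehicles.
    margin = 30 if obj_type == "pedestrian" else 15
    if closing_speed_mps > 0 and distance_m < margin:
        return "brake"
    return "continue"

print(decide("pedestrian", 20, 2.0))  # -> brake
print(decide("car", 20, 2.0))         # -> continue
```

Training on hours of good driving is, in effect, how the real system arrives at rules like these – learned from data rather than written by hand.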
Predictive algorithms may have struggled for many years to deal with the often unpredictable nature of human drivers, but driverless cars have already collected millions of kilometers of data on real roads. In San Francisco, California, they are already carrying paying passengers.
Autonomous driving is also a very public example of how new technologies must overcome more than just technical hurdles.
Government legislation and safety regulations, along with a deep sense of anxiety about what happens when we hand over control to machines, are still potential obstacles to a fully automated future on our roads.
What does the AI know about me?
Some AIs simply crunch numbers, collecting and combining them in bulk to build a mass of information whose products can be extremely valuable.
There are probably already several profiles of your financial and social actions, mostly online, that can be used to make predictions about your behavior.
Your supermarket loyalty card tracks your habits and tastes through your purchases. Credit bureaus track how much you have in the bank and how much you owe on your credit cards.
Netflix and Amazon are tracking how many hours of content you watched last night. Your social media accounts know how many videos you’ve commented on today.
And it’s not just you – these numbers exist for everyone.