Machine learning (ML) is the latest buzzword, along with AI, Web 3.0, and blockchain.
ML is used almost everywhere today, from our phones and the stock market to places you might least expect, like heavy-duty industrial machines.
But have you ever wondered what is going on behind the scenes with the algorithms that recommend your next YouTube video?
In this article, we will focus on what machine learning is, how it differs from Artificial Intelligence (AI), and how it intersects with our daily lives.
Difference between Machine Learning and Artificial Intelligence
The first thing to understand is that Artificial Intelligence and Machine Learning are not the same.
Instead, ML is a subcategory of AI. It is a field of computer science and engineering focused on developing algorithms that allow computers to learn from data without being explicitly programmed.
Machine learning algorithms can be used to automatically learn and improve the performance of a task, such as recognizing objects in digital images or words in a text.
AI, on the other hand, is the broader pursuit of software designed to simulate human intelligence.
Machines learn like humans learn
Machines learn similarly to how we humans learn. Let’s look at it through the lens of how one learns a new math formula:
- We first try to understand how the formula works.
- Next, we review examples of the formula being used to familiarize ourselves with it.
- Lastly, we solve problems using the formula and then check back to see whether we did it correctly.
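The three steps above can be sketched in code. This is a deliberately tiny, hypothetical example: the "formula" being learned is just a threshold separating small numbers from large ones, derived from labeled examples and then checked against new problems.

```python
# A minimal sketch of the three learning steps, on a toy task:
# learn the boundary between "small" and "large" numbers from examples.

def learn_threshold(examples):
    """Steps 1-2: study labeled examples and derive a rule (a threshold)."""
    smalls = [x for x, label in examples if label == "small"]
    larges = [x for x, label in examples if label == "large"]
    # Place the threshold halfway between the largest "small" and smallest "large".
    return (max(smalls) + min(larges)) / 2

def check(threshold, test_cases):
    """Step 3: solve new problems and check how many we got right."""
    correct = sum(
        1 for x, label in test_cases
        if ("large" if x > threshold else "small") == label
    )
    return correct / len(test_cases)

examples = [(1, "small"), (2, "small"), (3, "small"), (8, "large"), (9, "large")]
threshold = learn_threshold(examples)                    # 5.5
accuracy = check(threshold, [(4, "small"), (7, "large")])  # 1.0
```

Real ML replaces the single threshold with millions of parameters, but the loop is the same: study examples, derive a rule, check the rule against problems you haven't seen before.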
Some of the more straightforward tasks, like differentiating between a cat and a dog, are very easy for us to solve. This is because we have the advantage of having years of experience and a “common sense” to back our predictions.
Machines don’t have this ability, and it can take millions of labeled training examples to develop an algorithm that identifies objects or characteristics with a high degree of accuracy.
Two Types of Machine Learning: Classification and Prediction
This brings us to the next thing to know about ML: the two types of ML algorithms, specifically Classification and Prediction.
Classification algorithms look for notable features in the data and compare them against features learned from examples they have already seen.
You’ve probably seen images where an apple is outlined by a box with a label like “Apple - 92%”. The “92%” indicates how confident the algorithm is that the object is an apple rather than some other fruit or object. These algorithms scale up to use cases as complex as identifying tumors in medical scans or detecting other cars on the road for self-driving vehicles.
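A hedged sketch of where a label like “Apple - 92%” can come from: the classifier scores each known class and normalizes the scores into confidences. The “prototype” features and the `sharpness` factor here are made-up illustrations, not any real system’s values.

```python
import math

# Hypothetical learned "prototype" features per class: (redness, roundness).
# All numbers are invented for illustration.
prototypes = {"apple": (0.9, 0.8), "banana": (0.1, 0.2), "orange": (0.7, 0.9)}

def classify(features, sharpness=10):
    # Score each class by closeness to its prototype, then softmax-normalize
    # the scores into confidences that sum to 1.
    scores = {name: -sharpness * math.dist(features, proto)
              for name, proto in prototypes.items()}
    total = sum(math.exp(s) for s in scores.values())
    return {name: math.exp(s) / total for name, s in scores.items()}

confidences = classify((0.85, 0.75))
best = max(confidences, key=confidences.get)
print(f"{best.capitalize()} - {confidences[best]:.0%}")
```

A real image classifier extracts its features from pixels with a neural network rather than taking two hand-picked numbers, but the final confidence step works much like this.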
The other category, prediction algorithms, leverage large quantities of historical data and extrapolate from it to make predictions, helping users make better decisions.
A familiar example is how YouTube and Netflix predict which videos you would like to watch next: they analyze your watch history and match you with other people who watch similar videos.
The same applies when Amazon recommends products based on your past purchases and matches you with those who have similar preferences as you.
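The matching idea above can be sketched in a few lines. This is not YouTube’s or Amazon’s actual system, just the core intuition: find the user whose history overlaps most with yours, then recommend what they consumed that you haven’t. The users and topics are invented.

```python
# Toy watch histories: user -> set of topics they have watched.
watch_history = {
    "you":   {"cooking", "chess", "travel"},
    "alice": {"cooking", "chess", "gardening"},
    "bob":   {"gaming", "sports"},
}

def recommend(user):
    # Similarity = size of the overlap between two users' histories.
    others = {name: h for name, h in watch_history.items() if name != user}
    most_similar = max(others, key=lambda n: len(others[n] & watch_history[user]))
    # Recommend what the most similar user watched that this user hasn't.
    return watch_history[most_similar] - watch_history[user]

print(recommend("you"))  # {'gardening'}
```

Production recommenders use weighted ratings and far more sophisticated similarity measures, but this overlap-and-recommend loop is the heart of collaborative filtering.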
If you have ever wondered why companies care so much about collecting data on their consumers, then there you have it! It’s simply to make sure they own data that can be used to predict your behaviors and feed you more products or services!
Symbolic Artificial Intelligence: The Origination of Machine Learning
ML actually grew out of the shortcomings of something known as symbolic AI.
Basically, early researchers of AI thought that by hard coding all possible pathways in an algorithm, they would be able to achieve artificial intelligence.
Hard-coding every rule of a symbolic AI proved to be a tedious task that would take forever. Instead, ML was born: feed an algorithm the necessary training data, and it returns results based on what it has learned from previous examples.
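The contrast can be made concrete with a toy spam filter. In the symbolic approach a human writes every rule by hand; in the ML approach the “rules” fall out of labeled examples. The keywords and messages below are entirely made up.

```python
# Symbolic AI: a human hard-codes every rule by hand, forever.
def symbolic_spam_filter(message):
    rules = ["free money", "click here", "winner"]  # grows only by hand
    return any(phrase in message.lower() for phrase in rules)

# ML-style: derive the "rules" (suspicious words) from labeled examples instead.
def learn_spam_words(labeled_messages):
    spam_words, ham_words = set(), set()
    for text, is_spam in labeled_messages:
        (spam_words if is_spam else ham_words).update(text.lower().split())
    return spam_words - ham_words  # words that only ever appear in spam

training = [
    ("free money now", True),
    ("click here winner", True),
    ("lunch at noon", False),
    ("free lunch today", False),
]
learned = learn_spam_words(training)
print(learned)  # {'money', 'now', 'click', 'here', 'winner'}
```

Note how “free” is learned to be harmless here because it also appears in a non-spam message, something the hand-written rule list cannot discover on its own.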
ML has come a long way since then. It has been used for many years in various domains, including natural language processing, speech recognition, and bioinformatics.
However, the rapid growth of big data in recent years has led to a renewed interest in machine learning, as it offers a way to automatically extract useful information from large data sets.
This is where machines surpass us humans. While we can quickly identify objects in a few images, doing the same millions of times over would be tiring and time-consuming. Having a machine churn through millions of data points at once improves efficiency and pays off in the long run.
The “Testers” and “Builders” of Machine Learning
An interesting way to understand machine learning is that humans aren’t the ones who actually build the algorithms we see in action today. What humans develop are “builder” algorithms that produce many candidate algorithms by making millions of random changes, along with a “tester” algorithm that measures each candidate’s accuracy against the provided data set.
The “builder” first creates a set of algorithms with different parameters. Then the “tester” checks the accuracy of these algorithms. The algorithm with the highest accuracy is sent back to the “builder”, which produces a new set of algorithms based on this algorithm’s parameters and then again sends them to the “tester” to see which one scores best.
Rinse and repeat many times, and you end up with an algorithm of near-perfect accuracy. But here’s the exciting part: neither the human, nor the “builder”, nor the “tester”, nor, surprise, surprise, the algorithm itself knows precisely what makes the final algorithm work.
This is pretty remarkable for machines that understand nothing but 0’s and 1’s. It also gives a whole new meaning to solving reCAPTCHA puzzles: not only are you proving to the website that you’re human, you are also giving machines more labeled training data so they can improve their accuracy.
The Latest Developments in Machine Learning
Two of ML’s most recent and eye-catching applications are DALL-E 2 and GPT-3. Developed by OpenAI, they are trained on enormous quantities of data, much of it labeled or curated by humans, to produce remarkably useful results.
Take DALL-E 2 as an example. It has been trained on millions of labeled images, and its algorithm can now generate an image of something that has never existed before. It opens up a whole new horizon for creativity.