
AI and ML: Are they one and the same?



As children we believed in magic and imagined a fantasy world where robots would one day follow our commands, undertaking our most menial tasks and even helping with our homework at the push of a button! But sadly it always seemed that these dreams, along with self-driving flying cars and jetpacks, belonged in a distant future or in a Hollywood sci-fi film. Would we ever get to experience that future in our lifetime?
But then it arrived! Artificial Intelligence, aka AI, made its debut in real life and became the buzzword of the 21st century, offering us new ideas to explore and incredible possibilities. And just as we were getting used to AI, we were introduced to Deep Learning and another term we often confuse with AI: Machine Learning (ML). Whew!
Suddenly the future is well and truly here, and it’s hard to keep up with these technologies – what each term means and how they relate to one another – particularly when it comes to AI and ML, which are often treated as interchangeable.
But while AI and ML fall into the same domain, they are significantly different – each has a specific application and outcome. And as more and more businesses start to question whether these tools might benefit them, we thought it was time to get to the bottom of what makes them different.

It all begins with AI.

According to John McCarthy, one of the Godfathers of AI, “AI is the science and engineering of making intelligent machines”. The idea was first put to the test in the middle of the last century with the Turing Test – an experiment proposed by mathematician Alan Turing in which a machine tries to hold a conversation indistinguishable from a human’s. Decades later, IBM’s Deep Blue defeated world chess champion Garry Kasparov, the first time a computer beat a reigning champion at the game.
When looking at how ML fits in with AI, AI is the superset and ML is one of its subsets. The latter is used predominantly in areas with huge data sets encompassing the ‘3 Vs’ of Big Data: Volume, Velocity, and Variety. AI, on the other hand, covers not only ML but also other branches including Natural Language Processing, Deep Learning, Computer Vision, and Speech Recognition. Nevertheless, both AI and ML share one common goal: to achieve intelligence on a scale that matches – and ultimately exceeds – natural human intelligence.
Anything with a smart system that makes decisions based on input data can be considered an AI-driven machine – be it a car, a door, or even a refrigerator. AI spans everything from Good Old-Fashioned AI (GOFAI) all the way up to newer, more advanced technologies like Deep Learning. Whenever a machine can “intelligently” complete a set of tasks based on algorithms without human intervention – for example, working out a series of steps to win a game or answering a general question on its own – it is termed artificial intelligence. AI machines are generally classified into three groups: Narrow, General, and Super:


  1. Artificial Narrow Intelligence, or Weak AI, covers machines programmed to perform a single task intelligently – playing a game of chess, say, or assistants such as Siri, Google Assistant, and other natural-language tools.
  2. Artificial General Intelligence, or Strong AI, refers to machines that mimic human intelligence to its core, making decisions and performing intellectual tasks driven by sentiment, emotion, and a general awareness of their environment.
  3. Artificial Super Intelligence outdoes human intelligence in abstraction, creativity, and wisdom. This is the kind of AI that Elon Musk and others fear could one day control the world.


This brings us to a practical problem: handling such a corpus of data demands computing resources that are, unfortunately, limited – and hand-coding rules for every situation simply doesn’t scale. Hence the shift in focus away from classical, rule-based AI towards ML.

The rise of the machines.

A subset of AI, ML refers to machines that learn from some form of prior knowledge or experience – making them smarter and more likely to produce results close to human intelligence. ML systems train a machine to learn and to apply decision making when it encounters new situations, and they are designed to get smarter over time. What started as AI is now leading major devices to adopt ML because it tends to yield better results, and with the emergence of Big Data, ML has gained speed and is now used by some of the world’s most powerful tech companies, including Google, IBM, Baidu, Microsoft, and Apple.
Tom M. Mitchell, a computer scientist and machine learning pioneer, has defined ML as: “The study of computer algorithms that allow computer programs to automatically improve through experience.” It focuses on making a machine or computer “learn” by providing it with a set of data and some predictions. Data is the fuel for machine learning – it is to ML what code is to traditional computing.
Training an ML model means giving an algorithm a chunk of Big Data plus one of the many learning models in order to extract processed, meaningful information – thus automating the process. It works well for specific domains where we build models to detect or separate items, for example one fruit from a given set of fruits. Another example is manufacturing: if you feed an ML program a large dataset of pictures of defects, along with their descriptions, it should later be able to automate the analysis of new pictures. By analysing the diverse dataset, the model can find similar patterns in new images, with indicators as to where a defect might be.
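To make the manufacturing example a little more concrete, here is a minimal sketch of that workflow – in Python with scikit-learn and synthetic stand-in “images”, since the article doesn’t name a specific toolkit or dataset:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier

# Stand-in for a real dataset: 200 tiny 8x8 "images", each labelled
# 0 (no defect) or 1 (defect). In practice these would be photos from
# the production line plus the inspector's description of each defect.
rng = np.random.default_rng(0)
images = rng.random((200, 8, 8))
labels = (images.mean(axis=(1, 2)) > 0.5).astype(int)

# Flatten each image into a feature vector the model can consume.
X = images.reshape(len(images), -1)
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.25, random_state=0
)

# Give the algorithm a chunk of data and a learning model...
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# ...and it can now analyse new pictures automatically.
print("held-out accuracy:", model.score(X_test, y_test))
```

The point of the sketch is the shape of the process – labelled examples in, a trained model out – rather than the particular classifier, which here is just one reasonable default.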
ML can be divided into three types: Supervised, Unsupervised, and Reinforcement Learning.


  1. Supervised Learning learns the relationship between inputs and known outputs so that we can predict outputs for new inputs based on our previous datasets. An example would be predicting the time at which customers usually buy from an online store.
  2. Unsupervised Learning has no labels on the data, meaning there is no predefined output for the model to aim at – it looks for structure on its own. For example, a robot housekeeper trained to clean dust wherever it finds it notices that dust turns up under the sofa more often than anywhere else, and so learns to check under it first.
  3. Reinforcement Learning, as its name suggests, feeds results back into the system as training input to improve it. Taking the same robot housekeeper, the dust it finds under the sofa becomes feedback that improves its future behaviour.
For more about supervised, unsupervised, and reinforcement learning, check this article; a short sketch contrasting these approaches follows below.
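As a rough illustration of the difference between the first two types – a minimal sketch with made-up customer data, not code from any real system – compare how a supervised classifier and an unsupervised clustering algorithm consume the same inputs:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
purchases = rng.random((300, 2))                     # e.g. hour of day, basket size
bought_again = (purchases[:, 0] > 0.5).astype(int)   # known outcome (label)

# Supervised: we hand the algorithm both inputs and known outputs,
# so it can predict the outcome for a new customer.
clf = LogisticRegression().fit(purchases, bought_again)
print(clf.predict([[0.8, 0.3]]))

# Unsupervised: no labels at all - the algorithm simply groups
# customers that behave similarly.
clusters = KMeans(n_clusters=3, n_init=10, random_state=1).fit_predict(purchases)
print(clusters[:10])

# Reinforcement learning would instead need an environment that rewards
# the agent (our housekeeping robot) after each action, so it doesn't
# fit into a two-line example like the ones above.
```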


Final thoughts.

Today we see AI applied in many areas of our daily lives – but it’s not as easy to ‘see’ ML. How often do you talk to Google Home or Alexa? These are AI interactions between humans and machines – but it’s what sits behind these interactions that is really interesting: the ML training models and prediction systems used by the likes of Netflix, YouTube, Facebook, and Amazon.
ML has certainly been seized upon by marketers, thanks to the opportunity it affords to understand audiences at a micro level – but it’s also a term misused more than it should be, with the assumption that every AI system is also ML. Comparing the two, anything that mimics intelligent human behaviour in a machine can be termed AI. But for that system to be an ML tool as well, it needs to use modelling techniques and a Big Data set to apply those techniques to.
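To give a flavour of what such a prediction system might look like under the hood – this is only an illustrative toy, not how any of those companies actually build their recommenders – here is a minimal user-similarity sketch in Python:

```python
import numpy as np

# Toy user-by-title ratings matrix (0 = not watched). Purely illustrative;
# real services use far richer data and far more sophisticated models.
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
], dtype=float)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

# Predict how much user 0 would like title 2 by weighting other users'
# ratings for it by how similar their taste is to user 0's.
target_user, target_item = 0, 2
sims = np.array([cosine(ratings[target_user], ratings[u])
                 for u in range(len(ratings)) if u != target_user])
others = np.array([ratings[u, target_item]
                   for u in range(len(ratings)) if u != target_user])
prediction = (sims @ others) / (sims.sum() + 1e-9)
print(f"predicted rating: {prediction:.2f}")
```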
By understanding the key differences between AI and ML and the different opportunities each provides, businesses will have a better understanding of how – if at all – these tools can be applied in their operations. 
This article originally appeared on Makeen Technologies.
