Featured post

The Limits of Artificial Intelligence


The Limits of Artificial Intelligence



Photo by Matan Segev from Pexels
If you are here, it means that you are familiar with term artificial intelligence. Either you have read about it in school or have seen it in sci-fi movies or somewhere else. Talking about the limitations of AI, let me ask you one simple question first, do you know the definition of AI? You might be thinking to answer me with a yes, yes I know what is artificial intelligence. But what if I tell you that AI is a buzzword and it is almost impossible to properly define. It is this way because the definition of artificial intelligence is moving. People don’t call the things AI that they used to call. For example, a problem that seemed too complex to be solved by human and was solved by AI algorithm is no longer a problem of AI. Playing chess, is one of the examples. It was considered the peek level of artificial intelligence back in previous century. Now it hardly fits the criteria for AI. It is presented to the world as a super power that when given to a computer, it magically starts living like other living things. It start thinking. Well, that's one side of the picture. AI has its own restrictions and limitations.
Artificial intelligence today is solving problems ranging from playing games against human players to assisting medical experts in diagnostics (visit: 10 Powerful Examples Of Artificial Intelligence In Use Today). We humans are using AI everywhere, whether we are aware of it or not. The dominance of the big fish in the American tech market (Facebook, Amazon, Netflix, Uber and Google) versus the rise of the Chinese tech giants (Baidu, Alibaba, Tencent) has created a fierce quest for resources and specialists. Yet what are all of them racing towards? AI. Artificial intelligence, after military and financial power, is the thing that will decide which country becomes the next superpower.
However, today's artificial intelligence has certain limitations. It is nowhere near the pinnacle of automation where humans are. The AI that exists today is far from achieving human-level intelligence. Artificial intelligence is compared with human intelligence because humans are the most intelligent creatures on earth, and the goal of AI is to at least reach that level of intelligence, if not cross it. Whether it is a computer program describing a scene in a picture or an autonomous car driving on busy roads while avoiding collisions and pedestrians, AI always tries to mimic human intelligence.
Admittedly, today's AI performs well and often outperforms human intelligence on specific problems. For example, in finance, AI comes up with better forecasts and decisions. AI is helping experts in almost all walks of life, including medicine, the military, business and law. But as far as generalization is concerned, it will take no less than 50 years to achieve human-level intelligence.
By generalization I mean that the human mind is a single organ which learns from experiencing many different situations, whereas a single AI model or algorithm can learn from only one kind of data. For example, an algorithm that has learned to identify vehicles in an image will fail to identify animals. It could be trained on mixed data including both animals and vehicles, but it would still never outperform the human mind in that area.
Another big limitation of AI is that it learns from data. Unlike human learning, there is no other way for an AI-powered machine to gain knowledge. AI performs better than humans only in cases where an enormous amount of data is available. It fails where creativity, critical thinking and logical reasoning are involved. An AI machine will always be a machine that does the job it is told to do. It can't figure out how to solve a problem it has never seen before.
One reason AI fails at general problems is that the human mind has a modular structure. Each section or module of the human brain is responsible for different functions; for example, the frontal lobe is involved in speech while the occipital lobe is concerned with vision. Deep learning and machine learning models lack this modularity: one model is roughly similar to a single module of the human brain. Another reason AI has not reached human-level intelligence is that the human brain is one of the most complex structures in the universe, and AI models have not come close to that complexity yet.

Related Posts:

Build Your First Neural Network: Basic Image Classification Using Keras


Image classification is one of the most important problems to solve in machine learning. It can provide vital solutions to a variety of computer vision problems, such as face recognition, character recognition, object avoidance in autonomous vehicles and many others. The Convolutional Neural Network (CNN) has been used for image classification and other computer vision problems since its inception; it is called a convolutional neural network because of its convolutional layers. Keras is a high-level library which provides an easy way to get started with machine learning and neural networks. It will be used here to implement a CNN that classifies the handwritten digits of the MNIST dataset.

Image classification is the process of determining which of a given set of classes an input image belongs to. CNNs represent a huge breakthrough in image classification. In most cases, a CNN outperforms other image classification methods and provides near human-level accuracy. CNN models do not simply spit out the name of the class the input image belongs to; rather, they give a list of probabilities. Each entry in the list shows the likelihood that the input image belongs to a certain class. For example, if we have two classes in a dataset of "cats and dogs" images, a CNN model gives us two probabilities: one shows the likelihood that the input image belongs to the "dog" class and the other the probability that it belongs to the "cat" class.

There are four basic parts of any neural network model. 
  1. Network architecture 
  2. Loss function 
  3. Optimizer
  4. Regularizer.

1. Network architecture

Network architecture refers to the organization of layers in the network and the structure of each layer. It also describes the connectivity between the nodes of one layer and the nodes of the next layer. A node is a basic functional unit used repeatedly in a layer. A CNN model usually has convolutional layers, pooling layers, dropout layers and fully connected layers.

Convolutional layers extract different features, also called activations or feature maps, from images at different levels, while pooling layers downsample and summarize these features. The dropout layer is a regularization technique which prevents the model from overfitting the training data.

2. Loss function

The loss function, also called the cost function, calculates the cost of the network during each iteration of the training phase. The cost or loss of a neural network refers to the difference between the actual output and the output predicted by the model; it tells us how well the network performed during that iteration. The purpose of the training phase is to minimize this loss value. The only way to minimize the loss meaningfully is to change the weights in each layer of the network, and that is done with the help of an optimizer.
Examples of loss functions include Mean Squared Error, commonly used for regression, and cross-entropy loss, which performs best on classification problems.
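As a rough illustration (with made-up numbers), here is how these two losses could be computed by hand for a single prediction:

import numpy as np

# hypothetical one-hot label for the digit 3 and a model's predicted probabilities
y_true = np.array([0, 0, 0, 1, 0, 0, 0, 0, 0, 0], dtype=float)
y_pred = np.array([0.02, 0.01, 0.05, 0.80, 0.02, 0.02, 0.03, 0.02, 0.01, 0.02])

mse = np.mean((y_true - y_pred) ** 2)              # Mean Squared Error
cross_entropy = -np.sum(y_true * np.log(y_pred))   # categorical cross-entropy
print(mse, cross_entropy)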

3. Optimizer

An optimizer is basically an optimization algorithm which helps to minimize or maximize an objective function; in neural networks it is used to find a minimum of the loss function. Based on the loss value and the existing weights, gradients are calculated which tell us in which direction (positive or negative) to update the weights and by how much. These gradients are propagated back through the network, and the optimizer uses them to update the weights.
There are different types of optimizers. A few of the popular ones are Adam and the different variations of the Gradient Descent algorithm, each suited to different scenarios. Adam (Adaptive Moment Estimation) is widely used for classification problems due to its speed and accuracy in finding a local minimum of the loss function.
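To get an intuition for what an optimizer does, here is a tiny sketch (separate from the Keras code below) of plain gradient descent minimizing a one-parameter toy loss:

# minimal gradient descent sketch: minimize the toy loss f(w) = (w - 3)^2
w = 0.0    # initial weight
lr = 0.1   # learning rate
for step in range(50):
    grad = 2 * (w - 3)   # gradient of the loss with respect to w
    w = w - lr * grad    # move the weight in the direction that reduces the loss
print(w)   # approaches 3, the minimum of the loss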

4. Regularizer

A regularizer is not a mandatory component of a neural network, but it is good practice to use one because it prevents the model from overfitting. Overfitting means a larger generalization error: an overfit model performs extremely accurately on the training data but poorly on data it has never seen before. There are different regularization techniques, such as dropout, L1 and L2 regularization. To prevent our model from overfitting the training data, we will add a dropout layer to it.

That's enough theory. Let's go through the code step by step.

1. Import keras library:

import keras

2. Load MNIST dataset:

Keras provides an easy-to-use API to download basic datasets like MNIST, CIFAR-10, CIFAR-100 and Fashion-MNIST. It takes just two lines to load the entire dataset into local memory.

mnist = keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()

3. Define some global variables

batch_size = 200
epochs=5
input_shape = [-1, 28,28,1]

4. Pre-process data

In pre-processing, we will only normalize the images, convert the labels to categorical format (also called one-hot encoding), and reshape the images. Normalization brings the pixel values into the range 0-1. It is not strictly necessary, but it helps to improve accuracy. The labels, however, do need to be converted to categorical format, because there are 10 classes in MNIST and, as discussed in the introductory section above, a CNN gives a list of probabilities.
x_test = x_test/255.0
x_train = x_train/255.0
 
MNIST labels are single digits ranging from 0-9. In one-hot encoding, each digit is converted to an array of 10 values with a 1 at the index corresponding to the digit. For example, 2 is converted to [0,0,1,0,0,0,0,0,0,0] and 3 is converted to [0,0,0,1,0,0,0,0,0,0].
One-hot encoding effectively tells the model that, for an image of the digit 3, it should give the maximum probability at index 3. It sounds a little hard, but Keras has a utils module which saves us time.

y_train = keras.utils.to_categorical(y_train)
y_test = keras.utils.to_categorical(y_test)

A CNN also considers the number of channels in its convolution operations, and MNIST images are provided in 28x28 format. All of these images are grayscale, so they have only one channel, and we reshape them to [-1, 28, 28, 1]. The -1 here means that all the images in the array are reshaped, however many there are.
If you don't understand, don't worry about it—Legendary Andrew Ng
x_train = x_train.reshape(input_shape)
x_test = x_test.reshape(input_shape)

5. Build model

Here is where we define our network architecture. Keras' Sequential model API is pretty easy to understand: it creates a model by stacking layers on top of each other in the order they are provided. All we need to do is create an object of the Sequential class and add layers to it using the add method. There is also an option to pass the layers to the constructor, but I prefer to use the add method; it gives a clue how the input passes through the network.

model = keras.Sequential()
model.add(keras.layers.Conv2D(6, (3,3), activation=keras.activations.relu,
                              input_shape=[28,28,1]))
model.add(keras.layers.MaxPool2D())
model.add(keras.layers.Conv2D(16, (3,3), activation=keras.activations.relu))
model.add(keras.layers.MaxPool2D())
model.add(keras.layers.Conv2D(120, (3,3), activation=keras.activations.relu))
model.add(keras.layers.Flatten())
model.add(keras.layers.Dense(84, activation=keras.activations.relu))
model.add(keras.layers.Dropout(0.5))
model.add(keras.layers.Dense(10, activation=keras.activations.softmax))

Remember what an optimizer and a loss function are? We definitely need the optimizer to update the weights and the loss function to calculate the cost, or loss, of the network during the training phase.
optimizer = keras.optimizers.Adam()
model.compile(optimizer=optimizer, loss=keras.losses.categorical_crossentropy,
              metrics=['accuracy'])

6. Training

Our model is now ready to enter the training phase. We will call the fit function and provide the training data we want our model to fit. Some other information is needed as well, such as the batch size, the number of epochs and the verbosity level.
model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1)

7. Testing

Once all the epochs are completed and the training phase ends, we evaluate our model to see how good it is at classification.

results = model.evaluate(x_test, y_test, batch_size=batch_size, verbose=0)
print('{}: {:.2f}, {}: {:.2f}'.format(model.metrics_names[0], results[0],
                                      model.metrics_names[1], results[1]))

8. Save trained model

In order to use the trained model for classification next time, it needs to be saved, because it would be insane to retrain the model every time we need to use it.

model.save('model.h5')

To use the already trained and saved model, load it with Keras' load_model function. If you have a saved model, you don't need steps 5 and 6.

new_model = keras.models.load_model('model.h5')
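As a quick, hypothetical usage example, the loaded model can classify a test image like this (assuming x_test is still pre-processed and reshaped as in step 4):

import numpy as np

probabilities = new_model.predict(x_test[:1])        # shape (1, 10): one probability per digit
predicted_digit = np.argmax(probabilities, axis=1)   # index of the highest probability
print(predicted_digit)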

Note: In this post, I have skipped some details to make things easy to understand. However, we will see those details in upcoming posts.
If you have any issue with the code, feel free to ask in the comments. I will try to reply instantly.

    How Computers Understand Human Language?


    Photo by Alex Knight on Unsplash
    Natural languages are the languages that we speak and understand, containing large, diverse vocabularies. Words can have several different meanings, speakers have different accents, and there is all sorts of interesting word play, but for the most part humans can roll right through these challenges. The skillful use of language is a major part of what makes us human, and for this reason the desire for computers that understand or speak human language has been around since they were first conceived. This led to the creation of natural language processing, or NLP.
    Natural language processing is an interdisciplinary field combining computer science and linguistics. There is an essentially infinite number of ways to arrange words in a sentence, so we can't give computers a dictionary of all possible sentences to help them understand what humans are blabbing on about. An early and fundamental NLP problem was therefore deconstructing sentences into smaller pieces which could be more easily processed.
    In school you learned about nine fundamental types of English words.
    1.     Nouns
    2.     Pronouns
    3.     Articles
    4.     Verbs
    5.     Adjectives
    6.     Adverbs
    7.     Prepositions
    8.     Conjunctions
    9.     Interjections
    These are all called parts of speech. There are all sorts of sub-categories too like singular vs. plural nouns and superlative vs. comparative adverbs but we are not going into that. Knowing a word’s type is definitely useful, but unfortunately there are a lot of words that have multiple meanings like rose and leaves which can be used as nouns or verbs.
    A digital dictionary alone is not enough to resolve this ambiguity, so computers also need to know some grammar. For this, phrase structure rules were developed, which encapsulate the grammar of a language. For example, in English there is a rule that says a sentence can consist of a noun phrase followed by a verb phrase. A noun phrase can be an article like "the" followed by a noun, or an adjective followed by a noun, and you can make rules like this for an entire language. Using these rules it is fairly easy to construct what is called a parse tree, which not only tags every word with a likely part of speech but also reveals how the sentence is constructed.
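    As a small, hedged illustration of part-of-speech tagging in practice, here is a sketch using the NLTK library (assuming it is installed and its 'punkt' and 'averaged_perceptron_tagger' resources have been downloaded); the sentence is a made-up example where "rose" and "leaves" are ambiguous:

    import nltk

    sentence = "The rose leaves fall early"
    tokens = nltk.word_tokenize(sentence)   # split the sentence into words
    print(nltk.pos_tag(tokens))             # tag each word with a likely part of speech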
    These smaller chunks of data allow computers to more easily access, process and respond to information. Equivalent processes happen every time you do a voice search like "where is the nearest pizza". The computer can recognize that this is a "where" question, knows that you want the noun "pizza", and that the dimension you care about is "nearest". The same process applies to "what is the biggest giraffe?" or "who sang Thriller?" By treating language almost like Lego, computers can be quite adept at natural language tasks. They can answer questions and also process commands like "set an alarm for 2:20". But as you have probably experienced, they fail when you start getting fancy and they can no longer parse the sentence correctly or capture your intent.
    I should also mention that phrase structure rules, and similar methods that codify language, can be used by computers to generate natural language text. This works well when data is stored in a web of semantic information, where entities are linked to one another in meaningful relationships, providing all the ingredients you need to craft informational sentences.
    These two processes, parsing and generating text, are fundamental components of natural language chatbots. A chatbot is a computer program that chats with you. Early chatbots were primarily rule-based: experts would encode hundreds of rules mapping what a user might say to how the program should reply. But this was difficult to maintain and limited the possible sophistication.
    A famous early example was ELIZA, created in the mid-1960s at MIT. This was a chatbot that took on the role of a therapist and used basic syntactic rules to identify content in written exchanges, which it would turn around and ask the user about. Sometimes it felt very much like human-to-human communication, but sometimes it would make simple and even comical mistakes. Chatbots have come a long way in the last fifty years and can be quite convincing today.
    At the same time, innovation in the algorithms for processing natural language has moved from hand-crafted rules to machine learning techniques that learn automatically from existing datasets of human language. Today, the speech recognition systems with the best accuracy use deep learning.

    Related Read: Deep Learning: A Quick Overview

    Machine Learning: A Truthy Lie?

    Photo by Franck V. on Unsplash
    For all these years, we have been misled by the term machine learning. We have been told that machine learning makes a machine capable of thinking and acting like a human. Machine learning is one of the most misused terms; it does not really mean what it sounds like. It is a lie, a truthy lie.
    What is meant by a truthy lie?
    Each year Merriam-Webster releases a top-10 list of its most searched words. In 2003, the top word on the list was democracy. In 2004, the word blog made it to the top. The winning word for the year 2006 was truthiness: "truth coming from the gut, not books; preferring to believe what you wish to believe, rather than what is known to be true". A statement which could be a lie is used so often that it eventually feels like the truth. "Bet on the jockey, not the horse" is a truthy lie.
    Similarly, "machine learning" has been used over time for any kind of activity to train a machine or a computer so it could think or act like a human. The word is used so often that now it feels like machines are really capable of learning.
    If machine learning is not actually machine learning then what is it?
    To answer this question, let's first understand what learning is. Learning, by definition, is "the acquisition of knowledge or skills through observation, experience or being taught". Anything which has the capability to learn learns from its environment or experience. Any living thing lives the way it lives because it grew up in an environment where others live that way. A chick of a flying bird would never fly if it were raised with chickens, and a cub may never eat meat if it were raised with cows and goats. We speak the language our society speaks and learn a new language from a teacher. Someone born to Spanish parents would never speak Spanish if raised in the tribes of Africa. We also learn from examples: we do not touch a kettle if we see someone burn their fingers by touching it, and an investor would never invest in a company which has pushed other investors into a loss.
    Machine learning is a process in which machines learn from experience, and that experience comes in the form of examples. In order to teach a machine how to do a task, it is provided with a lot of examples. It is told what is right and what is wrong, either explicitly or implicitly. This is how machine learning is defined by machine learning scientists to the rest of the world.
    But behind the scenes, machine learning is nothing more than mathematical function estimation. It is just math that the human mind is incapable of doing, not because of its complexity but because of its sheer amount. A huge number of simple calculations is required to make a machine able to do even a simple task; a simple neural network may involve several thousand, if not millions of, multiplication operations just to detect a human face in a picture.
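    As a tiny illustration of this point, here is a sketch (with made-up data) in which "learning" a rule from examples is nothing more than a least-squares calculation:

    import numpy as np

    # made-up examples of an underlying rule y = 2x + 1, with a little noise
    x = np.linspace(0, 10, 50)
    y = 2 * x + 1 + np.random.normal(0, 0.5, size=x.shape)

    w, b = np.polyfit(x, y, deg=1)   # "learn" the rule by estimating a line from the examples
    print(w, b)                      # close to 2 and 1: the learned behavior is plain math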
    Unlike good old-fashioned AI, machine learning makes a computer system able to learn from experience and change its behavior without being explicitly programmed. This statement might sound fascinating: how could a computer do a task it has never been programmed to do? Well, that is a truthy lie, but a lie all the same. Each step in the learning process is written by a human. From simple addition and multiplication to the learning process itself, everything is programmed into a dumb computer.
    Computers today have their so-called intelligence because of their speed, and they could be a threat to humanity because of this quality. Artificial intelligence is a contest between speed and real intelligence. If something is intelligent enough to devise a counter-attack strategy to destroy its enemy but lacks the speed to implement the strategy in time, its enemy will crush it for sure. That is why I say AI will take this world to its end.

    Supervised, Unsupervised and Reinforcement Learning


    What's The Difference Between Supervised, Unsupervised and Reinforcement Learning?

    Machine learning models are useful when a huge amount of data is available, there are patterns in the data, and there is no algorithm other than machine learning to process that data. If any of these three conditions is not satisfied, machine learning models are likely to under-perform.
    Machine learning algorithms find patterns in data and try to learn from them as much as they can. Based on the type of data available and the approach used for learning, machine learning algorithms are classified into three broad categories.
    1. Supervised learning
    2. Unsupervised learning
    3. Reinforcement learning
    An abstract definition of the above terms would be that in supervised learning, labeled data is fed to the ML algorithm, while in unsupervised learning, unlabeled data is provided. There is another learning approach which lies between supervised and unsupervised learning: semi-supervised learning, in which algorithms are given partially labeled data.
    Reinforcement learning, however, is a different type of learning, based on a reward system. The ML model or algorithm is rewarded for each decision it makes during the training phase. The reward can be positive, as encouragement for a right decision, or negative, as punishment for a wrong one.
    Let's have a look at each of these terms in detail, with examples.

    What is supervised learning?

    Suppose you are in math class (yes, math class; if you don't like math, you shouldn't be here) and you are given a problem along with its related data, and you are asked to solve it for the available data. You make an attempt and come up with a wrong answer. Your teacher is a noble person: he or she does not grade your solution but shows you the correct answer. You compare it with your answer, try to identify where you made mistakes, and try to correct them. You are in math class, so of course you will get many problems to solve, and the process continues.
    The same is the case with supervised learning. The data is provided with its labels. Labels are the expected outputs for the input data, and they are provided by humans. In the example above, the correct answer the teacher gives you is the label; it is also called the actual output.
    In supervised learning, each instance in the dataset is labeled with its actual output, and the learning algorithm has to find a way to come up with exactly the same or a closely related answer. For example, in a labeled dataset of vehicle images, each image carries the name of the vehicle it shows: a car image is tagged with "car", a bus image with "bus", and so on.
    If you are wondering how such huge amounts of data get labeled, visit Amazon Mechanical Turk. It is a program hosted by Amazon in which people are paid for data labeling tasks.

    How does the supervised learning approach work?

    A machine learning model processes an instance from the dataset and calculates an output for that instance, called the predicted output. The predicted output is then compared with the label of that instance, which is also called the actual output. If you have understood the math class example, you might be able to guess the next step. Make a guess.
    The next step, as you might have guessed, is to find the difference between the actual output and the predicted output and adjust the solution accordingly.
     Supervised learning examples:
    • Classification 
    • Regression
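    As a minimal sketch of supervised learning (using scikit-learn and its built-in Iris dataset, assuming the library is installed):

    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)   # features and their labels (actual outputs)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

    model = DecisionTreeClassifier()
    model.fit(X_train, y_train)          # learn from labeled examples
    print(model.score(X_test, y_test))   # accuracy on labeled data the model has never seen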

    What is unsupervised learning?

    Suppose you are sitting in an exam hall and you are given a problem to solve. Unlike in math class, this time you have the problem and the input data but you are never shown the correct answer. You just solve the problem your own way, with no idea whether your answer is correct or how far off it is.
    The same is the case with unsupervised learning. In this approach, unlabeled data is given to the machine learning model. There is no actual or expected output; the only output is the predicted output that the model produces itself. So the difference is that in supervised learning labeled data is provided, while in unsupervised learning unlabeled data is handed to the model.
     Unsupervised learning examples:
    • Clustering
    • Pattern and sequence mining
    • Image segmentation.
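    And a minimal sketch of unsupervised learning on the same data, but without labels (again assuming scikit-learn is installed):

    from sklearn.datasets import load_iris
    from sklearn.cluster import KMeans

    X, _ = load_iris(return_X_y=True)    # the labels are ignored: only raw data is given
    kmeans = KMeans(n_clusters=3, random_state=0, n_init=10)
    clusters = kmeans.fit_predict(X)     # the model groups similar instances by itself
    print(clusters[:10])                 # cluster index assigned to the first ten instances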

    Reinforcement learning

    How do we teach kids? We ask them to do something, and when they are done we reward them somehow for good results and punish them otherwise. The reward could be anything, like a chocolate bar or just verbal encouragement. Operant conditioning is the term used for this type of learning in behavioral psychology.
    In machine learning, we call this type of approach reinforcement learning. In the beginning, kids have no idea how to do a task; they learn from experience. They are more likely to repeat the behavior they are rewarded for and to avoid the behavior they are punished for.
    Similarly, reinforcement learning models initially have no idea how to perform a task. The model is rewarded or punished each time it makes a correct or wrong decision during training.

    How does reinforcement learning work?

    Although this type of learning is close to how humans and animals learn, to implement it in computers we borrow ideas from game theory: there is a set of states the agent can be in, a set of actions it can take to reach another state, and a reward (or points) for each state-action pair. We will not go into the details of how reinforcement learning works here; just grab the point that this game-theoretic framing is used in reinforcement learning. Of course there are other methods too, but they are similar.
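    To make the reward idea concrete, here is a toy, hypothetical Q-learning sketch on a five-state chain where the agent is rewarded only for reaching the last state (a simplified stand-in for the state/action/reward setup described above):

    import numpy as np

    n_states, n_actions = 5, 2              # actions: 0 = move left, 1 = move right
    Q = np.zeros((n_states, n_actions))     # table of state-action values
    alpha, gamma, epsilon = 0.1, 0.9, 0.2   # learning rate, discount factor, exploration rate

    for episode in range(500):
        state = 0
        while state != 4:
            # explore a random action sometimes, otherwise exploit the best known action
            if np.random.rand() < epsilon:
                action = np.random.randint(n_actions)
            else:
                action = int(np.argmax(Q[state]))
            next_state = max(state - 1, 0) if action == 0 else min(state + 1, 4)
            reward = 1 if next_state == 4 else 0   # positive reward only for reaching the goal
            Q[state, action] += alpha * (reward + gamma * np.max(Q[next_state]) - Q[state, action])
            state = next_state

    print(np.argmax(Q[:4], axis=1))   # learned policy for states 0-3: always move right (action 1)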

    How Big Data Analytics Can Help You Improve And Grow Your Business?


    There are certain problems that can only be solved through big data. Here we discuss the field of big data as "Big Data Analytics". Before big data came into the picture, we never imagined that commodity hardware could be used to store and manage data reliably and affordably compared to costly specialized systems. Now let us discuss a few examples of how big data analytics is useful nowadays.
    When you go to websites like Amazon, YouTube or Netflix, they show you sections that recommend products, videos, movies or songs. How do you think they do it? The data generated on these websites is anything but small: it is big data. They analyze it carefully, work out what you like and what your preferences are, and generate recommendations for you accordingly.
    If you go to YouTube, you will have noticed that it knows what kind of songs or videos you want to watch next. Similarly, Netflix knows what kind of movies you like, and Amazon knows what kind of products you would prefer to buy. All of this happens through big data analytics. One well-known example involves Walmart, which used big data analytics to profit from purchase patterns: they studied the purchasing patterns of different customers around the times a hurricane was about to strike a particular area, and the analysis showed that people tend to buy emergency supplies like flashlights, life jackets and a few other items, and that a lot of people also buy chocolate. This example shows how big data analytics can help you improve or grow your business and find better insights in the data you have.


    Big Data Collected by Smart Meter

    Earlier, the data collected from the meter in our home to measure the electricity consumed was sent roughly once a month. Nowadays, with smart meters such as those IBM works with, data is collected every 15 minutes: whatever energy we have consumed in each fifteen-minute interval is sent, and because of this, big data is generated. For every million meters, that is 96 million reads per day, which is a huge amount of data. "Managing the large volume and velocity of information generated by short interval reads of smart meter data can overwhelm existing IT resources."


    Problem With Smart Meter Big Data

    IBM realized that a huge amount of data was being generated and that it was important to get something out of it. What did they need to do? They needed to analyze this data. They realized that big data analytics could solve a lot of problems and give them better business insight. Let's move on to what type of analysis they do on that data.

    How Smart Meter Big Data Is Analyzed

    Before analyzing the data, all they knew was that energy utilization and billing were increasing. After analyzing the big data, they learned that users require more energy during peak load and less energy during off-peak times. What advantage do they get from this analysis? One thing we can think of right away is that they can advise industries to run their machinery only during off-peak times, so that the load is much better balanced. Time-of-use pricing encourages cost-sensitive consumers, like industrial heavy machinery, to run during off-peak times. It saves money as well, because off-peak prices are lower than peak-time prices. And this is just one analysis.

    IBM Smart Meter Solution

    First, all the data we get is dumped into a data warehouse, after which it is very important to make sure that user data is secure. Then the data needs to be cleaned: as we discussed earlier, there may be many fields we don't require, so we make sure only useful material stays in the dataset, and then we perform the analysis.
    In order to use the suite that IBM offers efficiently, we have to take care of a few things.
    • We have to be able to manage the smart meter data. There is a lot of data coming from all these millions of smart meters, so we have to be able to manage that large volume of data and also retain it, because later on we might need it for regulatory requirements or similar purposes.
    • We have to monitor the distribution grid so we can improve and optimize overall grid reliability, and identify abnormal conditions that cause problems.
    • We can also take care of optimizing the unit commitment. By optimizing unit commitment, companies can satisfy their customers even more and reduce power outages, so that their customers don't get angry; they can identify more problems and then reduce them.
    • Optimizing energy trading means we can advise customers on when they should use their appliances in order to maintain the balance of the power load.
    • To forecast and schedule load, companies must be able to predict when they can profitably sell excess power and when they need to hedge their supply.

    ONCOR Using IBM Smart Meter Solution

    Now let's discuss how ONCOR has made use of the IBM solution. Oncor is an electric delivery company; it is the largest electricity distribution and transmission company in Texas and one of the six largest in the United States. It has more than three million customers, its service area covers almost 117 thousand square miles, and it began its advanced metering program in 2008, deploying almost 3.25 million meters serving customers in North and South Texas. When implementing the solution, they kept three things in mind.
    • The first is that it should be "instrumented". The solution uses smart electricity meters so that a household's electricity usage can be accurately measured every 15 minutes (as we already discussed, a smart meter sends out data every 15 minutes), providing the data inputs that are essential for consumer insights.
    • It should be "interconnected". Customers now have detailed information about the electricity they are consuming, and the solution creates an enterprise-wide view of all the meter assets. It also helps improve service delivery.
    • It should make the customer "intelligent". Since how each household or customer consumes power is already being monitored, the company can advise customers, for example to run their washing machines at night: if they are using a lot of appliances during the daytime, they can shift some of them to off-peak hours and save more money. This is beneficial for both the customers and the company.

    Artificial Intelligence Vs Machine Learning Vs Deep Learning



    Artificial intelligence, machine learning and deep learning are often confused with each other. These terms are used interchangeably, but they do not refer to the same thing. They are closely related, which makes it difficult for beginners to spot the differences among them. The reason for this puzzle, I think, is that AI is classified in many ways: it is divided into subfields with respect to the tasks AI is used for, such as computer vision, natural language processing, and forecasting and prediction, and also with respect to the type of learning approach and the type of data used.
    The subfields of artificial intelligence have much in common, which makes it difficult for beginners to clearly differentiate among these areas. Different AI approaches can process similar data to perform similar tasks; for example, deep learning and SVMs can both be used for object detection. Each has pros and cons: in some cases classic machine learning is the best choice, while in others neural networks outperform classic machine learning algorithms.
    Here, we will make an attempt to define each of these terms and highlight similarities and dissimilarities with the help of examples.

    Artificial Intelligence

    Artificial intelligence, AI for short, is a broad term used for any concept or technique which brings some sort of intelligence to a computer. A computer here means anything which has the capability to store and process data; in many writings the word machine is used, but they refer to the same thing.
    Artificial intelligence spans a wide range of machine intelligence. It refers to the intelligence a machine uses to play a game against a human, to translate one human language into another, to recognize things in a scene, to classify emails as spam or not spam, to drive a car, to autopilot a plane, and so on.

    Machine Learning

    Machine learning is a subfield of artificial intelligence. It includes the study of algorithms that bring intelligence to computer systems without any explicitly specified rules. Unlike obsolete expert systems, which work on predefined if-then rules, machine learning finds patterns in the data and makes decisions by itself. Finding patterns in data is what we call learning in a machine learning algorithm. A huge amount of data is required to teach a machine to perform a specific task; this is where big data blends with machine learning. If there is no big data, there is no machine learning.
    One of the most important points to remember about machine learning is that researchers or developers are required to specify the learnable features in the data, which is a tedious task. An ML algorithm does not perform well on raw data. Extracting learnable features from raw data is called feature extraction, and it is a crucial step.
    Machine learning algorithms include SVM, Random Forest, Decision Tree, Bayesian Network etc.
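    As a small, hedged sketch of this feature-based style of machine learning (using scikit-learn's built-in digits dataset, where each image is already flattened into 64 pixel features, and assuming the library is installed):

    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    # each image is represented by 64 prepared pixel features rather than raw data
    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

    svm = SVC()
    svm.fit(X_train, y_train)
    print(svm.score(X_test, y_test))   # accuracy of the SVM on the held-out digits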

    Deep Learning

    Deep learning is considered a subfield of machine learning by many, but I think of it as a subfield of artificial intelligence: it is more like a sibling to ML than a child. Although in deep learning a machine learns from data and takes decisions based on its knowledge, as it does in machine learning, its learning process is quite different from classic machine learning approaches.
    Deep learning takes input data in its raw format and decides for itself which features are important enough to learn. Because deep learning models perform the feature selection process themselves, this is also called feature learning.
    Similar to machine learning, deep learning is nothing without a huge amount of data. However, we do not need to worry about learnable features and patterns in the data: the model finds the important features and learns them by itself.
    Deep learning uses neural networks, which are inspired by biological neurons. Although artificial neural networks do not perform as well as biological neurons do, they mimic their functionality to some extent.

    Conclusion

    You can think of artificial intelligence as the earth, and machine learning and deep learning as two subcontinents which are not separated by a clear line; the border is blurry. Of course there are other subcontinents too, which are also part of the earth. I think it does not matter much whether you can differentiate the different fields of AI. What matters most is whether you can solve a problem by using artificial intelligence. AI looks like a mess at the beginning, but it is not what most people think it is.

    How To Become A Successful Programmer?

    Photo by Samuel Zeller on Unsplash


    I have heard many novice programmers say they want to get better at programming, yet there is hardly any improvement in their skills. I have observed that most of them say they want to get better, but that is just a wish; they do not really mean it. They merely wish to improve their skills, they do not work for it. Your wish does not guarantee that you will become a successful programmer.
    Many other people who have developed an interest in computer programming do not know how to reach the point where they will be called successful programmers. They either keep wandering in the middle of nowhere or just give up. The same response applies to them as it does to the wishers.
    Your interest does not guarantee that you will succeed.
    Programming is a field which requires intensive work to master. Along with improving your technical knowledge of programming, you need to work on your interest. You need to develop a habit of not giving up. You need to make your brain believe that it is capable of the things it thinks it is not. You need to know how to move forward and how to keep moving forward. You need to convince your inner self that one day you will become the person who they call a successful programmer.
    I am not saying that technical knowledge is not necessary and that you should concentrate all your efforts on the non-technical things. What I am trying to say is that, in order to get better at programming, a parallel non-technical process will help you. I am calling it a non-technical process for no particular reason; you can call it whatever you want, just go with the flow.
    So what is that non-technical process? It is nothing but a few habits and characteristics. The process is not specific to programming; it can be used for anything you want to achieve: to make more money, to get sound health, or to make your relationships better. The reason I am calling it a process is that you need to keep practicing it. It never ends. Being a successful programmer is your goal, and this is the process you need to repeat until you reach your goal.
    The process has four key elements, which are given below in the order that works for me. If you have a good reason to change the order, please let us know too. There is a comment section at the end of the page. Interact with us, we don't bite.

    Four elements of the process:

    1. Burning Desire
    2. Faith
    3. Imagination
    4. Persistence

    1. Burning Desire

    Desire is defined in the Oxford dictionary as a strong feeling of wanting to have something or wishing for something to happen. Desire is the first reason why we do anything; it gives us a direction to move in and motivates us to do something.
    But desire by itself is not enough for success. Desires are often replaced by other desires. In order to achieve what you desire, you must have a burning desire, not just a desire: a desire which leads you to decide and get up, instead of leaving you in a state of indecision. You must feel the urge to get whatever you want. You must develop extreme interest, an interest which cannot be diminished by procrastination.

    How to cultivate a burning desire to become a successful programmer?

    First of all, convince your mind that there is no backing out. Promise yourself that you will never quit. Close all the doors that would help you escape.
    Try to connect with people somehow who are already better than you in programming. Dan Peña, a successful businessman and a multimillionaire says:
    Show me your friends and I'll show you your future.
    People you surround yourself with always have an influence on your personality. Surround yourself with people better than you, for inspiration, and also with people who are not as good as you; the people below you in your circle will help you see your progress and will learn from you. Make new connections on LinkedIn and join programming communities on Facebook, Reddit, etc., but keep a balance in your circle: don't surround yourself only with successful people, and don't only add people who want to learn from you.

    Next, reinforce in your mind that you are serious about programming; it is not a joke. Surround yourself with things that motivate you: print out quotes and images of brilliant programming gurus and put them up around your house. Changing your desktop and mobile backgrounds to motivating wallpapers really boosts the desire.
    There is nothing better than reading if you want to get good at something. Of course practice is necessary, but reading is the first step and practice comes next. Read books related to your topic, attend seminars, listen to podcasts, and so on. Most importantly, read about how successful programmers got there and what they are doing now. Read their success stories. It will help you not to lose your interest.
    There are a lot of ways to keep your desire alive. Search for them and follow them or be creative and create your own ways. There is no wrong or right way. What matters the most is which works for you.

    2. Faith

    Faith is complete trust or confidence in someone or something. I am not talking about religious faith or anything like that; I am talking about faith in your ability to succeed: self-confidence. You will never get the things you believe you cannot get.
    The Power of Your Subconscious Mind is a wonderful book by Dr. Joseph Murphy, who describes in detail how thoughts and beliefs are imprinted on the subconscious mind and how these thoughts take physical shape in the real world.
    Confident people do not hesitate to ask questions. The more you ask, the more you learn. It gives you the ability to say yes to right things and no to wrong and inappropriate things. Confidence makes you able to overcome any sort of fear, fear of failure, fear of success, fear of misfortune, fear of what others think, fear of loss of love and relationship. It is self-confidence and faith in yourself which can make you set your goals high. If you believe that you will succeed, you will.

    3. Imagination

    If you can imagine yourself where you want to be, you will be there one day. Imagination is considered one of the keys to success. All successful people are dreamers. If you can imagine something in your possession, you will have it, whether it is a skill, a fortune, a good relationship or anything else. Just believe in yourself, work for it and chase your dreams.
    Let's make it more related to programming. A powerful imagination helps a lot in logic building. I have told you that each concept of programming is a piece of a puzzle; if you want to solve a problem, you need to combine these pieces. While you learn the basic concepts of programming, give each concept a unique shape in your imagination. For example, an if statement would have one shape and a loop would have another. Recall these shapes when you need to solve a problem.
    To make the process more interesting, add sounds to it. Just as two magnets make a sound when they snap together, imagine a similar sound when you combine two pieces of the puzzle successfully.
    Programming is not rocket science, but a combination that sounds perfect might not work in reality. Don't give up. Keep trying. Try different combinations. The imagination faculty of the mind gets better with practice: the more you practice, the more powerful your imagination becomes. One day you will reach a point where you can build more than half of the logic for a problem while reading or listening to its statement.
    I would recommend you read The Power of Your Subconscious Mind by Dr. Joseph Murphy. It presents sound proof that imagination works and teaches you how to think.

    4. Persistence

    Persistence is something without which you will never succeed. It does not matter how good you are at logic building, how powerful your imagination is, or whether you have a burning desire: if you are not persistent, you will never succeed.
    Jack of all trades, master of none...
    Inconsistency is like a parasite. It eats all your efforts and all the hard work you have done. I listed inconsistency first in 5 Mistakes That I Have Made. I know how inconsistency pushes one backward, how it downgrades progress and how it wipes out most of the effort one makes. It is like an income tax: you work to make money for yourself and your family, and the government takes its share; nobody is happy to pay taxes.
    If you want to be a good programmer, stick to programming. Never change your field. Learning how to code is a tedious task. Take your time to decide on one programming language to learn and stick to it. Once you think you know how to code, then start learning other languages.
    Remember, each and every concept is like a piece of a puzzle. It does not matter what color it has; if it fits somewhere, it will work. The same is the case with different programming languages: all the basic concepts work the same in every language, just the syntax is different. Once you master one programming language, learning a new one becomes a matter of time; you just need to get familiar with its syntax.

    Recommended Readings:

    Introduction to Data Science: What is Big Data?


    First, we will discuss, step by step, how big data evolved.

    Evolution of Data

    How did data evolve, and how did big data come about?
    Nowadays data comes from many different sources: the evolution of technology, IoT (the Internet of Things), social media like Facebook, Instagram, Twitter and YouTube, and many other sources are creating data day by day.

    1. Evolution of  Technology

    Let's see how technology has evolved. In the earlier stages we had the landline phone, but now we have Android, iOS and HongMeng OS (Huawei) smartphones that make our lives, as well as our phones, smarter.
    Apart from that, we used to have heavily built desktops for processing megabytes of data, stored on floppy disks; you will remember how little data those could hold. After that, hard disks were introduced, which can store terabytes of data. Now, thanks to modern technology, we can store data in the cloud as well.
    Similarly, we have seen self-driving cars come up. You may be wondering why we mention all this: with every enhancement of technology, we generate more data. Take your phone as an example. Have you ever noticed how much data your fancy smartphone generates with your every action? Even a single video sent through WhatsApp or any other messenger app generates data. And this is just one example; you have no idea how much data you generate with every action you take. This data is not in a format that relational databases can handle, and apart from that, the volume of data has also increased exponentially.
    As for self-driving cars, such a car has sensors that record every minor detail, like the size of an obstacle and its distance, and then decides how to respond. You can imagine how much data is generated for each kilometer that car drives. Let's move on to the next driver of data.

    2. IoT

    You must have heard about IoT. Recalling the previous paragraph, the self-driving car is nothing but an example of IoT. Let me explain what exactly it is: IoT connects physical devices to the internet and makes the devices smarter. Nowadays we have smart ACs, smart TVs, and so on. Take a smart air conditioner: it monitors your body temperature and the outside temperature, and accordingly decides what the temperature of the room should be.
    In order to do this, it first accumulates data: from the internet, and through sensors that monitor your body temperature and the surroundings. It fetches data from various sources and accordingly decides what the temperature of your room should be. So you can see that because of IoT we are generating a huge amount of data. There are already a lot of IoT devices, and it is predicted that by 2020 there will be 50 billion of them. We will not discuss here how IoT generates such a huge amount of data from smart devices. Let's move forward and discuss another factor that generates big data.

    3. Social Media

    Social media is one of the most important factors in the evolution of big data. Nowadays everyone uses Facebook, Instagram, YouTube, Twitter and a lot of other social media websites, and these websites hold a huge amount of data. We provide personal details like our name and age, and apart from that, every picture we like, react to or comment on also generates data. Even the Facebook pages we like generate data. Nowadays a lot of people share videos on Facebook, which generates a huge amount of data. The most challenging part is that the data is not presented in a structured manner and at the same time it is huge in size. Not only is data generated in huge amounts, it is also generated in different formats: data that comes with videos is in an unstructured format, and the same goes for images. There are millions of ways data is generated nowadays, all contributing to big data.

    4. Other Factors

    All of us visit websites like Amazon, Flipkart, etc. Suppose we want to buy a t-shirt or jeans: we search through a lot of t-shirts or jeans, and that search history gets stored. If we buy something, there is purchase history as well, along with our personal details, and there are numerous ways in which we generate data without knowing it. Before Amazon existed, there was no way such a huge amount of data was generated. Similarly, data is growing for other reasons as well, in banking and finance, media and entertainment, healthcare, transportation, and so on.
    So now to the main point: what exactly is big data, and when do we consider data to be big data?

    What is Big Data 

    Now look at a proper definition of big data: it "is the term for a collection of data sets so large and complex that it becomes difficult to process them using on-hand database system tools or traditional data processing applications".
    What do we understand from this: can our traditional, older systems process this data?
    No, there is too much data to process. When traditional systems were designed, we never anticipated that we would have to deal with such enormous amounts of data.
    So how do we decide whether to classify data as big data? For that we have the 5 V's of big data.

    5 V's of Big Data

    Some people write that there are only 3 V's of big data, but here we will discuss 5. Look at the discussion below to understand how data becomes big data due to these five characteristics.


    1. Volume

    The first V of big data is the volume of the data, which is tremendously large and increasing exponentially. We were dealing with 4.4 zettabytes of data in 2017, and it is expected to grow to 44 zettabytes by 2020, which is equal to 44 trillion gigabytes. That is really huge.

    2. Variety

    All this humongous data comes from multiple sources, which brings us to the second V: variety. We deal with different kinds of files all at once: mp3 files, videos, JSON, CSV, TSV and many more. Some of this data is structured, some unstructured and some semi-structured, all together: audio files, video, PNG, JSON, log files, emails and various other formats. This data is classified into three forms.

    I. Structured Format

    In the structured format, we have a proper schema for our data: we know what the columns will be, so the data is in tabular form.

    II. Semi-Structured Format

    The second is the semi-structured format: JSON, XML, CSV, TSV and email, where the schema is not defined as strictly.

    III. UN-Structured Format

    In the unstructured format, we have log files, audio files, video files and all types of image files.

    3. Velocity

    Velocity is the speed at which this variety of data accumulates, which brings us to the third V. We used to have mainframe computers, huge machines, but less data, because fewer people were working with computers at that time. Then computing evolved to the client-server model, and then came web applications and the internet boom. Day by day the number of web applications on the internet has increased, and now everyone uses these applications from computers as well as mobile devices. More users, more appliances, more apps and more mobile devices mean far more data.
    When we talk about people generating data, the first thing that comes to mind is social media. Think about how much data Instagram alone generates from your posts and stories.
    Consider what social media applications generate every 60 seconds: around 100 thousand tweets on Twitter, 695,000 Facebook status updates, 11 million Instagram messages, 698,445 Google searches and 168 million emails sent every minute, which is almost equal to 1,820 terabytes of data; on top of that, 217 new mobile users are added every minute. That is a lot of data to process and arrange properly, and that is how it becomes big data.

    4. Value

    The bigger problem now is to extract the useful data, which brings us to the next V: value. First, we need to mine useful content from our data, making sure there are useful fields in our dataset; then we clean it and perform analytics on it. After the analysis, the dataset has value: it gives us insights that help the business grow, insights that were not possible to find earlier. Whatever big data has been generated only makes sense if it helps us grow our business and has some value.

    5. Veracity

    Getting that value from the data is a big challenge, which brings us to the next V: veracity.
    Big data comes with a lot of uncertainty and inconsistency. When we are dumping such a huge amount of data, some data packets are bound to be lost in processing, so we need to fill in the missing data, then start mining again, process it, and then come up with the best insights possible. In a real dataset, some of the data is missing, some values are very small and some are very large.

    There are a lot of problems in big data, and a lot of opportunities, which we will discuss in the next article.













