Why Your Virtual Assistant is a Genius: A Journey into Machine Learning
Now it is time to go deeper into artificial intelligence. Machine learning is the first subset of our precioussss, and it is the main topic of this post. Initially, I planned to cover both ML and DL together. However, during the research phase, I found out that there is far too much to unpack in machine learning alone. So here we are. Let’s start from scratch.

Brief History
Machine Learning, or ML, is based on a model of brain cell interaction. Even though it became popular in the 21st century, the model was created in 1949 by Donald Hebb. In his book “The Organization of Behavior” he presented a theory on neuron excitation and communication between neurons.

Hebb stated that, “When one cell repeatedly assists in firing another, the axon of the first cell develops synaptic knobs (or enlarges them if they already exist) in contact with the soma of the second cell.”
Translating Hebb’s concepts to artificial neural networks and artificial neurons, his model can be described as a way of altering the relationships between artificial nodes alongside the changes to the individual nodes. The relationship between two nodes strengthens if they are activated at the same time and weakens if they are activated separately. The word “weight” is used to describe these relationships: nodes that tend to be both positive or both negative at the same time develop strong positive weights, while nodes with opposite signs develop strong negative weights (e.g. 1×1=1, -1×-1=1, -1×1=-1).
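To make this concrete, here is a minimal sketch of Hebb’s rule in Python; the activation values and the learning rate below are made up purely for illustration and are not part of Hebb’s original formulation.

```python
# Hebb's rule: the weight between two nodes changes in proportion to the
# product of their activations ("cells that fire together, wire together").
def hebbian_update(weight, pre, post, learning_rate=0.1):
    return weight + learning_rate * pre * post

weight = 0.0
# Made-up bipolar activations (+1 or -1) for two connected nodes.
for pre, post in [(1, 1), (1, 1), (-1, -1), (1, -1)]:
    weight = hebbian_update(weight, pre, post)
    print(f"pre={pre:+d}, post={post:+d} -> weight={weight:+.2f}")
```

Activating the nodes together (same sign) pushes the weight up; activating them with opposite signs pulls it back down, which is exactly the strengthening and weakening described above.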

If you are not a developer, this could seem very technical. I absolutely agree with you, because I felt the same way. I will put forth what I understood from all the research in a way that’s simple enough for a middle school kid. Sorry, not sorry. The IBM guy explained it in exactly the same theoretical way in his video. If you start from there, you might get stuck like I did. I found a better way by starting from the types of machine learning. We all know from the previous post that ML is a subset of artificial intelligence. There is one and only one statement you should keep in mind while working in this domain.
Artificial Intelligence is a field of technology that enables machines to think and act like humans.
Machine learning focuses on algorithms that work on the given data. These algorithms parse the data, learn from it, and make decisions or predictions. Since we are talking about models and their behavior, every ML system improves its performance over time. But how? It can remember historical data, recognize new patterns, and apply what it has learned statistically. I hope this makes more sense. If some definitions are still unclear, they will become clearer as we discuss the types of machine learning.

Supervised Learning
Imagine that we have a dataset with labels, meaning each row, or in other words each training example, is paired with an output. For instance, a dataset might have three variables and one output. As a real-world example, we could provide three animal characteristics (say, tail, wings, and fur) as inputs and indicate which animal it is as the output. Supervised learning aims for the model to learn a mapping from labeled inputs to outputs so that it can predict the output for new inputs.
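As a rough sketch of that idea, here is a toy supervised model in Python; the tiny animal dataset is invented for illustration, and scikit-learn’s DecisionTreeClassifier simply stands in for “some supervised model”.

```python
from sklearn.tree import DecisionTreeClassifier

# Each row is one training example: [has_tail, has_wings, has_fur].
# The paired output (label) is the species that row describes.
X = [
    [1, 0, 1],  # tail, no wings, fur
    [1, 1, 0],  # tail, wings, no fur
    [0, 1, 0],  # no tail, wings, no fur
]
y = ["cat", "bird", "butterfly"]

model = DecisionTreeClassifier().fit(X, y)

# The model has learned a mapping from characteristics to species:
# give it a set of characteristics and it predicts the label.
print(model.predict([[1, 1, 0]]))  # -> ['bird'] on this toy data
```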

Applications
As a data analyst, I find that understanding the applications of machine learning is one of the most crucial steps toward using its power wisely.
- Linear Regression: This finds the relationship between a dependent variable and one or more independent variables, such as price predictions in real estate or stocks (a short sketch follows this list). It also helps in understanding ad performance to optimize budget allocation for marketing campaigns.
- Logistic Regression: This can be used for binary classification problems, such as determining whether an email is spam or not. It is also used for credit scoring of bank customers and for predicting customer churn.
- Decision Trees: These can be used for both regression and classification tasks, such as fraud detection and product recommendations.
- Neural Networks: These are used with large amounts of data and are especially important for deep learning models, which are inspired by the human brain. Examples include image recognition on X-rays and MRIs to detect anomalies, and speech recognition for virtual assistants like Alexa and Siri.
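As an example of the regression use case from the list above, here is a minimal sketch that fits a linear regression on made-up house sizes and prices; the numbers are invented and do not come from any real market data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Made-up training data: house size in square metres vs. sale price.
sizes = np.array([[50], [80], [120], [200]])
prices = np.array([150_000, 240_000, 355_000, 600_000])

model = LinearRegression().fit(sizes, prices)

# Predict the price of a 100 m2 house the model has never seen.
print(f"Predicted price: {model.predict([[100]])[0]:,.0f}")
```
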
Unsupervised Learning
In supervised learning we need to provide data together with its labels. In unsupervised learning, however, the dataset has no labels. The model is trained on unlabeled data and has to learn and discover the patterns in the data itself, which means there has to be some structure or relationship hidden within the given data.

Applications
As the definition suggests, unsupervised learning models figure out hidden patterns, such as grouping similar data points or finding correlations between them.
- Clustering: Unsupervised models group similar data points together, using algorithms such as K-means or DBSCAN. This can help segment customers based on their purchasing behavior or detect anomalies in network traffic for security purposes (there is a small sketch after this list).
- Association: This reveals how strongly data points tend to occur together. It discovers hidden correlations between items, as in market basket analysis, which is often done with the Apriori algorithm. It can also be used in manufacturing to discover associations between process parameters and product quality.
- Principal Component Analysis: PCA reduces the dimensionality of a dataset. For example, it is used when analyzing patient data for disease diagnosis, for risk management in portfolios, or for reducing the size of image files.
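To show what clustering looks like in practice, here is a minimal K-means sketch on made-up customer data; the two features per customer (purchases per month and average basket value) and the numbers themselves are invented for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

# Made-up, unlabeled customers: [purchases per month, average basket value].
customers = np.array([
    [2, 15], [3, 20], [1, 10],        # look like low-activity shoppers
    [20, 120], [25, 150], [22, 130],  # look like high-activity shoppers
])

# Ask K-means to find two groups; note that no labels are provided anywhere.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(customers)

print(kmeans.labels_)           # the segment each customer was assigned to
print(kmeans.cluster_centers_)  # the "average" customer of each segment
```
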
Reinforcement Learning
In reinforcement learning, algorithms learn to make better decisions from the results of their actions. What kind of results are we talking about here? Rewards (positive) and penalties (negative); these are the key signals in RL. With each try, the algorithm receives feedback in the form of rewards and penalties. In this way, it learns the best strategy to maximize its cumulative reward. If you watch IBM’s video, you can understand it very clearly.

Applications
I was today years old when I learned the facts behind online ad performance.
- Policy Gradient Methods: These optimize the strategy (the policy) directly by gradient ascent on the expected reward, making them suitable for continuous and high-dimensional environments. Hey, marketing folks, does this sound familiar? It is part of advertising. Have you ever wondered how your ads are optimized toward the result you selected? Here is the answer: the algorithm learns its way toward better results and gradually improves performance. Other applications include portfolio management to maximize expected return and natural language processing, where chatbots learn to generate more relevant answers for a given context.
- Q-Learning: The algorithm learns the value of taking specific actions in specific states. This can be applied to trial-and-error problems such as tic-tac-toe in gaming or navigation in robotics (there is a small sketch after this list).
- Deep Q-Networks: These are an advanced version of Q-Learning that use a neural network to estimate the action values, which makes them a better fit for more complex areas like autonomous vehicles.
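For the Q-Learning item above, here is a minimal sketch of the reward-and-penalty loop; the five-state world, the reward of +1 at the goal, and the hyperparameters are all made up just to show the update rule.

```python
import random

import numpy as np

# A made-up world: 5 states in a row, start at state 0, the goal is state 4.
# Actions: 0 = step left, 1 = step right. Reaching the goal earns a reward of +1.
n_states, n_actions, goal = 5, 2, 4
Q = np.zeros((n_states, n_actions))      # value of each action in each state
alpha, gamma, epsilon = 0.5, 0.9, 0.3    # learning rate, discount, exploration

for episode in range(200):
    state = 0
    while state != goal:
        # Explore sometimes, otherwise take the action that looks best so far.
        if random.random() < epsilon:
            action = random.randrange(n_actions)
        else:
            action = int(np.argmax(Q[state]))
        next_state = max(0, state - 1) if action == 0 else state + 1
        reward = 1.0 if next_state == goal else 0.0

        # The Q-learning update: move the value toward the reward plus the
        # discounted value of the best action in the next state.
        Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
        state = next_state

# The learned action per state (1 = step right); the goal row stays untrained.
print(np.argmax(Q, axis=1))
```
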
Machine learning algorithms offer tons of advantages for humanity, such as the automation of repetitive tasks, data-driven insights for decision-making processes, and personalization for better user experiences in shopping. As we provide more data to algorithms, they will continuously improve themselves. However, that also means there are limitations. To train a model, algorithms require large amounts of high-quality data, so data dependency is crucial here. Since models are making their own decisions, the results are open to interpretation and criticism. Last but not least, there is the issue of overfitting, the nightmare of models, where the model learns the training data too well and performs poorly on new data.
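A quick way to catch overfitting is to compare a model’s accuracy on the data it trained on with its accuracy on data it has never seen. The sketch below uses made-up noisy data and a deliberately unconstrained decision tree, purely to illustrate the gap.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Made-up noisy dataset: 200 examples, 5 features, labels driven mostly by feature 0.
rng = np.random.RandomState(0)
X = rng.rand(200, 5)
y = (X[:, 0] + 0.3 * rng.rand(200) > 0.65).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A tree with no depth limit can memorize the training set, noise and all.
model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

print("train accuracy:", model.score(X_train, y_train))  # close to 1.0
print("test accuracy: ", model.score(X_test, y_test))    # noticeably lower
```
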
In Conclusion
There are numerous uses for machine learning in every domain, and we need to be careful while applying them. As this field continues to evolve, humanity may begin to fall short on its own, and these algorithms will become our hands and feet. Till next time, stay safe, my friend (I mean from the robots).