Grasping the Idea Behind ML
Nowadays, everybody is familiar with the terms AI (Artificial Intelligence) and ML (Machine Learning). But much of what is presented as AI is simply a reflection of ML solutions, and sometimes ML is unnecessarily used to solve extremely simple problems. Hence, it is essential to understand what types of problems ML can solve, and when it is actually the right tool.
Problems & Scenarios Requiring ML
ML can basically be described as an ever-evolving algorithm, which can also be seen as one complex mathematical function. Every computer process follows the simple structure of the input-process-output (IPO) model: we define the allowed inputs, the process that works on them, and the output, i.e., the type of results the process will show us.
All these algorithms and processes have one thing in common: they were written manually by someone using a high-level programming language. The programmer spells out the action to be taken when, for example, someone presses a letter key in a word-processing application; in other words, the program explicitly defines which input values should produce which output values.
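As a minimal sketch of this idea (the function and its rules below are invented purely for illustration), such an explicitly programmed IPO routine might look like this:

```python
# A hand-written IPO routine: the programmer explicitly encodes
# which inputs (key presses) map to which outputs (editor actions).
def process_keypress(key):
    if key == "BACKSPACE":
        return "delete previous character"
    if len(key) == 1:
        return f"insert character '{key}'"
    return "ignore"

print(process_keypress("a"))          # insert character 'a'
print(process_keypress("BACKSPACE"))  # delete previous character
```

An ML system turns this around: instead of being handed the rules, it is handed examples of inputs and outputs and must derive the mapping itself.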
The History of ML
In order to understand ML more deeply, we must first understand where it came from.
Learnings from Neuroscience
Donald O. Hebb, a neuropsychologist, published a book titled The Organization of Behavior in 1949, in which he described how a neuron contributes to our learning, a mechanism now known as Hebbian learning:
When an axon of cell A is near enough to excite cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased.
The biological structure of a neuron is as follows:
[Figure: the biological structure of a neuron.]
If we look at this structure with some creativity, we can see that it closely resembles an algorithm: input signals arrive from other neurons via the dendrites, a hidden process operates on these signals inside the cell, and an output leaves through the axon terminals, which pass the result on to other neurons, and therefore to other processes.
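Read this way, a single neuron behaves like a small function. Here is a minimal sketch of the analogy (the inputs, weights, and threshold below are arbitrary illustration values, not from the original text):

```python
def neuron(inputs, weights, threshold=0.5):
    """Combine incoming signals and 'fire' (output 1) if their
    weighted sum exceeds the threshold."""
    signal = sum(x * w for x, w in zip(inputs, weights))
    return 1 if signal > threshold else 0

# Three incoming signals with hand-picked connection weights:
print(neuron([1.0, 0.0, 1.0], [0.4, 0.9, 0.3]))  # 1, since 0.7 > 0.5
```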
Learnings from Computer Science
Alan Turing published a paper called Computing Machinery and Intelligence in 1950, in which he defined a test called the Imitation Game, or Turing test, to evaluate whether a machine can exhibit behavior indistinguishable from that of a human. The basic idea was that, during a conversation, a person should not be able to tell that they are not speaking to a human. Although the test was flawed, the paper triggered one of the first discussions on what AI could be and what it means for a machine to learn.
After this, Arthur Samuel, a researcher at IBM at the time, started developing a computer program that could make the right decisions in a game of checkers, which led to the definition of the so-called minimax algorithm and its accompanying search tree, commonly used for any two-player adversarial game and sketched below. He coined the term machine learning, defining it as follows:
The field of study that gives computers the ability to learn without being explicitly programmed.
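To see what Samuel's program was doing at its core, here is a minimal minimax sketch for a toy two-player game (the game itself, a tiny take-1-or-2 variant of Nim, is chosen purely for illustration and has nothing to do with checkers beyond being adversarial):

```python
def minimax(pile, maximizing):
    """Minimax for toy Nim: players alternately take 1 or 2 stones,
    and whoever takes the last stone wins. Returns +1 if the
    maximizing player can force a win from this position, else -1."""
    if pile == 0:
        # The previous player took the last stone and has already won.
        return -1 if maximizing else 1
    results = [minimax(pile - take, not maximizing)
               for take in (1, 2) if take <= pile]
    # The maximizer picks the best outcome, the minimizer the worst.
    return max(results) if maximizing else min(results)

print(minimax(4, True))  # 1: take one stone, leaving a losing pile of 3
```

Checkers works the same way in principle, only with a vastly larger search tree, which is why Samuel additionally made his program learn a scoring function for board positions instead of searching to the end of the game.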
Combining these ideas with the research done by Donald O. Hebb in neuroscience, Frank Rosenblatt, a researcher at the Cornell Aeronautical Laboratory, invented a new linear classifier called the perceptron. Although his progress was short-lived, its original definition is still considered the basis of every neuron in an Artificial Neural Network (ANN).
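To make the perceptron concrete, here is a minimal sketch of its learning rule, training a single neuron on the logical AND function (the learning rate, epoch count, and zero initialization are arbitrary choices for illustration):

```python
def train_perceptron(samples, lr=0.1, epochs=10):
    """Rosenblatt's learning rule: whenever the neuron misclassifies
    a sample, nudge its weights and bias toward the correct answer."""
    w = [0.0, 0.0]  # one weight per input signal
    b = 0.0         # bias, playing the role of the firing threshold
    for _ in range(epochs):
        for (x1, x2), target in samples:
            fired = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = target - fired
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

# Learn logical AND: fire only when both inputs are active.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
for (x1, x2), _ in data:
    print((x1, x2), "->", 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0)
```

Because AND is linearly separable, the rule converges; the same single perceptron famously cannot learn XOR, one of the findings that cut Rosenblatt's progress short.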