
Mind and Machine: The Dawn of a New Era

By Sukanya Chakraborty and Siddhartha Sen

Image: the brain depicted as a network of nodes

Ancient Greek philosophers spent much of their time pondering what truly makes one intelligent, but the question was embraced by science and research only about half a century ago. Ever since its inception, neuroscience has strived to understand how the brain processes information, makes decisions, and interacts with the environment. In the mid-20th century, a new school of thought arose: how can we emulate intelligence in an artificial system? This does sound daunting and can leave the odd eccentric mind wondering about its dystopian implications. However, Artificial Intelligence, despite its mystifying name, is not something that flourishes only in mind-bending science fiction. It has well-grounded roots in the real world as well.

Golgi Staining of Brain Tissue Slices, NIMHANS, Neurophysiology, 2019

Brain science and AI have been progressing in a closely knit fashion for the past several decades. Soon after the birth of modern computers, research on AI gained momentum, with the goal of building machines that can “think”. With advances in microscopy and staining techniques in the early 1900s, researchers began probing the neuronal connections in brain tissue. The intricate web of connections between neurons inspired computer scientists to design the Artificial Neural Network (ANN), one of the earliest and most enduring models in the history of AI.

In 1949 came a ground-breaking revelation: Hebbian learning. It lies at the root of one of the oldest learning algorithms and was directly inspired by the dynamics of biological neural systems. The essential principle is that when the neurons on either side of a synapse fire together repeatedly, the connection between them is strengthened. In a biological context, this is one of the ways we learn and remember things, the way that “potentiation” of memory occurs. Finding your way back home or reciting the 12 times table seems effortless because the synapses along these pathways have been strengthened through relentless use. Drawing a parallel with AI, when the activities of an input neuron and an output neuron are highly correlated, the connection weight between them is increased.
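
As a rough illustration (not from the article, and assuming nothing beyond NumPy), the Hebbian rule can be written in a few lines: the weight of a synapse grows in proportion to the product of pre- and post-synaptic activity.

```python
import numpy as np

def hebbian_update(w, x, y, lr=0.1):
    """Strengthen each weight in proportion to the product of
    pre-synaptic activity x and post-synaptic activity y."""
    return w + lr * y * x

# Toy example: one output neuron with three input synapses.
w = np.zeros(3)
x = np.array([1.0, 0.0, 1.0])  # inputs 1 and 3 are active
y = 1.0                        # suppose the output neuron fired
w = hebbian_update(w, x, y)
print(w)  # only the weights of the active (correlated) inputs grow
```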

Following this development, ANNs witnessed an enormous surge in research as new vistas opened up for exploration. A singular achievement was the perceptron, a model of information storage and organization in the brain developed by Frank Rosenblatt in 1957. This single-layer ANN processes multiple inputs through a weighted sum and a threshold, and it laid the foundation for the multilayer networks that followed.
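
A minimal sketch of a Rosenblatt-style perceptron (an illustrative toy, not Rosenblatt's original hardware or exact procedure) might look like this: a weighted sum of inputs, a hard threshold, and error-driven weight updates.

```python
import numpy as np

def perceptron_train(X, y, lr=0.1, epochs=10):
    """Train a single-layer perceptron: weighted sum + hard threshold,
    with weights nudged whenever the prediction is wrong."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if np.dot(w, xi) + b > 0 else 0
            w += lr * (target - pred) * xi
            b += lr * (target - pred)
    return w, b

# Toy example: learning the logical AND function.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = perceptron_train(X, y)
print([(1 if np.dot(w, xi) + b > 0 else 0) for xi in X])  # [0, 0, 0, 1]
```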

When did the spark turn into a flame? The 1981 Nobel Prize in Physiology or Medicine was awarded in part to Hubel and Wiesel for elucidating how the visual system processes information. Using microelectrode recordings, they captured the responses of individual neurons in the visual cortex as different images were presented. What did they find? A plethora of complicated operations at successive levels of the nervous system. This showed that biological systems are adept at using successive layers of nonlinear computations to transform simple inputs into complex features. These observations laid the foundation for Convolutional Neural Networks (CNNs), the fundamental model behind the revolutionary deep learning technique.
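
To make the idea concrete, here is a small, illustrative sketch (not the original experiment or a full CNN) of a single convolutional filter behaving a little like a simple cell tuned to vertical edges: a nonlinear response that is strongest where the preferred feature appears.

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide a small kernel over the image, taking a weighted sum at each
    position, which is roughly how a convolutional layer builds a feature map."""
    kh, kw = kernel.shape
    out_h, out_w = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge kernel, loosely analogous to a cell tuned to one orientation.
kernel = np.array([[-1.0, 0.0, 1.0]] * 3)
image = np.zeros((5, 6))
image[:, 3:] = 1.0                                    # left half dark, right half bright
response = np.maximum(convolve2d(image, kernel), 0)   # ReLU-style nonlinearity
print(response)                                       # strongest responses line up with the edge
```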

The human brain is composed of an astounding 86 billion neurons, intricately connected to one another and to other neural cells. Activation of a neuron creates a spike in membrane potential, which is transmitted as an electrical signal. By analogy, the neuronal cell body corresponds to a node, the dendrites to the inputs, the axon to the output, and the synapses to the weights in an ANN. The action potential of an individual neuron is generated at the junction between the cell body and the axon, based on the inputs the neuron receives. In an ANN, each node computes a linear combination (weighted sum) of its inputs and passes it through a nonlinear function such as a sigmoid; this activation plays the role of the neuron's spike. A typical ANN consists of at least three layers: input, hidden, and output. The complexity of the network grows with the number of hidden layers.
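
A toy forward pass through such a three-layer network, sketched here with arbitrary random weights for illustration, shows how each node combines a weighted sum with a sigmoid activation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W1, b1, W2, b2):
    """One pass through a tiny input -> hidden -> output network:
    each node takes a weighted sum of its inputs (the 'synaptic' weights)
    and squashes it with a sigmoid, standing in for the neuron's spike."""
    hidden = sigmoid(W1 @ x + b1)       # hidden-layer activations
    output = sigmoid(W2 @ hidden + b2)  # output-layer activation
    return output

# Toy network: 3 inputs, 4 hidden nodes, 1 output (weights chosen at random).
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)
print(forward(np.array([0.5, -1.0, 2.0]), W1, b1, W2, b2))
```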

Layers in an Artificial Neural Network, sourced from Xenonstack.com

In the modern world, AI and neuroscience are interlinked, much like the coordinated cosmic dance of galaxies during the birth of new stars. The complexity of the brain and its cognitive power demand multidisciplinary expertise. AI aims to investigate theories and build computer systems equipped to perform tasks that require human-like intelligence, decision-making and control, and so it has an important role in understanding the nuances of the brain. AI can also benefit in return, because a detailed analysis of how the brain works could provide important insights into the nature of cognition itself and help in simulating it artificially.

Arp 271 Twin Galaxies, Flickr

AI is rapidly metamorphosing into an indispensable tool in neuroscience. Training an algorithm to mimic vision and hearing helps us understand how the corresponding biological networks function. It can also help crack the activation patterns of the groups of neurons that underlie a thought or behaviour. Incorporating AI into brain implants is making their internal processes, such as identifying electrical spikes of activity, far more effective. Machine learning has greatly simplified the interpretation and reconstruction of data obtained from techniques such as functional Magnetic Resonance Imaging (fMRI), which are integral to brain research.

AI and neuroscience collectively aim to answer several questions that have remained unaddressed for a long time. One such question is the exact mechanism of learning. ANNs mostly perform supervised learning. The task of image recognition, for example, is performed by training neural networks on the labelled ImageNet dataset. The networks develop a statistical understanding of what images with the same label ('dog', for instance) have in common. When shown a new image, the networks examine it for similar patterns and respond accordingly.
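
The sketch below is a deliberately tiny stand-in for that process (toy two-dimensional data rather than ImageNet, and a single linear unit rather than a deep network): a supervised learner adjusts its weights until its predictions agree with the labels it is given.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy supervised learning: labelled examples drive the weight updates.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # "label": which side of a line

w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(500):
    pred = sigmoid(X @ w + b)
    grad = pred - y                          # error signal from the labels
    w -= lr * X.T @ grad / len(y)
    b -= lr * grad.mean()

accuracy = ((sigmoid(X @ w + b) > 0.5) == y).mean()
print(f"training accuracy: {accuracy:.2f}")  # close to 1.0 on this toy task
```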

Neural networks are only a rough representation of the brain, as they model neurons as numbers in high-dimensional matrices. Our brains, in contrast, function as highly sophisticated devices that compute with both electrical and chemical signals. That is part of what makes us unique as individuals and vastly different from machines.

It is an undeniable fact that AI has left an indelible mark on the world in several facets. Nevertheless, for machines to achieve the computational power and mystique of the human brain may well remain an impossible feat for years to come.

REFERENCES

  1. Hassabis, D., Kumaran, D., Summerfield, C., & Botvinick, M. (2017). Neuroscience-inspired artificial intelligence. Neuron, 95(2), 245-258.
  2. Hubel, D. H. (1982). Exploration of the primary visual cortex, 1955–78. Nature, 299(5883), 515-524.
  3. Potter, S. M. (2007). What can AI get from neuroscience? In 50 Years of Artificial Intelligence (pp. 174-185). Springer, Berlin, Heidelberg.
  4. Van der Velde, F. (2010). Where artificial intelligence and neuroscience meet: The search for grounded architectures of cognition. Advances in Artificial Intelligence, 2010.
