Nwadiugwu Seminar Research Manuscript


NEURAL NETWORK, ARTIFICIAL INTELLIGENCE AND THE COMPUTATIONAL BRAIN
Thesis · August 2015
DOI: 10.13140/RG.2.1.3942.3204


NEURAL NETWORK, ARTIFICIAL INTELLIGENCE AND THE COMPUTATIONAL BRAIN

Nwadiugwu, M. C.*

University of Ilorin, Department of Anatomy, Ilorin. Nigeria

Submitted June 2015
Correspondence: University of Ilorin, College of Health Sciences, Faculty of Basic Medical Sciences, Department of Anatomy. Email: [email protected] Tel: +2347040423995

ABSTRACT

In recent years, scientists have learned a great deal about how the brain functions. The brain is composed of nerve cells, which are connected to other nerve cells by synapses to form networks. A system of interconnected neurons forms a neural network, of which there are two types: the Biological Neural Network (interconnected nerve cells) and the Artificial Neural Network (ANN). ANNs are computational tools inspired by the neurons of the brain and are used to model a biological brain. ANNs can be applied in pattern recognition, anomaly detection, electronic noses, the instant physician, and in diagnosing the cardiovascular system via neural modeling. Scientists are drawing on the architecture of the human brain in developing artificial intelligence: research into potential systems of artificial intelligence now looks to the brain for models, rather than looking to technology for ideas from which to model the brain. Computation in the brain brings together computational concepts and behavioral data within a neurobiological framework.


CHAPTER ONE

1.0 INTRODUCTION

The human brain has long been under serious investigation by researchers in the field of neuroscience. In times past there was considerable study of the structure of the brain (its anatomy), but accounts of the functional operation of its complex neural network paraded all sorts of fantasies as knowledge for many centuries (Sundal et al., 2014). Around the middle of the 18th century, a functional understanding of the human brain began to take shape. Studies at that time revealed that nerve signals, formerly thought of as "animal spirits", are actually electrical signals not very different from the currents that flow in an electrical circuit (Sundal et al., 2014). In addition, advances in microscopy and neuroscience revealed the morphology of neurons and presented a vision of the brain as a network of neurons, unraveling how neurons interact among themselves using chemical signals. The adult human brain is made up of about 100 billion neurons, each with about 1,000-10,000 connections, making a total of 10^14-10^15 connections in the brain (Sundal et al., 2014). It is perhaps the most complex system known, more complex than the entire mobile network of the world, with its neurons making and breaking connections on a time-scale that can be as short as a few tens of seconds (Sundal et al., 2014).

How the brain enables human beings to think has remained a mystery to the present day, but significant ventures in the field of Artificial Intelligence have enabled scientists to come close to the nature of the thought processes inside a brain (Zhang, 2011). In this area, the Artificial Neural Network (ANN) is employed as a computational tool to model a biological brain (Willamette, 2014). Artificial intelligence seeks to answer questions like "how do the networks of neurons in the visual processing areas of the brain transduce the optical image that falls on the retina, and how can they be simulated to make intelligent devices?" Answers to such questions are best described in the language of mathematics, which is the primary preoccupation of computational neuroscience (Sundal et al., 2014). In the same vein, computation in the brain involves understanding human and animal brains using computational models (computational neurobiology), and simulating and building machines to emulate the real brain (neural computing). All these and other diverse neural network models are examined in the emerging field of computational neuroscience.


CHAPTER TWO

2.0 ANATOMY OF A NEURON

The basic unit of the nervous system is the neuron, the name given to the nerve cell and all its processes (Snell et al., 2010). Neurons are found in ganglia and in the brain and spinal cord (Snell et al., 2010). They are excitable cells specialized for the reception of stimuli and the conduction of the nerve impulse, receiving input from other neurons (Snell et al., 2010). Neurons vary considerably in size and shape, but each possesses a cell body from whose surface project one or more processes called neurites (Snell et al., 2010). Neurites that receive information and conduct it towards the cell body are called dendrites, while the single long neurite that conducts impulses away from the cell body is the axon; the term nerve fiber is used to refer to either (Snell et al., 2010).

Fig. 1.0 Structure of a Neuron

(Papadourakis, 2014)


The figure (Fig. 1.0) above shows the structure of a biological neuron with its various parts: (a) the dendrites, (b) the cell body or soma, and (c) the axon.

THE DENDRITES
The dendrites are the branched projections of a neuron that conduct the electrochemical stimulation received from upstream neurons, via synapses located at various points throughout the dendritic arbor, to the cell body from which the dendrites project (Mujeeb, 2012). Dendrites are important in integrating synaptic inputs and in determining the extent to which action potentials are produced by the neuron (Mujeeb, 2012). Moreover, recent research has found that dendrites can support action potentials and release neurotransmitters, a property previously believed to be specific to axons (Mujeeb, 2012).

THE SOMA
The soma is where the signals from the dendrites are summed and passed on. It serves to maintain the cell and keep the neuron functional, but does not play an active role in the transmission of the neural signal (Mujeeb, 2012).

THE AXON
The axon, also known as a nerve fiber, is a long, slender projection of a nerve cell that typically conducts electrical impulses away from the neuron's cell body or soma (Mujeeb, 2012). Axons are distinguished from dendrites by several features, including shape, length, and function. Some types of neurons have no axon and transmit signals from their dendrites (Mujeeb, 2012). No neuron ever has more than one axon, although axons may branch, in some cases very profusely (Mujeeb, 2012).

2.1 BIOLOGICAL AND ARTIFICIAL NEURONS

The types of neurons that interconnect to form neural networks are:
1. Biological neurons (nerve cells)
2. Artificial neurons
Biological neurons constitute the basic building blocks of the human brain. Artificial neurons, on the other hand, form artificial neural networks (ANNs): computer algorithms inspired by the neuron and modeled after the brain to perform specific computational tasks (Mano, 2014).

Fig 1.1 Artificial Neuron Model


(Stergiou and Siganos, 2014)

The figure (Fig. 1.1) above shows the model of an artificial neuron described in 1943 by McCulloch and Pitts. It consists of inputs carrying an electrical impulse, represented as 1, or no electrical impulse, represented as 0 (Reed, n.d.). Each input has a weight associated with it, and the activation function multiplies each input value by its weight. If the sum of the weighted inputs is greater than or equal to the threshold, the neuron fires and returns 1; if not, it does not fire and returns 0 (Reed, n.d.). Each neuron thus has an activation threshold and a series of weighted connections to other neurons (Mano, 2014). When the aggregate activation that a neuron receives from the neurons connected to it exceeds its activation threshold, the neuron fires and relays its activation to the neurons connected to it. The weights associated with these connections can be modified by training the network to perform a certain task, and this modification accounts for learning (Mano, 2014). The artificial neuron forms the basis of artificial neural networks (ANNs), which became a focus of computer science research in the 1950s because, it was observed, humans lack the speed and memory of computers yet are capable of complex action and reasoning.

2.2 NEURAL NETWORKS

Neural networks are formed by interconnecting neurons. There are two types of neural network: the Biological Neural Network (BNN) and the Artificial Neural Network (ANN).

2.3 BIOLOGICAL NEURAL NETWORKS

The neural network in the brain is an interconnected web of biological neurons transmitting elaborate patterns of electrical signals. The human brain can be distinguished anatomically into several divisions, such as the cortex, brainstem and cerebellum. It can further be subdivided into areas and regions according to the functions they perform and the anatomical structure of the neural networks within them (Mano, 2014).
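As an illustration, the McCulloch-Pitts threshold unit described in section 2.1 can be sketched in a few lines of Python. This is a toy sketch added for illustration, not code from any of the cited sources; the weights and threshold below are chosen so that the unit acts as a logical AND gate.

```python
def mcculloch_pitts(inputs, weights, threshold):
    """Fire (return 1) when the weighted sum of the binary inputs
    reaches the activation threshold; otherwise stay silent (return 0)."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return 1 if weighted_sum >= threshold else 0

# With unit weights and a threshold of 2, the neuron fires only when
# both binary inputs are active, i.e. it computes logical AND.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", mcculloch_pitts([a, b], [1, 1], 2))
```

Lowering the threshold to 1 turns the same unit into an OR gate, which is precisely the sense in which learning amounts to adjusting weights and thresholds.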


Fig 2.0 Anatomical areas of the Human Brain (Willamette, 2014)

The figure (Fig. 2.0) shows the different anatomical areas of the human brain. These areas consist of numerous biologically interconnected neurons, which work together in a very complex and elaborate way. In a biological neural network, a neuron receives input signals through its dendrites and fires an output signal based on those inputs. The overall pattern of bundles of neural connections (projections) between areas is extremely complex, and only partially known.


Fig. 2.1 A Biological Neural Network (Shiffman, 2014)

The figure (Fig. 2.1) shows a biological neural network formed by interconnected nerve cells. Apart from forming long-range connections with distant neurons, nerve cells also link up with many thousands of their neighbors to form very dense, complex local networks.

2.3.1 GENERAL ARCHITECTURE OF NEURAL NETWORKS IN THE BRAIN

The general brain architecture contains many networks formed by interconnected neurons. Each neuron is a cell that uses biochemical reactions to receive, process, and transmit information. The terminal buttons of each neuron are connected to other neurons across small gaps called synapses.


Fig. 2.3 A neuron forming a synapse

(Mujeeb, 2012)

Figure 2.3 shows a neuron forming a synapse through the connection of its axon endings with the dendrites of another neuron. The dendrites serve as the input device, receiving electrical signals or impulses from other neurons. The cell body or soma sums the inputs from the dendrites, which cause excitation or inhibition; when the summed input exceeds some threshold, the neuron fires an output along the axon (Mujeeb, 2012). A neuron typically receives input from other neurons via its dendrites, as seen in Figure 2.3, and its dendritic tree may be connected to a thousand neighboring neurons. When an upstream neuron fires, a positive or negative charge is received by one of the dendrites. The strengths of all the received charges are added together through the processes of spatial and temporal summation, and once the summed input exceeds a critical level, the neuron discharges an electrical pulse that travels from the cell body, down the axon, to the next neuron(s) or receptor(s). This firing event (depolarization) is followed by a refractory period, during which the neuron is unable to fire. The axon endings of a neuron form its output zone; they almost touch the dendrites or cell body of the next neuron. Transmission of an electrical signal from one neuron to the next is effected by chemicals called neurotransmitters, which are released from the first neuron and bind to receptors in the second. This linkage between the two neurons is called a synapse (Mujeeb, 2012).

THE SYNAPSE

(Tewari, 2012) Fig. 2.4 Synapse

A synapse is a structure that permits a neuron to pass an electrical or chemical signal to another neuron. The word "synapse" comes from "synaptein", which Sir Charles Scott Sherrington and colleagues coined from the Greek "syn-" (together) and "haptein" (to clasp). Synapses are essential to neuronal function and are the means by which neurons pass signals to individual target cells. At a synapse, the plasma membrane of the presynaptic neuron comes into close apposition with the membrane of the postsynaptic cell. Both the presynaptic and postsynaptic sites contain extensive arrays of molecular machinery that link the two membranes together and carry out the signaling process (Mujeeb, 2012). In many synapses the presynaptic part is located on an axon, but some presynaptic sites are located on a dendrite or soma. There are two fundamentally different types of synapse:
1. The chemical synapse, and
2. The electrical synapse.
In a chemical synapse, "the presynaptic neuron releases a chemical called a neurotransmitter that binds to receptors located in the postsynaptic cell, usually embedded in the plasma membrane" (Mujeeb, 2012). The neurotransmitter may initiate an electrical response or a second-messenger pathway that either excites or inhibits the postsynaptic neuron. In an electrical synapse, the presynaptic and postsynaptic cell membranes are connected by gap junctions that are capable of passing electrical current, allowing voltage changes in the presynaptic cell to induce voltage changes in the postsynaptic cell (Mujeeb, 2012). Electrical synapses transfer signals from one cell to the next very rapidly, which is their major advantage.

The extent to which the signal from one neuron is passed on to the next depends on many factors, such as:
(a) the amount of neurotransmitter available,
(b) the number and arrangement of receptors, and
(c) the amount of neurotransmitter reabsorbed.
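The summation-and-threshold behavior described in this chapter can be caricatured with a leaky integrate-and-fire unit, a standard textbook abstraction. This sketch is illustrative only; the threshold and leak values are arbitrary assumptions, not parameters from the cited sources.

```python
class LeakyNeuron:
    """Accumulates incoming charge; the membrane potential leaks
    between steps, and the unit fires and resets at threshold."""

    def __init__(self, threshold=1.0, leak=0.9):
        self.threshold = threshold
        self.leak = leak
        self.potential = 0.0

    def step(self, charge):
        # Temporal summation: the decayed old potential plus the new input.
        self.potential = self.potential * self.leak + charge
        if self.potential >= self.threshold:
            self.potential = 0.0  # crude refractory reset
            return 1  # spike
        return 0

neuron = LeakyNeuron()
# Repeated sub-threshold excitatory inputs eventually sum past threshold.
spikes = [neuron.step(0.4) for _ in range(6)]
print(spikes)  # [0, 0, 1, 0, 0, 1]
```

An inhibitory input would be a negative charge, pushing the potential down and delaying the next spike.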


2.4 ARTIFICIAL NEURAL NETWORKS

An artificial neural network (ANN) is a computational system in which information is processed collectively, in parallel, throughout a network of nodes (neurons) (Shiffman, 2014). In an ANN the individual elements of the network, the neurons (nodes), read an input, process it, and generate an output; a network of many such neurons can exhibit incredibly rich and intelligent behaviors (Shiffman, 2014). An ANN can also be described as an information-processing paradigm inspired by the way biological nervous systems process information, the key element being the novel structure of the information processing system (Stergiou and Siganos, 2014). It is a method of data analysis which imitates the human brain's way of working: a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems. As a part of Artificial Intelligence, ANNs attempt to bring computers a little closer to the brain's capabilities by imitating certain aspects of information processing in the brain in a highly simplified way (Willamette, 2014).

An ANN is thus a programmed computational model that aims to replicate the neural structure, architecture and functioning of the human brain. It consists of an interconnected structure of artificially produced neurons that function as pathways for data transfer, and it is flexible and adaptive, learning and adjusting with each different internal or external stimulus. ANNs have been used successfully over the years on many types of problems, with different degrees of complexity, in many fields of application. They represent the way in which arrays of neurons probably function in biological learning and memory, and are known as computational models with particular characteristics such as the ability to learn or adapt, and to organize or generalize data.

ANNs learn by training with examples, in a process that uses a training algorithm to iteratively adjust the connection weights between neurons to produce the desired input-output relationships. They have been widely used in optimization, calibration, modeling and pattern recognition. ANNs are very useful in the medical and pharmaceutical sciences and in the diagnosis of diseases, and have shown good potential in the calculation of physicochemical and biological properties of drugs, with most attention in the pharmaceutical and chemical areas. In recent years, the pharmaceutical applications of ANNs have been reviewed by Agatonovic-Kustrin and Beresford, and ANNs have been used to calculate the aqueous solubility of drugs from a number of molecular descriptors. It has been proposed that, by designing and testing the appropriate ANN, the binding energy of drugs could be predicted on the basis of structural descriptors of selected basic drugs.

2.4.1 HISTORICAL BACKGROUND OF ANNs

Artificial Neural Network (ANN) simulations appear to be a recent development, but the field was established before the advent of computers. The first artificial neuron was produced in 1943 by the neurophysiologist Warren McCulloch and the logician Walter Pitts (Stergiou and Siganos, 2014), although they could not achieve much with the technology available at that time. The later use of inexpensive computer emulations enabled important advances in neural network simulations.

There were periods of excitement initially, followed by periods of frustration when funding and professional support were minimal, and further vital advances were made by relatively few researchers (Stergiou and Siganos, 2014). In 1969, Minsky and Papert published a book summing up a general feeling of frustration against neural networks among researchers, and it was accepted by many without further analysis. However, other pioneers of neural network simulation were later able to develop convincing technology which surpassed the limitations identified by Minsky and Papert (Stergiou and Siganos, 2014). The neural network field currently enjoys a resurgence of interest and a corresponding increase in funding.

2.5 GENERAL ARCHITECTURE OF ARTIFICIAL NEURAL NETWORKS (ANNs)

The artificial neuron is the basic building block of an artificial neural network, which consists of many interconnected neurons, each working in parallel with no central control. Neurons in artificial neural networks are often organized into layers, where the neurons in one layer are connected only to those of adjacent layers. Each neuron-to-neuron connection has an associated weight, and learning within the network is accomplished by updating these weights (Mano, 2014). An ANN usually takes one or more inputs and produces one or more outputs, based on the strength of the connections within it and the way the connections change the input signals. Each neuron receives a signal; if this signal exceeds a certain threshold, it is modified and propagated to connected neurons. The output layer of neurons, those which do not propagate their signals to other neurons, produces the output calculated by the whole network (Mano, 2014). The nodes of an ANN are known as neurons; they are structured into a sequence of layers and connected to each other by variable connection weights. Each layer can have a number of different neurons with various transfer functions.
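The layered organization described in section 2.5 can be sketched as follows. This is an illustrative sketch with made-up layer sizes and random initial weights; the sigmoid is used as an example transfer function, and nothing here is drawn from the cited sources.

```python
import math
import random

def sigmoid(x):
    """A common transfer function, squashing any value into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def layer_forward(inputs, weights, biases):
    """One fully connected layer: each neuron takes a weighted sum of
    all outputs of the previous layer, then applies the transfer function."""
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

random.seed(42)
# A 3-input -> 4-hidden -> 2-output network with random initial weights;
# training would consist of iteratively adjusting these weight matrices.
w_hidden = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
w_output = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(2)]

hidden = layer_forward([0.5, -0.2, 0.1], w_hidden, [0.0] * 4)
output = layer_forward(hidden, w_output, [0.0] * 2)
print(output)  # two values in (0, 1), one per output neuron
```

The input layer here is simply the list `[0.5, -0.2, 0.1]`; the hidden layer's outputs feed the output layer, exactly the layer-to-adjacent-layer flow described above.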


Like the human brain, artificial neural networks learn by example. From what is known of neuronal structures, the human brain learns by altering the strengths of connections between neurons and by adding or deleting connections between neurons (Willamette, 2014). An ANN, on the other hand, is configured for a specific application, such as pattern recognition or data classification, through a learning process. Learning in biological systems is similar to learning in ANNs, as it involves adjustments to the synaptic connections that exist between the neurons. The structure and computation of ANNs are often organized into layers, with each layer receiving input from one adjacent layer and sending it to another. The layers are categorized as:
1. Input layers,
2. Output layers, and
3. Hidden layers.
The input layer is initialized to a certain set of values, and the computations performed by the hidden layers update the values of the output layer, which comprises the output of the whole network (Russell et al., 2002).

2.6 USES OF ARTIFICIAL NEURAL NETWORKS

Artificial neural networks have been used for a variety of tasks. These include:
(i) Artificial Intelligence: ANNs have been used as a form of weak artificial intelligence, to study how the brain works. Certain types of brain damage can be modeled by removing nodes and connections from an appropriately trained network. ANNs can also be used to estimate mathematical functions and to extract features from images for optical character recognition. An artificial neural network, the Autonomous Land Vehicle in a Neural Network, was used by Carnegie Mellon University's NAVLAB to extract road features for navigating an unmanned vehicle. Neural networks have also been used for voice recognition, game playing and email spam filtering.
(ii) Learning: Learning in neural networks can be supervised or unsupervised, and is accomplished by updating the weights between connected neurons. The most common method for training neural networks is backpropagation, a statistical method for updating weights based on how far their output is from the desired output. Various algorithms can be used to search for the optimal set of weights; the most common is gradient descent, an optimization method that, at each step, searches in the direction that appears to come nearest to the goal (Heidelberg, 2005).
(iii) Adaptive learning: The ability to learn how to do tasks based on the data given for training or initial experience.
(iv) Self-organization: An ANN can create its own organization or representation of the information it receives during learning time.
(v) Real-time operation: ANN computations may be carried out in parallel, and special hardware devices are being designed and manufactured which take advantage of this capability (Heidelberg, 2005).
(vi) Fault tolerance via redundant information coding: Partial destruction of a network leads to a corresponding degradation of performance; however, some network capabilities may be retained even with major network damage (Heidelberg, 2005).
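The weight-update idea behind the learning methods in item (ii) can be illustrated with the classic perceptron rule, a one-neuron relative of the backpropagation update. This is an illustrative sketch, not the training procedure of any system named above; the learning rate, epoch count, and the AND task are arbitrary choices.

```python
def train_perceptron(samples, epochs=20, rate=0.1):
    """Nudge each weight in the direction that reduces the output error:
    the 'update weights by how far the output is from the target' idea."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            out = 1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else 0
            err = target - out  # -1, 0 or +1
            w = [wi + rate * err * xi for wi, xi in zip(w, x)]
            b += rate * err
    return w, b

# Supervised learning of logical AND from labeled examples.
and_data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(and_data)
predict = lambda x: 1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else 0
print([predict(x) for x, _ in and_data])  # [0, 0, 0, 1]
```

Backpropagation generalizes this error-driven update to networks with hidden layers by propagating each output error backward through the connection weights.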

2.7 REASONS FOR THE STUDY OF ANNs IN NEUROSCIENCE

Neural networks are a popular target representation for learning, and they are inspired by the neurons in the brain. Artificial neural networks typically contain many fewer than the approximately 100 billion neurons in the human brain, and the artificial neurons are much simpler than their biological counterparts (Poole, 2010). ANNs are interesting to study in neuroscience because they are employed to understand real neural systems: researchers in the field simulate the neural systems of simple animals, hoping to arrive at an understanding of which aspects of neural systems are necessary to explain the behavior of these animals (Poole, 2010). One hypothesis is that the only way to build the functionality of the brain is by using the mechanism of the brain; this hypothesis can be tested by attempting to build intelligence using the mechanism of the brain, as well as without using it. ANNs are also interesting to study because the brain inspires a new way to think about computation that contrasts with currently available computers. The brain consists of a huge number of asynchronous distributed processes, all running concurrently with no master controller, unlike current computers, which have a few processors and a large but essentially inert memory (Poole, 2010).

2.8 APPLICATION OF ANNs IN THE FIELD OF MEDICINE

Artificial Neural Networks are currently an interesting research area in medicine, and it is believed that in a few years they will receive extensive application in biomedical systems. Current research is mostly on modeling parts of the human body and recognizing diseases from various scans, e.g. cardiograms, CAT scans, ultrasonic scans, etc. (Stergiou and Siganos, 2014). According to Stergiou and Siganos (2014), artificial neural networks are ideal for recognizing diseases from scans, since there is no need to provide a specific algorithm for identifying the disease: neural networks learn by example, so the details of how to recognize the disease are not needed. What is needed is a set of examples representative of all the variations of the disease, and these examples must be selected very carefully if the system is to perform reliably and efficiently. In the field of medicine, ANNs can be applied as follows:

(a) Diagnosing the Cardiovascular System via Neural Modeling
Neural networks are used experimentally to model the human cardiovascular system. Diagnosis can be achieved by building a model of the cardiovascular system of an individual and comparing it with real-time physiological measurements taken from the patient (Stergiou and Siganos, 2014). If this routine is carried out regularly, potentially harmful medical conditions can be detected at an early stage, making the process of combating the disease much easier. A model of an individual's cardiovascular system must mimic the relationships among physiological variables (heart rate, systolic and diastolic blood pressure, and breathing rate) at different physical activity levels. If a model is adapted to an individual, it becomes a model of the physical condition of that individual (Stergiou and Siganos, 2014). The simulator must be able to adapt to the features of any individual without the supervision of an expert, and this calls for a neural network. Another reason that justifies the use of ANN technology is the ability of ANNs to provide sensor fusion, the combining of values from several different sensors. Sensor fusion enables ANNs to learn complex relationships among the individual sensor values which would otherwise be lost if the values were analyzed individually. In medical modeling and diagnosis, this implies that even though each sensor in a set may be sensitive only to a specific physiological variable, ANNs are capable of detecting complex medical conditions by fusing the data from the individual biomedical sensors (Stergiou and Siganos, 2014).

(b) Electronic Noses
ANNs are used experimentally to implement electronic noses, which have several potential applications in telemedicine (the practice of medicine over long distances via a communication link). The electronic nose would identify odours in the remote surgical environment; these identified odours would then be transmitted electronically to another site, where an odour generation system would recreate them. Because the sense of smell can be an important sense to the surgeon, telesmell would enhance telepresent surgery (Stergiou and Siganos, 2014).

(c) Instant Physician
An application developed in the mid-1980s, called the instant physician, trained an autoassociative memory neural network to store a large number of medical records, each of which includes information on symptoms, diagnosis, and treatment for a particular case. After training, the net can be presented with input consisting of a set of symptoms; it will then find the full stored pattern that represents the best diagnosis and treatment (Stergiou and Siganos, 2014).

Another key element of an artificial neural network is its ability to learn. A neural network is a complex and adaptive system which can change its internal structure based on the information flowing through it, typically through the adjustment of weights. Each connection between neurons has a weight, which controls the signal between the two neurons. This ability to learn, to adjust its structure over time, is what makes the neural network so useful in the field of artificial intelligence. Artificial neural networks have other diverse applications in:
1. Pattern Recognition: A common application of artificial neural networks, used in facial recognition, optical character recognition, etc. (Shiffman, 2014).
2. Time Series Prediction: Artificial neural networks can be used to make predictions, e.g. of rises and falls in the stock market, or of the weather.
3. Signal Processing: Cochlear implants and hearing aids need to filter out unnecessary noise and amplify the important sounds. Neural networks can be trained to process an audio signal and filter it appropriately (Shiffman, 2014).
4. Control: There have been recent research advances in self-driving cars; neural networks are often used to manage the steering decisions of physical or simulated vehicles (Shiffman, 2014).
5. Soft Sensors: A soft sensor refers to the process of analyzing a collection of many measurements. Artificial neural networks can be employed to process the input data from many individual sensors and evaluate them as a whole. For example, a thermometer gives information about the temperature of the air; with artificial neural networks we can also derive additional information on humidity, barometric pressure, dew point, air quality, air density, etc. (Shiffman, 2014).
6. Anomaly Detection: Because neural networks are so good at recognizing patterns, they can also be trained to generate an output when something occurs that does not fit the pattern. For example, a neural network monitoring a person's daily routine over a long period of time could, after learning the patterns of that behavior, raise an alert when something goes wrong (Shiffman, 2014).


CHAPTER THREE

3.0 ARTIFICIAL INTELLIGENCE

Artificial Intelligence (AI) can be defined as a subfield of computer science closely tied to biology and cognitive science. It is concerned with computing techniques and models that simulate and investigate intelligent behavior. Research into artificial intelligence builds upon our understanding of the brain and its evolutionary development, and provides insights into the way the brain works, as well as into the larger process of biological evolution. AI can also be described simply as a collection of hard problems which can be solved by humans and other living things, but for which no solving algorithm is available (Zhang, 2011). Although a subfield of computer science, AI has drawn its fair share from the field of neuroscience, particularly the study of the brain. How the brain enables human beings to think has remained a mystery to the present day; however, significant ventures in the field of Artificial Intelligence have enabled scientists to come close to the nature of the thought processes inside a brain.

3.1 RESEARCH AREAS AND APPROACHES IN AI

Two major research areas in Artificial Intelligence (AI) are:
(a) Artificial Neural Networks: building a model of the brain and training that model to recognize certain types of patterns.
(b) Genetic Algorithms: evolving solutions to complex problems that are hard to control or deal with using other methods.


Artificial Neural Networks (ANNs) are an important part of Artificial Intelligence (AI), an area of computer science concerned with making computers behave more intelligently (Agarwal, 2014). An ANN is modeled on the brain, where neurons are connected in complex patterns to process data from the senses, establish memories and control the body (Agarwal, 2014). ANNs process data and exhibit some intelligent behaviors such as learning, generalization and pattern recognition.

3.2 BENEFITS OF ANNs IN ARTIFICIAL INTELLIGENCE

ANNs form an important aspect of the field of artificial intelligence and are involved in many of its most exciting applications. The benefits of ANNs in artificial intelligence include:

1. Backpropagation Nets
Backpropagation nets learn to generalize and classify patterns. When they are presented with a pattern, the interconnections between the artificial neurons are adjusted until they give a correct response. Backpropagation nets are the most common kind of ANN (Mano, 2014). The basic topology is that layers of neurons are connected to each other. Patterns cause information to flow in one direction, then the errors "backpropagate" in the other direction, changing the strength of the interconnections between layers (Mano, 2014). A successful example of a backpropagation net is NetTalk, invented by Terry Sejnowski, professor and head of the Computational Neurobiology Laboratory at the Salk Institute in La Jolla, California (Mano, 2014). This net learns to read English or any other language and is used all over the world to read to blind people. With backpropagation nets, after sufficient training with a number of patterns, they will give the correct response to a pattern they have never seen (Mano, 2014).
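The forward flow of information and backward flow of errors described above can be sketched from scratch on the classic XOR problem, which requires a hidden layer. The network size, learning rate and training length below are toy choices for illustration, not taken from the sources cited here.

```python
import math
import random

random.seed(1)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# XOR training patterns: input pair -> target output.
patterns = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

# 2 inputs -> 3 hidden units -> 1 output; weights start random in [-1, 1].
w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(3)]  # rows: x1, x2, bias
w_o = [random.uniform(-1, 1) for _ in range(4)]                      # 3 hidden weights + bias

def forward(x):
    hidden = [sigmoid(x[0] * w_h[0][j] + x[1] * w_h[1][j] + w_h[2][j]) for j in range(3)]
    out = sigmoid(sum(h * w for h, w in zip(hidden, w_o[:3])) + w_o[3])
    return hidden, out

for _ in range(20000):
    for x, target in patterns:
        hidden, out = forward(x)
        # Errors "backpropagate": output delta first, then hidden deltas.
        d_out = (target - out) * out * (1 - out)
        d_hid = [d_out * w_o[j] * hidden[j] * (1 - hidden[j]) for j in range(3)]
        # Adjust interconnection strengths toward a correct response.
        for j in range(3):
            w_o[j] += 0.5 * d_out * hidden[j]
            w_h[0][j] += 0.5 * d_hid[j] * x[0]
            w_h[1][j] += 0.5 * d_hid[j] * x[1]
            w_h[2][j] += 0.5 * d_hid[j]
        w_o[3] += 0.5 * d_out

print([round(forward(x)[1]) for x, _ in patterns])  # trained responses, typically [0, 1, 1, 0]
```

NetTalk worked at a vastly larger scale, but the same adjust-until-correct loop is the heart of every backpropagation net.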

2. Hopfield Nets
John Hopfield, a Nobel Prize-winning physicist at the California Institute of Technology (Caltech), invented Hopfield nets. The basic topology is that every artificial neuron is connected to every other artificial neuron. These nets memorize collections of patterns (Mano, 2014). When given a part of one of the patterns, or a badly distorted pattern, the net delivers the complete pattern. Hopfield nets have been applied in fingerprint recognition: given a partial print or a smudged print, the Hopfield net can deliver the complete fingerprint (Mano, 2014). NASA uses Hopfield nets to orient deep-space craft by visual star fields. When the craft looks at a picture of the stars, a Hopfield net can match the view against known pictures of the stars to orient the craft (Mano, 2014).

3. Self-Organizing Maps
Finnish professor Teuvo Kohonen invented self-organizing maps, also known as Kohonen nets. The basic topology is that each artificial neuron is connected only to its neighbors (Mano, 2014). Kohonen nets reduce the complexity of data, especially experimentally obtained data. Repeatedly "training" a Kohonen net with an n-dimensional data set can produce a lower-dimensional data set that captures the essential nature of the n-dimensional data in a much simpler form (Mano, 2014). A major application of self-organizing maps is in the several projects looking for a simpler way to understand the Internet. Kohonen nets are regularly used as a preprocessor for other types of ANN (Mano, 2014).
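The Hopfield recall mechanism described above (complete a pattern from a distorted copy) can be sketched with standard Hebbian learning: weights are sums of outer products of the stored patterns, and recall repeatedly updates each unit toward the sign of its input. The two 8-unit patterns below are invented for illustration.

```python
def train(stored):
    """Hebbian learning: weight[i][j] sums p[i]*p[j] over stored patterns."""
    n = len(stored[0])
    w = [[0] * n for _ in range(n)]
    for p in stored:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]
    return w

def recall(w, state, steps=20):
    """Asynchronously update units until the net settles on a stored pattern."""
    state = list(state)
    n = len(state)
    for _ in range(steps):
        for i in range(n):
            s = sum(w[i][j] * state[j] for j in range(n))
            state[i] = 1 if s >= 0 else -1
    return state

# Two 8-unit patterns in +1/-1 coding (stand-ins for, say, fingerprint fragments).
p1 = [1, 1, 1, 1, -1, -1, -1, -1]
p2 = [1, -1, 1, -1, 1, -1, 1, -1]
w = train([p1, p2])

# Distort p1 in two positions; the net delivers the complete pattern.
noisy = [1, 1, -1, 1, -1, -1, 1, -1]
print(recall(w, noisy))  # → [1, 1, 1, 1, -1, -1, -1, -1]
```

Because every unit is connected to every other unit, a partial or smudged input is pulled toward the nearest stored pattern, which is exactly the behavior exploited in fingerprint completion and star-field matching.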


3.3 APPLICATIONS OF ARTIFICIAL INTELLIGENCE

According to Zhang (2011), artificial intelligence can be applied in the following areas:
1. Intelligent Agents
2. Information Retrieval
3. Electronic Commerce
4. Data Mining
5. Bioinformatics
6. Natural Language
7. Expert Systems

3.3.1 APPLICATION OF AI IN EXPERT SYSTEMS

In expert systems, Artificial Intelligence (AI) can be applied to perform intelligent tasks. For example, an expert system helps Ford mechanics track down and fix engine problems, as seen in the diagram below.

Fig. 3.1 Expert system used by Ford mechanics (Zhang, 2011).

In addition, an airline scheduling program produced with the aid of an expert system offers a graphical user interface to help solve complex airport scheduling problems. For example, it can show graphics of planes circling the airport, the number of planes approaching the airport, gate information, and two concourses with planes at their gates (Zhang, 2011). The airline scheduling program can be seen below in Figure 3.2.

Fig. 3.2 Expert system used in an airline scheduling program (Zhang, 2011). (a) GUI (b) Airplane screen windows

CHAPTER FOUR

4.0 BENEFITS OF NEURAL NETWORKS IN NEUROSCIENCE

The benefits of neural networks in neurobiology are quite enormous. Humans have not fully understood the complex nature of the brain's neural networks. In neurobiological analysis, the understanding of neural networks finds applications in medical science, psychological science, behavioral analysis and the treatment of diseases and defects of the nervous system. Artificial neural networks, on the other hand, serve as research tools for developing that understanding by simulating such networks. The use of artificial neural network models in research has led to significant developments in the field of neuroscience. Neural networks are fault tolerant (Mano, 2014). Biological neural networks are inherently fault tolerant, as seen in frequent cases of partial nervous system or brain damage without disruption of life itself. Artificial neural networks (ANNs) exhibit a similarly high level of fault tolerance because of their highly distributed and modular nature. In neural networks, if one particular component or a group of components fails, certain functions cannot be performed (Mano, 2014). However, the capabilities of the intact components are retained, and the network does not fail completely. This makes them fault tolerant.

4.1 SCIENTISTS CONCERNED WITH NEURAL NETWORKS

Research into the use, benefits and applications of neural networks covers a wide range of topics, ranging from theoretical neurobiology to statistical physics and machine learning. It encompasses many fields, such as neuroscience, computer science, engineering, statistics, cognitive science, physics, biology and philosophy. Scientists from these disciplines are drawn to the exciting and complex nature of neural networks.

The computer scientist wants to find out about the properties of non-symbolic information processing with neural nets, and about learning systems in general. Statisticians use them as flexible, nonlinear regression and classification models (Heskes and Barber, 2014). Engineers make use of the capabilities of neural networks in areas such as signal processing and automatic control. Cognitive scientists try to exploit neural networks as a possible apparatus for describing models of thinking and consciousness relating to high-level brain function (Heskes and Barber, 2014). In the field of neuroscience, neuroscientists use neural networks to describe and explore medium-level brain functions such as memory and the sensory systems (Heskes and Barber, 2014). Physicists and biologists also use neural networks: while physicists use them to model phenomena in statistical mechanics and other tasks, biologists use them to interpret nucleotide sequences (Heskes and Barber, 2014). Philosophers and other academics may also be interested in neural networks for various reasons (Heskes and Barber, 2014).

4.2 THE COMPUTER AND THE HUMAN BRAIN

The brain's network of neurons forms a massively parallel information processing system. This contrasts with conventional computers, in which a single processor executes a single series of instructions. According to Willamette (2014), the similarities and contrasts between the brain and the computer include the following:
(a) Processing Elements: The brain has about 10^14 synapses as its processing elements, while a computer has on the order of 10^8 transistors.


(b) Processing Speed: The human brain is composed of about 10 billion neurons, while a computer has fewer than 1 million processors (Zhang, 2011). While the human brain has a processing speed of about 100 Hz, that of a computer is about 10^9 Hz.
(c) Style of Computation: The human brain performs massively parallel computations extremely efficiently. For example, complex visual perception occurs in less than 100 ms, that is, about 10 processing steps; the computer, by contrast, performs serial, centralized computations (Zhang, 2011).
(d) Fault Tolerance: The human brain is fault tolerant. This means that partial recovery from damage is possible if healthy units can learn to take over the functions previously carried out by the damaged areas. This is not true of computers (Zhang, 2011).
(e) Intelligence and Consciousness: The brain supports our intelligence and self-awareness. Conventional computers have not yet been able to do this.
(f) Learning: The human brain can learn and reorganize itself from experience, unlike a conventional computer, in which little learning occurs (Zhang, 2011).

4.3 NEURAL NETWORKS VERSUS CONVENTIONAL COMPUTERS

Neural networks take a different approach to problem solving than conventional computers. Conventional computers follow a set of instructions in order to solve a problem (an algorithmic approach). If the specific steps that the computer needs to follow are not known, the computer cannot solve the problem (Stergiou and Siganos, 2014). That restricts the problem-solving capability of conventional computers to problems that we already understand and know how to solve (Stergiou and Siganos, 2014).
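This contrast can be made concrete with the simplest trainable network, a single perceptron. The sketch below, invented for illustration rather than drawn from the sources cited here, is never told the rule for logical AND; it is only shown examples, and the training procedure adjusts the weights until the responses are correct.

```python
def train_perceptron(examples, epochs=20, lr=0.1):
    """Learn weights from labeled examples rather than explicit instructions."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # Nudge each weight in the direction that reduces the error.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# The rule "output 1 only when both inputs are 1" (logical AND) is never
# stated; the network infers it from carefully selected examples.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(examples)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

print([predict(x1, x2) for (x1, x2), _ in examples])  # → [0, 0, 0, 1]
```

An algorithmic program would instead encode the AND rule directly as an instruction; the learned version needs no such instruction, but it does need well-chosen examples.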


Neural networks process information in a way similar to the human brain. The network is composed of a large number of highly interconnected processing elements (neurons) working in parallel to solve a specific problem. Neural networks learn by example, so they cannot be programmed to perform a specific task (Stergiou and Siganos, 2014). The examples must be selected carefully, otherwise useful time is wasted, or worse, the network might function incorrectly. A major disadvantage is that, because the network finds out how to solve the problem by itself, its operation can be unpredictable (Stergiou and Siganos, 2014). On the other hand, conventional computers use a cognitive approach to problem solving. The way the problem is to be solved must be known and stated in small, unambiguous instructions. These instructions are then converted into a high-level language program and then into machine code that the computer can understand. These machines are totally predictable; if anything goes wrong, it is due to a software or hardware fault. Despite their many differences, neural networks and conventional algorithmic computers complement each other: some tasks, such as arithmetic operations, are better suited to an algorithmic approach, while others are better suited to neural networks (Stergiou and Siganos, 2014).

4.4 WHY WE NEED BRAIN-LIKE INTELLIGENCE

Scientists have spent a great deal of time researching and implementing complex solutions involving brain-like intelligence. There are problems that are incredibly simple for a computer to solve but difficult for humans. If a computer is to find the square root of 864,900, for example, a quick line of code produces the value 930, computed in less than a millisecond. For humans, this would prove a difficult task requiring far more time (Sawicki, 2014). On the other hand, some tasks that are simple for humans are

not so easy for a computer. If a human is shown a picture of a mouse or an African giant rat, they will be able to say very quickly which is which; humans do not need a machine to perform this task. One of the reasons we need brain-like intelligence is to perform tasks that are easy for a human but difficult for a machine. An example of such a task is pattern recognition, which is a common application of neural networks in today's computing. Applications that require brain-like intelligence range from optical character recognition, such as turning printed or handwritten scans into digital text, to facial recognition. These neural network applications use artificial intelligence algorithms. Another reason is that a computerized neural network performs better than the brain in terms of speed. The brain cannot process information nearly as quickly as a computer, due to physical limitations such as the difficulty of concentrating long enough to perform a task. Moreover, the human brain does not have a programmer in any easily definable sense, as it programs itself in response to input from the person's senses (Sawicki, 2014). When there is a large number of input variables, the task becomes very difficult to visualize, although the brain has the potential to be more than sufficient to build a neural network that could eventually solve this kind of problem. Nothing in the physical world prepares a person for the task of creating a twelve-dimensional boundary that divides a thirteen-dimensional space into regions characterized by different likelihoods of events being signal or background; our brains have learned from birth to help us perform tasks in a three-dimensional environment, so anything more is a struggle to learn. Therefore, brain-like information processing is needed to achieve true human-level intelligence (Zhang, 2011).
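The square-root contrast above really is a one-line computation for a machine:

```python
import math

# Hard for a human, trivial for a computer: the integer square root of 864,900.
print(math.isqrt(864_900))  # → 930
```

No comparable one-liner exists for telling a mouse from an African giant rat, which is precisely why that kind of task calls for brain-like, pattern-recognizing computation.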

CHAPTER FIVE

5.0 CONCLUSION

It is now very clear that the biological and computing worlds have a lot to gain from neural networks, because their ability to learn by example makes them very flexible and powerful. In computation, they are well suited to real-time systems because of their fast response, which is due to their parallel architecture. In areas of research such as neurology, they are used to model parts of living organisms and to investigate the internal mechanisms of the brain. Even though neural networks have huge potential, scientists will only get the best out of them when they are combined with conventional computing. In general, putting a neural network into a computer allows it to make intelligent judgments. Computerized neural networks can function in ways that are a huge improvement upon the brain's own processing mechanisms. Although, for some, it is easy to visualize how the huge number of neurons in the brain could provide the computational power a person might need, it is difficult to explain why the brain is not better at forming complex mathematical judgments. In all, neural networks are a rich area of research with the potential to capture perhaps a greater range of the operation of the brain than computational models alone. It should be noted that neural networks do not perform magic, but they can produce very exciting results if used intelligently, as this paper has attempted to show through some of their benefits and the kinds of task at which a neural network excels in computation.


5.1 RECOMMENDATION

The benefits of understanding and applying neural networks in medical science are quite enormous; I therefore recommend that computational studies involving the use of artificial neural networks be incorporated into the field of neuroscience. Moreover, research in the emerging field of computational neuroscience should be strongly encouraged by setting up facilities and training personnel in the field.


5.2 REFERENCES

Agarwal, T. (2014). Artificial neural networks and their types. Retrieved from http://www.elprocus.com/artificial-neural-networks-ann-and-their-types/
Heskes, T. and Barber, D. (2014). Neural networks. Retrieved from http://www.eolss.net/Eolss-sampleAllChapter.aspx
Mano, C. (2014). Definition of neural network. Retrieved from http://www.ehow.com/print/about_5585309_definition-neural-networks.html
Mano, C. (2014). Examples of artificial neural network. Retrieved from http://www.ehow.com/print/about_5585309_definition-neural-networks.html
Mano, C. (2014). How neural network performs computation. Retrieved from http://www.ehow.com/print/about_5585309_neural-networks-explained.html
Mano, C. (2014). Neural networks explained. Retrieved from http://www.ehow.com/print/about_5585309_neural-networks-explianed.html
McCulloch, W. S. and Pitts, W. H. (1965). A logical calculus of the ideas immanent in nervous activity. In McCulloch, W. S., Embodiments of Mind. MIT Press.
Mujeeb, R. (2012). Introduction to artificial neural network and machine learning. Palakkad: Government Engineering College, Sreekrishnapuram.
Papadourakis, G. (2014). Computational intelligence. Technological Educational Institute of Crete, Department of Applied Informatics.
Poole, D. (2010). Artificial intelligence. Retrieved from http://artint.info/index.html
Reed, D. (n.d.). Application in artificial intelligence: Computers and scientific thinking. Creighton University.
Rumelhart, D. and McClelland, J. (1987). Parallel Distributed Processing, Vol. 1.
Russell, S. and Norvig, P. (2002). Artificial Intelligence: A Modern Approach.
Sawicki, D. (2014). Neural networks, the top quark, and brain computation. Retrieved from www.cs.rochester.edu/users/faculty/dana/csc240_Fall97/Ass/Denise_Sawicki.html
Shiffman, D. (2014). The Nature of Code. Retrieved from http://natureofcode.com/
Snell, R. S. (2010). Neurobiology of the neuron and the neuroglia. Clinical Neuroanatomy, 7th edition, p. 35.
Springer Berlin Heidelberg. (2005). Introduction to machine learning using neural nets. Retrieved from http://link.springer.com/chapter/10.1007/3-540-27335-2_7
Stergiou, C. and Siganos, D. (2014). Artificial neural network in medicine. Retrieved from http://www-students.doc.ic.ac.uk/~cbp/article2.html
Sundal, M. K. et al. (2014). Introduction. Retrieved from http://nptel.ac.in/courses/102106023/
Tewari, S. (2012). The molecular basis of learning and memory. India: ISI Bangalore Centre, Science and Informatics Unit.
Willamette. (2014). Computation in the brain: The brain as an information processing system. Retrieved from http://www.willamete.edu/~gorr/classes/as449/intro.html
Zhang, B. (2011). Brain, computation and neural learning. Retrieved from http://bi.snu.ac.kr/~btzhang/
