# Introduction to Neural Networks EEL 6812

Isaac Hauck
University of Central Florida
GPA 3.7

Georgiopoulos


These 8 pages of class notes were uploaded by Isaac Hauck on Thursday, October 22, 2015. The notes belong to EEL 6812 at the University of Central Florida, taught by Georgiopoulos in Fall. Since upload, they have received 48 views. For similar materials see /class/227660/eel-6812-university-of-central-florida in Electrical Engineering at University of Central Florida.

Date Created: 10/22/15
## Neuron Model

### Simple Neuron

A neuron with a single scalar input and no bias appears on the left below.

[Figure: a neuron without bias, a = f(wp), and a neuron with bias, a = f(wp + b)]

The scalar input p is transmitted through a connection that multiplies its strength by the scalar weight w to form the product wp, again a scalar. Here the weighted input wp is the only argument of the transfer function f, which produces the scalar output a. The neuron on the right has a scalar bias b. You may view the bias as simply being added to the product wp, as shown by the summing junction, or as shifting the function f to the left by an amount b. The bias is much like a weight, except that it has a constant input of 1.

The transfer function net input n, again a scalar, is the sum of the weighted input wp and the bias b. This sum is the argument of the transfer function f. (Chapter 7 discusses a different way to form the net input n.) Here f is a transfer function, typically a step function or a sigmoid function, which takes the argument n and produces the output a. Examples of various transfer functions are given in the next section.

Note that w and b are both adjustable scalar parameters of the neuron. The central idea of neural networks is that such parameters can be adjusted so that the network exhibits some desired or interesting behavior. Thus, we can train the network to do a particular job by adjusting the weight or bias parameters, or perhaps the network itself will adjust these parameters to achieve some desired end.

All of the neurons in this toolbox have provision for a bias; a bias is used in many of our examples and will be assumed in most of this toolbox. However, you may omit a bias in a neuron if you want. As previously noted, the bias b is an adjustable scalar parameter of the neuron. It is not an input. However, the constant 1 that drives the bias is an input, and must be treated as such when you consider the linear dependence of input vectors in Chapter 4.
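The scalar neuron above can be sketched in a few lines of code. This is an illustrative Python translation (the toolbox itself is MATLAB); the names `w`, `p`, and `b` follow the text, and a hard-limit step function stands in for f.

```python
# Minimal sketch of the scalar neuron a = f(w*p + b), with a hard-limit
# step function as the transfer function f.

def hardlim(n):
    """Hard-limit transfer function: 1 if n >= 0, else 0."""
    return 1 if n >= 0 else 0

def neuron(p, w, b=0.0):
    """Scalar neuron: net input n = w*p + b, output a = f(n)."""
    n = w * p + b
    return hardlim(n)

# Without a bias the decision point sits at p = 0; a bias shifts it.
print(neuron(0.5, w=2.0))          # n = 1.0  -> prints 1
print(neuron(0.5, w=2.0, b=-2.0))  # n = -1.0 -> prints 0
```

Adjusting `w` and `b` is exactly the "training" the text describes: the two scalars are the only free parameters of this neuron.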
### Transfer Functions

Many transfer functions are included in this toolbox; a complete list can be found in "Transfer Function Graphs" in Chapter 14. Three of the most commonly used functions are shown below.

[Figure: a = hardlim(n), hard-limit transfer function]

The hard-limit transfer function limits the output of the neuron to 0 if the net input argument n is less than 0, or 1 if n is greater than or equal to 0. We will use this function in Chapter 3, "Perceptrons," to create neurons that make classification decisions.

The toolbox has a function `hardlim` to realize the mathematical hard-limit transfer function shown above. Try the code shown below:

```matlab
n = -5:0.1:5;
plot(n, hardlim(n), 'c+:');
```

It produces a plot of the function `hardlim` over the range -5 to +5. All of the mathematical transfer functions in the toolbox can be realized with a function having the same name.

The linear transfer function is shown below.

[Figure: a = purelin(n), linear transfer function]

Neurons of this type are used as linear approximators in "Linear Filters" in Chapter 4.

The sigmoid transfer function shown below takes an input that may have any value between plus and minus infinity, and squashes the output into the range 0 to 1.

[Figure: a = logsig(n), log-sigmoid transfer function]

This transfer function is commonly used in backpropagation networks, in part because it is differentiable.

The symbol in the square to the right of each transfer function graph represents the associated transfer function. These icons replace the general f in the boxes of network diagrams to show the particular transfer function being used. For a complete listing of transfer functions and their icons, see "Transfer Function Graphs" in Chapter 14. You can also specify your own transfer functions; you are not limited to those listed in Chapter 14.

You can experiment with a simple neuron and various transfer functions by running the demonstration program nnd2n1.
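The three transfer functions above can also be written out directly. These are illustrative Python re-implementations of the mathematical definitions in the text, not the toolbox's MATLAB functions:

```python
import math

def hardlim(n):
    """Hard-limit: 0 for n < 0, 1 for n >= 0."""
    return 0 if n < 0 else 1

def purelin(n):
    """Linear: the output simply equals the net input."""
    return n

def logsig(n):
    """Log-sigmoid: squashes (-inf, +inf) into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-n))

# Sample each function at a few net-input values.
for f in (hardlim, purelin, logsig):
    print(f.__name__, f(-1.0), f(0.0), f(1.0))
```

Note that `logsig` is differentiable everywhere while `hardlim` is not, which is why the text singles out the log-sigmoid for backpropagation.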
### Neuron with Vector Input

A neuron with a single R-element input vector is shown below. Here the individual element inputs p1, p2, ..., pR are multiplied by the weights w1,1, w1,2, ..., w1,R, and the weighted values are fed to the summing junction. Their sum is simply Wp, the dot product of the single-row matrix W and the vector p.

[Figure: neuron with vector input, where R = number of elements in input vector; a = f(Wp + b)]

The neuron has a bias b, which is summed with the weighted inputs to form the net input n. This sum n is the argument of the transfer function f:

n = w1,1 p1 + w1,2 p2 + ... + w1,R pR + b

This expression can, of course, be written in MATLAB code as

```matlab
n = W*p + b;
```

However, the user will seldom write code at this low level, for such code is already built into functions that define and simulate entire networks.

The figure of a single neuron shown above contains a lot of detail. When we consider networks with many neurons, and perhaps layers of many neurons, there is so much detail that the main thoughts tend to be lost. Thus the authors have devised an abbreviated notation for an individual neuron. This notation, which will be used later in circuits of multiple neurons, is illustrated in the diagram shown below.

[Figure: abbreviated notation, where R = number of elements in input vector; a = f(Wp + b)]

Here the input vector p is represented by the solid dark vertical bar at the left. The dimensions of p are shown below the symbol p in the figure as R×1. (Note that we use a capital letter, such as R in the previous sentence, when referring to the size of a vector.) Thus p is a vector of R input elements. These inputs post-multiply the single-row, R-column matrix W. As before, a constant 1 enters the neuron as an input and is multiplied by a scalar bias b. The net input to the transfer function f is n, the sum of the bias b and the product Wp. This sum is passed to the transfer function f to get the neuron's output a, which in this case is a scalar. Note that if we had more than one neuron, the network output would be a vector.
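The net-input formula n = w1,1 p1 + ... + w1,R pR + b is just a dot product plus a bias. A minimal Python sketch of the vector-input neuron (illustrative; the toolbox would express this as `n = W*p + b` in MATLAB):

```python
# Vector-input neuron: net input n = W.p + b, output a = f(n).
# W_row is the single-row weight matrix [w11, w12, ..., w1R].

def hardlim(n):
    """Hard-limit transfer function: 1 if n >= 0, else 0."""
    return 1 if n >= 0 else 0

def vector_neuron(p, W_row, b, f=hardlim):
    """Compute a = f(W.p + b) for an R-element input vector p."""
    n = sum(w * x for w, x in zip(W_row, p)) + b
    return f(n)

# R = 3 inputs: n = 1*2 + (-2)*1 + 0.5*4 + 0.5 = 2.5, so a = 1.
print(vector_neuron([2, 1, 4], W_row=[1, -2, 0.5], b=0.5))
```

The scalar neuron of the previous section is just the R = 1 special case of this formula.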
A layer of a network is defined in the figure shown above. A layer includes the combination of the weights, the multiplication and summing operation (here realized as the vector product Wp), the bias b, and the transfer function f. The array of inputs, vector p, is not included in, or called, a layer.

Each time this abbreviated network notation is used, the sizes of the matrices are shown just below their matrix variable names. We hope that this notation will allow you to understand the architectures and follow the matrix mathematics associated with them.

As discussed previously, when a specific transfer function is to be used in a figure, the symbol for that transfer function replaces the f shown above. Examples are the icons for hardlim, purelin, and logsig.

You can experiment with a two-element neuron by running the demonstration program nnd2n2.

### Network Architectures

Two or more of the neurons shown earlier can be combined in a layer, and a particular network could contain one or more such layers. First consider a single layer of neurons.

#### A Layer of Neurons

A one-layer network with R input elements and S neurons follows.

[Figure: a layer of neurons, where R = number of elements in input vector and S = number of neurons in layer; a = f(Wp + b)]

In this network, each element of the input vector p is connected to each neuron input through the weight matrix W. The i-th neuron has a summer that gathers its weighted inputs and bias to form its own scalar output n(i). The various n(i) taken together form an S-element net input vector n. Finally, the neuron layer outputs form a column vector a; the expression for a is shown at the bottom of the figure.

Note that it is common for the number of inputs to a layer to be different from the number of neurons (i.e., R ≠ S). A layer is not constrained to have the number of its inputs equal to the number of its neurons.
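A layer of S neurons on an R-element input amounts to an S×R weight matrix W and an S-element bias vector b, with a = f(Wp + b) applied element-wise. An illustrative Python sketch (hardlim again stands in for f; the choice of W, b, and p is hypothetical):

```python
# One layer of S neurons: W has one row per neuron (S rows of length R),
# b has one bias per neuron. Output a is the S-element vector f(Wp + b).

def hardlim(n):
    """Hard-limit transfer function: 1 if n >= 0, else 0."""
    return 1 if n >= 0 else 0

def layer(p, W, b, f=hardlim):
    """Apply every neuron (row of W plus its bias) to the same input p."""
    return [f(sum(wij * pj for wij, pj in zip(row, p)) + bi)
            for row, bi in zip(W, b)]

# S = 2 neurons on R = 3 inputs (R != S, as the text notes is common).
W = [[1.0, -1.0, 0.0],
     [0.5,  0.5, 0.5]]
b = [0.0, -2.0]
p = [2.0, 1.0, 1.0]
print(layer(p, W, b))  # net inputs are 1.0 and 0.0 -> prints [1, 1]
```

Each row of W plays the role of one vector-input neuron from the previous section; stacking the rows is all that "combining neurons in a layer" means.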
