4th Neural Computation and Psychology Workshop, London, 9–11 April 1997: Connectionist Representations

By Mike Page (auth.), John A. Bullinaria BSc, MSc, PhD, David W. Glasspool BSc, MSc, George Houghton BA, MSc, PhD (eds.)

This volume collects together refereed versions of twenty-five papers presented at the 4th Neural Computation and Psychology Workshop, held at University College London in April 1997. The "NCPW" workshop series is now well established as a lively forum which brings together researchers from such diverse disciplines as artificial intelligence, mathematics, cognitive science, computer science, neurobiology, philosophy and psychology to discuss their work on connectionist modelling in psychology. The general theme of this fourth workshop in the series was "Connectionist Representations", a topic which not only attracted participants from all these fields, but from all over the world as well. From the point of view of the conference organisers, concentrating on representational issues had the advantage that it immediately involved researchers from all branches of neural computation. Being so central both to psychology and to connectionist modelling, it is one area about which everyone in the field has their own strong views, and the variety and quality of the presentations and, just as importantly, the discussion which followed them, certainly attested to this.



Similar psychology books

Memory

Our memories are our best sources of information about ourselves, our friends and lovers, our jobs. Or are they? We all know we may occasionally forget someone's birthday, miss appointments, or lose track of details. But what about the times we're sure we remember something, only to find out it didn't happen that way?

From ''Perverts'' to ''Fab Five'': The Media's Changing Depiction of Gay Men and Lesbians

From ''Perverts'' to ''Fab Five'' tracks the dramatic change in how the American media have depicted gay people over the past half-century. Each chapter illuminates a particular media product that served as a milestone on the media's journey from demonizing homosexuals some fifty years ago to celebrating gay people--or at least some kinds of gay people--today.

Freud For Scholars. Extracts, Arranged by Topic and Ordered by Date.

Freud For Scholars. Extracts from the complete works of Sigmund Freud, arranged by topic and ordered by date.

Additional info for 4th Neural Computation and Psychology Workshop, London, 9–11 April 1997: Connectionist Representations

Sample text

The network was then trained to assign each training pattern to the correct class using a 1-of-3 softmax output layer activation function. Network training involved adapting the weights to the output layer using gradient descent with momentum. As expected, the network learns the distorted patterns, and generalizes to treat the undistorted patterns correctly. Figure 5 shows the percentage of training and test patterns correctly classified after a fixed amount of training, for various fixed norm weights.
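The training step described above — a 1-of-3 softmax output layer whose weights are adapted by gradient descent with momentum — can be sketched as follows. This is a minimal illustration, not the paper's own code: the feature matrix, class means, learning rate and momentum value are all assumptions chosen for the toy example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the paper's training patterns: 90 hidden-layer
# activation vectors, 30 per class, drawn around 3 class means.
targets = np.repeat(np.arange(3), 30)
means = 3.0 * rng.normal(size=(3, 5))
H = means[targets] + rng.normal(size=(90, 5))
T = np.eye(3)[targets]                    # 1-of-3 target coding

W = np.zeros((5, 3))                      # weights to the output layer
velocity = np.zeros_like(W)
lr, momentum = 0.1, 0.9

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

for epoch in range(200):
    P = softmax(H @ W)                    # class probabilities
    grad = H.T @ (P - T) / len(H)         # cross-entropy gradient
    velocity = momentum * velocity - lr * grad
    W += velocity                         # gradient descent with momentum

accuracy = (softmax(H @ W).argmax(axis=1) == targets).mean()
```

Because only the output-layer weights are trained, each epoch is a single matrix update; the momentum term smooths successive gradient steps, as in the paper's training procedure.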

In this case, with fixed centres and norm weights, learning a mapping between the inputs and outputs involves linear optimization of the weights on connections to the output layer. Such networks behave much like a look-up table, and can be called localist networks because each hidden unit can be interpreted as standing for a training pattern. When an input pattern matches a training pattern, a single hidden unit has maximum output. If the norm weight along each dimension is decreased at some RBFs, the receptive fields will overlap, and the network will form a distributed representation by employing an extended, and possibly superposed representation on its hidden layer.
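The localist behaviour described above can be made concrete with a small sketch. Assuming (as the text says) one Gaussian RBF unit centred on each training pattern, the output weights reduce to a linear least-squares problem; the data, the Gaussian form of the units, and the particular norm-weight value are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Each training pattern doubles as an RBF centre, so each hidden unit
# "stands for" one training pattern (a localist representation).
X = rng.normal(size=(20, 4))               # 20 training patterns
Y = np.eye(3)[rng.integers(0, 3, 20)]      # 1-of-3 targets
centres = X.copy()

def rbf_hidden(inputs, centres, norm_weight):
    # Gaussian units: a large norm weight gives narrow, non-overlapping
    # receptive fields (look-up-table behaviour); decreasing it makes
    # the fields overlap, giving a distributed hidden representation.
    d2 = ((inputs[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-norm_weight * d2)

H = rbf_hidden(X, centres, norm_weight=10.0)

# With fixed centres and norm weights, the weights to the output layer
# are found by linear optimisation -- here, ordinary least squares.
W, *_ = np.linalg.lstsq(H, Y, rcond=None)
pred = (H @ W).argmax(axis=1)
```

With a large norm weight, `H` is close to the identity matrix — each input pattern maximally activates exactly one hidden unit, which is the look-up-table regime the text describes.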

The vertex angle is chosen in such a way that the network starts as an RBF network. The weights, centres and angle values are then updated using error back-propagation so that the network converges quickly. The propagation rule of the conic section function network (CSFN) for hidden unit j and pattern p can be written as

net_pj = Σ_i (a_pi − c_ij) w_ij − cos ω_j · √( Σ_i (a_pi − c_ij)² )

[Figure 6: Block diagram of the CSFN for training. Legend: i: input nodes, j: hidden nodes, k: output nodes, p: number of patterns; initialised with cos ω_k = 0 for all k.] Here a_pi = x_pi if unit i is an input unit, c_ij are the centres for the RBF network, w_ij are the weights in an MLP, and ω is the half opening angle, which can take any value in the range [−π/2, π/2] and determines the different forms of the decision borders; i and j are the indices referring to units in the input and the hidden layer, and p refers to the number of the pattern.
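A single CSFN unit of the kind described above can be sketched from the variable definitions in the text: a weighted-sum (MLP-like) term on the centred inputs, minus cos ω times a Euclidean-distance (RBF-like) term. This follows Dorffner's conic-section-function formulation and is an illustrative assumption, not necessarily the exact rule used in this paper; the input, centre and weight values are made up.

```python
import numpy as np

def csfn_activation(a, c, w, omega):
    # Conic section function unit: an MLP-like dot-product term minus
    # cos(omega) times an RBF-like Euclidean-distance term. The half
    # opening angle omega in [-pi/2, pi/2] sets the decision border.
    diff = a - c                        # (a_pi - c_ij) for one pattern
    return diff @ w - np.cos(omega) * np.linalg.norm(diff)

a = np.array([0.5, -0.2, 0.1, 1.0])     # input pattern (with bias input)
c = np.array([0.1, 0.0, -0.1, 0.0])     # centre c_ij
w = np.array([0.8, 0.4, -0.3, 0.2])     # weights w_ij

# omega = pi/2 makes cos(omega) = 0, removing the distance term and
# leaving a pure weighted sum; intermediate angles mix the two terms,
# which is what produces the family of conic-section decision borders.
pure_dot = csfn_activation(a, c, w, np.pi / 2)
mixed = csfn_activation(a, c, w, np.pi / 4)
```

Treating ω as a trainable parameter alongside the weights and centres, as the text describes, lets back-propagation move each unit continuously between these regimes.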

