Conclusion

1) From the simulations of the neural learning algorithms, the proposed extended learning algorithm, which dynamically reroutes synapses, proved an effective method of improving the efficiency of a neural layout. It allows the synapse count to be reduced without significantly degrading performance. A lower synapse count may also improve the generalization capability of the network, and frees chip area for other parts of the neural network.
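The source does not spell out the rerouting algorithm itself, but the idea of reducing the synapse count without significantly degrading performance can be sketched as simple magnitude-based pruning: synapses whose trained weights remain small are disconnected, and the freed connections become available for rerouting to other neurons. The threshold value and function names below are illustrative assumptions, not the report's actual method.

```python
import numpy as np

def prune_synapses(weights, threshold=0.1):
    """Disconnect (free for rerouting) synapses whose trained weight
    magnitude stays below `threshold`. Returns the pruned weight matrix
    and a boolean mask of the synapses that were kept.

    This is a hypothetical sketch of magnitude-based pruning, not the
    report's actual rerouting algorithm.
    """
    keep = np.abs(weights) >= threshold
    return weights * keep, keep

# Example: a small layer of 4 neurons with 6 synapses each.
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.2, size=(4, 6))
pruned, keep = prune_synapses(w, threshold=0.1)
print("synapses kept:", int(keep.sum()), "of", keep.size)
```

Pruned entries are zeroed, so the remaining network computes the same weighted sums minus the removed connections; in hardware, those freed synapse cells could then be routed to neurons that need them.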

The routing flexibility of the proposed chip allows this algorithm to be implemented efficiently in hardware: up to a certain density, any synapse may be connected to any neuron.

2) The proposed circuits for the chip design have shown excellent results in PSPICE simulations. The sigmoid and sigmoid-derivative circuits show negligible error relative to the theoretical functions over a large input swing. The multiplier cell shows excellent linearity when the input range is reduced to 150 mV using a differential pair, with a common-mode voltage between 2 V and 3 V, in contrast to other designs that use the full multiplier range.
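The "theoretical functions" the circuits are compared against are assumed here to be the standard logistic sigmoid and its derivative, the usual choice for backpropagation hardware; the report does not restate them, so this reference implementation is a sketch under that assumption.

```python
import math

def sigmoid(x):
    """Logistic sigmoid: 1 / (1 + e^-x)."""
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_deriv(x):
    """Derivative of the logistic sigmoid: s(x) * (1 - s(x))."""
    s = sigmoid(x)
    return s * (1.0 - s)

# Sanity check: the analytic derivative matches a central difference.
x, h = 0.7, 1e-6
numeric = (sigmoid(x + h) - sigmoid(x - h)) / (2 * h)
print(abs(numeric - sigmoid_deriv(x)) < 1e-8)  # True
```

A simulated circuit's transfer curve can be compared point-by-point against these functions to quantify the error over the input swing.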

Current-to-voltage conversion and range-reduction circuits were also simulated and shown to perform well, with good linearity over a wide range of inputs. The new method of high-resolution analog multivalued memory offers improved performance and density over previous methods because it requires fewer routing connections.

While the capacitor-based Multivalued-to-Analog Converter has not yet been simulated, parasitic capacitance was extracted from MOSIS test parameters. The mean parasitic capacitance was found to be 12.9%, with a standard deviation of 0.5%. Compensation can remove most of the effect of this parasitic capacitance.
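One way such compensation could work, assuming the 12.9% figure is the mean parasitic contribution relative to the nominal capacitance, is a first-order scaling: dividing the design value by 1.129 cancels the mean effect, leaving only the 0.5% device-to-device spread uncorrected. The scheme below is an illustrative sketch of that arithmetic, not the report's actual compensation circuit.

```python
# Figures taken from the text; the compensation scheme is hypothetical.
PARASITIC_MEAN = 0.129    # 12.9% mean parasitic capacitance
PARASITIC_SIGMA = 0.005   # 0.5% standard deviation

def compensated_value(nominal):
    """Design capacitance to use so that, after the mean parasitic is
    added, the effective capacitance hits the nominal target."""
    return nominal / (1.0 + PARASITIC_MEAN)

# With mean compensation, the residual error is set by the spread alone.
effective = compensated_value(1.0) * (1.0 + PARASITIC_MEAN)
residual = PARASITIC_SIGMA / (1.0 + PARASITIC_MEAN)
print(round(effective, 6))          # 1.0 after mean compensation
print(round(residual * 100, 2), "% residual spread")
```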

Copyright © Malcolm Stagg 2006. All Rights Reserved.
Website: http://www.virtualsciencefair.org/2006/stag6m2. Email: malcolmst@shaw.ca.