Artificial Neural Networks

Typically implemented as computer software, artificial neural networks emulate the human brain's ability to learn and recognize patterns. They consist of an array of neurons as basic processing elements, connected by synapses. As data is presented, an error can be calculated, allowing the network to adjust its synaptic weights to reduce that error. Neural networks are used in applications including business, manufacturing, and many areas of science and engineering. Robotic navigation is a key field where neural networks are required.



Backpropagation (BP) is the most common neural network learning algorithm. Input signals propagate forwards through the network, error signals propagate backwards, and weights are adjusted to reduce the error.
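The BP cycle can be sketched in software as a minimal single-neuron example (the learning rate, target value, and training loop here are illustrative assumptions, not part of the chip design):

```python
import math

def dtanh(x):
    # derivative of tanh(x) is sech^2(x) = 1 - tanh(x)^2
    return 1.0 - math.tanh(x) ** 2

def bp_step(w, x, target, lr=0.1):
    """One Backpropagation step for a single tanh neuron.

    Forward pass: y = tanh(w * x).  The error (y - target)
    propagates backwards, scaled by the sigmoid derivative,
    and the weight moves to reduce the error.
    """
    net = w * x
    y = math.tanh(net)
    delta = (y - target) * dtanh(net)  # backwards-propagating error
    return w - lr * delta * x          # weight update

# train the neuron to map input 1.0 to output 0.5
w = 0.0
for _ in range(200):
    w = bp_step(w, 1.0, 0.5)
```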



Hebbian learning is an unsupervised algorithm: the network adjusts weights as it finds correlations in input patterns, so there is no need to compute an error signal.
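A software sketch of the Hebbian rule (the signal values and learning rate are illustrative assumptions):

```python
def hebbian_step(w, pre, post, lr=0.01):
    # Hebbian update: the weight grows with the correlation of
    # pre- and post-synaptic activity; no error term is needed
    return w + lr * pre * post

# correlated activity on both sides strengthens the synapse
w = 0.0
for pre, post in [(1.0, 1.0), (0.5, 0.8), (1.0, 0.9)]:
    w = hebbian_step(w, pre, post)
```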



Basic arithmetic circuits required for neural networks can be implemented in an analog VLSI (Very Large Scale Integration) CMOS (Complementary Metal Oxide Semiconductor) process using only a few transistors. The basic nonlinear sigmoid circuit of tanh [25] can be implemented as a simple differential pair. Its derivative sech2 [11] (required for backwards propagation) uses an identical circuit, except with a different layout of the two output transistors.
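The identity the two circuits share (the derivative of tanh is sech2) can be checked numerically; this is a software aside, not the circuit itself:

```python
import math

def sech2(x):
    # sech^2(x) = 1 / cosh(x)^2, the derivative of tanh(x)
    return 1.0 / math.cosh(x) ** 2

# compare against a finite-difference derivative of tanh
h = 1e-6
for x in (-2.0, 0.0, 0.7):
    numeric = (math.tanh(x + h) - math.tanh(x - h)) / (2 * h)
    assert abs(sech2(x) - numeric) < 1e-8
```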

The proposed neuron design supports both Backpropagation and Hebbian learning on the same cell. Differential currents are summed at the input, and converted into voltages. The voltages are input into the tanh sigmoid and sech2 sigmoid derivative. The tanh is output from the neuron as a differential voltage. An analog memory averaging cell stores the moving average of the sigmoid output. A multiplier cell calculates the product of the backwards-propagating input and the sech2 of the input signal. An SRAM cell is used to store whether BP or Hebbian is used. If BP is used, the multiplier output is connected to the backwards propagation output using a transmission gate. If Hebbian is used, transmission gates connect both backwards-propagating input and output to the single-ended sigmoid output minus the single-ended average, as a differential voltage.
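The neuron's forward and backward behaviour can be summarised in a small software model (the function and signal names, the averaging constant, and the mode encoding are assumptions made for illustration; the real cell operates on differential analog signals):

```python
import math

def neuron_forward(i_sum, avg, alpha=0.05):
    """Forward path: summed input -> tanh output, plus a moving
    average of the sigmoid output (the analog averaging cell)."""
    y = math.tanh(i_sum)
    avg = (1.0 - alpha) * avg + alpha * y
    return y, avg

def neuron_backward(mode, back_in, i_sum, y, avg):
    """Backward path: in BP mode the backwards input is multiplied
    by sech^2 of the input; in Hebbian mode the backward ports
    carry the sigmoid output minus its moving average."""
    if mode == "BP":
        return back_in / math.cosh(i_sum) ** 2
    return y - avg
```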

Fig. 2: Proposed Multi-Algorithm Neuron Block Diagram

Fig. 3: Proposed Multi-Algorithm Neuron Cell Layout


For Backpropagation, each synapse multiplies a stored weight by its forward input to generate the output. The weight is adjusted by multiplying the forwards and error inputs together and scaling the product by a learning-rate parameter.

In Hebbian learning, the forwards-propagating signal is again multiplied by the stored weight. The weight update is computed as the product of the backwards-propagating signals on each side of the synapse, again scaled by the learning-rate parameter.

The proposed synapse design supports both Backpropagation and Hebbian learning on the same cell. Differential voltages at the input are multiplied by the synaptic weight stored in an analog memory cell. An SRAM cell is used to store whether BP or Hebbian is used. If BP is used, a similar multiplier circuit is connected to the backwards-propagating output using a transmission gate. Weight update multiplies the backwards-propagating input by a value selected by transmission gates. With BP this value is the forwards-propagating input. With Hebbian, it is the backwards-propagating output.
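A behavioural software sketch of the synapse's two modes (the names, learning rate, and the update's sign convention are assumptions; the real cell works on differential analog signals):

```python
def synapse_forward(w, fwd_in):
    # forward path, identical in both modes: stored weight times input
    return w * fwd_in

def synapse_update(mode, w, fwd_in, back_in, back_out, lr=0.01):
    """Weight update: the backwards-propagating input is multiplied
    by a mode-selected value and scaled by the learning rate.
    BP selects the forwards-propagating input; Hebbian selects
    the backwards-propagating output."""
    selected = fwd_in if mode == "BP" else back_out
    return w + lr * back_in * selected
```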

Fig. 4: Proposed Multi-Algorithm Synapse Block Diagram

Fig. 5: Multi-Algorithm Synapse Cell Layout


The key analog cells are formed from the multiplier, sigmoid, and sigmoid-derivative circuits. However, to allow the network to be routed into any configuration, many digital SRAM cells must be used. A basic 6-transistor cell with row select is used for each bit, and row and column decoders allow addressing to change individual bits.

A new arrangement of programmable wires and connections makes the neural network highly routable. SRAM cells are arranged in a new configuration of square arrays of programmable junctions placed diagonally between neurons and synapses (Fig. 6). These arrays are surrounded by SRAM-programmable wires, and lines of programmable junctions are also formed directly between adjacent neurons and synapses. This arrangement makes the network extremely reconfigurable, with any neuron connectable to any other neuron, up to a certain routing density.

A demonstration of the routing circuitry has been constructed from discrete analog switches and flip-flops, as a model of the neural network's routing performance.

Fig. 6: Routing configuration of programmable wires and connections

Fig. 7: Programmable Connection
Fig. 8: Programmable Wire

SRAM cells are also used with transmission-gates to transform Backpropagation neural cells into Hebbian neural cells and vice-versa. Therefore, the neural network may contain an arbitrary mixture of Backpropagation and Hebbian circuits. This may further extend the learning algorithm by having some parts of the network learn by pattern, and other parts of the network learn by feedback.


Complementary Metal Oxide Semiconductor (CMOS)

The design for this neural network chip uses CMOS technology. This process can implement N-channel and P-channel MOSFET transistors, forming many transistor-based processing circuits, both analog and digital. Resistors, diodes, and small capacitors may also be created.


Analog Memory

Digital memory typically uses volatile transistor-based SRAM (Static Random Access Memory); volatile capacitor-based DRAM (Dynamic Random Access Memory), which requires values to be refreshed; or non-volatile EEPROM (Electrically Erasable Programmable Read Only Memory) using floating-gate transistors. SRAM cannot store analog values, but DRAM and EEPROM can be adapted to analog circuits: DRAM can hold values quantized to a low resolution, whereas EEPROM can store any analog value but is limited to a small number of write cycles.

Multi-valued analog and digital circuits are used for the quantization of the multi-valued analog memory. All capacitor-based analog memory cells inherently leak over time, and the period before a significant loss of data is typically a fraction of a second. The memory must therefore be frequently quantized to a known value at some resolution and refreshed.
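The quantize-and-refresh cycle can be illustrated in software (the level count, value range, and leakage figure are assumptions):

```python
def quantize(v, levels=16, vmax=1.0):
    """Snap a stored analog value to the nearest of `levels`
    discrete levels in [0, vmax], as the refresh cycle would."""
    step = vmax / (levels - 1)
    return round(v / step) * step

# a stored value droops slightly between refreshes, then the
# quantizer restores it to its original level
v = quantize(0.6)
leaked = v * 0.99
assert quantize(leaked) == v
```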

Two possibilities have been proposed:

a) High-resolution combination analog/digital memory was proposed some time ago by Lee and Gulak [19]. This uses the analog capacitor for some of the MSBs (Most Significant Bits) and digital RAM for the LSBs (Least Significant Bits). Depending on the process parameters, this may give the best density.

b) A new extension of this idea replaces the digital memory with additional analog memory cells, which multiply the resolution. Each cell is quantized, with the main capacitor quantized to the highest resolution based on the values of the other capacitors: a new multivalued-to-analog converter quantizes the main capacitor to full resolution from the other capacitors, which are themselves quantized at lower resolution.

A new circuit similar to existing C-2C capacitor-based DACs [10] is used as the multivalued-to-analog converter for this quantization. Instead of converting digital bits to an analog signal, it combines multi-valued analog signals, which is possible by solving for capacitor values that work at the required resolution. With compensation for parasitic capacitance, the error, area, and current consumption are all very low.


Extended Algorithms

The FPGA-based supervising controller selects locations to place Hebbian-based monitoring nodes, which provide feedback about the relationship between two areas of the neural network. If a relationship is found, these monitoring nodes are changed into synapses; if not, they are moved to other locations. This forms the basis of the extended learning algorithm to dynamically re-route synapses. Existing neurons are also periodically monitored to find the mean and deviation of their outputs; if a neuron is not being adequately used, it is removed along with any connected synapses.

Dynamic reconfiguration of the network can greatly extend the entire learning process, emulating the human brain's ability to selectively create and remove synaptic connections.

Dynamic Reconfiguration

High routability of the proposed chip allows this algorithm to autonomously change the network layout. Synapses are placed as monitoring nodes with no synaptic weight, randomly throughout the chip, and the chip's routing is modified to connect them to neurons. If the stored weight becomes significant, the synapse is kept; if not, it is removed. Neurons are randomly monitored to determine whether the standard deviation of their output is significant; if not, the neuron and any connected synapses are removed. Synapses are likewise randomly monitored, and those with insignificant weights are removed.
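One monitoring pass of this pruning scheme might look like the following sketch (the thresholds, data structures, and the use of standard deviation over a window of samples are assumptions):

```python
import statistics

def prune_pass(neurons, synapses, w_thresh=0.05, sd_thresh=0.01):
    """Remove insignificant synapses, then under-used neurons and
    any synapses still connected to them.

    neurons:  {neuron_id: [recent output samples]}
    synapses: {synapse_id: (pre_neuron, post_neuron, weight)}
    """
    # drop synapses whose stored weight never became significant
    synapses = {s: (a, b, w) for s, (a, b, w) in synapses.items()
                if abs(w) >= w_thresh}
    # drop neurons whose output deviation is insignificant
    dead = {n for n, out in neurons.items()
            if statistics.pstdev(out) < sd_thresh}
    neurons = {n: o for n, o in neurons.items() if n not in dead}
    # drop synapses connected to removed neurons
    synapses = {s: (a, b, w) for s, (a, b, w) in synapses.items()
                if a not in dead and b not in dead}
    return neurons, synapses
```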

Concept Storage

Sometimes it is desirable to allow learning to continue in the neural network without damaging existing concepts which have already been learned. This algorithm is possible with Backpropagation learning, and may also be used with Hebbian by temporarily changing all nodes to Backpropagation. A constant value is input into all inputs and error inputs associated with the concept. Each refresh controller is accessed to modify the learning rate in each synapse, based on its error-responsibility. Synapses which are most responsible for providing the concept will have low learning rates to prevent modification to the concept.
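The responsibility-based learning-rate adjustment could be sketched as follows (the linear mapping from error-responsibility to learning rate is an assumption):

```python
def protect_concept(responsibility, base_lr=0.1):
    """Scale each synapse's learning rate down by its responsibility
    for the stored concept: fully responsible synapses (r near 1)
    get a learning rate near zero, so the concept is preserved."""
    return {s: base_lr * (1.0 - r) for s, r in responsibility.items()}

# s1 carries the concept and is nearly frozen; s2 stays plastic
lrs = protect_concept({"s1": 0.9, "s2": 0.1})
```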

Copyright © Malcolm Stagg 2006. All Rights Reserved.