Showing 1 - 4 of 4 matches in All Departments

Neural Information Processing and VLSI (Hardcover, 1995 ed.)
Bing J. Sheu, Joongho Choi
R4,494 | Discovery Miles 44 940 | Ships in 12 - 17 working days

Neural Information Processing and VLSI provides a unified treatment of this important subject for use in classrooms, industry, and research laboratories, with the aim of developing advanced artificial and biologically inspired neural networks using compact analog and digital VLSI parallel processing techniques. The book systematically presents various neural network paradigms, computing architectures, and the associated electronic/optical implementations using efficient VLSI design methodologies. Conventional digital machines cannot deliver satisfactory performance on computationally intensive tasks in areas such as intelligent perception, including visual and auditory signal processing, recognition, understanding, and logical reasoning (tasks at which a human being, or even a small animal, performs superbly). Recent research advances in artificial and biological neural networks have established an important foundation for high-performance information processing with more efficient use of computing resources. The key lies in design optimization at various levels of computing and communication in intelligent machines. Each neural network system consists of massively parallel, distributed signal processors, with every processor performing very simple operations and thus consuming little power. The large computational capability of these systems, in the range of a few hundred giga- to several tera-operations per second, derives from collective parallel processing and efficient data routing through well-structured interconnection networks. Deep-submicron very large-scale integration (VLSI) technologies can integrate tens of millions of transistors on a single silicon chip for complex signal processing and information manipulation. The book is suitable for those interested in efficient neurocomputing as well as those curious about neural network system applications. It has been prepared especially for use as a text for advanced undergraduate and first-year graduate students, and is an excellent reference for researchers and scientists working in the fields covered.
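
As a rough illustration of where operation rates in the quoted hundred-giga to several-tera range can come from, the short Python sketch below multiplies an assumed processor count, operations per cycle, and clock rate; the specific figures are hypothetical and are not taken from the book.

# Back-of-the-envelope throughput of a massively parallel processor array.
# All figures below are assumed for illustration only.
num_elements = 100_000        # hypothetical number of simple processing elements
ops_per_cycle = 2             # e.g. one multiply and one add per element per cycle
clock_hz = 10e6               # hypothetical 10 MHz per-element clock

aggregate_ops = num_elements * ops_per_cycle * clock_hz
print(f"{aggregate_ops / 1e9:.0f} giga-ops/s = {aggregate_ops / 1e12:.1f} tera-ops/s")
# -> 2000 giga-ops/s = 2.0 tera-ops/s, i.e. within the "hundred giga to several
#    tera operations per second" range quoted in the description above.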

Hardware Annealing in Analog VLSI Neurocomputing (Hardcover, 1991 ed.)
Bang W. Lee, Bing J. Sheu
R2,935 | Discovery Miles 29 350 | Ships in 10 - 15 working days

Rapid advances in the neural sciences and in VLSI design technology provide an excellent means of boosting the computational capability and efficiency of data and signal processing tasks by several orders of magnitude. With massively parallel processing capability, artificial neural networks can be used to solve many engineering and scientific problems. Owing to their data communication structure, optimized for artificial intelligence applications, neurocomputers are considered the most promising sixth-generation computing machines. Typical applications of artificial neural networks include associative memory, pattern classification, early vision processing, speech recognition, image data compression, and intelligent robot control. VLSI neural circuits play an important role in exploring and exploiting the rich properties of artificial neural networks through programmable synapses and gain-adjustable neurons. The basic building blocks of analog VLSI neural networks are operational amplifiers acting as electronic neurons and synthesized resistors acting as electronic synapses. Synapse weights can be stored on dynamically refreshed capacitors for medium-term storage or on the floating gate of an EEPROM cell for long-term storage. The feedback path in the amplifier can continuously change the output neuron's operation from a unity-gain configuration to a high-gain configuration, and this adjustability of the voltage gain in the output neurons allows hardware annealing to be implemented in analog VLSI neural chips to find optimal solutions very efficiently. Both supervised and unsupervised learning can be implemented with the programmable neural chips.
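
The gain-annealing idea described above can be sketched in software: ramping the neuron gain from a shallow to a steep transfer characteristic plays a role analogous to cooling in simulated annealing, letting a Hopfield-style network settle toward a low-energy state. The Python sketch below is only an illustration of that principle under assumed weights, network size, and gain schedule; it is not code or data from the book.

import numpy as np

# Hopfield-style network with gain-adjustable sigmoid neurons.
# The weights, biases, and annealing schedule are illustrative assumptions.
rng = np.random.default_rng(0)
n = 8
W = rng.standard_normal((n, n))
W = (W + W.T) / 2            # symmetric weights, as in a Hopfield network
np.fill_diagonal(W, 0.0)     # no self-connections
bias = rng.standard_normal(n)

def energy(v):
    # Standard Hopfield energy; annealing aims to reach a low-energy state.
    return -0.5 * v @ W @ v - bias @ v

v = np.zeros(n)              # neuron outputs in [-1, 1]
for gain in np.linspace(0.1, 20.0, 200):   # assumed gain ("annealing") schedule
    for _ in range(10):                     # let the dynamics settle at this gain
        u = W @ v + bias                    # net input to each neuron
        v = np.tanh(gain * u)               # gain-adjustable neuron transfer function
    # Low gain keeps outputs analog and the effective energy surface smooth;
    # high gain drives outputs toward +/-1 and a (near-)binary solution.

print("final state: ", np.sign(v))
print("final energy:", energy(v))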

Neural Information Processing and VLSI (Paperback, Softcover reprint of the original 1st ed. 1995)
Bing J. Sheu, Joongho Choi
R4,311 | Discovery Miles 43 110 | Ships in 10 - 15 working days

Hardware Annealing in Analog VLSI Neurocomputing (Paperback, Softcover reprint of the original 1st ed. 1991)
Bang W. Lee, Bing J. Sheu
R2,778 | Discovery Miles 27 780 | Ships in 10 - 15 working days
