Showing 1 - 4 of 4 matches in All Departments

Neural Information Processing and VLSI (Hardcover, 1995 ed.)
Bing J. Sheu, Joongho Choi
R4,494 Discovery Miles 44 940 Ships in 12 - 17 working days

Neural Information Processing and VLSI provides a unified treatment of this important subject for use in classrooms, industry, and research laboratories, in order to develop advanced artificial and biologically inspired neural networks using compact analog and digital VLSI parallel processing techniques. Neural Information Processing and VLSI systematically presents various neural network paradigms, computing architectures, and the associated electronic/optical implementations using efficient VLSI design methodologies. Conventional digital machines cannot perform computationally intensive tasks with satisfactory performance in areas such as intelligent perception, including visual and auditory signal processing, recognition, understanding, and logical reasoning (where a human being, or even a small animal, can do a superb job). Recent research advances in artificial and biological neural networks have established an important foundation for high-performance information processing with more efficient use of computing resources. The secret lies in design optimization at the various levels of computing and communication in intelligent machines. Each neural network system consists of massively parallel, distributed signal processors, with every processor performing very simple operations and thus consuming little power. The large computational capabilities of these systems, in the range of a few hundred giga to several tera operations per second, derive from collective parallel processing and efficient data routing through well-structured interconnection networks. Deep-submicron very large-scale integration (VLSI) technologies can integrate tens of millions of transistors in a single silicon chip for complex signal processing and information manipulation. The book is suitable for those interested in efficient neurocomputing as well as those curious about neural network system applications. It has been especially prepared for use as a text for advanced undergraduate and first-year graduate students, and is an excellent reference for researchers and scientists working in the fields covered.
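
As a rough illustration of the "massively parallel processors, each performing very simple operations" idea in the blurb above, the Python sketch below (not code from the book, which targets analog/digital VLSI hardware rather than software) models one processing element as a weighted sum followed by a sigmoid, and a layer as many such units evaluated together; the weights, sizes, and inputs are arbitrary examples.

```python
import numpy as np

def neuron(weights, inputs, bias):
    """One processing element: a multiply-accumulate followed by a sigmoid."""
    return 1.0 / (1.0 + np.exp(-(np.dot(weights, inputs) + bias)))

def layer(W, x, b):
    """Many such units at once; the matrix product stands in for parallel hardware."""
    return 1.0 / (1.0 + np.exp(-(W @ x + b)))

x = np.array([0.2, -0.5, 1.0])       # example input signals
W = np.random.randn(4, 3) * 0.1      # synaptic weights for 4 hypothetical neurons
b = np.zeros(4)
print(layer(W, x, b))                # four simple outputs computed together
```

In the VLSI realizations the book discusses, the matrix product above corresponds to physically parallel synapse circuits, which is where the collective processing throughput described in the blurb comes from.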

Hardware Annealing in Analog VLSI Neurocomputing (Hardcover, 1991 ed.)
Bang W. Lee, Bing J. Sheu
R2,935 Discovery Miles 29 350 Ships in 10 - 15 working days

Rapid advances in neural sciences and VLSI design technologies have provided an excellent means to boost the computational capability and efficiency of data and signal processing tasks by several orders of magnitude. With massively parallel processing capabilities, artificial neural networks can be used to solve many engineering and scientific problems. Due to its optimized data communication structure for artificial intelligence applications, the neurocomputer is considered the most promising sixth-generation computing machine. Typical applications of artificial neural networks include associative memory, pattern classification, early vision processing, speech recognition, image data compression, and intelligent robot control. VLSI neural circuits play an important role in exploring and exploiting the rich properties of artificial neural networks by using programmable synapses and gain-adjustable neurons. The basic building blocks of analog VLSI neural networks are operational amplifiers acting as electronic neurons and synthesized resistors acting as electronic synapses. Synapse weight information can be stored in dynamically refreshed capacitors for medium-term storage or in the floating gate of an EEPROM cell for long-term storage. The feedback path in the amplifier can continuously change the output neuron's operation from a unity-gain configuration to a high-gain configuration. The adjustability of the voltage gain in the output neurons allows hardware annealing to be implemented in analog VLSI neural chips to find optimal solutions very efficiently. Both supervised and unsupervised learning can be implemented using these programmable neural chips.
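
The gain-annealing mechanism described above can be sketched in software. The following Python example is an illustrative software analogue, not code from the book: it relaxes a small Hopfield-style network while the neuron gain is swept from a low, near-linear setting to a high, near-binary one, mirroring the cooling schedule of simulated annealing; the weights, biases, and gain schedule are hypothetical.

```python
import numpy as np

def anneal_hopfield(W, b, gains=np.linspace(0.5, 20.0, 40), inner_steps=50):
    """Relax neuron outputs v under weights W and biases b while the gain rises."""
    v = np.zeros(len(b))                 # neuron outputs, bounded in (-1, 1)
    for g in gains:                      # rising gain ~ falling "temperature"
        for _ in range(inner_steps):
            u = W @ v + b                # synaptic weighted-sum input
            v = np.tanh(g * u)           # gain-adjustable neuron (tanh amplifier)
    return np.sign(v)                    # near-binary state at high gain

# Hypothetical example: two mutually reinforcing units settle into agreement.
W = np.array([[0.0, 1.0],
              [1.0, 0.0]])
b = np.array([0.1, 0.1])
print(anneal_hopfield(W, b))             # -> [1. 1.]
```

Starting at low gain lets the network explore smoothly before the high-gain regime locks it into a near-binary solution, which is the software counterpart of sweeping the amplifier's feedback-controlled voltage gain in the analog chips.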

Neural Information Processing and VLSI (Paperback, Softcover reprint of the original 1st ed. 1995)
Bing J. Sheu, Joongho Choi
R4,311 Discovery Miles 43 110 Ships in 10 - 15 working days

Hardware Annealing in Analog VLSI Neurocomputing (Paperback, Softcover reprint of the original 1st ed. 1991)
Bang W. Lee, Bing J. Sheu
R2,778 Discovery Miles 27 780 Ships in 10 - 15 working days
