Vu Huy The - Background
[[Vu Huy The]]
*Background [#zfc727ba]
The biological brain implements massively parallel computations using a complex architecture that is very different from the current von Neumann machine. The brain is a low-power, fault-tolerant, high-performance machine: it consumes only about 20 W, and its circuits continue to operate as the organism requires even when individual elements (neurons, neuroglia, etc.) are perturbed or die.
Conventional neural networks encode information with static input coding, e.g., one pattern encoded as 0011 (binary bits) across 4 input neurons and another as 0010. In a spiking neural network (SNN), by contrast, time-related factors such as spiking rate, spiking order (rank) and inter-spike intervals can be used in addition to the spatial pattern to represent information. This greatly increases the information-processing capacity of the network; a minimal encoding sketch is given below.
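For illustration only (not part of the thesis design), the following Python sketch contrasts the two coding schemes for 4 input neurons: a stochastic rate code driven by the binary pattern, and a time-to-first-spike (latency) code in which stronger inputs fire earlier. Parameter names such as t_window and max_rate are arbitrary choices for the example.
 import numpy as np
 def rate_encode(pattern, t_window=100, max_rate=0.5, rng=None):
     # Rate coding: each '1' input fires with probability max_rate per time
     # step (a Bernoulli spike train); each '0' input stays silent.
     rng = rng or np.random.default_rng(0)
     pattern = np.asarray(pattern)                        # e.g. [0, 0, 1, 1]
     return rng.random((t_window, pattern.size)) < pattern * max_rate
 def latency_encode(values, t_window=100):
     # Temporal (time-to-first-spike) coding: larger inputs spike earlier.
     values = np.asarray(values, dtype=float)
     spike_times = np.round((1.0 - values) * (t_window - 1)).astype(int)
     raster = np.zeros((t_window, values.size), dtype=bool)
     raster[spike_times, np.arange(values.size)] = True
     return raster
 raster_a = rate_encode([0, 0, 1, 1])    # pattern 0011
 raster_b = rate_encode([0, 0, 1, 0])    # pattern 0010
 print(raster_a.sum(axis=0), raster_b.sum(axis=0))  # spike counts per neuron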
SNNs only process information when spikes occur. As a result, an SNN consumes almost no energy when no spikes occur [www]; a small event-driven sketch follows.
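As a purely illustrative sketch (not the hardware design), the event-driven leaky integrate-and-fire (LIF) update below only does work when an input spike arrives; between events the membrane potential decays analytically, so nothing is computed while the input is silent. All constants are made up for the example.
 import math
 def lif_event_driven(spike_times, tau=20.0, threshold=1.0, weight=0.3):
     # Event-driven LIF neuron: state is updated only at input spike times.
     v, last_t, out = 0.0, 0.0, []
     for t in sorted(spike_times):           # input spike times
         v *= math.exp(-(t - last_t) / tau)  # closed-form decay over the gap
         v += weight                         # integrate the incoming spike
         last_t = t
         if v >= threshold:                  # fire and reset
             out.append(t)
             v = 0.0
     return out
 print(lif_event_driven([2.0, 3.5, 4.0, 40.0, 41.0, 42.0, 43.0]))  # -> [43.0]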
A biological SNN (e.g., a mammalian brain) uses spike rates of only a few kHz to complete complex tasks (e.g., visual pattern recognition), with each spike consuming energy on the order of femtojoules per synapse.
Spiking neural network simulations are a flexible and powerful method for investigating the behavior of neuronal systems. However, simulating SNNs in software is slow. An alternative approach is a hardware implementation of such a system, which makes it possible to generate independent spikes accurately and to output spikes simultaneously in real time. In addition, a hardware SNN can take full advantage of the inherent parallelism of the platform. SNNs and ANNs are widely used in signal processing, speech synthesis, pattern recognition, and so forth.
When the system is implemented on an FPGA or in custom hardware, several aspects of computer arithmetic need to be considered: data representation, inner-product computation, implementation of activation functions, storage and update of weights, and the nature of the learning algorithm. A small fixed-point sketch of the first two aspects is shown below.
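As a rough software sketch of the first two points (data representation and inner-product computation), the snippet below emulates a signed Q4.12 fixed-point format and a multiply-free accumulation of quantized weights gated by binary spikes. The Q4.12 choice and all names are assumptions for illustration, not the format used in the actual processor.
 FRAC_BITS = 12                 # Q4.12: 4 integer bits, 12 fractional bits
 SCALE = 1 << FRAC_BITS
 def to_fixed(x):
     # Quantize a real value to a signed 16-bit fixed-point integer (saturating).
     v = int(round(x * SCALE))
     return max(-(1 << 15), min((1 << 15) - 1, v))
 def to_float(v):
     return v / SCALE
 def fixed_dot(weights_q, spikes):
     # Inner product of quantized weights with a 0/1 spike vector: the spikes
     # simply gate an accumulation, so no multiplier is needed in hardware.
     acc = 0
     for w, s in zip(weights_q, spikes):
         if s:
             acc += w
     return acc
 w_q = [to_fixed(w) for w in [0.25, -0.5, 0.75, 0.125]]
 print(to_float(fixed_dot(w_q, [0, 0, 1, 1])))   # -> 0.875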
*Research Goal [#ia9f7128]
The goal of this doctoral thesis is to research and implement an adaptive and low-power Deep Spiking Neural Network Processor based on the [[OASIS Scalable Packet-Switched Network>http://adaptive.u-aizu.ac.jp/aslint/index.php?OASIS]]. In particular, the research consists of:
-(1) An efficient adaptive configuration method that enables reconfiguration of different SNN parameters (spike weights, routing, hidden layers, topology, etc.)
-(2) A mixture of different deep NN topologies (CNN and RNN) shall be investigated
-(3) An efficient multicast fault-tolerant routing algorithm for the neurochip
-(4) An efficient on-chip learning algorithm
-(5) To demonstrate the performance of the algorithms and the system, an FPGA implementation shall be developed and interfaced to a small drone. In addition, a VLSI implementation shall also be developed.
*OASIS [#lfcfd5a0]
-[[OASIS]]
-[[ASL Theses>Theses]]
-NIRGAM Simulator, University of Southampton, UK, [[http://nirgam.ecs.soton.ac.uk/>http://nirgam.ecs.soton.ac.uk/]]. A SystemC-based simulator with configurable NoC parameters.
*Neuromorphic Processors Related Main Papers [#a527057f]
-1.(2015-MEMRISTOR) Yongtae Kim, Yong Zhang, Peng Li, [[''A Reconfigurable Digital Neuromorphic Processor with Memristive Synaptic Crossbar for Cognitive Computing''>http://dl.acm.org/citation.cfm?id=2700234]], ACM Journal on Emerging Technologies in Computing Systems (JETC) - Special Issue on Neuromorphic Computing and Emerging Many-Core Systems for Exascale Computing, Volume 11, Issue 4, April 2015, Article No. 38.
-2.(2011-SRAM) J.-S. Seo et al., [[''A 45nm CMOS neuromorphic chip with a scalable architecture for learning in networks of spiking neurons''>http://ieeexplore.ieee.org/document/6055293/]], 2011 IEEE Custom Integrated Circuits Conference (CICC), San Jose, CA, 2011, pp. 1-4. doi: 10.1109/CICC.2011.6055293
-3.(2010) Janardan Misra, Indranil Saha, [[''Artificial neural networks in hardware: A survey of two decades of progress''>https://drive.google.com/file/d/0B2HMlO4p7SuwRXduX1pkNEpLc1k/view?usp=sharing]], Neurocomputing, 2010.
*ACM Special Issues [#tdfef934]
-[[ACM:Special Issue on Hardware and Algorithms for Learning On-a-chip>http://dl.acm.org/citation.cfm?id=3051701&picked=prox&CFID=734063357&CFTOKEN=62358140]], Feb 2017
-[[ACM:Special Issue on Neuromorphic Computing>http://dl.acm.org/citation.cfm?id=2767119&picked=prox&CFID=734063357&CFTOKEN=62358140]], Volume 11, Issue 4, April 2015
-[[ACM:Special issue on memory technologies Volume 9 Issue 2, May 2013>http://dl.acm.org/citation.cfm?id=2463585&picked=prox&CFID=734063357&CFTOKEN=62358140]]