Yuji Murakami - Master
CENTER:COLOR(#C0C0C0){SIZE(20){[[NASH: Neuro-inspired ArchitectureS in Hardware Project>http://adaptive.u-aizu.ac.jp/aslint/index.php?NASH]]}}
----
CENTER:SIZE(40){COLOR(green){Design of Neural Network Architecture on Reconfigurable Hardware for Traffic Light Detection in Autonomous Driving Vehicle }}
----
***On-going Paper [#t858692d]
-:&ref(Thesis_Japanes.pdf,,);, Date: January 15, 2018
-:&ref(Thesis_Japanes_2.pdf,,);, Date: January 19, 2018
----
**COLOR(red){Research Schedule - Please keep this updated according to your progress} [#pe7f765e]
Please insert your research schedule here. Due date: July 7, 2017
-Step 1. Non-spiking hardware implementation on FPGA
-Step 2. Hardware implementation of a spiking feed-forward neural network using Suzuki's LIF core on FPGA
-Step 3. Spiking back-propagation function and simulation
-Step 4. Modify for NoC (Network-on-Chip)
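Before the spiking network in Step 2 can process images, static pixel intensities must be converted into spike trains. The sketch below illustrates one common scheme, Bernoulli rate coding; the function name, parameters, and rates are illustrative assumptions, not part of the actual design.

```python
import random

def rate_encode(pixel, n_steps, max_rate=0.5, seed=None):
    """Encode a pixel intensity in [0.0, 1.0] as a Bernoulli spike train.

    At each time step a spike (1) is emitted with probability
    proportional to the pixel intensity, capped at max_rate.
    """
    rng = random.Random(seed)
    p = pixel * max_rate
    return [1 if rng.random() < p else 0 for _ in range(n_steps)]

# A bright pixel spikes more often, on average, than a dark one.
bright = rate_encode(1.0, 1000, seed=42)
dark = rate_encode(0.1, 1000, seed=42)
```

In hardware this would typically be realized with a pseudo-random number generator (e.g., an LFSR) and a comparator per input neuron.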
----
*[[NASH seminars>http://adaptive.u-aizu.ac.jp/aslint/index.php?NASH-SEMINAR]] [#tf538694]
----
* Background and Motivation [#hda87a4b]
The biological brain implements massively parallel computation using a complex architecture that is very different from today's von Neumann machines. The brain is a low-power, fault-tolerant, high-performance machine: it consumes only about 20 W, and its circuits continue to operate as the organism requires even when individual elements (neurons, neuroglia, etc.) are perturbed or die. Conventional neural networks encode information with static input coding, e.g., representing one pattern as 0011 (binary bits) across four input neurons and another as 0010. In a spiking neural network (SNN), by contrast, time-related factors such as spiking rate, spike order, and inter-spike intervals can carry information in addition to the pattern code, which greatly increases the information-processing capacity of the network. An SNN processes information only when spikes occur; as a result, SNNs consume almost no energy in the absence of spikes [www]. A biological SNN (e.g., a mammalian brain) uses spike rates of only a few kHz to complete complex tasks such as visual pattern recognition, with each spike consuming energy on the order of femtojoules per synapse.

Spiking neural network simulations are a flexible and powerful method for investigating the behavior of neuronal systems, but simulating SNNs in software is slow. An alternative approach is a hardware implementation, which makes it possible to generate independent spikes accurately and to output spikes simultaneously in real time; a spiking neural network can also take full advantage of the inherent parallelism of hardware. SNNs and ANNs are widely used in signal processing, speech synthesis, pattern recognition, and many other areas.
*Goal [#r6101ae3]
The goals of this research are to (1) design a feed-forward neural network on FPGA for traffic light recognition, and (2) perform design space exploration.
CENTER:&ref(NN_on_FPGA.jpg,,80%);
CENTER:Overall Implementation Approach
//CENTER:&ref(Autonomous-Car.jpg,,80%);
//CENTER:Autonomous Car
CENTER:&ref(stop-sign-recognition.jpg,,80%);
CENTER:Stop Sign Recognition
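On an FPGA, each layer of the feed-forward network is typically realized as fixed-point multiply-accumulate (MAC) units rather than floating-point arithmetic. The sketch below mimics such a datapath in software, assuming a hypothetical Q8.8 fixed-point format with ReLU activation; the format, helper names, and example values are assumptions for illustration, not the actual design.

```python
def quantize(x, frac_bits=8):
    """Round a real value to a signed fixed-point integer (Q-format)."""
    return int(round(x * (1 << frac_bits)))

def fc_layer_fixed(inputs, weights, biases, frac_bits=8):
    """Fully connected layer in integer arithmetic, mirroring an FPGA
    MAC datapath with ReLU activation.

    inputs, weights, and biases are already-quantized integers; the
    product of two Q8.8 values is Q16.16, so the accumulator is shifted
    right by frac_bits to return to Q8.8 before the bias is added.
    """
    out = []
    for w_row, b in zip(weights, biases):
        acc = sum(w * x for w, x in zip(w_row, inputs))
        y = (acc >> frac_bits) + b
        out.append(max(0, y))  # ReLU
    return out

# Tiny 2-input, 2-output example with hypothetical weights.
x = [quantize(v) for v in [0.5, -0.25]]
W = [[quantize(v) for v in row] for row in [[1.0, 2.0], [0.0, -1.0]]]
b = [quantize(0.1), quantize(0.0)]
y = fc_layer_fixed(x, W, b)
```

In real terms the example computes Wx + b = [0.1, 0.25]; with 8 fractional bits the integer outputs are those values scaled by 256. The right-shift after accumulation is the key design choice: it keeps every intermediate value in integer registers, which maps directly to DSP slices on the FPGA.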
* RPS and RPR [#hda87a4b]
-RPS
--2017/04/26:&ref(RPS_20170426.pdf,,);
--2017/06/07:&ref(RPS_20170607.pdf,,);
--2017/07/05:&ref(RPS_20170705.pdf,,);
--2017/09/20:&ref(RPS_20170920.pdf,,);
--2018/10/15: &ref(10_15.pdf,,);&ref(10_15.pptx,,);
--2018/12/7: &ref(RPS_20181207.pdf,,);
-RPR
--2017/05/10:&ref(RPR_20170510.pdf,,);
--2017/06/21:&ref(RPR20170621.pdf,,);
-Research Plan Seminar
--rehearsal: &ref(Research_Plan_Seminar_07312017 .pptx,,);
-Thesis
--Thesis_1 Dec. 18, 2017 : &ref(Yuji_Murakami_Dec_18.zip,,);
--Thesis_1_ENG Apr. 11, 2017 :&ref(IPSJ_ENG.zip,,);
*References [#a16be670]
**COLOR(red){''Multicast Routing''} [#h09400e0]
-[[Multicast Routing References - shared GD folder>https://drive.google.com/drive/folders/0B2HMlO4p7SuwRUVpR0trVWZvRU0?usp=sharing]]
**Others [#u6f047d4]
-R0. [[Scalable Hardware Architecture for Memristor Based Artificial Neural Network Systems>https://drive.google.com/file/d/0B2HMlO4p7SuwNHF3YWloNVEtNkk/view?usp=sharing]]
-R1. [[ニューラルネットワークによる演算 (Computation with Neural Networks)>https://drive.google.com/file/d/0B2HMlO4p7SuwN1Z0dW1pdEdDeEU/view?usp=sharing]], Bachelor's thesis, Department of Physical Science, Faculty of Science, Shinshu University, March 1, 2004.
-R2. [["Adaptive SoCs for Smart Autonomous Systems">http://web-ext.u-aizu.ac.jp/~benab/publications/keynotes/BenAbdallah_PlenaryTalk_STA2016.pdf]], 2016.
-R3. [[A Survey of Neuro-inspired Computing Systems>http://adaptive.u-aizu.ac.jp/aslwp/wp-content/uploads/2017/01/neurosystem-survey-vht2016.pdf]], 2017.
-R4. [[Demo II on FPGA: Design of a Neural Network for Character Recognition with Backpropagation Learning Method>http://adaptive.u-aizu.ac.jp/aslwp/wp-content/uploads/2016/06/20170303-COSCO-LR-BacPG.pdf]]
-[[Memristors and Memristive Systems, 2014 Edition, Kindle Edition>https://www.amazon.com/Memristors-Memristive-Systems-Ronald-Tetzlaff-ebook/dp/B00H5IO80S/ref=sr_1_1?ie=UTF8&qid=1499153248&sr=8-1&keywords=Memristors+and+Memristive+Systems]]
-[[DSNNBP>https://www.frontiersin.org/articles/10.3389/fnins.2016.00508/full]]
-[[https://www.ibm.com/smarterplanet/jp/ja/brainpower/]]