Mimicking Neuro-inspired ArchitectureS in Hardware
[[<== Back to NASH Project>http://adaptive.u-aizu.ac.jp/aslint/index.php?Vu%20Huy%20The]]
* Pig Face Recognition on FPGA using a Deep Convolutional Neural Network [#x8550f64]
Leader: Murakami-1
***Motivation [#x5f82436]
Deep Neural Networks have proven to be powerful tools for real-world tasks such as pattern recognition,
classification, regression, and prediction. However, simulating a large network in real time requires high-performance machines or accelerators.
Typical accelerators for large-scale neural networks use GPUs or ASIC chips. While ASICs deliver high performance, they lack the flexibility to be reconfigured and hence cannot adapt to variations in the designs and models employed. GPUs offer better speedup than multi-core CPUs and good flexibility, but they lack the scalability to handle larger networks.
***Goal and Expected output [#x13225ae]
The goal of this research is to implement an FPGA-based Convolutional Neural Network for Pig Recognition.
-References
--1. [[How Convolutional Neural Networks Work (Japanese translation)>http://postd.cc/how-do-convolutional-neural-networks-work/]]; (English original) [[How do Convolutional Neural Networks work?>http://brohrer.github.io/how_convolutional_neural_networks_work.html]]
--2. [[Cat Face Recognition with CNN>https://drive.google.com/file/d/0B2HMlO4p7SuwUkdoX1pBTWt2NXM/view?usp=sharing]]
--3. [[Image Feature Extraction and Transfer Learning with Deep Convolutional Neural Networks (CNN Survey)>https://drive.google.com/file/d/0B2HMlO4p7SuwbEx4UjdpVGlkXzg/view?usp=sharing]]
--4. [[A Speech Recognition System Using Deep Learning>https://drive.google.com/file/d/0B2HMlO4p7Suwb3ZUNVpEWGdQTGM/view?usp=sharing]]
--5. [[Visualizing and Understanding Convolutional Networks>https://drive.google.com/file/d/0B2HMlO4p7SuwNjZGYlNmMS03Z0E/view?usp=sharing]]
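The core computation to be mapped onto the FPGA is the 2D convolution. A minimal NumPy sketch (assuming a single channel and "valid" padding; this is an illustration of the operation, not the FPGA design itself) shows the sliding multiply-accumulate that a hardware implementation unrolls into parallel units:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Single-channel 2D convolution with 'valid' padding.

    Each output pixel is the sum of elementwise products between the
    kernel and the image patch under it -- the multiply-accumulate
    pattern that FPGA CNN accelerators parallelize.
    """
    ih, iw = image.shape
    kh, kw = kernel.shape
    oh, ow = ih - kh + 1, iw - kw + 1
    out = np.zeros((oh, ow))
    for y in range(oh):
        for x in range(ow):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

# A 3x3 edge-detecting kernel applied to a 5x5 image yields a 3x3 map.
img = np.arange(25, dtype=float).reshape(5, 5)
k = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]], dtype=float)
print(conv2d_valid(img, k).shape)  # (3, 3)
```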
*Implementation of a Recurrent Neural Network on FPGA for [[Video Description Generation>http://adaptive.u-aizu.ac.jp/aslint/index.php?plugin=attach&pcmd=open&file=rnn-video-description-generation.jpg&refer=Mimicking%20Neuro-inspired%20ArchitectureS%20in%20Hardware]] or [[(Image Caption Generation)>http://adaptive.u-aizu.ac.jp/aslint/index.php?plugin=attach&pcmd=open&file=rnn-image-caption-generation.jpg&refer=Mimicking%20Neuro-inspired%20ArchitectureS%20in%20Hardware]] [#s2e8b284]
[[Looking at a sequence of images>http://adaptive.u-aizu.ac.jp/aslint/index.php?plugin=attach&pcmd=open&file=rnn.jpg&refer=Mimicking%20Neuro-inspired%20ArchitectureS%20in%20Hardware]]
and figuring out what is going on.
-References
--1. [[Introduction to Deep Learning (video)>https://www.youtube.com/watch?v=1L0TKZQcUtA]]
--2. [[Multi-layer Recurrent Neural Networks (LSTM, GRU, RNN) for character-level language models in Torch>https://github.com/karpathy/char-rnn]]
--3.[[Sequence to Sequence – Video to Text>http://www.cs.utexas.edu/~ml/papers/venugopalan.iccv15.pdf]]
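For intuition on the recurrence to be implemented in hardware, here is a single step of a vanilla RNN cell, the building block that char-rnn stacks into deeper LSTM/GRU variants (weight names and sizes here are illustrative, not from any of the papers above):

```python
import numpy as np

def rnn_step(x, h_prev, Wxh, Whh, b):
    """One vanilla-RNN step: the new hidden state mixes the current
    input with the previous state, so context flows through time."""
    return np.tanh(x @ Wxh + h_prev @ Whh + b)

rng = np.random.default_rng(0)
x = rng.standard_normal(10)                 # embedded input (e.g. one character)
h = np.zeros(16)                            # initial hidden state
Wxh = rng.standard_normal((10, 16)) * 0.1   # input-to-hidden weights
Whh = rng.standard_normal((16, 16)) * 0.1   # hidden-to-hidden weights
b = np.zeros(16)

# Unrolling the same cell over a sequence carries context through h.
for _ in range(5):
    h = rnn_step(x, h, Wxh, Whh, b)
print(h.shape)  # (16,)
```

On an FPGA, the two matrix-vector products per step are the natural targets for parallel multiply-accumulate arrays.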
''Attention-steering application:'' Example: when a drone flies at very high speed (300+ fps), it cannot capture and process the whole scene at once; instead, a model can steer its attention around the image.
--[[Multiple Object Recognition with Visual Attention, 2015>https://arxiv.org/pdf/1412.7755.pdf]]
--[[DRAW: A Recurrent Neural Network For Image Generation>http://jmlr.org/proceedings/papers/v37/gregor15.pdf]]
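The mechanism underlying both papers above is extracting a small "glimpse" window at attended coordinates instead of processing every pixel. A minimal sketch (the function name and fixed square window are illustrative; the real models use learned, multi-scale glimpses):

```python
import numpy as np

def glimpse(image, center_y, center_x, size):
    """Extract a size x size crop centered at (center_y, center_x),
    clipped to the image borders -- the foveal window an attention
    model steers around the scene frame by frame."""
    h, w = image.shape
    half = size // 2
    y0 = min(max(center_y - half, 0), h - size)
    x0 = min(max(center_x - half, 0), w - size)
    return image[y0:y0 + size, x0:x0 + size]

frame = np.arange(64).reshape(8, 8)
patch = glimpse(frame, 2, 2, 4)   # small window near the top-left
print(patch.shape)  # (4, 4)
```

Because only the glimpse is processed each step, the per-frame compute is fixed regardless of the full image size, which is what makes attention attractive for high-frame-rate settings.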
*DL Course [#b0fc9054]
-http://selfdrivingcars.mit.edu/
-http://cs231n.github.io/
-Neuromorphic Engineering: http://avlsi.ini.uzh.ch/classwiki/doku.php