Neuromorphic
[[ASL Wiki]]
#CONTENTS
*Special Issues [#f27679fc]
-[[ACM:Special Issue on Hardware and Algorithms for Learning On-a-chip>http://dl.acm.org/citation.cfm?id=3051701&picked=prox&CFID=734063357&CFTOKEN=62358140]], Volume 13 Issue 3, May 2017
-[[SECTION: Special Issue on Neuromorphic Computing>http://dl.acm.org/citation.cfm?id=2767119&picked=prox&CFID=734063357&CFTOKEN=62358140]], Volume 11 Issue 4, April 2015
-[[ACM Journal on Emerging Technologies in Computing Systems (JETC) - Special issue on memory technologies Volume 9 Issue 2, May 2013>http://dl.acm.org/citation.cfm?id=2463585&picked=prox&CFID=734063357&CFTOKEN=62358140]]
*How the Brain Works [#r004783c]
Although the brain is often compared to a computer, the electrical signals function very differently. Your brain contains tens of billions of neurons, each forming connections, called synapses, with thousands of other neurons. When stimulated by sensory information or other neurons, an electrical signal called an action potential spreads throughout the neuron. The action potential causes the release of chemicals called neurotransmitters that spread information to surrounding cells. The pattern of neurons firing action potentials is responsible for producing thoughts and actions.
''Potassium in the Brain'':
The action potential in neurons depends on electrolytes, mainly sodium and potassium. Low potassium levels cause your brain to slow down. Neurons with low potassium require more stimulation before firing an action potential and cannot fire action potentials rapidly. You may experience this as fatigue, confusion or the inability to start actions or finish trains of thought. The symptoms may be informally called brain fog and will not be corrected by taking stimulants, rest, good nutrition or removing stress.
*Neuron models - There are three types of neurons: [#s900ae5e]
[[CENTER:&ref(neron_types.jpg,,80%);>https://www.youtube.com/watch?v=HZh0A-lWSmY]]
CENTER:The Neuron
-1. ''Binary signal neuron'': The binary neuron model was developed by McCulloch and Pitts in 1943. It takes the weighted sum of the inputs and compares the result with a threshold value: if the sum is larger than the threshold, the neuron outputs 1, otherwise 0.
-2. ''Continuous value neuron'': The continuous value neuron model, as its name suggests, differs from the binary model in that its output is a continuous value instead of a binary one. The activation function of this model is usually a sigmoid such as the hyperbolic tangent or the logistic function. The output of a neuron can be interpreted either as the value itself or as the probability of producing 1 as output. Most state-of-the-art machine learning algorithms employ this type of neuron model.
-3. ''Spiking neuron'': The third type is the spiking neuron model. It takes spiking events as input and also outputs spiking events. Information is stored in the timing of spike events instead of in the spiking frequency, as in the binary and continuous value models. The spiking neuron model is considered the most biologically plausible of the three, since transmitting spikes is how real neurons communicate with each other. In such a model, a neuron computes the weighted sum of all spiking input currents integrated over time; when the membrane potential rises above a certain threshold, the neuron fires a spike. A spiking neuron model can usually be described by an electronic circuit or a set of ordinary differential equations, such as the leaky integrate-and-fire model, the Izhikevich model, or the Hodgkin-Huxley model.
-''Activation function'': limits the amplitude of the output of a neuron. Typically, the normalized amplitude range of the output is [0, 1] or alternatively [-1, 1].
-''Bias'': the model of a neuron also includes an externally applied bias (threshold) w_k0 = b_k that has the effect of lowering or raising the net input of the activation function.
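The binary and continuous value models above can be sketched directly. This is a minimal illustration (the AND-gate weights and threshold are chosen by hand for the example, not taken from any reference):

```python
import math

def binary_neuron(inputs, weights, threshold):
    """McCulloch-Pitts neuron: weighted sum compared against a threshold."""
    s = sum(x * w for x, w in zip(inputs, weights))
    return 1 if s > threshold else 0

def continuous_neuron(inputs, weights, bias=0.0):
    """Continuous value neuron with a logistic (sigmoid) activation in (0, 1)."""
    s = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-s))

# An AND gate realized by a binary neuron: weights (1, 1), threshold 1.5
print(binary_neuron([1, 1], [1, 1], 1.5))  # -> 1
print(binary_neuron([1, 0], [1, 1], 1.5))  # -> 0
```

The sigmoid output of `continuous_neuron` stays in (0, 1), matching the normalized amplitude range noted above.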
**[[Biological Neuron Model>https://en.wikipedia.org/wiki/Biological_neuron_model]] [#q9221c4f]
*Memory [#t554edd1]
-http://science.howstuffworks.com/life/inside-the-mind/human-brain/human-memory4.htm
*ANN Classification [#hfed3f7f]
CENTER:[[&ref(Classification-of-ANN.jpg,,80%);>https://drive.google.com/file/d/0B2HMlO4p7SuwemZxTkl3c1YzVW8/view?usp=sharing]]
*Neuromorphic [#m434927c]
- Originally, the term 'neuromorphic' (coined by Carver Mead in 1990) was used to describe systems comprising analog integrated circuits, fabricated using standard complementary metal oxide semiconductor (CMOS) processes.
--Paper: [[Mead C 1990 Neuromorphic electronic systems Proc. IEEE 78 1629–36>http://booksc.org/book/20539257]]
*SNN [#t2063fe0]
[[Spiking neural networks (SNN) are a set of neurons that communicate through spikes and compute through the timing of the spikes>https://drive.google.com/file/d/0B2HMlO4p7SuwemZxTkl3c1YzVW8/view?usp=sharing]]. These spiking neurons have become popular since they mimic the spiking nature of biological neurons and can reproduce their spiking patterns. In some cases, SNNs are more biologically plausible and more powerful than non-spiking networks [20][21]. The Hodgkin-Huxley model [3] is one of the most detailed and best-known models of spiking neurons: it describes subcellular behavior, membrane current generation, and the propagation of neural spikes. Because of this level of detail, however, the Hodgkin-Huxley model is too complex for large-scale simulation or hardware implementation. The Izhikevich model [4] is a relatively recent model that is simple and performs well in both computational efficiency and functional richness.
For large-scale hardware implementations, simplified models such as the integrate-and-fire model are preferred, due to the limited hardware resources of any design, and the same holds for this thesis. Such simpler models can emulate the spiking nature of neurons and most of their behaviors while keeping the cost of computation comparatively low [22]. In this thesis, the neuron model used is the leaky integrate-and-fire model [[[Ref.5>http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=6945192]]], which is less complex than the Hodgkin-Huxley and Izhikevich models and which we expect to lead to a lower-power implementation. Details of the leaky integrate-and-fire model are discussed in forthcoming sections.
-An SNN consists of spiking neurons that do not generate an output at every time step like other artificial neural networks. Instead, a neuron produces a spike asynchronously when its membrane potential (Vm) reaches a specific value. Information transfer in an SNN hence takes place through precise spike times or spike rates.
-Spiking neural network implementation using CPU and CUDA engine
--Introduction https://www.youtube.com/watch?v=Oe1ldwEEwsI
--Code https://github.com/jirihybek/cuda-nnet
-Ref. 5: Z. Wang, L. Guo, and M. Adjouadi, "A biologically plausible Generalized Leaky Integrate-and-Fire neuron model," in 2014 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2014, pp. 6810-6813.
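The leaky integrate-and-fire dynamics described above can be sketched in discrete time. This is a toy Euler-step simulation, not the implementation from Ref. 5; the rest potential, threshold, time constant, and input values are illustrative:

```python
def lif_simulate(input_current, v_rest=0.0, v_thresh=1.0, tau=10.0, dt=1.0):
    """Leaky integrate-and-fire: the membrane potential leaks toward rest,
    integrates the input current, and emits a spike on crossing the threshold."""
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        # Euler step of dV/dt = -(V - V_rest)/tau + I
        v += dt * (-(v - v_rest) / tau + i_in)
        if v >= v_thresh:
            spikes.append(t)  # record the spike time
            v = v_rest        # reset the membrane after firing
    return spikes

# A constant suprathreshold input drives periodic firing
print(lif_simulate([0.3] * 20))  # -> [3, 7, 11, 15, 19]
```

Note that the output carries information in spike *times*, matching the SNN description above: change the input current and the spike timing shifts.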
*CNN [#q76763f8]
-[[Lec 01>https://www.youtube.com/watch?v=FmpDIaiMIeA]] --> [[paper 1>https://www.cs.nyu.edu/~fergus/papers/zeilerECCV2014.pdf]]
-[[Lec 2>https://www.youtube.com/watch?v=s716QXfApa0]]
-[[Lec 3>https://www.youtube.com/watch?v=3iCifD8gZ0Q]]
-[[Lec 4>https://www.youtube.com/watch?v=SQ67NBCLV98]]
-[[Lec 5 CNN with Matlab>https://www.youtube.com/watch?v=tGuwCDUNkeE]]
*RNN [#qcef5bdb]
-[[Lec 01>https://www.youtube.com/watch?v=AqEF2HIMjYA]]
*[[Learning>learning]] [#ie6daeb6]
CENTER:&ref(STDP.jpg,,80%);
Learning in spiking neural networks is based on the synaptic plasticity changes observed in biology. One of the most important learning rules in such networks is Spike Timing Dependent Plasticity (STDP).
The main principle of STDP is that synaptic plasticity is updated according to the difference in spike timing between the pre- and post-synaptic neurons.
There are many variations of STDP, but all of them follow this basic principle. For example, in the figure above, when the pre-synaptic neuron ''j'' fires within approximately 40 ms before the post-synaptic neuron ''i'', the synaptic weight from ''j'' to ''i'' will increase; the closer the firing times of the two neurons, the more the synaptic weight will increase. Conversely, if the pre-synaptic neuron fires within approximately 40 ms after the post-synaptic neuron, the synaptic weight will decrease.
Offline training: weights are pre-defined by software training and need only a one-time load into the array; conventional RRAM with gradual reset only is good enough.
Online training: weights are updated at run time; special RRAM with both smooth set and reset is needed. Online, real-time learning in neuromorphic circuits has been implemented through variants of Spike-Timing-Dependent Plasticity (STDP). Current implementations have used either floating-gate devices or memristors to implement such learning synapses together with non-volatile storage. However, these approaches require high voltages (3-12 V) for weight updates and entail high energy for learning (4-30 pJ/write).
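A minimal pair-based STDP rule can make the timing dependence concrete. The constants `a_plus`, `a_minus`, and `tau` here are illustrative (with `tau` roughly matching the ~40 ms window noted above), not values from any cited implementation:

```python
import math

def stdp_update(w, dt, a_plus=0.1, a_minus=0.12, tau=20.0, w_max=1.0):
    """Pair-based STDP, with dt = t_post - t_pre (milliseconds).
    Pre-before-post (dt > 0) potentiates; post-before-pre (dt <= 0) depresses.
    The magnitude decays exponentially with |dt|."""
    if dt > 0:
        w += a_plus * math.exp(-dt / tau)   # LTP branch
    else:
        w -= a_minus * math.exp(dt / tau)   # LTD branch
    return max(0.0, min(w_max, w))          # keep the weight bounded

w0 = 0.5
w_ltp = stdp_update(w0, dt=+5.0)  # pre fires 5 ms before post -> weight grows
w_ltd = stdp_update(w0, dt=-5.0)  # pre fires 5 ms after post -> weight shrinks
```

The exponential factor captures the "closer firing times, larger change" behavior described above.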
CENTER:[[&ref(List_of_Neuron_models.jpg,,80%);>https://drive.google.com/file/d/0B2HMlO4p7SuwemZxTkl3c1YzVW8/view?usp=sharing]]
-References:
--1. G.-Q. Bi and M.-M. Poo, "Synaptic modifications in cultured hippocampal neurons: Dependence on spike timing, synaptic strength, and postsynaptic cell type," The Journal of Neuroscience, 1998.
--2.[[Adaptive Neural Algorithms: the What, Why, and How, Brad Aimone>http://nice.sandia.gov/documents/2015/Aimone_NICE_2015.pptx]] --> On-chip Learning
*Simulation [#z253d7c8]
-[[Spike Simulator>http://www.lloydwatts.com/spike2.html]] -> fast event-driven simulator, called Spike, for simulating large networks of simple spiking neurons.
-[[N2S3>https://sourcesup.renater.fr/wiki/n2s3]]
*Applications [#tf665c60]
- [[2014 Neuromorphic Electronic Circuits for Building Autonomous Cognitive Systems Chicca, Elisabetta, Stefanini, Fabio, Bartolozzi, Chiara, Indiveri, Giacomo, Vol. 102, No. 9, September 2014 Proceedings of the IEEE>http://booksc.org/book/30265662]]
-[[2001G. Indiveri, ‘‘A neuromorphic VLSI device for implementing 2-D selective attention systems,’’ IEEE Trans. Neural Netw., vol. 12, no. 6, pp. 1455–1463, Nov. 2001.>http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=963780]]
-[[2000. G. Indiveri and R. Douglas, ‘‘Robotic vision: Neuromorphic vision sensor,’’ Science, vol. 288, pp. 1189–1190, May 2000.>http://booksc.org/book/22427835]]
*Memristor, others [#b8608ba8]
Definition: COLOR(red){According to the characterizing mathematical relations, the memristor would hypothetically operate in the following way: The memristor's electrical resistance is not constant but depends on the history of current that had previously flowed through the device, i.e., its present resistance depends on how much electric charge has flowed in what direction through it in the past; the device remembers its history — the so-called non-volatility property.[2] When the electric power supply is turned off, the memristor remembers its most recent resistance until it is turned on again [[Wikipedia>https://en.wikipedia.org/wiki/Memristor]]}
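The charge-history dependence in the definition above can be illustrated with a toy linear-drift model (loosely inspired by the HP linear-drift memristor model; `r_on`, `r_off`, `k`, and `x0` are made-up illustrative parameters):

```python
def memristor_resistance(current_history, dt=1e-3,
                         r_on=100.0, r_off=16000.0, k=1e3, x0=0.5):
    """Toy linear-drift memristor: an internal state x in [0, 1] integrates
    the current that has flowed, so the present resistance depends on the
    charge history and is retained ('remembered') when the current stops."""
    x = x0
    for i in current_history:
        x += k * i * dt            # state drifts with charge q = integral of i dt
        x = max(0.0, min(1.0, x))  # state is physically bounded
    return r_on * x + r_off * (1.0 - x)

r_start = memristor_resistance([])            # no current: resistance unchanged
r_after = memristor_resistance([1e-3] * 100)  # positive current lowers resistance
```

Reversing the sign of the current drives the state back, which is the non-volatile, history-dependent behavior the definition describes.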
- [[2012 ''Why Are Memristor and Memistor Different Devices?'',IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS—I: REGULAR PAPERS, VOL. 59, NO. 11, NOVEMBER 2012 2611>http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=6192335]]
-[[2013. Integration of nanoscale memristor synapses in neuromorphic computing architectures>http://booksc.org/dl/22867956/206b9b]], Nanotechnology 24 (2013) 384010 (13pp)
Note:
-L. O. Chua, “Memristor-the missing circuit element,” IEEE Trans. Circuit Theory, vol. CT-18, no. 5, pp. 507–519, 1971.
- L. O. Chua, “Resistance switching memories are memristors,” Appl. Phys. A, vol. 102, pp. 765–783, 2011.
*Conductive-bridge random access memory (CBRAM) [#oed41ee9]
-2016. [[Scalable Neuron Circuit Using Conductive-Bridge RAM for Pattern Reconstructions, IEEE TRANSACTIONS ON ELECTRON DEVICES, VOL. 63, NO. 6, JUNE 2016>http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=7450157]]
*References/Projects [#jc744550]
***Singapore http://www3.ntu.edu.sg/home/arindam.basu/writings.htm [#t42f9340]
-[[A Low-Voltage, Low Power STDP Synapse Implementation Using Domain-Wall Magnets for Spiking Neural Networks>http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=7527390]], IEEE ISCAS, Montreal, May, 2016.
***Switzerland - EPFL Lausanne [#m937b092]
-https://infoscience.epfl.ch/search?ln=en&p=Neural+Network+Th%C3%A8se&jrec=31
--[[2015 These H. Zhang, J. d. R. Millán Ruiz (Dir.). Connectivity Analysis of Brain States and Applications in Brain-Computer Interfaces. Thèse EPFL, n° 6779 (2015)>https://infoscience.epfl.ch/record/213238/files/EPFL_TH6779.pdf]]
--[[2015. E. Fischi Gomez, J.-P. Thiran (Dir.). Connectomics across development: towards mapping brain structure from birth to childhood. Thèse EPFL, n° 6754 (2015)>https://infoscience.epfl.ch/record/212720/files/EPFL_TH6754.pdf]]
--[[F. Zenke, W. Gerstner (Dir.). Memory formation and recall in recurrent spiking neural networks. Thèse EPFL, n° 6260 (2014)>https://infoscience.epfl.ch/record/203260/files/EPFL_TH6260.pdf]]
--http://robogen.org/
---Paper: https://infoscience.epfl.ch/record/200995/files/978-0-262-32621-6-ch022.pdf
--[[A Comprehensive Workflow for General-Purpose Neural Modeling with Highly Configurable Neuromorphic Hardware Systems>https://arxiv.org/pdf/1011.2861.pdf]]
--[[PhD Thesis: Supernumerary Robotic Arm for Three-Handed Surgical Application: Behavioral Study and Design of Human-Machine Interface.]]
***Neuromorphic Computing at Tennessee [#q9ca8abd]
-[[DANA on FPGA>http://neuromorphic.eecs.utk.edu/pages/research-overview/]]
--[[2016 An Application Development Platform for Neuromorphic Computing.pdf>http://neuromorphic.eecs.utk.edu/raw/files/publications/2016-IJCNN-Dean.pdf]]
---[[Slides :Dynamic Adaptive Neural Network Arrays: A Neuromorphic Architecture.pdf>http://ornlcda.github.io/MLHPC2015/presentations/5-Katie.pdf]]
---[[more slides from MLHPC2015 conference.html>http://ornlcda.github.io/MLHPC2015/program.html]]
***Europe [#g572315b]
-[[NeuRAM Cube: NEUral computing aRchitectures in Advanced Monolithic 3D-VLSI nano-technologies>http://www.neuram3.eu/project-work-plan]]
---[[ESSDERC 2016 Workshop Presentations>http://www.neuram3.eu/achievements/essderc-2016-workshop-presentations]]
***Japan [#k672528c]
-Tohoku Univ.: Soft Computing Integrated Systems Lab, Yoshihiko HORIO, Professor, http://www.ecei.tohoku.ac.jp/ecei_web/Laboratory/horio_e_index.html
-UEC: The University of Electro-Communications, Brain Science Inspired Life Support Research Center
--Project Professor Shigeru TANAKA http://tanaka-lab.net/
--http://kjk.office.uec.ac.jp/Profiles/63/0006220/prof_e.html
*Multicast Routing [#t995dafc]
-[41]Joshi, S.; Deiss, S.; Arnold, M.; Jongkil Park; Yu, T.; Cauwenberghs, G.; , [["Scalable event routing in hierarchical neural array architecture with global synaptic connectivity,">http://ieeexplore.ieee.org/abstract/document/5430296/]] Cellular Nanoscale Networks and Their Applications (CNNA), 2010 12th International Workshop on , vol., no., pp.1-6, 3-5 Feb. 2010
-[42] Zamarreno-Ramos, C.; Linares-Barranco, A.; Serrano-Gotarredona, T.; Linares-Barranco, B.; , [["Multicasting Mesh AER: A Scalable Assembly Approach for Reconfigurable Neuromorphic Structured AER Systems>http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=6211459]]. Application to ConvNets," Biomedical Circuits and Systems, IEEE Transactions on , vol.PP, no.99.
-[43] A. K. Mishra, N. Vijaykrishnan, Chita R. Das, [["A Case for Heterogeneous On-Chip Interconnects for CMPs">http://booksc.org/book/44790068]], 38th International Symposium on Computer Architecture (ISCA, 2011).
-[44] Matos, D.; Concatto, C.; Carro, L.; Kastensmidt, F.; Kreutz, M.; Susin, A.; , [["Highly efficient reconfigurable routers in Networks-on-Chip,">http://booksc.org/book/22603152]] Very Large Scale Integration (VLSI-SoC), 2009 17th IFIP International Conference on , vol., no., pp.165-170, 12-14 Oct. 2009.
-[45] N. E. Jerger, L. S. Peh, and M. Lipasti, [[“Virtual Circuit Tree Multicasting: A Case for On-Chip Hardware Multicast Support>http://booksc.org/book/30855336]],” Proceedings of the 35th Annual International Symposium on Computer Architecture (ISCA), June, 2008.
-[46]. J. Wu, and S. Furber, “[[A Multicast Routing Scheme for a Universal Spiking Neural Network Architecture>http://booksc.org/book/42146358]],” Journals of Computers vol. 53, no. 3, pp. 280-288, March 2010.
*How to Generate Artificial Spike train [#gede1c61]
The input stimuli are a series of Poisson spike trains, generated artificially and sent via the AER protocol to the chip's virtual synapses.
-[[Poisson Model of Spike Generation>http://www.cns.nyu.edu/~david/handouts/poisson.pdf]]
*LTP and LTD [#tec17953]
-[[LTP Video>https://www.youtube.com/watch?v=vso9jgfpI_c]]
***AMPA and NMDA Receptors [#q05f12aa]
&ref(SAMPA and NMDA Receptors.jpg,,40%);
-[[AMPA and NMDA Receptors>https://www.youtube.com/watch?v=FfZNn2hRe1s]]
[[Long Term Potentiation and Memory Formation, Animation>https://www.youtube.com/watch?v=4Hm08ksPtMo]]
&ref(signals-to-hypocupsu.jpg,,40%);
&ref(wired-back-to-the-cortex.jpg,,60%);
&ref(brain-uses-synapse-for-communication.jpg,,60%);
&ref(weak-and-strong-synapses.jpg,,60%);
***Glutamate receptors [#w97556c2]
They are responsible for the glutamate-mediated postsynaptic excitation of neural cells, and are important for neural communication, memory formation, learning, and regulation. Ref. https://en.wikipedia.org/wiki/Glutamate_receptor
----
*Tools [#pdfaf4fd]
-http://openvibe.inria.fr/downloads/ Look at MindWave Mobile or Brain Rhythm BR8 (PRF-OKU RECOM)
*Workshops [#q99ce29d]
-[[NICE 2017>https://www.src.org/calendar/e006125/]]
-[[NICE 2016>https://www.youtube.com/watch?v=skLG-MBj0OI&index=10&list=PLOyuQaVrp4qqkSep0TP20Gw9mzFyp-GH9]]
-[[Neuromorphic2016>http://ornlcda.github.io/neuromorphic2016/files/ORNLNeuromorphicComputingWorkshop2016Report.pdf]]
-[[ESSDERC 2016 Workshop Presentations>http://www.neuram3.eu/achievements/essderc-2016-workshop-presentations]]
----
-Feed-forward Neural Networks
-Recurrent Neural Networks (RNNs)
-Convolutional Neural Networks (CNNs)
-Recursive Neural Networks
*Funds [#m9967c1c]
-https://docs.google.com/spreadsheets/d/1Fz4B4e1miLOm-7sM2I-5VIqLpNT88xJK7ArhL_YTwIU/edit?ts=5919347e#gid=0
-----
*Tools [#ead6d8fe]
-http://image.online-convert.com/convert-to-eps
*KTH [#n6e4893d]
http://kth.diva-portal.org/smash/search.jsf?dswid=-6128
*Existing Deep Learning Frameworks [#ledb3c1b]
*** Caffe [#kc7f59e9]
Caffe is a framework developed by the Berkeley Vision and Learning Center and community contributors [JSD+14]. As the examples demonstrate, the software can be used to train state-of-the-art neural networks on the ImageNet data set. It utilizes the available CPUs or, if compatible, GPUs as well. Support for the mobile nVidia Tegra K1 GPU was recently added.
*** Torch [#hebfd329]
Torch is another computing framework which supports deep learning algorithms [CBM02]. It can use GPUs to speed up computation. Recent ports brought support for mobile devices such as iOS and Android, and for FPGAs.
*** Theano [#fa48b6e8]
Theano also supports deep learning algorithms, with efficient computation on multi-dimensional arrays [BLP+12]. The Python library includes GPU support as well as symbolic differentiation. Support for mobile devices has not yet been a core feature of the software.
*** Deeplearning4j [#h741c4f4]
Deeplearning4j tries to solve some issues of the other frameworks by using Java as a more robust and portable language and by providing commercial support. This enables the framework to run on any platform with Java support. Additionally, strong support for scaling over many parallel GPUs or CPUs is implemented.
***TensorFlow [#pd0acb25]
TensorFlow is an open-source framework for numerical computation, which was recently released by Google [AAB+]. It facilitates large-scale machine learning on distributed systems. Without code changes, it can run on multiple CPUs or GPUs in a server or on a mobile device.
*Activation function [#p5da14a2]
-What makes neural networks a nonlinear classification model?
--https://stats.stackexchange.com/questions/222639/what-makes-neural-networks-a-nonlinear-classification-model/222642
--[[Tutorial>http://www2.econ.iastate.edu/tesfatsi/NeuralNetworks.CheungCannonNotes.pdf]]
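The nonlinearity question above can be answered with a concrete example: two stacked linear layers collapse to a single linear layer (W2(W1 x + b1) + b2 = (W2 W1) x + (W2 b1 + b2)), so without a nonlinear activation the network can only draw linear decision boundaries. XOR is not linearly separable, yet a two-layer network with ReLU activations computes it; the weights below are hand-picked for illustration:

```python
def linear(x, w, b):
    """Affine layer: y_i = sum_j w[i][j] * x[j] + b[i]."""
    return [sum(wi * xi for wi, xi in zip(row, x)) + bi
            for row, bi in zip(w, b)]

def relu(v):
    """Elementwise nonlinearity that breaks the linear-layer collapse."""
    return [max(0.0, vi) for vi in v]

def xor_net(x1, x2):
    # Hidden layer: h1 = relu(x1 + x2), h2 = relu(x1 + x2 - 1)
    h = relu(linear([x1, x2], [[1, 1], [1, 1]], [0, -1]))
    # Output: y = h1 - 2*h2, thresholded at 0.5
    (y,) = linear(h, [[1, -2]], [0])
    return 1 if y > 0.5 else 0
```

Removing `relu` makes `xor_net` an affine function of its inputs, and no affine function can realize XOR.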
*WWW Labs [#p20c4539]
-Neuromorphic - Architectures, Learning, Applications (University of Tennessee)
--http://neuromorphic.eecs.utk.edu/pages/research-projects/
-Neuromorphic Laboratory (Boston University)
-- http://nl.bu.edu/
-Neuromorphic and Cognitive Integrated Circuits
-- http://www.lumerink.com/pages/neuromorphic.html
-Neuromorphic Machine Intelligence Lab Cognition, Neuroscience and Machine Learning
--http://nmi-lab.org/
-Computational Evolutionary Intelligence Lab, USA
--http://cei.pratt.duke.edu/research/neuromorphic-computing
-Department of Human Intelligence Systems, Japan
--http://www.brain.kyutech.ac.jp/people/people_e.html
-http://www.brain.kyutech.ac.jp/coe21/english/sub_ishii.html
-Neuro-computer, Japan
-- Workshop: http://www.airc.aist.go.jp/info_details/2017330.html#1
-The 3rd Workshop on Bio-Inspired Energy-Efficient Information Systems, the University of Tokyo, March 12th, 2018.
--http://park.itc.u-tokyo.ac.jp/EEIP-SCP/workshop20180312/index.html
--http://park.itc.u-tokyo.ac.jp/EEIP-SCP/workshop20160302/program.html
-The 2nd Neuromorphic Research Retreat in AIST
-The 4th brainware seminar (H29), (Research Institute of Electrical Communication, Tohoku university).
--http://www.riec.tohoku.ac.jp/ja/events/others/2018/01/post-17050/
*[[Spike Computing]] [#x176dca1]
終了行:
[[ASL Wiki]]
#CONTENTS
*Specila ISssues [#f27679fc]
-[[ACM:Special Issue on Hardware and Algorithms for Learning On-a-chip>http://dl.acm.org/citation.cfm?id=3051701&picked=prox&CFID=734063357&CFTOKEN=62358140]], Volume 13 Issue 3, May 2017
-[[SECTION: Special Issue on Neuromorphic Computing>http://dl.acm.org/citation.cfm?id=2767119&picked=prox&CFID=734063357&CFTOKEN=62358140]],Volume 11 Issue 4, April 2015
-[[ACM Journal on Emerging Technologies in Computing Systems (JETC) - Special issue on memory technologies Volume 9 Issue 2, May 2013>http://dl.acm.org/citation.cfm?id=2463585&picked=prox&CFID=734063357&CFTOKEN=62358140]]
--
*How the brain Works [#r004783c]
Although the brain is often compared to a computer, the electrical signals function very differently. Your brain contains tens of billions of neurons, each forming connections, called synapses, with thousands of other neurons. When stimulated by sensory information or other neurons, an electrical signal called an action potential spreads throughout the neuron. The action potential causes the release of chemicals called neurotransmitters that spread information to surrounding cells. The pattern of neurons firing action potentials is responsible for producing thoughts and actions.
''Potassium in the Brain'':
The action potential in neurons depends on electrolytes, mainly sodium and potassium. Low potassium levels cause your brain to slow down. Neurons with low potassium require more stimulation before firing an action potential and cannot fire action potentials rapidly. You may experience this as fatigue, confusion or the inability to start actions or finish trains of thought. The symptoms may be informally called brain fog and will not be corrected by taking stimulants, rest, good nutrition or removing stress.
*Neuron models - There are three types of neurons: [#s900ae5e]
[[CENTER:&ref(neron_types.jpg,,80%);>https://www.youtube.com/watch?v=HZh0A-lWSmY]]
CENTER:The Neuron
-1. ''Binary signal neuron'': The binary neuron model was jointly developed by McCulloch and Pitts in 1943. This model takes the weighted sum of the inputs and then compares the result with a threshold value: if the sum is larger than the threshold value, the neuron will give 1 as output, otherwise the output will be 0.
-2. ''Continuous value neuron'': The continuous value neuron model, as its name suggests, is different from the binary neuron model in the way that its output is a continuous value instead of a binary one. The activation
function of this neuron model is usually a sigmoid function like hyperbolic tangent or logistic. The output of a neuron can be interpreted as either the value itself, or as the probability of producing 1 as output. Most of the state-of-
the-art Machine Learning algorithms employ this type of neuron model.
-3. ''Spiking neuron'': The third type of neuron model is the spiking neuron model. This type of model takes spiking events as input and also outputs spiking events. Information is stored in the timing of spike
events instead of being interpreted as spiking frequency like in the binary and continuous value neuron models. Spiking neuron model is considered the most biologically plausible (معقول) among the three types, as transmitting spikes is how real neurons communicate with each other. In such a model, a neuron computes the weighted sum of all the spiking input currents integrated over time; when the membrane potential rises above a certain threshold, the neuron res a spike. A spiking neuron model usually can be described using an electronic circuit or a set of ordinary dierential equations, such as: leaky integrate and re model, Izhikevich model, Hodgkin - Huxley model, etc.
-''Activation funciton'' : An activation function for limiting the amplitude of the output of a neuron. Typically, the normalized amplitude range of the output of a neuron is written as the [0, 1] or alter-natively [-1, 1].
-''Bia'': The model of a neuron also includes an externally applied bias (threshold) wk0 = bk that has the effect of lowering or increasing the net input of the activation function.
**[[Biological Neuron Model>https://en.wikipedia.org/wiki/Biological_neuron_model]] [#q9221c4f]
*Memory [#t554edd1]
-http://science.howstuffworks.com/life/inside-the-mind/human-brain/human-memory4.htm
*ANN Classification [#hfed3f7f]
CENTER:[[&ref(Classification-of-ANN.jpg,,80%);>https://drive.google.com/file/d/0B2HMlO4p7SuwemZxTkl3c1YzVW8/view?usp=sharing]]
*Neuromorphic [#m434927c]
- Originally, the term ‘neuromorphic’ (coined by Carver Mead in 1990 was used to describe systems comprising analog integrated circuits, fabricated using standard complementary metal oxide semiconductor (CMOS) processes.
--Paper: [[Mead C 1990 Neuromorphic electronic systems Proc. IEEE 78 1629–36>http://booksc.org/book/20539257]]
*SNN [#t2063fe0]
[[Spiking neural networks (SNN) are a set of neurons that communicate through spikes and compute through the timing of the spike]>https://drive.google.com/file/d/0B2HMlO4p7SuwemZxTkl3c1YzVW8/view?usp=sharing]]. These spiking neurons have become
popular since they mimic the spiking nature of biological neurons and can reproduce those neuron spiking patterns. There are cases where SNNs are more biologically
plausible and more powerful than non-spiking ones [20] [21]. The Hodgkin-Huxley model [3] is one of the most detailed and best known models of
spiking neurons. It clearly describes the subcellular level behaviors, the membrane current generation and propagation of neural spikes. And because of these high level
of details, the Hodgkin-Huxley model is too complex to be used for a large scale simulation or hardware implementation. The Izhikevich model [4] is relatively recent
model that is simple, has good performance both on computational efficiency and functional richness.
For large scale hardware implementations, simplified models such as integrate-andfire model are preferred. This is due to limited hardware resources for any design
and same is the case for this thesis. Such simpler models can emulate the spiking nature of neurons, most of its behaviors and also keep the cost of computation at a
comparatively low level [22]. In this thesis, the neuron model used is the leaky, integrate and fire model [[[Ref.5>http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=6945192]]] which has a lesser level of complexity than Hodgkin-Huxley and Izhikevich, which we think
would lead us to a more low power implementation. Details about Leaky, Integrate and Fire models are disccused in forthcoming sections.
-SNN consists of spiking neurons that do not generate outputs at each time step like other artificial neutral networks. Instead, a neuron produces a spike asynchronously when
its membrane potential (Vm) reaches a specific value. The information transfer in SNN hence takes place through precise spiking time or rates of spikes.
-Spiking neural network implementation using CPU and CUDA engine
--Introduction https://www.youtube.com/watch?v=Oe1ldwEEwsI
--Code https://github.com/jirihybek/cuda-nnet
-Ref. 5 Z. Wang, L. Guo, and M. Adjouadi, A biological plausible Generalized Leaky
Integrate-and-Fire neuron model’, in 2014 36th Annual International Conference
of the IEEE Engineering in Medicine and Biology Society, 2014, pp. 6810–6813.
*CNN [#q76763f8]
-[[Lec 01>https://www.youtube.com/watch?v=FmpDIaiMIeA]] --> [[paper 1>https://www.cs.nyu.edu/~fergus/papers/zeilerECCV2014.pdf]]
-[[Lec 2>https://www.youtube.com/watch?v=s716QXfApa0]]
-[[Lec 3>https://www.youtube.com/watch?v=3iCifD8gZ0Q]]
-[[Lec 4>https://www.youtube.com/watch?v=SQ67NBCLV98]]
-[[Lec 5 CNN with Mathlab>https://www.youtube.com/watch?v=tGuwCDUNkeE]]
*RNN [#qcef5bdb]
-[[Lec 01>https://www.youtube.com/watch?v=AqEF2HIMjYA]]
*[[Learning>learning]] [#ie6daeb6]
CENTER:&ref(STDP.jpg,,80%);
Learning in spiking neural networks is devised based on synaptic plasticity change in biology. One of the most important rules for learning in such network is Spike Timing Dependent Plasticity (STDP).
The main principle of STDP is that synaptic plasticity is updated according to the difference in spike timing between the pre and post synaptic neurons.
There are many variations of STDP, but all of them follow this basic principle. For example, in the upper figure, when the pre-synaptic neuron ''j'' fires within approximately 40ms before the post-synaptic neuron ''i'' does, the synaptic weight from ''j to ''i'' will increase. The closer the ring time of the two neurons are, the more the synaptic weight will increase. Vice versa, if the pre-synaptic neuron res within approximately 40ms after the post-synaptic neuron, the synaptic weight update will decrease.
Offline training: weights are pre-defined by software training, just need one-time loading to the array; Conventional RRAM with gradual reset only is good enough
Online training: weights are updated during run-time; Special RRAM with both smooth set and reset is needed. Online, real-time learning in neuromorphic circuits have been implemented through variants of Spike-Time Dependent Plasticity (STDP). Current implementations have used either floating-gate devices or memristors to implement such learning synapses together with non-volatile storage. However, these approaches require high voltages (3- 12V) for weight update and entail high energy for learning (4- 30pJ/write.
CENTER:[[&ref(List_of_Neuron_models.jpg,,80%);>https://drive.google.com/file/d/0B2HMlO4p7SuwemZxTkl3c1YzVW8/view?usp=sharing]]
-References:
--1.G.-Q. Bi and M.-M. Poo, \Synaptic modications in cultured hippocampal neurons: Dependence on spike timing, synaptic strength, and postsynaptic cell type," The Journal of Neuroscience, 1998. 10
--2.[[Adaptive Neural Algorithms: the What, Why, and How, Brad Aimone>http://nice.sandia.gov/documents/2015/Aimone_NICE_2015.pptx]] --> On-chip Learning
*Simulation [#z253d7c8]
-[[Spike Simulator>http://www.lloydwatts.com/spike2.html]] -> a fast event-driven simulator, called Spike, for simulating large networks of simple spiking neurons.
-[[N2S3>https://sourcesup.renater.fr/wiki/n2s3]]
*Applications [#tf665c60]
- [[2014 Neuromorphic Electronic Circuits for Building Autonomous Cognitive Systems Chicca, Elisabetta, Stefanini, Fabio, Bartolozzi, Chiara, Indiveri, Giacomo, Vol. 102, No. 9, September 2014 Proceedings of the IEEE>http://booksc.org/book/30265662]]
-[[2001G. Indiveri, ‘‘A neuromorphic VLSI device for implementing 2-D selective attention systems,’’ IEEE Trans. Neural Netw., vol. 12, no. 6, pp. 1455–1463, Nov. 2001.>http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=963780]]
-[[2000. G. Indiveri and R. Douglas, ‘‘Robotic vision: Neuromorphic vision sensor,’’ Science, vol. 288, pp. 1189–1190, May 2000.>http://booksc.org/book/22427835]]
*Memristor, others [#b8608ba8]
Definition: COLOR(red){According to the characterizing mathematical relations, the memristor would hypothetically operate in the following way: The memristor's electrical resistance is not constant but depends on the history of current that had previously flowed through the device, i.e., its present resistance depends on how much electric charge has flowed in what direction through it in the past; the device remembers its history, the so-called non-volatility property. When the electric power supply is turned off, the memristor remembers its most recent resistance until it is turned on again. [[Wikipedia>https://en.wikipedia.org/wiki/Memristor]]}
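The charge-history dependence in the definition above can be illustrated with a minimal linear dopant-drift style model. This is only a sketch: r_on, r_off, and the drift constant k below are made-up illustrative values, not parameters of any real device.

```python
# Minimal charge-controlled memristor sketch (linear dopant-drift style).
# r_on, r_off and k are illustrative assumptions, not device parameters.
class Memristor:
    def __init__(self, r_on=100.0, r_off=16000.0, k=1e4):
        self.r_on, self.r_off, self.k = r_on, r_off, k
        self.x = 0.5  # internal state in [0, 1], set by past charge flow

    def resistance(self):
        # Resistance interpolates between r_on (x = 1) and r_off (x = 0).
        return self.r_on * self.x + self.r_off * (1.0 - self.x)

    def step(self, v, dt):
        i = v / self.resistance()  # Ohm's law at the present state
        # The charge i*dt moves the internal state: the device "remembers"
        # how much charge has flowed in which direction.
        self.x = min(1.0, max(0.0, self.x + self.k * i * dt))
        return i

m = Memristor()
r0 = m.resistance()
for _ in range(100):
    m.step(1.0, 1e-4)        # positive current drives the resistance down
print(m.resistance() < r0)   # True; reversing the polarity raises it again
```

When the drive is removed the state x simply stops changing, which is the non-volatility property described in the quoted definition.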
- [[2012 ''Why Are Memristor and Memistor Different Devices?'',IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS—I: REGULAR PAPERS, VOL. 59, NO. 11, NOVEMBER 2012 2611>http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=6192335]]
-[[2013.Integration of nanoscale memristor synapses in neuromorphic computing architectures>http://booksc.org/dl/22867956/206b9b]], Nanotechnology 24 (2013) 384010 (13pp)
Note:
-L. O. Chua, “Memristor-the missing circuit element,” IEEE Trans. Circuit Theory, vol. CT-18, no. 5, pp. 507–519, 1971.
- L. O. Chua, “Resistance switching memories are memristors,” Appl. Phys. A, vol. 102, pp. 765–783, 2011.
*Conductive-bridge random access memory (CBRAM) [#oed41ee9]
-2016. [[Scalable Neuron Circuit Using Conductive-Bridge RAM for Pattern Reconstructions, IEEE TRANSACTIONS ON ELECTRON DEVICES, VOL. 63, NO. 6, JUNE 2016>http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=7450157]]
*References/Projects [#jc744550]
***Singapore http://www3.ntu.edu.sg/home/arindam.basu/writings.htm [#t42f9340]
-[[A Low-Voltage, Low Power STDP Synapse Implementation Using Domain-Wall Magnets for Spiking Neural Networks>http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=7527390]], IEEE ISCAS, Montreal, May, 2016.
***Switzerland - EPFL Lausanne [#m937b092]
-https://infoscience.epfl.ch/search?ln=en&p=Neural+Network+Th%C3%A8se&jrec=31
--[[2015 These H. Zhang, J. d. R. Millán Ruiz (Dir.). Connectivity Analysis of Brain States and Applications in Brain-Computer Interfaces. Thèse EPFL, n° 6779 (2015)>https://infoscience.epfl.ch/record/213238/files/EPFL_TH6779.pdf]]
--[[2015. E. Fischi Gomez, J.-P. Thiran (Dir.). Connectomics across development: towards mapping brain structure from birth to childhood. Thèse EPFL, n° 6754 (2015)>https://infoscience.epfl.ch/record/212720/files/EPFL_TH6754.pdf]]
--[[F. Zenke, W. Gerstner (Dir.). Memory formation and recall in recurrent spiking neural networks. Thèse EPFL, n° 6260 (2014)>https://infoscience.epfl.ch/record/203260/files/EPFL_TH6260.pdf]]
--http://robogen.org/
---Paper: https://infoscience.epfl.ch/record/200995/files/978-0-262-32621-6-ch022.pdf
--[[A Comprehensive Workflow for General-Purpose Neural Modeling with Highly Configurable Neuromorphic Hardware Systems>https://arxiv.org/pdf/1011.2861.pdf]]
--[[PhD Thesis: Supernumerary Robotic Arm for Three-Handed Surgical Application: Behavioral Study and Design of Human-Machine Interface.]]
***Neuromorphic Computing at Tennessee [#q9ca8abd]
-[[DANA on FPGA>http://neuromorphic.eecs.utk.edu/pages/research-overview/]]
--[[2016 An Application Development Platform for Neuromorphic Computing.pdf>http://neuromorphic.eecs.utk.edu/raw/files/publications/2016-IJCNN-Dean.pdf]]
---[[Slides :Dynamic Adaptive Neural Network Arrays: A Neuromorphic Architecture.pdf>http://ornlcda.github.io/MLHPC2015/presentations/5-Katie.pdf]]
---[[more slides from MLHPC2015 conference.html>http://ornlcda.github.io/MLHPC2015/program.html]]
***Europe [#g572315b]
-[[NeuRAM Cube: NEUral computing aRchitectures in Advanced Monolithic 3D-VLSI nano-technologies>http://www.neuram3.eu/project-work-plan]]
---[[ESSDERC 2016 Workshop Presentations>http://www.neuram3.eu/achievements/essderc-2016-workshop-presentations]]
***Japan [#k672528c]
-Tohoku Univ.: Soft Computing Integrated Systems Lab, Prof. Yoshihiko HORIO, http://www.ecei.tohoku.ac.jp/ecei_web/Laboratory/horio_e_index.html
-UEC: The University of Electro-Communications, Brain Science Inspired Life Support Research Center
--Project Professor Shigeru TANAKA http://tanaka-lab.net/
--http://kjk.office.uec.ac.jp/Profiles/63/0006220/prof_e.html
*Multicast Routing [#t995dafc]
-[41]Joshi, S.; Deiss, S.; Arnold, M.; Jongkil Park; Yu, T.; Cauwenberghs, G.; , [["Scalable event routing in hierarchical neural array architecture with global synaptic connectivity,">http://ieeexplore.ieee.org/abstract/document/5430296/]] Cellular Nanoscale Networks and Their Applications (CNNA), 2010 12th International Workshop on , vol., no., pp.1-6, 3-5 Feb. 2010
-[42] Zamarreno-Ramos, C.; Linares-Barranco, A.; Serrano-Gotarredona, T.; Linares-Barranco, B.; , [["Multicasting Mesh AER: A Scalable Assembly Approach for Reconfigurable Neuromorphic Structured AER Systems>http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=6211459]]. Application to ConvNets," Biomedical Circuits and Systems, IEEE Transactions on , vol.PP, no.99.
-[43] A. K. Mishra, N. Vijaykrishnan, Chita R. Das, [["A Case for Heterogeneous On-Chip Interconnects for CMPs">http://booksc.org/book/44790068]], 38th International Symposium on Computer Architecture (ISCA, 2011).
-[44] Matos, D.; Concatto, C.; Carro, L.; Kastensmidt, F.; Kreutz, M.; Susin, A.; , [["Highly efficient reconfigurable routers in Networks-on-Chip,">http://booksc.org/book/22603152]] Very Large Scale Integration (VLSI-SoC), 2009 17th IFIP International Conference on , vol., no., pp.165-170, 12-14 Oct. 2009.
-[45] N. E. Jerger, L. S. Peh, and M. Lipasti, [[“Virtual Circuit Tree Multicasting: A Case for On-Chip Hardware Multicast Support>http://booksc.org/book/30855336]],” Proceedings of the 35th Annual International Symposium on Computer Architecture (ISCA), June, 2008.
-[46]. J. Wu, and S. Furber, “[[A Multicast Routing Scheme for a Universal Spiking Neural Network Architecture>http://booksc.org/book/42146358]],” Journals of Computers vol. 53, no. 3, pp. 280-288, March 2010.
*How to Generate Artificial Spike train [#gede1c61]
...The input stimuli are a series of Poisson spike trains, generated artificially and sent via the AER protocol to the chip's virtual synapses.
-[[Poisson Model of Spike Generation>http://www.cns.nyu.edu/~david/handouts/poisson.pdf]]
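As in the handout above, a homogeneous Poisson spike train can be approximated by independent Bernoulli trials, one per time bin. A minimal sketch (the 50 Hz rate and 1 ms bin are arbitrary illustrative choices):

```python
import random

def poisson_spike_train(rate_hz, duration_s, dt=0.001, seed=None):
    """Bernoulli approximation of a homogeneous Poisson process:
    in each bin of width dt, a spike occurs with probability rate_hz*dt
    (accurate when rate_hz*dt << 1)."""
    rng = random.Random(seed)
    n_bins = int(round(duration_s / dt))
    return [rng.random() < rate_hz * dt for _ in range(n_bins)]

# 50 Hz for 10 s: on average rate * duration = 500 spikes.
spikes = poisson_spike_train(50.0, 10.0, seed=1)
print(sum(spikes))
```

For the exact (not binned) method, one can instead draw inter-spike intervals from an exponential distribution with mean 1/rate, as the linked handout describes.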
*LTP and LTD [#tec17953]
-[[LTP Video>https://www.youtube.com/watch?v=vso9jgfpI_c]],
***AMPA and NMDA Receptors [#q05f12aa]
&ref(SAMPA and NMDA Receptors.jpg,,40%);
-[[AMPA and NMDA Receptors>https://www.youtube.com/watch?v=FfZNn2hRe1s]]
[[Long Term Potentiation and Memory Formation, Animation>https://www.youtube.com/watch?v=4Hm08ksPtMo]]
&ref(signals-to-hypocupsu.jpg,,40%);
&ref(wired-back-to-the-cortex.jpg,,60%);
&ref(brain-uses-synapse-for-communication.jpg,,60%);
&ref(weak-and-strong-synapses.jpg,,60%);
***Glutamate receptors [#w97556c2]
They are responsible for the glutamate-mediated postsynaptic excitation of neural cells, and are important for neural communication, memory formation, learning, and regulation. Ref. https://en.wikipedia.org/wiki/Glutamate_receptor
----
----
*Tools and [#pdfaf4fd]
-http://openvibe.inria.fr/downloads/ Look at MindWave mobile or Brain Rythm BR8 (PRF-OKU RECOM)
*Workshops [#q99ce29d]
-[[NICE 2017>https://www.src.org/calendar/e006125/]]
-[[NICE 2016>https://www.youtube.com/watch?v=skLG-MBj0OI&index=10&list=PLOyuQaVrp4qqkSep0TP20Gw9mzFyp-GH9]]
-[[Neuromorphic2016>http://ornlcda.github.io/neuromorphic2016/files/ORNLNeuromorphicComputingWorkshop2016Report.pdf]]
-[[ESSDERC 2016 Workshop Presentations>http://www.neuram3.eu/achievements/essderc-2016-workshop-presentations]]
----
-Feed-forward Neural Networks
-Recurrent Neural Networks (RNNs)
-Convolutional Neural Networks (CNNs)
-Recursive Neural Networks
*Funds [#m9967c1c]
-https://docs.google.com/spreadsheets/d/1Fz4B4e1miLOm-7sM2I-5VIqLpNT88xJK7ArhL_YTwIU/edit?ts=5919347e#gid=0
-----
*Tools [#ead6d8fe]
-http://image.online-convert.com/convert-to-eps
*KTH [#n6e4893d]
http://kth.diva-portal.org/smash/search.jsf?dswid=-6128
*Existing Deep Learning Frameworks [#ledb3c1b]
*** Caffe [#kc7f59e9]
Caffe is a framework developed by the Berkeley Vision and Learning Center and community contributors [JSD+14]. As its examples demonstrate, the software can be used to train state-of-the-art neural networks on the ImageNet data set. It utilizes the available CPUs or, if compatible, also GPUs. Support for the mobile nVidia Tegra K1 GPU was recently added.
*** Torch [#hebfd329]
Torch is another computing framework which supports deep learning algorithms [CBM02]. It can use GPUs to speed up computation. Recent ports brought support for running on mobile devices such as iOS and Android, as well as on FPGAs.
*** Theano [#fa48b6e8]
Theano also supports deep learning algorithms, with efficient computation on multi-dimensional arrays [BLP+12]. The Python library includes GPU support as well as symbolic differentiation. Support for mobile devices has not yet been a core feature of the software.
*** Deeplearning4j [#h741c4f4]
Deeplearning4j tries to solve some issues of the other frameworks by using Java, a more robust and portable language, and by providing commercial support. This enables the framework to run on any platform with Java support. Additionally, strong support for scaling across many parallel GPUs or CPUs is implemented.
***TensorFlow [#pd0acb25]
TensorFlow is an open-source framework for numerical computation, which was recently released by Google [AAB+]. It facilitates large-scale machine learning on distributed systems. Without code changes, it can run on multiple CPUs or GPUs in a server or on a mobile device.
*Activation function [#p5da14a2]
-What makes neural networks a nonlinear classification model?
--https://stats.stackexchange.com/questions/222639/what-makes-neural-networks-a-nonlinear-classification-model/222642
--[[Tutorial>http://www2.econ.iastate.edu/tesfatsi/NeuralNetworks.CheungCannonNotes.pdf]]
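The point made in the Q&A linked above can be shown directly: with a nonlinear activation such as ReLU, even a tiny two-layer network can compute XOR, which no purely linear model can. The hand-picked weights below are for illustration only, not from the linked tutorial.

```python
def relu(v):
    # The nonlinearity: without it, the two layers below collapse into a
    # single linear map, which cannot represent XOR.
    return max(0.0, v)

def xor_net(x1, x2):
    h1 = relu(x1 + x2)        # hidden unit 1: fires if either input is on
    h2 = relu(x1 + x2 - 1.0)  # hidden unit 2: fires only if both inputs are on
    return h1 - 2.0 * h2      # linear readout

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, xor_net(a, b))  # outputs 0.0, 1.0, 1.0, 0.0 respectively
```

Removing relu makes the readout (x1+x2) - 2(x1+x2-1), a linear function of the inputs, and the XOR pattern becomes impossible.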
*WWW Labs [#p20c4539]
-Neuromorphic - Architectures, Learning, Applications (University of Tennessee)
--http://neuromorphic.eecs.utk.edu/pages/research-projects/
-Neuromorphic Laboratory (Boston University)
-- http://nl.bu.edu/
-Neuromorphic and Cognitive Integrated Circuits
-- http://www.lumerink.com/pages/neuromorphic.html
-Neuromorphic Machine Intelligence Lab Cognition, Neuroscience and Machine Learning
--http://nmi-lab.org/
-Computational Evolutionary Intelligence Lab, USA
--http://cei.pratt.duke.edu/research/neuromorphic-computing
-Department of Human Intelligence Systems, Japan
--http://www.brain.kyutech.ac.jp/people/people_e.html
-http://www.brain.kyutech.ac.jp/coe21/english/sub_ishii.html
-Neuro-computer, Japan
-- Workshop: http://www.airc.aist.go.jp/info_details/2017330.html#1
-The 3rd Workshop on Bio-Inspired Energy-Efficient Information Systems at the University of Tokyo on March 12th, 2018.
--http://park.itc.u-tokyo.ac.jp/EEIP-SCP/workshop20180312/index.html
--http://park.itc.u-tokyo.ac.jp/EEIP-SCP/workshop20160302/program.html
-The 2nd Neuromorphic Research Retreat in AIST
-The 4th brainware seminar (H29), (Research Institute of Electrical Communication, Tohoku university).
--http://www.riec.tohoku.ac.jp/ja/events/others/2018/01/post-17050/
*[[Spike Computing]] [#x176dca1]
ページ名: