
Research Introduction

Research activities at the Adaptive Systems Laboratory (ASL) focus on self-aware and adaptive computing systems, which are expanding beyond the usual emphasis on performance to include other quantitative and qualitative criteria. The quantitative criteria mainly include energy/power consumption. The qualitative criteria include adaptability, efficiency (e.g., performance/area or performance/energy), and reliability, which is important in: (1) dynamic environments, where physical context, network topologies, and workloads are constantly changing, and (2) harsh environments, such as environments with high radiation.

Several enabling paradigms and methods, such as reconfigurable fabrics, application-specific instruction-set processors (ASIPs), and neuro-inspired architectures, allow a computing system to adapt in order to meet the criteria mentioned above. This adaptation is achieved when such systems can monitor themselves, analyze their behavior, and learn and adapt to varying execution environments while keeping the system's complexity invisible to the user.
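The monitor-analyze-adapt cycle described above can be illustrated with a toy control loop that tunes a single knob (here, a hypothetical "frequency level") from observed utilization. All names, thresholds, and values below are illustrative placeholders, not parameters of any ASL system:

```python
# Toy monitor-analyze-adapt loop: observe utilization, decide whether
# the system is overloaded or underused, and adapt a frequency level.
# The adaptation stays invisible to the application running on top.
# All thresholds and level bounds are illustrative, not from real hardware.

def adapt_step(freq_level, utilization, target=0.7,
               min_level=1, max_level=8):
    """One adaptation step based on a monitored utilization sample."""
    if utilization > target + 0.1 and freq_level < max_level:
        return freq_level + 1      # overloaded: speed up
    if utilization < target - 0.1 and freq_level > min_level:
        return freq_level - 1      # underused: slow down to save power
    return freq_level              # within the target band: no change

# Feed the loop a sequence of monitored utilization samples.
level = 4
for u in [0.95, 0.9, 0.5, 0.5, 0.72]:
    level = adapt_step(level, u)
```

A real self-aware system would replace the fixed thresholds with learned models, but the closed monitor/analyze/adapt structure is the same.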

We are currently engaged in research and development of innovative adaptive systems for incremental learning and adaptation in dynamic and harsh environments. We are also investigating low-power neural chips that implement neural systems at biological scales. Our target applications are mobile robotic platforms, both ground-based and unmanned aerial vehicles, and innovative embedded devices and systems. Currently, our focus includes:

Adaptive Neuro-inspired Computing Systems 

The biological brain performs parallel computations using a complex structure that differs from the conventional von Neumann, load/store computing style. The brain is a low-power, fault-tolerant, high-performance machine: it consumes only about 20 W, and brain circuits continue to operate as the organism requires even when parts of the circuit (neurons, neuroglia, etc.) are perturbed or die. Computations in neural networks are naturally parallel and distributed among billions of neurons. This very high degree of distributed parallelism allows us to design low-clock-frequency (and therefore low-power), high-throughput neuro-inspired circuits and systems. Hardware implementations of neural networks are an efficient and effective way to provide cognitive functions on a chip compared with conventional von Neumann processors. One of the most difficult challenges in modeling the brain is its massive interconnectivity, so the challenges to be solved include building a small-size, massively parallel architecture with scalable interconnects, low power consumption, and reliable circuits.
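As a minimal illustration of the spiking style of computation described above, the sketch below simulates a leaky integrate-and-fire (LIF) neuron in discrete time; this is a textbook model, and all parameter values are illustrative, not taken from our chips:

```python
# Minimal leaky integrate-and-fire (LIF) neuron, Euler-integrated.
# Parameter values are illustrative placeholders, not from any ASL chip.

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0):
    """Return the time steps at which the neuron emits a spike."""
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        # Membrane potential leaks toward rest and integrates the input.
        v += dt / tau * (-(v - v_rest) + i_in)
        if v >= v_thresh:          # threshold crossing -> spike
            spikes.append(t)
            v = v_reset            # reset after spiking
    return spikes

# A constant supra-threshold input yields a regular spike train.
spike_times = simulate_lif([1.5] * 100)
```

Hardware spiking architectures implement many such units in parallel, communicating only via spike events, which is what enables low clock frequencies at high aggregate throughput.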

Our focus is to investigate novel adaptive low-power neural chips able to scale up to biological levels. Currently, we are studying the following topics:

  • Low-power adaptive neural chips
  • Spiking neural architecture building blocks
  • Conventional hardware (e.g., VLSI, FPGAs) and emerging hardware (e.g., memristors) implementations of neuro-inspired systems
  • Synaptic and structural plasticity circuit emulations
  • Reliable and scalable communication networks for neuro-inspired chips/systems
  • Neural circuits and models for neurons and synapses
  • Ultra-low power biological-scale neurons
  • Reconfigurability and adaptability methods
  • On-chip learning algorithms
  • Artificial brain emulations

Reliable Scalable On-chip Networks 

Future computing systems will contain hundreds of components, including processor cores, DSPs, memory, accelerators, and I/O, all integrated into a single die of just a few square millimeters. Such a complex system will be interconnected via a novel on-chip interconnect (Network-on-Chip, NoC) that is closer to a sophisticated network than to current bus-based solutions. This network must provide high throughput and low latency while keeping area and power consumption low.
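To make the packet-switched style concrete, a common baseline in 2D-mesh NoCs is dimension-ordered (XY) routing, sketched below; the coordinates and function name are illustrative, not specific to our designs:

```python
# Dimension-ordered (XY) routing on a 2D mesh NoC: a packet travels
# along the X dimension first, then along Y. On a mesh this discipline
# forbids Y->X turns and is therefore deadlock-free.
# Coordinates and names are illustrative, not from a specific design.

def xy_route(src, dst):
    """Return the sequence of (x, y) router coordinates from src to dst."""
    x, y = src
    dx, dy = dst
    path = [(x, y)]
    while x != dx:                 # resolve the X offset first
        x += 1 if dx > x else -1
        path.append((x, y))
    while y != dy:                 # then resolve the Y offset
        y += 1 if dy > y else -1
        path.append((x, y))
    return path

# Route a packet from router (0, 0) to router (2, 1).
hops = xy_route((0, 0), (2, 1))
# hops == [(0, 0), (1, 0), (2, 0), (2, 1)]
```

Research NoCs move beyond such fixed routing toward adaptive and fault-tolerant schemes, which is where the design challenges listed below arise.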

Our research effort in this area addresses several design challenges in enabling packet switching and other novel switching schemes in multi- and many-core systems/SoCs. In particular, we are investigating the following topics:

  • Implementation techniques for TSV-based (through-silicon via) NoCs
  • 3D-IC integration
  • Fault-tolerant and reliability issues
  • New topologies and flow-control techniques
  • Photonic Interconnects

Current Projects
