LAB MISSION & CURRENT RESEARCH

Research at the Adaptive Systems Laboratory focuses on energy-efficient adaptive computing systems and novel parallel computer architectures that deliver high performance, low power, high reliability, and learning capability. This involves coordinated work across all system layers, from core-level architecture to compilers and runtime systems, all the way up to the system level. We are also researching non-conventional computing models. In particular, we are studying a novel brain-like computing model and the computational properties of neural processing systems by developing new devices and systems that emulate the principles of computation in the biological brain.


CURRENT RESEARCH

ARTIFICIAL/BIOLOGICAL NEURAL NETWORK ACCELERATORS & APPLICATIONS

Hardware implementations of neural network systems are a power-efficient and effective way to provide cognitive functions on a chip, compared with the conventional stored-program computing style. In recent years, neuroscience research has revealed a great deal about the structure and operation of individual neurons, and medical tools have likewise revealed much about how neural activity in different regions of the brain follows a sensory stimulus. Moreover, advances in software-based Artificial Intelligence (AI) have brought us to the edge of building brain-like devices and systems that overcome the bottleneck of the conventional von Neumann computing style. Energy-efficient devices/accelerators for neural networks are needed in power-constrained platforms such as smartphones, drones, robots, and autonomous-driving cars. We are investigating energy-efficient devices and accelerators for NNs on FPGA and ASIC. We are also investigating how to map the latest deep learning algorithms to application-specific hardware and emerging devices/systems to achieve orders-of-magnitude improvements in performance and energy efficiency. Currently, we are investigating neuro-inspired neural network models and computing methods …read more


HIGH-PERFORMANCE RELIABLE INTERCONNECT TECHNOLOGIES FOR NOCS

Future generations of high-performance computing systems will contain hundreds of components, including processing cores, DSPs, memory, accelerators, learning circuits, FPGAs, etc., all integrated into a single die of just a few square millimeters. Such a tiny and complex system will be interconnected via a novel on-chip interconnect closer to a sophisticated network than to current bus-based solutions. The on-chip network must provide high throughput and low latency while keeping area and power consumption low. Moreover, these multi-/many-core systems are becoming susceptible to a variety of faults caused by crosstalk, radiation effects, oxide breakdown, and so on. As a result, a failure in a single transistor caused by any of these factors may compromise the entire system's dependability: a failure may manifest as corrupted message delivery, missed timing requirements, or sometimes even whole-system collapse. Our research effort in this area addresses several design challenges to enable packet switching and other novel switching schemes for networks of massively parallel cores, both in conventional load/store systems and in novel neuro-inspired computing systems. …read more


MACHINE LEARNING FOR POWER-EFFICIENT MULTICORE SOCS/NOCS

The attraction of multicore processing for power reduction is compelling in both embedded and general-purpose computing. By splitting a set of tasks among multiple cores, the operating frequency needed by each core can be reduced, which in turn allows the voltage on each core to be lowered. Since dynamic power is proportional to the frequency and to the square of the voltage, a sizable gain can be obtained even with more cores running. Moreover, as more and more cores are integrated into these designs to share the ever-increasing processing load, the primary challenges become an efficient memory hierarchy, a scalable system interconnect, new programming models, and an efficient integration methodology for connecting such heterogeneous cores into a single system that leverages their individual strengths. On the other hand, current design methods are inclined toward mixed hardware/software (HW/SW) co-design, targeting multicore SoCs for application-specific domains. To decide on the lowest-cost mix of cores, designers must iteratively map the device's functionality to a particular HW/SW partition and target architecture. In addition, connecting the heterogeneous cores requires high-performance communication architectures and efficient communication protocols, such as a hierarchical bus, point-to-point connections, or the recent interconnection paradigm, the network-on-chip. We are currently researching algorithms and hardware for power-efficient multicore SoCs/NoCs empowered by innovative machine learning algorithms. …read more
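The frequency/voltage argument above can be illustrated with a minimal back-of-the-envelope sketch using the standard dynamic-power model P = C·V²·f. The capacitance, voltage, and frequency values below are hypothetical, chosen only to show the trend; real DVFS operating points depend on the process and design.

```python
def dynamic_power(capacitance, voltage, freq_hz):
    """Dynamic switching power: P = C * V^2 * f."""
    return capacitance * voltage ** 2 * freq_hz

C = 1e-9  # effective switched capacitance in farads (assumed value)

# One core running the whole workload at full speed and nominal voltage.
p_single = dynamic_power(C, voltage=1.2, freq_hz=2.0e9)

# Four cores sharing the workload: each runs at a quarter of the
# frequency, which also permits a lower supply voltage under DVFS.
p_quad = 4 * dynamic_power(C, voltage=0.9, freq_hz=0.5e9)

print(p_single)  # 2.88 W
print(p_quad)    # 1.62 W -- lower total power despite four active cores
```

Because voltage enters quadratically, even a modest voltage reduction (1.2 V to 0.9 V here) outweighs the cost of activating additional cores, which is the essence of the power argument for multicore designs.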

Permanent link to this article: https://adaptive.u-aizu.ac.jp/?page_id=668