LAB MISSION & CURRENT RESEARCH

ASLab focuses on energy-efficient, adaptive computer system architectures and parallel systems that provide high performance, low power consumption, fault tolerance, and cognitive abilities (learning and adaptivity). This includes coordinated work across all system layers, ranging from core-level architecture through runtime systems to the system level. We are also studying a novel adaptive brain-like computing model and the computational properties of neural processing systems by developing efficient algorithms and systems that emulate the principles of computation in the biological brain. Our new algorithms, devices, and systems form the core technology for emerging applications such as IoT, edge computing, and autonomous vehicles.


CURRENT RESEARCH

ADAPTIVE BRAIN-INSPIRED ACCELERATORS/CHIPS & APPLICATIONS

Hardware implementations of neural network systems offer a power-efficient and effective way to provide cognitive functions on a chip, compared with the conventional stored-program computing style. In recent years, neuroscience research has revealed a great deal about the structure and operation of individual neurons, and medical imaging tools have likewise revealed how neural activity in different regions of the brain follows a sensory stimulus. Moreover, advances in software-based Artificial Intelligence (AI) have brought us to the edge of building brain-like devices and systems that overcome the bottleneck of the conventional von Neumann computing style. Energy-efficient devices and accelerators for neural networks are needed for power-constrained platforms, such as smartphones, drones, robots, and autonomous-driving cars. We are investigating energy-efficient devices and accelerators for NNs on FPGA and ASIC. We are also investigating how to map the latest deep learning algorithms to application-specific hardware and emerging devices/systems to achieve orders-of-magnitude improvements in performance and energy efficiency. Currently, we are investigating the following four main themes… read more
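One common way such accelerators trade a small amount of accuracy for large energy savings is low-precision arithmetic: int8 multiply-accumulate units are far cheaper in area and power than float32 ones. The sketch below is an illustrative example of per-tensor uniform quantization (not the lab's specific method); all names and numbers are assumptions for the sake of the example.

```python
import numpy as np

def quantize_int8(w):
    """Uniformly quantize a float32 weight tensor to int8 with a single
    per-tensor scale, a scheme widely used by low-power NN accelerators."""
    scale = np.abs(w).max() / 127.0   # map the largest magnitude to 127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float32 tensor from the int8 codes."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4)).astype(np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
# Round-to-nearest keeps the per-element error within half a step.
print(np.max(np.abs(w - w_hat)))
```

On hardware, the int8 codes feed integer MAC arrays directly, and the scale is folded back in once per output, which is where the energy advantage over float32 datapaths comes from.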


HIGH-PERFORMANCE RELIABLE INTERCONNECT TECHNOLOGIES FOR NOCS AND COGNITIVE SOCS

Future generations of high-performance computing systems will contain hundreds of components (processing cores, DSPs, memories, accelerators, learning circuits, FPGAs, etc.), all integrated into a single die area of just a few square millimeters. Such a "tiny" yet complex system will be interconnected via a novel on-chip interconnect that is closer to a sophisticated network than to current bus-based solutions. The on-chip network must provide high throughput and low latency while keeping area and power consumption low. Moreover, these multi-/many-core systems are becoming susceptible to a variety of faults caused by crosstalk, radiation, oxide breakdown, and so on. As a result, the failure of a single transistor due to one of these factors may compromise the dependability of the entire system, manifesting as corrupted message delivery, missed timing requirements, or even whole-system collapse. Our research effort in this area addresses several design challenges to enable packet-switched and other novel switching schemes for networks of massively parallel cores, both in conventional load/store systems and in novel neuro-inspired computing systems. … read more
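As a concrete flavor of packet-switched NoC design, the sketch below shows deterministic dimension-ordered (XY) routing on a 2D mesh, a textbook baseline against which adaptive and fault-tolerant routing schemes are usually compared. It is an illustrative example, not the lab's specific routing algorithm.

```python
def xy_route(src, dst):
    """Dimension-ordered (XY) routing on a 2D mesh NoC: a packet first
    travels along the X dimension until its column matches the
    destination, then along Y. Forbidding Y-to-X turns makes this
    routing deadlock-free on a mesh. Returns the list of routers
    visited after the source."""
    x, y = src
    dx, dy = dst
    hops = []
    while x != dx:                     # resolve X dimension first
        x += 1 if dx > x else -1
        hops.append((x, y))
    while y != dy:                     # then resolve Y
        y += 1 if dy > y else -1
        hops.append((x, y))
    return hops

print(xy_route((0, 0), (2, 1)))  # [(1, 0), (2, 0), (2, 1)]
```

Because the path is fixed by the source/destination pair, XY routing needs no routing tables at each router, which keeps per-router area and power low; its drawback is that it cannot steer around congested or faulty links, which is one motivation for the adaptive schemes studied here.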


ENERGY-EFFICIENT MULTICORE SOCS/NOCS WITH MACHINE LEARNING

High-performance embedded and general-purpose computer systems research is expanding beyond the usual focus on performance to include other quantitative and qualitative criteria. The quantitative problems include energy/power consumption, while the qualitative problems mainly include portability, reliability, and fault tolerance. Since no single architecture can fulfill all these requirements, future systems will require some form of adaptivity: systems should monitor themselves, analyze their own behavior, and adapt to different execution environments. On the other hand, the attraction of multicore processing for power reduction is compelling in both embedded and general-purpose computing. By splitting a set of tasks among multiple cores, the operating frequency needed by each core can be reduced, which in turn allows the voltage on each core to be lowered. Since dynamic power is proportional to the frequency and to the square of the voltage, a sizable gain is possible even though more cores are running. Current design methods are inclined toward mixed hardware/software (HW/SW) co-design targeting multicore SoCs for application-specific domains. To decide on the lowest-cost mix of cores, designers must iteratively map the device's functionality to a particular HW/SW partition and target architecture. In addition, to connect the heterogeneous cores, the architecture requires high-performance communication architectures and efficient communication protocols, such as a hierarchical bus, point-to-point connections, or the recent new interconnection paradigm, the network-on-chip. We are currently researching algorithms and hardware for power-efficient multicore SoCs/NoCs empowered by innovative machine learning algorithms. … read more
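The frequency/voltage argument above can be made concrete with the standard dynamic CMOS power relation P = C_eff · V² · f. The numbers below (capacitance, frequencies, voltages) are purely illustrative assumptions, chosen only to show why splitting work across cores can reduce total power even though more cores are active.

```python
def dynamic_power(c_eff, v, f):
    """Dynamic CMOS switching power: P = C_eff * V^2 * f."""
    return c_eff * v**2 * f

C = 1e-9  # effective switched capacitance in farads (illustrative value)

# Hypothetical scenario: one core at 2 GHz needs 1.2 V; spreading the
# same work over four cores lets each run at 500 MHz, and the lower
# frequency permits scaling the supply voltage down to 0.8 V.
single_core = dynamic_power(C, 1.2, 2e9)
quad_core   = 4 * dynamic_power(C, 0.8, 0.5e9)

print(single_core)               # 2.88 W
print(quad_core)                 # 1.28 W
print(single_core / quad_core)   # 2.25x reduction in dynamic power
```

The quadratic dependence on voltage is what makes the trade worthwhile: quadrupling the core count at one quarter the frequency leaves V²·f-per-core headroom, so the four-core total still comes in well under the single-core figure.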

Permanent link to this article: https://adaptive.u-aizu.ac.jp/?page_id=668