This project considered hardware designs, architectural innovations, and compiler techniques for simultaneously optimizing the memory usage, performance, and power consumption of applications. Security issues relevant to distributed and parallel systems were also considered.
In the context of embedded processors we developed techniques
that achieve high performance while operating on compacted code and data.
We have shown how compact code can be executed to deliver performance through proper
instruction set and microarchitectural support. Compacted high-performance code results
in lower power consumption. We have also developed new compiler algorithms and
instruction set support to show how compacted narrow-width data, prevalent in
multimedia codes, can be effectively manipulated. A novel register allocation algorithm
that allows colocation of multiple narrow-width data items in a single register has been developed.
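As a rough illustration of what colocating narrow-width data means, the sketch below packs two 16-bit items into one 32-bit register and updates a single subword in place. The function names, lane layout, and operations are hypothetical, chosen only to mirror the kind of subword extract/insert support the report describes; they are not the project's actual ISA extensions.

```c
#include <stdint.h>

/* Hypothetical sketch: two 16-bit values co-located in one 32-bit
 * register, as a narrow-width register allocator might arrange.
 * Lane 0 occupies bits 0-15, lane 1 occupies bits 16-31. */

static inline uint32_t pack16(uint16_t lo, uint16_t hi) {
    return (uint32_t)lo | ((uint32_t)hi << 16);
}

static inline uint16_t extract16(uint32_t reg, int lane) {
    return (uint16_t)(reg >> (lane * 16));
}

/* Update one lane without disturbing its neighbor -- the kind of
 * subword insert that instruction set support would provide in
 * hardware rather than through mask-and-shift sequences. */
static inline uint32_t insert16(uint32_t reg, int lane, uint16_t v) {
    uint32_t mask = (uint32_t)0xFFFFu << (lane * 16);
    return (reg & ~mask) | ((uint32_t)v << (lane * 16));
}
```

Halving the number of architectural registers a pair of narrow values occupies is what lets the allocator keep more live values in registers and avoid spills.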
Low Power Caches and Buses.
We developed techniques for lowering the power consumed by
on-chip memory and the external data buses associated with the processor. They are
useful for both high-performance and embedded processors since in both types of
processor on-chip memory and external buses consume a significant part of the
total power. These techniques are based upon compression/encoding of frequent
values. We have also developed compiler support for carrying out data compression
for reducing power consumed by the memory subsystem.
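A minimal sketch of frequent-value encoding follows, under assumed parameters: a small table of frequent words, where a hit lets the sender drive only a short index (plus a hit flag) on the bus instead of the full 32-bit word. The type names, the 8-entry table size, and the linear lookup are illustrative only; the report does not specify these details.

```c
#include <stdint.h>

#define FV_SLOTS 8   /* frequent-value table size (assumed for illustration) */

typedef struct {
    uint32_t table[FV_SLOTS];   /* the current set of frequent values */
} fv_codec_t;

/* Encode: if the word is one of the frequent values, only a 3-bit
 * index (plus a hit flag) needs to cross the bus; otherwise the full
 * 32-bit word must be driven.  Returns the index on a hit, -1 on a
 * miss.  Both ends of the bus keep identical tables. */
int fv_encode(const fv_codec_t *c, uint32_t word) {
    for (int i = 0; i < FV_SLOTS; i++)
        if (c->table[i] == word)
            return i;
    return -1;
}

/* Decode: recover the original word from the transmitted index. */
uint32_t fv_decode(const fv_codec_t *c, int index) {
    return c->table[index];
}
```

Because a small number of values (zeros, small constants, common pointers) account for a large fraction of traffic, most transfers hit in the table and toggle far fewer bus lines.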
Superscalar and VLIW Processors.
In the context of high-performance superscalar processors we have developed a low-complexity
memory disambiguation mechanism, a path-sensitive value prediction technique, a
power-efficient dynamic instruction issue mechanism, and load/store reuse techniques.
These techniques have also been implemented as part of the gcc compiler and the
FAST simulation system. In the context of VLIW processors, a
novel architecture that incorporates value prediction has been developed. In
addition, global instruction scheduling algorithms based upon control dependence
regions have been developed.
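To make the value prediction idea concrete, here is a sketch of a simple last-value predictor: a PC-indexed table that guesses an instruction will produce the same value it produced last time. This is the textbook baseline scheme, not the path-sensitive predictor the project developed; the table size and indexing are assumptions for illustration.

```c
#include <stdint.h>

#define VP_ENTRIES 256   /* prediction table size (assumed) */

typedef struct {
    uint32_t last[VP_ENTRIES];   /* last value produced per static instruction */
} value_pred_t;

/* Index the table by the instruction's PC (word-aligned). */
static inline unsigned vp_index(uint32_t pc) {
    return (pc >> 2) & (VP_ENTRIES - 1);
}

/* Predict: guess the value this instruction produced last time,
 * allowing dependent instructions to issue speculatively. */
uint32_t vp_predict(const value_pred_t *vp, uint32_t pc) {
    return vp->last[vp_index(pc)];
}

/* Update: once the real value is known, record it; a mispredict
 * would also trigger recovery of the speculative consumers. */
void vp_update(value_pred_t *vp, uint32_t pc, uint32_t actual) {
    vp->last[vp_index(pc)] = actual;
}
```

A path-sensitive variant would additionally index or tag the table with recent branch history, so the same static instruction can hold different predictions along different paths.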
Path-Sensitive Optimizations.
Path-sensitive optimizations represent situations in which it is possible to optimize a statement with respect to some paths along which it lies while the same optimization opportunity does not exist along other paths through the statement. We have developed demand-driven and profile-guided analyses for aggressive application of path-sensitive optimizations. Examples of optimizations studied include conditional branch elimination, partial redundancy elimination, partial dead code elimination, load redundancy removal, and elimination of array bound checks. Code motion and control flow restructuring are two transformations that have been used to enable path-sensitive optimizations along frequently executed paths. In this research, we also developed techniques which apply optimizations in a resource-sensitive manner and take advantage of machine characteristics such as support for speculation and predication.
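A small example of conditional branch elimination via control-flow restructuring, one of the optimizations listed above, can be sketched as follows. The functions and values are invented for illustration: the second test on `p` is redundant along the path where the first branch already established `p != 0`, and duplicating the intervening code removes it on that path.

```c
/* Before: the test p != 0 is evaluated twice along the path where
 * the first branch already established its outcome. */
int sum_before(int p, int a, int b) {
    int r = 0;
    if (p != 0)
        r += a;
    /* ... unrelated work here ... */
    if (p != 0)          /* redundant along the taken path */
        r += b;
    return r;
}

/* After restructuring: the path on which p != 0 holds is duplicated,
 * so the second conditional branch disappears along that path. */
int sum_after(int p, int a, int b) {
    int r = 0;
    if (p != 0) {
        r += a;
        /* ... unrelated work here ... */
        r += b;          /* second branch eliminated on this path */
    } else {
        /* ... unrelated work here ... */
    }
    return r;
}
```

The transformation trades a modest amount of code duplication for fewer dynamic branches; profile guidance keeps the duplication confined to frequently executed paths.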