Ali Akoglu

Research Interests

Ali Akoglu is an Associate Professor in the Department of Electrical and Computer Engineering at the University of Arizona. He is Co-Director of the National Science Foundation Center for Cloud and Autonomic Computing, Director of the NVIDIA CUDA Teaching Center, and Director of the Reconfigurable Computing Laboratory. He received his PhD in Computer Science from Arizona State University in 2005.

Dr. Akoglu is an expert in high-performance scientific computing and parallel computing, with a primary focus on restructuring computationally challenging algorithms to achieve high performance on field programmable gate array (FPGA) and graphics processing unit (GPU) architectures. He has been involved in many crosscutting collaborative projects aimed at bridging the gap between domain scientists, programming environments, and emerging highly parallel hardware architectures. His research has been funded by the National Science Foundation, the iPlant Collaborative, the US Air Force, the NASA Jet Propulsion Laboratory, and the Army Battle Command Battle Laboratory.

Akoglu has contributed to the scientific computing domain through: 1) the design and development of a scalable and novel GPU-based approach to the sequence alignment problem; 2) the development of novel methods to accelerate T-Cell Receptor (TCR) synthesis for studying the immune systems of complex organisms; 3) the first GPU-based study of joining Variable, Diverse, and Joining (VDJ) gene segments to determine all possible ways (several trillion sequences) in which proteins can be encoded to match antigens from viruses, cancers, and other diseases; 4) high-performance algorithms for rapid identification of phenotype-to-genotype linkage on GPU-based HPC systems; 5) a library of next-generation sequencing and error-correction algorithms for improving assembly quality and detecting gene-gene interactions in large-scale data sets; and 6) tools for autonomic management services (self-aware, self-adaptive, self-protecting, self-optimizing, and self-healing) that enable “hassle-free, uninterrupted” cloud computing services.