AUSTIN (KXAN) — The University of Texas at Austin is set to house one of the most advanced artificial intelligence computer clusters in academia. The cluster, built from 600 high-performance graphics processing units, is designed to carry out complex mathematical calculations at high speed.
Its initial applications will focus on biosciences, healthcare, computer vision, and natural language processing, including tests of its ability to interpret MRIs and to help develop new vaccines. The cluster will be a core part of UT's forthcoming Center for Generative AI, expected to launch this spring.
Adam Klivans, director of UT's Machine Learning Laboratory, spoke with KXAN's Tom Miller about what the cluster could accomplish.
Tom Miller: Could you explain, in layman's terms, what this cluster is and how it fits into artificial intelligence technology?
Adam Klivans: Training the large deep learning models behind tools like ChatGPT requires millions, billions, or even trillions of parameters and calculations, and that takes specialized hardware such as GPUs. Putting many GPUs together in a computer cluster gives us the scale needed to build these very large models, which can then power automated decision-making.
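For readers with a programming background, here is a minimal sketch of the idea Klivans describes: the same training code can spread each batch of work across however many GPUs a cluster node provides. The toy model, data, and sizes below are placeholders, not UT's actual workload.

```python
# Illustrative sketch only: how extra GPUs let the same training step scale up.
import torch
import torch.nn as nn

# A toy network standing in for a much larger deep learning model.
model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 10))

# If GPUs are available, DataParallel splits each batch across all of them.
if torch.cuda.is_available():
    model = nn.DataParallel(model).cuda()

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# Synthetic batch; real training would stream vastly more data.
x = torch.randn(256, 1024)
y = torch.randint(0, 10, (256,))
if torch.cuda.is_available():
    x, y = x.cuda(), y.cuda()

loss = loss_fn(model(x), y)   # one training step: predict, measure error,
loss.backward()               # compute gradients,
optimizer.step()              # and update the model's parameters
```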
Tom: Can you give an example of a practical application you hope to achieve with this?
Klivans: One application we have in mind is using AI tools to improve healthcare, particularly MRI scans. For example, when an infant moves too much during an MRI, the images come out blurry. AI tools can denoise and sharpen those scans, giving doctors clearer, more precise images.
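As a rough illustration of the kind of denoising Klivans mentions, a small network can be trained to map a corrupted image back to a clean one. The architecture, image sizes, and data below are placeholders chosen for brevity, not the models UT plans to build.

```python
# Illustrative sketch only: a tiny denoising network trained on
# (corrupted, clean) image pairs, standing in for MRI slices.
import torch
import torch.nn as nn

class Denoiser(nn.Module):
    def __init__(self):
        super().__init__()
        # Small convolutional stack: 1-channel image in, 1-channel image out.
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, noisy):
        # Predict the clean image directly from the corrupted one.
        return self.net(noisy)

model = Denoiser()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Placeholder training pair: a clean slice and a noisy copy of it.
clean = torch.rand(8, 1, 64, 64)
noisy = clean + 0.1 * torch.randn_like(clean)

# Train the network to reproduce the sharp original from the corrupted scan.
loss = nn.functional.mse_loss(model(noisy), clean)
loss.backward()
optimizer.step()
```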
Tom: Prominent private companies like OpenAI have already released products such as ChatGPT. Why is it important for academic institutions to do this work as well?
Klivans: There is growing concern that sensitive information may be embedded in these massive models. In some cases, typing certain keywords into a model has surfaced personal data or produced copyrighted or inappropriate images. Because the models' internal workings are opaque, it is hard to guarantee they are transparent and safe to deploy. Building them in a university setting, where the datasets and models are open and the process is transparent, lets us make them much safer and more interpretable before they are deployed.
Tom: How does this initiative fit into UT's existing work in artificial intelligence?
Klivans: UT already has one of the strongest artificial intelligence research programs in the world, with an outstanding roster of researchers. But many ambitious projects require serious computing power that we simply did not have before. With this capability, we can take on projects to improve healthcare tools, simulate large models to discover new therapeutics and vaccines, and scale up the models we envision.