
### Accelerating Generative AI and ML Development: Hugging Face and Google Cloud Forge Strategic Partnership


On Google Cloud, developers can efficiently and cost-effectively train, fine-tune, and deploy AI models.

Google Cloud and Hugging Face announced a strategic partnership on Jan. 25, 2024, in Sunnyvale, California. The collaboration lets developers use Google Cloud's infrastructure for all Hugging Face services, including training and deploying models on the Google Cloud platform.

This partnership reinforces Google Cloud's commitment to supporting open-source AI ecosystems and furthers Hugging Face's mission to empower AI development. Under the agreement, Google Cloud becomes a strategic cloud partner for Hugging Face and a preferred destination for its training and inference workloads. Developers will be able to train and deploy open models and build new generative AI applications on Google Cloud's AI-optimized infrastructure, which spans compute, Tensor Processing Units (TPUs), and Graphics Processing Units (GPUs).

The collaboration between Hugging Face and Google Cloud aims to help developers train and serve large AI models more quickly and cost-effectively on Google Cloud. Key initiatives include:

  • Letting developers train, tune, and serve Hugging Face models with Vertex AI directly from the Hugging Face platform, so new generative AI applications can be built quickly on Google Cloud's end-to-end MLOps services (a hedged deployment sketch follows this list).
  • Supporting deployments on Google Kubernetes Engine (GKE), where Hugging Face developers can train, fine-tune, and serve workloads on self-managed infrastructure using Hugging Face Deep Learning Containers on GKE.
  • Providing open-source developers with access to Cloud TPU v5e, which offers better price-performance and lower inference latency than previous TPU generations.
  • Planned support for A3 VMs powered by NVIDIA H100 Tensor Core GPUs, delivering faster training and improved inference speed.
  • Offering simple management and billing for the Hugging Face managed platform through Google Cloud Marketplace, including services such as AutoTrain, Inference Endpoints, and Spaces.
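
To make the Vertex AI path concrete, here is a minimal sketch of serving a Hugging Face Hub model on a Vertex AI endpoint with the google-cloud-aiplatform Python SDK. It is an illustration under stated assumptions, not the integration described in the announcement: the project ID, region, serving container image URI, the HF_MODEL_ID environment variable, the example model, and the machine and accelerator choices are all placeholders.

```python
# Minimal sketch: deploying a Hugging Face model to a Vertex AI endpoint
# with the google-cloud-aiplatform SDK. The project ID, region, container
# image URI, HF_MODEL_ID variable, model ID, and machine/accelerator
# settings are illustrative assumptions, not values from the announcement.
from google.cloud import aiplatform

aiplatform.init(project="my-gcp-project", location="us-central1")

# Register a model backed by a Hugging Face serving container.
model = aiplatform.Model.upload(
    display_name="hf-text-generation",
    # Placeholder image: in practice this would be a Hugging Face
    # Deep Learning Container published for Vertex AI / GKE.
    serving_container_image_uri="us-docker.pkg.dev/my-gcp-project/serving/huggingface-inference:latest",
    serving_container_environment_variables={
        # Hypothetical convention: tell the container which Hub model to load.
        "HF_MODEL_ID": "mistralai/Mistral-7B-Instruct-v0.2",
    },
)

# Deploy to a GPU-backed endpoint for online inference.
endpoint = model.deploy(
    machine_type="g2-standard-8",
    accelerator_type="NVIDIA_L4",
    accelerator_count=1,
    min_replica_count=1,
)

# Query the deployed endpoint; the instance schema depends on the container.
response = endpoint.predict(instances=[{"prompt": "Write a haiku about TPUs."}])
print(response.predictions)
```

Once Vertex AI and GKE appear as deployment options on the Hugging Face platform, much of this wiring would presumably be handled from the Hub itself; the SDK flow above simply shows the underlying pattern of registering a serving container and exposing it as an online endpoint.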

Thomas Kurian, CEO of Google Cloud, emphasized the shared vision of Google Cloud and Hugging Face in democratizing advanced AI tools for developers. Through this collaboration, developers can leverage the purpose-built AI platform Vertex AI and secure infrastructure to expedite the development of cutting-edge AI services and applications.

Clement Delangue, CEO of Hugging Face, highlighted Google's leadership in AI innovation and the open-source community. Through this partnership, the companies aim to make it easier for developers to pair the latest open models with optimized AI infrastructure and tools, such as Vertex AI and Cloud TPUs, enhancing their ability to build their own AI models effectively.

The Hugging Face platform is set to introduce Vertex AI and GKE as deployment options in the first quarter of 2024.

About Google Cloud

Google Cloud empowers organizations worldwide to undergo digital transformation efficiently. Leveraging Google’s state-of-the-art technology, we provide enterprise-grade solutions to help developers work more effectively. Trusted by customers in over 200 countries and territories, Google Cloud serves as a reliable partner in driving growth and overcoming significant business challenges.
