
### Enhancing Engineers’ GenAI Productivity Through Google Cloud’s New Collaboration with Hugging Face

Google Cloud partners with AI company Hugging Face to make generative AI accessible for developers …

According to Google Cloud Chief Executive Officer Thomas Kurian, the partnership gives Hugging Face developers access to Google Cloud’s purpose-built AI platform, Vertex AI, along with its secure infrastructure, which can accelerate the development of the next generation of AI services and applications.

Hugging Face and Google Cloud announced an agreement on Thursday aimed at making Google’s generative AI (GenAI) technology more accessible to developers, a move that strengthens both companies’ positions in the global AI market.

Under the new partnership, Hugging Face’s AI models can be trained and deployed on Google Cloud, letting developers draw on the platform’s resources across their services.

In a statement released on Thursday, Google Cloud’s CEO, Thomas Kurian, emphasized, “Google Cloud and Hugging Face are aligned in their vision to democratize advanced AI for engineers.”

Kurian highlighted that this collaboration enables developers to efficiently train, fine-tune, and deploy Hugging Face models using Google’s Vertex AI, streamlining the process for programmers to build new GenAI applications by utilizing Google Cloud’s comprehensive MLOps services.
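As a rough illustration of the train-fine-tune-deploy workflow Kurian describes, here is a hypothetical sketch of deploying an open Hugging Face model to a Vertex AI endpoint with the `google-cloud-aiplatform` SDK. The project, region, container image, and model id are illustrative assumptions, not values from the announcement.

```python
# Hypothetical sketch: serving an open Hugging Face model on Vertex AI.
# Project, region, container image, and model id are illustrative assumptions.


def build_model_spec(model_id: str, container_uri: str) -> dict:
    """Assemble keyword arguments for aiplatform.Model.upload()."""
    return {
        "display_name": model_id.replace("/", "--"),
        "serving_container_image_uri": container_uri,
        "serving_container_environment_variables": {"MODEL_ID": model_id},
    }


def deploy(model_id: str) -> str:
    # Requires: pip install google-cloud-aiplatform, plus GCP credentials.
    from google.cloud import aiplatform

    aiplatform.init(project="my-gcp-project", location="us-central1")  # assumed values
    spec = build_model_spec(
        model_id,
        # Illustrative serving image URI; the real image depends on the framework.
        "us-docker.pkg.dev/my-gcp-project/serving/hf-inference:latest",
    )
    model = aiplatform.Model.upload(**spec)
    endpoint = model.deploy(machine_type="n1-standard-4")
    return endpoint.resource_name


# deploy("mistralai/Mistral-7B-v0.1")  # would need a real GCP project to run
```

The actual MLOps flow on Vertex AI involves more (pipelines, model registry, monitoring); this only shows the upload-and-deploy step.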

Developers will be able to use Google Cloud’s AI infrastructure, including its compute resources such as Tensor Processing Units (TPUs) and Graphics Processing Units (GPUs), to train and deploy open models and build new generative AI applications.

Hugging Face Chooses Google Cloud as Its “Preferred Destination,” Adds GKE Deployments

Google Cloud, headquartered in Mountain View, California, and Hugging Face, based in New York, are currently two of the most sought-after AI service providers globally.

With this fresh collaboration, Google Cloud will emerge as the preferred partner for Hugging Face and the primary destination for inference and training workloads.

A key component of the partnership is Google’s Cloud TPU v5e, which offers up to 2.5 times more throughput per dollar and up to 1.7 times faster inference than the previous generation, with the goal of giving open-source developers broader access to the hardware.

Additionally, the partnership introduces support for Google Kubernetes Engine (GKE) deployments, enabling Hugging Face developers to scale models and conduct training, fine-tuning, and deployment of their workloads using GKE’s Deep Learning Containers.
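To make the GKE option concrete, a deployment like the one described might look roughly like the manifest below. The container image and model id are illustrative assumptions (here Hugging Face’s open-source text-generation-inference server); the partnership’s actual Deep Learning Containers would be pulled from Google’s own registry.

```yaml
# Hypothetical GKE manifest for serving a Hugging Face model.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hf-model-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hf-model-server
  template:
    metadata:
      labels:
        app: hf-model-server
    spec:
      containers:
        - name: server
          # Illustrative image; the partnership's Deep Learning Containers
          # would come from Google's registry instead.
          image: ghcr.io/huggingface/text-generation-inference:1.3
          env:
            - name: MODEL_ID
              value: mistralai/Mistral-7B-v0.1  # example open model
          ports:
            - containerPort: 80
          resources:
            limits:
              nvidia.com/gpu: 1  # requires a GKE GPU node pool
```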

The Hugging Face platform will introduce Vertex AI and GKE as deployment options in the first quarter of 2024.

Hugging Face CEO: “Google Leads the Way in AI.”

Hugging Face offers more than 100,000 machine learning models for free, and they are downloaded over a million times a day.

The open-source AI company provides various AI architectures, datasets, and collaboration options for businesses.

Clement Delangue, CEO of Hugging Face, mentioned that Google has been a pioneer in AI development and the open technology movement for a considerable time.

Through this latest collaboration, Hugging Face and Google Cloud users will have seamless access to the latest open models alongside leading tools and AI-optimized infrastructure, such as Vertex AI and TPUs, significantly enhancing developers’ ability to build their own AI models.

Hugging Face will also be available on the Google Cloud Marketplace, letting customers easily manage and pay for the Hugging Face managed platform, including Inference Endpoints, Spaces, AutoTrain, and other tools.

Last modified: April 15, 2024