Meta has released an updated version of its code-generation AI model, Code Llama 70B, which it describes as the largest and best-performing model in the Code Llama family to date. The Code Llama tools launched in August and are free for both research and commercial use. According to a post on Meta’s AI blog, Code Llama 70B can handle more queries than previous versions, meaning developers can feed it more prompts while programming and get more accurate results.
Code Llama 70B scored 53 percent accuracy on the HumanEval benchmark, surpassing GPT-3.5’s 48.1 percent and approaching the 67 percent reported for GPT-4 in an OpenAI paper (PDF).
Based on Llama 2, Code Llama helps developers generate code snippets from prompts and debug human-written code. Meta released two other variants of Code Llama last autumn: Code Llama – Python, which specializes in Python code, and Code Llama – Instruct, which is fine-tuned to follow natural-language instructions.
Code Llama 70B is available in the same three versions as the earlier Code Llama models and remains free for both research and commercial use. The larger model was trained on 1TB of code and code-related data and is hosted on Hugging Face, a model-hosting platform that also provides GPU access for running AI models.
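For developers who want to try the release, the weights hosted on Hugging Face can be loaded with the transformers library. The snippet below is a minimal sketch, not an official Meta example: it assumes the codellama/CodeLlama-70b-Instruct-hf repository ID, the accelerate package, and enough GPU memory for a 70B model; a smaller Code Llama checkpoint can be swapped in for local experimentation.

# Minimal sketch: loading a Code Llama checkpoint from Hugging Face with transformers.
# The repo ID below is an assumption; requires the accelerate package for device_map.
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "codellama/CodeLlama-70b-Instruct-hf"  # assumed repository ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to reduce memory use
    device_map="auto",          # spread layers across available GPUs
)

prompt = "Write a Python function that checks whether a string is a palindrome."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

The same pattern works with the smaller Code Llama variants on machines without multiple high-memory GPUs.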
Meta emphasized that its larger models, 34B and 70B, return better results and allow for better coding assistance.
Several other AI developers have launched code-generation tools in the past year. Amazon introduced CodeWhisperer in April, and Microsoft offers GitHub Copilot, which is powered by OpenAI’s models.