The AI models that power OpenAI’s ChatGPT assistant received updates on Thursday. Among the smaller changes, OpenAI tucked in a mention that an updated GPT-4 Turbo may address the widely reported “laziness” issues that have been present since the model’s launch in November. The company also unveiled a new GPT-3.5 Turbo model with lower prices, an updated moderation model, new embedding models, and new API usage management tools.
“Today, we are releasing an updated GPT-4 Turbo preview model, gpt-4-0125-preview,” OpenAI wrote in a blog post. According to the company, the new model completes tasks like code generation more thoroughly than the previous preview model and is intended to reduce cases of “laziness” where the model falls short of finishing a task.
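For developers who want to try the updated preview model, a request might look roughly like the minimal sketch below. It assumes the current OpenAI Python SDK and an OPENAI_API_KEY environment variable; the prompt itself is just a placeholder.

```python
from openai import OpenAI

# Assumes OPENAI_API_KEY is set in the environment.
client = OpenAI()

# Target the updated GPT-4 Turbo preview model by name.
response = client.chat.completions.create(
    model="gpt-4-0125-preview",
    messages=[
        {"role": "system", "content": "You are a helpful coding assistant."},
        {"role": "user", "content": "Write a Python function that reverses a string."},
    ],
)

print(response.choices[0].message.content)
```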
Since the release of GPT-4 Turbo, many ChatGPT users have reported a decline in the AI assistant’s willingness to perform tasks (especially coding tasks) with the same thoroughness it showed in earlier versions of GPT-4. We have observed this behavior firsthand while experimenting with ChatGPT over time.
OpenAI employees have previously acknowledged the issue on social media, though the company has never provided an official explanation for the change in behavior. The official ChatGPT account on X wrote in December, “We’ve heard all your feedback about GPT4 getting lazier!” The account added that the model had not been updated since November 11th, that the behavior certainly wasn’t intentional, and that the company was looking into fixing it, since model behavior can be unpredictable.
We asked OpenAI whether it could offer an official explanation for the laziness issue, but we did not hear back by press time.
A new GPT-3.5 Turbo, and other updates
In the same blog update, the company also revealed a new version of GPT-3.5 Turbo (gpt-3.5-turbo-0125), which it says offers “various improvements, such as higher accuracy at responding in requested formats and fixes for bugs that caused text encoding issues for non-English language function calls.”
Additionally, “to help our customers scale,” the price of GPT-3.5 Turbo through OpenAI’s API will drop for the third time in the past year. Input prices are reduced by 50 percent, to $0.0005 per 1,000 input tokens, and output prices are reduced by 25 percent, to $0.0015 per 1,000 output tokens.
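To put the new rates in perspective, here is a hypothetical back-of-the-envelope calculation in Python; the token counts are made up purely for illustration.

```python
# Hypothetical cost estimate at the new GPT-3.5 Turbo rates
# ($0.0005 per 1K input tokens, $0.0015 per 1K output tokens).
INPUT_PRICE_PER_1K = 0.0005
OUTPUT_PRICE_PER_1K = 0.0015

# Example workload: 2 million input tokens and 500,000 output tokens (illustrative numbers).
input_tokens = 2_000_000
output_tokens = 500_000

cost = (input_tokens / 1000) * INPUT_PRICE_PER_1K + (output_tokens / 1000) * OUTPUT_PRICE_PER_1K
print(f"Estimated cost: ${cost:.2f}")  # -> Estimated cost: $1.75
```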
The lower token prices may make operating third-party bots on GPT-3.5 Turbo significantly less expensive, but the model is generally more likely to confabulate than GPT-4 Turbo. So we might see more incidents like Quora’s bot telling users that eggs can melt (although that bot used the now-deprecated text-davinci-003 GPT-3 model). If GPT-4 Turbo API prices decline over time, some of those confabulation problems with third parties might eventually fade away.
OpenAI also unveiled new embedding models, text-embedding-3-small and text-embedding-3-large, which convert content into numerical sequences to aid machine learning tasks like clustering and retrieval. Additionally, the company’s API now includes an updated moderation model, text-moderation-007, which OpenAI says “allows developers to identify potentially harmful text.”
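Both of the new capabilities are exposed through existing API endpoints, so usage might look roughly like the following sketch. It assumes the current OpenAI Python SDK, an OPENAI_API_KEY environment variable, and that the model names above are accepted as-is; the input strings are placeholders.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Convert a piece of text into a numerical vector with the new small embedding model.
embedding_response = client.embeddings.create(
    model="text-embedding-3-small",
    input="OpenAI released new embedding models in January 2024.",
)
vector = embedding_response.data[0].embedding
print(f"Embedding length: {len(vector)}")

# Screen a piece of text with the updated moderation model.
moderation_response = client.moderations.create(
    model="text-moderation-007",
    input="Some user-submitted text to screen.",
)
print(f"Flagged: {moderation_response.results[0].flagged}")
```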
Lastly, as part of improvements to its developer platform, OpenAI is introducing new tools for managing API keys and a new dashboard for tracking API usage. Developers can now assign permissions to API keys from the API keys page, which helps prevent misuse of keys that could cost developers a lot of money if they fell into the wrong hands. Using separate API keys for each purpose, developers can “view usage on a per feature, team, product, or project level” in the API dashboard.
Releases like these show that OpenAI’s development teams are still shipping updates at a fairly regular pace, even as the media world seems to swirl around the company with controversies and think pieces about the implications of its technology. Despite the company nearly falling apart entirely late last year, it appears that OpenAI is conducting business as usual.