

This is the only line you really need from the entire article:
That’s the idea behind our new OpenAI Certifications.
It is an age-old idea. People have been getting Cisco, Microsoft, Oracle, and AWS certificates to pad their CVs for ages. This is a legitimate way for a person to put a well-known logo on their page and an easy way for companies to make a few bucks. OpenAI wants that as well.
The certificate means nothing. The course for it teaches nothing. But a CV with an OpenAI logo on it looks better than one without, and OpenAI wants people to pay for the privilege.
Translation. This only works for uniform technical texts. Older non-LLM translation is still better for general text, and human translation is a must for any fiction. Case in point: try to translate a transcript of the Severance TV show into another language. The show makes heavy use of “Innie/Outie” language that does not exist in modern English. LLMs fail to translate that; a human translator would be able to find a proper pair of words in the target language.
Triaging issues for support. This one is a double-edged sword. Sure, you can triage issues faster with an LLM, but other people can also write issues faster with their LLMs, and they are winning the volume war. Overall, LLMs are a net negative on your triage costs as a business: you can process each issue faster than before, but you are also getting a much higher volume of them.
Grammar. It fails at that. I asked an LLM about “fascia treatment”, but of course I misspelled “fascia”. The “PhD-level” LLM failed to recognize the typo and gave me a long answer about different kinds of “facial treatment”, even though the mistake would have been obvious to any human. Meaning, it only corrects grammar properly when the words it is working on are simple and trivial.
Starting points for deeper research. So was web search. No improvement there; exactly on par with the tech from two decades ago.
Recipes. Oh, you stumbled upon one of my pet peeves! Recipes are generally in the gutter on the textual Internet now. Somehow a wrong recipe got into LLM training for a few dishes, and now those mistakes are multiplied all over the Internet! You would not even notice the mistakes unless you had cooked or baked the thing before. The recipe database was one of the early use cases for personal computers back in the 1990s, and it is one of the first to fall prey to “innovation”. Online recipes are now so bad that you need an LLM to distill them back into manageable instructions. So in your example, the LLM is great at solving the very problem it created! You would not need an LLM to get cooking instructions out of a 1990s database. But early text generation AIs polluted this section of the Internet so thoroughly that you need the next generation of AI to unfuck it. Tech being great at solving the problem it created in the first place is not so great if you think about it.