Humans have outperformed cutting‑edge generative AI models from Google and OpenAI at one of the world’s most prestigious mathematics competitions, even though both systems achieved gold‑level scores for the first time.

At this year’s International Mathematical Olympiad (IMO), held in Queensland, Australia, five human contestants earned perfect scores of 42 points — a feat neither AI model could match.

Google announced on Monday that an advanced version of its Gemini chatbot solved five of the six complex problems, earning 35 out of 42 points — a gold‑medal benchmark. “We can confirm that Google DeepMind has reached the much‑desired milestone,” IMO president Gregor Dolinar said, praising the AI’s solutions as “clear, precise, and in many cases easy to follow.”

Similarly, OpenAI reported that its experimental reasoning model also scored 35 points. Researcher Alexander Wei described the result as “achieving a longstanding grand challenge in AI” at “the world’s most prestigious math competition,” noting that former IMO medalists independently graded the AI’s work under the same rules as human contestants.

Roughly 10 percent of human participants earned gold‑level medals, and of those, five achieved flawless performances.

The milestone marks a significant leap for AI. Just last year, Google’s system managed only a silver‑level score after solving four out of six problems over several days of computation. This year, Gemini completed its solutions within the IMO’s 4.5‑hour time limit.

According to the IMO, tech companies tested their closed‑source models on the same problems tackled by 641 students from 112 countries.

“It is very exciting to see progress in the mathematical capabilities of AI models,” Dolinar said — even as humans remain firmly at the top of the podium.