A new AI coding challenge has revealed its first winner, and set a new bar for AI-powered software engineers.
On Wednesday at 5pm PST, the nonprofit Laude Institute announced the first winner of the K Prize, a multi-round AI coding challenge launched by Databricks and Perplexity co-founder Andy Konwinski. The winner was a Brazilian prompt engineer named Eduardo Rocha de Andrade, who will receive $50,000 for the prize. But more surprising than the win was his final score: he won with correct answers to just 7.5% of the questions on the test.
“We’re glad we built a benchmark that’s actually hard,” said Konwinski. “Benchmarks should be hard if they’re going to matter,” he continued, adding: “Scores would be different if the big labs had entered with their biggest models. But that’s kind of the point. K Prize runs offline with limited compute, so it favors smaller and open models. I like that. It levels the playing field.”
Konwinski has pledged $1 million to the first open-source model that can score higher than 90% on the test.
Like the well-known SWE-Bench system, the K Prize tests models against flagged issues from GitHub as a measure of how well they can handle real-world programming problems. But while SWE-Bench is based on a fixed set of problems that models can train against, the K Prize is designed as a “contamination-free version of SWE-Bench,” using a timed entry system to guard against any benchmark-specific training. For round one, models were due by March 12th. The K Prize organizers then built the test using only GitHub issues flagged after that date.
The 7.5% top score stands in marked contrast to SWE-Bench itself, which currently shows a 75% top score on its easier ‘Verified’ test and 34% on its harder ‘Full’ test. Konwinski still isn’t sure whether the disparity is due to contamination on SWE-Bench or simply the challenge of collecting new issues from GitHub, but he expects the K Prize project to answer the question soon.
“As we get more runs of the thing, we’ll have a better sense,” he told TechCrunch, “because we expect people to adapt to the dynamics of competing on this every few months.”
It might seem like an odd place to fall short, given the wide range of AI coding tools already publicly available. But with benchmarks becoming too easy, many critics see projects like the K Prize as a necessary step toward solving AI’s growing evaluation problem.
“I’m quite bullish about building new tests for existing benchmarks,” says Princeton researcher Sayash Kapoor, who put forward a similar idea in a recent paper. “Without such experiments, we can’t actually tell if the issue is contamination, or even just targeting the SWE-Bench leaderboard with a human in the loop.”
For Konwinski, it’s not just a better benchmark, but an open challenge to the rest of the industry. “If you listen to the hype, it’s like we should be seeing AI doctors and AI lawyers and AI software engineers, and that’s just not true,” he says. “If we can’t even get more than 10% on a contamination-free SWE-Bench, that’s the reality check for me.”

