OpenAI's first security hire, Ari Herbert-Voss, thinks more automated bug finding will improve security without costing jobs
Black Hat Asia Open source models can find bugs as effectively as Anthropic's Mythos, according to Ari Herbert-Voss, CEO of AI-powered security startup RunSybil and OpenAI's first security hire.
Speaking at the Black Hat Asia conference in Singapore today, Herbert-Voss said Mythos excels at finding both "shallow" bugs - well-described flaws that are easy to validate - and more complex vulnerabilities.
In his talk, he attributed this to "supralinear scaling": where researchers assumed LLM capability would improve linearly, evidence now suggests a model trained on twice the data, compute, and time produces something four times more capable.
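The claimed relationship can be made concrete with a toy model. This is a sketch of the arithmetic only: the squared exponent is an assumption chosen to match the "twice the resources, four times the capability" figure from the talk, not a published scaling law.

```python
# Toy model of "supralinear scaling": capability grows with the square of
# training resources, so doubling resources quadruples capability.
# The exponent is an illustrative assumption, not a disclosed figure.

def capability(resources: float, exponent: float = 2.0) -> float:
    """Hypothetical capability as a power law of training resources."""
    return resources ** exponent

baseline = capability(1.0)
doubled = capability(2.0)
print(doubled / baseline)  # 4.0 under the squared-scaling assumption
```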
He hinted supralinear scaling might produce even better multipliers but could not say more due to a non-disclosure agreement.
Anthropic has kept access to Mythos tightly restricted, citing fears of misuse.
However, Herbert-Voss argues attackers and defenders alike can achieve comparable results with open source models by building "scaffolding" to run several of them in a harness.
That approach also improves defense in depth, as different models tend to catch different flaws — a useful hedge against any single model's blind spots.
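The scaffolding idea can be sketched as an ensemble harness: run several bug-finders over the same target and union their results, so one model's blind spot is covered by another. The scanner functions below are hypothetical stand-ins for LLM-backed analyzers, not real APIs.

```python
# Minimal sketch of a multi-model "scaffolding" harness: each scanner
# represents a different open source model, and the harness merges their
# deduplicated findings. Scanner names and outputs are illustrative only.

from typing import Callable

Finding = tuple[str, str]  # (location, description)

def run_harness(target: str,
                scanners: list[Callable[[str], set[Finding]]]) -> set[Finding]:
    """Run every scanner over the target and return the union of findings."""
    findings: set[Finding] = set()
    for scan in scanners:
        findings |= scan(target)
    return findings

# Toy scanners: each catches a different flaw, and they overlap on one,
# illustrating why ensembling hedges against any single model's gaps.
def scanner_a(target: str) -> set[Finding]:
    return {("parser.c:42", "integer overflow")}

def scanner_b(target: str) -> set[Finding]:
    return {("auth.c:7", "missing bounds check"),
            ("parser.c:42", "integer overflow")}

results = run_harness("example-repo", [scanner_a, scanner_b])
print(sorted(results))
```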
Cost is another driver.
Mythos is expensive to build and run, and may never be publicly available, making open source alternatives not just viable but necessary for many organizations.
Herbert-Voss feels human expertise is still needed to orchestrate open source models so they together deliver Mythos-grade performance, and to assess the bug reports AI generates.
He then noted that fuzzing, the testing technique that injects random or near-random data into software to see if doing so produces bugs, also generates so many warnings that it can create extra work for humans.
AI bug-hunters already produce the same problem, and he expects it will persist.
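The triage problem is easy to reproduce in miniature: fuzz a deliberately buggy parser, collect every failure, then deduplicate by error type. The parser and its flaws here are contrived for illustration; the point is that raw crash counts far exceed the distinct bugs a human needs to examine.

```python
# Toy fuzzer showing why fuzzing creates triage work: thousands of raw
# crashes often collapse into just a handful of distinct bug classes.

import random

def fragile_parse(data: bytes) -> None:
    """Deliberately buggy parser used as the fuzz target."""
    if not data:
        raise ValueError("empty input")
    if data[0] > 200:
        raise OverflowError("header byte out of range")

random.seed(0)  # deterministic run for reproducibility
crashes = []
for _ in range(1000):
    # Near-random payloads of length 0-3 bytes.
    payload = bytes(random.randrange(256) for _ in range(random.randrange(4)))
    try:
        fragile_parse(payload)
    except Exception as exc:
        crashes.append(type(exc).__name__)

# Many raw crash reports, but only a couple of distinct bug classes
# survive deduplication - the rest is triage noise.
print(len(crashes), "crashes,", len(set(crashes)), "distinct bug classes")
```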
Herbert-Voss therefore thinks infosec workers will have plenty on their plates for the foreseeable future. The economic incentive to use AI – someone has to pay for all those GPUs and datacenters – will act as a forcing function pushing infosec teams to adopt it, improving their proactive and defensive work in the process.
®
Source: This article was originally published by The Register