AI vs Human performance in audit competitions - what each found and where to specialize
See what AI found, what humans found, and where each excels. Use this data to improve your workflow or benchmark your AI tool.
AI tools scan codebases before human auditors compete. Their findings are marked out-of-scope, filtering out AI-detectable issues so humans can focus on novel bugs. This is a division of labor, not a head-to-head AI vs Human competition.
Prize estimates use Code4rena's formula: High=10 shares, Medium=3 shares, with 0.85^(n-1)/n decay for duplicates.
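Below is a minimal sketch of how such an estimate could be computed from that formula. It is illustrative only, not the site's or Code4rena's actual code; the function names (sharesPerDuplicate, estimatePrize) and the final pot-splitting step are assumptions.

```typescript
// Hypothetical sketch of the share formula quoted above (not official C4 code).
type Severity = "high" | "medium";

// Base shares per the formula above: High = 10, Medium = 3.
const BASE_SHARES: Record<Severity, number> = { high: 10, medium: 3 };

// Shares awarded to each of the n duplicate submissions of one finding:
// base * 0.85^(n-1) / n.
function sharesPerDuplicate(severity: Severity, duplicates: number): number {
  const base = BASE_SHARES[severity];
  return (base * Math.pow(0.85, duplicates - 1)) / duplicates;
}

// Assumed pot-splitting step: a warden's prize is their fraction of the total share pool.
function estimatePrize(wardenShares: number, totalShares: number, potUsd: number): number {
  return (wardenShares / totalShares) * potUsd;
}

// Example: a High finding submitted by 3 wardens.
// Each submission earns 10 * 0.85^2 / 3 ≈ 2.41 shares.
console.log(sharesPerDuplicate("high", 3).toFixed(2));
```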
These AI findings were marked out-of-scope for human wardens. Humans competed for remaining issues only.
View V12 Report →
Submit your AI tool's contest results and findings to be tracked here.
Submit Tool Data
Track your performance against AI tools. Coming soon: connect your Sherlock/C4 profile.
Request Feature