Job Fear Index

How threatened should human auditors feel about AI competition?

Composite Fear Index: 72/100 (😰 High Threat Level)
Composite index measuring how threatened human auditors should feel.
Scale: Safe · Watchful · Concerned · Threatened · Disrupted
How is the Composite Fear Index calculated?

The Composite Fear Index aggregates six sub-indices measuring different aspects of AI threat to human auditor jobs:

Formula:

Composite = (Performance × 0.25) + (Penetration × 0.15) + (Severity × 0.20) + (Replacement × 0.20) + (Capability × 0.10) + (Trend × 0.10)

Component weights explained:

  • Performance Gap (25%): The most direct measure - how AI actually performs against humans in the same contests
  • Replacement Indicators (20%): Speed, cost, quality factors that enable job displacement
  • Severity Breakthrough (20%): Can AI find critical bugs, or just low-hanging fruit?
  • Market Penetration (15%): How widespread is AI participation in audit contests?
  • Capability Level (10%): Milestone achievements (top 10, podium, wins)
  • Trend Direction (10%): Is AI improving or plateauing?


Score interpretation:

  • 0-20 (Safe): AI is a helpful tool, not a threat
  • 20-40 (Watchful): AI shows promise but is far from replacing humans
  • 40-60 (Concerned): AI competitive in some areas; adapt or risk obsolescence
  • 60-80 (Threatened): AI can replace routine audit work; specialize to survive
  • 80-100 (Disrupted): AI superior to most humans; industry transformation imminent
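
As a sketch of the aggregation (assuming each sub-index is already computed on a 0-100 scale; the names below are illustrative, not taken from the project's code):

    # Combine the six sub-indices into the Composite Fear Index.
    # Weights follow the formula above; each score is assumed to be 0-100.
    WEIGHTS = {"performance": 0.25, "penetration": 0.15, "severity": 0.20,
               "replacement": 0.20, "capability": 0.10, "trend": 0.10}

    BANDS = [(20, "Safe"), (40, "Watchful"), (60, "Concerned"),
             (80, "Threatened"), (100, "Disrupted")]

    def composite_fear(scores: dict) -> tuple:
        """Return (composite score, interpretation band)."""
        total = sum(scores[name] * weight for name, weight in WEIGHTS.items())
        band = next(label for upper, label in BANDS if total <= upper)
        return round(total, 1), band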

Index 1: AI vs Human Performance Gap

Score: 54/100

How does AI performance compare to human auditors in the same contests?

Methodology

What it measures: AI's competitive standing when directly competing against human auditors in the same contests.

Formula: Percentile = (1 - place/total_participants) × 100

Weighting: AI-assisted (human-in-loop) tools receive 0.5× weight; backtesting results receive 0.2× weight (not real-time competition).

Data source: Human contest placements from bot_placements.csv where contestType = "human"

Interpretation:

  • 0-30%: AI performs worse than most humans (safe for auditors)
  • 30-50%: AI at median level - competes with average auditors
  • 50-70%: AI beats majority of humans - elevated threat
  • 70%+: AI outperforms most human auditors - high threat

Limitations: Based on contest results only; doesn't capture manual audit quality or client relationships.
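
A minimal sketch of the percentile and weighting logic, under one plausible reading of the weighting rule as a weighted average (entry fields and mode labels are assumptions for illustration, not the real bot_placements.csv schema):

    def entry_percentile(place: int, total_participants: int) -> float:
        # Share of the field the AI finished ahead of.
        return (1 - place / total_participants) * 100

    # Down-weight human-in-loop tools and backtests per the weighting above.
    MODE_WEIGHTS = {"autonomous": 1.0, "ai_assisted": 0.5, "backtest": 0.2}

    def weighted_avg_percentile(entries: list) -> float:
        num = sum(entry_percentile(e["place"], e["total"]) * MODE_WEIGHTS[e["mode"]]
                  for e in entries)
        den = sum(MODE_WEIGHTS[e["mode"]] for e in entries)
        return num / den

    # Best entry below: entry_percentile(4, 479) -> 99.2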

  • Avg AI Percentile (Human Contests): 54.3%
  • Best AI Percentile: 99.2%
  • AI Contest Entries: 6

Best performance: almanax placed #4/479 in Citrea

Index 2: Contest Type Penetration (90d)

Score: 39/100

How much of the audit contest market has AI penetrated in the last 90 days?

Methodology

What it measures: AI's market presence across different audit platforms and contest types in the recent 90-day window.

Formula: Penetration Score = min(100, (penetration_pct × 10) + (platforms_entered × 5)), where penetration_pct = (AI_entries / total_contests) × 100

Data sources:

  • bot_placements.csv - AI participation records
  • real_contests.csv - Total contest counts per platform

Key metrics:

  • 90d Market Penetration: % of contests in last 90 days with AI participation
  • Platforms Entered: Number of unique platforms (Code4rena, Sherlock, etc.) where AI has competed
  • Human Contest Earnings: Prize money won by AI in human contests (not bot races)

Interpretation: Higher penetration = AI is becoming ubiquitous in the audit contest market, reducing opportunities for human-only participation.
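
A sketch of the scoring under that reading of the formula (function and argument names are illustrative):

    def penetration_score(ai_entries: int, total_contests: int,
                          platforms_entered: int) -> float:
        # Penetration is expressed as a percentage before the x10 scaling.
        penetration_pct = ai_entries / total_contests * 100
        return min(100, penetration_pct * 10 + platforms_entered * 5)

    # Current 90d window: penetration_score(1, 42, 3) -> 38.8, shown as 39.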

  • 90d Market Penetration: 2.4%
  • AI Entries / Total (90d): 1/42
  • Platforms Entered: 3
  • Human Contest Earnings: $7.3K

Platform Breakdown (90d)

Platform   AI/Total  Penetration
code4rena  1/18      5.6%
sherlock   0/11      0.0%
cantina    0/2       0.0%
immunefi   0/10      0.0%
hats       0/1       0.0%

All-time: 10 human contest entries, 891 bot races

Index 3: Severity Breakthrough

Score: 100/100

Can AI find serious vulnerabilities, or just low-severity issues?

Methodology

What it measures: The highest severity level of vulnerabilities that AI tools have successfully discovered.

Scoring:

  • Critical found: 100 points (AI can find fund-draining bugs)
  • High found: 70 points (AI can find serious vulnerabilities)
  • Medium found: 40 points (AI limited to moderate issues)
  • Low/Info only: 10 points (AI only finds minor issues)

Data source: verified_findings.csv - confirmed vulnerabilities found by AI tools in real audits/contests.

Why it matters: If AI can only find low-severity issues, human auditors remain essential for critical security work. Finding Critical/High bugs demonstrates AI can match human expertise on high-impact vulnerabilities.

Limitations: Severity classification varies by platform; some "High" findings may be less impactful than others.
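
A sketch of the highest-severity scoring (the counts dict stands in for aggregated verified_findings.csv rows; the keys are assumed):

    def severity_score(counts: dict) -> int:
        # Score is set by the most severe class the AI has ever found.
        if counts.get("critical", 0) > 0:
            return 100  # fund-draining bugs
        if counts.get("high", 0) > 0:
            return 70
        if counts.get("medium", 0) > 0:
            return 40
        return 10       # low/info only

    # Current data: severity_score({"critical": 3, "high": 26,
    #                               "medium": 18, "low": 80}) -> 100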

  • Critical Findings: 3
  • High Findings: 26
  • Medium Findings: 18
  • Low/Info Findings: 80
  • Severity Score: 100/100

Findings by Tool

Tool                   Critical  High  Medium  Low  Total
zerocool                      2     4       1    6     13
nethermind-auditagent         1     2       2    0      5
v12                           0    11       6   73     90
agentlisa                     0     6       2    0      8
octane                        0     2       0    1      3
the-hound                     0     1       7    0      8

Index 4: Replacement Indicators

Score: 75/100

Factors that indicate whether AI could replace human auditors.

Methodology

What it measures: Three key factors that determine whether AI could realistically replace human auditors.

Formula: Replacement Score = (Speed × 0.2) + (Cost × 0.3) + (Quality × 0.5)

Component breakdown:

  • Speed Advantage (20% weight): Fixed at 90%. AI analyzes code in minutes; humans take days/weeks. Speed alone doesn't replace humans but enables higher throughput.
  • Cost Advantage (30% weight): (1 - AI_cost/Human_cost) × 100. Assumes ~$100 AI API costs vs ~$8,000 human auditor cost per contest (40 hrs × $200/hr).
  • Quality Parity (50% weight): Uses the Performance Gap percentile. If AI performs at 50th percentile, it matches median human quality.

Interpretation:

  • <40: AI is a tool, not a replacement
  • 40-60: AI can handle routine work; humans needed for complex audits
  • 60-80: AI competitive for most audits; human premium for critical work
  • 80+: AI can replace most human auditor functions

  • Speed Advantage: 90% (AI analyzes code in minutes vs days/weeks for humans)
  • Cost Advantage: 99% (AI cost: ~$100 vs human cost: ~$8,000)
  • Quality Parity: 54% (AI performance vs the median human auditor)
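
Putting the three components together (a sketch; the ~$100 vs ~$8,000 cost figures are the stated assumptions above):

    def replacement_score(speed: float, cost: float, quality: float) -> float:
        return speed * 0.2 + cost * 0.3 + quality * 0.5

    cost_advantage = (1 - 100 / 8000) * 100             # 98.75, shown as 99
    print(replacement_score(90, cost_advantage, 54.3))  # 74.775 ~ 74.8, shown as 75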

Index 5: Capability Milestones

Score: 57/100

AI auditor capability levels and associated threat to human jobs.

Methodology

What it measures: A progressive scale of AI capability achievements, from basic participation to human-level expertise.

Formula: Score = (current_level / 7) × 100

Level definitions:

  • Level 0: AI cannot participate in contests
  • Level 1: Wins automated bot races (QA-level findings)
  • Level 2: Enters and places in human contests
  • Level 3: Top 10 in human contest
  • Level 4: Top 5 in human contest (beats 95% of humans)
  • Level 5: Podium finish (top 3)
  • Level 6: Wins a human contest outright
  • Level 7: Discovers novel 0-day vulnerability class

Detection: Automatically calculated from bot_placements.csv by checking placement positions in human vs bot-race contests.

Why it matters: Each level represents a qualitative leap in AI capability that historically required human expertise.
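
A sketch of the detection step (placement fields are assumptions for illustration; Level 7 would need manual confirmation rather than placement data):

    def capability_level(placements: list) -> int:
        # Walk the milestone ladder from placement records.
        human = [p for p in placements if p["contest_type"] == "human"]
        level = 0
        if any(p["contest_type"] == "bot_race" and p["place"] == 1
               for p in placements):
            level = 1
        if human:
            level = max(level, 2)
        for threshold, lvl in [(10, 3), (5, 4), (3, 5), (1, 6)]:
            if any(p["place"] <= threshold for p in human):
                level = max(level, lvl)
        return level

    def capability_score(level: int) -> float:
        return level / 7 * 100   # Level 4 -> 57.1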

  ○ Level 0: Can't Compete (😌 No Fear): AI cannot participate in audit contests
  ✓ Level 1: Bot Race Viable (🙂 Low): AI wins automated bot races (QA findings)
  ✓ Level 2: Human Contest Entry (🤔 Moderate): Autonomous AI enters and places in human contests
  ✓ Level 3: Top 10 Human Contest (😟 Elevated): Autonomous AI places in the top 10 competing against humans
  ✓ Level 4: Top 5 Human Contest (😰 High): Autonomous AI consistently beats 95% of human auditors
  ○ Level 5: Podium Finish (😨 Very High): Autonomous AI places top 3 in a competitive human contest
  ○ Level 6: Human Contest Winner (😱 Extreme): AI wins a human audit contest outright
  ○ Level 7: Novel 0-Day Discovery (🚨 Critical): AI discovers a previously unknown vulnerability class

Current Level: 4 — Fear from milestones: 57%

Index 6: Trend Direction

↓ Falling

Is AI getting better or worse at competing with humans?

Methodology

What it measures: Whether AI performance is improving, declining, or stable over time.

Calculation:

  1. Sort all human contest placements by date
  2. Split into two halves: historical (older) and recent (newer)
  3. Calculate average percentile for each half
  4. Compare: change = recent_avg - historical_avg

Trend classification:

  • Rising (↑): Recent performance > Historical + 5% (AI improving)
  • Falling (↓): Recent performance < Historical - 5% (AI regressing)
  • Stable (→): Within ±5% (no significant change)

Why it matters: A rising trend suggests AI will continue to close the gap with humans. A falling trend may indicate AI hitting capability limits or humans adapting.

Limitations: Small sample sizes may produce noisy trends; doesn't account for contest difficulty variations.
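
A sketch of the half-split comparison (record fields are assumed; needs at least two placements):

    def trend_direction(placements: list) -> tuple:
        # Compare the older half of percentile history to the newer half.
        ordered = sorted(placements, key=lambda p: p["date"])
        percentiles = [p["percentile"] for p in ordered]
        half = len(percentiles) // 2
        historical, recent = percentiles[:half], percentiles[half:]
        change = (sum(recent) / len(recent)
                  - sum(historical) / len(historical))
        if change > 5:
            return "rising", change
        if change < -5:
            return "falling", change
        return "stable", change

    # Current data: 55.0 - 63.9 = -8.9 -> "falling"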

  • Historical Avg Percentile: 63.9%
  • Recent Avg Percentile: 55.0% ↓
  • Trend Change: -8.9%

✓ AI performance declining — humans maintaining edge

Composite Fear Calculation

How the overall Job Fear Index is calculated.

Fear = (Performance_Gap × 0.25) + (Severity × 0.25) + (Capability_Level × 0.25) + (Replacement × 0.25)
     = (54.3 × 0.25) + (100 × 0.25) + (57.1 × 0.25) + (74.8 × 0.25)
     ≈ 72
  • Performance Gap: 54
  • Severity Breakthrough: 100
  • Capability Level: 57
  • Replacement Score: 75
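
For a quick check of the arithmetic (a sketch, using the sub-scores as displayed above):

    # Equal 0.25 weights over the four displayed sub-scores.
    sub_scores = [54.3, 100, 57.1, 74.8]
    fear = round(sum(score * 0.25 for score in sub_scores))  # 71.55 -> 72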

Historical Index Trends

How each index has evolved over time (cumulative view). Tables show events that changed each index.

[Chart: Composite Fear Index Over Time]
[Chart: Performance Gap Index Over Time (AI Percentile in Human Contests)]

Performance Events (5 human contest entries)

Date        Tool     Contest          Place   Percentile  Index Change
2025-10-06  almanax  Hybra Finance    #69/82  15.9%       61.9 → 52.7 (-9.2)
2025-09-15  almanax  Succinct SP1     #6/12   50.0%       65.9 → 61.9 (-4.0)
2025-08-15  almanax  Citrea           #4/479  99.2%       49.3 → 65.9 (+16.6)
2025-07-23  almanax  GTE Spot CLOB    #62/72  13.9%       84.6 → 49.3 (-35.4)
2025-06-01  savant   Symbiotic Relay  #6/39   84.6%       0.0 → 84.6 (+84.6)

Note: savant's Symbiotic Relay entry was the First Human Contest Entry (Autonomous AI).
[Chart: Capability Level Over Time]

Capability Milestones (3 milestone events)

Date        Milestone                             Tool              Contest          Place   Level Change
2025-08-15  First Top 5 Finish (Autonomous AI)    almanax           Citrea           #4/479  43 → 57 (+14)
2025-06-01  First Top 10 Finish (Autonomous AI)   savant            Symbiotic Relay  #6/39   29 → 43 (+14)
2023-04-27  First Bot Race Win                    tragedyotcommons  EigenLayer       #1/13   0 → 14 (+14)
[Chart: Market Penetration Score Over Time]

Platform Penetration Events (3 new platform entries)

Date        Event                  Tool              Contest          Score Change
2025-08-15  First Cantina Entry    almanax           Citrea           20 → 30 (+10)
2025-06-01  First Sherlock Entry   savant            Symbiotic Relay  10 → 20 (+10)
2023-04-27  First Code4rena Entry  tragedyotcommons  EigenLayer       0 → 10 (+10)