
(OpenAI) PaperBench: Evaluating AI’s Ability to Replicate AI Research
About this content
Dive into PaperBench, a new benchmark from OpenAI designed to rigorously evaluate AI agents' ability to replicate state-of-the-art machine learning research. Unlike previous benchmarks, PaperBench requires agents to build complete codebases from scratch, based solely on the content of 20 selected ICML 2024 papers, and to run the experiments those papers describe. Performance is graded against detailed, author-approved rubrics containing thousands of individually gradable outcomes. To make evaluation scalable, the benchmark employs an LLM-based judge whose accuracy is itself assessed against human grading. Early results show that the best-performing agent tested, Claude 3.5 Sonnet, achieves an average replication score of 21.0%, demonstrating emerging capability but still falling short of human ML PhDs. PaperBench serves as a tool for measuring AI autonomy and ML R&D capability, potentially accelerating future scientific discovery. Challenges remain, however, including the high computational cost of evaluations and the labour-intensive process of creating the comprehensive rubrics.
Paper link: https://arxiv.org/pdf/2504.01848
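
To picture the rubric-based grading described above, here is a minimal sketch of how a hierarchical rubric score could be rolled up: binary grades on leaf requirements, aggregated upward as weighted averages. This is an illustration, not OpenAI's actual grading code; the node names, weights, and example requirements are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class RubricNode:
    """One requirement in a hierarchical rubric tree (names are illustrative)."""
    name: str
    weight: float = 1.0            # relative importance among siblings
    score: float | None = None     # leaf nodes: 0.0 or 1.0 as graded by the judge
    children: list["RubricNode"] = field(default_factory=list)

def replication_score(node: RubricNode) -> float:
    """Leaf nodes return their grade; internal nodes return the weighted average of children."""
    if not node.children:
        return node.score if node.score is not None else 0.0
    total_weight = sum(child.weight for child in node.children)
    return sum(child.weight * replication_score(child) for child in node.children) / total_weight

# Tiny example rubric; real PaperBench rubrics contain thousands of nodes.
rubric = RubricNode("paper", children=[
    RubricNode("code development", weight=2.0, children=[
        RubricNode("implements training loop", score=1.0),
        RubricNode("implements baseline method", score=0.0),
    ]),
    RubricNode("result match", weight=1.0, children=[
        RubricNode("reproduces main table within tolerance", score=0.0),
    ]),
])

print(f"Replication Score: {replication_score(rubric):.1%}")
```

In this toy tree the agent gets credit for the training loop but not the baseline or the result match, yielding a score of roughly 33%; the same bottom-up aggregation scales to rubrics with thousands of leaves.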