Abstract
As machine intelligence evolves, the need to test and compare the problem-solving abilities of different AI models grows. However, current benchmarks are often simplistic, allowing models to perform uniformly well and making it difficult to distinguish their capabilities. Additionally, benchmarks typically rely on static question-answer pairs that the models might memorize or guess. To address these limitations, we introduce Dynamic Intelligence Assessment (DIA), a novel methodology for testing AI models using dynamic question templates and improved metrics across multiple disciplines such as mathematics, cryptography, cybersecurity, and computer science. The accompanying dataset, DIA-Bench, contains a diverse collection of challenge templates with mutable parameters presented in various formats, including text, PDFs, compiled binaries, visual puzzles, and CTF-style cybersecurity challenges. Our framework introduces four new metrics to assess a model's reliability and confidence across multiple attempts. These metrics revealed that even simple questions are frequently answered incorrectly when posed in varying forms, highlighting significant gaps in models' reliability. Notably, API models like GPT-4o often overestimated their mathematical capabilities, while ChatGPT-4o demonstrated better performance due to effective tool usage. In self-assessment, OpenAI's o1-mini showed the best judgement about which tasks it should attempt to solve. We evaluated 25 state-of-the-art LLMs using DIA-Bench, showing that current models struggle with complex tasks and often display unexpectedly low confidence, even with simpler questions. The DIA framework sets a new standard for assessing not only problem-solving but also a model's adaptive intelligence and its ability to recognise its own limitations. The dataset is publicly available on the project's page: https://github.com/DIA-Bench.
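The abstract describes dynamic question templates with mutable parameters and metrics computed over repeated attempts. The sketch below illustrates that idea only; the template, parameter ranges, the `ask_model` callback, and the simple pass-rate score are illustrative assumptions, not DIA-Bench's actual templates or its four metrics.

```python
import random


def make_modular_arithmetic_task(rng: random.Random) -> tuple[str, str]:
    """Hypothetical dynamic template: each call yields a fresh (question, answer) pair.

    Stand-in for a DIA-Bench-style template with mutable parameters; the real
    templates span mathematics, cryptography, cybersecurity, and CS tasks in
    several formats (text, PDFs, binaries, visual puzzles, CTF challenges).
    """
    a, b, m = rng.randint(100, 999), rng.randint(100, 999), rng.randint(7, 97)
    question = f"What is ({a} * {b}) mod {m}? Answer with a single integer."
    return question, str((a * b) % m)


def pass_rate_over_attempts(ask_model, n_attempts: int = 5, seed: int = 0) -> float:
    """Illustrative reliability-style score: fraction of freshly generated task
    instances the model answers correctly. The paper's four metrics are defined
    differently; this only shows the multiple-attempt idea."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_attempts):
        question, expected = make_modular_arithmetic_task(rng)
        reply = ask_model(question)  # e.g. a call into an LLM API
        correct += int(reply.strip() == expected)
    return correct / n_attempts


if __name__ == "__main__":
    # Dummy "model" that always answers 0, just to exercise the sketch.
    score = pass_rate_over_attempts(lambda q: "0")
    print(f"pass rate over 5 fresh instances: {score:.2f}")
```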
| Original language | English |
|---|---|
| Title of host publication | IEEE International Conference on Big Data Workshop (MMAI 2024) |
| Publisher | IEEE |
| Publication status | Accepted/In press - 16 Nov 2024 |
Keywords
- Artificial Intelligence
- Large Language Models
- Dynamic Benchmarking
- Performance Metrics
- Reliability
Projects
- 1 Finished
- EnnCore: End-to-End Conceptual Guarding of Neural Architectures
Cordeiro, L. (PI), Brown, G. (CoI), Freitas, A. (CoI), Luján, M. (CoI) & Mustafa, M. (CoI)
1/02/21 → 31/12/24
Project: Research