SCALE: Scaling up the Complexity for Advanced Language Model Evaluation

Rasiah, Vishvaksenan; Stern, Ronja; Matoshi, Veton; Stürmer, Matthias; Chalkidis, Ilias; Ho, Daniel E.; Niklaus, Joël (15 June 2023). SCALE: Scaling up the Complexity for Advanced Language Model Evaluation. Cornell University. DOI: 10.48550/arXiv.2306.09237

2306.09237.pdf (4MB), available under License Creative Commons: Attribution (CC-BY).

Recent strides in Large Language Models (LLMs) have saturated many NLP benchmarks (even professional domain-specific ones), emphasizing the need for new, more challenging benchmarks to properly assess LLM capabilities. In this paper, we introduce a novel NLP benchmark that poses challenges to current LLMs across four key dimensions: processing long documents (up to 50K tokens), utilizing domain-specific knowledge (embodied in legal texts), multilingual understanding (covering five languages), and multitasking (comprising legal document-to-document Information Retrieval, Court View Generation, Leading Decision Summarization, Citation Extraction, and eight challenging Text Classification tasks). Our benchmark comprises diverse legal NLP datasets from the Swiss legal system, allowing for a comprehensive study of the underlying non-English, inherently multilingual, federal legal system. Despite recent advances, efficiently processing long documents for intense review/analysis tasks remains an open challenge for language models. Likewise, comprehensive, domain-specific benchmarks requiring high expertise to develop are rare, as are multilingual benchmarks. This scarcity underscores our contribution's value, considering that most public models are trained predominantly on English corpora, while other languages remain understudied, particularly for practical domain-specific NLP tasks. Our benchmark allows for testing and advancing state-of-the-art LLMs. As part of our study, we evaluate several pre-trained multilingual language models on our benchmark to establish strong baselines as a point of reference. Despite the large size of our datasets (tens to hundreds of thousands of examples), existing publicly available models struggle with most tasks, even after in-domain pretraining. We publish all resources (benchmark suite, pre-trained models, code) under a fully permissive open CC BY-SA license.
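Since the abstract notes that the benchmark suite is published openly, a minimal sketch of how one might load a task and check document length with the Hugging Face datasets and transformers libraries follows. This assumes distribution via the Hugging Face Hub; the dataset identifier "rcds/scale-benchmark" and the "text" column name are hypothetical placeholders, not identifiers confirmed by the paper, so consult the released resources for the real ones.

    # Minimal sketch, not the authors' code. The dataset identifier and the
    # "text" column below are hypothetical placeholders (assumptions); the
    # published benchmark suite documents the actual names.
    from datasets import load_dataset
    from transformers import AutoTokenizer

    ds = load_dataset("rcds/scale-benchmark", split="train")  # placeholder ID
    tok = AutoTokenizer.from_pretrained("xlm-roberta-base")   # real multilingual model

    # Documents in the benchmark can reach ~50K tokens, far beyond the
    # 512-token context window of typical pre-trained encoders.
    example = ds[0]
    n_tokens = len(tok(example["text"], truncation=False)["input_ids"])
    print(f"tokens in first document: {n_tokens}")

Counting tokens with a multilingual tokenizer such as xlm-roberta-base makes the long-document dimension concrete: most off-the-shelf encoders would have to truncate or chunk such inputs.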

Item Type:

Working Paper

Division/Institute:

Business School > Institute for Public Sector Transformation > Data and Infrastructure
Business School

Name:

Rasiah, Vishvaksenan;
Stern, Ronja;
Matoshi, Veton (ORCID: 0009-0002-6613-5701);
Stürmer, Matthias (ORCID: 0000-0001-9038-4041);
Chalkidis, Ilias;
Ho, Daniel E. and
Niklaus, Joël (ORCID: 0000-0002-2779-1653)

Publisher:

Cornell University

Language:

English

Submitter:

Safiya Verbruggen

Date Deposited:

25 Aug 2023 11:45

Last Modified:

09 Oct 2023 21:46

Publisher DOI:

10.48550/arXiv.2306.09237

ARBOR DOI:

10.24451/arbor.19713

URI:

https://arbor.bfh.ch/id/eprint/19713
