Performance of Transformer Models in Readability Assessment

Title: Performance of Transformer Models in Readability Assessment
Author: Sachelarie, David (TU Delft Electrical Engineering, Mathematics and Computer Science)
Contributors: Pera, M.S. (mentor); Murukannaiah, P.K. (graduation committee)
Degree granting institution: Delft University of Technology
Programme: Computer Science and Engineering
Project: CSE3000 Research Project
Date: 2023-07-03

Abstract: Transformer models have proven to be effective tools for determining the readability of texts. Models based on pre-trained architectures such as BERT, RoBERTa, and BART, as well as ReadNet, a transformer model dedicated to readability assessment, have shown very promising results. However, there is a lack of research comprehensively analyzing these models' performance at a more granular level. Moreover, GPT-2, a member of the popular GPT transformer family, has never been adapted to and tested in readability assessment. The work presented in this paper fills these knowledge gaps by analyzing the behavior of the five aforementioned models and reflecting on their performance on separate classes of text difficulty. Seeing how they perform on texts of various complexity levels is vital to understanding their behavior and limitations, which in turn furthers knowledge of the situations in which each readability tool achieves optimal results.

Subject: readability assessment; transformer; readability
To reference this document use: http://resolver.tudelft.nl/uuid:3804c3b1-645e-42b8-83e6-120f135f5f93
Part of collection: Student theses
Document type: bachelor thesis
Rights: © 2023 David Sachelarie
Files: CSE3000_Final_Paper.pdf (PDF, 164.23 KB)