Problematic Dependency on Large Language Models vs. Attitudes Towards Them: The Moderating Role of Perceived Trustworthiness

Authors: Rizvi, S.W.F., Yankouskaya, A., Alshakhsi, S., Al-Thani, D., Ali, R.

Journal: 2025 3rd International Conference on Foundation and Large Language Models (FLLM)

Publication Date: 01/01/2025

Pages: 1018-1027

DOI: 10.1109/FLLM67465.2025.11391019

Abstract:

The rapid integration of Large Language Models (LLMs) into personal and professional life has led numerous users to depend on these systems to problematic levels, raising concerns about the factors that drive such dependency. This study is among the first to examine factors contributing to the development of dependency on LLMs. It examines the relationship between attitudes towards LLMs (acceptance and fear) and LLM dependency (instrumental and relational), and the moderating role of trust in LLMs in shaping this relationship, across two cultural contexts: Arab and British. Data for this study were collected from 526 participants in the UK and 250 participants in Arab countries. Canonical correlation analysis was employed to explore the multivariate association between attitudes and dependency, and multiple linear regression analysis was conducted to test the moderating effect of trust in LLMs. Our results indicated that in both cultural contexts, higher acceptance of LLMs was strongly linked to greater dependency, while fear played a minimal role. Additionally, trust amplified the positive link between acceptance and both LLM dependency types in the UK sample, whereas in the Arab sample trust strengthened the negative association between fear and both LLM dependency types. Findings from this study highlight the importance of culturally sensitive LLM adoption strategies and of measures to calibrate trust and attitudes in order to help alleviate overdependence on LLMs.
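The moderation analysis described in the abstract can be illustrated with a minimal sketch: a linear regression in which an attitude-by-trust interaction term tests whether trust moderates the attitude-dependency link. This is not the authors' code; the variable names, simulated effect sizes, and data are illustrative assumptions only.

```python
# Minimal sketch (not the study's actual analysis): testing a moderation
# effect via an interaction term in multiple linear regression.
# Variable names and the synthetic data below are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "acceptance": rng.normal(size=n),  # attitude towards LLMs (assumed scale)
    "trust": rng.normal(size=n),       # hypothesised moderator
})
# Simulate dependency so that trust amplifies the acceptance effect
# (coefficients chosen arbitrarily for the sketch).
df["dependency"] = (0.5 * df["acceptance"] + 0.2 * df["trust"]
                    + 0.3 * df["acceptance"] * df["trust"]
                    + rng.normal(scale=0.5, size=n))

# "acceptance * trust" expands to main effects plus the interaction;
# moderation is indicated by a significant interaction coefficient.
model = smf.ols("dependency ~ acceptance * trust", data=df).fit()
print(model.params["acceptance:trust"], model.pvalues["acceptance:trust"])
```

In a real analysis the predictors would typically be mean-centred before forming the interaction, and the model fit separately (or with a group term) for each cultural sample.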

Source: Scopus