
The Frontiers of Society, Science and Technology, 2026, 8(1); doi: 10.25236/FSST.2026.080111.

How AI Recommendation Algorithms Shape Users' Trust and Reliance: A Psychological Mechanism Study

Author(s)

Yitong Long

Corresponding Author:
Yitong Long
Affiliation(s)

GCTB-NSU Joint Institute of Technology at Guangzhou College of Technology and Business, Foshan, 528138, Guangdong, China

Abstract

Recommendation systems based on artificial intelligence have become pervasive in the digital ecosystem, yet the psychological processes underlying user trust and reliance remain insufficiently understood. This paper examines how algorithmic transparency and personalization shape the formation and maintenance of user trust and reliance through experimental manipulation and longitudinal observation. We assigned 450 participants to a 2×2 factorial design crossing transparency (high vs. low) with personalization (high vs. low). Structural equation modeling indicated that algorithmic transparency strongly increases user trust (β = 0.76, p < 0.001), which in turn predicts reliance behaviors (β = 0.85, p < 0.001) and explains 76% of the variance. Longitudinal tracking across 11 interaction sessions revealed divergent trust trajectories: trust grew by 36% under high-transparency systems but eroded by 14% under low-transparency conditions. The combination of high transparency and high personalization yielded the highest reliance score (M = 7.8), compared with the low-transparency, low-personalization condition (M = 4.2). Age-based comparison revealed considerable demographic differences, with younger groups adopting the algorithms at higher rates. The results advance theoretical understanding of human-AI interaction and offer practical design principles for building trustworthy recommendation systems that balance personalization with transparency and user autonomy.

Keywords

AI recommendation algorithms, user trust, algorithmic transparency, psychological mechanisms

Cite This Paper

Yitong Long. How AI Recommendation Algorithms Shape Users' Trust and Reliance: A Psychological Mechanism Study. The Frontiers of Society, Science and Technology (2026), Vol. 8, Issue 1: 71-81. https://doi.org/10.25236/FSST.2026.080111.

References

[1] LI Y, WU B, HUANG Y, LUAN S. Developing trustworthy artificial intelligence: insights from research on interpersonal, human-automation, and human-AI trust[J]. Frontiers in Psychology, 2024, 15: 1382693. DOI:10.3389/fpsyg.2024.1382693.

[2] ZHU Q, SUN R, LV D, et al. Research on cognitive neural mechanism of consumers convinced by intelligent recommendation platform eavesdropping[J]. Scientific Reports, 2024. DOI:10.1038/s41598-024-79281-7.

[3] KÜPER A, KRÄMER N. Psychological traits and appropriate reliance: factors shaping trust in AI[J]. International Journal of Human–Computer Interaction, 2025, 41(7): 4115–4131. DOI:10.1080/10447318.2024.2348216.

[4] BALASUBRAMANIAM N, KAUPPINEN M, RANNISTO A, et al. Transparency and explainability of AI systems: from ethical guidelines to requirements[J]. Information and Software Technology, 2023, 159: 107197. DOI:10.1016/j.infsof.2023.107197.

[5] HASSAN N, ABDELRAOUF M, EL-SHIHY D. The moderating role of personalized recommendations in the trust–satisfaction–loyalty relationship: an empirical study of AI-driven e-commerce[J]. Future Business Journal, 2025, 11(1): 66. DOI:10.1186/s43093-025-00476-z.

[6] JLASSI E, CHAABOUNI A, TRIKI M. Impact of recommendation systems on AI-enabled customer experience: mediator role of perceived usefulness and perceived trust[J/OL]. Journal of Telecommunications and the Digital Economy, 2025. [2025-12-03]. https://jtde.telsoc.org/index.php/jtde/article/view/1137

[7] GOVEA J, GUTIERREZ R, VILLEGAS-CH W. Transparency and precision in the age of AI: evaluation of explainability-enhanced recommendation systems[J]. Frontiers in Artificial Intelligence, 2024, 7: 1410790. DOI:10.3389/frai.2024.1410790.

[8] CETINKAYA N E, KRÄMER N. Between transparency and trust: identifying key factors in AI system perception[J]. Behaviour & Information Technology, 2025. DOI:10.1080/0144929X.2025.2533358.

[9] NAVEENKUMAR N, RALLAPALLI S, SASIKALA K, et al. Enhancing consumer behavior and experience through AI-driven insights optimization[M]//IGI Global. Hershey: IGI Global, 2025: 1–35. DOI:10.4018/979-8-3693-1918-5.ch001.

[10] LIANG A, BAI Y, WU M, et al. Research on personalized recommendation algorithms for different user behaviors[C]//Proceedings of the 4th International Conference on Big Data, Artificial Intelligence and Internet of Things. New York: ACM, 2025: 168–176. DOI:10.1145/3718751.3718779.

[11] DANG Q, LI G. Unveiling trust in AI: the interplay of antecedents, consequences, and cultural dynamics[J]. AI & Society, 2025. DOI:10.1007/s00146-025-02477-6.

[12] NGUYEN C T. Echo chambers and epistemic bubbles[J]. Episteme, 2020, 17(2): 141–161. DOI:10.1017/epi.2018.32.

[13] MICHIELS L, LEYSEN J, SMETS A, GOETHALS B. What are filter bubbles really? A review of the conceptual and empirical work[C]//Adjunct Proceedings of the 30th ACM Conference on User Modeling, Adaptation and Personalization. New York: ACM, 2022: 274–279. DOI:10.1145/3511047.3538028.

[14] SCHEDL M, LESOTA O, MASOUDIAN S. The importance of cognitive biases in the recommendation ecosystem: evidence of feature-positive effect, IKEA effect, and cultural homophily[C]//Proceedings of the 11th Workshop on Interfaces and Human Decision Making for Recommender Systems. CEUR-WS.org, 2024, 3815: 113–123.

[15] LIU L. Algorithmic bias in recommendation systems and its social impact on user behavior[J]. International Theory and Practice in Humanities and Social Sciences, 2024, 1(1): 290–303.