Shilesh Karunakaran¹ & Prof. (Dr.) Arpit Jain²
¹University of Cincinnati
Carl H. Lindner College of Business
Cincinnati, OH, USA. shilesh.k@gmail.com
²K L E F Deemed University
Vaddeswaram, Andhra Pradesh 522302, India
Abstract
The application of large language models (LLMs) in artificial intelligence (AI) assistants has attracted considerable interest because of their ability to generate near-human conversation and provide contextually relevant responses. Nonetheless, a persistent problem remains: preserving the personality and consistency of the AI system across interactions. Despite advances in conversational AI, most AI assistants cannot maintain a consistent persona, instead producing conversations that feel robotic or devoid of personality. This work aims to close the personality preservation gap in AI assistants by exploring methods for fine-tuning LLMs toward consistent personality preservation. Using data-driven methods, the research examines the role of user selection, context recall, and user-specific interaction history in shaping and preserving an AI assistant's personality. It also explores the application of emotional intelligence and adaptive learning algorithms to make the assistant's persona more natural, dynamic, and user-relevant. The work introduces a new paradigm for LLM fine-tuning that leverages these factors while retaining responsiveness, adaptability, and user appeal. The central contribution is to lay the foundation for AI assistants that deliver personalized experiences without compromising reliability or user enjoyment. The findings are expected to have far-reaching implications for customer service, personal assistance, and other domains where a consistent, engaging AI personality is key to successful user interaction.
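As a concrete illustration of the setup the abstract describes, the following is a minimal Python sketch of how persona-conditioned supervised fine-tuning data might be assembled. Everything here is an illustrative assumption rather than the paper's actual pipeline: the PersonaProfile fields, the build_example helper, the chat-message JSONL record format, and the fixed-window truncation of interaction history (a simple stand-in for context recall) are all ours.

```python
# Minimal sketch: building persona-conditioned fine-tuning records.
# All personas, field names, and the truncation policy below are
# illustrative assumptions, not the paper's actual method.
import json
from dataclasses import dataclass


@dataclass
class PersonaProfile:
    name: str
    traits: list[str]       # e.g., ["warm", "concise", "lightly humorous"]
    style_rules: list[str]  # phrasing constraints that keep the voice stable

    def to_system_prompt(self) -> str:
        # The persona is restated in every record's system message so the
        # model learns to tie its voice to an explicit, stable description.
        return (
            f"You are {self.name}. Personality traits: {', '.join(self.traits)}. "
            f"Style rules: {' '.join(self.style_rules)} "
            "Stay in character in every reply."
        )


def build_example(persona, history, user_turn, assistant_reply, max_history=6):
    """Assemble one chat-format training record.

    `history` is the user-specific interaction log; keeping only the most
    recent turns is a simple form of context recall, so the model learns
    to stay consistent with what it has already said to this user.
    """
    messages = [{"role": "system", "content": persona.to_system_prompt()}]
    messages.extend(history[-max_history:])
    messages.append({"role": "user", "content": user_turn})
    messages.append({"role": "assistant", "content": assistant_reply})
    return {"messages": messages}


if __name__ == "__main__":
    persona = PersonaProfile(
        name="Mira",
        traits=["warm", "concise", "lightly humorous"],
        style_rules=["Avoid jargon.", "Use the user's name when known."],
    )
    history = [
        {"role": "user", "content": "I'm Sam. Any tips for my first 5k?"},
        {"role": "assistant", "content": "Nice to meet you, Sam! Start slow and build up."},
    ]
    record = build_example(
        persona,
        history,
        user_turn="How should I pace race day?",
        assistant_reply="Sam, run the first mile easier than feels right, then settle in.",
    )
    # One JSON object per line, the common format for chat-style SFT corpora.
    with open("persona_sft.jsonl", "w") as f:
        f.write(json.dumps(record) + "\n")
```

The design choice worth noting is that the persona description travels with every training record rather than living only in inference-time prompts; under this assumption, consistency is learned into the model instead of depending entirely on the deployed system prompt.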
Keywords
Fine-tuning, large language models, AI assistants, personality preservation, conversational AI, user personalization, emotional intelligence, adaptive learning, consistency, interaction history.