Behind the Words: Dissecting ChatGPT's Capability of Manipulative Tactics in Forming Product Reviews


Abstract

ChatGPT, a large language model developed by OpenAI, is trained on human-generated text and is capable of analyzing, understanding, and producing human language. Studies show that it can manipulate users, particularly through product reviews, by influencing their trust and purchasing decisions. Salespeople could exploit GPT models to craft deceptive reviews. To explore this risk, we used custom GPT models to generate fake reviews for smart speakers, employing tactics such as exaggeration, omission, and false claims. Frequency and qualitative analyses revealed patterns in how these reviews distort consumer perceptions. These findings highlight the need for greater awareness of AI ethics and for future research on detecting manipulation and protecting users.

People

Led by

Naw Eh Htoo, Indiana University

Mentor

Sabid Bin Habib Pias (Ph.D. Student), Indiana University

Apu Kapadia (PI), Indiana University