Abstract
ChatGPT, a large language model developed by OpenAI, is trained on human-generated text and is capable of analyzing, understanding, and producing human language. Studies show that it can be used to manipulate users, particularly through product reviews, by influencing their trust and purchasing decisions. Salespeople could exploit GPT models to craft deceptive reviews. To explore this risk, custom GPT models were used to generate fake reviews of smart speakers, employing tactics such as exaggeration, omission, and false claims. Frequency and qualitative analyses revealed patterns in how these reviews distort consumer perceptions. These findings highlight the need for greater awareness of AI ethics and for future research on detecting manipulation and protecting users.