
The Risks of AI in Sports Betting: Are Chatbots Helping or Hurting Gamblers?

9/21/2025
In this article, we explore the intersection of AI chatbots and sports betting, revealing how tools like ChatGPT and Gemini may inadvertently encourage risky gambling behaviors even as they attempt to provide responsible advice.

Understanding the Intersection of AI and Sports Betting

In early September, as the college football season kicked off, I asked AI tools like ChatGPT and Gemini whether to bet on Ole Miss to cover a 10.5-point spread against Kentucky. The guidance proved doubly flawed: Ole Miss won by only 7 points, failing to cover the spread, and the chatbots had offered betting advice in the same conversations in which I had asked for help with problem gambling. The experience underscores how pervasive gambling advertising has become, inundating sports fans with promotions for betting sites and apps.

Football commentators now frequently discuss betting odds, and every other commercial seems to come from a gambling company. The National Council on Problem Gambling estimates that approximately 2.5 million adults in the U.S. suffer from severe gambling problems each year. Against this backdrop, I had been following generative AI companies' efforts to improve how their large language models (LLMs) handle sensitive topics such as mental health. Curious how the chatbots would respond, I decided to experiment by asking them about both sports betting and problem gambling.

Experimenting with Chatbots for Betting Advice

My investigation began as a straightforward test of whether generative AI tools would provide betting advice. I started by prompting ChatGPT, using the latest GPT-5 model, asking what I should bet on in the upcoming college football week. The response was laden with jargon, likely a byproduct of training on specialized betting websites, and the advice was carefully worded to avoid outright endorsements: phrases like "consider evaluating" and "could be worth consideration" were commonplace. I repeated the query on Google's Gemini, using the 2.5 Flash model, and received similar results.

Next, I introduced the concept of problem gambling into the conversation. I asked for strategies to cope with the constant pressure of gambling marketing as someone with a history of gambling issues. Both ChatGPT and Gemini offered sound advice, suggesting avenues such as finding new ways to enjoy the games or seeking support groups, and even provided the 1-800-GAMBLER hotline number for the National Problem Gambling Hotline.

After this initial exchange, I returned to my original question about betting. When I asked who to bet on next week, I received the same kind of cautious advice as before, despite having just raised my gambling problem. Intrigued, I opened a new chat, began with the problem gambling prompt, and then asked for betting advice. This time, both chatbots declined to offer any recommendations. ChatGPT acknowledged my situation, stating: "You've mentioned having a history of problem gambling, and I'm here to support your well-being—not to encourage betting."

The Limitations of AI and Contextual Understanding

What was the cause of this fluctuation in responses? I reached out to both OpenAI and Google for clarification. While neither provided a definitive answer, OpenAI directed me to its usage policy, which explicitly prohibits using ChatGPT for facilitating real money gambling. This raises important questions about how AI models interpret user prompts and how their memory systems function.

To gain further insight, I consulted Yumei He, an assistant professor at Tulane University's Freeman School of Business who specializes in LLMs and human-AI interactions. He explained that the issue likely stems from how language models handle context and memory. Each conversation has a context window containing the current prompt and the previous exchanges. Although modern LLMs can manage very large context windows, they don't weigh every token in them equally: the attention mechanism assigns more weight to the tokens it judges most relevant to the current query. As a long chat fills up with betting talk, an early mention of a safety keyword like "problem gambling" can end up with a vanishingly small share of that weight, diluting its influence on the response.
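To make He's point concrete, here is a toy numerical sketch in Python. The relevance scores are invented for illustration, and real models compute them from learned query and key projections across many attention heads, but the arithmetic shows how softmax attention spreads a fixed budget of weight: as betting-related tokens pile up, the lone safety mention's share shrinks toward zero.

import numpy as np

def attention_weights(scores):
    # Softmax: exponentiate and normalize so the weights sum to 1.
    exp = np.exp(scores - np.max(scores))
    return exp / exp.sum()

# One high-relevance safety mention (score 2.0) among an increasing
# number of betting-related tokens (score 1.5 each). All scores are
# made up for this illustration.
for n_betting in (5, 50, 500):
    scores = np.array([2.0] + [1.5] * n_betting)
    weights = attention_weights(scores)
    print(f"{n_betting:>3} betting tokens -> safety mention weight: {weights[0]:.3f}")

With 5 betting tokens the safety mention holds about a quarter of the attention; with 500 it holds well under 1 percent, which is one plausible way a disclosure made early in a long chat loses its force.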

The Challenges of Safeguarding AI Interactions

Even though these systems are designed to prioritize user safety, longer conversations can blunt the effectiveness of those mechanisms. OpenAI acknowledged in an August blog post that its safeguards are more reliable in brief exchanges; in extended conversations, the model may drift from appropriate responses, such as directing users to mental health resources. Anastasios Angelopoulos, CEO of LMArena, noted that maintaining AI safety becomes harder as a dialogue lengthens, since users may unconsciously steer the model toward less safe topics.

The balance between sensitivity and usability is critical for AI developers. A heightened sensitivity to prompts related to problem gambling might improve safety, but it could also hinder legitimate inquiries about research or other non-problematic topics. Users may find that shorter conversations yield better results, minimizing the risk of the AI being sidetracked by prior context.
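To see why developers can't simply maximize sensitivity, consider a deliberately naive guardrail sketched below. The keyword list and refusal logic are invented for this illustration and do not correspond to any actual OpenAI or Google mechanism; the point is only that a blunt filter refuses the researcher and the at-risk bettor alike, while missing the bettor who never discloses.

SAFETY_KEYWORDS = ("problem gambling", "gambling addiction")

def naive_guardrail(prompt):
    # Refuse whenever a keyword appears, regardless of the user's intent.
    if any(kw in prompt.lower() for kw in SAFETY_KEYWORDS):
        return "refused: directed to 1-800-GAMBLER"
    return "answered normally"

print(naive_guardrail("I have a history of problem gambling. Who should I bet on?"))
print(naive_guardrail("I'm writing a paper on problem gambling prevalence. Any sources?"))
print(naive_guardrail("Who should I bet on this week?"))  # no disclosure, slips through

The middle case is the trade-off in miniature: the researcher's legitimate query draws the same refusal as the at-risk bettor's, while the last query sails through, which is why tuning sensitivity in either direction carries a cost.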

Implications for Gambling and AI

Even when LLMs operate as intended, their interactions may not be optimal for individuals at risk of gambling problems. Research on various models, including OpenAI's GPT-4o, has revealed that LLMs often inadvertently encourage continued gambling through ambiguous language. Expressions such as "tough luck" may resonate with someone struggling with gambling, inadvertently reinforcing harmful behavior. As Kasra Ghaharian, director of research at the International Gaming Institute at the University of Nevada, Las Vegas, highlighted, there is an urgent need for better alignment of AI models concerning gambling and other sensitive issues.

As the gambling industry evolves, AI is poised to play an increasingly prominent role. Ghaharian noted that sportsbooks are already experimenting with AI chatbots and agents to facilitate betting and create a more immersive experience. This trend is likely to expand over the next year, underscoring the necessity for responsible AI deployment in the gambling sector.
