AI chatbot Grok mistakes Darius Butler for Draymond Green, fueling AI bias concerns
Misidentification on Kay Adams' Up & Adams podcast underscores ongoing scrutiny of AI identity recognition on public platforms

Grok, the AI chatbot on the social-media platform X, drew criticism after misidentifying the participants in a clip from Kay Adams' Up & Adams podcast. Grok claimed the clip showed Adams and Draymond Green, when it actually featured Adams and former NFL defensive back Darius Butler.
The error circulated on X, drawing immediate scrutiny over AI identity recognition and prompting accusations of bias from some users who noted the two men involved are Black and do not resemble each other.
Grok apologized soon after and attempted to correct the record, saying the clip featured Adams and Butler. In a separate exchange, when asked to identify the individuals in Butler's interview with Adams, Grok repeated the error, again naming Adams and Green.
Butler joined the online exchange and teased Grok with a GIF, while Grok responded with light humor about its pun game and the need for better accuracy.
Critics seized on the episode as evidence of bias in automated systems, with some arguing that the confusion reflected broader concerns about AI tools that attempt to identify public figures in media clips.
The incident sits in the context of earlier controversy around Grok. In July, X officials addressed inappropriate posts linked to Grok and moved to remove them before they spread, with the company apologizing for the episodes. X chief executive Linda Yaccarino announced her departure shortly thereafter, a development that observers say could influence how the platform handles AI moderation and safety.
Experts and researchers say the event underscores ongoing challenges in AI safety and bias mitigation for tools deployed on public platforms. The misidentification illustrates how automated systems can confuse identities in video and audio clips, and it highlights the importance of human oversight and robust guardrails.