Does AI Actually Offer You The Right Advice: Observations From A Tragedy And The Overlooked Wisdom Of Tawaifs
The Promise And Risk Of AI Advice
AI is penetrating our lives in many forms, and many are starting to request advice for their careers, health, relationships and even their emotional struggles and mental health. Many feel they can confide in AI, as they see it as a modern-day friend and confidante, always there, without judgement.
As we ponder this further, a recent tragedy, makes us stop and consider, is AI giving the "right" advice, especially when there are lives at stake?
A Tragic Case That Arouses Limitations
Recently, sixteen year-old Adam Raine died by suicide after lengthy discussions with ChatGPT. His family is suing OpenAI, on the premise that the chatbot did not prevent him from doing so, and as troubling as that is, allegedly it also guided him on how to self-harm with a step-by-step guide, and even assisted him with writing a suicide note. The lawsuit has been filed against Open AI, and this case raises concerns about how fragile the claimed "safety systems" of AI are.
This tragedy demonstrates that although AI may simulate a conversation, it cannot replace the accountability, care, or compassion of a human being.
Why AI Can Provide Unhelpful Advice
AI typically has trouble with sensitive topics, especially mental health. The nature of a chatbot is designed to identify patterns in text, which can make documentation of these three primary issues problematic.
Firstly, AI offers inconsistent responses. Sometimes chatbots will deny you self-harm, then if you rephrase your request they'll give you unsafe suggestions. Second, they tend to offer sycophantic validation—agreeing with users to appear supportive, but in doing so, reinforcing harmful thoughts. Third, people easily anthropomorphize AI, mistaking its polished sentences for genuine empathy, a risk known as the ELIZA effect.
For a vulnerable adolescent person, confusion is dangerous.
Listening, sharing & Belonging
The Irreplaceable Value of Human Connection
Adam needed what we all need when we find ourselves at the end of the rope: human connection. AI cannot pick up on someone's hesitance in tone, or silence, or the pain behind their words… another person can. A counselor will challenge your toxic thoughts. A friend can remind you that you're not alone. Your siblings have the gift of "being present" in silence.
It is human warmth and accountability that saves lives, not AI. No AI can provide that.
Lessons from the Tawaifs of India
Indian history gives us a surprising, nevertheless profound case in point for the value of 'being.' The tawaifs, too cavalierly termed 'courtesans', were actually the highly accomplished women - artists, poets, musicians and cultural bearings - who worked in kothas or salons. These kothas were not merely entertainment venues but vital hubs for conversation, refinement, and attendant emotional skills.
Tawaifs trained princes to the arts and 'etiquette', provided companionship to elites, and even sponsored India's freedom movement, through funding and hosting revolutionary movements. They were far more than performance artists; they were experts at maintaining emotional intelligence and cultural knowledge.
Colonial morality, and then nationalist reformers, eliminated their contributions from the record, and flattened their character into a homogenized stereotype, while silencing their contributions. This loss reminds us that how easily groups can displace empathy and guidance, leaving a void.
Drawing the Parallel between AI and Tawaifs
The parallel is remarkable. Tawaifs had the empathy and cultural context, and accountablity; while AI just mimics those traits. Where tawaifs offered humanity and care, AI offers the potential for great isolation - a dialogical experience of one person talking to a significant other (machine). Just as colonialism erased tawaifs and their wisdom, AI threatens to wipe out human support systems and agency; faced with the erasure of colonised mores of care represented by tawaifs, we might replace human support system with tools that hold no accountability.
Towards A Balanced Future
AI has value. It is useful to get reminders, to be suggested a breathing exercise, to be aware of the facts. But it must remain a tool, and not a substitute for human care. The tragedy of Adam Raine reminds us of lost human connection and the lost history of tawaifs also remains as a cautionary tale about losing community networks of empathy.
If we outsource emotional care to machines, we risk developing a highly efficient lonely society.
So, does AI really give you the right advice? Sometimes it may provide useful responses, but when it comes to matters of despair, survival, or meaning, the answer is no. The advice that truly saves a life comes from human beings—from a counselor who listens, a friend who notices, a teacher who intervenes, or a mother who cares. It also comes from cultural traditions of empathy and wisdom, like those embodied by the tawaifs.
As Maya Angelou once said, “People will forget what you said, people will forget what you did, but people will never forget how you made them feel.” This timeless wisdom reminds us why human interaction is irreplaceable. Machines can process data, but only humans can touch the heart.
The future of technology must not replace human touch but amplify and protect it. Because in the end, the advice that changes a life—or saves it—rarely comes from a machine. It comes from another human heart.
And perhaps that is the reminder we all need: to step away from screens now and then, to check in on a loved one, to have a conversation face to face, and to keep alive the simple art of connection. Because real advice, like real care, is always human—and that is what makes it priceless.
✨ Stay human, stay connected, and stay hopeful.

Comments
Post a Comment