OpenAI has restricted ChatGPT from offering specific medical, legal or financial advice, reclassifying the chatbot as an “educational tool” rather than a consultant.
The move, reported by Nexta, responds to mounting regulatory scrutiny and liability concerns over users treating AI outputs as professional guidance.
Under updated policies, the system will no longer name medications or dosages, draft lawsuit templates, or suggest investments. Requests framed as hypotheticals are blocked by strengthened safety filters. ChatGPT will still explain general principles, summarise concepts and direct users to qualified experts, but the policy prohibits reliance on its responses for high-stakes decisions in healthcare, law, finance, housing, education, migration or employment.
The changes aim to “enhance user safety and prevent potential harm”, OpenAI said. Unlike consultations with licensed professionals, conversations with ChatGPT carry no doctor-patient or attorney-client privilege and could be subpoenaed in court. The company has also bolstered safeguards for users in distress, addressing mental health crises including self-harm and suicide.
Public debate has grown over people seeking expert advice from chatbots, particularly in medicine. OpenAI stressed that ChatGPT cannot read body language, show empathy or guarantee safety. In emergencies, it cannot detect hazards, contact authorities or provide real-time updates. Outputs may contain errors, outdated information or misreported statistics.
Users are warned against sharing sensitive data such as medical records or financial details, as privacy is not assured. While helpful for brainstorming or defining terms like exchange-traded funds, the tool cannot account for personal circumstances, risk tolerance or local regulations.
OpenAI urged reliance on trained professionals for critical matters. In the US, those in crisis should call 988 rather than consult AI. The restrictions follow separate announcements on hiring in Bengaluru and automation initiatives in banking.
