By Julian Fraser
Artificial intelligence is transforming cancer care with unprecedented precision in diagnosis and treatment, yet its potential for errors demands unwavering human oversight. From legal missteps to life-threatening medical risks, the stakes underscore the need for clinicians to validate AI outputs, ensuring technology enhances rather than replaces the human touch in patient care.
The Promise and Peril of AI
Imagine a cancer patient reassured by an AI-powered scan that their tumor is benign—only to find months later it was a fatal misdiagnosis caused by an AI “hallucination” unverified by any doctor. This is not science fiction but a looming reality, mirrored by recent sanctions against Australian lawyers for submitting AI-generated falsehoods in court.
As artificial intelligence revolutionizes cancer care—diagnosing endometrial cancer with 99.26% accuracy and personalizing chemotherapy treatments—its risks demand equal attention. Without rigorous human oversight, AI’s potential to save lives could become a threat to patient safety. The solution is clear: clinicians must actively validate every AI output, ensuring technology enhances rather than replaces medical judgment.
Lessons from the Legal Battlefield
The dangers of unchecked AI became apparent in 2025 Australian Federal Court cases. In Valu v Minister for Immigration (No 2) [2025] FCA, a lawyer faced regulatory referral after submitting AI-generated documents containing fabricated case citations. Such legal mishaps highlight AI’s capacity for fabrication and underscore the dangers should similar errors occur in medicine.
Similarly, Melbourne firm Massar Briggs Law incurred significant costs when a junior solicitor’s AI-crafted submissions in a native title dispute included false references. Justice Bernard Murphy warned that AI’s “capacity to fabricate or hallucinate information” requires thorough human verification. These legal failures preview the risks now facing oncology.
Consider the stakes: an AI misreading a histopathology image or miscalculating chemotherapy doses could delay treatment or cost lives. Just as courts hold lawyers accountable for AI errors, clinicians must accept full responsibility for medical decisions informed by AI. The lesson is clear—blind trust in AI invites catastrophe.
The Irreplaceable Human Clinician
While AI processes vast data and identifies patterns, it cannot replicate the nuanced judgment essential to quality cancer care. Clinicians integrate complex patient histories—comorbidities, lifestyle, preferences—that AI cannot fully capture. A machine might flag a suspicious lesion, but only an experienced doctor can assess its significance within a patient’s complete health picture.
Critical elements remain beyond AI’s capabilities:
- Empathy and Communication: Explaining complex diagnoses with sensitivity, offering comfort, and guiding families through difficult decisions require human understanding.
- Ethical Reasoning: Balancing aggressive treatment with quality of life demands moral judgment, not algorithmic calculations.
- Clinical Accountability: Responsibility for patient care rests with clinicians, not code.
Pilot programs such as Cancer Australia’s and the AI-assisted mammography screening at Sydney’s Royal Prince Alfred Hospital (RPA) demonstrate effective models in which AI accelerates detection without replacing human verification. RPA radiologists using the system report 23% faster detection rates while maintaining 100% physician verification of all findings. Without this oversight, a single AI error could erode patient trust and trigger negligence claims.
Critics argue that mandatory human verification slows diagnosis and increases costs in resource-constrained healthcare systems. These constraints are real, but long-term patient safety and system trust must remain paramount. The goal isn’t to slow progress but to ensure it remains safe and trustworthy.
Safeguarding the Future of Cancer Care
Harnessing AI’s power safely requires coordinated action:
- For Clinicians: Develop AI literacy, understand system limitations, and demand transparency from developers. Validate every AI recommendation and document its use for accountability.
- For Healthcare Systems and Regulators: Enforce stringent guidelines, modeled on frameworks like the EU’s AI Act (effective 2026), that require clinical validation and auditing of AI applications. Cancer Australia’s cautious approach, which limits AI to supportive rather than autonomous roles, provides an excellent model.
- For Patients: Ask questions about AI’s role in your care and advocate for continued human oversight of medical decisions.
This shared responsibility ensures AI augments rather than undermines care quality. Neglecting oversight risks not only clinical errors but also public trust, which healthcare cannot afford to lose. By maintaining human centrality in AI implementation, we can realize AI’s potential for earlier detection and personalized treatment while protecting patients from harm.
A Human-Centered Future
AI promises a new era in cancer care, but its risks demand constant vigilance. Australia’s recent legal missteps—where lawyers suffered consequences from AI fabrications—serve as a stark warning: in oncology, even a single AI error could prove fatal.
Embedding systematic human oversight into AI deployment allows us to harness precision while preserving the judgment, empathy, and accountability that only human clinicians provide. This is about more than avoiding lawsuits: it is about safeguarding lives and maintaining patient trust. As AI reshapes healthcare, we must ensure it consistently enhances rather than replaces the healer’s touch.
The path forward is clear: embrace AI’s capabilities while keeping humans firmly at the helm. AI can revolutionize cancer care, but only with clinicians in control. Patients deserve nothing less than the best of both worlds: cutting-edge technology guided by human wisdom and compassion.
© 2025 South Burnett Advocate (kingaroy.org)