Are We Looking at an Unchecked AI Revolution in Healthcare in LMICs?

A quiet change is taking hold. You notice it when entering the Services Hospital in Lahore on any scorching June day, in the overburdened public hospitals of Johannesburg, or at the rural health outposts of Uttar Pradesh in India.

It is not a new kind of medicine or a new clinical practice, but the use of AI models capable of scanning X-rays, drafting histopathology reports on clinicians' instructions, or helping doctors write their notes. In 2007, at the largest private hospital in Islamabad, a doctor would dictate a patient's condition into a handheld recorder; at the end of the day the recorder went to a room where another person typed its contents up for the internal record. That workflow has been completely transformed, and the technology is now available even to hospitals on the peripheries, ensuring that patients' medical records stay up to date.

This helps us realise that Artificial Intelligence (AI) is not a luxury but an advancement that can bridge the gap between clinical demand and infrastructural scarcity. It is my belief that the success of medical AI will be determined not only by the sophistication of its code, but by the effectiveness of the legal frameworks we build to govern it.

The promise is undeniable: AI can expand clinical capacity where specialists do not exist and can improve resource allocation in a world where medical supply chains remain disrupted. Yet the rapid deployment of these technologies at times outpaces the evolution of the law.

LMICs, with their limited resources, face a dual crisis: a severe shortage of medical professionals and a fragmented health data ecosystem. If the space is not regulated by those who understand these challenges, the adoption of medical AI can introduce fatal risks. Agentic AI systems carry algorithmic bias rooted in the data they have been exposed to, and many systems can compromise patients' sensitive personal health data, which can be exploited by a pharmaceutical industry that thrives on its largesse and profits. We need to stop viewing law and regulation as restrictive barriers and instead make them a critical enabler of innovation. As a long-term voice for patient engagement and advocacy, I cannot stress this enough: the malpractice that can take place in the complete absence of regulation would have a direct impact on patients' quality of life.

Now is the right time for this conversation to begin in developing states like South Africa and Pakistan. The right to health is not merely a policy goal but a fundamental part of constitutional practice. South Africa's Section 27 and Pakistan's Article 9 (the right to life) make it obligatory for the state to ensure equitable, transparent and accessible healthcare.

AI-powered diagnostics for tuberculosis or breast cancer can materially improve outcomes, and their deployment should no longer be restricted to a small group of elite hospitals. Access to these algorithmic interventions falls within the protective ambit of fundamental rights.

Healthcare is an industry driven by data: national disease registries and outcomes measured in numbers. Data inequity is a problem that threatens the healthcare of more than two-thirds of the world's population. The AI tools being deployed in LMICs are trained on datasets from high-income countries while systematically excluding local genetic, environmental and cultural variances.

This is not just a technical flaw, but the result of non-neutral interventions in healthcare. If an algorithm trained in the West fails to recognise pathogenic patterns in a South American population, that failure encodes discrimination. It is important to legally compel algorithmic equity through national laws and international treaties. These low-hanging fruits of international law remain part of a discourse where change is still possible.

Current jurisprudence often relies on the human-in-the-loop fiction (much like the kill switch in autonomous weapons, triggered only once the loss has already become unbearable). The physician remains the sole liable party, which is inherently unjust. It is unwise, if not impractical, to expect a rural practitioner to override a statistically superior machine; such an expectation ignores the reality of automation bias.

There is a need, as is being discussed globally, to restructure liability toward enterprise and product liability models. LMICs should explore state-backed no-fault compensation funds akin to vaccine injury programmes. This would ensure swift justice for injured patients while shielding clinicians from innovation-averse litigation.

It is important to reiterate the need to develop an infrastructure of trust. This is not a call for overly restrictive data localisation laws, which would prevent LMIC populations from benefiting from global innovation. We should instead incentivise privacy-preserving architectures.

The integration of AI into LMIC healthcare is a generational opportunity. To move from theoretical promise to sustainable reality, we must architect a legal framework that is as innovative as the technology it seeks to govern. Only then can we ensure that AI serves as a powerful instrument of social equity rather than a driver of deeper disparity.

Hassan Raza is Director of Esquare Institute and Senior Associate at Esquare Legal, advising on digital health regulation, cross-border compliance, and public sector technology procurement. Contact: admin@esquarelegal.com
