WHO issues first global report on Artificial Intelligence (AI) in health and six guiding principles for its design and use

Artificial Intelligence (AI) holds great promise for improving the delivery of healthcare and medicine worldwide, but only if ethics and human rights are put at the heart of its design, deployment, and use, according to new WHO guidance published today.

The report, Ethics and governance of artificial intelligence for health, is the result of two years of consultations held by a panel of international experts appointed by WHO.

“Like all new technology, artificial intelligence holds enormous potential for improving the health of millions of people around the world, but like all technology it can also be misused and cause harm,” said Dr Tedros Adhanom Ghebreyesus, WHO Director-General. “This important new report provides a valuable guide for countries on how to maximize the benefits of AI, while minimizing its risks and avoiding its pitfalls.”

Artificial intelligence can be used, and in some wealthy countries is already being used, to improve the speed and accuracy of diagnosis and screening for diseases; to assist with clinical care; to strengthen health research and drug development; and to support diverse public health interventions, such as disease surveillance, outbreak response, and health systems management.

AI could also empower patients to take greater control of their own health care and better understand their evolving needs. It could also enable resource-poor countries and rural communities, where patients often have restricted access to health-care workers or medical professionals, to bridge gaps in access to health services.

However, WHO’s new report cautions against overestimating the benefits of AI for health, especially when this occurs at the expense of core investments and strategies required to achieve universal health coverage.

It also points out that opportunities are linked to challenges and risks, including unethical collection and use of health data; biases encoded in algorithms; and risks of AI to patient safety, cybersecurity, and the environment.

For example, while private and public sector investment in the development and deployment of AI is critical, the unregulated use of AI could subordinate the rights and interests of patients and communities to the powerful commercial interests of technology companies or the interests of governments in surveillance and social control.

The report also emphasizes that systems trained primarily on data collected from individuals in high-income countries may not perform well for individuals in low- and middle-income settings.

AI systems should therefore be carefully designed to reflect the diversity of socio-economic and health-care settings. They should be accompanied by training in digital skills, community engagement and awareness-raising, especially for millions of healthcare workers who will require digital literacy or retraining if their roles and functions are automated, and who must contend with machines that could challenge the decision-making and autonomy of providers and patients.

Ultimately, guided by existing laws and human rights obligations, and new laws and policies that enshrine ethical principles, governments, providers, and designers must work together to address ethics and human rights concerns at every stage of an AI technology’s design, development, and deployment.

Six principles to ensure AI works for the public interest in all countries

To limit the risks and maximize the opportunities intrinsic to the use of AI for health, WHO provides the following principles as the basis for AI regulation and governance:

Protecting human autonomy: In the context of health care, this means that humans should remain in control of health-care systems and medical decisions; privacy and confidentiality should be protected; and patients must give valid informed consent through appropriate legal frameworks for data protection.

Promoting human well-being, human safety, and the public interest: The designers of AI technologies should satisfy regulatory requirements for safety, accuracy and efficacy for well-defined use cases or indications. Measures of quality control in practice and quality improvement in the use of AI must be available.

Ensuring transparency, explainability and intelligibility: Transparency requires that sufficient information be published or documented before the design or deployment of an AI technology. Such information must be easily accessible and facilitate meaningful public consultation and debate on how the technology is designed and how it should or should not be used.

Fostering responsibility and accountability: Although AI technologies perform specific tasks, it is the responsibility of stakeholders to ensure that they are used under appropriate conditions and by appropriately trained people. Effective mechanisms should be available for questioning and redress for individuals and groups that are adversely affected by decisions based on algorithms.

Ensuring inclusiveness and equity: Inclusiveness requires that AI for health be designed to encourage the widest possible equitable use and access, irrespective of age, sex, gender, income, race, ethnicity, sexual orientation, ability or other characteristics protected under human rights codes.

Promoting AI that is responsive and sustainable: Designers, developers and users should continuously and transparently assess AI applications during actual use to determine whether AI responds adequately and appropriately to expectations and requirements. AI systems should also be designed to minimize their environmental consequences and increase energy efficiency. Governments and companies should address anticipated disruptions in the workplace, including training for health-care workers to adapt to the use of AI systems, and potential job losses due to the use of automated systems.

These principles will guide future WHO work to support efforts to ensure that the full potential of AI for health care and public health is used for the benefit of all.

Media Contacts

Tarik Jasarevic

Spokesperson / Media Relations
WHO

Telephone: +41 22 791 50 99

Mobile: +41 79 367 62 14

Email: jasarevict@who.int

Related

Ethics and governance of artificial intelligence for health

