The issue of fairness in automated decision-making algorithms has been actively discussed since the mid-2010s. Cases of biased algorithms causing various ethical problems have been reported [1, 2], and such problems could promote and further perpetuate inequality and discrimination in society [3]. In their review article, Ueda et al. [4] focused on artificial intelligence (AI) in healthcare, particularly in radiology, and defined fairness as the “development and deployment of unbiased AI that provides accurate diagnoses and treatment for all patients regardless of their social status or ethnic differences.” They proposed a list of recommendations for achieving fairness in healthcare AI, called the Fairness of Artificial Intelligence Recommendations (FAIR) principles.

The authors categorized the biases in healthcare AI at different levels, from development to clinical use, into four types: data biases, which are introduced when collecting and organizing the AI training data; algorithmic biases, which are introduced during the development and implementation of AI; clinician interaction-related biases, which are introduced when physicians use AI for patient care; and patient interaction-related biases, which arise in the interaction between AI and patients receiving AI-driven care. Each of these biases causes problems that hamper patients' equal access to high-quality medical care. Moreover, these problems can be complicated and difficult to address because of the inherent characteristics of AI: (i) healthcare AI is trained on large amounts of personal information, (ii) AI decision-making processes are often opaque and unexplainable to humans, and (iii) unfamiliarity with AI among physicians and patients could lead to overreliance on, or unreasonable refusal of, AI.

To mitigate the abovementioned biases, the authors indicate countermeasures at the corresponding levels, such as collecting diverse and representative training data, regularly auditing and validating AI algorithms, and educating physicians and patients. In addition, maintaining the privacy and security of the training data, clarifying stakeholder roles and responsibilities, ensuring the transparency of AI algorithms, and providing explanatory visualizations of AI decision-making processes are presented as important factors for ensuring fairness in healthcare AI.
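To make the idea of regularly auditing an AI algorithm more concrete, the following is a minimal sketch of a subgroup performance audit for a binary diagnostic classifier. It is not taken from the original article: the data, the group labels, and the disparity threshold are all hypothetical, and a real audit would use validated fairness metrics and clinical datasets.

```python
# Minimal sketch of a subgroup performance audit for a binary diagnostic AI model.
# All records, group names, and the disparity threshold are hypothetical and only
# illustrate the kind of regular audit that fairness recommendations call for.
from collections import defaultdict

def subgroup_audit(records, disparity_threshold=0.05):
    """records: iterable of (group, true_label, predicted_label) tuples."""
    counts = defaultdict(lambda: {"tp": 0, "fn": 0, "tn": 0, "fp": 0})
    for group, y_true, y_pred in records:
        c = counts[group]
        if y_true == 1:
            c["tp" if y_pred == 1 else "fn"] += 1
        else:
            c["tn" if y_pred == 0 else "fp"] += 1

    # Per-group sensitivity (true-positive rate); NaN if a group has no positives.
    sensitivity = {}
    for group, c in counts.items():
        positives = c["tp"] + c["fn"]
        sensitivity[group] = c["tp"] / positives if positives else float("nan")

    # Flag the model for review if the sensitivity gap between groups is too large.
    gap = max(sensitivity.values()) - min(sensitivity.values())
    return sensitivity, gap, gap > disparity_threshold

# Hypothetical audit data: (demographic group, ground truth, model prediction).
records = [
    ("group_a", 1, 1), ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 0),
    ("group_b", 1, 0), ("group_b", 1, 1), ("group_b", 0, 0), ("group_b", 1, 0),
]
per_group, gap, flagged = subgroup_audit(records)
print(per_group, f"sensitivity gap = {gap:.2f}",
      "flag for review" if flagged else "ok")
```

Running such a check on recent deployment data at regular intervals is one simple way to detect the data- and algorithm-level disparities described above before they affect patient care.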

To summarize the strategies for ensuring fairness in healthcare AI, the authors presented the FAIR principles, comprising 10 recommendations. These recommendations are intended to apply not only to AI in radiology but also to healthcare AI systems in general; thus, they should be shared across the various fields of healthcare.

We believe that AI-based systems will provide powerful assistance in clinical radiology. Furthermore, in the near future, AI may efficiently and reliably reshape healthcare and offer substantial benefits to patients. Achieving fairness in healthcare AI would lead to improved healthcare.