Clinical, Cosmetic and Investigational Dermatology
. 2025 Sep 4;18:2173–2182. doi: 10.2147/CCID.S543045

Beyond the Algorithm: A Perspective on Tackling Bias and Cultural Sensitivity in AI-Guided Aesthetic Standards for Cosmetic Surgery in the Middle East and North Africa (MENA) Region

Abdulrahman Makhseed 1, Husain Arian 2, Ali Shuaib 3
PMCID: PMC12416507  PMID: 40927497

Abstract

Artificial intelligence (AI) is increasingly reshaping cosmetic surgery by enhancing surgical planning, predicting outcomes, and enabling objective aesthetic assessment. Through narrative synthesis of existing literature and case studies, this perspective paper explores the issue of algorithmic bias in AI-powered aesthetic technologies and presents a framework for culturally sensitive application within cosmetic surgery practices in the Middle East and North Africa (MENA) region. Existing AI systems are predominantly trained on datasets that underrepresent MENA phenotypes, resulting in aesthetic recommendations that disproportionately reflect Western beauty ideals. The MENA region, however, encompasses a broad spectrum of beauty standards that merge traditional cultural aesthetics with modern global trends, posing unique challenges for AI integration. To ensure ethical and clinically relevant deployment, AI systems must undergo fundamental changes in algorithm design, including the incorporation of culturally diverse datasets with adequate MENA representation, implementation of cultural competency principles, and active collaboration with regional healthcare professionals. The framework outlines concrete criteria for evaluating cultural representativeness in AI training data and outcome assessments, supporting future empirical validation. Developing culturally aware AI tools is both a moral obligation and a clinical priority; the framework presented here offers a practical pathway for ensuring AI serves to support, rather than homogenize, the region’s diverse aesthetic traditions.

Keywords: artificial intelligence, algorithmic bias, cosmetic surgery, cultural competency, MENA region, aesthetic medicine, facial analysis, health equity, medical ethics

Introduction

Artificial intelligence (AI) is increasingly being integrated into cosmetic medicine, offering the promise of objective analysis in aesthetic assessment and surgical planning. In the Middle East and North Africa (MENA) region, where the demand for aesthetic procedures is experiencing unprecedented growth, this technological shift carries particularly significant implications. The MENA medical aesthetics market was valued at approximately $594 million in 2024 and is projected to reach $1.39 billion by 2032, underscoring a rapidly expanding interest in cosmetic interventions.1 Notably, Iran performs seven times more rhinoplasty procedures per capita than the United States, reinforcing its global reputation as the “nose job capital of the world”.2

As AI-driven systems begin to influence beauty standards and inform surgical recommendations, there is growing concern that these technologies may perpetuate Western-centric ideals that do not align with the cultural diversity of the MENA region. This article explores the emerging evidence of algorithmic bias in aesthetic AI and highlights the critical need for culturally sensitive design. Without intentional mitigation, such tools risk marginalizing local beauty ideals rather than enhancing individualized care. Ensuring that AI serves to support, not homogenize, the region’s rich aesthetic traditions is both an ethical and clinical imperative.

This perspective article synthesizes existing evidence to: (1) examine algorithmic bias in aesthetic AI systems affecting MENA populations, (2) analyze the cultural implications of Western-centric AI applications, and (3) propose a practical framework for culturally sensitive AI implementation in aesthetic medicine. Our analysis provides actionable recommendations for technology developers, healthcare providers, and policymakers to ensure AI tools respect and enhance the region’s diverse aesthetic traditions. The proposed framework focuses on redesigning algorithms to incorporate regionally representative data, establishing cultural competency standards for AI evaluation, and fostering close collaboration with MENA healthcare professionals to ensure that AI-driven recommendations reflect the region’s diverse aesthetic values.

The Rise of AI in Aesthetic Medicine

Artificial intelligence (AI) is emerging as a powerful tool for standardizing and enhancing aesthetic evaluations in cosmetic medicine. Traditionally, plastic surgeons have relied on subjective clinical judgment and culturally informed training to assess facial features. AI now offers the potential to reduce inter-practitioner variability by providing data-driven, algorithmically derived assessments. A notable advancement in this domain is the development of the Facial Aesthetic Index (FAI), an algorithmic tool designed to analyze facial attributes and propose cosmetic enhancements based on quantified metrics.3 These systems utilize large-scale image datasets to assess characteristics such as facial symmetry, proportions, skin texture, and signs of aging, ultimately generating “objective” beauty scores and individualized treatment recommendations.

Initial perspectives within the aesthetic medicine community have been largely optimistic. In a 2024 global consensus panel of aesthetic experts, 100% of participants agreed that AI could enhance standardization in patient evaluations, assist in surgical planning, and mitigate risks such as overcorrection.4 This has driven efforts to establish validated indices that quantitatively assess facial attractiveness and aging, serving as adjuncts to human clinical expertise.

The scope of AI applications in aesthetic and reconstructive surgery has expanded rapidly, encompassing surgical planning, outcome prediction, and patient assessment across a range of subspecialties.5–9 These advancements are poised to fundamentally reshape clinical approaches, offering new levels of precision and consistency in cosmetic care.

Despite the promise of AI, caution has been strongly advised. Aesthetic perception is not solely grounded in anatomical proportions; it is deeply shaped by cultural norms, individual identity, and sociopolitical context.10 Features flagged as “flaws” by an algorithm may, in fact, represent valued ethnic or cultural traits. Accordingly, there is growing consensus that AI systems used in aesthetic evaluation must integrate considerations of patients’ ancestry and cultural backgrounds. The aforementioned expert panel explicitly recommended that “patients’ ancestral roots should be included in the AI system” to ensure culturally appropriate evaluations.4

This call for culturally responsive AI arises amid increasing awareness of algorithmic bias, systematic errors that disproportionately affect certain demographic groups. In the context of cosmetic surgery, such biases can perpetuate narrow beauty ideals and misrepresent diverse populations. As AI continues to gain traction in aesthetic medicine, there is an urgent need to critically examine how these biases manifest, particularly in culturally heterogeneous regions such as the MENA.

Algorithmic Bias in Beauty AI Systems

Algorithmic bias arises when AI models systematically disadvantage or privilege specific demographic groups due to skewed training data or flawed model design. This phenomenon is well documented in facial-analysis research. In a seminal study, Buolamwini and Gebru reported that commercial gender-classification algorithms misclassified darker-skinned women far more frequently than lighter-skinned men, a disparity directly attributable to imbalanced training datasets.11 Early face-image corpora frequently contained more than 80% light-skinned individuals of European ancestry, leaving algorithms ill equipped to generalize to other phenotypes.

Subsequent efforts have aimed to correct these imbalances. The creators of the FairFace dataset, for example, constructed a race-balanced image repository with approximately equal representation across seven racial groups, explicitly including MENA faces.12 They observed that many earlier datasets lacked a “Middle Eastern” category altogether, effectively rendering these populations invisible to computer-vision models.

When MENA phenotypes are under-represented in training data, AI systems default to the majority patterns they have learned, typically Eurocentric facial proportions and skin tones, thereby encoding Western beauty ideals as normative. As a result, faces exhibiting MENA features may be erroneously classified as outliers or implicitly rated as less attractive. In clinical practice, this translates to AI systems that might flag the naturally broader nasal bridges common among Arab populations as requiring correction, or classify the fuller lips characteristic of many North African ethnicities as disproportionate. Such biased assessments could lead surgeons to recommend unnecessary procedures or aggressive modifications that strip away distinctive ethnic characteristics that patients and their communities actually value. In cosmetic practice, such biased outputs risk reinforcing narrow aesthetic standards and marginalizing the region’s diverse notions of beauty.

This algorithmic bias becomes particularly concerning when combined with the psychological phenomenon of visual adaptation. Research by Goldie et al demonstrated that exposure to exaggerated facial features rapidly shifts individuals’ perceptions of attractiveness toward extreme aesthetics.13 The study revealed that participants’ beauty standards could be recalibrated within minutes of viewing modified images, highlighting the malleability of aesthetic judgment. This psychological mechanism poses significant risks in the context of AI-driven aesthetic assessments: when practitioners and patients are repeatedly exposed to AI recommendations based on exaggerated or culturally unrepresentative training data, their own aesthetic standards may unconsciously shift away from culturally appropriate ideals. The combination of biased algorithms and rapid visual adaptation creates a compounding effect that can accelerate cultural homogenization and further distance individuals from their authentic aesthetic heritage.

The Beauty.AI Controversy

A prominent example of algorithmic bias in aesthetic evaluation is the Beauty.AI contest, promoted as the “first international beauty pageant judged by algorithms”. When the results were released in 2016, they sparked widespread criticism: among the 44 faces selected as the “most attractive”, 36 were white, 6 were East Asian, and only one had dark skin.14 This outcome was particularly striking given that the contest featured thousands of contestants from diverse ethnic backgrounds worldwide. The contest’s chief scientist later acknowledged that the AI’s training dataset lacked sufficient representation of non-white faces, causing the algorithm to effectively equate beauty with lighter skin tones.

This incident exemplifies the dangers of machine bias: an ostensibly objective AI system replicated deeply subjective and discriminatory beauty standards. If such an algorithm had been implemented within cosmetic surgery practices in the MENA region, it could have systematically rated patients with darker skin tones, broader noses, or other characteristic MENA features as less attractive, potentially discouraging them from procedures that enhance their natural ethnic features. Instead, such biased systems might push MENA patients toward rhinoplasties that create smaller, more European-style noses, or skin treatments aimed at lightening complexion, effectively undermining the cultural pride in traditional Middle Eastern and North African aesthetic characteristics.

Similar biases have been consistently documented in multiple studies, demonstrating that artificial intelligence systems tend to perpetuate and even amplify prevailing beauty stereotypes.15,16 These findings highlight the widespread nature of algorithmic bias in aesthetic contexts and underscore the critical need for more diverse and inclusive training methodologies.

Bias in Generative AI

Contemporary generative AI models also exhibit concerning biases that may impact aesthetic medicine. For example, OpenAI’s image generation model, DALL·E 2, has been shown to produce skewed outputs when given neutral prompts involving human subjects. Internal testing by OpenAI revealed that the model’s “default behavior” tends to overrepresent white-passing individuals and reflects predominantly Western aesthetic ideals unless specifically directed otherwise.17

A recent study by Lim et al evaluated the use of DALL·E in plastic surgery visualization and observed that the model consistently generated images of young, fair-skinned women, while largely omitting representation of darker skin tones, older patients, and individuals with higher body weight.18 For MENA cosmetic surgery practices, this bias presents particular challenges in patient consultation and outcome visualization. When MENA patients use AI-powered visualization tools to preview potential surgical results, they may be shown outcomes that reflect Western rather than culturally appropriate aesthetic goals. For instance, a Palestinian woman seeking rhinoplasty might be shown visualization results featuring narrow, upturned noses typical of European aesthetics rather than refined versions of the strong, dignified nasal profiles valued in her cultural context. This misrepresentation can lead to unrealistic expectations and procedures that ultimately diminish rather than enhance the patient’s ethnic identity. These biases mirror prevailing stereotypes within the beauty industry and, if left unaddressed, risk perpetuating a restrictive aesthetic standard that marginalizes significant portions of the population.

Why These Biases Matter in Cosmetic Surgery

Unlike biases in web searches or product recommendations, biased aesthetic algorithms have a direct impact on individuals’ bodies and identities. When AI-driven “beauty score” systems or cosmetic outcome simulators are primarily calibrated on Western norms, patients from the MENA region may receive misleading or culturally incongruent feedback, for example, suggestions that naturally broader noses or darker skin tones are less attractive according to the algorithm.

Such feedback risks encouraging patients to pursue unnecessary or excessively aggressive procedures aimed at “correcting” features that are, in fact, normal and culturally valued within their communities. Moreover, it can undermine self-esteem by implicitly framing certain ethnic characteristics as undesirable. This phenomenon represents a subtle yet pervasive form of digital neo-colonialism (the imposition of foreign cultural values and standards through technological systems, perpetuating power imbalances in the digital age), where AI functions as a unidirectional force imposing foreign beauty ideals onto local cosmetic practices.

Cultural Differences in Aesthetic Ideals in the MENA Region

Extensive research has demonstrated that perceptions of beauty are deeply influenced by cultural context, challenging the validity of any singular, universal aesthetic standard. A seminal cross-cultural study by Broer et al surveyed both plastic surgeons and laypeople from over 50 countries, including populations from the Middle East, regarding ideal facial proportions.19 The study concluded that perceptions of facial attractiveness are strongly shaped by cultural and ethnic backgrounds and cannot be accurately defined by fixed numeric ratios or so-called “divine proportions”.

Regional Beauty Preferences

Communities within the Middle East and North Africa (MENA) region possess distinct aesthetic ideals shaped by a rich tapestry of cultural influences, including Persian, Arab, Turkish, and Amazigh traditions. These ideals often emphasize harmony with one’s natural features rather than adherence to Western standards such as facial symmetry or youthfulness. For instance, patients of Middle Eastern descent may seek rhinoplasty to reduce a dorsal hump while preserving a strong and dignified nasal profile, an aesthetic preference that contrasts with the smaller, upturned nose commonly portrayed in Western media.

A recent study by Arian et al offered comprehensive evidence of such cultural variation in aesthetic preferences across facial, breast, and gluteal features.20 The authors reported that commonly cited facial “golden ratios” are inadequate for representing attractiveness standards among diverse ethnic groups. The majority of studies reviewed emphasized that aesthetic procedures should aim not to Westernize, but rather to enhance and preserve ethnic identity.

Additional research has further documented specific preferences among Middle Eastern patients, highlighting aesthetic goals that differ markedly from Western norms.21,22 These findings reinforce the critical role of cultural context in shaping perceptions of beauty and desired outcomes in cosmetic interventions.

A growing body of literature has also challenged the universality of classical beauty standards, demonstrating that mathematical models such as the golden ratio do not reliably capture attractiveness across different populations.23–28 While appealing in their seeming objectivity, such formulas fail to reflect the complex cultural and individual preferences that define beauty across diverse societies.

Cultural Context in Practice

Patterns in cosmetic procedures across the MENA region further reflect culturally specific aesthetic priorities. Many surgeons report that patients frequently seek rhinoplasty outcomes that refine the nose while preserving ethnic identity, consciously avoiding an overly “Westernized” appearance that might seem incongruous with their facial features. There is often a deep sense of pride in traditional features, a reality that AI systems used in aesthetic medicine must be designed to respect.

Cultural attitudes toward cosmetic surgery also vary considerably across populations, with regional differences in both acceptance rates and procedural preferences.29,30 For clinicians working in multicultural settings, and for developers of AI tools intended for global deployment, an understanding of these cultural nuances is essential.

One-size-fits-all beauty metrics are particularly problematic in regions where religious and cultural values influence aesthetic ideals. In many MENA societies, modesty and proportionality are emphasized over overt expressions of sexuality, a contrast to Western beauty ideals often portrayed in advertising and media. An AI-based assessment tool that, for example, recommends a higher nasal bridge and thinner lips for an Egyptian woman, simply because those features align with a European reference model, risks generating results that the patient may perceive as unnatural or culturally misaligned.

Technical Approaches to Addressing AI Bias

To ensure that artificial intelligence enhances, rather than undermines, patient care in MENA cosmetic surgery, targeted technical interventions must be adopted:

Bias Detection and Measurement

Accurately evaluating fairness in AI systems used within aesthetic medicine requires the application of well-established bias detection metrics. Three key measures are particularly relevant:

  1. Demographic Parity: This metric assesses whether AI systems generate similar distributions of recommendations across different ethnic groups. For example, if 30% of Caucasian patients are advised to consider a specific cosmetic procedure, comparable rates should be observed among MENA patients with similar anatomical features and clinical indications.

  2. Performance Equity: This involves evaluating whether the AI model maintains consistent accuracy across diverse demographic groups. Target benchmarks may include a feature recognition accuracy of greater than 95% for culturally significant facial traits, and cultural concordance (the degree to which AI recommendations align with culturally appropriate aesthetic values and preferences) scores exceeding 80% when reviewed by regional expert panels.

  3. Individual Fairness (the principle that similar individuals should receive similar treatment from AI systems, regardless of protected characteristics): This metric emphasizes similarity-based fairness, ensuring that patients with comparable facial characteristics receive similar recommendations regardless of their ethnic or racial background.
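As an illustration, the first two metrics can be computed directly from logged AI outputs. The sketch below uses entirely hypothetical audit data; the group labels, rates, and function names are illustrative assumptions, not drawn from any deployed system:

```python
from collections import defaultdict

def demographic_parity_gap(records):
    """Largest difference in procedure-recommendation rates between groups.

    records: list of (group, recommended: bool) tuples from an audit log.
    Returns (gap, per-group rates).
    """
    counts = defaultdict(lambda: [0, 0])  # group -> [recommended, total]
    for group, recommended in records:
        counts[group][0] += int(recommended)
        counts[group][1] += 1
    rates = {g: rec / tot for g, (rec, tot) in counts.items()}
    return max(rates.values()) - min(rates.values()), rates

def performance_equity(records):
    """Per-group accuracy of AI feature recognition.

    records: list of (group, correct: bool) tuples.
    """
    counts = defaultdict(lambda: [0, 0])  # group -> [correct, total]
    for group, correct in records:
        counts[group][0] += int(correct)
        counts[group][1] += 1
    return {g: ok / tot for g, (ok, tot) in counts.items()}

# Hypothetical audit: 30% vs 12% recommendation rates -> parity gap of 0.18
recs = [("caucasian", True)] * 30 + [("caucasian", False)] * 70 \
     + [("mena", True)] * 12 + [("mena", False)] * 88
gap, rates = demographic_parity_gap(recs)
print(f"parity gap: {gap:.2f}")

# Hypothetical recognition accuracy: one group meets the >95% target, one does not
acc = performance_equity([("caucasian", True)] * 96 + [("caucasian", False)] * 4
                         + [("mena", True)] * 88 + [("mena", False)] * 12)
print(acc)
```

In practice, such checks would be run on stratified audit samples with confidence intervals, since small subgroup counts make raw rate differences unstable.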

The broader healthcare AI community has already developed robust frameworks for identifying and mitigating algorithmic bias, with validated methodologies aimed at promoting fairness across population subgroups.31–33 These established practices offer a valuable foundation for the development of bias detection systems specifically tailored to applications in aesthetic medicine.

Implementation Strategies

Building Culturally Competent AI Systems

Addressing algorithmic bias in aesthetic AI systems requires the integration of advanced machine learning techniques alongside ongoing performance oversight. Several implementation strategies have demonstrated promise in balancing model accuracy with fairness. However, these approaches come with significant implementation challenges and costs that must be carefully considered.

One approach is multi-objective optimization, which can be employed during model training to simultaneously optimize for predictive accuracy and fairness metrics. This ensures that improving one performance dimension does not come at the expense of equitable outcomes. While promising, this approach requires substantial computational resources and expert technical personnel, potentially increasing development costs significantly compared to traditional AI development.

Another effective method is adversarial training, whereby models are trained not only to make accurate predictions, but also to minimize their ability to infer sensitive attributes such as patient ethnicity. This reduces the likelihood of demographic bias being encoded in the decision-making process. The challenge lies in maintaining system coherence and avoiding contradictory recommendations, which requires careful coordination and ongoing monitoring.

Additionally, ensemble methods offer flexibility by combining outputs from multiple models, each trained on different demographic subsets. These models can be weighted based on patient ethnicity or regional aesthetic preferences, thereby improving personalization while reducing cultural bias.

Finally, to maintain fairness over time, continuous monitoring systems should be implemented to track AI recommendation patterns across demographic groups. These systems can issue automated alerts when fairness thresholds are breached. Furthermore, real-time bias detection can help identify systematic disparities, such as disproportionately recommending certain procedures to specific ethnic groups, enabling timely interventions and retraining where necessary. Such monitoring systems require dedicated infrastructure and personnel, representing ongoing operational costs that healthcare institutions must budget for.
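A minimal sketch of such a continuous monitoring check, assuming recommendation counts are logged per procedure and demographic group. The log structure, threshold value, and group names are illustrative assumptions, not from any deployed system:

```python
def check_fairness(recommendation_log, threshold=0.10):
    """Flag procedures whose recommendation rates diverge across groups.

    recommendation_log: dict mapping procedure -> dict mapping group ->
        (times_recommended, total_patients_assessed).
    Returns a list of (procedure, gap) alerts where the rate gap between
    the most- and least-recommended groups exceeds the threshold.
    """
    alerts = []
    for procedure, groups in recommendation_log.items():
        rates = [rec / tot for rec, tot in groups.values() if tot > 0]
        if not rates:
            continue
        gap = max(rates) - min(rates)
        if gap > threshold:
            alerts.append((procedure, round(gap, 3)))
    return alerts

# Hypothetical weekly log: one procedure recommended far more often to one group
log = {
    "rhinoplasty": {"group_a": (45, 100), "group_b": (20, 100)},
    "skin_treatment": {"group_a": (10, 100), "group_b": (12, 100)},
}
print(check_fairness(log))  # only rhinoplasty breaches the 0.10 threshold
```

In a production setting the alert would trigger review or retraining rather than a simple printout, and the threshold itself would be set with clinical and statistical input.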

Diversify and Localize Training Data

The foundation of any AI model lies in the quality and representativeness of its training dataset. To ensure equitable performance in the MENA context, developers must actively curate training data that reflects the full spectrum of ethnic and phenotypic diversity within the region. This includes sourcing images and aesthetic preference data representative of Arab, Persian, North African, and South Asian populations.

One effective strategy is to establish collaborations with regional clinics to collect anonymized pre- and post-operative photographs, obtained under strict ethical oversight with appropriate consent and privacy safeguards. Such datasets would enable AI systems to learn and respect region-specific aesthetic goals rather than defaulting to externally imposed standards.

While publicly available datasets like FairFace include a “Middle Eastern” category, these classifications often lack the granularity needed to capture intra-regional diversity. The creation of comprehensive, culturally representative datasets requires significant investment in data collection, annotation, and quality assurance processes, with costs potentially reaching hundreds of thousands of dollars for adequately sized datasets.

Inclusion of Cultural Experts

The development of aesthetic AI systems must extend beyond the domain of data science and Western-trained surgical expertise. To ensure cultural relevance and sensitivity, interdisciplinary collaboration is essential, particularly with anthropologists, sociologists, and cosmetic surgeons from the MENA region. These cultural experts can provide critical insights into local conceptions of beauty and help redefine what constitutes a “successful outcome” in ways that go beyond standardized ratios or universal ideals.

Involving such stakeholders early in the design and validation process allows for the identification of cultural blind spots that may otherwise go unnoticed. Their input can guide AI systems to avoid recommendations that conflict with religious values or modesty principles, such as promoting overly dramatic alterations to features that are traditionally celebrated or meant to be subtly enhanced.

Ultimately, cultural competency should be treated with equal importance as technical competency in the teams responsible for building and deploying these systems. The inclusion of diverse voices not only enhances fairness and acceptability but also ensures that AI tools are aligned with the values and expectations of the communities they serve.

Regulatory Framework

Given the potentially profound impact that AI can have on patient well-being, it is imperative that regulatory bodies develop comprehensive guidelines to govern its ethical use in cosmetic procedures. Medical associations across the MENA region, in partnership with health ministries, should establish standards that mandate transparency regarding AI integration. Patients must be informed when AI tools are involved in their assessment or treatment planning.

Informed consent protocols must be culturally adapted to ensure patients understand AI’s role in their assessment, including potential cultural biases and limitations. This requires clear communication about how AI recommendations are generated and the importance of maintaining cultural aesthetic preferences. Practitioners should explain that AI systems may reflect training data biases and emphasize that final decisions should always incorporate cultural context and individual preferences. Such culturally sensitive informed consent processes require additional consultation time and specialized training for healthcare providers.

Furthermore, any AI system marketed within the region should be required to undergo rigorous bias testing and demonstrate cultural adaptability to ensure alignment with local aesthetic values and ethical norms. This regulatory oversight is essential to prevent inadvertent harm and uphold patient trust.

The United Arab Emirates has exemplified proactive leadership through its National Strategy for Artificial Intelligence 2031, which aims to position the country as a global AI leader while prioritizing robust governance and regulation.34 This strategy envisions integrating AI across multiple sectors to enhance quality of life and stimulate economic growth, with projections of up to $91 billion in additional economic value by 2031. In the healthcare domain, however, such advancements must be carefully balanced to avoid cultural erosion and respect the unique identity of MENA communities.

Recommendations for Implementation

For Technology Developers

Technology developers should mandate a minimum representation of at least 20% from MENA populations within all training datasets, ensuring balanced inclusion across diverse ethnic groups such as Arab, Persian, Turkish, Kurdish, and Amazigh communities. To maintain fairness post-deployment, real-time bias monitoring must be integrated into AI systems, with automated alerts triggered whenever demographic disparities surpass predefined acceptable limits.
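The proposed 20% representation floor could be verified with a simple dataset audit. The sketch below assumes per-image ancestry labels are available; the label names and dataset composition are illustrative assumptions only:

```python
def audit_representation(dataset_labels, min_mena_share=0.20):
    """Check whether a training set meets a minimum MENA representation share.

    dataset_labels: list of ancestry labels, one per training image
        (hypothetical labels; real datasets need finer-grained annotation).
    """
    mena_groups = {"arab", "persian", "turkish", "kurdish", "amazigh"}
    total = len(dataset_labels)
    mena = sum(1 for label in dataset_labels if label in mena_groups)
    share = mena / total if total else 0.0
    return {"mena_share": round(share, 3), "meets_minimum": share >= min_mena_share}

# Hypothetical dataset: 15% MENA representation fails the 20% floor
labels = ["european"] * 70 + ["arab"] * 10 + ["persian"] * 5 + ["east_asian"] * 15
print(audit_representation(labels))
```

An aggregate share alone is not sufficient: the balanced-inclusion requirement above also implies checking the distribution across the individual MENA ethnic groups, not just their combined total.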

Furthermore, developers need to design and adopt MENA-specific validation protocols that evaluate AI performance against culturally relevant and regionally informed benchmarks, rather than relying solely on universal Western standards. This approach will ensure that AI outputs remain accurate, equitable, and respectful of the region’s unique aesthetic values.

For Healthcare Providers

Healthcare providers should require comprehensive cultural competency training for all practitioners utilizing AI-assisted tools, enabling staff to identify and appropriately address algorithmic biases. Institutions must implement bias monitoring protocols to systematically track AI recommendation patterns across diverse patient demographics within their practices.

Additionally, robust patient feedback systems should be established, specifically designed to capture experiences related to AI-assisted consultations. These mechanisms should focus on evaluating cultural sensitivity and respect for individual aesthetic preferences, ensuring that patient voices guide ongoing improvements in AI integration.

For Policymakers

Policymakers must develop comprehensive regulatory frameworks that prioritize cultural equity in the deployment of medical AI. All medical AI systems should undergo cultural impact assessments prior to market approval, paralleling existing requirements for safety and efficacy. These assessments would ensure that systems are evaluated not only for technical performance but also for alignment with the values and diversity of the populations they serve.

Regulations must codify demographic representation standards for training data, preventing the deployment of biased systems that inadequately reflect ethnic diversity. Furthermore, robust oversight mechanisms should be established to govern AI use in healthcare, including regular audits of algorithmic performance across patient subgroups and enforceable penalties for non-compliance.

Next Steps: Innovation and Inclusion in Aesthetic AI

Future researchers should prioritize empirical investigations that translate theoretical frameworks into real-world evidence. Qualitative interviews with MENA cosmetic surgeons could provide critical insights into their lived experiences with AI integration, highlighting practical challenges and adaptive strategies that remain underrepresented in the current literature.

Patient satisfaction studies across diverse ethnic groups using AI-assisted consultations would offer direct assessments of cultural sensitivity and inclusivity from the patient’s perspective. Similarly, prospective trials comparing AI-generated recommendations with actual surgical outcomes across MENA populations would help validate the clinical effectiveness of culturally adapted AI systems.

From a technical standpoint, future innovation must focus on developing culturally responsive AI architectures. Approaches such as few-shot learning can improve model performance for underrepresented groups with limited data, while transfer learning techniques could enable AI systems to adapt knowledge from one cultural context to another, enhancing cross-cultural generalizability.

Finally, the creation of comprehensive ethical frameworks is essential to guide the responsible evolution of AI in aesthetic medicine. These frameworks must safeguard patient autonomy and uphold cultural values, ensuring that advancements in efficiency and precision do not come at the cost of inclusivity and respect.

Conclusions

The integration of AI into aesthetic medicine offers exciting possibilities to enhance outcomes and objectivity, but it also highlights a fundamental truth: Beauty cannot be divorced from culture. This is especially true in MENA societies, where standards of attractiveness are shaped by history, geography, and cultural values.

If we allow AI algorithms, often designed and trained far from the region, to dictate what is beautiful or recommend surgical changes without adaptation, we risk a homogenized standard of beauty that marginalizes cultural diversity. The evidence reviewed, from the biased results of Beauty.AI to the skewed outputs of generative AI systems, demonstrates that algorithmic bias is not a theoretical issue; it is already affecting how we see ourselves.

To move beyond algorithms, we must insist that these tools are developed and deployed with deep respect for cultural diversity and individual uniqueness. In practical terms, this means building bias-resistant AI, rooted in representative data and guided by local expertise. It also means that surgeons in the MENA region should critically appraise any AI-driven recommendation and use it only in tandem with their nuanced understanding of the patient in front of them.

Moving forward, we recommend immediate priorities in three key areas. First, technology developers must establish minimum representation standards, with at least 20% representation of MENA populations in all aesthetic AI training datasets, coupled with real-time bias monitoring systems that alert when demographic disparities exceed acceptable thresholds. Second, healthcare institutions should implement mandatory cultural competency training for practitioners using AI-assisted tools and establish patient feedback systems specifically designed to capture cultural sensitivity concerns. Third, policymakers must develop regulatory frameworks requiring cultural impact assessments for medical AI systems prior to market approval, similar to existing safety and efficacy requirements. These actions, implemented concurrently, can create the foundation for culturally responsive AI deployment across the region.
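The minimum-representation standard in the first priority lends itself to a straightforward automated check. The sketch below is a hypothetical illustration: the 20% floor comes from the recommendation above, while the dataset labels and function name are ours.

```python
from collections import Counter

# Hypothetical representation check for an aesthetic-AI training dataset.
# The 20% MENA floor reflects the recommendation in the text; the labels
# and composition below are invented for illustration.
def check_representation(labels, group="MENA", minimum=0.20):
    counts = Counter(labels)
    share = counts[group] / sum(counts.values())
    return share, share >= minimum

dataset = ["European"] * 60 + ["East Asian"] * 25 + ["MENA"] * 15
share, ok = check_representation(dataset)
# share = 0.15 -> below the 20% floor, so this dataset fails the standard
```

In a deployed monitoring system, a check like this would run continuously as training data is updated, feeding the alerting mechanism described above rather than a one-off pass/fail decision.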

The path forward can transform a potential pitfall into a strength. If properly built, AI can celebrate various definitions of beauty: imagine AI tools that can show patients how they would look after surgery in a way that aligns with their cultural aesthetics, or algorithms that can highlight examples of attractive faces from the patient’s own ethnic group as guides.

With Middle Eastern and North African nations such as the UAE taking global leadership in AI strategy, the region has the opportunity to also lead in ethical, culturally mindful AI in medicine. By investing in this opportunity now, through research, interdisciplinary collaboration, and regulation, we can ensure that the coming era of AI-enhanced cosmetic surgery empowers individuals and honors cultural identity, rather than enforcing uniformity.

As we advance toward more sophisticated AI systems in medicine, we must remain vigilant about algorithmic justice and ensure that technological progress serves to reduce rather than amplify existing healthcare disparities.35,36 The integration of AI in aesthetic medicine represents both an opportunity and a responsibility to create more equitable and culturally sensitive healthcare technologies.

In the end, the true promise of AI with regards to aesthetics is not to decide what is beautiful but to help each person safely achieve their own vision of beauty. Realizing this promise will entail going beyond the algorithm’s raw output and infusing it with the context, compassion, and cultural sensitivity that make medicine an art as much as a science.

Abbreviations

AI, artificial intelligence; FAI, facial aesthetic index; MENA, Middle East and North Africa.

Ethics

This perspective article did not involve human subjects and therefore did not require ethical approval.

Disclosure

The authors report no conflicts of interest regarding this work.

References

  • 1.Middle East and Africa medical aesthetic market report - industry trends and forecast to 2032. Data Bridge Market Research. Available from: https://www.databridgemarketresearch.com/reports/middle-east-and-africa-medical-aesthetics-market. Accessed May 19, 2025.
  • 2.Kalantar Motamedi MH, Ebrahimi A, Shams A, Nejadsarvari N. Health and social problems of rhinoplasty in Iran. World J Plast Surg. 2016;5(1):75–76. [PMC free article] [PubMed] [Google Scholar]
  • 3.Sattler S, Frank K, Kerscher M, et al. Objective facial assessment with artificial intelligence: introducing the facial aesthetic index and facial youthfulness index. J Drugs Dermatol. 2024;23(1):e52–e54. doi: 10.36849/JDD.7080 [DOI] [PubMed] [Google Scholar]
  • 4.Frank K, Day D, Few J, et al. AI assistance in aesthetic medicine–a consensus on objective medical standards. J Cosmet Dermatol. 2024;23(12):4110–4115. doi: 10.1111/jocd.16481 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5.Barone M, De Bernardis R, Persichetti P. Artificial intelligence in plastic surgery: analysis of applications, perspectives, and psychological impact. Aesthetic Plast Surg. 2025;49(5):1637–1639. doi: 10.1007/s00266-024-03988-1 [DOI] [PubMed] [Google Scholar]
  • 6.Fortune-Ely M, Achanta M, Song MSH. The future of artificial intelligence in facial plastic surgery. JPRAS Open. 2024;39:89–92. doi: 10.1016/j.jpra.2023.11.016 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 7.Park KW, Diop M, Willens SH, Pepper JP. Artificial intelligence in facial plastics and reconstructive surgery. Otolaryngol Clin North Am. 2024;57(5):843–852. doi: 10.1016/j.otc.2024.05.002 [DOI] [PubMed] [Google Scholar]
  • 8.Morris MX, Fiocco D, Caneva T, Yiapanis P, Orgill DP. Current and future applications of artificial intelligence in surgery: implications for clinical practice and research. Front Surg. 2024;11:1393898. doi: 10.3389/fsurg.2024.1393898 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 9.Shuaib A. Transforming healthcare with AI: promises, pitfalls, and pathways forward. Int J Gen Med. 2024;17:1765–1771. doi: 10.2147/IJGM.S449598 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10.Weeks DM, Thomas JR. Beauty in a multicultural world. Facial Plast Surg Clin N Am. 2014;22(3):337–341. doi: 10.1016/j.fsc.2014.04.005 [DOI] [PubMed] [Google Scholar]
  • 11.Buolamwini J, Gebru T. Gender shades: intersectional accuracy disparities in commercial gender classification. In: Proceedings of the 1st Conference on Fairness, Accountability and Transparency. PMLR; 2018:77–91. Available from: https://proceedings.mlr.press/v81/buolamwini18a.html. Accessed May 19, 2025. [Google Scholar]
  • 12.Karkkainen K, Joo J. FairFace: face attribute dataset for balanced race, gender, and age for bias measurement and mitigation. In: 2021 IEEE Winter Conference on Applications of Computer Vision (WACV). IEEE; 2021:1547–1557. doi: 10.1109/WACV48630.2021.00159. [DOI] [Google Scholar]
  • 13.Goldie K, Cumming D, Voropai D, Mosahebi A, Fabi SG, Carbon CC. Aesthetic delusions: an investigation into the role of rapid visual adaptation in aesthetic practice. Clin Cosmet Invest Dermatol. 2021;14:1079–1087. doi: 10.2147/CCID.S305976 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 14.Levin S. A beauty contest was judged by AI and the robots didn’t like dark skin. The Guardian. Available from: https://www.theguardian.com/technology/2016/sep/08/artificial-intelligence-beauty-contest-doesnt-like-black-people. Accessed May 19, 2025.
  • 15.Georgievskaya A, Tlyachev T, Danko D, Chekanov K, Corstjens H. How artificial intelligence adopts human biases: the case of cosmetic skincare industry. AI Ethics. 2025;5(1):105–115. doi: 10.1007/s43681-023-00378-2 [DOI] [Google Scholar]
  • 16.AlDahoul N, Rahwan T, Zaki Y, Abdul Karim H, AlDahoul N. AI-generated faces influence gender stereotypes and racial homogenization. Front Public Health. 2024;12. doi: 10.48550/ARXIV.2402.01002. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 17.Tiku N, Schaul K, Chen S. These fake images reveal how AI amplifies our worst stereotypes. 2023. Available from: https://www.washingtonpost.com/technology/interactive/2023/ai-generated-images-bias-racism-sexism-stereotypes/. Accessed November 8, 2023.
  • 18.Lim B, Seth I, Kah S, et al. Using generative artificial intelligence tools in cosmetic surgery: a study on rhinoplasty, facelifts, and blepharoplasty procedures. J Clin Med. 2023;12(20):6524. doi: 10.3390/jcm12206524 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 19.Broer PN, Juran S, Liu YJ, et al. The impact of geographic, ethnic, and demographic dynamics on the perception of beauty. J Craniofac Surg. 2014;25(2):e157–e161. doi: 10.1097/SCS.0000000000000406 [DOI] [PubMed] [Google Scholar]
  • 20.Arian H, Alroudan D, Alkandari Q, Shuaib A. Cosmetic surgery and the diversity of cultural and ethnic perceptions of facial, breast, and gluteal aesthetics in women: a comprehensive review. Clin Cosmet Invest Dermatol. 2023;16:1443–1456. doi: 10.2147/CCID.S410621 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 21.Amiri L, Galadari H, Al Mugaddam F, Souid AK, Stip E, Javaid SF. Perception of cosmetic procedures among Middle Eastern youth. J Clin Aesthetic Dermatol. 2021;14(12):E74–E83. [PMC free article] [PubMed] [Google Scholar]
  • 22.Fabi SG, Galadari H, Fakih‐Gomez N, Mobin SN, Artzi O, Dayan S. Aesthetic considerations for treating the Middle Eastern patient: thriving in diversity international roundtable series. J Cosmet Dermatol. 2023;22(5):1565–1574. doi: 10.1111/jocd.15640 [DOI] [PubMed] [Google Scholar]
  • 23.Holland E. Marquardt’s phi mask: pitfalls of relying on fashion models and the golden ratio to describe a beautiful face. Aesthetic Plast Surg. 2008;32(2):200–208. doi: 10.1007/s00266-007-9080-z [DOI] [PubMed] [Google Scholar]
  • 24.Mommaerts MY, Bamml M. Ideal proportions in full face front view, contemporary versus antique. J Cranio-Maxillofac Surg. 2011;39(2):107–110. doi: 10.1016/j.jcms.2010.04.012 [DOI] [PubMed] [Google Scholar]
  • 25.Rhee SC. Differences between Caucasian and Asian attractive faces. Skin Res Technol. 2018;24(1):73–79. doi: 10.1111/srt.12392 [DOI] [PubMed] [Google Scholar]
  • 26.Sturm-O’Brien A, Brissett A, Brissett A. Ethnic trends in facial plastic surgery. Facial Plast Surg. 2010;26(02):069–074. doi: 10.1055/s-0030-1253496 [DOI] [PubMed] [Google Scholar]
  • 27.Burusapat C, Lekdaeng P. What is the most beautiful facial proportion in the 21st century? Comparative study among Miss Universe, Miss Universe Thailand, neoclassical canons, and facial golden ratios. Plast Reconstr Surg - Glob Open. 2019;7(2):e2044. doi: 10.1097/GOX.0000000000002044 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 28.Hicks KE, Thomas JR. The changing face of beauty. Otolaryngol Clin North Am. 2020;53(2):185–194. doi: 10.1016/j.otc.2019.12.005 [DOI] [PubMed] [Google Scholar]
  • 29.Alotaibi AS. Demographic and cultural differences in the acceptance and pursuit of cosmetic surgery: a systematic literature review. Plast Reconstr Surg - Glob Open. 2021;9(3):e3501. doi: 10.1097/GOX.0000000000003501 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 30.Samizadeh S, Wu W. Ideals of facial beauty amongst the Chinese population: results from a large national survey. Aesthetic Plast Surg. 2018;42(6):1540–1550. doi: 10.1007/s00266-018-1188-9 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 31.Rajkomar A, Oren E, Chen K, et al. Scalable and accurate deep learning with electronic health records. Npj Digit Med. 2018;1(1):18. doi: 10.1038/s41746-018-0029-1 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 32.Yang G, Ye Q, Xia J. Unbox the black-box for the medical explainable ai via multi-modal and multi-centre data fusion: a mini-review, two showcases and beyond. Inf Fusion. 2022;77:29–52. doi: 10.1016/j.inffus.2021.07.016 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 33.Obermeyer Z, Powers B, Vogeli C, Mullainathan S. Dissecting racial bias in an algorithm used to manage the health of populations. Science. 2019;366(6464):447–453. doi: 10.1126/science.aax2342 [DOI] [PubMed] [Google Scholar]
  • 34.The UAE national strategy for artificial intelligence 2031 | digital watch observatory. Available from: https://dig.watch/resource/the-uae-national-strategy-for-artificial-intelligence-2031. Accessed May 19, 2025.
  • 35.Shuaib A, Arian H, Shuaib A. The increasing role of artificial intelligence in health care: will robots replace doctors in the future? Int J Gen Med. 2020;13:891–896. doi: 10.2147/IJGM.S268093 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 36.Benjamin R. Race After Technology: Abolitionist Tools for the New Jim Code. Polity Press; 2019. [Google Scholar]

Articles from Clinical, Cosmetic and Investigational Dermatology are provided here courtesy of Dove Press