The AI Dilemma: Harnessing Transformative Power While Mitigating ESG Risks
by Mark Tan
Another excellent paper has come recently from our Harvard classes, this time from Mark Tan of Singapore, on the risks and opportunities surrounding AI. It provides excellent framing for what has become the megatrend of our time.
Many sustainable investment portfolios are heavily weighted toward technology, so fund managers will have to face this situation head on. It will be interesting to see what issues fund managers and asset owners alike choose to engage technology companies on going forward.
Implications abound for communities as well.
* * *
Artificial intelligence (AI) is now among us and here to stay. A powerful and pervasive technology that is already reshaping the global economy and society, AI is hailed as a catalyst for unprecedented progress by its advocates and denounced as a harbinger of social devastation by its critics. Each side has compelling reasons.
On one hand, AI holds great promise to revolutionize industries from healthcare to transportation, boost economic productivity, and help solve complex challenges like climate change. Its advocates foresee a world of continual scientific breakthroughs, maximal industrial efficiency, and instant problem-solving that could boost global trade by some 40% by 2040 (WTO, 2025).
Yet this promise is shadowed by complex Environmental, Social, and Governance (ESG) risks that threaten to undermine its benefits, as the same systems exacerbate the very problems they are designed to solve. AI systems consume vast amounts of energy and water, displace workers, codify and amplify societal biases, and operate within a governance vacuum, creating a fundamental contradiction in their value proposition (ProCon Editors, 2025). This is not a simple binary of costs versus benefits, however. The challenge lies in understanding which ESG trade-offs are acceptable and under what conditions, and whether the trajectory of AI development can be redirected toward more sustainable pathways without sacrificing its transformative potential.
Drivers And Potential Of AI: The Good, The Bad, And The ESG
We are in a period of intense innovation and investment in AI, enchanted by its remarkable reasoning power and agentic capabilities. The launch of sophisticated large language models (LLMs) like ChatGPT has brought generative AI to countless users across the world: drafting complex legal documents from a few prompted words, writing working code for an entire website, synthesizing in minutes videos that used to take weeks or months to produce, and tirelessly conducting engaging conversations with anyone, anytime, anywhere. AI is virtually the genie of the lamp that can grant not three but endless wishes.
The current AI boom is driven by a convergence of key technologies (i.e. Big Data, advanced algorithms, and immense computational power from new GPUs), unlocking staggering economic potential. Generative AI alone could add $2.6 to $4.4 trillion annually to the global economy (Chui et al., 2023), driving rapid integration across industries from finance (fraud detection) to pharmaceuticals (drug discovery).
The potential of AI for positive social and environmental impact is equally exciting. Google’s DeepMind demonstrated AI can reduce data center cooling energy by up to 40%, showing the technology’s potential to mitigate its own environmental footprint. Other examples of potential AI applications include the modeling of complex climate systems in climate science, optimizing renewable energy grids in the energy sector, diagnosing diseases from medical images in medicine, and solving longstanding problems in mathematics and science, such as the accurate prediction of protein structure (Jumper et al., 2021). The list goes on.
This wave of optimism has triggered a furious race among tech giants, investors, and nations afraid of missing their claim to the ultimate prize: a treasure trove of new and valuable intellectual property in breakthrough technologies that could be unlocked by AI's capabilities. However, this breakneck speed of development has simply outpaced our ability to comprehend and manage AI's impacts as more challenges come to light.
The Environmental Impact: AI’s Ravenous Appetite
One of the biggest ESG issues with AI is its huge environmental footprint. The intense training and use of large AI models require huge amounts of energy to run the computation and water for cooling systems to prevent servers from overheating. Other issues, like hazardous electronic waste at the end of the lifecycle of these data centers, are routinely ignored to stay in step with the competition. These environmental burdens are often invisible to users as they click carelessly away one application after another, often for petty amusement. Yet these concerns must be contextualized: AI’s environmental costs, while substantial, are part of the broader computing infrastructure that also enables cloud storage, streaming services, and digital communications—challenges that predate AI but are now amplified by it.
Huge energy demands. Training a single large language model like OpenAI's GPT-4 was estimated to consume 50 gigawatt-hours of energy, enough to power San Francisco for three days, and to cost over $100 million (O'Donnell & Crownhart, 2025), though the exact figures remain unverified as OpenAI has not released official numbers. However, this one-time energy demand for training pales in comparison to the real sustainability challenge: deployment.
Every query, however simple or innocent, sent to a generative AI model like ChatGPT requires computational work. With these models integrated into numerous applications and being used billions of times in a day, the cumulative energy demand is astronomical.
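The arithmetic can be made concrete with a back-of-envelope sketch. Both inputs below are illustrative assumptions, not figures reported by any provider:

```python
# Back-of-envelope estimate of cumulative inference energy.
# PER_QUERY_WH and QUERIES_PER_DAY are assumptions for illustration only.
PER_QUERY_WH = 0.3        # assumed energy per generative-AI query, in Wh
QUERIES_PER_DAY = 2.5e9   # assumed global daily query volume

daily_kwh = PER_QUERY_WH * QUERIES_PER_DAY / 1_000   # Wh -> kWh
annual_gwh = daily_kwh * 365 / 1_000_000             # kWh -> GWh

TRAINING_GWH = 50  # the one-time GPT-4 training estimate cited earlier

print(f"Inference: ~{annual_gwh:,.0f} GWh/year")
print(f"= retraining GPT-4 ~{annual_gwh / TRAINING_GWH:.1f} times per year")
```

Even at these modest assumed rates, a year of inference exceeds the one-time training estimate severalfold; scaling the query volume up by an order of magnitude pushes it into the tens of retrainings per year.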
Global electricity demand from the data centers that power AI and other cloud services is projected to double over the next five years, consuming as much electricity by 2030 as all of Japan does today (IEA, 2025). This represents a classic market failure: the energy costs are externalized to the grid and society while the benefits accrue to private companies, creating misaligned incentives that favor rapid deployment over sustainability.
Thirsty for water. The massive heat generated by AI computing clusters requires colossal chilled-water cooling systems. Microsoft's 2023 environmental report revealed that its global water consumption spiked by 34% from 2021 to 2022, largely driven by AI research, to reach 1.7 billion gallons, or about 2,500 Olympic-sized swimming pools (O'Brien & Fingerhut, 2023). More critically, global AI demand is projected to require between 4.2 and 6.6 billion cubic meters of water by 2027, some four to six times Denmark's annual water consumption (Li et al., 2025). AI's water footprint poses a serious risk to local water resources, especially in the water-stressed regions where data centers are often located, and competes directly with agricultural and residential needs, creating environmental justice concerns.
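These figures can be sanity-checked with standard unit conversions (an Olympic-sized pool holds roughly 2,500 m³, and one cubic meter is about 264 US gallons); the only data inputs are the numbers quoted above:

```python
GALLONS_PER_M3 = 264.17   # US gallons per cubic meter
POOL_M3 = 2_500           # approximate volume of an Olympic-sized pool

microsoft_gallons = 1.7e9                        # Microsoft's 2022 figure
microsoft_m3 = microsoft_gallons / GALLONS_PER_M3
pools = microsoft_m3 / POOL_M3                   # comes out near 2,500

low_m3, high_m3 = 4.2e9, 6.6e9                   # projected 2027 AI demand
print(f"Microsoft 2022: ~{pools:,.0f} Olympic pools")
print(f"2027 projection: {low_m3 / microsoft_m3:,.0f}-"
      f"{high_m3 / microsoft_m3:,.0f}x Microsoft's 2022 total")
```

The "about 2,500 pools" figure checks out, and the 2027 projection is several hundred to a thousand times one company's 2022 consumption.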
Trailing electronic waste. The AI hardware race has accelerated the high turnover of computing equipment and components to improve output efficiency. This relentless demand for more powerful and efficient chips is shortening their lifecycle and contributing to an increasing stream of electronic waste that is toxic and difficult to recycle. The extraction of rare earth minerals required for advanced AI chips, which are often mined in developing countries with minimal environmental protection, adds another layer of environmental burden that extends far beyond the data center walls.
The toll of AI on the environment creates a conundrum where its potential to solve climate change is, in its current form, worsening it. The AI industry’s environmental impact is critical for sustainable investment assessment but has remained largely unregulated and opaque. However, progress is emerging with several tech companies committing themselves to carbon-neutral operations. The EU’s AI Act now requires disclosure of energy and water consumption for large models, establishing precedents that other jurisdictions may follow.
The Social Dilemma: Jobs, Bias, And Privacy
AI poses significant and evolving social challenges. Currently, the top concerns about AI are labor market disruption, algorithmic bias, privacy erosion, and cybercrimes. These challenges are not entirely unique to AI but represent accelerations and amplifications of existing technological trends, raising the question of whether AI requires fundamentally new social contracts or merely stronger enforcement of existing principles.
Massive job displacement. More than just automating manual tasks robotically, AI is now capable of the independent assessment needed to handle non-routine tasks. One estimate put almost half of total US employment at risk of being automated out of relevance (Frey & Osborne, 2017). While subsequent analyses were more conservative, narrowing the displacement to specific tasks within jobs (e.g. summarization and triaging), the disruption is real and likely to be widespread, especially as AI capability grows exponentially over time. Historical technological transitions, from the Industrial Revolution to the computer age, suggest that while new technologies destroy jobs, they also create them, often in unpredictable ways. However, the pace and scope of AI-driven change may outstrip society's ability to adapt through traditional retraining programs. Roles heavily reliant on information processing and communication, especially digital tasks that can be performed remotely or do not require a physical presence, are vulnerable (Castrillon, 2025). This trend could eventually hollow out certain economic sectors, resulting in vast inequality unless proactive and extensive reskilling efforts are undertaken. The psychological impact of job insecurity and the devaluation of human skills will present significant social challenges.
Erosion of privacy. With pervasive access to content, including videos, audio, images, and text, AI is effectively a powerful surveillance agent capable of analyzing and finding connections within that content. When the primary business model of most tech companies is to use data to train AI models and profit from targeted advertising, they are motivated to collect ever more personal information, often without direct and informed consent, to preempt purchase decisions. The digital divide exacerbates this issue: users in developing countries and marginalized communities often have less awareness of privacy rights and fewer legal protections, making them disproportionately vulnerable to exploitative data practices.
AI-enabled cybercrime. With machine vision, popularly known through facial recognition technology, and deepfakes, the public is gradually awakening to the sobering threats behind this quiet, innocuous, and constant surveillance and eavesdropping. AI and deepfakes are regularly used in cyber-scams, with thousands of people tricked and trafficked into working for criminal syndicates that scam others. Between 2022 and 2023, there was a staggering 1,530% increase in deepfake cases in the Asia Pacific region, the second highest in the world after North America (Surasit, 2024). This pattern reveals how AI risks compound across borders, with technologies developed in wealthy nations disproportionately preying on vulnerable populations in the Global South.
The Governance Vacuum: AI Is Watching Us, But Who Is Watching AI?
The rapid development of AI is happening in the absence of robust governance. Prevailing regulations, standards, and guardrails were not designed for AI systems capable of autonomous reasoning, whose creators frequently cannot explain how the machines arrive at their conclusions. The current governance framework, which depends on self-policing and outdated laws, has failed to mitigate the risks posed by AI, leaving society vulnerable to its negative impacts. However, this governance gap also reflects genuine technical and philosophical challenges: how to regulate what we cannot fully explain, and how to balance innovation with guardrails when AI's full potential and impact are still uncertain.
Power concentration. Currently, AI development is concentrated in the hands of a few tech giants (e.g., Google, Microsoft, Meta, Amazon) and a handful of well-funded startups (e.g., OpenAI, Anthropic). This heavy concentration creates several risks. In creating their AI products, these players effectively set ethical and safety standards for the industry unilaterally, without a democratic, multi-stakeholder process. Moreover, under intense competition, tech companies are compelled to launch new features rapidly to protect and grow their user base rather than to practice caution, especially when the rules are self-defined. Smaller AI players are also finding the industry inaccessible because of the high costs of computational resources and AI specialists. The massive capital requirements for cutting-edge AI development create a high barrier to entry that traditional antitrust frameworks struggle to address. The result is an oligopolistic market in which a handful of companies control the infrastructure, data, and talent necessary for frontier AI research. It raises questions about whether alternative governance models (e.g. public utilities, cooperative ownership, or mandatory open-sourcing of AI) might better serve the public interest.
Ethics-washing. While tech companies have set up internal AI ethics boards and published principles on Responsible AI, this self-regulation has repeatedly proven insufficient where ethical practices fundamentally conflict with commercial motivations. The ethics boards often lack the authority to resolve this conflict, choosing instead to strategically select ethical principles that will not limit action while presenting themselves as contributing to the common good (van Maanen, 2022). Consequently, these initiatives are often dismissed as ethics-washing, a superficial public relations strategy designed to placate critics and delay meaningful regulation rather than a genuine commitment to ethical principles.
However, not all efforts are performative. Anthropic delayed Claude's release for safety reviews; OpenAI initially withheld the full GPT-2 model; Microsoft declined facial recognition contracts despite the revenue loss. These cases demonstrate that internal governance can work when supported by leadership, employee pressure, and reputational risk. Yet voluntary measures can be reversed, as Meta showed when it disbanded its Responsible AI team amid layoffs. The question is whether self-regulation provides sufficient protection against the emerging risks of a transformative technology.
A Single AI Query and Its Impacts
To understand how big the impact of AI is, let us follow the electronic journey of a user query (prompt) to a generative AI model (processor), like ChatGPT, to track its digital ESG footprint.
Albert is an avid cyclist who resides in Malaysia. He already owns four hardtail bicycles and is thinking of getting another to add to his collection. Recently, Albert came across &AI (a fictitious name), a new AI chatbot operating out of the U.S., and decides to try it out.
He whips out his cellphone, finds the &AI icon, and types a short prompt (with careless typographical errors) in the chatbot for a recommendation of bicycles based on a few simple criteria, before popping a coffee capsule into the automatic coffee maker for a shot of espresso.
The data bearing Albert’s request zips across long undersea cables and is received by a massive data center in Arizona. Thousands of specialized processors are activated to process the information, and the result is encoded and sent back to its requestor. While Albert is still waiting for his caffeine fix, &AI is already enthusiastically displaying the answer on the cellphone screen, listing its recommendations supported by pros and cons of each bicycle, and ending with additional suggestions related to the bicycle that &AI will gladly advise.
Even though the direct energy cost of an AI query is about 10 times that of a standard Google search, it is still small. However, when this tiny energy consumption is multiplied by billions of queries per day, the cumulative energy burden is immense (IEA, 2025). To prevent servers from melting from the heat generated by heavy computation workloads, a data center must pump thousands of liters of fresh water, fetched from local reservoirs, into its array of cooling systems.
Seven Olympic-sized pools of water evaporate each day to keep the data center from overheating. This immense water footprint burdens the environment and ecology, as well as the residents and companies who draw from the same water source, yet it is typically not accounted for in tech companies' selective sustainability reports. In the case of &AI, its data center's water consumption exceeds that of several nearby farming communities, creating tensions between tech infrastructure and local livelihoods that remain largely invisible to users like Albert.
Beyond the immediate environmental cost, Albert's query, along with &AI's response and all subsequent interactions, is logged and incorporated into the model's training data. Such data, which contains personal information, creative ideas, and sensitive details, now belongs to &AI as part of its vast and growing proprietary dataset. Beyond privacy concerns, the AI model was pretrained on a dataset scraped from the Internet, known for its stereotypical and biased content, and on proprietary data that tech companies cull from social media platforms and digital services granting users "free" access in exchange for the rights to their personal and transactional data. &AI's response to Albert's query would already embed gender stereotypes, such as suggesting expensive bicycle models decorated in bright red flames for a young, active working male but a lightweight one in pastel colors for a female cyclist.
What was originally an unbiased statistical probability turns into a calculated and compelling certainty through the omniscient and influential presence of AI. Internally, while &AI's ethics team flags the environmental burden and potential discrimination, its feedback is sidelined by the excitement over the next product to expand market share and retain investor interest. On the PR front, &AI continues to pledge its earnest commitment to efficient and equitable AI, despite its poor governance in enforcing that promise against commercial imperatives and in fulfilling its duty as a good steward of the society it claims to serve.
While &AI is a typical illustration of today’s dichotomy in AI development where the pursuit of AI supremacy overrides the urgency of ESG priorities, it is not entirely representative. Some AI companies have implemented more robust governance, including third-party audits, slower release cycles, and genuine ethics board authority, demonstrating that responsible AI development is possible, if not yet widespread.
Multi-stakeholder Framework for Co-creating Responsible and Ethical AI
Although the ESG challenges are daunting, they can be overcome. Harnessing the benefits of AI while mitigating its ESG impact will require a concerted, multi-stakeholder effort. A pathway for action requires all stakeholders, including governments, tech companies, investors, and civil society, to work in concert, collectively playing their part in keeping AI safe, fair, and benign. The framework below outlines not just aspirational goals but specific mechanisms, drawing on both emerging best practices and lessons from other regulated industries.
The government, as the people’s representative and the country’s steward, must lead the charge for AI safety by setting up a robust regulatory framework that can protect citizens and the environment, and collaborating with countries and tech companies internationally to manage AI innovation and proliferation. Some ideas are as follows:
Mandating AI audit. Legislators must make full transparency and pre-deployment audits mandatory for high-risk AI systems (e.g., those used in hiring, lending, and criminal justice).
An example of a risk-based regulatory framework that other economies could model after is the EU’s pioneering AI Act, which requires tech companies to disclose energy usage, water consumption, and emissions data for training and operating large AI models.
Credible audits will require certified, technically expert third-party auditors. Auditing standards should be developed collaboratively by technical experts, ethicists, affected communities, and regulators, similar to how accounting standards evolved through multi-stakeholder processes. Audit reports should include not just compliance checklists but quantitative bias metrics, failure mode analysis, and impact assessments on vulnerable populations.
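As one concrete example of such a quantitative bias metric, demographic parity difference measures the gap in positive-outcome rates between groups; the toy data below is illustrative, not drawn from any real audit:

```python
def demographic_parity_difference(outcomes, groups):
    """Largest gap in positive-outcome rate between any two groups.

    0.0 means perfect parity; values near 1.0 indicate severe disparity.
    """
    rates = {}
    for g in set(groups):
        group_outcomes = [o for o, gg in zip(outcomes, groups) if gg == g]
        rates[g] = sum(group_outcomes) / len(group_outcomes)
    return max(rates.values()) - min(rates.values())

# 1 = model recommends hiring, 0 = model rejects (toy data only)
outcomes = [1, 1, 0, 1, 0, 0, 1, 0]
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(outcomes, groups))  # 0.75 - 0.25 = 0.5
```

A real audit would report this alongside complementary fairness metrics such as equalized odds and calibration, since no single number captures every form of bias.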
International and domestic regulations. Clear rules must be established to determine legal liability when an AI system causes harm; this will create powerful incentives for tech companies to build safer, more reliable systems in order to stay operational. Liability frameworks could draw from product liability law, medical malpractice standards, or financial services regulation, depending on the application domain. The global nature of AI also requires international cooperation among governments to establish global AI standards and practices, to prevent tech companies from relocating operations to jurisdictions with loose regulations or weak enforcement. However, international coordination is likely to face significant challenges: countries have divergent interests, with some prioritizing innovation over safety, and existing international institutions still lack effective enforcement mechanisms. Potential solutions lie in bilateral treaties with mutual recognition of standards, regional harmonization (as the EU has pioneered), and market-based incentives whereby access to major markets requires compliance with minimum standards.
AI investment as a public good. Applications that serve public interest (e.g., public health, cybersecurity, and climate modeling) should be funded by governments or public-private partnerships, rather than leaving these services to be delivered by commercial enterprises for profit. Public investment can be structured through challenge grants, research consortia, or direct government development (similar to how DARPA funds high-risk research). The government funding should also include training in AI literacy and reskilling programs to upgrade the workforce for an AI-driven economy. Singapore’s SkillsFuture and Estonia’s digital education initiatives are successful models that demonstrate how national programs can rapidly improve technological literacy across entire populations.
Economic incentives for sustainable AI. Governments should implement carbon pricing for data centers, water usage fees that reflect true scarcity, and tax incentives for companies that achieve verified reductions in AI’s environmental footprint. These market-based mechanisms can complement regulations by making sustainable practices economically attractive rather than mere compliance costs.
Tech companies and giant corporations developing and deploying AI on a global scale must assume responsibility and be held accountable for their practices (or inactions) in ensuring AI safety for society and the environment.
ESG intentionality. Tech companies must move beyond ethics-washing and using ESG as a public relations defense. ESG must be integrated into every stage of the AI lifecycle, from design and development to deployment and decommissioning, and across functions, with rigorous check-and-control mechanisms to ensure AI safety. A culture of responsibility must be fostered within tech companies, where safety and ethics are shared priorities built into every team's objectives and performance metrics. This requires structural changes: ethics teams must report directly to the company board, hold veto power over launches that violate safety standards, and be equipped with adequate resources and authority. Compensation structures should reward long-term AI safety outcomes alongside short-term performance metrics.
Transparent and green. All tech companies should publish sustainability reports on their AI resource footprint. These reports must be reviewed and audited for bias and safety by independent, credible parties (e.g. NGOs and scientists championing AI safety with expert knowledge of neural networks and deep learning). It is vital for tech companies to be transparent and to disclose the limitations and potential failures of their technology. They should also provide a clear roadmap to significantly reduce the impact of their operations, such as shifting power purchase agreements to renewable energy sources and investing in emerging technologies like liquid cooling and AI-optimized chip designs. Companies should publicly commit to regular reporting of measurable targets (e.g. carbon neutrality by specific dates or minimum percentages of renewable energy), verified by a third party.
Worker transition. Tech companies deploying AI that will displace workers en masse must work with local authorities to develop proactive plans for the reskilling, upskilling, and compensation of workers. A properly managed transition plan should be an essential requisite for the license to operate in an economy. Models such as severance packages tied to training completion and tuition reimbursement programs can ease transition pains.
Other stakeholder groups, including investors, consumers, academia, and NGOs, also play a critical role in safeguarding AI. Financial access, public pressure, and expert insights are often powerful levers for change.
Investors, banks, and FSIs. Banks and asset managers must conduct stringent due diligence before incorporating tech companies into their portfolios, and regularly assess the ESG risks of portfolio companies involved in or exposed to AI. They should fully leverage their roles as shareholders and creditors to demand transparency, better governance, and sustainable practices, and be prepared to divest from companies that violate these principles.
This will send a strong market signal to tech companies and their investors to prioritize fair and safe AI to sustain their funding.
AI users and consumers. Consumers must demand transparency from companies about their privacy rights, how data will be used, and how AI systems will be audited for bias and abuse.
When consumers and users start to back organizations that advocate for digital rights and ethical AI, the powerful shift in market demand will make tech companies rethink their go-to-market strategies before rolling out products indiscriminately. Public awareness and AI literacy are crucial to empowering consumers and users in their choice of digital services.
Consumer movements have successfully changed corporate practices in other domains (e.g. organic food and conflict-free minerals), suggesting that informed, coordinated consumer action can influence AI development.
Academia and civil society. Universities and NGOs play a vital role as independent watchdogs over ambitious tech companies to safeguard society and the environment from ESG impacts. By performing independent research on AI impacts, developing rigorous auditing tools, and elevating public awareness of AI, they can encourage a balanced debate and hold tech companies and governments accountable to the people and the planet. Researchers have already developed important measurement frameworks (e.g. energy consumption calculators and fairness metrics) that enable more rigorous assessment of AI systems. Civil society organizations can amplify marginalized voices, document AI harms that companies might overlook or downplay, and mobilize public pressure for reform. International collaboration among these groups can counter the global reach of AI companies and create accountability mechanisms that transcend national boundaries.
To ensure accountability and track progress of AI developments, stakeholders need standardized metrics for ESG performance. Some proposed metrics are as follows:
• Environmental: Energy consumption per query (Wh), carbon intensity (gCO₂ per inference), water usage per deployment cycle (m³), and e-waste generated per facility (metric tons).
• Social: Number of jobs displaced by sector/region, demographic representation in data training and outputs, privacy incidents and non-compliance rate, and accessibility scores for users with disabilities.
• Governance: Audit frequency and findings, board oversight mechanisms, incident response times for safety failures, and transparency of AI model documentation.
By enabling investors to compare companies, regulators to enforce standards, and consumers to make informed choices, these metrics can transform ESG for AI from aspirational rhetoric into measurable accountability.
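One way such standardized metrics could be operationalized is as a machine-comparable reporting record. The field names and figures below are hypothetical, not an existing disclosure standard:

```python
from dataclasses import dataclass

@dataclass
class AIESGReport:
    company: str
    energy_wh_per_query: float        # Environmental
    carbon_g_per_inference: float
    water_m3_per_cycle: float
    jobs_displaced: int               # Social
    privacy_incidents: int
    audits_per_year: int              # Governance
    incident_response_hours: float

def rank_lowest_first(reports, field):
    """Rank companies on one lower-is-better metric, best first."""
    return sorted(reports, key=lambda r: getattr(r, field))

# Hypothetical disclosures for two fictitious companies
a = AIESGReport("&AI", 0.9, 4.3, 1.2e6, 1200, 3, 1, 72.0)
b = AIESGReport("GreenModel", 0.3, 1.1, 4.0e5, 150, 0, 4, 8.0)
print(rank_lowest_first([a, b], "carbon_g_per_inference")[0].company)
```

With disclosures in a common schema, investors could rank or screen companies metric by metric; a real standard would also require audited provenance for each reported figure.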
References
Castrillon, C. (2025, August 7). Which jobs are AI-safe? Microsoft’s surprising data. Forbes. https://www.forbes.com/sites/carolinecastrillon/2025/08/07/microsoft-reveals-the-most-and-least-ai-safe-jobs-where-do-you-rank/
Chui, M., Hazan, E., Roberts, R., Singla, A., Smaje, K., Sukharevsky, A., Yee, L., & Zemmel, R. (2023, June). Economic potential of generative AI. McKinsey & Company. https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-economic-potential-of-generative-ai-the-next-productivity-frontier
Frey, C. B., & Osborne, M. A. (2017). The future of employment: How susceptible are jobs to computerisation? Technological Forecasting and Social Change, 114, 254–280. https://doi.org/10.1016/j.techfore.2016.08.019
IEA. (2025). AI is set to drive surging electricity demand from data centres while offering the potential to transform how the energy sector works. https://www.iea.org/news/ai-is-set-to-drive-surging-electricity-demand-from-data-centres-while-offering-the-potential-to-transform-how-the-energy-sector-works
Jumper, J., Evans, R., Pritzel, A., Green, T., Figurnov, M., Ronneberger, O., Tunyasuvunakool, K., Bates, R., Žídek, A., Potapenko, A., Bridgland, A., Meyer, C., Kohl, S. A. A., Ballard, A. J., Cowie, A., Romera-Paredes, B., Nikolov, S., Jain, R., Adler, J., … Hassabis, D. (2021). Highly accurate protein structure prediction with AlphaFold. Nature, 596(7873), 583–589. https://doi.org/10.1038/s41586-021-03819-2
Li, P., Yang, J., Islam, M. A., & Ren, S. (2025). Making AI less “thirsty”: Uncovering and addressing the secret water footprint of AI models (No. arXiv:2304.03271). arXiv. https://doi.org/10.48550/arXiv.2304.03271
van Maanen, G. (2022). AI ethics, ethics washing, and the need to politicize data ethics. Digital Society, 1(2), 9. https://doi.org/10.1007/s44206-022-00013-3
O’Brien, M., & Fingerhut, H. (2023, September 9). Artificial intelligence technology behind ChatGPT was built in Iowa—With a lot of water. AP News. https://apnews.com/article/chatgpt-gpt4-iowa-ai-water-consumption-microsoft-f551fde98083d17a7e8d904f8be822c4
O’Donnell, J., & Crownhart, C. (2025). We did the math on AI’s energy footprint. Here’s the story you haven’t heard. MIT Technology Review. https://www.technologyreview.com/2025/05/20/1116327/ai-energy-usage-climate-footprint-big-tech/
ProCon Editors. (2025). Artificial intelligence: Pros, cons, debate, arguments, computer science, & technology. In Britannica. https://www.britannica.com/procon/artificial-intelligence-AI-debate
Surasit, N. (2024). Criminal exploitation of deepfakes in South East Asia. Global Initiative. https://globalinitiative.net/analysis/deepfakes-ai-cyber-scam-south-east-asia-organized-crime/
UNESCO. (2024). Challenging systematic prejudices: An investigation into bias against women and girls in large language models. https://unesdoc.unesco.org/ark:/48223/pf0000388971
Wei, X., Kumar, N., & Zhang, H. (2025). Addressing bias in generative AI: Challenges and research opportunities in information management. Information & Management, 62(2), 104103. https://doi.org/10.1016/j.im.2025.104103
WTO. (2025). Making trade and AI work together to the benefit of all (World Trade Report 2025). WTO. https://www.wto.org/english/news_e/news25_e/wtr_15sep25_e.htm

