
How Companies Can Mitigate the Harms of AI-Driven Inequality

By Bhaskar Chakravorti, Dean of Global Business at the Fletcher School

After more than two years of build-up and buzz, headlines are still promising that AI is introducing societal changes of revolutionary proportions. Meanwhile, there’s persistent unease about the socioeconomic fragility the technology could cause. Nobel laureates, such as MIT’s Daron Acemoglu, are worried about its capacity to worsen income inequalities, and ordinary American workers are anxious about AI’s impact on jobs. In fact, trust in AI has been declining, despite improvements in its performance.

A major source of the unease with AI is a systemic phenomenon I call “artificial inequality,” in which the advance of AI makes societies’ existing inequities even worse. It does so by concentrating socioeconomic opportunities and outcomes within narrow societal segments while depriving others.

Unfortunately, artificial inequality is complex and can be hard to fix. In my research on the impact of AI across a range of societal issues—jobs that are vulnerable to disruption, AI’s climate impact, and regions that might benefit most from AI development—I have found six distinct “divides” that contribute to artificial inequality: data, income, usage, geography, industry, and energy. These divides often reinforce each other. For example, someone who is more likely to be harmed by biases in data may also be less likely to benefit from AI productivity tools and disproportionately affected by higher energy costs.

Right now, the most natural solution—regulation and policy intervention—will likely be deprioritized. In the U.S., the world’s AI leader, the Trump administration is expected to deregulate and set aside guardrails on AI. Even the EU, which has a strong regulatory framework in place, is sending signals of prioritizing “action” and “opportunity” over safeguarding users.

The positive news is that company leaders can act to mitigate the risks of artificial inequality across the six divides. Artificial inequality can be detrimental to both producers and adopters of AI, and companies need to take action where they can.

There are three levers business leaders can pull:

  • Technologies: While new tools can create new problems, they can also help solve them. Companies need to know how their technology works and where it can fail, and what new tools can help them achieve their goals responsibly.
  • Institutions: Companies don’t need to do this work alone. They should look for third-party organizations that can act as partners, and external practices and frameworks that can help them learn and adapt.
  • Markets: While most companies can’t create or shift markets, they can read signals about user demand. Knowing what to look for in this emerging paradigm can help them find the right solutions and business models.

With multiple divides contributing to artificial inequality, businesses may have limited abilities to address the whole problem. But by considering each divide individually, they may see specific places where they have more power to intervene.

Let’s look at each in turn.

1. Data Divide

AI combines mathematics with datasets. While math may not discriminate, datasets do—they often include biases in the form of incomplete information and even falsehoods. The data divide can have devastating consequences. For instance, algorithm-aided chest X-ray classifiers systematically underdiagnose patients of color and female patients. And when algorithms were used to evaluate mortgage applications, lenders in Chicago were 150% more likely to reject Black applicants relative to similar white applicants. In Waco, Texas, that number was even higher, at 200% for Latino applicants.

In the current U.S. political climate, guardrails against AI bias are unlikely to be a regulatory priority. Here’s how company leaders can take action and proceed responsibly:

Technologies

A primary concern for AI developers and companies here is imbalances in data that could create harm. Developers should consider: 1) seeking out datasets that are representative of the populations they seek to serve, 2) checking the distribution of potential sources of bias and alternative demographic attributes in the training data, and 3) using metrics such as disparate impact to measure outcome disparity between groups, or equalized odds to ensure that a model’s predictions are equally accurate across different protected groups.

Companies can also turn to available tools, such as IBM’s AI Fairness 360 Toolkit or the open-source Fairlearn library, as they pursue such practices. A minimal sketch of what such an audit can look like follows.
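To make this concrete, here is a minimal sketch of a fairness audit using Fairlearn’s metrics; the labels, predictions, and group assignments below are illustrative stand-ins, not data from any real system:

```python
# A minimal fairness audit, assuming the open-source Fairlearn package
# (pip install fairlearn). All data below is illustrative.
import numpy as np
from fairlearn.metrics import demographic_parity_ratio, equalized_odds_difference

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])  # ground-truth outcomes
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 0, 0])  # model decisions
group = np.array(["a"] * 5 + ["b"] * 5)            # protected attribute

# Disparate impact: ratio of selection rates between groups. A common
# rule of thumb flags ratios below 0.8 (the "four-fifths rule").
di = demographic_parity_ratio(y_true, y_pred, sensitive_features=group)

# Equalized odds: worst-case gap in true/false positive rates across
# groups. Values near 0 mean errors fall evenly on both groups.
eo = equalized_odds_difference(y_true, y_pred, sensitive_features=group)

print(f"disparate impact ratio: {di:.2f}")    # ~0.33 here: flagged
print(f"equalized odds difference: {eo:.2f}")
```

A disparate impact ratio this far below 0.8 would be a signal to revisit the training data and the model before shipping.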

Institutions

Companies need to build new practices and routines around how employees work with AI, with extra emphasis on catching biases that contribute to the data divide. This can include red-teaming and scenario analysis; performing regular audits; and developing “bias impact statements” where teams self-regulate and probe biases in their algorithmic decisions.

Several independent organizations, such as Partnership on AI and the Algorithmic Justice League—with training tools and advocacy for awareness-building—can act as facilitators in making such practices commonplace.

Markets

Right now, there are intense pressures on companies to bring AI—and AI-powered—products to market. Companies often focus on how tests or de-biasing practices can slow down that process, but they need to consider the risk of losing market share as customers grow more aware and demand for de-biased products becomes the norm. A study of 350 companies revealed that 36% of them have already suffered commercial losses due to AI bias. Meanwhile, others are differentiating their products as being less biased, such as SAP, which declared that bias is “bad business.”

2. Income Divide

AI’s adoption is projected to increase the productivity of certain workers while making others’ work redundant—a shift poised to accelerate income inequalities. Half of Americans are concerned that AI will lead to greater income inequality, and it’s easy to see why: The IMF projects that almost 40% of jobs worldwide will be affected by the technology. More conservatively, MIT’s Acemoglu expects 5% of all tasks will be profitably performed by AI in the next decade. People in occupations exposed to AI with roles that are highly substitutable will experience displacement and income loss, while occupations and roles complementary to AI can expect productivity gains and income boosts.

Company leaders should consider several options to mitigate the societal effects of this divide:

Technologies

Companies can work to address this gap by investing in their workers’ skills—particularly workers who are less experienced or have less training. Making AI tools and training widely accessible could raise earnings potential, especially for those who are lower performing and earn less than their peers. In addition, many smaller companies can invest in tools of their own, such as business analytics platforms, that can help them reduce costs and better compete with larger rivals.

Institutions

Partnerships can help close the skill gap. For example, AI4ALL offers hands-on experience in AI tools to a wide cross-section of users, Charity Excellence provides free AI tools for nonprofits, and the ITU’s AI Skills Coalition targets similar goals. Large AI producers have initiatives of their own: Google’s AI Opportunity Fund, Microsoft’s AI skills training for nonprofits, IBM’s AI “democratization” efforts, and Mastercard’s competitions and awards all highlight the use of AI to accelerate inclusion and put a spotlight on how AI tools can narrow income gaps.

Markets

As AI tools that narrow income gaps are adopted more widely, companies will have access to a wider pool of qualified workers. And as smaller enterprises become more cost-effective, the market will expand through greater competitiveness in the workforce and among companies. In light of this, companies have an incentive to apply such tools—and ought to do so more consistently—to establish and sustain business models that can hold their own against the competition.

3. Usage Divide

AI adoption has been uneven. In the U.S., for example, people with more education and higher income are more likely to trust and use AI tools, and its use is concentrated geographically in a few “superstar” cities and emerging hubs. The growing distrust in AI suggests that this divide will likely grow. And while it isn’t quite clear how AI use will change jobs, it’s fair to assume that people who use it will be better positioned to navigate the coming change.

Distrust is driven primarily by concerns about authenticity, the reliability of AI-generated information, and social and environmental impacts. Company leaders aiming to close the AI usage gap can consider several levers to build trust:

Technologies

Companies can invest in technologies that improve AI trustworthiness. For example, there are tools that can enhance the volume and variety of training data (e.g., data augmentation techniques using TensorFlow and Keras; a minimal sketch follows), build in feedback loops (e.g., C3AI’s Reliability application), and monitor and test advanced AI architectures. Other tools, such as a diagnostic tool used in nuclear energy generation, can help incorporate expert systems into machine learning and neural networks. Organizations can also consider pairing other technologies with AI to improve their quality: IoT sensors, for example, can conduct real-time monitoring of systems and yield data that helps the algorithms learn and adapt.
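As one illustration of the first category, here is a minimal sketch of image data augmentation with Keras preprocessing layers in TensorFlow; the dataset and parameter values are illustrative choices, not recommendations:

```python
# A minimal data-augmentation sketch, assuming TensorFlow 2.x
# (pip install tensorflow). Dataset and parameters are illustrative.
import tensorflow as tf

augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),  # mirror images left-right
    tf.keras.layers.RandomRotation(0.1),       # rotate up to ±10% of a full turn
    tf.keras.layers.RandomZoom(0.2),           # zoom in or out by up to 20%
])

# Apply the augmentation on the fly, so each training epoch sees
# slightly different variants of the same underlying images.
(x_train, y_train), _ = tf.keras.datasets.cifar10.load_data()
ds = (tf.data.Dataset.from_tensor_slices((x_train, y_train))
      .batch(32)
      .map(lambda x, y: (augment(x, training=True), y)))
```

Enlarging the effective variety of training data this way is one inexpensive lever for improving a model’s robustness, which in turn supports the reliability that trust depends on.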

Institutions

Several organizations are working to instill trust-building in AI and can offer guidance on how to do so. These include academic institutions, such as the Institute for Ethics in AI at Oxford; non-profits, such as the Partnership on AI; and initiatives funded by multiple philanthropies, along with scorecards, such as the Stanford Center for Research on Foundation Models’ transparency scores. “AI interpretability” studies in healthcare and training from AI developers or intergovernmental bodies are also key to ensuring the trustworthiness of AI. Employers can also train workers in detecting unreliable AI-generated information.

Markets

Companies that invest in trust-building tools will find that trust enhances demand. A study of AI voice assistants found that trust has a positive effect on adoption, and trust was also key to users’ willingness to persist with chatbots for health services. Customers are twice as likely to engage with trustworthy AI, and workers are two-and-a-half times more likely to use employers’ AI tools at work if they trust them.

Many companies are already differentiating themselves on AI trustworthiness: Microsoft emphasizes safety, security, and privacy, while Salesforce promises “humans at the helm” to engender trust. Conversely, untrustworthiness comes at a cost: for instance, Zillow experimented with buying homes based on its AI valuation model, only to shut the project down at a $300 million write-down eight months later—a failure that led the company to lay off 25% of its staff and undermined confidence in its AI price estimates.

4. Global Divide

According to IMF research, AI’s productivity and income benefits will likely be skewed in favor of high-income nations. Sixty percent of jobs may be exposed to AI in these nations, compared to 40% and 26%, respectively, in emerging market economies and in low-income countries. As AI’s adoption is projected to enhance GDP growth and productivity overall, it is expected to do so in proportion to the exposure in any given country; as the IMF anticipates, this raises “the risk that over time the technology could worsen inequality among nations.” U.S. policies that limit access to advanced chips for many parts of the world, if continued, will likely exacerbate these divisions.

This splintering of markets worldwide limits the technology’s potential, raising entry barriers and costs. Company leaders can consider several mitigating actions:

Technologies

Companies—especially those outside of the U.S.—can hedge against restrictions and being tied to a single ecosystem by adopting widely accessible, open-source AI initiatives, including “open-weight” ones, where trained parameters are publicly available but the training code and datasets are not. The Chinese lab DeepSeek has demonstrated that such open-source AI models can perform almost as well as the best proprietary AI models, and do so at a fraction of the costs and resources.

Companies, particularly those in the developing world, can also use expanding access to AI tools to develop “small AI” innovations targeting specific sectors and enhancing them with small injections of relevant information. For example, the app Plantix helps smallholder farmers identify crop-destroying pests and treat them.

Institutions

Open-source practices are key to expanding global access to AI tools, and there are many organizations that can help companies make use of them. Consider the International Computation and AI Network, which promotes AI access worldwide, and AI for Good, which focuses on solving global challenges.

Markets

Open-source tools and applications create more opportunities for companies across the world to leverage AI. The commercial benefits can be significant, as the democratization of the technology can not only solve local problems but also generate demand in multiple markets. Consider the examples of AI-aided disease identification in plants from Africa, telehealth services for expectant mothers from India, AI-aided healthcare for diabetics developed in Mexico, and forest monitoring systems from Brazil—all with wider global revenue potential.

5. Industry Divide

The AI value chain, thus far, has been dominated by a handful of companies. Moreover, these companies spend money on each other through product or channel exclusivity, reinforcing concentration and effectively locking out new entrants. For example, Meta buys cloud services from Amazon to power its AI ambitions and major AI developers use Nvidia’s highest-performing chips to stay ahead in the race. These practices skew investments towards those most commercially attractive to a few dominant companies—and leave companies buying AI services feeling locked in by a handful of tech giants that can dictate prices and terms.

Companies are keen to try out AI tools but are not yet willing to pay the prices commanded by commercial leaders. They can consider cheaper alternatives that offer additional options without compromising performance for most uses.

Technologies

Similar to hedging against splintering global AI ecosystems, companies can consider open-source AI, such as DeepSeek and other AI companies from China, or non-Chinese players, such as Meta (U.S.), Cohere (Canada), or Mistral (France). In addition, they can derive benefits from small language models trained on narrower datasets for customized applications, or “edge AI” tools that store and process data on or near interconnected devices at the network edge, bypassing the industry’s concentration points. A brief sketch of working with a small open-weight model appears below.
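As a sketch of what adopting such an alternative can look like in practice, the snippet below runs a small open-weight language model locally via the Hugging Face transformers library; the model name is an assumed example, and any small open-weight model suited to the task, license needs, and hardware would serve:

```python
# A minimal sketch of running a small open-weight model locally,
# assuming the transformers library (pip install transformers torch).
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="HuggingFaceTB/SmolLM2-1.7B-Instruct",  # illustrative small open model
)

prompt = "List three risks of vendor lock-in for AI services:"
result = generator(prompt, max_new_tokens=80)
print(result[0]["generated_text"])
```

Because the weights run locally, there is no per-call fee and no dependence on a single vendor’s pricing or terms.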

Institutions

Companies can also source products from organizations building “digital public good” AI models—publicly accessible rails on which applications are built. For instance, the Digital Public Goods Alliance, a multi-stakeholder initiative, promotes open-source software, open data, open AI models, and open content collections that adhere to privacy standards. To accommodate commercial considerations, it also promotes a “tiered openness” model, which allows varying levels of access to different AI components.

Markets

Companies can benefit from the AI industry becoming more competitive and from the growth of specialized AI applications tailored to different needs. These could include “small” AI companies and models that target much-needed interventions in low-productivity sectors in the developing world, such as agriculture or education, and unlock a disproportionate amount of value by solving for an unmet need.

These developments must also address AI’s trust problem, however. While open-source AI facilitates competitiveness, it can also introduce new security vulnerabilities. Trust-building investments that also keep high-performing AI accessible will be key marketplace differentiators; companies can consider tools such as Dependabot, Renovate, or Snyk to check open-source components for known security vulnerabilities.

6. Energy Divide

AI has huge energy and water demands: by 2026, data center energy consumption is expected to grow by 35% to 128%. Even with the anticipated investments in new energy infrastructure, demand will remain ahead of supply. While AI can be used to make smarter use of energy, it is more likely to contribute to energy poverty, as smart energy systems are concentrated in richer areas and over a billion people live without access to affordable energy. Moreover, many companies risk missing their own net-zero goals, as investments in AI have thrown them off course. To address these issues, company leaders—particularly at AI companies or those with extensive AI needs—can consider several mitigation options:

Technologies

There are a number of innovations that can make AI use more efficient. If you run your own data centers, AI-driven controls like Google’s DeepMind system can cut cooling costs by up to 40%. Also, Nvidia is developing more efficient GPUs that can deliver up to 30 times the performance while using 25 times less energy.

You can also explore novel approaches to hardware design, such as locating memory inside computing cores, which can cut energy dissipation by shortening data travel distances, or deploying devices mimicking brain functions, which have been shown to use 1,000 times less energy than current standards. Other models have experimented with running on low-powered microcontrollers. New components, such as photonic accelerators, 3D chips, and new chip cooling techniques, can deliver computing power with less energy usage.

There are also efficiencies to be found in model design. DeepSeek, for instance, demonstrated how it could save on compute and energy resources by deploying a “mixture of experts” technique that splits the AI’s neural networks into different categories and activates only the relevant ones for a given task (a minimal sketch of the routing idea follows). The company also used other creative approaches, such as lopping off decimal places on numbers used in calculations (that is, computing at reduced numerical precision) without noticeable losses in performance.
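To illustrate the routing idea, here is a minimal NumPy sketch of a mixture-of-experts layer; the sizes and the top-2 routing rule are illustrative assumptions, not DeepSeek’s actual architecture:

```python
# A minimal mixture-of-experts sketch in NumPy. A router scores each
# input against every expert but activates only the top-k, so most of
# the model's parameters do no work on any one request.
import numpy as np

rng = np.random.default_rng(0)
d, n_experts, top_k = 16, 8, 2  # illustrative sizes

experts = [rng.standard_normal((d, d)) for _ in range(n_experts)]  # expert weights
router = rng.standard_normal((d, n_experts))                       # routing weights

def moe_forward(x):
    scores = x @ router                   # how well each expert suits this input
    chosen = np.argsort(scores)[-top_k:]  # keep only the top-k experts
    weights = np.exp(scores[chosen])
    weights /= weights.sum()              # softmax over the chosen experts
    # Only the chosen experts compute; skipping the rest is where the
    # compute and energy savings come from.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, chosen))

print(moe_forward(rng.standard_normal(d)).shape)  # (16,)
```

With 2 of 8 experts active per input, roughly three-quarters of the expert parameters sit idle on each forward pass, which is the source of the compute savings the technique is known for.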

Institutions

Given the significance of AI’s energy demands, many organizations are working on this problem and offer training programs to develop expertise in AI and energy conservation. For instance, the Cornell AI for Sustainability Initiative is leading research, innovation, and education on how to manage AI’s energy use, as well as how AI can help us use energy more efficiently.

Markets

Companies benefit from the fact that energy efficiencies contribute to overall cost and resource efficiencies and enhance their competitiveness. AI producers are making data center location decisions by factoring in the environmental impact along with other criteria, such as cost, technical issues, and proximity to users. Innovations in energy pricing, using predictive analytics or pay-as-you-go systems, are making energy access more affordable and can help grow demand. While the energy-efficiency benefits of open-source models are still debatable, according to one analysis, DeepSeek requires 11 times less computing resources than a comparable model from Meta. According to the company, this corresponds to 10 to 40 times less energy than similar U.S. models—and is thereby cheaper to run. Such changes in the competitive playing field, and the energy-saving technologies available, put pressure on all AI companies to increase their energy efficiencies.

• • •

Compensating for these six divides is a business imperative, especially at a moment when governments and regulators are less likely to step in. This means company leaders—both AI producers and adopters—must serve critical unmet needs of users by pulling on alternative levers. Fortunately, businesses have options to counter artificial inequality as long as they focus on technologies, institutions, and markets.

AI acceleration does not have to translate into a more fragile, divided world. Striking the right balance can facilitate AI’s wider adoption and help realize its many revolutionary promises.

(This post is republished from Harvard Business Review.)
