Student Research

Summer Research Grants

Every summer, students are supported with research grants to explore the intersection between technology and their fields of interest.

Farhana Sarwar
(MALD ’24)

Geospatial Educational Vulnerability Analysis of Children

Farhana is developing a GIS model to assess children’s risk of losing access to education and the external threats that increase their vulnerability in Colombia. The project’s main output is a decision-support tool that underpins efficient, targeted resource allocation to expand access to education.
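As a rough illustration of the kind of weighted-overlay calculation such a GIS model might perform, the sketch below combines hypothetical, pre-aligned raster layers into a single vulnerability score; the layer names, weights, and threshold are assumptions for illustration, not the project’s actual specification.

```python
import numpy as np

# Hypothetical, pre-aligned raster layers on the same grid, scaled to 0-1,
# where higher values mean greater educational vulnerability.
distance_to_school = np.random.rand(100, 100)   # normalized travel distance
conflict_incidents = np.random.rand(100, 100)   # normalized incident density
displacement_rate  = np.random.rand(100, 100)   # normalized displacement

# Illustrative weights; a real model would derive these from expert input or data.
weights = {"school": 0.4, "conflict": 0.35, "displacement": 0.25}

vulnerability = (
    weights["school"] * distance_to_school
    + weights["conflict"] * conflict_incidents
    + weights["displacement"] * displacement_rate
)

# Flag the highest-risk cells (top decile) as candidates for targeted resources.
threshold = np.quantile(vulnerability, 0.9)
priority_cells = vulnerability >= threshold
print(f"{priority_cells.sum()} of {vulnerability.size} cells flagged as priority")
```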

Anushka Shah
(MALD ’24)

Anushka’s research examines the cybersecurity challenges posed by the widespread use of QR codes, scrutinizing their inherent characteristics and their susceptibility to exploitation by malicious actors.
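One reason QR codes are attractive to attackers is that the payload is invisible to the user until it is decoded, so a printed code pointing at a look-alike phishing domain is indistinguishable to the eye from a legitimate one. The sketch below (using the third-party qrcode package with Pillow; the URL and the heuristic check are invented for illustration) generates such a code and applies the kind of naive pre-open check a scanner app might run.

```python
# Illustration only: the encoded payload is opaque until decoded.
# Requires: pip install "qrcode[pil]"
import qrcode
from urllib.parse import urlparse

# A look-alike phishing URL; the printed code looks no different from one
# that encodes the legitimate bank domain.
payload = "https://secure-login.bank-example.verify-account.io/session"
qrcode.make(payload).save("innocuous_looking_code.png")

# A naive heuristic a scanner app might apply before opening the link.
SUSPICIOUS_TLDS = {".io", ".top", ".zip"}
host = urlparse(payload).hostname or ""
if any(host.endswith(tld) for tld in SUSPICIOUS_TLDS) or host.count(".") > 3:
    print(f"Warning: QR payload resolves to a suspicious host: {host}")
```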

Aanchal Manuja
(MALD ’24)

Aanchal developed techniques for extracting data from social media, conducting digital ethnography more effectively, and gauging how disinformation and extremism find hosts and gain popularity online.


Capstones in Technology

Fletcher students complete a capstone project during their final year (in two-year programs) or their final semester (in one-year programs). Below are a few recent capstones that incorporate technology into their research.

Lakshmee Vinayak Sharma
(MALD ’23)

Designing Equitable Civic Tech in the U.S.

This capstone explores the transformative potential of civic technology (civic tech), inspired by the author’s experience during the COVID-19 pandemic in India. It highlights the effectiveness of community-driven, digitally enabled approaches to unlocking civic agency. However, access, ownership, and design remain crucial for equitable reach and impact. Shifting focus to the U.S., the report delves into the market and socioeconomic conditions that systemically limit the development of civic tech for minorities. The primary aim is to uncover barriers that hinder the development of equitable civic tech solutions in the U.S., addressing both demand- and supply-side constraints and challenging assumptions about the digital divide and trust in the so-called Global North.

Dominique Ramsawak
(MALD ’23)

Rewriting History:
Is AI Disinformation’s New Secret Weapon?

This capstone explores the increasing use of Generative Adversarial Networks (GANs) to create highly realistic and convincing AI-generated images, raising concerns about their misuse for disinformation and other malicious purposes. The study focuses on individuals’ ability to differentiate between AI-generated and authentic images, highlighting the challenge of the “Liar’s Dividend”: the phenomenon where even authentic content may be doubted because deepfakes are so prevalent. Key findings include how difficult individuals find it to distinguish AI-generated images from real ones, with only a slightly better identification rate for authentic images. The study underscores the significant impact of AI-generated imagery on public trust in media and political discourse and emphasizes the need for greater education, awareness, and advanced detection technology to mitigate its potential negative social impacts.

Somya Banwari
(MALD ’23)

Mapping Mobile Money Potential in West Africa: A Geospatial Decision-Making Framework for Financial Inclusion via Mobile Money Expansion

This paper presents a geospatial framework for modeling mobile money accessibility to promote financial inclusion in West Africa. It covers mapping key mobile money stakeholders in the region, identifying primary enabling factors (e.g. telecommunications infrastructure), introducing the geospatial framework itself, and offering a policy recommendation to improve regulatory cooperation and enhance financial access data.
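As a rough sketch of one building block such a framework might include, the snippet below joins hypothetical mobile money agent locations to administrative districts and flags districts with few agents per capita; the file names, columns, and access threshold are assumptions for illustration, not the paper’s actual data or method.

```python
# Requires: pip install geopandas
import geopandas as gpd

districts = gpd.read_file("districts.geojson")          # polygons with a "population" column
agents = gpd.read_file("mobile_money_agents.geojson")   # point locations of agents

# Attach each agent to the district it falls in, then count agents per district.
joined = gpd.sjoin(agents, districts, how="inner", predicate="within")
counts = joined.groupby("index_right").size()

districts["agents"] = counts.reindex(districts.index, fill_value=0)
districts["agents_per_10k"] = 10_000 * districts["agents"] / districts["population"]

# Districts below an illustrative access threshold become candidates for expansion.
underserved = districts[districts["agents_per_10k"] < 2]
print(underserved[["agents", "agents_per_10k"]])
```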

Shruti Katiyar
(MIB:QM ’25)

When AI Doesn’t Break the Law But Breaks the System

What the EU’s optimism about AI in finance gets right, and what it quietly misses.

This article explores how AI can weaken financial oversight without violating existing laws. While regulators emphasize human oversight and low-risk use cases, everyday AI tools in areas such as AML subtly reshape judgment, prioritization, and escalation. Over time, this creates automation bias and reduces the depth of human review, even as systems remain fully compliant. The central risk is not autonomous AI, but lawful systems that gradually erode accountability and effective supervision.

Artificial intelligence isn’t an experiment in finance anymore. If you look at the European Parliament’s 2025 report on AI in the financial sector, it’s clear: AI systems are woven into the fabric of back-office operations, from fraud detection and anti-money laundering (AML) to sanctions screening, credit assessment, and customer support. The report sounds reassuring. Deployment has been “prudent,” most use cases are deemed low-risk, and humans are still “in the loop.”

Much of this is true.

But it’s an incomplete picture.

The most serious risk AI poses to financial governance isn’t some sudden, autonomous, unlawful, or malicious event. The deeper danger is far quieter: AI changes how decisions are made, reviewed, and escalated, all while staying completely compliant with the rules we have today.

This is the hidden vulnerability in modern systems, especially in AML.

The Assumption Doing the Most Work

The European Parliament’s analysis is built on a fundamental assumption: that our current financial regulation, with the EU AI Act layered on top, can manage AI risk, provided we preserve human oversight, explainability, and proportionality.

The core issue? These aren’t solid, fixed safeguards.

They are capabilities that slowly, almost invisibly, degrade.

In the real world, this looks like:

  • Humans remain formally “in the loop,” but their attention is increasingly dictated by AI-generated prioritization.
  • Models are explainable in isolation, but their logic collapses when you look at the end-to-end system interacting across multiple vendors, workflows, and teams.
  • Supervisors get documentation, but they simply lack the tools to truly interrogate adaptive or generative systems in real time.

Even the Parliament itself sounds the alarm, noting that supervisory authorities lack “AI-specific expertise and adequate supervisory tools,” and that risks from large language models are “hard to measure,” pointing specifically to phenomena like hallucinations.

This isn’t a problem for the future. It’s a governance mismatch happening right now.

This perspective is grounded in my capstone project at The Fletcher School of Law and Diplomacy, Tufts University, titled “Architecting AML for the Algorithmic Era: Identity, AI Governance, and the Shift to Co-Supervision.” The research analyzed systemic breakdowns in AML enforcement by deconstructing cases like the Bangladesh Bank cyber-heist. Through detailed examination of transactional data, regulatory filings, and incident reports, I found that failures often stem from latency in escalation and from cross-border coordination gaps, vulnerabilities that AI can inadvertently widen by automating prioritization without robust human checks. These findings underscore how “low-risk” AI tools can drive the cognitive offloading and automation bias discussed in this article. My research informs a proposed predictive SAR-prioritization model, benchmarked against FinCEN and EU AMLA frameworks, that aims to improve efficiency while strictly preserving human accountability.
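To make that distinction concrete, here is a deliberately simplified sketch (not the capstone’s actual model) of prioritization that reorders analyst work without ever disposing of a case: the score decides review order, while every alert still ends with a human decision. The features and weights are invented for illustration.

```python
# Toy illustration: the machine reorders work; it never closes a case.
from dataclasses import dataclass

@dataclass
class Alert:
    alert_id: str
    amount_usd: float
    cross_border: bool
    prior_sars_on_party: int

def priority_score(a: Alert) -> float:
    """Illustrative hand-weighted score; a production model would be learned,
    validated, and documented so supervisors can interrogate it."""
    score = min(a.amount_usd / 1_000_000, 1.0) * 0.5
    score += 0.3 if a.cross_border else 0.0
    score += min(a.prior_sars_on_party, 3) / 3 * 0.2
    return score

alerts = [
    Alert("A-1", 950_000, True, 2),
    Alert("A-2", 12_000, False, 0),
    Alert("A-3", 400_000, True, 0),
]

# Ranking decides review order only; disposition stays with the analyst.
for a in sorted(alerts, key=priority_score, reverse=True):
    print(a.alert_id, round(priority_score(a), 2), "-> queued for analyst review")
```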

Why Automation Bias Matters More Than Autonomy

Regulators are primarily focused on preventing fully autonomous AI decision-making. But evidence from other high-stakes domains shows that the real harm emerges from automation bias, not autonomy.

Think about healthcare. Studies repeatedly show that human judgment weakens once AI advice is introduced, even when that advice is wrong.

In one well-known study, clinicians overrode their own correct medical judgments in 6% of cases, dropping the right diagnosis in favor of an erroneous AI recommendation. In another, less experienced radiologists saw their cancer-detection accuracy plunge from nearly 80% to just 22% when an AI suggested an incorrect result.

More broadly, researchers have documented a negative correlation between frequent AI tool usage and critical thinking, driven by cognitive offloading: the slow, gradual transfer of analytical effort from the human to the machine.

In these instances, nothing illegal happened. The humans were present. Oversight, technically, existed. Yet, decision quality deteriorated dramatically.

The Parallel in AML and Financial Crime Control

AML systems depend on people making fast judgments under uncertainty, noticing weak signals, questioning rankings, and pushing edge cases up the chain.

AI is meant to help: it reduces false positives, detects networks, and prioritizes alerts efficiently.

But these exact mechanisms are also quietly reshaping outcomes.

The “low-risk” AI tools (alert triage systems, document summarization, LLM-assisted investigations) are determining:

  • Which cases get surfaced first.
  • Which alerts are delayed.
  • Which risks are silently deprioritized.

Nothing visibly breaks. No rule is violated. A human still signs off. Yet, latency increases, rare risks are suppressed, and accountability becomes diffused across systems and vendors.
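A toy simulation makes the dynamic concrete: a compliant triage model plus fixed analyst capacity quietly delays whatever the model scores low, even though a human eventually reviews every alert. All numbers below are invented for illustration.

```python
# Toy simulation: lawful triage plus fixed review capacity delays low-scored alerts.
import random

random.seed(0)
DAILY_CAPACITY = 50   # alerts the analyst team can review per day
DAYS = 11

# 500 routine alerts plus a handful of "rare pattern" alerts the model
# underweights because it has seen few examples of them.
alerts = [{"kind": "routine", "score": random.uniform(0.4, 0.9)} for _ in range(500)]
alerts += [{"kind": "rare", "score": random.uniform(0.1, 0.3)} for _ in range(5)]

queue = sorted(alerts, key=lambda a: a["score"], reverse=True)
reviewed_day = {}
for day in range(1, DAYS + 1):
    batch, queue = queue[:DAILY_CAPACITY], queue[DAILY_CAPACITY:]
    for a in batch:
        reviewed_day.setdefault(a["kind"], []).append(day)

for kind, days in reviewed_day.items():
    print(kind, "first reviewed on day", min(days), "and last on day", max(days))
print(len(queue), "alerts still unreviewed after", DAYS, "days")
# Every alert is reviewed by a human and no rule is broken, yet the rare
# pattern waits until the final day: latency, not illegality.
```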

The failure is systemic, not technical.

From Compliance to Negligence without Illegality

This brings us to the unavoidable concept of algorithmic negligence.

Here, negligence doesn’t mean a bug in the model or a rogue deployment. It means continuing to rely on AI systems despite mounting evidence that:

  1. Human review has become procedural rather than substantive.
  2. Explainability collapses at the system level.
  3. Supervisors cannot realistically reproduce or stress-test outcomes.

The foreseeability is already documented—in regulatory reports, supervisory admissions, and empirical research on automation bias.

When the harm eventually appears (missed laundering networks, delayed intervention, biased exclusion), the defense that “no rule was broken” will feel empty.

The law was followed. The system still failed.

The Real Question AI Forces Regulators to Answer

The question is no longer whether AI should be used in financial services; that battle is over.

The question is whether governance frameworks are designed to handle systems that remain lawful while quietly hollowing out human judgment itself.

AI doesn’t need to break the law to undermine enforcement. It only needs to erode the assumptions the law relies on: meaningful oversight, timely escalation, and accountable decision-making.

That is where the next generation of financial regulation will either adapt or fail.