October 21, 2024
Finding security in digital public infrastructure
Digital public infrastructure (DPI) has evolved as a term used to describe everything from state-run digital payment systems to national cloud and data-exchange platforms to comprehensive backups of public documents and societal information. There is no single, cohesive, standard approach to digital public infrastructure—and examples range from Kenya to India to Ukraine—but DPI efforts share three features: state involvement in the creation or operation of key digital platforms, intended country-wide use, and significant impacts on digital trust, privacy, and cybersecurity.
This issue brief examines the potential opportunities and risks of DPI across digital trust, data privacy, and cybersecurity and resilience. As part of a working group, academic, civil society, and industry experts from the United States and South Asia explored these questions as they relate to DPI payment, public service delivery, data backup, cloud, and other projects and proposals—with an eye toward the biggest unresolved public policy, legal, and technological questions associated with state development and guidance of these systems. The working group’s virtual convenings were held under the Chatham House Rule.
The group discussion’s major considerations, themes, and recommendations are meant to provide an overview of the issue and are described below across digital trust, data privacy, and cybersecurity and resilience. The group focused on these issues for several reasons, including their pertinence to advancing DPI with respect for human rights (e.g., protecting and respecting privacy); their visibility in the international discussion on responsible technology practices (e.g., accountability to the public for digital tech); their centrality in many national and subnational laws and regulations on technology best practices (e.g., data-protection regimes); their necessity to DPI safety and security (e.g., using encryption, creating backups); and their underprioritization in many DPI circles that would be better served by a deeper understanding of digital trust, data privacy, and cybersecurity and resilience issues. Approaching these issues with a dual policy and technology lens will hopefully help decision-makers eliminate or mitigate some of DPI’s most serious risks—while maximizing public-interest, rights-centered opportunities for societies around the world.
Digital trust
DPI does not exist in a vacuum. When assessing whether to trust a country’s DPI projects, citizens, domestic and foreign companies, and even other governments (among other interested parties) must have trust in technological mechanisms themselves, in the surrounding political and economic environment, in the country’s policymaking and lawmaking (both the substance and the process), and in “trust proxies” that can attest to DPI projects and hold them accountable.
- Technological mechanisms for trust, discussed more below, could include potentially publishing code or using open-source code, creating systems for independent third-party privacy and cybersecurity audits, requiring transparency in state procurement and public-private partnership agreements, incentivizing business models and design practices geared toward trust, and clearly explaining how, when, and why certain kinds of data are collected, analyzed, stored, and shared (see the sketch after this list).
- The broader trust environment depends on factors such as the perceived legitimacy of the government in power, privacy and cybersecurity laws and regulations, and meaningful transparency—wherein not everything is necessarily public, but where government agencies and companies make available as much information as possible before systems are deployed. Citizens, domestic and foreign companies, and other governments should also consider a country’s rule of law, judicial independence, checks and balances, and whether the government engages in meaningful multistakeholder consultations before developing a project or associated laws and regulations. These considerations will also inform how prone a state might be to abuse of a DPI system, such as to manipulate markets or intrusively collect personal data.
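To make the last of those mechanisms concrete, here is a minimal sketch (in Python, purely illustrative) of what a machine-readable data-collection disclosure might look like; the data categories, purposes, and retention periods are hypothetical rather than drawn from any actual DPI project.

```python
from dataclasses import dataclass, field

@dataclass
class DataCollectionDisclosure:
    """One published entry describing a category of data a DPI system handles."""
    data_category: str                               # what is collected
    purpose: str                                     # why it is collected
    retention_days: int                              # how long it is kept
    shared_with: list = field(default_factory=list)  # who else receives it

# Hypothetical disclosures a payments project might publish before launch.
disclosures = [
    DataCollectionDisclosure("phone number", "account registration", 365),
    DataCollectionDisclosure("transaction amount", "payment processing", 2555,
                             shared_with=["receiving bank"]),
]

for d in disclosures:
    recipients = ", ".join(d.shared_with) if d.shared_with else "no one"
    print(f"{d.data_category}: purpose={d.purpose}, "
          f"retention={d.retention_days} days, shared with {recipients}")
```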
The timeline of rolling out DPI projects matters greatly for building trust. In India, for example, what is now being portrayed as a “unified stack” of DPI technology was, in fact, a series of products and services rolled out on top of one another. Citizens first saw the value in certain payment projects when they could see public benefit payments in their bank accounts; now, the Indian government’s articulated vision of a comprehensive Unified Payments Interface (UPI) builds on that foundation of trust. Simultaneously, this example demonstrates a considerable trust gap, because the Indian government has stated broad plans for DPI that raise many more questions about digital trust, privacy, and cybersecurity than those raised by government subsidy disbursement.
Digital trust questions about DPI in India today also circle back to the importance of the rule of law, transparency, and checks and balances, some of which are eroding under the current government. If citizens cannot trust their government to follow laws and regulations across areas ranging from corruption to respect for a free press, that lack of trust completely undercuts their ability to trust the government to follow its own governance mechanisms for DPI. Similarly, private-sector companies are going to be more suspicious of governments that violate the law and do not respect due process, because this increases the likelihood that they can be coerced and reduces their ability to seek recourse in the event of a dispute. For example, civil society has made the important recommendation that a digital identity DPI system capture the minimum data necessary to function, employ end-to-end encryption, and not collect location and other data for user verification. But implementing these recommendations in practice depends on the Indian government supporting robust encryption in law and policy—not mandating encryption “backdoors” for law-enforcement and national security purposes—and having legal and accountability measures to prevent unnecessary, hidden collection of personal data. As discussed more below, these protections and structures are not entirely in place. India’s Digital Personal Data Protection Act 2023 implemented a variety of requirements for private-sector companies processing personal data—with many mirroring the European Union’s General Data Protection Regulation (GDPR)—but has large and problematic carveouts for government surveillance and data-use activities, including judicial functions and “prevention” or “investigation” of any crime. These gaps in trust matter when it comes to convincing companies—especially non-Indian companies—to buy into systems like UPI without skepticism about the ways the state can access the data, including proprietary company data and network data.
Ukraine’s Diia, a mobile app linking more than 19 million Ukrainians with more than 120 government services, is another instructive example of the importance of sequencing for building digital trust. Diia was developed in pieces as the Russian war on Ukraine evolved and Ukraine’s needs evolved with it, such as enabling the government to instantaneously deposit funds in citizens’ bank accounts in response to war-related damage complaints. But Diia goes well beyond payments and exemplifies the potential expansiveness of DPI. The Diia app portal allows Ukrainians to access identification cards, foreign biometric passports, student cards, driver’s licenses, vehicle registration certificates, and much more. It also allows them to access grants for veterans and their families, apply for grants for businesses, document property losses due to the war, apply for greenhouse grants, process unemployment benefits, process marriages and divorces, and get COVID-19 vaccine certificates, among many other options. Ukraine’s goal is to ultimately make 100 percent of public services available online.
Diia’s current services catalogue
For citizens: References and extracts (thirteen services), transport (three services), environment (three services), land, construction, and real estate (twenty-one services), security and law and order (one service), licenses and permits (six services), family (twelve services), health (seven services), entrepreneurship (twenty-six services), and pensions, benefits, and assistance (nineteen services).
For businesses: Land, construction, and real estate (nineteen services), medicine and pharmaceuticals (three services), licenses and permits (twelve services), extracts and certificates (six services), transport (two services), creating a business (nineteen services), booking (two services), and “action city” (three services related to resident status).
Rolling out a program like Diia step by step does not automatically absolve governments of other important digital trust questions. For example, Diia is not open source, does not have the most transparent privacy guardrails, and has evolved considerably in scope since its initial concept. The Diia website describes security measures such as encryption and privacy measures such as collecting minimal data, but it is light on details. Still, media reports about the system’s cybersecurity and the global partnerships involved with protecting the system underscore that Diia remains trusted amid the full-scale war Russia launched in 2022. (Of course, there might be many strategic, operational, and tactical reasons why Ukraine is not publishing more information about the system itself—primarily Russia’s concerted efforts to infiltrate Ukrainian digital systems to exfiltrate information, poison data, or disrupt or degrade systems entirely. Ukraine’s actions might very well be intentional, boosting security through obscurity.) Delivering tangible benefits to citizens in pieces is a potentially powerful way to build trust, and doing so in the context of a country under attack impacts the trust citizens are willing to place in government systems.
Data privacy
Privacy is often portrayed in the DPI context as a binary. In this characterization, DPI either affords more privacy or less privacy than alternative digital infrastructure. In reality, privacy in the DPI context is about the protection of people’s information and people’s ability to have autonomy over the disclosure and use of their information in different contexts—not a sliding scale of “more” or “less” privacy with DPI per se. Privacy is contextual: privacy from whom, over which data, in which settings. Citizens can have concerns about private companies dominating digital ecosystems and harnessing data for targeted advertising purposes. They can simultaneously have different, equally valid privacy concerns about the government building and managing all digital systems for key services. All of this matters because DPI systems typically collect and produce lots of data, and many have digital identity as a significant or core feature. Governments should not implement systematic identity schemes without a corresponding, systematic privacy scheme for first-party, third-party, and derived data because of the threats to privacy, human rights, and freedoms they would pose, as well as the systemic risks to governance and public trust.
Context is critical to understanding the many data privacy questions at play with DPI, including which entities get access to which kinds of data for which purposes; how the data are stored, analyzed, transferred, and shared; how long the data are kept; and what other data and metadata are available to organizations operating in a DPI ecosystem. It is not just about the data that are gathered and described in a service’s terms of service. Metadata, or data about data, provide remarkable insight into individual and group behavior. Data holders can also use data to derive or infer additional information about individuals, such as deriving family information from housing records, attempting to predict financial status and income from education information and home neighborhood, and using geolocation to derive information about religious practices, political interests, and health conditions.
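A toy example illustrates how little code it takes to derive sensitive attributes from routine metadata. The Python sketch below, with invented location pings, flags recurring visits by place category; in practice, patterns like these support inferences about religion, health, and politics.

```python
from collections import Counter
from datetime import datetime

# Invented location pings: (timestamp, place category) pairs of the kind a
# participant in a DPI ecosystem might hold as routine metadata.
pings = [
    (datetime(2024, 3, 1, 9, 5),    "place_of_worship"),
    (datetime(2024, 3, 8, 9, 10),   "place_of_worship"),
    (datetime(2024, 3, 15, 9, 2),   "place_of_worship"),
    (datetime(2024, 3, 4, 18, 30),  "clinic"),
    (datetime(2024, 3, 18, 18, 45), "clinic"),
]

# Count repeat visits by category; recurring visits are the raw material
# for inferences about religious practice or health conditions.
visits = Counter(category for _, category in pings)
for category, count in visits.items():
    if count >= 2:
        print(f"repeat pattern: {count} visits to {category}")
```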
This combination of data collection and data derivation, or inference, prompts privacy questions within government itself: which government agencies have access to which data, how, and why. For example, if a public benefits agency has a DPI project that gives it access to a wealth of data and potentially derived or inferred data, does or can it share the data with a law-enforcement agency? Governments might instinctively want the data shared for actual or alleged security reasons, but that creates substantial privacy harms. The creation and derivation of data through DPI projects also raises privacy questions about the nongovernmental entities that get or could get access to DPI-related data. For example, if a company is contracted to build the underlying operating system for a digital public services program, does or could it receive data about users who register, the information users enter into forms, or metadata about system usage? As governments increasingly purchase or acquire commercial data—and look to use machine learning (ML) and artificial intelligence (AI) models, such as in DPI projects—these questions of data collection, generation, and derivation will become essential components of evaluating privacy risks and identifying necessary responses.
Some countries have comprehensive data privacy regimes that can provide a foundation for approaching data privacy protections around DPI. Kenya, for example, enshrines privacy as a fundamental right in its constitution and passed a comprehensive data protection law in 2019. As Kenya moves to expand digital payments infrastructure, the regulations around data collection, consent, security of data, disclosure of data, retention of data, accuracy of data, governance of data, and much more will apply to many of the companies working on DPI in the country. Yet even a comprehensive law leaves open critical questions about what governments do vis-à-vis privacy and DPI. If a privacy regulator is set up to police private-sector practices, what are the governance and accountability mechanisms in place for public-sector actors collecting, storing, analyzing, using, and sharing data?
However, not all countries have these laws. Some countries’ data privacy laws have significant gaps that are especially consequential in the DPI context (e.g., consumer protection-focused privacy laws that exempt state uses of data), and some countries working on DPI projects might choose to pursue or create exemptions for government and government-led DPI activities. India’s new, landmark data privacy law introduces many protections for citizens against company use of data but, in this vein, also has broad exemptions for state collection and use of data, creating surveillance risks and exacerbating the privacy concerns emanating from state-led efforts to undermine virtual private network (VPN) privacy, coercive police raids of social media company offices, and other actions. Robust judicial oversight—as recommended by a government committee report accompanying an early version of the bill—could be one way to mitigate some of these concerns and contribute to boosting trust. There are also robust international discussions on data privacy—think of the Organisation for Economic Co-operation and Development (OECD) principles for government access to private-sector-held data for national security purposes—but those discussions are focused through just that, a national security lens. DPI projects require a more comprehensive privacy approach than many laws take, one that will encompass the activities of public and private organizations working both separately and together.
Absent or alongside laws and regulations, there is also space for companies and civil society to identify and promote privacy best practices with DPI. The OECD’s privacy principles could be one example (even as the above-referenced, recent discussions have a national security focus). Developed in 1980 and updated in 2013, the principles include collection limitation, data quality, purpose specification, and use limitation. These principles could be integrated into Ukraine’s Diia, such as by more clearly describing the purpose for collecting each and every kind of data involved with Diia services, or into India’s DPI stack, such as by developing strict technical and policy controls to limit data use. Overall, though, there are fewer widely adopted privacy standards than cybersecurity ones (discussed more below). This creates more space for governments, companies, and civil society to develop DPI-tailored privacy principles.
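As one hedged illustration, the Python sketch below enforces purpose specification and use limitation by gating every data access on a purpose declared at collection time; the field names and purposes are hypothetical.

```python
# Purposes registered for each data field before any collection occurs.
ALLOWED_PURPOSES = {
    "national_id": {"identity_verification"},
    "geolocation": set(),  # never collected: collection limitation
    "phone_number": {"identity_verification", "payment_notification"},
}

def access(field_name: str, purpose: str) -> bool:
    """Grant access only when the declared purpose matches registration."""
    if purpose not in ALLOWED_PURPOSES.get(field_name, set()):
        print(f"DENIED: {field_name} for {purpose}")
        return False
    print(f"granted: {field_name} for {purpose}")
    return True

access("phone_number", "payment_notification")  # granted
access("phone_number", "ad_targeting")          # denied: use limitation
access("geolocation", "fraud_detection")        # denied: collection limitation
```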
Companies, civil society groups, and other stakeholders can also create co-governance mechanisms, frameworks, and best practices for data privacy in DPI that are specific to the country, project, and context. In fact, governments should ideally be willing to collaborate with businesses and civil society in such efforts so that DPI projects reflect a whole-of-society approach. Governments should consider it a best practice to carry out meaningful, multistakeholder engagement, make sessions public rather than closed door, invite individuals from the public and not just selected civil society organizations to attend or ask questions, and set up a process by which members of the public—including companies not involved in the DPI project—can submit comments that must be reviewed as part of a due-diligence and risk-assessment process at the DPI ideation stage. In practice, of course, governments might have little interest in slowing DPI rollouts and involving civil society organizations in their DPI design efforts and procurement processes. Also, civil society organizations do not have the budgets of large tech companies, such as major software vendors that might receive DPI programming contracts or large cloud vendors that might host a government’s DPI platform or service. In this scenario, the companies would be the ones building the components, products, and services, while civil society organizations could be relegated to advocacy, research, and education (as occurs in many other contexts). This is why governments should demonstrate trust and accountability by engaging in these good-faith, multistakeholder efforts—and why civil society organizations, the public, and even companies should treat a lack of this engagement as a sign of weak trust and poor accountability.
Some companies, meanwhile, may already feel incentivized to demonstrate privacy best practices. If a company builds a DPI payment app in country A, and then decides to package this into a new business vertical and do the same in countries B through Z, it is plausibly in the company’s own interest to implement robust privacy practices and show all governments it can be trusted—that it’s not just an infrastructure provider coming from country A to take all the data. Companies behind DPI projects with cross-border ambitions should similarly see the value in privacy standards that are interoperable across borders without compromising on core privacy principles and practices (e.g., encrypting data in transit and at rest, only collecting data necessary for a defined purpose). This happened in Kenya, where telecom operator Safaricom decided in 2022 to hide more user data when processing M-PESA mobile payments, following public outcry about data breaches. Such efforts reflect a retroactive approach to data minimization but underscore the importance, in a purely business calculus, of ensuring that DPI projects can be trusted if they are to be truly sustained and possibly replicated (that is, sold) elsewhere.
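A minimal sketch of the kind of masking at issue, not Safaricom’s actual implementation, might look like the following: a transaction confirmation reveals only the recipient’s initials and the tail of their phone number.

```python
def mask_msisdn(msisdn: str, visible: int = 3) -> str:
    """Mask a phone number, keeping only the last few digits visible."""
    return "*" * (len(msisdn) - visible) + msisdn[-visible:]

def mask_name(full_name: str) -> str:
    """Reduce a full name to initials, e.g., 'Jane Wanjiku Doe' -> 'J. W. D.'"""
    return " ".join(part[0] + "." for part in full_name.split())

# Hypothetical confirmation message shown to the sender of a mobile payment.
print(f"Sent KES 500 to {mask_name('Jane Wanjiku Doe')} "
      f"({mask_msisdn('254700123456')})")
# Output: Sent KES 500 to J. W. D. (*********456)
```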
Of course, technology will not solve governance challenges. Not all technology projects can be pursued in ways that maintain and respect strong data privacy measures—and not all involved actors, such as governments and companies, prioritize or would prioritize privacy in practice. Therefore, the first step for companies, civil society, and other stakeholders should be evaluating the privacy prospects of a project before proceeding. For example, digital identity systems that are highly centralized and link many disparate data points together could create too many surveillance risks to be built in such a fashion while respecting privacy. Rather than leaping into a DPI project in that vein, the better option might be exploring a completely different underlying design or technical approach to better protect privacy.
DPI’s privacy implications also include which data are not gathered. In Ukraine, for example, the government has digitized housing records, bank agreements, and other sets of documents and records from 2013 onward as part of its e-recovery program. Citizens who need to access records from 2012 or earlier, therefore, have no digitized record. There are consequently multiple contextual privacy questions at once, including what protections exist for data that are digitized and collected (e.g., encryption, data minimization, minimum-size thresholds for sharing aggregated data) as well as what recourse citizens have when they lack digital records and are therefore not seen by the state, in ways that could adversely impact them. Data correction rights (e.g., as found in Kenya’s privacy law or the GDPR) are thus part of the privacy picture.
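One of those protections, a minimum-size threshold for sharing aggregated data, can be expressed as a simple suppression rule. The sketch below, with an assumed threshold of ten, withholds any group small enough to risk re-identifying individuals.

```python
MIN_GROUP_SIZE = 10  # assumed threshold; the real value is a policy decision

def release_aggregate(counts: dict) -> dict:
    """Release aggregate counts only for groups at or above the threshold."""
    return {group: n for group, n in counts.items() if n >= MIN_GROUP_SIZE}

# Hypothetical counts of war-damage claims by district.
claims_by_district = {"District A": 482, "District B": 57, "District C": 4}
print(release_aggregate(claims_by_district))
# {'District A': 482, 'District B': 57}  (District C suppressed)
```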
Cybersecurity and resilience
Cybersecurity and resilience are necessary for DPI systems to operate with trust, protect individuals’ data and system data, and facilitate their predictable and reliable use. Strong cybersecurity and resilience practices are a process rather than an end state. However, DPI projects vary widely in their cybersecurity practices. There is no comprehensive, standard, and recognized framework for DPI projects that points to existing cybersecurity and resilience principles or standards—or even identifies a floor of best practices for cybersecurity. Governments and the companies involved in DPI projects are taking their own approaches to everything from encryption to third-party audits to the centralization of data storage and software functions.
Concentration or centralization of infrastructure can significantly impact cybersecurity risk. For example, a government that puts all its DPI data on a single server is both creating a highly attractive target for malicious actors and risking the failure of the entire system—and even the loss of all the data—if the server goes down or a hacker encrypts and then deletes the data. Conversely, distributing the data according to cybersecurity best practices could avoid creating clear single points of failure and potentially minimize the amount of data stolen if one server gets hacked.
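Back-of-the-envelope arithmetic makes the resilience point. Assuming, purely for illustration, that each server is independently available 99 percent of the time, replication changes the failure math dramatically:

```python
# Assumed per-server availability; the 0.99 figure is illustrative only.
p_node = 0.99

# A single central server: the whole system is down whenever that node is.
single = p_node

# Three independent replicas: the system is down only if all three are.
replicated = 1 - (1 - p_node) ** 3

print(f"single-server availability:  {single:.6f}")      # 0.990000
print(f"three-replica availability:  {replicated:.6f}")  # 0.999999
```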
DPI efforts around the world take varied approaches to centralization—including who builds and controls the underlying digital infrastructure and who can then build on top of that digital infrastructure (e.g., making payment apps). Ukraine’s Diia platform is built by the Ukrainian government, in partnership with others such as the US government (which provided legal, financial, and technical assistance). India’s DPI stack, in contrast, is being driven forward by the government with the involvement of many private-sector actors. The development of the Aadhaar digital identity system, in which more than 1.3 billion Indians are enrolled, was led by the Unique Identification Authority of India but with ongoing procurement of private-sector services and equipment to support the identity program. Companies have also created services based on the identity system: Aadhaar data are stored by the government in thousands of servers in Bengaluru and Manesar, but government organizations and businesses using Aadhaar data can also store data in cloud computing systems, such as those operated by Amazon Web Services. India’s Unified Payments Interface, by contrast, allows banks to build mobile apps using the protocol, which is ultimately government built and controlled, and the Indian government has been vague about how future DPI projects will be developed.
Sometimes, centralizing the management or layout of technical infrastructure can have cybersecurity benefits. For example, individual organizations managing large and globally dispersed infrastructure, such as cloud service providers, might be able to build institutional knowledge, cybersecurity resources, threat information-sharing networks, and lessons learned from security at scale that smaller infrastructure managers do not and will not have. Centralizing an underlying digital infrastructure under one developer also shapes the supply chain in different ways and could, for instance, decrease the number of third-party software vendors involved in building the backbone infrastructure for a DPI project.
On the other hand, centralization can reduce resiliency and enhance cybersecurity risks. If the system is hacked and data are stolen, or if the system is degraded or disrupted entirely, there is no independently managed alternative option or backup in place. Systems built and maintained by one organization can also become top-priority targets for malicious actors, for both compromise and coercion. For example, countries excited about building a single backbone for a payments app, or using one provider to back up all of their citizens’ records, could find many cybersecurity benefits (e.g., security at scale, cost savings of consolidated contracts and development efforts) but might find themselves simultaneously facing elevated cybersecurity (and digital trust and privacy) risks. Kenya faced these risks in 2023 when a distributed denial-of-service (DDoS) attack overloaded servers for e-Citizen and M-PESA, the government’s e-services portal and the country’s mobile payment system, respectively, among others, and took them offline for more than forty-eight hours. In total, more than five thousand public services were rendered inaccessible, disrupting citizens’ ability to access passports, pay their electricity bills, and purchase railway tickets. Questions about concentration and cybersecurity risk are highly complex, but this discussion and these examples are meant to underscore that concentration of infrastructure can create significant risks—risks often overlooked in many DPI efforts.
Importantly, governments can achieve interoperability and create a standard architecture, or set of standards, for a DPI system’s construction without having a single entity build the entire system. In other words, governments can adopt a standardized query language for a DPI database, create a template set of server specifications for a DPI identity system, or use the same visual interface framework for a public services app—and then have multiple agencies or private companies build their pieces of the DPI system to those standards. This should encourage governments to pursue interoperability and the implementation of technical standards (including standards that better protect privacy and cybersecurity) without concluding that they must delegate the building of a DPI data farm or online portal backend to a single government agency or private-sector company.
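A hedged sketch of what standards-based interoperability can look like: the Python below defines a shared record schema that any agency or vendor could implement independently, so conforming components can consume one another’s output. The schema fields and service codes are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class ServiceRecord:
    """A record format published as a national standard, not owned by any one builder."""
    record_id: str
    service_code: str  # drawn from a hypothetical published service registry
    issued_at: str     # ISO 8601 timestamp, per the shared specification

def validate(record: ServiceRecord) -> bool:
    """Any independently built component can run the same conformance check."""
    return bool(record.record_id and record.service_code and record.issued_at)

# A record produced by one vendor's system, consumable by another's.
record = ServiceRecord("rec-001", "DL-RENEWAL", "2024-10-21T09:00:00Z")
print(validate(record))  # True
```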
Much like with privacy, a country’s legal, regulatory, and industry environments matter. In countries where governments are intent on creating backdoors for commercial encryption or otherwise reducing cybersecurity (e.g., India’s VPN rules), those measures will impact how secure DPI projects are against hackers and other bad actors. A country with a collection of sector-specific cybersecurity regulations is also going to face a different cybersecurity risk landscape, and a different level of DPI trust questions, than a country with a single cybersecurity law or a country with nothing but voluntary industry best practices and risk-management standards. This is especially relevant as global DPI project proposals and discussions span everything from payments to government services to identity systems to data-exchange platforms.
Policy recommendations
Promote meaningful transparency. Governments and companies involved in DPI projects should promote meaningful transparency by leading multistakeholder engagement throughout the entire DPI project lifecycle, setting up independent third-party cybersecurity and privacy audit mechanisms, and making as much information about procurement processes public as possible. Governments should also be sure to develop transparency mechanisms and policies before DPI projects are rolled out, rather than during or after the rollout, and civil society groups and citizens should hold them accountable for being transparent about any DPI efforts. The open sourcing or publication of code is more complicated from transparency and cybersecurity standpoints. While some experts endorse countries making DPI code public or using open-source code across DPI systems, countries might also see value in keeping DPI source code private so that attackers cannot mine it for vulnerabilities to identify and exploit—or in locking down development processes so contributors cannot introduce vulnerabilities through open-source development. Critically, this is not just about technology. Digital trust depends on the perceived legitimacy of governments, checks and balances, rule of law, social context, and much more that has nothing to do with encryption or privacy-enhancing technologies. Governments approaching DPI projects must consider this broader trust environment and do what they can to boost trust—and civil society organizations, citizens, and companies evaluating the transparency of DPI projects must consider this broader context.
Define and implement frameworks for digital trust, privacy, and cybersecurity. High-level trust is going to come down, in part, to governments aligning their DPI projects with specific, best-practice frameworks that describe policies, processes, and practices for digital trust, privacy, and cybersecurity, such as operational processes, transparency mechanisms, audits, and privacy measures. This starts with governments looking to major, well-recognized frameworks and standards for cybersecurity, such as the NIST Cybersecurity Framework and standards published by the International Organization for Standardization. There are fewer frameworks for data privacy, but DPI policymakers and implementers can still use best practices found around the world—such as robust encryption, simple and clear explanations of data collection and use practices, and data minimization—collecting only what is needed for a specific, defined, and disclosed purpose. Private-sector actors supporting DPI projects should also implement these policies, processes, and practices. As rights-respecting and best-practice data privacy and cybersecurity measures are implemented, DPI implementing organizations should include the details in public and private DPI contracts, user-facing DPI terms of service, and public transparency reporting on DPI projects. But these international frameworks, while an important baseline for cybersecurity, are not a substitute for a comprehensive digital trust framework and, on their own, are not sufficient to justify a DPI project as trustworthy.
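To ground one of those best practices, the sketch below shows symmetric encryption of a record at rest using Python’s widely adopted cryptography package; the record contents are hypothetical, and a real deployment would pair this with managed key storage, key rotation, and encryption in transit.

```python
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

# In practice, the key would live in a managed key store, never in code.
key = Fernet.generate_key()
fernet = Fernet(key)

record = b'{"citizen_id": "123", "benefit": "housing-grant"}'  # hypothetical
ciphertext = fernet.encrypt(record)  # what is written to disk at rest

# Only a holder of the key can recover the plaintext record.
print(fernet.decrypt(ciphertext) == record)  # True
```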
Emphasize that DPI needs privacy and cybersecurity guardrails. The US government, civil society organizations, and others involved in tech development and capacity building should rhetorically challenge the false binary that portrays DPI needs and motivations as incompatible with data privacy, cybersecurity, and development goals. Just as some large multinational tech conglomerates portray themselves as the foremost champions of development when trying to win contracts on the ground, some governments pursuing DPI programs portray them as highly urgent projects that cannot be slowed down over privacy, cybersecurity, and other concerns.
There are certainly situations—Ukraine’s development of Diia amid the Russian war chief among them—in which there is an urgency to DPI-related projects based on an evolving domestic or international crisis. But most countries are not currently in Ukraine’s position and, even then, there is still a place for data privacy and cybersecurity. The Ukrainian government is well aware that any system like Diia built without robust cybersecurity protections is only going to create additional, exploitable security problems for the country. For instance, if the widely used Diia had numerous, basic, and obvious cybersecurity vulnerabilities, this would simply provide the Russian government with an easily attackable and high-impact target.
India and Kenya face challenges here, too. India’s Aadhaar system has already been replicated in the Philippines and Morocco and has received interest in Kenya, Vietnam, Sri Lanka, Brazil, Mexico, Singapore, and Egypt. The Indian government also wants to export other DPI systems abroad. But Aadhaar itself, despite some safeguards, already has several privacy and cybersecurity problems, including not recording the purpose of authentication, a lack of purpose limitation for data collection, the large-scale centralization of biometric data, and the creation of new opportunities for data-linkage attacks. Experts have also expressed concerns that as the uses of Aadhaar identities expand, so does the government’s, or even a company’s, ability to use the number as an anchor point to track citizens. Hence, when countries such as Kenya look to adopt Aadhaar—and potentially other Indian DPI systems that lack appropriate privacy and cybersecurity safeguards—they are opening themselves up to additional risk. For its part, the Indian government might be both exporting digital risk and potentially undermining its own DPI messaging in the process.
This is why the US government, civil society organizations, and other stakeholders should, as one working group member put it, “limit the zone of false choice.” DPI projects can be compatible with data privacy and cybersecurity. The success of viable (including rights-respecting) DPI projects also depends upon cybersecurity and privacy guardrails that enhance system functionality, mitigate risk, boost resilience, rightfully earn public trust, and enhance innovation within a rights-protecting context. Building these guardrails depends on getting past false binaries and evaluating if and how the specific DPI proposal at hand would positively or negatively impact data privacy and cybersecurity considerations—and identifying best-practice ways to mitigate risk.
About the author
Justin Sherman is a nonresident senior fellow at the Atlantic Council’s Cyber Statecraft Initiative. He is also the founder and chief executive officer of Global Cyber Strategies, an adjunct professor at Duke University, and a contributing editor at Lawfare. He previously ran the Cross-Border Data Flows and Data Privacy Working Group for the Atlantic Council’s Initiative on US-India Digital Trade.
Working group members
- Dan Caprio, Providence Group
- Shyam Krishnakumar, Pranava Institute
- Venkatesh Krishnamoorthy, BSA
- Jeff Lande, The Lande Group
- Srujan Palkar
- Anarkalee Perera, ASG
- Allison Price, New America
- Nikhil Sud, Ashoka University
- Atman M. Trivedi, ASG
- Prem Trivedi, New America
Acknowledgements
The author would like to thank all the individuals who provided comments on earlier drafts of this paper, including Nikhil Sud, Atman Trivedi, Jeff Lande, Ananya Kumar, and Trey Herr. Thanks as well to all participants in the working group for their generosity with their time, insights, and expertise—noting that all views stated within are my own and do not necessarily reflect the positions of individual working group members or their listed, affiliated organizations.
This report was made possible in part by the generous support of Mastercard.
This report is written and published in accordance with the Atlantic Council Policy on Intellectual Independence. The author is solely responsible for its analysis and recommendations. The Atlantic Council and its donors do not determine, nor do they necessarily endorse or advocate for, any of this report’s conclusions.