Members of the public sector, private sector, and academia recently convened for the second AI Policy Forum Symposium to explore the critical directions and questions posed by artificial intelligence in our economies and societies.
The virtual event, hosted by the AI Policy Forum (AIPF) – an undertaking by the MIT Schwarzman College of Computing to bridge high-level principles of AI policy with the practices and trade-offs of governing – brought together an array of distinguished panellists to delve into four cross-cutting topics: law, auditing, healthcare, and mobility.
Image caption: MIT Schwarzman College of Computing Dean Dan Huttenlocher moderates a discussion on artificial intelligence laws with panellists Jonathan Zittrain, Eva Kaili, and Bitange Ndemo during the second AI Policy Forum Symposium.
European Union Artificial Intelligence Act
In the last year there have been substantial changes in the regulatory and policy landscape around AI in several countries – most notably in Europe with the development of the European Union Artificial Intelligence Act, the first attempt by a major regulator to propose a law on artificial intelligence.
In the United States, the National AI Initiative Act of 2020, which became law in January 2021, is providing a coordinated program across the federal government to accelerate AI research and application for economic prosperity and security gains. Finally, China recently advanced several new regulations of its own.
Each of these developments represents a different approach to legislating AI, but what makes a good AI law? And when should AI legislation be based on binding rules with penalties versus establishing voluntary guidelines?
Jonathan Zittrain, professor of international law at Harvard Law School and director of the Berkman Klein Center for Internet and Society, says the self-regulatory approach taken during the expansion of the internet had its limitations, with companies struggling to balance their interests with those of their industry and the public.
“One lesson might be that actually having representative government take an active role early on is a good idea,” he says. “It’s just that they’re challenged by the fact that there appears to be two phases in this environment of regulation. One, too early to tell, and two, too late to do anything about it. In AI I think a lot of people would say we’re still in the ‘too early to tell’ stage but given that there’s no middle zone before it’s too late, it might still call for some regulation.”
Common theme: notion of trust
A theme that came up repeatedly throughout the first panel on AI laws – a conversation moderated by Dan Huttenlocher, dean of the MIT Schwarzman College of Computing and chair of the AI Policy Forum – was the notion of trust.
“If you told me the truth consistently, I would say you are an honest person. If AI could provide something similar, something that I can say is consistent and is the same, then I would say it's trusted AI,” says Bitange Ndemo, professor of entrepreneurship at the University of Nairobi and the former permanent secretary of Kenya’s Ministry of Information and Communication.
Eva Kaili, vice-president of the European Parliament, adds: “In Europe, whenever you use something, like any medication, you know that it has been checked. You know you can trust it. You know the controls are there. We have to achieve the same with AI.”
Kaili further stresses that building trust in AI systems will not only lead to people using more applications safely, but will also benefit AI itself, since wider use generates greater amounts of data.
The rapidly increasing applicability of AI across fields has prompted the need to address the opportunities and challenges of emerging technologies and their impact on social and ethical issues such as privacy, fairness, bias, transparency, and accountability.
Enormous promise for improving quality and efficiency
In healthcare, for example, new techniques in machine learning have shown enormous promise for improving quality and efficiency, but questions of equity, data access and privacy, safety and reliability, and immunology and global health surveillance remain open.
MIT’s Marzyeh Ghassemi, an assistant professor in the Department of Electrical Engineering and Computer Science and the Institute for Medical Engineering and Science, and David Sontag, an associate professor of electrical engineering and computer science, collaborated with Ziad Obermeyer, an associate professor of health policy and management at the University of California, Berkeley School of Public Health, to organise AIPF Health Wide Reach, a series of sessions to discuss issues of data sharing and privacy in clinical AI.
The organisers assembled experts in AI, policy, and health from around the world with the goal of understanding what can be done to reduce barriers to accessing high-quality health data – advancing more innovative, robust, and inclusive research while respecting patient privacy.
Over the course of the series, members of the group presented on topics within their expertise and were tasked with proposing concrete policy approaches to the challenges discussed.
Drawing on these wide-ranging conversations, participants unveiled their findings during the symposium, covering nonprofit and government success stories and limited access models; upside demonstrations; legal frameworks, regulation, and funding; technical approaches to privacy; and infrastructure and data sharing. The group then discussed some of their recommendations, which are summarised in a forthcoming report.
One of the findings calls for making more data available for research use. Recommendations stemming from this finding include updating regulations to promote data sharing – for example, by enabling easier access to safe harbours such as the one the Health Insurance Portability and Accountability Act (HIPAA) provides for de-identification – as well as expanding funding for private health institutions to curate datasets, among others.
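The HIPAA safe harbour mentioned above de-identifies data by removing 18 categories of identifiers, such as names, contact details, full dates, and record numbers. As a purely illustrative aside – the field names and record below are hypothetical, not drawn from the symposium or the group's report – a minimal Python sketch of that redaction step might look like this:

```python
# Toy redactor in the spirit of HIPAA's Safe Harbor de-identification
# method, which removes 18 categories of identifiers (names, contact
# details, record numbers, full dates, and so on). The field names and
# record here are hypothetical examples, not a real schema.

# A handful of identifier categories, expressed as example field names.
IDENTIFIER_FIELDS = {
    "name", "address", "phone", "email",
    "ssn", "medical_record_number", "birth_date",
}

def deidentify(record: dict) -> dict:
    """Return a copy of the record with identifier fields dropped and
    ages over 89 coarsened, mirroring Safe Harbor's age rule."""
    clean = {k: v for k, v in record.items() if k not in IDENTIFIER_FIELDS}
    if isinstance(clean.get("age"), int) and clean["age"] > 89:
        clean["age"] = "90+"  # Safe Harbor groups all ages above 89
    return clean

patient = {
    "name": "Jane Doe",
    "birth_date": "1931-04-02",
    "age": 93,
    "diagnosis": "hypertension",
    "medical_record_number": "MRN-00123",
}

print(deidentify(patient))  # {'age': '90+', 'diagnosis': 'hypertension'}
```

In practice, of course, de-identification involves far more than dropping columns; the sketch is only meant to make the safe-harbour idea concrete.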
Another finding, on removing barriers to data for researchers, supports a recommendation to reduce obstacles to research and development on federally created health data. “If this is data that should be accessible because it's funded by some federal entity, we should easily establish the steps that are going to be part of gaining access to that so that it's a more inclusive and equitable set of research opportunities for all,” says Ghassemi.
Ethical principles that govern data sharing
The group also recommends taking a careful look at the ethical principles that govern data sharing. While there are already many principles proposed around this, Ghassemi says that “obviously you can't satisfy all levers or buttons at once, but we think that this is a trade-off that's very important to think through intelligently”.
In addition to law and healthcare, other facets of AI policy explored during the event included auditing and monitoring AI systems at scale, and the role AI plays in mobility and the range of technical, business, and policy challenges for autonomous vehicles in particular.
The AI Policy Forum Symposium was an effort to bring together communities of practice with the shared aim of designing the next chapter of AI. In his closing remarks, Aleksander Madry, the Cadence Design Systems Professor of Computing at MIT and faculty co-lead of the AI Policy Forum, emphasised the importance of collaboration and the need for different communities to communicate with each other in order to truly make an impact in the AI policy space.
“The dream here is that we all can meet together – researchers, industry, policymakers, and other stakeholders – and really talk to each other, understand each other's concerns, and think together about solutions,” says Madry. “This is the mission of the AI Policy Forum and this is what we want to enable.”