In the six months since a new chatbot confessed its love for a reporter before taking a darker turn, the world has woken up to how artificial intelligence can dramatically change our lives—and how it can go awry. AI is quickly being integrated into nearly every aspect of our economy and daily lives. Yet in our nation’s capital, laws aren’t keeping up with the rapid evolution of technology.
Policymakers have many decisions to make around artificial intelligence, like how it can be used in sensitive areas such as financial markets, health care, and national security. They will need to settle intellectual property rights for AI-created content. There will also need to be guardrails to prevent the dissemination of mis- and disinformation.
But before we build the second and third stories of this regulatory house, we need to lay a strong foundation, and that foundation must center on a national data privacy standard.
To understand this bedrock need, it’s important to look at how artificial intelligence was developed. AI needs an immense quantity of data. The generative language tool ChatGPT was trained on 45 terabytes of data, the equivalent of over 200 days’ worth of HD video. That information may have included our posts on social media and online forums, which have likely taught ChatGPT how we write and communicate with one another. That’s because this data is largely unprotected and widely available to third-party companies willing to pay for it. AI developers do not need to disclose where they get their input data because the U.S. has no national privacy law.
While data studies have existed for centuries and can have major benefits, they are generally built on consent to use that information. Medical studies often use patient health data and outcomes, but in most cases that information requires the approval of the study participants. That’s because in the 1990s, Congress gave health information a basic level of protection through the Health Insurance Portability and Accountability Act, but that law only protects data shared between patients and their health care providers. The same is not true for other health platforms like fitness apps, or for most other data we generate today, including our conversations online and our geolocation information.
Currently, the companies that collect our data are in control of it. Google for years scanned Gmail inboxes to sell users targeted ads before abandoning the practice. Zoom recently had to update its data collection policy after it was accused of using customers’ audio and video to train its AI products. We’ve all downloaded an app on our phone and immediately accepted the terms and conditions without actually reading them. Companies can and often do change those terms, including how much of our information they collect and how they use it.
A national privacy standard would ensure a baseline set of protections, no matter where someone lives in the U.S. And it would restrict companies from storing and selling our personal data.
Ensuring there’s transparency and accountability in what data goes into AI is also important for a quality, responsible product. If the input data is biased, we’re going to get a biased outcome: in other words, “garbage in, garbage out.” Take facial recognition, one application of artificial intelligence. These systems have by and large been trained on data from white people, which has led to clear biases when communities of color interact with the technology.
The United States must be a global leader on artificial intelligence policy.
But other countries are not waiting while we sit still. The European Union has moved faster on AI regulation because its comprehensive privacy law, the GDPR, took effect in 2018. The Chinese government has also moved quickly on AI, though in an alarmingly anti-democratic way. If we want a seat at the international table to set a long-term direction for AI that reflects core American values, we must start with our own national data privacy law.
The Biden administration has taken some encouraging steps to begin putting guardrails around AI, but it has been constrained by Congress’ inaction. The White House recently announced voluntary artificial intelligence standards, which include a section on data privacy. But voluntary guidelines come with no accountability, and the federal government can only enforce the rules already on the books, which are woefully outdated.
That’s why Congress needs to step up and set the rules of the road. A strong national privacy standard must be uniform throughout the country, replacing the state-by-state patchwork we have now. It must put people back in control of their information instead of companies. And it must be enforceable so that the government can hold bad actors accountable.
These are the components of the legislation I have introduced over the past few Congresses and the bipartisan proposal the Energy & Commerce Committee advanced last year.
As with all things in Congress, it comes down to a matter of priorities. With artificial intelligence expanding so fast, we can no longer wait to take up this issue.
We were behind on technology policy already, but we are falling further behind as other countries take the lead. We must act quickly and set a robust foundation. That has to include a strong, enforceable national privacy standard.
Congresswoman Suzan K. DelBene represents Washington’s 1st District in the United States House of Representatives.
The views expressed in this article are the writer’s own.