GDPR and Privacy Law: A Paradigm Shift for Technology Companies


The technology sector is currently undergoing a regulatory renaissance. Lawmakers are increasingly skeptical of Internet companies’ ability to police their own affairs. The European Union’s enactment of the General Data Protection Regulation (GDPR) is perhaps the most significant example of this shift in the past two decades.

Traditionally, Internet businesses have vacuumed up as much personal data as possible. The hope has been that even if there was not a clear path to monetization in the present, some future opportunity would present itself. However, the GDPR legally requires organizations (including companies and nonprofits) to strictly limit their collection and usage of personal data.

The GDPR itself applies when someone inside the EU transmits personal data to a website, even if that website’s operator has no physical presence in the EU. Furthermore, a report by Attorney IO, prepared in consultation with leading law professors, argues that it may be illegal discrimination to deny GDPR rights to everyone in the United States.

Specifically, the Civil Rights Act of 1964 (known for ending racial segregation by US businesses) prohibits discrimination on the basis of “national origin.” Courts sometimes view policies that disproportionately advantage one national-origin group over others as unlawful. US immigrants from EU countries have a much greater chance of holding rights under the GDPR, because many of them will have sent personal data to Internet companies before leaving the EU. This could lead courts to require equal treatment of all groups, rather than selectively advantaging these particular immigrants.

Artificial intelligence companies face unique legal hurdles under the GDPR. The success of these companies largely tracks the explosive growth of big data availability. The more data there is about people, the more artificial intelligence can extrapolate about them.


The GDPR specifically allows people to request that companies not make important decisions about them based solely on automated processing of their personal data. For example, it is no longer legally permissible for a company to rely on artificial intelligence alone to deny people banking, mortgages, insurance, or similar products based on what an AI views as a poor data profile. Such usage of AI and data can form a portion of the analysis, but it must be accompanied by other factors reviewed by an actual human.

The law does not ban all collection or usage of people’s data. Instead, organizations can legally process this data if they have valid and freely given consent. They can also comply with the law if data is collected without consent so long as they have a legitimate interest in the data.

However, as with most new and broad laws, there are aggressive and conservative interpretations of these requirements. Whether an organization truly has a “legitimate interest” in collected data or even has valid consent will likely be the subject of court challenges for many years to come.

Few organizations will want to be so aggressive as to become the test case that clarifies these requirements. Fines under the law can reach the greater of €20 million or 4% of an organization’s worldwide annual revenue.

Contributing Author: Alexander Stern, Attorney and Founder of Attorney IO

Alexander Stern earned his Doctor of Law degree from UC Berkeley School of Law. He is an attorney and the founder of the legal AI startup Attorney IO. The AI provided by Attorney IO empowers legal professionals to extrapolate patterns and insights from millions of cases. These inferences allow lawyers to improve their arguments and better serve their clients and communities.
