Kennedy Maloney is a rising sophomore honors student majoring in Data Science at the University of Arkansas. She’s involved in the Associated Student Government, Alpha Delta Pi, Pi Sigma Alpha, Alpha Lambda Delta, the Society of Women Engineers, and the Arkansas Data Science Association. Later in life, Kennedy plans to earn a graduate degree in Data Science; she wants to specialize in entertainment industry analytics and machine learning. In this post, Kennedy provides commentary on a lecture given by Professor Karl Schubert during the Honors Arkansas course on privacy.

Computer Servers

In a world that’s constantly creating data, an important question remains unanswered: who should have access to our data, and when? From the readings provided and the lecture given by Professor Karl Schubert during the Honors Arkansas forum on privacy, it’s safe to say there isn’t a clear answer. There’s a constant push and pull between giving up data to receive a better product and restricting data for security reasons. So, what should the government be doing to further secure the privacy of its people?

To begin, I’d like to discuss the General Data Protection Regulation (GDPR). The GDPR is a revolutionary EU regulation that affords people certain rights pertaining to their data. It covers issues such as security, the right to be informed, and, most importantly, accountability. One prime example of this accountability process is the £12.7 million fine the UK’s Information Commissioner’s Office issued to TikTok over the misuse of data belonging to users under the age of 13. While the fine was roughly half of what TikTok was originally expected to pay, I believe the UK and EU are on the right track and that this regulation can serve as a model for other nations when it comes to the protection of user data. In the US, there may be numerous federal laws covering specific types of data, but there is still more to be done.

One change I’d like to see in the United States’ data privacy laws is stronger regulation of data sharing between companies. Today, practically every company both produces data and needs it to operate, so it’s natural for companies to want to aggregate data from other sources. Perhaps they need certain information that another platform has on consumers, and vice versa. It seems like a win-win for everyone. However, there is an important group that gets left out of this trading process: the consumers. From an ethical perspective, there is far more to be done in regulating data trading. Companies don’t consult their consumers when making these trades, and this is where the US could learn from the GDPR’s right to be informed. US citizens deserve to know where their data is going and why.

Another change the United States should adopt is stronger data security requirements for social media apps. America’s homegrown social media giant, Meta, serves as a cautionary example. In the past, Meta CEO Mark Zuckerberg has been called to testify before Congress over the company’s loose security and haphazard selling of user data. Instead of putting firm legislation into place, lawmakers left enforcement to the Federal Trade Commission, which fined Facebook $5 billion in 2019. In the future, while the federal government should continue to fine social media and Big Tech companies that “break the rules,” it should also consider codifying those rules and ethical expectations into actual legislation.

In general, the GDPR is not a perfect fit for every country and would need modifications to suit different countries’ data usage. I respect the EU’s willingness to update the GDPR as technology evolves and feel that this adaptability is crucial to the integrity of data privacy. America faces a difficult future in the world of tech if its legislation falls behind innovation. As AI becomes more widely accessible, America has a choice to make: decide on regulations now, while the technology is in its infancy, or decide later, after the technology has advanced and our private information has been exposed.