Van in front of Grand Tetons

Photo generated using Adobe Firefly

Regulations on Artificial Intelligence: A Must for the Future

Jackson Nielsen

From computers that filled entire rooms to computers that fit in a pocket, the technological advances humans have made in recent memory are astounding. But with the recent development of highly capable artificial intelligence (AI) programs, this technology may be getting too advanced for the general population. With the creation of programs like OpenAI’s ChatGPT and DALL·E, anyone with access to the internet can use these tools however they would like. These systems have enormous potential and many benefits, but they also have the potential to do more harm than good. For this reason, there needs to be a tighter rein on what these programs are able to do for the masses.

 

Currently, “there are 407 AI-related bills active across 44 U.S. states, according to an analysis by BSA the Software Alliance, an industry group that includes Microsoft and IBM” (Vynck and Zakrzewski 2024). But even with all of these bills in place, the use of AI to harm individuals remains a pressing issue. With the creation of powerful AI software also came the creation of deepfakes: videos in which a person’s body or face is digitally altered, usually to cause harm or spread false information. These deepfakes have the potential to damage a person’s reputation and even complicate elections by spreading misinformation. To combat this, an organization with global reach needs to be created to regulate what these AI systems are capable of. While the government is attempting to do this job, there need to be people solely dedicated to this issue, because as time progresses, these programs are only going to become more and more powerful.

 

This organization would have the power to govern AI systems like ChatGPT and DALL·E to create a safer online atmosphere. Through extensive testing of future generations of AI, people would be better protected from deepfakes and from threats to the privacy of their personal information. To many, this may seem unnecessary, and the public may feel it has the right to use these platforms as it pleases under the First Amendment. But “prominent researchers and AI leaders from companies including Google and OpenAI signed a letter stating that the tech was on par with nuclear weapons and pandemics in its potential to cause harm to civilization” (Vynck and Zakrzewski 2024). If the creators of these models are saying this, then there is a legitimate reason to create a governing body for artificial intelligence.

 

While the creation of a global AI governing body may be years in the making, it would still be worthwhile, as these programs will only continue to learn and grow, increasing the potential for harm to individuals and companies. Given the progression from room-sized computers to pocket-sized computers in a matter of decades, the future of artificial intelligence must be regulated for the sake of human safety.

Vynck, Gerrit De, and Cat Zakrzewski. 2024. “In Big Tech’s backyard, California lawmaker unveils landmark AI bill.” The Washington Post.