Potential Regulations on AI & Their Forecasted Impact

AI is maturing rapidly, and each year novel applications of AI for business and everyday living debut or gain traction. However, as the technology has become more prominent and prevalent, so have calls to regulate it. AI is expected to face increasing regulatory challenges in 2023, most notably in the European Union (EU). In the United States, the Biden Administration is also encouraging policies and legislation that could affect AI’s stunning growth.

In 2021, the European Commission proposed the Artificial Intelligence Act, also known as the AI Act or AIA. The legislation would create a comprehensive legal and regulatory framework for AI and establish rules for managing its risks.

However, some experts are concerned that, if passed, the AIA could have a chilling effect on AI development, particularly on open-source efforts. Further, the AIA’s impact could extend beyond the EU, a phenomenon known as the “Brussels Effect” that was previously observed with the EU’s General Data Protection Regulation (GDPR).

In the United States, the White House Office of Science and Technology Policy released the Blueprint for an AI Bill of Rights in October 2022. The document lays out principles intended to protect the American public and to guide the design, use, and deployment of AI in line with civil liberties and human rights. The AI Bill of Rights is not law; rather, it is a framework that policymakers and business leaders are encouraged to consider as they work with AI.

The AI Bill of Rights has been criticized for its lack of enforcement power, since it is not legally binding, and for saying too little about how its principles should be implemented.


Potential Impacts on AI

As AI becomes more regulated, its development and advancement could be impacted in a few ways:

  • Regulations on data protection and privacy might limit the types of data AI systems can access, affecting their accuracy and capabilities.
  • If companies are held liable for their AI systems’ outcomes, they will likely be more reluctant to develop and deploy AI for fear of negative consequences.
  • Regulations on AI’s transparency could:
    • Limit the types of AI that are developed and used, likely hindering black-box approaches such as deep learning.
    • Make it harder for smaller companies to implement AI use cases, since making every model explainable requires additional effort and resources.
    • Lead to a preference for simpler models that are more transparent and explainable.

Further, mandates favoring simpler AI models, such as those advocated by the Explainable AI (XAI) movement, could limit development and keep algorithms from reaching their full potential. Simpler models come with a tradeoff: they are generally more transparent, more explainable, and less prone to bias, but for some tasks they cannot reach the same accuracy as more complex models. The specific use case should dictate which type of model is most useful, and restrictions on more opaque models will likely rule out some of AI’s most effective use cases.
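To make that tradeoff concrete, the following minimal sketch (assuming Python with scikit-learn and its bundled breast-cancer dataset, none of which appear in the original post) trains a transparent linear model alongside a black-box ensemble and compares their test accuracy. The linear model’s coefficients can be read off feature by feature; the ensemble offers no comparably simple account of an individual prediction.

    # A minimal sketch of the transparency/accuracy tradeoff described above.
    # The dataset and model choices are illustrative assumptions, not from the post.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    data = load_breast_cancer()
    X_train, X_test, y_train, y_test = train_test_split(
        data.data, data.target, test_size=0.25, random_state=0
    )

    # Transparent model: each learned coefficient maps to one named feature,
    # so an auditor or regulator can inspect how predictions are formed.
    transparent = LogisticRegression(max_iter=5000).fit(X_train, y_train)

    # Black-box model: hundreds of trees vote, and no single human-readable
    # explanation of an individual prediction falls out of the model itself.
    black_box = RandomForestClassifier(n_estimators=300, random_state=0)
    black_box.fit(X_train, y_train)

    print("logistic regression accuracy:", accuracy_score(y_test, transparent.predict(X_test)))
    print("random forest accuracy:      ", accuracy_score(y_test, black_box.predict(X_test)))

    # The transparent model exposes its reasoning directly:
    for name, coef in zip(data.feature_names[:3], transparent.coef_[0][:3]):
        print(f"{name}: {coef:+.3f}")

On a small tabular dataset like this one, the two models often land within a point or two of each other; the accuracy gap described above tends to appear on high-dimensional, unstructured data such as images and text, which is precisely where opaque deep learning models excel.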

AI’s potential is nearly limitless, and 2023 is expected to bring technical advancements and further investments to the industry. As AI matures, though, regulatory challenges are growing, which could have the side effect of limiting the tech’s potential.


ChatGPT was used in researching this blog post. 

The featured image was generated using Midjourney.

Leah Moore
Brand Journalist