ChatGPT Under Fire: Congress Sets Boundaries on Staff Access – What You Need to Know

Congress is taking action to restrict the use of artificial intelligence (AI) as it continues to transform numerous sectors. The House of Representatives has imposed restrictions on the use of ChatGPT, an AI-powered chatbot developed by OpenAI and backed by Microsoft, in an effort to address the technology's potential ramifications and ensure responsible AI integration. Let's take a closer look at this development.

See also: ChatGPT: Everything You Need to Know and Timeline of Events

The House has developed regulations for the use of ChatGPT among its employees. The premium version of ChatGPT, known as ChatGPT Plus, is the only version permitted for use in congressional offices, according to a memo distributed by the chamber's Chief Administrative Officer, Catherine L. Szpindor. Given the seriousness of the risks involved in protecting sensitive House data, the use of the free version of ChatGPT, which lacks key privacy features, is strictly prohibited.

The use of ChatGPT is also restricted to "research and evaluation" rather than being integrated into operational workflows. Employees are not allowed to share private or sensitive House data with the chatbot. These precautions are an attempt to balance the benefits of AI with the need to protect sensitive information.

The House's decision to set usage limits on ChatGPT is part of a broader movement to govern AI and its applications. Legislation to regulate generative AI models like ChatGPT is currently being drafted by lawmakers including Senate Majority Leader Chuck Schumer and a bipartisan group of senators. Their goal is to promote innovation while ensuring that AI technologies are used in an ethical and safe manner.

Senator Schumer has pointed out that AI has the potential to propel scientific progress, technological innovation, and economic development. To encourage that innovation, however, safety and abuse concerns must be addressed. Proposed legislation is tackling key issues raised by generative AI, including how to notify users they are interacting with it, how to distinguish it from other types of AI, and how to handle content produced jointly by machines and people.

The restrictions placed on ChatGPT by Congress are consistent with those imposed by governments and international organizations around the world. Concerns over data harvesting and the lack of safeguards preventing minors from using the chatbot led Italy to become the first country to ban its use. This underscores the ongoing worldwide dialogue on AI regulation and the need for thorough rules to protect users and guarantee ethical AI practices.

Organizations across industries face challenges similar to those Congress confronts in integrating generative AI into their operations. Concerned about potential breaches of confidentiality, tech giants like Apple and Samsung have already limited the use of ChatGPT and other generative AI tools in the workplace. Plagiarism enabled by generative AI is also a problem in education, especially at universities.

OpenAI's chief executive officer, Sam Altman, has spent months publicly calling for stronger AI regulation. At the same time, Altman has lobbied the EU to water down the impending AI Act, which is designed to safeguard citizens from the dangers posed by AI development. Still, there is a growing consensus, echoed by Altman, that clear rules are necessary to guarantee the responsible and secure application of AI technologies.

A comprehensive regulation package covering disclosure, enforcement, and differentiation from other forms of AI is expected in the coming weeks. While legislators work on this sweeping framework, individual bills are being introduced with the intention that their contents will be rolled into the final package. Together, these efforts aim to encourage ethical AI development and define the future of AI law.

First reported on Yahoo Finance

About ReadWrite’s Editorial Process

The ReadWrite Editorial policy involves closely monitoring the tech industry for major developments, new product launches, AI breakthroughs, video game releases and other newsworthy events. Editors assign relevant stories to staff writers or freelance contributors with expertise in each particular topic area. Before publication, articles go through a rigorous round of editing for accuracy, clarity, and to ensure adherence to ReadWrite's style guidelines.

Deanna Ritchie
Lead Editor

Deanna is an editor at ReadWrite. Previously she worked as Editor in Chief for Startup Grind, Editor in Chief for Calendar, and editor at Entrepreneur Media, and she has more than 20 years of experience in content management and content development.
