OpenAI set off an arms race and our security is the casualty

After ChatGPT’s debut in late 2022 and its significant contribution to popularizing artificial intelligence (AI), there has been a surge of interest from various entities, including tech and non-tech companies, established businesses, and new startups. They have all been releasing AI assistants and showcasing innovative applications or enhancements to capture the public’s attention.

Tech leaders’ assurance that AI can handle all tasks and embody multiple roles has led to AI assistants taking on various functions in our lives. They have become trusted advisors and consultants in business and relationships, offering guidance and support. In addition, they function as therapists, companions, and confidants, attentively listening to our personal information, secrets, and thoughts.

The companies offering AI-assisted services understand the confidential nature of such conversations and guarantee they’re implementing robust safeguards to keep our data secure. But can we truly trust their promises?

AI assistants — friend or foe?

New study released in March by University of Ber-Gurion researchers revealed that our confidential information may be at risk. The scientists developed a method to decrypt AI assistant responses with remarkable precision, bypassing their encryption. This technique takes advantage of a design flaw in the systems of major platforms such as Microsoft’s Copilot and OpenAI’s ChatGPT-4, but Google’s Gemini remains unaffected.

In addition, the study revealed that an attacker who developed a means to intercept a chat, such as with ChatGPT, could apply this method to other similar platforms without much extra work. Consequently, this tool could be disseminated among hackers and used extensively.

There have been previous investigations revealing vulnerabilities in the creation process of AI helpers, including ChatGPT. Findings from various academic institutions and Google DeepMind were published towards the end of 2023. These researchers uncovered that by providing specific prompts, they could make ChatGMT recite memorized parts of its training data.

Researchers successfully retrieved exact passages from books and poems, webpage links, distinct user identifiers, Bitcoin wallet addresses, and coding scripts from ChatGPT.

Malicious actors might deliberately create deceptive prompts or feed the bots with false data in order to manipulate the training process. This could result in the inclusion of confidential personal and professional information within the dataset.

Open-source models present additional security challenges. For instance, a study revealed that an attacker could manipulate Hugging Face’s conversion service and take control of any submitted model. The consequences of such a breach are severe. An intruder could replace the targeted model with their own malicious one, upload harmful models to repositories, or even gain access to private dataset repositories.

These organizations, including Microsoft and Google, with a total of 905 models on Hugging Face that underwent modifications via the conversion service, could have been susceptible to hacking attacks and may have already been breached.

Things can worsen

Giving AI assistants greater capabilities can be enticing, but it also increases the risk of being targeted by cyber attacks.

Last year, Bill Gates wrote in his blog about the future role of advanced artificial intelligence (AI). He referred to this AI as a “comprehensive assistant” or “agent,” which would be granted access to all our personal and work devices. Its function would be to merge and examine the data collected from these devices, acting as an efficient and effective assistant for us by managing information and tasks.

As Gates wrote in the blog:

An agent will be able to help you with all your activities if you want it to. With permission to follow your online interactions and real-world locations, it will develop a powerful understanding of the people, places, and activities you engage in. It will get your personal and work relationships, hobbies, preferences, and schedule. You’ll choose how and when it steps in to help with something or ask you to make a decision.

This is not science fiction, and it could happen sooner than we think. Project 01, an open-source ecosystem for AI devices, recently launched an AI assistant called 01 Light. “The 01 Light is a portable voice interface that controls your home computer,” the company wrote on X. It can see your screen, use your apps, and learn new skills”

OpenAI set off an arms race and our security is the casualty

Having a personal AI assistant could be quite thrilling. But, it’s crucial that security concerns are addressed promptly. Developers must thoroughly check the system and code for vulnerabilities to prevent potential attacks. If this agent falls prey to malicious actors, your entire life could be at risk – not just your information but also any related individuals or organizations.

Can we protect ourselves?

Starting from late March, the US House of Representatives instituted a firm restriction against the usage of Microsoft’s Copilot by its legislative staff members.

According to House Chief Administrative Officer Catherine Szpindor, the Microsoft Copilot app has been identified as a potential risk by the Office of Cybersecurity because it may inadvertently transfer House data to unauthorized cloud services.

In the beginning of April, the Cyber Safety Review Board (CSRB) released a report accusing Microsoft of a series of security lapses that permitted Chinese hackers to breach the email accounts of American government officials during the summer of 2023. These incidents were avoidable and should have been prevented.

Based on the report’s findings, Microsoft’s current security practices are subpar and need significant improvements, which may also affect their Copilot product.

Several tech firms, like Apple, Amazon, Samsung, and Spotify, as well as financial institutions such as JPMorgan, Citi, and Goldman Sachs, have previously restricted the use of AI assistants by their staff.

Last year, tech giants like OpenAI and Microsoft made a commitment to develop and use ethically sound artificial intelligence. However, so far, there haven’t been any significant steps taken in this direction.

Making commitments isn’t sufficient; authorities and decision-makers need concrete actions instead. In the interim, it’s advisable to withhold sharing confidential personal or business details.

If we all refrain from utilizing bots collectively, we may increase our chances of being listened to, compelling businesses and creators to prioritize our safety by enhancing necessary security features.

Dr. Merav Ozair is a guest author for CryptoMoon and is developing and teaching emerging technologies courses at Wake Forest University and Cornell University. She was previously a FinTech professor at Rutgers Business School, where she taught courses on Web3 and related emerging technologies. She is a member of the academic advisory board at the International Association for Trusted Blockchain Applications (INATBA) and serves on the advisory board of EQM Indexes — Blockchain Index Committee. She is the founder of Emerging Technologies Mastery, a Web3 and AI end-to-end consultancy shop, and holds a PhD from Stern Business School at NYU.

This article is for general information purposes and is not intended to be and should not be taken as legal or investment advice. The views, thoughts, and opinions expressed here are the author’s alone and do not necessarily reflect or represent the views and opinions of CryptoMoon.

Read More

2024-04-10 23:53