AI is becoming a cesspit of private data – small businesses need to think twice

Almost overnight, the ideals of user privacy and data security have been shoved aside by an AI boom that tempts users to hand over sensitive information

With the explosion of AI tools like ChatGPT and Bard, users worldwide have seemed willing to set aside misgivings about sharing personal data in the rush to explore these tools.

This is despite the fact that AI chatbots are subject to the same longstanding and well-documented vulnerabilities as other cloud-based apps that have refused to upgrade to Web3’s more secure infrastructure.

In March this year, for example, ChatGPT was briefly taken offline after a bug exposed the first message of some newly created conversations in other users’ chat histories. The same incident also briefly and unintentionally exposed payment-related information belonging to some ChatGPT Plus subscribers.

While OpenAI acted quickly to fix these issues, the incident exposed the ongoing risks of inputting potentially sensitive data into platforms that have not yet proven watertight in how they handle it.

Startups and small businesses are rushing to get aboard the AI bandwagon, letting these tools crunch numbers in microseconds to help inform decisions. But it’s time to pause and ask whether we should be trusting these chatbots with sensitive business data at all.

Don’t confuse ChatGPT with your accountant

Alarmingly, ChatGPT and Bard have already tempted people to share even more private and sensitive information than they ever did before. This is especially concerning when startups use AI tools that offer quick – and often free – services as replacements for traditional, expensive professionals.

For example, if you ask ChatGPT how it can “help me with my business accounts”, it will first warn you that it is “not a certified accountant or a substitute for professional advice.” It will then go on to list a number of ways it can assist you in return for your private financial and business information. These services include:

  • “Financial Statement Analysis: I can help you interpret and analyse financial statements, such as balance sheets, income statements, and cash flow statements. You can ask questions about specific line items, trends, or ratios to gain insights into your business’s financial performance.”

What’s more, while its answer recommends seeking professional advice, it does not warn you about the dangers of sharing this highly confidential, potentially regulated, business-critical data.

If you specifically probe ChatGPT about its data privacy, however, it will warn you to be “cautious and avoid sharing any sensitive personal, financial, or confidential information” because “it cannot guarantee the security of information shared in this chat.”

Nonetheless, this warning only appears if you ask for it; it is not offered alongside the original answer. Instead, the AI simply suggests what can be achieved by sharing your financial statements.

Why are AI companies using outdated and vulnerable infrastructure?

Essentially, generative AI tools are just the latest in a long line of cloud apps focused on turning users into products through direct and indirect means of data monetisation, such as targeted advertising.

But crucially, some huge AI companies have chosen to rely on a legacy infrastructure that has continually proven to be vulnerable to security attacks and accidental data leaks – just like the one seen on ChatGPT in March.

The AI boom, and the onslaught of press coverage celebrating its use, has coincided with a period of instability for cloud computing, as businesses and individuals have become increasingly concerned about the potential misuse and security of their data. Indeed, this concern was so widespread that some industry experts were predicting that 2023 could be ‘the year of public cloud repatriation’.

Web3, on the other hand, makes it impossible for anyone but the data owner to access their data. As such, Web3 data platforms such as OmniIndex effectively eliminate the risk of these attacks.

This is achieved through a combination of advanced cryptography and smart contracts, established security protocols, decentralised technology, and immutable data storage.
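To make the core principle concrete, here is a minimal sketch in Python of owner-held-key encryption, using the open-source `cryptography` library. It illustrates the general idea only – it is not a description of OmniIndex’s actual implementation: data is encrypted on the owner’s device with a key the platform never sees, so even a leak of what the platform stores exposes nothing readable.

```python
# A minimal sketch of the owner-held-key principle: data is encrypted
# before it ever reaches a platform, so the platform only stores
# ciphertext. Illustrative only, not any vendor's real implementation.
from cryptography.fernet import Fernet

# The key is generated and kept by the data owner; the platform never sees it.
owner_key = Fernet.generate_key()
cipher = Fernet(owner_key)

# Sensitive business data is encrypted client-side before upload.
record = b"Q2 cash flow statement: net operating cash +42,000 GBP"
ciphertext = cipher.encrypt(record)

# The platform stores only the ciphertext. Without owner_key, a leak of
# this value exposes nothing readable.
print(ciphertext)

# Only the owner, holding the key, can recover the plaintext.
assert cipher.decrypt(ciphertext) == record
```

The trade-off, of course, is that key management shifts to the data owner: lose the key and the data is unrecoverable.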

While this Web3 technology would fully protect users from data leaks like the ones we have already seen, it would also make it impossible for companies to monetise or otherwise use the data shared with their AI chatbots and tools.

Where next for AI data responsibility?

It is important to note that OpenAI has recently made efforts to reassure users that their data is safe on ChatGPT. Earlier this year, for example, new features were added that let users easily disable chat history and prevent their inputs from being used to train and improve the service.

Nonetheless, these conversations are still retained for 30 days, and during that window they remain vulnerable to exposure, whether through attacks or through system errors like the one that exposed other users’ conversation histories in March.

It is also our responsibility as users to consider carefully what data we share with these AI tools, and to make sure the benefit is worth any risk of exposure. For business owners, that includes making sure staff are educated on what not to disclose to a helpful-sounding chatbot.
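As a starting point for that education, a business can put a simple redaction step between staff and any third-party chatbot. The Python sketch below is illustrative only, with hypothetical patterns; a real policy would cover far more (client names, account numbers, contract references) and would not rely on regular expressions alone.

```python
import re

# Illustrative patterns only; a real policy would be far broader and
# tuned to the business (client names, account numbers, contract IDs...).
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "uk_phone": re.compile(r"(?:\+44\s?|\b0)\d{4}\s?\d{6}\b"),
    "card_like_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Mask obvious identifiers before text is sent to a third-party chatbot."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

print(redact("Invoice query from jo@example.com, card 4111 1111 1111 1111"))
# Invoice query from [REDACTED email], card [REDACTED card_like_number]
```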

However, if these AI tools are inviting people to share their sensitive and confidential data with them – and they are – then the companies behind them also have a responsibility to ensure that data is safe. Today, that is simply not the case.

Simon Bain, CEO of OmniIndex

A leader in the IT industry for over 20 years, Simon is a fast-paced entrepreneur, technologist and inventor, as well as a thought leader in simplified approaches to technology implementation. He identifies and develops innovative, market-ready applications and launches them by building successful sales strategies for key markets.
