We recently posted about some of UniVista’s concerns regarding ChatGPT; however, it has come to our attention that some readers may not be very familiar with the technology, its benefits, or, especially, its risks.  To help clarify things, here is a basic rundown of what ChatGPT is, what it does, and where and why you need to tread carefully with the technology.

ChatGPT is an Artificial Intelligence (AI) chatbot developed by OpenAI and released in November 2022.  ChatGPT (Chat Generative Pre-trained Transformer) is an app (free in its basic version, with an enhanced version available via a monthly subscription) that can assist users with writing tasks by suggesting ideas, helping create outlines, and even suggesting dialogue.  Initially, ChatGPT’s AI language model was trained on a large body of text from numerous sources, including Wikipedia, books, news articles, and scientific journals.  However, that was only the start of its education.  The more the platform is used, the more it will “know,” and some of that knowledge will come directly from users like you.

The ChatGPT app can be useful for writing tasks like blogs and reports, and it can even assist with correspondence like emails and texts.  However, as we pointed out in our previous blog and in the communication from our president (link to blog here), though the technology is exciting, there are still many bugs in the platform, in addition to many security risks.  Because, as mentioned above, ChatGPT absorbs user-fed information into its data, inadvertent sharing of confidential information can occur. Even under the most innocent of circumstances, such sharing could violate privacy agreements you may have in place.

Something to keep in mind is that ChatGPT isn’t perfect; it only “knows” what it has been taught.  Unfortunately, what it has been taught may sometimes be wrong, or incorrectly interpreted, and it may give you inaccurate information or suggestions. Another issue to be aware of is that the platform collects user information, including your IP address, your browser type and settings, and data about your interactions on the site, such as the types of content, features, and actions you engage with.  Additionally, ChatGPT saves your chat history, which brings us back to our warning about inadvertently sharing sensitive information.  In fact, a host of companies, including JPMorgan Chase, Amazon, Verizon, and Accenture, have voiced concerns about the security of the platform and have barred their employees from using ChatGPT for work.

So, while the technology is exciting and full of potential, it is not without its risks.  Time will tell, as more bugs are discovered and addressed, whether it is something you can rely on in your daily business practices.



As a final note: it seems the old adage that “every action has an equal and opposite reaction” is true.  To help determine whether a body of text was written by AI or by a human, a few new “AI detectors” have recently hit the market.  Essentially, these detectors use various means to judge whether a piece was human-written or AI-generated.

As one would expect, the written-AI phenomenon has caused an issue in academics, where it has become important to ascertain whether a student actually wrote the essay or term paper they turned in.  To that end, a 22-year-old Princeton student, Edward Tian, developed GPTZero. GPTZero claims a 98% detection accuracy rate, and as an exercise we ran this human-written article through the platform and were given this result:




I’m betting that we have all heard of, and probably even played with, ChatGPT. If you are unaware, ChatGPT is a natural-language AI processing tool developed by OpenAI, with multi-billion-dollar investment from Microsoft and other tech firms, that can answer questions and assist you with tasks like composing emails, writing essays, checking grammar, and writing code. (But it will not pick your NCAA March Madness brackets. I tried!)

The use of ChatGPT has become increasingly popular as we have all discovered how it can help us through our busy days. For instance, it even helped write this article! While it has numerous benefits, it also has security implications that cannot be ignored. For me, the most significant concern is data privacy. ChatGPT requires a lot of data to train and improve its performance, and the information used could be personal, sensitive, and confidential. According to ChatGPT itself, “It is essential to understand that the conversations on the platform are not private, and the data can be collected, analyzed, and used by third-party companies for various reasons. Therefore, users should be cautious about the information they share on the platform.”

ChatGPT is trained using all the data on the internet, real and false. According to ChatGPT, this data creates “…the potential for the platform to replicate existing biases, prejudices, and falsehoods present in the data used to train the system. For instance, if the system is trained on a dataset that contains biased language or discriminatory language towards a specific group of people, the platform may generate similar language when responding to queries related to that group. This could perpetuate and amplify existing prejudices and biases, which could have significant social and ethical implications.”

For all its wonderful potential, ChatGPT is still very new and in development, and it is therefore still buggy. For instance, just a few days ago a bug was discovered that allowed users to see other users’ searches on the platform. The bug was caused by an error in the platform’s code that resulted in users’ search queries being shared with other users. This was a significant security breach, as it compromised users’ privacy and could potentially expose sensitive information. Consequently, OpenAI shut down ChatGPT temporarily and disabled the site’s history function when it was brought back online.

Of course, hackers are not hesitating to act on ChatGPT’s potential. Beware! There are already tools circulating that are supposed to make it easy to integrate ChatGPT into your business. Many of these tools are front ends that intercept your data and steal your login information. Again, according to ChatGPT, “…attackers are using this platform to generate convincing phishing messages that could be used to trick users into divulging sensitive information.”

So should ChatGPT be used in my business? The answer, again to quote ChatGPT, is, “Use this platform carefully. Businesses and Developers must implement robust security measures to prevent bugs and protect users’ data, ensure that the platform is not biased or used for malicious purposes, and educate users on best practices to safeguard their accounts. By doing so, we can harness the potential of ChatGPT while minimizing the security risks associated with its use.”

We at UniVista, not ChatGPT, suggest the following measures:

  1. Do not use sensitive or privileged data in ChatGPT searches.
  2. Only use ChatGPT through its web interface, and do not use any unverified third-party apps. Microsoft and other companies are busy integrating ChatGPT into their programs; wait for these official integrated releases. You can see a demo of Microsoft’s integration into its Office suite here: https://www.youtube.com/watch?v=ebls5x-gb0s
  3. Do not use a business account to access ChatGPT’s search function.
  4. Ensure that multi-factor authentication is used with any account that accesses ChatGPT.
  5. Double-check any output for accuracy. When I asked ChatGPT to use real examples in its responses to my questions, it cited several examples I could not verify online.
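To make the first measure concrete, here is a minimal sketch (ours, not ChatGPT’s) of pre-screening a prompt with simple pattern matching before pasting it into any chatbot. The patterns, labels, and sample text are purely illustrative assumptions; a real deployment would need a much broader and carefully reviewed set of rules.

```python
import re

# A few illustrative patterns for common kinds of sensitive data.
# This list is nowhere near exhaustive -- it only demonstrates the idea.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each match with a labeled placeholder before the text leaves your machine."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

# Hypothetical prompt a user might otherwise paste straight into a chatbot.
prompt = "Draft a reply to jane.doe@example.com about invoice 4431; her SSN is 123-45-6789."
print(redact(prompt))
```

Even a simple screen like this catches the obvious leaks; anything it flags is a cue to stop and rewrite the prompt without the sensitive details.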

ChatGPT has amazing potential to shave hours off our days and make us all more productive, but it is still new and buggy, with major security implications. Please use it carefully. If you do, I think you’ll discover its potential in your business.

Have fun, let me know what you think, and ask us any questions that you have. We’re here to help.