Unit8’s mission is to help non-digitally-native companies embark on their analytics journey and turn data into value. A key component of that journey is bridging the “people” gap (i.e. change management) to ensure a harmonised user-to-tool fit at the finish line. However, as the barriers to AI vanish, more and more non-technical profiles will (have to) engage with these tools. Herein lies the issue: there is a substantial gap between technical and non-technical profiles in their understanding of what AI actually is. After all, the only frames of reference most people have (if they have never worked with or studied AI) are the news and Hollywood, both of which have a flair for exaggerating a “doomsday” narrative. Combine that preconceived notion with the release of the astonishing ChatGPT, and we can begin to see how fear, rather than curiosity, can come to dominate the dialogue. We’re here to reframe that narrative, set the story straight and provide clarity on what exactly generative AI is and is not.
Before diving into some myth-busting, let’s reach some common ground on definitions…
It’s not a question of replacement but rather augmentation. The current state of generative AI is in no shape to take full ownership of a task or process. It can, however, serve as an exceptional ally in tackling mundane or repetitive work, freeing up valuable time and resources for individuals to focus on more critical and strategic aspects of their roles. It can be an excellent companion for completing the work you’d rather not do – but it always needs a person to verify its output. After all, it’s a next-best-word predictor, not a critical thinker.
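The “next-best-word predictor” point can be made concrete with a deliberately tiny toy model – a bigram frequency table, vastly simpler than anything behind ChatGPT, but illustrating the same core idea: the model picks the statistically most likely next word, with no notion of truth or reasoning.

```python
from collections import Counter, defaultdict

# Toy illustration only (not how large models are actually built):
# a bigram "language model" that predicts the next word purely from
# frequency counts over a tiny made-up corpus.
corpus = (
    "the model predicts the next word "
    "the model has no understanding "
    "the model predicts text"
).split()

# Count which word tends to follow which.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the statistically most frequent next word, or None."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))    # -> "model" ("model" follows "the" most often)
print(predict_next("model"))  # -> "predicts" (2 of its 3 occurrences)
```

The prediction is pure statistics over past text: the model has no way to check whether “the model predicts” is true, which is exactly why a human verifier remains essential.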
Despite ChatGPT’s stunning natural-language capabilities, it cannot be considered to have “general” intelligence. Current generative AI models are limited in scope and subject to logical flaws, such as hallucinations, where the model generates information that may not be accurate or grounded in reality. These limitations highlight the distinction between the current state of generative AI and the broader concept of artificial general intelligence.
This is false. These models are riddled with logical flaws and risk hallucinating completely made-up or partially incorrect facts and statements. A human-in-the-loop protocol is needed to mitigate the risks of relying solely on AI-generated content. By incorporating human oversight and verification, businesses can ensure that the output is factually accurate and avoid the unnecessary risks that arise from the inherent limitations of generative AI models.
Beware of free-to-use, publicly available models like ChatGPT, as they may use any data entered by the user to further train and improve the model. A well-known case was the Samsung leak, where employees shared confidential internal data, compromising the company’s competitiveness. To mitigate this risk, ensure that any data shared with ChatGPT or a similar model has already been made public, minimizing the potential impact of data exposure or unauthorized use. By being mindful of what they share, businesses can prioritize data privacy and protect their confidential information.
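One practical mitigation is to scrub obviously sensitive tokens from prompts before they ever reach a public model. The sketch below is illustrative only – the two patterns are examples, not an exhaustive safeguard, and redaction reduces rather than eliminates exposure risk:

```python
import re

# Example patterns for two common kinds of sensitive tokens.
# Real deployments would need a much broader, policy-driven set.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api_key": re.compile(r"\bsk-[A-Za-z0-9]{10,}\b"),
}

def redact(prompt: str) -> str:
    """Replace matches of each sensitive pattern with a placeholder."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED_{label.upper()}]", prompt)
    return prompt

print(redact("Contact jane.doe@acme.com, key sk-abc123XYZ789."))
# -> Contact [REDACTED_EMAIL], key [REDACTED_API_KEY].
```

A filter like this can sit in front of any third-party model as a last line of defence, but the primary control remains not pasting confidential material in the first place.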
Based on the previous myth, it’s easy to assume this one is true. However, there are enterprise solutions such as Microsoft’s Azure OpenAI Service, which makes ChatGPT available to your company without any risk of the data being shared with Microsoft or OpenAI. Thus, it is possible to use sensitive data with generative AI, as long as it’s an in-house instance.
We hope this has provided some clarity on what exactly generative AI is and, more importantly, reduced some “doomsday” anxiety. Unit8 is always happy to discuss any questions you may have regarding generative AI. As certified AI partners of Microsoft, we’re in a strong position to show the very best of what generative AI can do for your organization. We encourage you to reach out to us to explore how generative AI can empower your business and drive innovative solutions.