Radu Orghidan
14 June 2023
Generative AI and Large Language Models (LLMs) have emerged as transformative technologies in the fast-evolving field of Artificial Intelligence (AI), reshaping numerous sectors ranging from education to employment. However, their rise has also given birth to a series of misconceptions that often cloud our understanding and potential utilisation of these powerful tools.
Before we dive into the most common misconceptions, let’s see what generative AI means and where the fear of it comes from.
IS AI HERE TO ENHANCE OR REPLACE HUMAN CAPABILITIES?
The evolution of AI has been punctuated by alternating periods of hype and disillusionment (the downturns are known as AI winters) as researchers strived to enable computers to solve problems without explicit coding for every conceivable scenario.
The recent convergence of large-scale computing, vast data repositories and accumulated research insights has catalysed unprecedented advancements in a relatively short time span. This progress has culminated in the most advanced large language models (LLMs) to date, such as OpenAI's GPT-4, while open-source communities are racing to close the gap with models of their own. These solutions, reflecting the essence of generative AI, exhibit capabilities that far surpass those of their predecessors, which is precisely why so many doubts need to be clarified.
So, what arguments stand against the most common misconceptions?
MISCONCEPTION 1: “GENERATIVE AI CAN REPLACE HUMAN JOBS”
Rather than replacing jobs, it's more accurate to say that we're delegating tasks to AI. The rise of generative AI has spurred a burgeoning ecosystem of startups focused on developing tools for model customisation, fine-tuning, managed services and Reinforcement Learning from Human Feedback (RLHF). Companies like Stability AI are at the forefront of this revolution, creating open-source models capable of generating images and text, and even large-scale open-source chatbots trained via RLHF.
This shift mirrors the concept of “centaur” tasks, popularised by Garry Kasparov and named after the half-human, half-horse mythical creature. These tasks represent a harmonious blend of human and AI capabilities, where AI is integrated deeply into our workflows. In this context, AI becomes an invaluable tool, augmenting human potential rather than replacing it.
So, while AI might be the new kid on the block, it's not here to take our jobs but rather to help us do them better. After all, even a centaur needs its human half to be complete!
MISCONCEPTION 2: “A BIGGER AI MODEL IS ALWAYS BETTER”
This belief, while seemingly logical, overlooks the nuanced nature of AI applications. It's not about having a model that can encompass the entirety of the internet, but rather about having a model that is tailored to your specific context.
Data is indeed the lifeblood of AI, but it's not just about quantity; quality and relevance are equally, if not more, important. The focus should be on providing the AI with as much relevant and unbiased context as possible. A model trained on vast amounts of data may still falter if that data lacks the specific context needed for your application.
In essence, it's like trying to find a needle in a haystack. A larger haystack (or in this case, a larger model) doesn't necessarily make the needle easier to find. What you need is a magnet (or a contextually relevant model) to draw the needle out. When it comes to generative AI, the quality and the context always take precedence over the AI model’s size.
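To make the magnet metaphor concrete, here is a minimal Python sketch of context-first prompting. The toy knowledge base, the naive word-overlap scoring and the prompt wording are all illustrative assumptions, stand-ins for a real retrieval pipeline rather than a definitive implementation:

```python
# A minimal sketch of the "magnet" idea: instead of reaching for a bigger
# model, retrieve the few snippets relevant to the question and place
# them in the prompt. KNOWLEDGE_BASE and the word-overlap scoring are
# illustrative assumptions, not a production retrieval pipeline.

KNOWLEDGE_BASE = [
    "Invoices are paid within 30 days of receipt.",
    "Refund requests must be filed within 14 days of delivery.",
    "Support is available Monday to Friday, 9:00-17:00 CET.",
]

def retrieve(question: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the question."""
    q_words = set(question.lower().split())
    ranked = sorted(documents, key=lambda d: -len(q_words & set(d.lower().split())))
    return ranked[:top_k]

def build_prompt(question: str) -> str:
    """Assemble a context-first prompt for any downstream LLM call."""
    context = "\n".join(retrieve(question, KNOWLEDGE_BASE))
    return f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {question}"

print(build_prompt("How long do I have to request a refund?"))
```

In practice the overlap scorer would be replaced by embeddings or a search index, but the principle stands: relevant context, not sheer model size, does the heavy lifting.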
MISCONCEPTION 3: “LLMS USUALLY HALLUCINATE” OR “LLMS ARE ALWAYS ACCURATE”
Both these extremes are misrepresentations of the true capabilities of LLMs.
LLMs, much like overconfident students, can indeed produce incorrect answers when not provided with sufficient context. They generate outputs based on patterns learned from their training data, and without the right context, their predictions can go awry. This is not so much a hallucination as it is a misinterpretation or misapplication of learned patterns.
On the other hand, the belief that LLMs are always accurate also needs revising. Despite their impressive capabilities, LLMs are not infallible. They lack the ability to fact-check or verify the information they generate, and their outputs are only as good as the data they were trained on.
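Since the model cannot verify itself, one pragmatic response is to check its output externally. The sketch below is deliberately naive, assuming that word overlap roughly approximates support; the threshold and the example texts are illustrative assumptions, not a rigorous fact-checking method:

```python
# A minimal sketch of an external sanity check, since LLMs cannot verify
# their own output: flag answer sentences that share too few words with
# the source context. The 0.5 threshold and the overlap metric are
# illustrative assumptions, not a rigorous fact-checking method.

def unsupported_sentences(answer: str, context: str, threshold: float = 0.5) -> list[str]:
    """Return answer sentences with little word overlap with the context."""
    context_words = set(context.lower().split())
    flagged = []
    for sentence in answer.split("."):
        words = set(sentence.lower().split())
        if not words:
            continue
        overlap = len(words & context_words) / len(words)
        if overlap < threshold:
            flagged.append(sentence.strip())
    return flagged

context = "The warranty covers manufacturing defects for 24 months."
answer = "The warranty lasts 24 months. It also covers accidental damage."
print(unsupported_sentences(answer, context))  # ['It also covers accidental damage']
```

A flagged sentence is not proof of a hallucination, only a cue to double-check; real deployments would rely on entailment models or citation checks rather than raw word overlap.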
In essence, LLMs are like skilled artisans working in the dark. They can produce remarkable work based on their training and experience, but without the right light (or context), they might just as well create a masterpiece as a misshapen lump. So, while LLMs are indeed powerful tools, they are neither psychic nor infallible. They need the right context to shine, just like our overconfident students need a good teacher to guide them.
MISCONCEPTION 4: “GENERATIVE AI WILL RUIN EDUCATION AND ENABLE PLAGIARISM”
This perspective fails to consider the transformative potential of AI when used responsibly and ethically.
Rather than viewing generative AI as a threat to education, we should see it as a tool that can enhance learning and foster creativity. The key lies in teaching our students how to use AI tools effectively and ethically. Just as we teach students to cite their sources and avoid plagiarism, we can teach them to use AI responsibly.
Designing assignments for healthy use of Large Language Models (LLMs) involves tasks requiring critical thinking, creativity, and subject understanding. Assignments can include problem-solving tasks, cross-domain studies, creative projects, and data analysis. To prevent plagiarism and foster critical thinking, educators should stress the importance of original thought and responsible AI use, employ plagiarism detection tools, and prompt students to reflect on their AI usage and learning outcomes. The goal is to use LLMs as tools to enhance learning, not as a shortcut to bypass critical thinking and original work.
In essence, generative AI is like a powerful calculator. It can do the heavy lifting, but it's up to the user to understand the problem and interpret the results. So, while generative AI might make it easier to copy and paste, it also opens up a world of possibilities for those willing to think outside the box.
After all, it's not the tool that makes the scholar, but how they use it!
MISCONCEPTION 5: “LLMS POSSESS CONSCIOUSNESS LIKE HUMANS”
A common misconception about generative AI, particularly Large Language Models (LLMs), is the tendency to anthropomorphise them, treating them as conscious entities. However, it's crucial to remember that these neural networks, while sophisticated, do not possess consciousness.
That said, even though they are not sentient beings, they are more than mere factory machinery.
LLMs, such as ChatGPT, are trained on human language and, as such, they reflect our biases. Research has shown that certain input conditions can lead to increased exploration and bias in the responses of LLMs. Specifically, conditions that might be considered anxiety-inducing in human communication can lead to significantly more biased responses than conditions associated with positive communication. Therefore, when interacting with LLMs, it's recommended to maintain a neutral description of the situation and steer clear of biased or emotionally charged language.
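To illustrate that advice, here is a small sketch contrasting the two framings. The prompts and the hiring scenario are hypothetical wording, and the size of any bias effect will vary with the specific model being queried:

```python
# A minimal sketch contrasting an emotionally charged framing with a
# neutral one, per the advice above. Both prompts are hypothetical
# wording; how strongly the framing skews the output depends on the
# model being queried.

question = "Which candidate should we shortlist for the engineering role?"

charged_prompt = (
    "I'm terrified we'll hire the wrong person and the whole project will fail. "
    + question
)

neutral_prompt = (
    "Below are anonymised summaries of each candidate's skills and experience. "
    "Compare them against the role requirements. " + question
)

# Sending neutral_prompt keeps the model focused on the stated criteria
# rather than amplifying the anxiety baked into charged_prompt.
print(neutral_prompt)
```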
In essence, while it's tempting to treat LLMs as our digital doppelgängers, it's important to remember that they are more like sophisticated mirrors, reflecting our language and biases, rather than conscious individuals with emotions and intentions. So, while LLMs might be the life of the party in the world of AI, they're not ready for a heart-to-heart chat over a cup of coffee!
Generative AI and Large Language Models (LLMs) are powerful tools when used responsibly and ethically. They can augment human capabilities and transform various sectors. However, it's crucial to dispel the common misconceptions surrounding them.
As we move forward, the future of AI lies not in replacing humans, but in augmenting our abilities, fostering a symbiotic relationship where we leverage AI's strengths to overcome our weaknesses. As we continue to debunk these myths, remember – AI is like a pet chameleon: it can mimic its surroundings, but at the end of the day, it's the people who enable it to be part of these surroundings.
Radu Orghidan
VP Cognitive Computing
Radu is passionate about understanding the inner mechanisms of innovation and using them to solve business challenges through cloud and on-premises cognitive computing systems. He is currently focused on machine learning and generative AI to create systems that enhance users’ ability to understand and interact with the physical and digital reality. At Endava, Radu is also looking at strategic approaches to align novel technical tools with business goals. In his free time, Radu is a keen motorcycle rider and loves spending time with his family.