In the 1980s, I was fortunate to be taught by an ICT lecturer at Watford College named Ian Watson, who shared a vision that now seems remarkably prescient. Watson predicted that, in the not-so-distant future, expert computer systems would be developed—machines capable of mimicking human experts by providing vast arrays of information across countless fields. He explained that people would interact with these systems much like interrogating a sage, extracting desired information with ease. At the time, he drew a crucial distinction between two types of knowledge: explicit knowledge, which can be codified, stored, and retrieved through technology, and tacit knowledge, which is experiential, deeply personal, and intangible—a form of wisdom that is learnt through doing and immersion rather than mere transmission.
Today, Watson’s vision has unquestionably materialised. We live in an era flooded with expert systems and artificial intelligence (AI) applications that generate prodigious volumes of explicit knowledge, readily accessible to anyone with an Internet connection. This burgeoning AI revolution has transformed the way humanity interacts with information, dramatically altering educational paradigms, research methodologies, and everyday problem-solving. We stand at the dawn of what can rightly be called the AI era: a time when knowledge is available more immediately than ever before.
There is undeniably much cause for optimism in this new world. The good news about AI is that it empowers humans to access complex and intricate information with unprecedented speed and accuracy. Tasks that once required hours, even days, of painstaking manual research can now be accomplished in mere moments. For teachers, students, professionals, and hobbyists alike, AI serves as a powerful pedagogical tool, augmenting the human capacity to learn, understand, and apply information. This technological boon allows the wise to become wiser, providing them not only with huge repositories of raw data but also with the analytical capabilities to make informed decisions across a wide spectrum of pursuits.
Moreover, AI democratises expertise, making specialist knowledge available to novices and the self-taught. Do-it-yourself practitioners in fields ranging from music to engineering can obtain expert advice and technical guidance at their fingertips, turning once-complex endeavours into achievable projects. For instance, an aspiring musician can use AI-driven platforms to refine their composition and harmonisation, while an amateur technologist might learn to troubleshoot and innovate without formal training. The boundaries of self-education are pushed ever further, and the possibilities for personal and professional growth multiply.
However, it would be naive—and potentially dangerous—to celebrate AI without recognising the significant risks that accompany it. The darker side of this technology lies in the potential for over-reliance and misuse. If users approach AI uncritically or without proper guidance, they risk becoming dependent on it to the detriment of their own intellectual development. This phenomenon is especially alarming amongst students, who may fall into the trap of treating AI not as a learning aid but as a shortcut to academic success. Instead of nurturing curiosity, critical thinking, and effortful learning, some may be tempted to offload entire assignments or exam answers to AI, thus bypassing the necessary cognitive engagement required for genuine education.
An example from my own academic environment illustrates this vividly. At the university where I teach, a student was caught cheating during an examination: having left the examination hall supposedly to visit the restroom, he was found consulting a mobile phone displaying answers generated by ChatGPT. This incident serves as a stark warning against the improper use of AI, which is not a substitute for learning, scholarship, or intellectual integrity. On the contrary, such misuse undermines the very purpose of education and jeopardises the credibility of academic institutions.
Furthermore, entrusting AI with conducting research on behalf of scholars is equally problematic. Genuine research is a painstaking process of enquiry, experimentation, critical analysis, and insight: a human endeavour that pushes forward the frontiers of knowledge in every discipline. While AI can undoubtedly assist by organising data, running simulations, or identifying patterns, it should never replace the researcher’s role as the driver and originator of enquiry. True academics take pride in their contributions, cultivating original ideas and validating findings through rigorous scrutiny. AI, in this context, ought to be viewed as a supportive instrument rather than a wholesale replacement for creative and independent thought.
Beyond education and research, the implications of intelligent systems stretch into myriad sectors, including healthcare, law, and governance, promising significant benefits but also raising ethical and societal concerns. Delegating decision-making to AI systems demands careful oversight and clear accountability. The human capacity for judgement, empathy, and ethical reasoning remains indispensable, particularly in contexts where nuanced understanding and moral considerations come to the fore.