AI Was Here All Along, Waiting for Us to Let It Happen
- Francisco José Torres Ribeiro
- Jan 16
- 6 min read
- Updated: Apr 10

Author: Francisco José Torres Ribeiro, AI Engineer and Researcher at NYU Abu Dhabi, PhD in Informatics Engineering
Abstract
In this article, I will be using AI’s present impact and future potential as an excuse to briefly talk about its past. I think its history is often overlooked. And understandably so. After all, everyone likes to enjoy the moment and be excited about the future.
However, because I am a researcher working on AI and Software Engineering, I inevitably enjoy geeking out and elaborating on how we got here. Granted, if you have a technical background in these areas, you may find my words oversimplify the matters at hand. Still, I stand by this attempt to frame the subject in a simple yet effective way that motivates readers to learn more.
Why is AI Buzzing Now? Looking Back to Understand the Present and Future
Artificial Intelligence (AI) seems to be everywhere these days. People talk about its potential to change industries, revolutionize our daily lives, and even replace some jobs. Governments, companies, and individuals are all trying to figure out how to navigate this new wave of technological change. But while we're busy debating AI’s future, it's worth pausing to ask:
Why is AI such a big deal now? After all, the idea of intelligent machines is not new.
AI has been part of our collective imagination for decades—appearing in books, movies, and TV shows, from HAL 9000 in “2001: A Space Odyssey” to the Replicants in “Blade Runner”. Yet it’s only in recent years that AI has made a huge leap from fiction to reality. What changed? Why is this moment different from all the hype that came before?
The answer lies in two key factors: an abundance of resources and a new way of harnessing language.
Why Now? Resources Unlock AI’s Potential
One of the main reasons for the AI explosion is the availability of vast resources: data, computing power, and advanced techniques (let's call them "smart algorithms"). For years, we have been teaching machines to follow strict instructions—think about how software applications helped us sort through mountains of documents or handle complex calculations. But as useful as these systems were, they could only tackle highly structured problems, where clear rules could be programmed in advance.
However, the real breakthrough came when researchers started thinking:
What if computers could learn to solve problems on their own? To do this, we needed two things—plenty of data and powerful machines. The past decade delivered both. You might remember the buzz around “Big Data.” Companies scrambled to collect every bit of information they could, often without a clear plan for how to use it. Meanwhile, computers became more powerful and better equipped to process these enormous datasets in reasonable timeframes.
The mathematical foundations for machine learning have existed for decades. But until recently, the computational horsepower and massive datasets needed to make those theories practical simply weren’t there. Once they were, AI models began producing results that seemed almost magical.
Structured vs. Unstructured Tasks
To give you an example, computers have long excelled at structured tasks—those with clear rules, like sorting data in a spreadsheet or balancing a budget. But human beings are good at something much harder: unstructured tasks. Think about writing a story, making a decision based on gut feelings, or even carrying out a conversation. These tasks don’t follow rigid rules. They are more fluid, messy, and, ultimately, human.
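To make the contrast concrete, here is a minimal sketch of a structured task of the kind mentioned above: sorting records by a clear, pre-stated rule. The data and field names are purely illustrative; the point is that the entire task can be captured by a rule written down in advance.

```python
# A structured task: the rule ("sort expenses from highest to lowest")
# can be stated precisely before we ever see the data.
expenses = [
    {"item": "rent", "amount": 1200.0},
    {"item": "groceries", "amount": 310.5},
    {"item": "internet", "amount": 45.0},
]

# sorted() applies the fixed rule mechanically to any input of this shape.
by_cost = sorted(expenses, key=lambda e: e["amount"], reverse=True)

for entry in by_cost:
    print(f"{entry['item']}: {entry['amount']:.2f}")
```

No comparable snippet exists for an unstructured task like writing a story: there is no fixed rule we could hand to `sorted()`, which is exactly why those tasks resisted automation for so long.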
This is where AI has made the most dramatic strides. Machines are now better than ever at handling unstructured tasks, and that leads us to the second key factor behind AI's meteoric rise: language.
The Language Ingredient: Understanding Us Better Than Ever
AI has improved in an area that most of us take for granted—language. Recent advancements in language models have been stunning. By feeding vast amounts of text to these models and allowing them to process it repeatedly, we’ve seen the rise of something called "Large Language Models" (LLMs). These models are the driving force behind chatbots like ChatGPT and Google Gemini, which seem capable of doing almost anything we ask.
The magic lies in the fact that these models don’t just repeat words or follow pre-set instructions. Instead, they learn the nuances of language itself. They understand context, can generate meaningful responses, and even provide insights into problems that don’t have a straightforward solution. With a little extra fine-tuning, LLMs become incredibly powerful tools, able to assist with everything from writing code to answering complex legal questions.
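The core idea of learning language patterns from text can be illustrated with a deliberately tiny toy: a bigram frequency table that predicts the next word from the previous one. To be clear, real LLMs are deep neural networks trained on vast corpora and work very differently in detail; this sketch (with a made-up one-line corpus) only conveys the general intuition of learning "which words tend to follow which" from data.

```python
from collections import Counter, defaultdict

# Toy one-sentence "training corpus" (illustrative only).
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequent continuation seen in the training text."""
    candidates = following[word]
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("on"))  # prints "the" -- the only word ever seen after "on"
```

Where this toy counts word pairs, an LLM learns billions of parameters that capture far longer-range context, which is why it can generate coherent paragraphs rather than just echoing frequent pairs.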
Where Does That Leave Us?
So, while AI might seem like the latest tech fad, the truth is that it’s been in the works for a long time. The key difference now is that we finally have the right mix of data, computational power, and language mastery to make AI systems genuinely useful in everyday life.
What we’re witnessing now isn’t just hype—it’s the realization of decades of effort. The future of AI will undoubtedly bring even more changes, but it’s essential to remember how far we’ve come.
Sometimes, the future is really about unlocking the potential that’s been there all along.
Conclusion
With the exception of the introductory and final paragraphs, the text above was mostly written by AI, based on an afternoon of note-taking and brainstorming the points I find fundamental. There are some tweaks from me here and there.
Using an AI tool, in this case ChatGPT, does not absolve me of responsibility for the text above, and I stand by every word. No matter how fluent I am in English, these AI systems will always be more eloquent than me. Undoubtedly, they will also produce text much faster. This combination enables me to iterate far more quickly on different versions of any idea I want to express.
Going even further, these tools allow me to “talk to” my text and refine it meticulously. As a consequence, I can focus deeply on the unfiltered note-taking that is the foundation of this text, writing down as many ideas as come to mind in the moment. In parallel with what I said about people being great at unstructured tasks, I believe these tools can unlock more of our potential: we can engage much further in the messiness that is our imagination and creativity, and leave the structured task of calculating the best words to convey our wonderful (and sometimes scary) ideas to machines.
This leaves me wondering if androids will one day wake up from their dreams of electric sheep and begin to think of electric ideas…
Biography of the Guest Expert
Francisco is currently a Postdoctoral Associate at NYU Abu Dhabi. He completed a PhD in Informatics Engineering from the University of Minho (Portugal), specializing in areas like Large Language Models (LLMs), automated program repair, and software testing.
He also completed research internships at the National Institute of Informatics in Tokyo and holds an MSc in Informatics Engineering with a focus on intelligent systems.
Francisco has been a university teaching assistant at the University of Minho since 2019, and his technical expertise spans programming languages like Python, Java, and Haskell. He co-supervises theses on LLMs for the Automatic Repair of Type Errors (PhD) and on the Performance and Energy Efficiency of LLMs (MSc).
Francisco has authored multiple papers on Large Language Models for Automated Program Repair and Energy Efficiency across Programming Languages that were presented at prestigious conferences like SLE, SPLASH, and ICSE.
His speaking experience includes being an invited lecturer at New York University Abu Dhabi (UAE), where he presented “LLM-Powered Debugging: Insights from Type Errors and Legal Applications”; a lecturer at the Sustrainable Summer School 2023 in Coimbra (Portugal), where he addressed the use of Large Language Models for program repair; and a participating speaker in Green Software Lab workshops in both Braga and Coimbra.
He has served as a program committee member and subreviewer for major conferences such as the International Joint Conference on Artificial Intelligence (IJCAI 2023), the International Conference on Software Testing (ICST 2021), the International Conference on Quality, Reliability, and Security (QRS 2022, 2023, 2024), and the International Symposium on Functional and Logic Programming (FLOPS 2024).