The AI boom is surging, fueling discussions about future superintelligence, job displacement, and even existential risks. Central to these debates is one question: Can an AI become self-conscious? And more specifically, is this possible within the current paradigm of architectures like Large Language Models (LLMs)?
LLMs have become the focus of this discussion due to their accelerating sophistication. Let's address the core question immediately: Can LLMs be self-conscious? The short answer, grounded in how transformers work, is no. An LLM is a static, statistical model: a vast, fixed set of numbers. Its weights remain unchanged during inference, and it possesses no internal, dynamic state that could constitute self-awareness.
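The statelessness claim can be made concrete with a toy sketch. The class and weights below are hypothetical stand-ins, not a real LLM, but they capture the relevant property: inference is a pure function of frozen weights plus the input, so identical inputs yield identical outputs and nothing inside the model changes between calls.

```python
# Toy stand-in for an LLM (hypothetical, for illustration only):
# inference reads the frozen weights but never writes to them.
class ToyModel:
    def __init__(self, weights):
        self.weights = list(weights)  # fixed once "training" is done

    def infer(self, tokens):
        # A pure function: output depends only on tokens and weights.
        return [t * w for t, w in zip(tokens, self.weights)]

model = ToyModel([0.5, -1.0, 2.0])
weights_before = list(model.weights)

out1 = model.infer([1, 2, 3])
out2 = model.infer([1, 2, 3])

assert out1 == out2                     # same input, same output: no hidden state
assert model.weights == weights_before  # inference left the model untouched
```

Real transformers are vastly larger, and sampling adds randomness at the output stage, but the underlying computation has the same shape: the weights are read-only at inference time.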
A spark of self-awareness will not arise within the weights of a large language model.
However, modern AI agents are far more than just the LLM. While the LLM forms the impressive core, it is the surrounding components and the overall system architecture that hold the key to any emergence of self-consciousness.