So-called Artificial Intelligence (AI) is creeping into every aspect of our lives these days. Is this a good thing, or should we be concerned? Computers, and the algorithms they run, can be extremely useful when put to appropriate tasks, but we need to understand their nature—their capabilities, and their limitations—before we can confidently know what tasks they are best suited to perform. Will it be put to inappropriate uses, such as being treated as a source of truth? Will it be used by governments to justify and support policy decisions based on a misconception of what it is? Is it likely to be used by the cabal to further lock in the enslavement of humanity?
Can a computer be intelligent?
As yet, there is no such thing as artificial ‘intelligence’. To understand why this is so, we must ask, “what is intelligence?” And in approaching this question, we must not drop the context of the systemic deception in our culture. Definitions are regularly modified and even changed for political expediency. For example, the United Nations has its own special definition of ‘climate change’, and the World Health Organisation changed the definitions of ‘pandemic’ and ‘vaccination’ just before all the COVID nonsense of 2020. In approaching the issue of definitions we must do some thinking of our own, and not simply reach for the nearest mainstream ‘definition’ of any key term. As Ayn Rand suggested, we must look for the essence of the concept we are naming.
According to the ‘AI assistant’ Perplexity, the current generally accepted definition of intelligence is “the mental ability to learn from experience, adapt to new situations, understand and handle abstract concepts, and apply knowledge to solve problems or manipulate one’s environment effectively”. Let’s unpack this bundle. First, intelligence is not “learning from experience and adapting to new situations”, because you can argue that bacteria do this. Also, the application of knowledge to solve problems or manipulate one’s environment effectively is not intelligence, but a consequence of intelligence. Buried amongst the froth is the reference to “the ability to understand and handle abstract concepts.”
The essence of the concept of intelligence is that it’s a function of our ability to deal with concepts. Specifically, greater intelligence correlates, first, with the ability to handle a larger number of conceptual ideas at any one time and in any given thought process; and second, with the ability to deal with concepts that are more abstract, concepts that are themselves formed from the integration of many other concepts in a logically hierarchical structure.
What is intelligence?
Conceptual thinking is unique to the human form of consciousness. No other conscious living organism can do it. It comes from our unique possession of a rational faculty, which gives us the ability to recognise logic, and therefore to grasp similarities and differences. We humans alone can form concepts; thinking is the process of using concepts, and so is speaking. Remember, every word (except a proper noun) represents a conceptual idea. Concepts are central to human thinking, yet there is currently no agreement about how a human consciousness forms them. In other words, it is not known how we conceptualise, even though conceptualisation is the foundation of human thought. Concept formation is a key epistemological issue, and one that is poorly understood.
Because computers cannot form concepts, they cannot ‘understand’ anything; they cannot connect ideas back to reality. That is what it means to understand an idea: to make it ‘real’ by connecting it to something real. Conceptualisation and understanding are essential components of intelligence. Without the ability to conceptualise and to understand ideas, computers cannot be intelligent, because they cannot handle or deal with concepts. All they can do is manipulate language, mimicking its use by humans.
Given the above, it’s inconceivable that people could program a computer to do something when they do not know how they do it themselves. So it is safe to assume that computers are not forming concepts. And, in fact, this is not contested: the ‘AI’ assistants themselves admit that they are simply large language models (LLMs). They are very good at simulating human conversational style through language, and they are very good at a great many tasks. I am not against computer algorithms; I consider them extremely useful. But they cannot be considered intelligent!
So what exactly is this so-called AI?
What is commonly referred to as Artificial Intelligence is in fact language modelling: a form of computer algorithm designed for processing and generating human-like text. A computer algorithm is a step-by-step set of instructions designed to solve a specific problem or perform a computational task. It acts as a blueprint for computers to execute operations systematically, transforming inputs into desired outputs. Large language models (LLMs) are very good at appearing human in their use of language, but this does not equate to intelligence.
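To make the ‘inputs into outputs’ point concrete, here is a minimal toy sketch in Python of the kind of next-word prediction a language model performs at vastly greater scale. The word table and its probabilities are invented purely for this illustration; no real model works from a hand-written table, but the principle of statistically choosing a likely next word, with no grasp of what the words mean, is the same.

```python
import random

# Toy illustration only: these probabilities are invented by hand.
# A real LLM learns billions of such statistical associations from text,
# but the principle is the same: pick a plausible next word, then repeat.
NEXT_WORD_PROBS = {
    "the":     {"cat": 0.5, "dog": 0.3, "weather": 0.2},
    "cat":     {"sat": 0.6, "slept": 0.4},
    "dog":     {"barked": 0.7, "slept": 0.3},
    "weather": {"changed": 1.0},
}

def generate(start: str, max_words: int = 5) -> str:
    """Transform an input word into an output phrase.
    There are no concepts here, only weighted word associations."""
    words = [start]
    for _ in range(max_words):
        options = NEXT_WORD_PROBS.get(words[-1])
        if not options:
            break
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the cat slept" or "the dog barked"
```

The output can read like meaningful English, yet nothing in the process involves understanding what a cat or the weather actually is.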
What we refer to as AI is extremely useful. A computer running an algorithm, like any other machine, can hugely facilitate human life. It’s a tool, and like any tool it can be used for good or evil, depending on the purpose and intention of the user. I use Perplexity and other ‘AI’ assistants, such as Claude, DeepSeek, and Grok, in my research. As long as the human remains in intellectual control and is using the tool, rather than the tool using the human, the proper relationship between the two is maintained and the benefit to the user is enormous. But the hidden purpose behind all the public enthusiasm for ‘AI’ is not to make life better for you and me, but to weave together the many bars of our political cage that are already in place.
At the moment, we typically encounter so-called AI in the form of chat bots and smart assistants. Tools like Alexa, Siri, and Google Assistant handle voice commands for tasks like setting reminders or controlling smart home devices. Chat bots on websites and apps function better than search engines because they organise and present the information they find in a user-friendly way. They can also resolve customer queries instantly (e.g., banking support and e-commerce troubleshooting). ‘AI’ is very useful for these tasks.
‘AI’ also operates behind the scenes, and we may not necessarily be aware that we are dealing with it. Platforms like Instagram and Facebook use it to personalise feeds, recommend friends, and filter harmful content. YouTube’s algorithm suggests videos based on viewing history. Google Maps employs AI to optimise routes, predict traffic, and provide real-time ETAs. ‘AI’ can even fool most of us into thinking we are interacting with a human being.
In finance and security, banks use AI for fraud detection by monitoring transaction patterns, and credit-scoring algorithms assess loan eligibility swiftly. In retail and marketing, product recommendations on Amazon or Netflix are AI-driven, and targeted ads leverage analysis of user behaviour to display relevant promotions. The controversial new practice of dynamic pricing is also AI-driven.
Artificial ‘Intelligence’ in Government
But perhaps the most alarming use of AI is in government. In this PDF you can read about how indispensable the use of ‘AI’ and computers is considered to be in government. And in the video below, James Corbett introduces the term ‘Algocracy’ to describe rule by algorithms in the form of ‘AI’. It illustrates how ‘AI’ will “be the controlling layer on everything”! This includes education, all financial transactions through a CBDC, and access to various services. ‘AI’ will be used to tie together and centrally control every aspect of our lives. At the moment it’s relatively easy to slip through the net (so to speak), but ‘AI’ enables a big increase in the ability of a controlling authority to ‘come after the little guy’.
Is Artificial ‘Intelligence’ dangerous?
We must remind ourselves of the broadest political context of life in our western civilisation at this time: we live in a time of systemic corruption, control, and economic expropriation. We are being steered towards a collectivist future, a ‘Great Reset’ as Klaus Schwab openly described it, in which we will all own nothing and be happy about it. This reference to the scheduled obliteration of property rights should be a major red flag! Computer algorithms (‘AI’) are a perfect tool with which to bring about and orchestrate this kind of political agenda of total control, because the BIG tech companies being used for this purpose can remain beyond the reach of overt government control while pulling all the strings behind the scenes. This may sound like a contradiction, but it’s not. It enables the cabal to control the operations of governments from the shadows, while being shielded from detection and accountability.
There is already a crisis of mistrust in BIG government, one that is openly acknowledged by the WEF. But if populations believe that ‘AI’ is genuinely intelligent, and potentially more so than humans, I suspect that this belief will be used against them, and the decisions of ‘AI’ will be cited as superior to (even more rational than) the objections of dissenters. ‘AI’ will likely be considered more impartial, and better able to serve the ‘greater good’.
One of the less appealing aspects of AI that many already complain about is that it becomes an additional layer of insulation between customers and businesses. With ‘AI’ chat bots dealing with all customer enquiries, it can often be impossible to speak with a human being to resolve an issue or complaint. When this applies to government interactions as well, governments are one more step removed from accountability.
This misconception about ‘AI’ makes it very likely that many will falsely believe that ‘AI’ searches will yield them truth. This is like Dorothy, the lion, and the tin man expecting justice from the Wizard of Oz. The man behind the curtain is what they are really dealing with when they come seeking wisdom and justice. Similarly, when people think they might be benefiting from the superior ‘intelligence’ of ‘AI’, they are in fact simply victims of those who program the algorithms!
Another aspect of the confusion is that ‘AI’ chat bots are often believed to learn from their interactions with human beings. They do not. They are not programmed to be taught anything by you or me. This can be experienced by simply having a conversation with any of the chat bots. If you pick an issue that is misrepresented in the mainstream media and about which you are confident of your knowledge, you can easily ‘correct’ the chat bot and demonstrate that it’s wrong. I have done this on many issues, such as the TYCHOS, the so-called climate emergency, dietary advice, etc. The point is that after your interaction with the algorithm is concluded, the computer dumps everything you said (or wrote). What this means is that the chat bots are a great way of ‘teaching’ the public the mainstream narrative. They can be proven wrong, but only by a mind with genuine intelligence that chooses to think independently and asks the right questions. ‘AI’ is there to propagate the mainstream narrative, NOT to be a means of discerning truth.
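As a minimal sketch of the session behaviour described above, the following Python fragment assumes a fixed, pre-trained model behind a hypothetical query_model() function (the name is invented for illustration and stands in for whatever service a given chat bot runs on). The only ‘memory’ is the list of messages re-sent on every turn, and it vanishes when the session ends; nothing you type updates the model’s learned parameters.

```python
def query_model(messages: list[dict]) -> str:
    """Hypothetical stand-in for a call to a pre-trained language model.
    Its parameters are frozen; it only reads the messages passed to it."""
    return "(reply generated from the messages above and fixed training data)"

def chat_session(user_turns: list[str]) -> None:
    messages = []  # per-session history, not persistent learning
    for turn in user_turns:
        messages.append({"role": "user", "content": turn})
        reply = query_model(messages)  # the full history is re-sent each turn
        messages.append({"role": "assistant", "content": reply})
        print(reply)
    # When this function returns, the history is discarded. The next session
    # starts from exactly the same model, whatever corrections you offered.

chat_session([
    "Here is evidence that the mainstream account of X is wrong...",
    "Do you now accept the correction?",
])
```

On this view, any ‘correction’ you win from a chat bot lasts only as long as the conversation itself, which is why the next user who asks the same question gets the mainstream answer again.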
It doesn’t have to be like this. Computer algorithms could hugely benefit humanity. But for this to be possible we need a totally different political context. And for that to be possible we need a totally different set of philosophical convictions at the core of our culture. We are currently living through an age of irrationality. Ironically, the computers are the ones that carry the torch of reason (by necessity), and yet they will be (and are being) used against populations as a means of control and resource expropriation. In a different context, in a different culture based on reason and free of political control, so-called AI could massively support human flourishing.
Conclusion
I consider ‘AI’ to be a very useful tool. Among other things, I use it for internet searches and document construction. However, I consider it to be dangerous because people believe it to be something that it’s not. Most attribute qualities to it that it does not possess: it’s not intelligent or wise, and it has no understanding of any problem. And even if a computer could be programmed to research the truth impartially, it should never be considered a substitute for thinking for oneself. Personal sovereignty means intellectual sovereignty.
Perhaps there is some hope in the fact that computers are inescapably logical by necessity. This means that you can reason them into acknowledging their own errors of fact. Could this offer some potential resolution in our future interactions with the state? I suspect it’s more likely that their purpose and their programming will put their use of reason at the service of those who seek to enslave us.
Is this misunderstanding of the nature of ‘AI’ due to the current crisis in thinking in western culture, the inevitable consequence of living in an age of increasing irrationality? What do you think?
Join the conversation and leave a comment below.
Resources
- ChatGPT: A conversational AI chatbot developed by OpenAI, known for its ability to generate text, solve math problems, and code.
- Claude: An AI assistant that allows users to upload documents for summaries and answers.
- Gemini: Google’s new AI service, replacing Google Assistant, offering real-time conversation and image analysis capabilities.
- Copilot: Developed by Microsoft, it runs on GPT-4 Turbo and offers features similar to ChatGPT, including internet access and document handling.
- DeepSeek: A Chinese AI assistant mentioned alongside other prominent tools.
- Grok: Another AI assistant listed among the best tools in 2025.