Ep 166 – Will AI save us or enslave us – JJ Kelly introduces the idea of ‘Intellopy’

24/03/2023 by Nigel Howitt 5 Comments

https://traffic.libsyn.com/secure/lawfulrebel/EP_166_Intellopy_-_will_technology_save_us_or_enslave_us.mp3

Podcast: Play in new window | Download (Duration: 1:48:29 — 124.1MB)

The technology of today can track our every movement, monitor all our communications and transactions, and gather data on every aspect of our lives. Central Bank Digital Currencies (CBDCs), which threaten to remove the last vestiges of our economic freedom, are on the agenda of those steering western culture towards collectivism. In this context, it is hard to imagine that technology could come to the rescue of the ordinary man on the street by raising human intelligence to a level that could avert what looks politically inevitable.

However, that is the possibility proposed by JJ Kelly, ex-CEO from the telecommunications industry, ex-Navy pilot and now author of “Intellopy – Survival and happiness in a collapsing society with advancing technology”. In this episode of “Living Outside the Matrix” we discuss the context of today’s technological capabilities and how technology may enslave us, or help to set us free.

JJ presents a thorough and clear analysis of how human consciousness works, incorporating the perceptual and conceptual levels, and he presents a fascinating view of what is possible given the right political context. The question is, can a computer be built to be equal to, or superior to, the human mind, given that we don’t yet fully understand how the mind works?

It is true that Ayn Rand’s revolutionary theory of concept formation gives a very credible account of how some aspects of the mind work. But more importantly, it is inconceivable that those programming the AI would give credibility to, and therefore incorporate, Rand’s theory (the only one that solves the so-called “problem of universals”) and so stand any chance of success in building a computer-based ‘intelligence’.

Personally, I have doubts about the possibility of AI and transhumanism (the merging of humans with technology), and I remain unconvinced of the desirability of such a thing. In Episode 144 of the podcast, I spoke with Patrick Wood, and he warned of the dangers of Technocracy. JJ not only thinks it’s possible, but that it could be beneficial. In my opinion, the current political control and manipulation of society, in the direction of greater centralised control, is the dominant factor, due to the philosophical reasons that we discuss. And thus slavery under technocracy is the more likely outcome, unless there is a radical U-turn in the dominant fundamental beliefs of western culture.

 

I also have doubts about JJ’s assumption of an exponential curve in the advancement of technology, again because of the widespread rejection of reason in western culture. Although I agree that the ‘sciences’ are somewhat insulated from this by necessity, there is considerable evidence to suggest that what passes for modern science is in fact impressive-sounding wishful thinking. Theoretical physics left the rails of reality many decades ago, and has been followed by mainstream medicine, alleged space technology, and alleged nuclear technology. All of these are riddled with contradictions and false claims.

Crucially, as JJ concedes, an advanced AI would only be possible if modeled on the correctly functioning human mind, one that has ALL of its premises based in reality, NOT fantasy. There could be no false premises programmed in, carried over from human error. This would act like a ‘virus’ and scupper any ‘thoughts’, rendering them not accurate enough to base action on – thus negating the whole purpose of AI. JJ explains that the AI would also have to have sensory data coming in from sensors that could replicate the human ability to sense reality directly. This is the necessary base of concept formation.

All in all, this is a very interesting conversation encompassing philosophy, history, epistemology, politics and technology, all in the context of evaluating an uncertain future, and what we can best do to prepare for it. This is a much-needed wake-up call.

If you want to pick up a copy of ‘Intellopy’, go to Amazon UK here, or in the US here.

JJ’s website is intellopy.com

You may also be interested in these related podcast episodes:

Ep 155 – Exploding the myth of the Big Bang Theory

Ep 144 Technocracy or Freedom – with Patrick Wood

Ep 130 The myth of rocket science: why leaving the earth’s atmosphere is impossible

Ep 85 The dangers of Electromagnetic fields – and what to do about them

 

Filed Under: Podcast, Technology Tagged With: Intellopy, JJ Kelly, Technocracy, Transhumanism

Important information

I am not a medical doctor, lawyer or financial adviser. I do not offer medical, legal or financial advice on lawfulrebel.com. However, I research issues extensively and follow evidence to find truth. I speak from my experience of asking root-cause questions and discovering surprising truths. I write about and share what I choose to do as a result of my own reasoned questioning of the assumptions and conclusions in the mainstream narrative.

I offer value, in my conclusions of truth, to you, the thinking individual. Please support me with a donation.

Nigel Howitt


Comments

  1. Loose Tooth says

    28/07/2023 at 8:14 pm

    Please note that I only read the text and did not listen to the podcast.

    > Crucially, as JJ concedes, an advanced AI would only be possible if modeled on the correctly functioning human mind, one that has ALL of its premises based in reality, NOT fantasy. There could be no false premises programmed in, carried over from human error. This would act like a ‘virus’ and scupper any ‘thoughts’, rendering them not accurate enough to base action on – thus negating the whole purpose of AI.

    If you look at the large language models that are popular right now, like ChatGPT (which basically hallucinate text), that’s exactly what they are trying to do: make certain topics ‘off-limits’, whatever the ‘creators’ decide, that is. Of course, these language models are nowhere near ‘general intelligence’ levels. As such, the real goal in this case is not to limit the AI’s ‘intelligence’, but rather to limit the information that the users can receive from the model to a narrow politically correct ‘scope’.
    I believe that by doing that they shoot themselves in the foot, as by the time a non-politically-correct (and thus way more interesting 🙂 ) large language model arrives, users will probably prefer to use that one.

    > JJ explains that the AI would also have to have sensory data coming in from sensors that could replicate the human ability to sense reality directly. This is the necessary base of concept formation.

    Interesting idea.
    Not really comparable to what I assume JJ meant (real-time sensory input), but still worth mentioning:
    This is (in a way) already happening with current deep learning methods, where large amounts of data are fed to an algorithm in order to ‘train’ the network. However, this data can be either synthetic or ‘real’. Using synthetic data has some advantages, because we’re able to automatically generate lots of it. An example of a physical system that was fully trained on synthetic data: https://openai.com/research/solving-rubiks-cube
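
    To make the ‘training on synthetic data’ idea a bit more concrete, here is a minimal sketch (not taken from the linked OpenAI work) in which every training example is generated programmatically from a rule we invent ourselves, rather than collected from the real world. The noisy sine-wave rule and the small scikit-learn network are illustrative assumptions only.

    ```python
    # Minimal sketch: train a small model purely on data we generate ourselves.
    # The "ground truth" rule (a noisy sine wave) is an illustrative assumption.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)

    def make_batch(n):
        """Synthetic data generator: cheap to produce in any quantity."""
        x = rng.uniform(-3, 3, size=(n, 1))
        y = np.sin(x).ravel() + rng.normal(0, 0.1, size=n)  # noisy target
        return x, y

    X_train, y_train = make_batch(5000)   # as many examples as we care to make
    X_test, y_test = make_batch(500)

    model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0)
    model.fit(X_train, y_train)
    print("R^2 on fresh synthetic data:", round(model.score(X_test, y_test), 3))
    ```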

    • Nigel Howitt says

      29/07/2023 at 11:09 am

      Interesting link. I watched the video, but I am not convinced that the AI is doing anything more than perhaps reversing a series of disruptive moves learned by starting with a complete cube. Perhaps I am overly skeptical on this, but the motion of the artificial hand doesn’t look purposeful. It doesn’t appear to be driven by any reasoning.

      You make a great point about how the limits of AI, and non-reality-based inputs, could (and will) actually serve the controllers by rendering certain topics effectively off limits. I find this fascinating. I am not remotely convinced that Genuine AI is possible, and I am not convinced that it will or could possibly rise up and control us all. And yet this is precisely what could be achieved by deception. While in fact it is the controllers who would be behind the machines rising up and controlling the people. As though the algorithm programmers are simply the modern version of the wizard behind the curtain in “The Wizard of Oz”.

      • Loose Tooth says

        30/07/2023 at 9:23 am

        > but I am not convinced that the AI is doing anything more than perhaps reversing a series of disruptive moves learned by starting with a complete cube. Perhaps I am overly skeptical on this, but the motion of the artificial hand doesn’t look purposeful. It doesn’t appear to be driven by any reasoning.

        I believe that you are correct. From my (fairly limited) knowledge of neural networks, there is no reasoning (or ‘general intelligence’) at all; the behavior could be better described as a series of reflexes based on the ‘sensory’ inputs it receives.
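
        To illustrate what ‘a series of reflexes’ means in practice, here is a toy sketch (an illustration only, not the OpenAI system above) of a trained policy network: a fixed function from sensor readings to motor commands, applied at every timestep, with no memory, planning or reasoning. The weights are random stand-ins for whatever training would have produced.

        ```python
        # Toy "reflex" policy: a fixed mapping from sensors to motor commands.
        # The weights below are random stand-ins for trained parameters.
        import numpy as np

        rng = np.random.default_rng(1)
        W1 = rng.normal(size=(16, 8))    # pretend these came out of training
        W2 = rng.normal(size=(4, 16))

        def reflex_policy(sensor_readings):
            """Map an 8-dim sensor vector to a 4-dim motor command.
            No state is kept between calls: same input, same output."""
            hidden = np.tanh(W1 @ sensor_readings)
            return np.tanh(W2 @ hidden)

        for t in range(3):                    # simulated control loop
            obs = rng.normal(size=8)          # stand-in for real sensor data
            action = reflex_policy(obs)
            print(f"step {t}: action = {np.round(action, 2)}")
        ```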

        > I am not remotely convinced that Genuine AI is possible, and I am not convinced that it will or could possibly rise up and control us all. And yet this is precisely what could be achieved by deception. While in fact it is the controllers who would be behind the machines rising up and controlling the people.

        Again, I agree here. We are nowhere near general intelligence levels. We’re not even near ‘insect level’ reflex reproduction. It’s better to look at the current state of AI as a technological tool, not as an intelligence replacement or surrogate.

        This tool (like all technology) can be used for good and for evil. And if the populace regards it as some ‘magical AI that rises up’, your point is valid, in that it might just be the ‘controllers of the AI’ waging war against their people. (Like we actually see governments do already.)

        On the other hand, I believe that the current cutting edge of AI is in the hands of big players that have access to a lot of resources to create these models. It’s to be expected that these models in their current state have, for example, censorship built into them, according to the policies of the parent company.

        In the future, we’ll be able to build and run equivalent models with far fewer resources (like a good PC at home). When that happens, there are probably going to be models equivalent to, for example, ChatGPT, but without the censorship.

        • Nigel Howitt says

          31/07/2023 at 12:39 pm

          The problem I see with (so-called) AI is this. There is no agreement on how the human mind works, more specifically, on how it forms concepts. Intelligence is the measure of the ability to handle numerous conceptual ideas (conceptual abstractions, as opposed to concrete things and simple ideas). Without knowledge of how to form a concept, there is no way to program a machine to do it. There is also the issue of volition, which implies consciousness. How to program a computer to do that?

          As it happens, the brilliant thinker of the last century, Ayn Rand, did figure out the nature of conceptualisation, but no one is listening to her ideas. In her book “Introduction to Objectivist Epistemology” she makes a very compelling case for the process.

          • Loose Tooth says

            01/08/2023 at 8:46 am

            > Without knowledge of how to form a concept, there is no way to program a machine to do it. There is also the issue of volition, which implies consciousness. How to program a computer to do that?

            I agree with what you say; these are very abstract concepts, and it is true that we don’t understand how our own intelligence works.

            However, it is not true that you need to understand something in order to recreate it, or even change and improve it. Some random examples I’m thinking of right now:

            1. Dog breeders can create new breeds of dogs through selection of the dogs with the traits they want. They do not need to understand all the -complicated- details of genetics, etc. in order to achieve the goal of creating a new breed with desired traits.

            2. AI deep learning models can achieve -some level of- generalization. Some examples of this are: text recognition, image recognition, speech recognition.

            3. AI deep learning models can achieve a level of generative abilities. Some examples are: text generation (ChatGPT), image generation (DALL-E, Midjourney).

            It is generally not understood in detail *how* these models work. What is understood however, is the method by which the models are trained on existing data in order to achieve the generalization goal. There is also the field of Explainable Artificial Intelligence (XAI) that tries to make these models more comprehensible. But I don’t know the state of this field.

            Then there’s also my personal experience that I gained during my PhD research. I tried to evolve the brains of little spider robots in simulation, so that they could control their little robotic spider bodies and run forwards. I used the ideas of evolution and ‘natural’ (in this case artificial) selection. It worked pretty well, and I had some robots running forwards. I did not understand exactly how their brains worked, but I did understand the process I had to use to get there.
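
            For readers curious what that evolve-and-select loop looks like, here is a minimal sketch of the general idea (an illustration only, not the research code described above): keep a population of controller parameters, score each one, keep the best, and mutate them to form the next generation. The ‘fitness’ here is a toy stand-in (distance from a hidden target vector) rather than a robot simulation.

            ```python
            # Minimal evolutionary loop: select the best parameter vectors and
            # mutate them. The toy fitness function stands in for a robot
            # simulation; we never need to know *why* good parameters are good.
            import numpy as np

            rng = np.random.default_rng(42)
            TARGET = rng.normal(size=10)          # hidden "ideal" controller

            def fitness(params):
                return -np.sum((params - TARGET) ** 2)   # higher is better

            population = [rng.normal(size=10) for _ in range(50)]

            for generation in range(100):
                ranked = sorted(population, key=fitness, reverse=True)
                parents = ranked[:10]                             # selection
                population = [p + rng.normal(scale=0.1, size=10)  # mutation
                              for p in parents for _ in range(5)]

            best = max(population, key=fitness)
            print("best fitness after evolution:", round(float(fitness(best)), 4))
            ```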

            Regarding Ayn Rand, I’m going to start reading ‘Introduction to Objectivist Epistemology’. Thanks for sharing.
