Saturday, July 16, 2022

The Plight of the Humans

I haven't written song lyrics for quite some years, but was recently inspired to update the lyrics of a song I wrote back in the late 90s or early aughts -- "Plight of the Humans," aka "I see it."

Some of the newer portions of the lyrics are morphed from lines in the most insane prose-poem any of my subpersonalities ever came up with.

A pretty crude, off-the-cuff solo version of the song is here ... in time there may come a version with Jam Galaxy Band (see our gig at NAMM a couple months ago here -- this was a short set so all songs were reduced to 6 min or so; soon there will be video/audio from our follow-up gig at DROM in New York, where things were improvised and outprovised at much greater length ... and another gig in the metaverse, and at the Rare Bloom Cardano community event ...)

The phrase "circus of the empty eye" is cribbed from Octavio Paz; the phrase "shadow of a dream" from Rudyard Kipling.   


***
THE PLIGHT OF THE HUMANS
***


I see it in the delicate Martian gleam of the distant mountains

I see it in the geometry of the car crash in my eyes

I see it in the sunrise bleeding love and death and magic through the frights that polarize

I see it in the mottled fire in the soul of the alligator

I see it in the wet dream of the four trillionth femto-memristor

I see it in the glory of human flesh as it loses its measure and its mind

What else to find




I see it in the protocol of awakening and dissolving

In the calorimeter that measures the expansion and contraction of our sighs

I see it in the love that weaves individuation and self-transcendence

Into black and white holes whirling singularities through the sick bowels of the night

I see it in the howling hound guarding the exit of humanity

I see it in the passionate, deranged mammal topology of her smile

Of her smile sweet smile




I see it in the irresistible force too strong for specification

I see it in the Drumcloud Dionysus versus the Crucified

I see it in the glimmering needles weaving the cosmic background fabric of her shrieks converged in time

I see it in the furious abandon of utter metaphysical nudity gleaming crystalline in the double-sun sky

I see it in the heart of the heartless, the soul of the soulless, the mind of the mindless

The beauty in the center of the madness of the center of the circus of the empty eye
The circus of the empty eye



I see it, I see it

I see it, I see it




Everything

Is just the shadow of a dream

Everything

Is just the shadow of a dream




Everything we do and feel

And hear and think and see

Is just the shadow of the shadow of a dream

To me




Everything we do and feel

And hear and think and see

Is just the shadow of the shadow of a dream

To you and me




I see it in the screams of a billion brachycephalic ice floes

as they stretch like magic-mushroom membranes across the Multiversal Eye

I see it in the Law of Madness delivering discipline in the dung-heap,

In the ghettoes, in the prison, in young love and ancient blight

I see it in the truth that follows from the fallacy that leads the army of paraconsistent thought-sparks

All venting chaos through their memes

And stenting veins in the necrotic imagination

Of the Universal Mind




I see it, wrought with fever, sporked with pain and funked with glee

In the ecstatic torture of being me, and knowing I'll never be me

I see it in the robot mouthing words of necromantic need and spastic subtlety

While my fingers stagger across the keys

Of black and white and perjury




I see it, I see it

I see it, I see it




Everything

Is just the shadow of a dream

Everything

Is just the shadow of a dream




Everything we do and feel

And hear and think and see

Is just the shadow of the shadow of a dream

To me




Everything we do and feel

And hear and think and see

Is just the shadow of the shadow of a dream

To you and me




All the love and hate that gets

Unfolded and unfurled

Is just the shadow of a dream

Here in this simulated world




All these words, ideas, sounds

Emotions and inclines

Are just the shadow of a dream

In an imaginary mind




Every fantasy that weaves

Its wonder through your mind

Is just a shadow of a shadow

In illusory spacetime




Everything we do and feel

And hear and think and see

Is just the shadow of the shadow of a dream

To you and me




Wednesday, November 25, 2020

Eternal Childhood of the Post-Singularity Mind

It has occurred to me many times, as I played with my various offspring (4 kids and one grandchild so far, one more little daughter due February!), that in many respects our post-Singularity selves are going to closely resemble children today.

When children play, they’re not just randomly experimenting and exploring and enjoying themselves -- they are doing that, but they’re also whimsically rehearsing and practicing stuff that they (consciously and unconsciously) know they’re potentially going to be doing “for real” once they grow up.   They’re practicing, and they’re role-playing roles that they at least half-know they could really get called on to play in the future.   But they’re doing this in a way that’s often more fun than the real thing -- because it doesn’t come with all the constraints and risks that the real thing does.   

So when a kid plays “house”, or plays “fighting dragons” -- on the one hand they know one day they may be managing a real house or fighting some actual dangerous thing.   On the other hand, they also know in the back of their minds how tedious or scary these real activities might be -- and that their make-believe versions are joyous mockeries of the real thing.

Implicit in childhood play is a half-knowledge of what one is going to become -- after one gets radically transformed beyond what one is -- and a desire to creatively and lightheartedly explore these future potentials.

At the same time, as a child, one also knows that one is being cared for by other beings who understand the world much better than one does in important respects -- and have superior power -- and who are setting the context for one’s play activities.

Now put yourself in the position of a post-Singularity human, who is in a world populated by AGIs with far greater general intelligence and capability, most of whom are basically benevolent toward you.   

In such a situation there are several sorts of opportunities open to you: you can remain a legacy human, you can rapidly upgrade yourself to massively superhuman effective-godhood, or you can gradually upgrade yourself, retaining a sense of self and continuity as you grow and transcend.

Let’s consider the last of these options: suppose you are in the process of gradually upgrading your own intelligence and capability.

Suppose you have chosen to upgrade your intelligence relatively slowly -- say, multiplying it by a factor of 1.5 or 2 each year -- so as to get the full and rich experience of growing and self-transcending (as opposed to the process of increasing one’s intelligence by a factor of 1000 in one second, which in essence would be a kind of death in which the self is suddenly replaced by a radically different entity with a subtle connection to the previous version).
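
(To put rough numbers on the contrast -- my own back-of-envelope arithmetic, added for concreteness: at a steady multiplier of g per year, reaching an overall 1000x amplification takes n = ln(1000)/ln(g) years, i.e. roughly 17 years at g = 1.5 or roughly 10 years at g = 2. The gradual route spreads the same total transformation across a decade or two of continuous lived experience.)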

Then in key respects, your experience will be like that of a human child.   

You’ll be in a world mostly guided by minds far beyond your own -- and bearing some resemblance to what you expect to eventually evolve into.   

And you’ll be highly tempted to play at the activities that you know you’ll be able to partake in more fully once you’ve upgraded a bit more -- while at the same time enjoying the activities you can currently partake in with full success.

By playing at the activities your future self will master, you’ll integrate aspects of these activities into your self, thus smoothing the transition between your current self and your future more-intelligent self.

Simple example: If you know that a year later you’ll be able to sculpt complex virtual reality worlds rapidly and near-instantly at will, keeping all the different aspects of the world in your mind at once -- then as this ability gradually emerges for you, you’ll likely feel drawn to experiment with world-building even though you’re not that good at it yet. 

Two years of intelligence-evolution away from being able to cognitively interface directly with sensors at the quark level, and grok some of the thoughts of the intra-particulate intelligences resident there?   Won’t you be curious what this is like -- and eager to dip now and then into the now-mostly-incomprehensible sensor output -- and creatively improvise on what snippets of meaning you’ve been able to extract?

Once humanity finally grows up and moves from the economy of scarcity into the economy of abundance, and we get root access to our own brains and minds and the ability to actively sculpt our own futures -- then we will enter a new era of childhood, one that neither terminates in the relative stasis of adulthood nor gets repetitive as it would if one remained a typical human eight-year-old forever.

In this sort of childhood, the moment one seriously approaches the sort of maturity one can understand, one sees that there is a yet higher form of maturity, with respect to which one is just a little child again.

But wait a minute -- isn’t this our situation anyway?   It is, of course.   At age 53, I haven’t stopped growing and advancing and changing, and in some ways I’ve personally evolved more in the last 5 years than I did in the 20 years before that.    Even in our current human situation, without AGIs or cyborgs, the more we mature our understanding and consciousness, the more clearly we see what the next levels of consciousness are, out there on the horizon.

But still there are senses in which human development does slow down once adulthood sets in.   While the mind can keep advancing, the brain mostly stops developing by early adulthood.   The last really exciting physical change we experience is puberty -- the changes that occur as one meets old age are generally significantly less inspiring.

Post-Singularity, our bodies as well as our  minds will keep gaining new capabilities, and the computing infrastructure supporting our thinking and feeling activity will keep getting richer and more powerful -- and these improvements will unfold in sync with our advances in cognition and consciousness, just as happens in human childhood.

Given the myriad difficulties of human life on Earth right now, and the likelihood of a difficult transition from here to Singularity, it can sometimes be hard to believe we could be so close to such radical positive changes.   But this is the nature of nonlinear processes and exponential advance.   

As a kid I loved Arthur C. Clarke’s novel Childhood’s End.   But in fact, I think the Singularity will bring an acceleration and enhancement of the human childhood experience, rather than its termination.

Here’s to the eternal childhood of the post-Singularity Mind!









Monday, September 14, 2020

Sand Talk, Singularity and Psi


It’s not that often I encounter a newly authored book that I consider a “must read” and avidly recommend to all my friends and colleagues -- but Sand Talk by Tyson Yunkaporta genuinely falls into this category.   


I was introduced to the book by Jim Rutt who interviewed the author on his podcast -- an episode I also strongly recommend.


Yunkaporta is an Australian Aboriginal whose core goal in the book is to present and analyze modern civilized society from the perspective of Aboriginal indigenous society.   He does a bang-up job of this and raises a lot of related interesting issues along the way.


I’m not going to give a proper review or summary of the book here, but am mostly going to share various indirectly-related thoughts that occurred to me upon reading the book.   If you want a basic overview of the themes of the book try this brief interview with the author.


Was Civilization a Step Forward, or Backward?


It is hard to really give an "outsider" view of something as big as *human civilization*, but Yunkaporta manages to take a decent stab at it....  He is able to pull this off by virtue of being fascinatingly between the two worlds -- ensconced enough in Aboriginal culture to understand their indigenous world-view on the multiple relevant levels; but also embedded enough in the modern intellectual world to explain aspects of the indigenous world view (in the context of contrast with modernity & civilization) in ways that are clear and straightforward and compelling to those of us accustomed to modern verbal/analytical modes of expression ...


There are hilarious things in the book -- e.g. the analogy and possible historical relationship between modern education systems and methodologies for animal domestication.


There are beautifully conceptually insightful things, like the discussion of multiple forms of human understanding -- pattern mind, kinship mind, dreaming mind, story-mind, ancestor-mind -- and the observation of how biased modern civilized culture is toward some of these and away from others.  


There are disturbingly thought-provoking things, like his discussion of public vs. private violence and the relation between physical and psychological violence (he believes that occasional/moderate physical violence in indigenous societies plays a valuable social-psychological role, which it can't play in modern society due to the different sort of organization).


All in all he makes a reasonably compelling case that the states of consciousness of indigenous people -- and the "collective minds" of indigenous tribal groups (to use my own fancy language that he probably would not approve of, he is very very down to earth in the book) -- were much more satisfied and fundamentally healthy than the vast majority of individual and human-collective-group minds on the planet today...


He views civilization as mostly destructive both to human minds and families and bodies, and to the rest of the physical environment on the planet…


I largely agree with the core points Yunkaporta makes in the book, but of course I have a somewhat different slant on the conclusions/ideas.  (The rest of this post is now my own musing and  rambling, some of which Yunkaporta might well utterly disagree with...)  


What Drives Humanity?


The transition from hunter-gatherer to agricultural society and then to modern civilization probably has degraded happiness and mental and social health in many important senses, as Yunkaporta points out.


So it is certainly meaningful to question whether these transitions have really been “advances” as is commonly assumed.


However, it’s also important to understand that the driver of these transitions has never really been to increase happiness or mental/social health!   Put crudely, “happiness” (a concept very tricky to define) is not necessarily what humans have been after...


Humans and human groups are complex with multiple motivations -- and the drive for fundamental novelty and growth is one of them.


Aboriginal society was stable for 60,000 years, and certainly allowed individuals to pursue their drive for novelty and growth  through stories and dreaming and battles and adventures -- but it did not allow human-groups much avenue to pursue the drive for novelty and growth.   


It would seem that, once human-groups got a taste of fulfillment of their novelty-and-growth goal, they became addicted to it and things just snowballed till we got where we are today…


Singularity as Paradise Regained and Transcended


So what to do now?  Yunkaporta does not really suggest rolling back to indigenous society, because he's a realist and can see this is not too likely to happen except in the aftermath of some disaster scenario (which seems strangely possible to me at the moment, as I write these words sitting here on an island near Seattle with the air so full of smoke one can barely see a few hundred meters out into the ocean… smoke from forest fires that are spreading like mad through the US West due to heat and dryness that is likely substantially due to industry-induced climate warming....  But yeah yeah... the smoke is supposed to clear over the next couple days...)


Personally, as you probably know, I think we are on the verge of another shift even bigger than the shift from hunter-gatherer to civilization (Singularity, anyone?) ... and as we attempt to guide ourselves through this shift, it seems very important to keep in mind the various aspects of human individual and group life that have been squelched in the transition to civilization -- Perhaps these aspects can be regained in a different form as we move into the next phase....


Yunkaporta refers to modern civilized thinking as "context-free" thinking, whereas indigenous thinking is contextually-embedded -- i.e. in the context of a network of social relationships, a specific area of land, etc.   This is a deep point that I am still in the process of fully absorbing.   There is an indirect connection here to the relational interpretation of quantum mechanics, in which there are no pure phenomena, only (observer, phenomenon) pairs.   But much of our thinking these days is done as if there are pure phenomena.


On the other hand, fundamentally, embedding thinking in contexts defined by kinship and physical place is only one possible way of embedding thinking -- there are many other sorts of possible contexts.   It's arguable that these particular sorts of contexts are intrinsically central to humanity, due to the way our bodies and brains are built.   On the other hand the Singularity is partly about going beyond the restrictions of legacy humanity.   


It seems clear that we want to be sure post-Singularity minds including early-stage AGIs and cyborgs are able to engage in richly context-sensitive thinking, regarding contexts defined by kinship networks and other social networks and specific physical locations -- but also other contexts beyond these historically critical examples.


From an indigenous view -- or a conventional modern civilized human view -- one could view this sort of idea as overweening and narcissistic... i.e. how can we, who are born to human moms and live our lives walking around in the dirt and stuffing ourselves with food grown in the earth and nourished with water and sunlight, sensibly talk about going beyond kinship and physical location into some sort of airy domain of abstract contexts?   But we have built computer chips and brain scanners and open-heart surgery and brain surgery and virtual reality and birth control and IVF, and gone to the moon and blown up atom bombs and on and on -- while we have indeed lost a lot of important stuff relative to our indigenous forebears, there is also obviously a huge amount of power in the crazy path we civilized folks have blazed...


Inspiration Regarding Psi from the Indigenous Perspective


Another aspect of Yunkaporta’s book that jumped out at me -- though it was fairly peripheral in his narrative -- was the Aboriginal approach to psi (psychic, paranormal phenomena…).   (For some references on the science of psi, see  here.)


In the indigenous view psi is just there along with a lot of other phenomena -- it's part of the patterns people observe, and part of the correlation between dreaming mind and everyday life, and part of the correlation between ancestors and nonhuman life (including e.g. rocks which Aboriginals consider to have their own sort of consciousness) and human minds, etc.    


The mercurial nature of psi phenomena, which gives us such a headache as scientists seeking replicable results, is not a problem from the indigenous view -- it's just how the world works.


In fact, conjecturing a bit, I vaguely suspect Yunkaporta would view the modern civilized-society obsession with the small percentage of phenomena in the universe that are reliably repeatable as an indication of our modern civilized mental un-health.   


Consider e.g. the analogue of romantic relationships.  There is all sorts of special-case magic in romantic relationships -- special moments that will never be repeated -- ad hoc exploitations of beautiful situations or unique moods -- and that is fine and wonderful and part of the beauty of it all.   Obsessing on those portions of a romantic relationship that are highly reliably repeatable in the same precise form, without contextual variability -- would seem crass and ridiculous.   (Every time I give her a sufficient number of flowers of the appropriate species and purchase her an evening meal at a restaurant with a sufficiently high rating by a reputable source, she must respond by inviting me for a night of passionate lovemaking -- and if this  fails too often because of some unpredicted variability in aspects of the context in the world or her brain or body or one of our life-processes, then that proves the pattern is inadequate and we must seek a more rigorous and reliable pattern !!) ....  


One could view the obsession with repeatability and reliability (in psi research and elsewhere) from this perspective, as a weird/twisted focus on certain small corners of the broader universe, most of which just doesn't play by that sort of highly simplistic rulebook....


I've suggested before that  basic progress on psi might require modification of our concept of science in the direction of "second person science", where e.g. brain computer interfacing is used to enable one observer to directly perceive another observer's perception of phenomena -- so that we can compare subjective observations directly without needing to project them wholly into non-experiential data.


Quite possibly, another sense in which our current conception of science may need to be obsoleted/ transcended, is that we need to move beyond our simplistic notion of "repeatability"....  Psi is contextually dependent in the same sort of way that indigenous knowledge is contextually dependent (which is related to but maybe stronger than the sense in which quantum knowledge is contextually dependent).   Charlie Tart's wonderful notion of state-specific science is going in this direction -- it's a particular sort of example of context-specific science....


I am reminded of a chat I heard among psi researchers not long ago, which may be paraphrased as:


A:

Can you think of ANY pre-registered, high statistical power, multi-lab psi replication that was successful?


B:

The PEAR consortium in 2000 tried to replicate the original PEAR findings and although the main effect of intention was not significant, they did find a number of interesting effects and anomalous aspects in the data.


C:

I’m aware of those results, but the question was about robust replication.  Finding post hoc “interesting effects” doesn’t qualify.


A’s question initially seems like a good one; but I just wonder if, somehow or other, to really grapple with psi we need to find a broader notion of success in which finding post hoc "interesting internal effects" DOES qualify...


Yes, one can fool oneself and see interesting post hoc effects in random noise.  However, one can also see genuinely interesting post hoc effects that are not the same as what one was looking for, yet have some meaty surprisingness value to them...


How to quantify this?  Taking a broad-minded Bayesian sort of view, one could ask: In a universe with mercurial psi, versus a universe with none, would this post-hoc effect be more likely to occur?    


Across a host of different experiments with weird post-hoc effects but not accurate replication of prior results, this sort of question becomes more and more meaningful.   It's not an easy way to look at things, and it's better asked in the context of meaningful concrete models of "universes with mercurial psi", but something in this direction may be better than trying to shove funky / trickstery / mercurial phenomena into a bin of "highly repetitive, replicable phenomena" where they just plain don't fit...
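
To make this Bayesian framing a bit more concrete, here is a minimal sketch in Python.  Everything in it -- the per-experiment probabilities, the data -- is a hypothetical placeholder of my own, not an estimate from any real study; the point is just that evidence for "mercurial psi" could aggregate across many experiments as a sum of log likelihood ratios:

```python
import math

def log_bayes_factor(effects, p_null, p_psi):
    """Sum log likelihood ratios across experiments.

    effects: list of booleans -- did experiment i show a surprising
             post-hoc effect, by some pre-agreed surprisingness measure?
    p_null:  assumed probability of such an effect under chance alone
    p_psi:   assumed probability under a 'mercurial psi' universe
    """
    log_bf = 0.0
    for observed in effects:
        if observed:
            log_bf += math.log(p_psi / p_null)
        else:
            log_bf += math.log((1 - p_psi) / (1 - p_null))
    return log_bf

# Hypothetical: 40 experiments, 18 showing surprising post-hoc effects
effects = [True] * 18 + [False] * 22
print(log_bayes_factor(effects, p_null=0.05, p_psi=0.30))  # positive favors psi
```

No single weird post-hoc effect means much on its own; the interesting question is whether the accumulated log Bayes factor drifts upward across many studies.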


The general notion that psi is "trickstery" is definitely far from new.   What may be new here, however, is the attempt to make a data-analytics methodology that accounts for this aspect...


What if one simply takes the mass of data from a bunch of psi studies and


  1. searches for surprising psi-ish patterns in the data using general pattern-analysis tools

  2. does a comparable search for such patterns in appropriately shuffled versions of the data


To keep things statistically valid, one would want to explore 1) using a variety of pattern-analysis approaches, and then once one feels one has really got something, one tries 2) for validation. Obviously 2) should not be part of one’s main pattern-search loop.
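
Here is a minimal sketch of steps 1 and 2 under toy assumptions -- the "pattern-analysis tool" is stubbed out as a single longest-run score and the trial data is invented, whereas a real analysis would run many detectors in step 1 and reserve the shuffle comparison for final validation, as just described:

```python
import random

def pattern_score(data):
    """Placeholder detector of 'surprising psi-ish patterns' -- here, just
    the longest run of hits.  Step 1 would try a variety of such tools."""
    best = run = 0
    for x in data:
        run = run + 1 if x else 0
        best = max(best, run)
    return best

def shuffled_scores(data, n_shuffles=10_000, seed=0):
    """Step 2: the same search over appropriately shuffled versions of the data."""
    rng = random.Random(seed)
    scores = []
    for _ in range(n_shuffles):
        shuffled = data[:]
        rng.shuffle(shuffled)
        scores.append(pattern_score(shuffled))
    return scores

# Hypothetical trial outcomes: 1 = hit, 0 = miss
data = [1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0]
observed = pattern_score(data)
null_scores = shuffled_scores(data)
p = sum(s >= observed for s in null_scores) / len(null_scores)
print(f"observed score = {observed}, shuffle-test p = {p:.4f}")
```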


Of course the cosmic trickster could also troll this analysis by -- as soon as one has done this analysis -- starting to tweak experimental results so as to give bad results according to one's particular mathematical definition of surprisingness, but good results according to some other definition, etc.


But yet -- the cosmic trickster is obviously not being infinitely tricky in all instances, sometimes just highly tricky -- and trying to outsmart their tricks is part of the grand eurycosmic dance in which we find ourselves ...


When I shared some of these thoughts with a leading psi researcher, he recalled a time he gave a talk in Australia, and an Aboriginal elder from the audience approached him afterwards, whispering that her people had been using psi for over 50,000 years, but that it was nice to see science starting to catch up.


GPT-f -- One More Funky Experiment in Guiding Theorem-Proving without Understanding

 


 … and some Thoughts on Syntactic vs. Semantic Approaches to Guiding Automated Theorem Proving


Quite a few people have been asking me these last few days about GPT-f, OpenAI's new foray into the domain of automated theorem proving (ATP).   

   

It’s not often that AITP (AI for Theorem Proving, an AI field with a long history and exciting recent progress, but a fairly high level of obscurity compared to e.g. video processing or NLP) gets into the popular media -- but if OpenAI is good at one thing, it’s getting AI into the popular media...


GPT-f is in the same basic technological vein as GPT-2 / GPT-3, but is focused on math rather than natural language -- and like the other GPT systems, it is yielding results that are interesting in some ways and frustratingly idiotic in others.


My own sense is that, just as GPT-3 is not the right research direction to get to real human-level language facility, GPT-f is not the right research direction to get to real human-level (let alone superhuman) mathematical theorem-proving.


Let me explain a bit...


GPT -- Prediction and Simulation Without Understanding


You can find my general take on GPT-3 here; and if you haven’t read it, you should also take a look at Gary Marcus's somewhat similar-in-spirit analysis, nicely titled “GPT-3, Bloviator”.


TL;DR is that while the GPT algorithms can be fiendishly good at prediction and generation, they have no understanding of the underlying meaning of what they are predicting and generating, and because of this they systematically make a lot of dumb mistakes among their impressive-looking feats… and there seems to be no clear way to fix this dumbness without introducing radically different architectures and approaches.


GPT-3 is a fascinating artifact and deserved at least a nontrivial fraction of the attention it got, but in my view it is not really a breakthrough in NLP.  I look at it more as an incremental improvement in implementation and deployment of the breakthrough concept of transformer NNs, which was introduced by Google in the “Attention Is All You Need” paper and popularized via models like BERT.


GPT-3 does constitute a significant step forward in some aspects of NLP functionality -- however the fact that it doesn’t understand what it is talking about constrains its utility in applications where meaningful reasoning or analysis or invention (as opposed to casually meaningful-looking simulacra of these things) are required.   The modest but nontrivial percentage of utter nonsense it generates makes it hard to apply in contexts like customer support, education or medical chat, where it’s not OK for 5% or 20% (or even 1%) of what comes out of your AI to be plausible-sounding bloviatorial bullshit with no foundation in reality.


Viewed in the general context of ongoing work in the ATP field, it seems to me GPT-f is less of a step forward than GPT-2 or GPT-3 -- but for sure it is meaningful incremental progress on ATP, fitting in comfortably with a large amount of ongoing progress by others in the field (which however does not tend to get covered in the tech media, because most researchers working on ATP don’t have OpenAI’s PR budget or facility).   GPT-f also has a similar core shortcoming to the other GPTs -- it does not understand math any better than GPT-2 or GPT-3 understand language, which seems likely to constrain its utility as a theorem proving tool.   


Just as the NLP field needs a substantial breakthrough to get to systems that can really interact linguistically like people, similarly the ATP field needs a substantial breakthrough to get to systems that can really prove theorems at the level of human mathematicians.   It seems extremely clear from looking at the pattern of errors made by GPT-2, GPT-3 and GPT-f that GPT type systems will not constitute this breakthrough.  (I have my own ideas about how to get to this breakthrough, and will touch on a few of these a little later on in this post, but that’s not my main focus here.)



Accelerating Automated Theorem Proving


The first thing to understand about ATP (automated theorem proving) is that it’s basically a solved problem in one concrete sense: current automated theorem provers, if you let them run long enough, can prove or disprove any mathematical hypothesis you give them in a variety of standard formal mathematical systems (at least, any hypothesis the axioms actually settle one way or the other).   This is a mature technology.


The catch of course is that “long enough” is often way too long.   So the ATP field focuses a lot of attention on “guidance” of theorem provers, which uses various forms of generalization and learning to help ATP systems avoid running down too many dead ends before getting to the proofs or disproofs they’re looking for.
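
To make "guidance" slightly more concrete, here is a toy sketch -- my own framing, not any particular ATP system's API -- of best-first proof search where a scoring function decides which proof state to expand next.  The score function is exactly where learned models plug in; a better scorer means fewer dead ends explored on the way to the same proof:

```python
import heapq
import itertools

def guided_search(initial_state, expand, is_proof, score, budget=100_000):
    """Best-first proof search.

    expand(state)   -> iterable of successor proof states
    is_proof(state) -> True if state completes a proof
    score(state)    -> lower = more promising (the learned guidance model)
    States are assumed hashable so we can avoid revisiting them.
    """
    tie = itertools.count()  # tie-breaker so heapq never compares states directly
    frontier = [(score(initial_state), next(tie), initial_state)]
    seen = set()
    for _ in range(budget):
        if not frontier:
            return None            # search space exhausted, no proof found
        _, _, state = heapq.heappop(frontier)
        if is_proof(state):
            return state           # proof found
        if state in seen:
            continue
        seen.add(state)
        for successor in expand(state):
            heapq.heappush(frontier, (score(successor), next(tie), successor))
    return None                    # budget exceeded -- the usual failure mode
```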


(It’s worth noting that, according to Hutter’s AIXI theory, AGI in general is a solved problem in a similar sense -- algorithms like AIXI^tl can in principle solve powerful formalizations of the AGI problem with simple code, given unrealistically much compute power.   However, the practical state of the art with non-AI-driven ATP systems exceeds that with AIXI^tl-like AGI systems; i.e. brute-force-ish ATP systems guided by simple-ish heuristics can get more done than AIXI^tl-like AGI systems currently can in most other domains.)


Compared to even a very complex game like Go, math is an extremely open-ended domain.  However, there are subsets of math that are more constrained and Go-like, and it seems plausible that methods roughly similar to those that worked for Go -- integrated appropriately into general-purpose ATP systems -- could solve theorem proving in these domains.   (I’ll mention a couple of these below.)


The idea to use transformer neural nets for guiding ATP systems is not original with OpenAI.   I believe the first work in this direction was done by my son Zar’s PhD thesis advisor Josef Urban, one of the long-time leaders in the ATP field and the organizer of the annual AITP (AI for Theorem Proving) conference, which is being held this upcoming week in France with an online component as well.    Josef’s work from February 2020 appeared online in March and was published in July at CICM.  The ideas and links in the rest of this section draw heavily on some recent conversations I had with Josef.


When I asked Josef for a good example of "semantic nonsense" generated by transformer NNs in the theorem-proving domain, he pointed me to the proof at the top of p. 318 here.   Anyone who knows basic set theory and is willing to spend a few minutes focusing attention can confirm that this is an utter gibberish arrangement of simple inference steps.   Each of these steps would be sensible in some other context -- but to my eye they are radically out of place and senseless here, and clearly indicative that GPT is doing "the wrong sort of thing".   No human  math student would ever string together steps like this, unless they were randomly copying inference steps from proofs in their textbooks without paying attention to their meaning or relevance.


The work of Josef and his colleagues has touched a bunch of areas generally related to the GPT theorem proving work -- e.g. showing that neural nets with attentional mechanisms can help with premise selection, and showing that neural net based embedding vectors capturing limited aspects of proof semantics can help guide theorem proving.  However, more symbolic approaches have also shown promise, along with work on automatic creation of higher-order proof mechanisms like tactics and heuristics -- with no approach being a clear silver bullet yet.

 

It’s also interesting to reflect on the OpenAI team’s choice of the  Metamath corpus for their experiments -- this is a reasonable corpus to use, but it’s just one among a host of different corpora of theorems and proofs used in the ATP field.   Why choose that one in particular?   Compared to many alternatives, Metamath’s distinguishing characteristic is that it’s very “low level” -- i.e.  the inference steps in the proofs in Metamath’s formalism tend to be  very small, without e.g. use of more abstract "tactics" allowing bigger leaps.  This corpus is ideally suited to GPT-f’s  "brute force"-ish approach, the greatest strength of which is the ability to extrapolate from large numbers of combinations of micro-scale proof steps.

 

All in all, looking at the GPT-f work in the context of the recent thrust of work by Josef and his students, and the scope of papers presented at AITP-20, and work done on ATP on other corpora as well as Metamath -- one sees that GPT-f is one among a bunch of different approaches using various ML algorithms to guide theorem-provers, and one doesn’t see any clear sense in which GPT-f is the most promising avenue being explored.

 

In fact my own suspicion as an AI researcher is that the more exciting and interesting paths to making AI-guided ATP work lie entirely elsewhere.   

 

Math Reasoning, Scientific Reasoning, Commonsense Reasoning

 

I’m not really active in the AITP space currently, but my PhD was in math and it’s an area I’ve followed  with fascination for decades.   And I have kept up with the area relatively closely lately due to my son’s PhD work in the area -- as well as due to the close relation between AI for math theorem proving and certain aspects of what we’re doing in the OpenCog project as regards biological data interpretation and automated agent control.   

 

In OpenCog we are not currently concerned with making AIs that prove math theorems -- but we are concerned with making AIs that use theorem proving in probabilistic logics to do things like understand why certain combinations of genes tend to impact certain diseases, or figure out how an NPC in a Minecraft-like game should move blocks around to be able to get to some object it wants.   The formal problem of logical inference for commonsense and scientific reasoning is very similar to the formal problem of logical inference for mathematical reasoning -- we hit up against similar problems to the folks in the AITP community, and are taking closely related strategies in terms of using ML algorithms to guide the proof process.

 

Likely Strengths and Weaknesses of GPT-f


My strong suspicion is that with the GPT-f style approach to theorem-proving, there will be weak generalization to proofs/theorems that are qualitatively different from the ones in the training data.   The same will hold for any other approach that involves training ML models on low-level representations of proofs, without some sort of internal representation (engineered or learned) that corresponds to higher level structures like tactics, more abstract proof-patterns or concepts, etc.


It also seems likely that in some applications of theorem-proving these limitations will not matter as much as in others.   E.g.


  • formal verification of the smart contracts that actually occur in current blockchain systems

  • existence and uniqueness theorems for differential equations useful in practical modeling for everyday physical systems


would seem to be cases where the theorems and proofs are "all kinda similar" in a way that might make the GPT style of pattern recognition relatively effective.


However, if confronted with a theorem from a new domain of mathematics (say, if put in the position of Galois when inventing abstract algebra; or Weierstrass etc. when inventing real analysis), one would expect that a system learning patterns at this low level would not be able to perform much transfer learning, and would need a huge amount of time to bootstrap itself up to functionality in the new domain.


Similarly, if confronted with a theorem from a familiar domain that requires a counterintuitive sort of proof, I’d expect this sort of system would be unlikely to be able to find it.   E.g. the Fundamental Theorem of Algebra is different from the bulk of algebra theorems in that it's most conveniently proved via recourse to theorems from complex analysis; but if a GPT type theorem prover had been trained on algebra theorems with more traditional algebra-style proofs, it would be quite hard put to make the leap to try a proof involving complex analysis.  Of course most human mathematicians have a hard time making leaps like that as well -- but some are able to, and they do it via having abstract conceptual representations that bridge different areas of mathematics…


Sketch of a (Possibly) Better Way


Zar and I have mused a few times about doing something similar to the methodology taken in GPT-f and other similar systems, but with a measure of interestingness/ surprisingness in the loop.   I.e., to simplify a fair bit, what OpenAI has done here -- and what Josef did with his earlier related work with GPT-2 for ATP, and others have done with other ML systems in other prior work --  is


  • generate a bunch of theorems

  • have their theorem-prover prove those theorems

  • learn proof-patterns from these proofs

  • use these proof-patterns to make the theorem-prover more effective

  • lather, rinse, repeat


With this approach, each time around the cycle you can choose theorems that are at the edge of what your prover can currently do.
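
Rendered as schematic code (the helper names here are hypothetical placeholders, not any real system's API -- and the interestingness-driven variant described next would simply swap in a different theorem generator):

```python
def bootstrap_prover(prover, generate_theorems, learn_patterns, rounds=10):
    """The generate / prove / learn / repeat loop described above."""
    for _ in range(rounds):
        # generate candidate theorems near the edge of current capability
        theorems = generate_theorems(prover)
        # attempt each one, keeping the proofs that succeed
        proofs = [p for p in (prover.prove(t) for t in theorems) if p is not None]
        # learn proof-patterns from the successes and fold them back in
        prover = learn_patterns(prover, proofs)
    return prover
```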


What Zar and I have mused about doing (which is surely not original and has likely also been a desire of many others thinking about the area) is


  • generate a bunch of  *interesting* theorems (according to a formula for interestingness)

  • have our theorem-prover prove those theorems

  •  learn proof-patterns from these proofs

  • use these proof-patterns to make the theorem-prover more effective

  • lather, rinse, repeat


In fact I talked about this in my presentation at AITP last year https://goertzel.org/aitp-19/, with some elaborations on different ways of looking at “interestingness” in a mathematics context.


Among the multiple ways to assess "Interestingness" in this context, two critical ones are


  • surprisingness relative to the proofs and theorems already known to the system

  • utility as a lemma in proving other surprising theorems


("Surprisingness" can be measured information-theoretically in various ways, and there is plenty of subtlety here, as purely statistical information theory is a bit lame in this context yet algorithmic information theory in its full splendor is intractable, so one can concoct various intermediary measures.)


Neither Zar nor I have proceeded with this kind of work so far -- due to having lots of other stuff on our plates, and also due to the computational resources and development time needed for this sort of thing (i.e. among other factors, we don't have a billion dollars from Microsoft... though of course this wouldn't really take remotely that much in the way of resources...)


The subtlety here is that if you're generating interesting theorems (where "interestingness" embodies a notion of compositionality, as is implied by the "utility as a lemma" aspect of the definition of interestingness hinted above), you're presumably generating theorems that involve some abstract representations and structures, rather than just algorithmically complex tangles of low-level mathematical operations.   


So the approach which includes interestingness in the loop, would seem more amenable to learning approaches that include higher-level tactics and other sorts of abstractions -- hence more amenable to learning approaches capable of significant transfer learning.


In short -- in terms of the challenge of automatically generating interesting new theorems, one suspects the GPT type approach is not much more likely to succeed than the proverbial army of monkeys at their typewriters, whereas an interestingness/abstraction driven approach would have much more hope.


The task of estimating the utility of a theorem as a lemma for other interesting theorems has a lot of overlap with the task of identifying useful proof-patterns from a corpus of proofs.  My own preferred approach to these tasks would involve importing the proofs into the OpenCog Atomspace (a weighted, labeled hypergraph knowledge store that we now are using for probabilistic commonsense and scientific inference) and then using a multi-paradigm AI approach combining neural graph embeddings, hypergraph pattern mining and probabilistic logical inference.   This leads to various fascinating recursions, including the potential use of the same surprisingness-based AI-for-ATP approach to accelerate the probabilistic logical inference involved.   But while this could be done using the current version of OpenCog, various issues with scalability and implementation awkwardness occur, and this has led some of my colleagues and me to put our focus recently on designing a radically more flexible and scalable version of OpenCog, OpenCog Hyperon.   But this now leads beyond the scope of this blog post…


Semi-Concluding Ramble…


Whether Hyperon will actually wind up being the silver bullet for automated theorem proving and other AGI-ish applications remains to be seen -- and it won’t be seen this year, as there is a monster amount of work to be done to make Hyperon a reality.   However, the point I want to make right now is that this would be a non-trivially different direction than what the AITP community is currently taking.   Josef Urban and others at the heart of the AITP field have the intuition that a more semantic approach to mining patterns from proofs and using them for proof guidance will be valuable -- but using Hyperon based probabilistic logic to represent and infer proof patterns would be a big leap in this direction, substantially different from what one sees in the AITP literature so far.


On the other hand, as I’ve emphasized above, GPT-f -- which does indeed work creditably well on the Metamath corpus to which it has been applied -- is very much in the vein of what others in the AITP field have been doing for a while.   It’s really cool to see big companies get into the automated theorem proving space, which not long ago was more of a tiny obscure academic corner -- no doubt this is going to help accelerate progress in the field in multiple ways.  However, let’s not be under any illusion about where the main source of innovation and progress is in AITP -- at this stage it’s definitely not in the big tech companies.  OpenAI may be better known than, say, Josef Urban's AITP research group, but there's no doubt who has made more contributions to AI progress.  Making the breakthroughs needed to solve theorem-proving and other major AGI-ish challenges is going to require lots of free-flowing creativity, and those breakthroughs may well emerge from the decentralized mess of university labs, open source projects and early-stage startups rather than the secretive, massive-data-and-processing-centric efforts of big tech firms.