Friday, March 28, 2008

Buckets of Crumbs!!!

I just posted a way deeper and more interesting blog post a couple hours ago (using multiverse theory and Occam's Razor to explain why voting may often be rational after all), but I decided to post this sillier one tonight too because I have a feeling I'll forget if I put it off till tomorrow (late at night I'm willing to devote a little time to blogging in lieu of much-needed sleep ... tomorrow when I wake up there will be loads of work I'll feel obliged to do instead!)

This blog post just re-"prints" part of a post I made to the AGI email list today, which a couple people already asked me if they could quote.

It was made in response to a poster on the AGI list who made the argument that AGI researchers would be more motivated to work on building superhuman AGI if there were more financial gain involved ... and that, in fact, desire for financial gain MUST be a significant part of their motivation ... since AGI researchers are only human too ...

What I said is really simple and shouldn't need to have been said, but still, this sort of thing seems to require constant repetition, due to the nature of the society we live in...

Here goes:

Singularitarian AGI researchers, even if operating largely or partly in the business domain (like myself), value the creation of AGI far more than the obtaining of material profits.

I am very interested in deriving $$ from incremental steps on the path to powerful AGI, because I think this is one of the better methods available for funding AGI R&D work.

But deriving $$ from human-level AGI really is not a big motivator of mine. To me, once human-level AGI is obtained, we have something of dramatically more interest than accumulation of any amount of wealth.

Yes, I assume that if I succeed in creating a human-level AGI, then huge amounts of $$ for research will come my way, along with enough personal $$ to liberate me from needing to manage software development contracts or mop my own floor. That will be very nice. But that's just not the point.

I'm envisioning a population of cockroaches constantly fighting over crumbs of food on the floor. Then a few of the cockroaches -- let's call them the Cockroach Robot Club -- decide to spend their lives focused on creating a superhuman robot which will incidentally allow cockroaches to upload into superhuman form with superhuman intelligence. And the other cockroaches insist that the Cockroach Robot Club's motivation in doing this must be a desire to get more crumbs of food. After all, just **IMAGINE** how many crumbs of food you'll be able to get with that superhuman robot on your side!!! Buckets full of crumbs!!!


(Perhaps after they're resurrected and uploaded, the cockroaches that used to live in my kitchen will come to appreciate the literary inspiration they've provided me! For the near future though I'll need to draw my inspiration elsewhere as Womack Exterminators seems to have successfully vanquished the beasties with large amounts of poisonous gas. Which I can't help feeling guilty about, being a huge fan of the film Twilight of the Cockroaches ... but really, I digress...)

I'm also reminded of a meeting I was in back in 1986, when I was getting trained as a telephone salesman (one of my lamer summer jobs from my grad school days ... actually I think that summer I had given up on grad school and moved to Las Vegas with the idea of becoming a freelance philosopher ... but after a couple months of phone sales, which was necessary because freelance philosophers don't make much money, I reconsidered and went back to grad school in the fall). The trainer, a big fat scary guy who looked and sounded like a meaner version of my ninth grade social studies teacher, was giving us trainee salespeople a big speech about how everyone wanted success, and he asked us how success was defined. Someone in the class answered MONEY and the trainer congratulated him and said: "That's right, in America success means money, and you're going to learn to make a lot of it!" The class cheered (a scene that could have been straight out of Idiocracy ... "I like money!"). Feeling obnoxious (as I usually was in those days), I raised my hand and asked the trainer if Einstein was successful or not ... since Einstein hadn't been particularly rich, I noted, that seemed to me like a counterexample to the principle that had been posited regarding the equivalence of success and financial wealth in the American context. The trainer changed the subject to how the salesman is like a hammer and the customer is like a nail. (By the way I was a mediocre but not horrible phone salesman of "pens, caps and mugs with your company name on them." I had to use the name "Ben Brown" on the phone though because no one could pronounce "Goertzel." If you were a small business owner in summer 1986 and got a phone call from an annoying crap salesman named Ben Brown, it was probably the 19 year old version of me....)


Thursday, March 27, 2008

Why Voting May Not be Such a Stupid Idea (A Multiversal Argument)

I haven't voted in any election for a heck of a long time ... but, in some conversations a couple years ago, an argument came up that actually seems like a reasonable argument why voting might be a good idea.

I'm not sure why I never blogged this before ... but I didn't ... so here goes ...


Why might voting be worthwhile, even though the chances that your vote breaks a tie in the election are vanishingly small?

Consider this: Would you rather live in a branch of the multiverse where the people like you vote, or where the people like you don't vote?

Obviously, if there are a lot of people like you, then you'll be better off in a branch where the people like you vote.

So: You should vote so as to be sure you're in one of those branches.

But, wait a minute. How do you know you won't end up in a branch where most of the people like you DON'T vote, but you vote anyway?

Well, you can't know that for sure. But, the question to ask is, which of the two swaths of possible universes are more probable overall:

Type 1) Ones in which everyone like you votes

Type 2) Ones in which most people like you don't vote, but you're the exception

Adopting an "Occam prior" that favors simpler possible universes over more complex ones, you arrive at the conclusion that Type 1 universes are more probable.
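
To put toy numbers on it, here's a little Python sketch of the expected-value logic (entirely my own illustration -- every figure is invented, and the key assumption that your choice is correlated with the choices of the people "like you" is exactly the multiversal/Occam assumption described above):

    # Toy expected-value sketch. All numbers are made up for illustration only.
    def expected_benefit(n_like_me, p_swing_per_voter, value_of_outcome, cost_per_voter):
        # Crude model: the chance that the whole bloc swings the election grows with its size.
        p_bloc_swings = min(1.0, n_like_me * p_swing_per_voter)
        return p_bloc_swings * value_of_outcome - cost_per_voter

    # Treating your vote as an isolated event, voting looks irrational:
    print(expected_benefit(1, 1e-7, 1_000_000, 5))        # about -4.9
    # Treating it as "selecting" the branch where 100,000 people like you also vote:
    print(expected_benefit(100_000, 1e-7, 1_000_000, 5))  # about +9995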

Now, this isn't an ironclad, universal argument for voting. If you're such a freak that all the people like you voting wouldn't make any difference, then this argument shouldn't convince you to vote.

Another counterargument is that free will doesn't exist in the multiversal framework anyway. What the heck does it mean to "decide" which branch of the multiverse to go down? That's not the kind of thing you can decide. Your decision process is just some dynamics that occurs on some branches and not others. It's not like your decision process steps out of the branching process governing the multiverse and chooses which routes you follow....

But the thing is, deciding still feels like deciding from within your own human mind -- whether or not it's REALLY deciding in any fundamental physical sense.

So, I'm not telling you to decide anything. I'm merely (because it's what my internal dynamics are doing, in this branch of the multiverse that we're in) typing in some words that my internal dynamics believe may encourage you to carry out some of your own internal dynamics that may feel to you like you're deciding something. Right? Because, this is simply the way the universe is happening ... in this branch of the multiverse....

Don't decide anything. Just notice that these words are making you reflect on which branch of the multiverse you'd rather be in -- the one where everyone like you votes, or the one where they don't....

And of course it's not just about voting. It's really about any ethical behavior ... any thing such that we'd all be better off if everyone like us did that thing.

It's about compassion, for that matter -- we'd all be better off if everyone was more compassionate.... Would you rather be in the branch of the multiverse where everyone like you is compassionate, or....

Well, you get it.

But am I voting in this year's Presidential elections?

Out of all the candidates available, I'd definitely support Obama ... but nah, I think I'll probably continue my long tradition of lame citizenship and not vote.

I just don't think there are that many people like me out there ;-)

But if I read enough other blog posts like this one, I'd decide there was a large enough population of similar people out there, and I WOULD vote....

Tuesday, March 25, 2008

Quantum Voodoo in "Classical" Systems?

Way way back in the dark ages, when I was 19 years old and in my second year of grad school, I wrote a paper called "Holistic Indeterminacy" and submitted it to the journal Mind.

The basic idea was that, in some cases, very complex "classical" physical systems might literally display the same kind of indeterminacy associated with quantum systems.

The paper was printed out crappily on a dot-matrix printer with dimly printed ink, and written in a not terribly professional way. It got rejected, and I've long since lost the thing. Furthermore, I've never since found time to write up the ideas in the paper again. (Had there been a Web back then I would have posted the thing on my website, but this was the mid-1980s ... if I recall correctly, I hadn't even sent an email yet at that point. I might actually have the paper on some old floppy disk in the basement, but odds are the data's long corrupted even if the disk is still around...).

But anyways ... please pardon these reminiscences of an old man!! ... these old ideas of mine came up today in a conversation I was having with a friend over lunch, so I figured I'd take a few minutes to type them into a blog post (way less work than a paper!).

In fact these ideas are far more topical now than in the 1980's, as quantum computing is these days finally becoming a reality ... along with macroscopic quantum systems and all sorts of other fun stuff....

Partly because of these advances, and partly because the ideas have had decades to pervade my brain, I think I can now express the idea a bit more crisply than I did back then.

Still, it's a freaky and speculative train of thought, which I am not fully convinced makes any sense.

But at very least, it's some amusing hi-fi sci-fi.....

The basic idea is as follows.

Premise: Quantum logic is the logic of that which, in principle, cannot be observed. Classical logic is the logic of that which can, in principle, be observed.

The above may sound odd but it's not my idea -- it's the conclusion of a lot of work in quantum physics and the quantum theory of measurement, by serious physicists who understand such things far better than I do. It's way clearer now than it was in the mid 80's, though it was known to all the cool people even then....
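
For readers who haven't bumped into quantum logic before, here's a tiny numpy sketch (my own toy illustration, nothing to do with the lost paper) of the standard textbook point: quantum propositions are subspaces of a state space, "and" is intersection, "or" is span, and -- unlike in Boolean logic -- the distributive law fails.

    # Quantum logic in two dimensions: propositions are projections onto subspaces.
    import numpy as np

    def proj(v):
        v = np.asarray(v, dtype=float)
        v = v / np.linalg.norm(v)
        return np.outer(v, v)

    def join(P, Q):                               # "or": projection onto the span of both ranges
        U, s, _ = np.linalg.svd(np.hstack([P, Q]))
        B = U[:, s > 1e-9]
        return B @ B.T

    def meet(P, Q):                               # "and": intersection, via the orthocomplement
        I = np.eye(len(P))
        return I - join(I - P, I - Q)

    def rank(P):                                  # dimension of the subspace (trace of a projection)
        return int(round(np.trace(P)))

    p = proj([1, 0])                              # e.g. "spin up along z"
    q = proj([1, 1])                              # "spin up along x"
    r = proj([1, -1])                             # "spin down along x"

    print(rank(meet(p, join(q, r))))              # p AND (q OR r) -> 1 (it's just p)
    print(rank(join(meet(p, q), meet(p, r))))     # (p AND q) OR (p AND r) -> 0 (the empty proposition)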

Now is where things start to get weird. I want to make the above premise observer-dependent in a manner different from how quantum theory does it. Namely, I want to introduce an observer who, himself, has a finite capacity for understanding and observation -- a finite Kolmogorov complexity, for example.

This leads to my

Modest proposal: An observing system should use quantum logic to reason about anything that it, as a particular system, cannot in principle observe.

There are some things that a worm cannot observe, because it is just a worm, but that I can observe. From the perspective of the worm, I suggest, these things should be reasoned about using quantum logic.

Similarly, there are some things that I cannot observe, in principle, because I am just a little old me.

Yes, I could potentially expand myself into a dramatically greater being. But, then that wouldn't help ME (i.e., my current self) to observe these things ... it would just help {some other, greater guy who had evolved out of me} to observe these things.

Of course, you can't step into the same river once ... and there is not really any ME that is persistent beyond an individual moment (and there are no individual moments!). But you can talk about a class of systems, and you can say that some observables are simply NOT observable by any system within that class. So systems within that class need to reason about these observables using quantum logic.

Where does complexity come into the picture? Well, among the things I can't in principle observe are patterns of more complexity than can fit in my brain.

And among the things my deliberatively conscious mind can't in principle observe are patterns of more complexity than can fit within its own very limited capacity.

So, if we interpret "quantum logic is the logic of things that can't in principle be observed" subjectively, as applying to particular real-world observing systems (including subsystems like the deliberatively conscious component of a human brain), then we arrive at the funky conclusion that maybe we should reason about each other's minds using quantum logic ... or maybe even, that we should reason about our own unconscious using quantum logic....

Funny idea, hmmm?

Way back when, I wrote down some mathematics embodying these notions, but I don't feel like regenerating that right now. Although I'm a bit curious to see whether it had any validity or not ;-)

What made me think of this today was a discussion about consciousness, and the possibility (raised by the friend I was talking to) that some sort of wacky quantum voodoo is necessary to produce consciousness.

Maybe so. On the other hand, it could also be that any system complex enough to display the kind of rich deliberative consciousness we humans do, is complex enough that humans need to reason about it using quantum logic ... because in principle we cannot observe its dynamics (without becoming way more complex than we are, hence losing our self-ness...).

Ahhh... well I'll get back to doing the final edits on the Probabilistic Logic Networks book now ...

Monday, March 10, 2008

A New, Improved, Completely Whacky Theory of Evolution

This blog post presents some really weird, speculative science that I take with multiple proverbial grains of salt ... but, well, wouldn't it be funky if it were true?

The idea came to mind in the context of a conversation with my old friend Allan Combs, with whom I co-edit the online journal Dynamical Psychology.

It basically concerns the potential synergy between two apparently radically different lines of thinking:


Morphic Fields

The basic idea of a morphic field is that, in this universe, patterns tend to continue -- even when there's not any obvious causal mechanism for it. So that, for instance, if you teach thousands of rats worldwide a certain trick, then afterwards it will be easier for additional rats to learn that trick, even though the additional rats have not communicated with the prior ones.

Sheldrake and others have gathered a bunch of evidence in favor of this claim. Some say that it's fraudulent or somehow subtly methodologically flawed. It might be. But after my recent foray into studying Ed May's work on precognition, and other references from Damien Broderick's heartily-recommended book Outside the Gates of Science (see my previous blog posts on psi), I'm becoming even more willing than usual to listen to data even when it goes against prevailing ideas.

Regarding morphic fields on the whole, as with psi, I'm still undecided, but interested. The morphic field idea certainly fits naturally with my philosophy that "the domain of pattern is primary, not the domain of spacetime."

Estimation of Distribution Algorithms

EDA's, on the other hand, are a nifty computer science idea aimed at accelerating artificial evolution (the kind that occurs within software processes).

Evolutionary algorithms are a technique in computer science in which, if you want to find/create a certain object satisfying a certain criterion, you interpret the criterion as a "fitness function" and then simulate an "artificial evolution process" to try to evolve objects better and better satisfying the criterion. A population of candidate objects is generated at random, and then, progressively, evolving objects are crossed-over and mutated with each other. The fittest are chosen for further survival, crossover and mutation; the rest are discarded.

Google "genetic algorithms" and "genetic programming" if this is novel to you.

This approach has been used to do a lot of practical stuff -- in my own work, for example, I've evolved classification rules predicting who has cancer or who doesn't based on their genetic data (see Biomind); evolved little programs controlling virtual agents in virtual worlds to carry out particular tasks (see Novamente); etc. (though in both of those cases, we have recently moved beyond standard evolutionary algorithms to use EDA's ... see below...)

EDA's mix evolutionary algorithms with probabilistic modeling. If you want to find/create an object satisfying a certain criterion, you generate a bunch of candidates -- and then, instead of letting them cross over and mutate, you do some probability theory and figure out the patterns distinguishing the fit ones from the unfit ones. Then you generate new babies, new candidates, from this probability distribution -- throw them into the evolving population; lather, rinse, repeat.
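
For the concretely minded, here's what that loop looks like in miniature -- a toy univariate EDA on the "count the ones" bit-string problem. This is my own minimal sketch (toy problem, arbitrary parameters), not the fancier model-building described in Pelikan's book or Moshe's thesis, but the model-then-sample cycle is the same.

    # Minimal EDA sketch: sample candidates from a model, keep the fit ones,
    # re-estimate the model from them, repeat. Toy problem and parameters only.
    import numpy as np

    def toy_eda(n_bits=30, pop_size=100, n_select=50, generations=40, seed=0):
        rng = np.random.default_rng(seed)
        model = np.full(n_bits, 0.5)                    # one independent probability per bit
        for _ in range(generations):
            pop = (rng.random((pop_size, n_bits)) < model).astype(int)   # sample new candidates
            fitness = pop.sum(axis=1)                   # "count the ones" fitness function
            elite = pop[np.argsort(fitness)[-n_select:]]                 # keep the fittest half
            model = np.clip(elite.mean(axis=0), 0.02, 0.98)              # re-estimate the model
        return model

    print(np.round(toy_eda(), 2))   # the per-bit probabilities drift toward 1.0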

It's as if, instead of all this sexual mating bullcrap, the Federal gov't made an index of all our DNA, then did a statistical study of which combinations of genes tended to lead to "fit" individuals, then created new individuals based on this statistical information. Then these new individuals, as they grow up and live, give more statistical data to throw into the probability distribution, etc. (I'd argue that this kind of eugenics is actually a plausible future, if I didn't think that other technological/scientific developments were so likely to render it irrelevant.)

Martin Pelikan's recent book presents the idea quite well, for a technical computer science audience.

Moshe Looks' PhD thesis presents some ideas I co-developed regarding applying EDA's to automated program learning.

There is by now a lot of mathematical/computational evidence that EDA's can solve optimization problems that are "deceptive" (hence very difficult to solve) for pure evolutionary learning. To put it in simple terms, there are many broad classes of fitness functions for which pure neo-Darwinist evolution seems prone to run into dead ends, but for which EDA style evolution can jump out of the dead ends.

Morphic Fields + EDA's = ??

Anyway -- now how do these two ideas fit together?

What occurred to Allan Combs and myself in an email exchange (originating from Allan reading about EDA's in my book The Hidden Pattern) is:

If you assume the morphic field hypothesis is true, then the idea that the morphic field can serve as the "probability distribution" for an EDA (allowing EDA-like accelerated evolution) follows almost immediately...

How might this work?

One argument goes as follows.

Many aspects of evolving systems are underdetermined by their underlying genetics, and arise via self-organization (coupled to the environment and initiated via genetics). A great example is the fetal and early-infancy brain, as analyzed in detail by Edelman (in Neural Darwinism and other writings) and others. Let's take this example as a "paradigm case" for discussion.

If there is a morphic field, then it would store the patterns that occurred most often in brain-moments. The brains that survived longest would get to imprint their long-lasting patterns most heavily on the morphic field. So, the morphic field would contain a pattern P, with a probability proportional to the occurrence of P in recently living brains ... meaning that the occurrence of P in the morphic field would correspond roughly to the fitness of organisms containing P.

Then, when young brains were self-organizing, they would be most likely to get imprinted with the morphic-field patterns corresponding to the most-fit recent brains....

So, if one assumes a probabilistically-weighted morphic field (with the weight of a pattern proportional to the number of times it's presented) then one arrives at the conclusion that evolution uses an EDA ...
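
To make the mapping explicit, here's the same kind of loop as the toy EDA sketch earlier, with the labels changed: the "field" is just a fitness-weighted pattern-frequency table that newly self-organizing "brains" sample from. This is purely an illustration of the analogy, not evidence for (or a physical model of) morphic fields.

    # The morphic-field-as-EDA analogy as a toy loop. Everything is invented for
    # illustration: "brains" are bit vectors, "lifespan" stands in for fitness.
    import numpy as np

    rng = np.random.default_rng(1)
    N_PATTERNS, POP, GENERATIONS = 20, 60, 40
    field = np.full(N_PATTERNS, 0.5)                     # imprint strength of each pattern

    for _ in range(GENERATIONS):
        # young brains self-organize by sampling patterns from the field
        brains = (rng.random((POP, N_PATTERNS)) < field).astype(int)
        lifespan = brains.sum(axis=1)                    # longer-lived brains imprint more heavily
        weights = lifespan / lifespan.sum()
        field = 0.7 * field + 0.3 * (weights @ brains)   # the field plays the EDA "model" role

    print(np.round(field, 2))                            # the field drifts toward the fitter patterns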

Interesting to think that the mathematical power of EDA's might underlie some of the power of biological evolution!

The Role of Symbiosis?

In computer science there are approaches other than EDAs for jumping out of evolutionary-programming dead ends, though -- one is symbiosis and its potential to explore spaces of forms more efficiently than pure evolution. See e.g. Richard Watson's book from a couple years back --

Compositional Evolution: The Impact of Sex, Symbiosis, and Modularity on the Gradualist Framework of Evolution


and, also, Google "symbiogenesis." (Marginally relevantly, I wrote a bit about Schwemmler's ideas on symbiogenesis and cancer, a while back.)

But of course, symbiosis and morphic fields are not contradictory notions.

Hypothetically, morphic fields could play a role in helping organisms to find the right symbiotic combinations...

But How Could It Be True?

How the morphic fields would work in terms of physics is a whole other question. I don't know. No one does.

As I emphasized in my posts on psi earlier this year, it's important not to reject data just because one lacks a good theory to explain it.

I do have some interesting speculations to propound, though (I bet you suspected as much ;-). I'll put these off till another blog post ... but if you want a clue of my direction of thinking, mull a bit on

http://www.physics.gatech.edu/schatz/clocks.html

Sunday, March 09, 2008

Brief Report on AGI-08

Sooo....

The AGI-08 conference (agi-08.org) occurred last weekend in Memphis...!

I had hoped to write up a real scientific summary of AGI-08, but at the moment it doesn't look like I'll find the time, so instead I'll make do with this briefer and more surface-level summary...

Firstly, the conference went VERY well. The tone was upbeat, the discussions were animated and intelligent, and all in all there was a feel of real excitement about having so many AGI people in one place at one time.

Attendance was good: We originally anticipated 80 registrants but had 120+.

The conference room was a futuristic setting called "The Zone" that looked sorta like the Star Trek bridge -- with an excellent if mildly glitchy video system that, during Q&A sessions, showed the questioner up on a big screen in front of the room.

The unconventional format (brief talks followed by long discussion/Q&A sessions) was both productive and popular. The whole thing was video-ed and at some point the video record will be made available online (I don't know the intended timing of this yet).

The proceedings volume was released by IOS Press a few weeks before the conference and is a thick impressive-looking tome.

The interdisciplinary aspect of the conference seemed to work well -- e.g. the session on virtual-worlds AI was chaired by Sibley Verbeck (CEO of Electric Sheep Company) and the session on neural nets was chaired by Randal Koene (a neuroscientist from Boston University). This definitely made the discussions deeper than if it had been an AI-researchers-only crowd.

Plenty of folks from government agencies and large and small corporations were in attendance, as well as of course many AI academics and non-affiliated AGI enthusiasts. Among the AI academics were some highly-respected stalwarts of the AI community, alongside the new generation...

There seemed to be nearly as many Europeans as Americans there, which was a pleasant surprise, and some Asians as well.

The post-conference workshop on ethical, sociocultural and futurological issues drew about 60 people and was a bit of a free-for-all, with many conflicting perspectives presented quite emphatically and vociferously. I think most of that discussion was NOT captured on video (it took place in a different room where video-ing was less convenient), though the workshop talks themselves were.

The media folks in attendance seemed most energized by the session on AI in virtual worlds, presumably because in this session the presenters (me, Andrew Shilliday, and Martin Magnusson) showed movies of cute animated characters doing stuff. This gave the nontechnical observers something to grab onto, which most of the other talks did not.

As at the earlier AGI-06 workshop, one of the most obvious observations after listening to the talks was that a lot of AGI research programs are pursuing fairly similar architectures and ideas but using different languages to describe what they're doing. This suggests that making a systematic effort at finding a common language and really understanding the true overlaps and differences of the various approaches would be very beneficial. There was some talk of organizing a small, invitation-only workshop among practicing AGI system architects, perhaps in Fall 2008, with a view toward making progress in this direction.

Much enthusiasm was expressed for an AGI-09, and it was decided that this will likely be located in Washington DC, a location that will give us the opportunity to use the conference to help energize various government agencies about AGI.

There was also talk about the possibility of an AGI online technical journal, and a group of folks will be following that up, led by Pei Wang.

An "AGI Roadmap" project was also discussed, which would involve aligning the various cognitive architectures currently proposed, insofar as possible, but would also go beyond that. Another key aspect of the roadmap might be agreement on certain test environments or tasks that could be used to compare and explore various AGI architectures in more of a common way than is now possible.

Lots of ideas ... lots of enthusiasm ... a strong feeling of community-building ... so, I'm really grateful to Stan Franklin, Pei Wang, Sidney DeMello and Bruce Klein and everyone else who helped to organize the conference.

Finally, an interesting piece of feedback was given by my mother, who knows nothing about AGI research (she runs a social service agency) and who did not attend the conference but read the media coverage afterwards. What she said is that the media seems to be taking a far less skeptical and mocking tone toward AGI these days, as opposed to 7-10 years ago when I first started appearing in the media now and then. I think this is true, and it signifies a real shift in cultural attitude. This shift is what allowed The Singularity Is Near to sell as many copies as it did; and what encouraged so many AI academics to come to a mildly out-of-the-mainstream conference on AGI. Society, including the society of scientists, is starting to wake up to the notion that, given modern technology and science, human-level AGI is no longer a pipe dream but a potential near-term reality. w00t! Of course there is a long way to go in terms of getting this kind of work taken as seriously as it should be, but at least things seem to be going in the right direction.

Balancing concrete work on AGI with community-building work like co-organizing AGI-08 is always a tricky decision for me.... But in this case, the conference went sufficiently well that I think it was worthwhile to divert some time from the R&D to help out with it. (And now, back to the mass of other work that piled up for me during the conference!)

Yet More Rambling on Will (Beyond the Rules vs. Randomness Dichotomy)

A bit more on this nasty issue of will ... complementing rather than contradicting my previously-expressed ideas.

(A lot of these theory-of-mind blog posts are gonna ultimately get revised and make their way into The Web of Pattern, the sequel to The Hidden Pattern that I've been brewing in my mind for a while...)

What occurred to me recently was a way out of the old argument that "free will can't exist because the only possibilities are RULES versus RANDOMNESS."

In other words, the old argument goes: Either a given behavior is determined, or it's random. And in either case, where's the will? Granted, a random coin-toss (quantum or otherwise) may be considered "free" in a sense, but it's not willed -- it's just random.

What occurred to me is that this dichotomy is oversimplified because it fails to take two factors into account:

  1. A subjectively experienced moment occurs over a fuzzy span of time, not at a single physical moment
  2. "Random" always means "random with respect to some observer."

To clarify the latter point: "S is random to system X" just means "S contains no patterns that system X could identify."

System Y may be able to recognize some patterns in S, even though X can't.

And, X may later evolve into X1, which can recognize patterns in S.

Something that was random to me thirty years ago, or thirty seconds ago, may be patterned to me now.
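
Here's a small Python sketch of what "random with respect to an observer" can mean in practice (a toy illustration of my own, with a generic compressor standing in for observer X's pattern-recognition ability):

    # Observer-relative randomness. The bit stream comes from a simple deterministic
    # rule (a chaotic logistic map, thresholded at 1/2), so it is fully patterned to
    # anyone who knows the rule -- but patternless to a generic compressor.
    import zlib

    def rule_bits(n, x=0.37, r=3.99):
        bits = []
        for _ in range(n):
            x = r * x * (1 - x)
            bits.append(1 if x > 0.5 else 0)
        return bits

    def pack(bits):              # pack 8 bits per byte so the compressor sees the raw entropy
        return bytes(int("".join(map(str, bits[i:i + 8])), 2) for i in range(0, len(bits), 8))

    data = pack(rule_bits(80_000))

    # Observer X (the compressor) finds no usable pattern: the stream is "random to X".
    print(len(zlib.compress(data, 9)) / len(data))   # ratio close to 1.0 -- essentially incompressible

    # Observer Y knows the generating rule, so the very same stream is patterned for Y:
    # the short description "pack(rule_bits(80000, x=0.37))" regenerates every byte.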

Consider the perspective of the deliberative, rational component of my mind, when it needs to make a choice. It can determine something internally, or it can draw on an outside source, whose outcome may not be predictable to it (that is, it may make a "random" choice). Regarding outside sources, options include

  1. a random or pseudorandom number generator
  2. feedback from the external physical world, or from another mind in the vicinity
  3. feedback from the unconscious (or less conscious) non-deliberative part of the mind

Any one of these may introduce a "random" stimulus that is unpatterned from the point of view of the deliberative decision-maker.

But of course, options 2 and 3 have some different properties from option 1. This is because, in options 2 or 3, something that appears random at a certain moment, may appear non-random a little later, once the deliberative mind has learned a little more (and is thus able to recognize more or different patterns).

Specifically, in the case of option 3, it is possible for the deliberative mind to draw on the unconscious mind for a "random" choice, and then a half-moment later, import more information from the unconscious that allows it to see some of the patterns underlying the previously-random choice. We may call this process "internal patternization."

Similarly, in the case of option 2, it is possible for the deliberative mind to draw on another mind for a "random" choice, and then a half-moment later, import more information from the other mind that allows it to see some of the patterns underlying the previously random choice. We may call this process "social patternization."

There's also "physical patternization" where the random choice comes from an orderly (but initially random to the perceiving mind) process in the external world.

These possibilities are interesting to consider in the light of the non-instantaneity of the subjective moment. Because, the process of patternization may occur within a single experienced moment.

The subjective experience of will, I suggest, is closely tied to the process of internal patternization. When we have the feeling of making a willed decision, we are often making a "random" choice (random from the perspective of our deliberative component), and then immediately having the feeling of seeing some of the logic and motivations underlying that choice (as information passes from unconscious to conscious). But the information passed into the deliberative mind is of course never complete and there's always still some indeterminacy left, due to the limited capacity of the deliberative mind as compared to the unconscious mind.

So, what is there besides RULES plus RANDOMNESS?

There is the feeling of RANDOMNESS transforming into RULES (i.e. patterns), within a single subjective moment.

When this feeling involves patterns of the form "Willing X is causing {Willing X plus the occurrence of S}", then we have the "free will" experience. (This is the tie-in with my discourse on free will and hypersets, a few blog posts ago.)

That is, the deliberative content of recursive willing is automatized and made part of the unconscious, through repeated enaction. It then plays a role in unconscious action determination, which is perceived as random by the deliberative mind -- until, toward the tail end of a subjective moment, it becomes more patterned (from the view of the deliberative mind) due to receiving more attention.

Getting practical for a moment: None of this, as I see it, is stuff that you should program into an AGI system. Rather it is stuff that should emerge within the system as a part of its ongoing recognition of patterns in the world and itself, oriented toward achieving its goals. In this particular case the dynamics of attention allocation is key -- the process by which low-attention items (unconscious) can rapidly gain attention (become intensely deliberatively conscious) within a single subjective moment, but can also have a decisive causal impact prior to this increase in attention. The nonlinear dynamics of attention, in other words, is one of the underpinnings of the subjective experience of will.

What I'm trying to do here is connect phenomenology, cognitive science and AGI design. It seems to work, conceptually, in terms of according with my own subjective experience, with known data on the human brain/mind, and with my intuition and experience in AGI design.