vzn

Location
denver
Birthday
January 01
Bio
software engr, young at heart. coding from early age. "digital brain" but with lots of feelings too. writing here mainly to publicize a few key issues, let off some steam, & for the feedback. plz write me comments, very much appreciated!! even on old posts!! helps me gauge reader interest/ reaction & steer direction of new posts. oh, and IMs often make my day & I usually reply. and long IM conversations are my favorite.

My Links
vzn on politics
etc
vzn on cyber/geek stuff
vzn on big luv
vzn misc essays
OCTOBER 20, 2009 12:02AM

BIG Brains--artificial intelligence, singularity, or skynet?


hi all, the balloon boy circus makes me think that maybe its a slow news week or something, so I was scrounging to come up with something not as much "ripped from the headlines" because the headlines are kinda bland/empty/vacuous it seems.

over the months Ive accumulated enough links on the following to stitch together a post--somewhat in the Frankensteinian sense. (happy Halloween...)

so anyway, living with a "digital brain" has its own unique set of advantages and hazards, but one of them is a scifi perspective on our universe. the painful part of this is that you have an intense feeling for the stupidity of humans, the daily wasted potential [like a large part of the sentient population focusing its undivided attention on the plight of balloon boy]. on the other hand, you can imagine the possibilities if we dont annihilate ourselves in a nuclear terrorist war first.

the scifi event of the millennium, possibly of all time, is named the Singularity. if you google this you will turn up myriad wondrous links. several large, very good books have been written on the subject. and then theres wonderboy Kurzweil, sort of the First Acolyte.

Singularity-- I wish I could summarize it. summarizing the Singularity is sort of like trying to summarize Christianity or some equally old, or older, religion. and in fact there are a lot of parallels between the two, insofar as the Singularity has a lot of religious connections.

the Singularity is a sort of amplified Skynet concept, if you have seen Terminator. imagine Skynet becoming powerful enough to engulf, take over, or swallow humanity completely. "Posthuman". marriage of man and machine. cybernetics. the Matrix movie was obviously highly influenced by Singularitarian ideas/ideals, and if you need a quick glimpse, thats the dystopian version. but there is potentially a Utopian version also.

of course, you could describe it differently, and Id certainly be interested to hear your own POV in the comments.

lately a few links have been popping up that give me hope that artificial intelligence (AI) at least is within the realm of our lifetimes, if you are not more than middle aged. (sorry Kurzweil, I think you are pushing the envelope and will be extremely lucky to see it in your lifetime.)

Reverse-Engineering of Human Brain Likely by 2020, Expert Predicts | Gadget Lab | Wired.com
http://www.wired.com/gadgetlab/2010/08/reverse-engineering-brain-kurzweil/

The Ultimate Escape: The Bizarre Libertarian Plan of Uploading Brains into Robots to Escape Society
Led by futurist Ray Kurzweil, "transhumanism" promotes the adoption of technologies that will eventually help "humans transcend biology."
http://www.alternet.org/media/147978/the_ultimate_escape_the_bizarre_libertarian_plan_of_uploading_brains_into_robots_to_escape_society

Ray Kurzweil On 'The Singularity' Future
The noted futurist has released a movie, The Singularity is Near, exploring how technology may reshape the fabric of our physical reality and life experiences.
http://www.informationweek.com/news/hardware/reviews/showArticle.jhtml?articleID=225701887

it is my great pleasure to share with you some of those links. they are on several basic subjects that I feel are converging toward AI. even just a few years ago, they would have sounded pretty scifi. but here they are, right in front of us. sometimes, its a wondrous world.

footnote: this post & topic is dedicated to/partly inspired by CA, a weird twisted dude on OS but one of the most intelligent & well-read ppl Ive ever met in cyberspace. he has nailed probably over a hundred comments on my posts over the few months we've been jousting in cyberspace, and he encourages me to be my better self & write to my fullest/most meaningful potential, whatever that is. one of the few who has the background knowledge to appreciate the wider implications of the contents herein.



(a) blue brain

a charismatic, grandiose, showy, but very accomplished scientific guy named Henry Markram in europe is leading a project called Blue Brain. its based on the basic hypothesis that a cortical column in the brain, still somewhat weakly understood by scientists, is the basic building block or atomic unit of the brain and intelligence, and that simulating it in silico will lead to real intelligence as an emergent property.
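to give a flavor of what "in silico" simulation means at the very lowest level, heres a minimal sketch in python-- mine, not Blue Brain's; their models are compartmental, ion-channel-level & vastly more biologically detailed-- of a classic leaky integrate-and-fire neuron:

# toy leaky integrate-and-fire neuron (illustration only -- Blue Brain's
# actual models are enormously more detailed than this)
def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=-65.0,
                 v_threshold=-50.0, v_reset=-70.0):
    v = v_rest                      # membrane potential (mV)
    spike_times = []
    for step, i_in in enumerate(input_current):
        # voltage leaks back toward rest while being driven by the input
        v += dt * ((v_rest - v) + i_in) / tau
        if v >= v_threshold:        # threshold crossed: fire a spike
            spike_times.append(step * dt)
            v = v_reset             # then reset
    return spike_times

print(simulate_lif([20.0] * 200))   # constant drive -> regular spiking

a constant input current drives the toy neuron to spike at regular intervals-- real cortical column models wire up tens of thousands of far richer units like this one.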

markram proposes a lifelong fantasy of mine, one Ive had ever since I was a teenager. (you know, the one that didnt involve farrah fawcett or bo derek). that is, to build a simulation that includes a virtual environment, in which a virtual agent interacts and behaves. a sandbox for intelligence experimentation.

this is a breakthrough idea that I would argue is actually rarely encountered in the literature. I dont know who first proposed it; I would be very interested in its academic genealogy.

you would create different "algorithms" for intelligence, plug them into the virtual brain, and then see how the virtual agent interacts with its environment, which might include objects, laws of physics, and other intelligent entities (generally provided by humans interfacing with the VR). after his recent announcement of the simulated agent idea, I think markram is closer than anyone in the world to discovering the algorithm for intelligence, when many are not even smart enough to be looking for it.
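heres a toy sketch of the sandbox idea-- the grid world & all the names are my own invention for illustration, nothing to do with markrams actual setup. the point is just that the "algorithm" is a pluggable component you can swap out & score:

import random

# toy "sandbox": a 1-d world with food at a fixed spot; the pluggable
# policy function is the stand-in for the "algorithm for intelligence"
def run_sandbox(policy, steps=50, world_size=10, food=7):
    agent, meals = 0, 0
    for _ in range(steps):
        move = policy(agent, food)                # plug in the intelligence here
        agent = max(0, min(world_size - 1, agent + move))
        if agent == food:
            meals += 1                            # score: times it found the food
    return meals

random_policy = lambda agent, food: random.choice([-1, 1])
greedy_policy = lambda agent, food: 1 if food > agent else -1

print("random:", run_sandbox(random_policy))
print("greedy:", run_sandbox(greedy_policy))

swap in a smarter policy, rerun, compare scores-- thats the whole experimental paradigm in miniature.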

Blue Brain in a Virtual Body | h+ Magazine
http://hplusmagazine.com/articles/ai/blue-brain-virtual-body

BBC NEWS | Technology | Artificial brain '10 years away'
http://news.bbc.co.uk/2/hi/technology/8164060.stm

postscript. pentagon announces plan & funding to reverse engineer the cat brain. 

Scientists work on artificial cat brain
Pentagon backs neural reverse-engineering effort
http://www.msnbc.msn.com/id/36872308/ns/technology_and_science-science/


(b) million-node botnet simulation

this is not quite as scifi, but promising. sandia national labs has announced that they are simulating entire botnets with a million virtual machines, booting a million linux kernels to stand in for the nodes. this will help them analyze the behavior of real botnets.

[I am sure they will be missing something, because I have this theory, confirmed by evidence, that botnets behave based on the exact data that is found on the target machines, and that they adjust their strategies based on the specific machines they compromise. for example, a home computer might just be turned into a spam node, but a financial computer with sensitive financial data might be "custom hacked" by the botnet controllers.]
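just to illustrate that theory (mine, & speculative-- not anything from the actual research), a botnet controller might dispatch on what it finds on each compromised machine. a toy sketch:

# purely speculative sketch of the "custom hacking" theory above:
# the controller picks a strategy based on what the machine contains
def choose_strategy(machine):
    if machine.get("financial_data"):
        return "custom hack / targeted fraud"    # high-value target
    if machine.get("bandwidth_mbps", 0) > 100:
        return "ddos node"                       # fat pipe: use for flooding
    return "spam relay"                          # default for a home pc

fleet = [
    {"host": "home-pc", "bandwidth_mbps": 10},
    {"host": "colo-server", "bandwidth_mbps": 500},
    {"host": "accounting-box", "financial_data": True},
]
for m in fleet:
    print(m["host"], "->", choose_strategy(m))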

anyway, the collective supercomputing firepower in this project is probably rivalling that of a smaller biological brain, maybe a mouse. so its a good indication that the supercomputing power will be available whenever we figure out the magic "algorithm" that is running our brains.

Computer scientists successfully boot one million Linux kernels as virtual machines
http://www.physorg.com/news173104436.html

Botnet Project Could Yield Internet Secrets
http://www.internetevolution.com/author.asp?section_id=699&doc_id=180127&

Scientists get a million Linux kernels to run at once
http://www.tgdaily.com/content/view/43480/108/

Researchers Boot Million Linux Kernels to Help Botnet Research
http://www.eweek.com/c/a/Security/Researchers-Boot-Million-Linux-Kernels-to-Help-Botnet-Research-550216/



(c) rat-brain-driven robot

the media approached this as a sort of cute gimmick, but anyway, a scientist hooked up a slice of rat neurons to a robot and its controls, and the neurons actually seemed to learn to navigate, and possibly engage in obstacle avoidance. now, this is much deeper than many realize or than the articles let on, because why would the neurons, divorced of their in-situ, in-vivo environment, "know" to do that?

it tends to support my long-developed theory that neurons may intrinsically be implementing some kind of algorithm that is a basic formula for intelligence. they probably self-organize to generate emergent behavior that we call "intelligence", in my opinion. the scientist Jeff Hawkins has written a book on the subject (On Intelligence) that basically presents this hypothesis; check it out.
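for a taste of what "self-organizing" might mean mechanically, heres the classic hebbian rule ("cells that fire together wire together") in a few lines of python-- a sketch of the general flavor only, not hawkins' actual model:

# minimal hebbian learning sketch: connections strengthen between units
# that are active at the same time (the textbook rule, nothing more)
def hebbian_update(weights, pre, post, rate=0.1):
    for i in range(len(pre)):
        for j in range(len(post)):
            weights[i][j] += rate * pre[i] * post[j]   # co-activity -> stronger link
    return weights

w = [[0.0, 0.0], [0.0, 0.0]]
for _ in range(10):                  # repeatedly present correlated activity
    w = hebbian_update(w, pre=[1, 0], post=[1, 0])
print(w)                             # w[0][0] grows; uncorrelated pairs stay 0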

anyway, I havent seen any papers that attempt to analyze what algorithm the neurons are running. again, generally, scientists seem not even smart enough to ask that question. oh well, maybe next lifetime. in contrast to Kuhnian theories about paradigm shifts/punctuated equilibria rippling through scientific meme-scapes.... maybe science actually advances from one parlor trick to the next? for scientists to impress women? hmmmm



(d) "brain on a chip"

other scientists are working on projects similar to markram's, called "brain on a chip", which I suspect will all tend to converge someday.

Brain On a Chip
A Roundup of Projects Working on Silicon Intelligence
http://www.hplusmagazine.com/articles/ai/brain-chip

IBM gets $16 million to bolster its brain-on-a-chip technology
http://www.networkworld.com/news/2009/080509-ibm-brain.html?hpg1=bn



(e) improved brain imaging technology

brain imaging technology is getting very sophisticated, and leading to new insights and even breakthroughs.

Intelligence Explained
Tracking and understanding the complex connections within the brain may finally reveal the neural secret of cognitive ability.
http://www.technologyreview.com/biomedicine/23695/

The Young and the Neuro
http://www.nytimes.com/2009/10/13/opinion/13brooks.html



(f) Titan Robot (Dubai)

if a robot can entertain a mesmerized crowd with Frank Sinatra tunes and weak dancing, how much of the Turing test has been fulfilled? Real AI cant be far behind right? or maybe its already here...



postscript--- CA asks "does that rat brain like silicon cheese?" uh, I dont know, but I do know from the New Yorker that robotic dogs urinate in a distinct way, wink

(Sony-type robot mechanical dog urinates nuts and bolts on fire hydrant.)

Comments
have you seen the movie AI by spielberg? theres a woman in there that will suit your exact purposes. trying to remember which actress played her & I cant figure it out on imdb. also, the movie "virtual girl" has the same theme. I also liked "s1mone" a lot. all similar themes.... robotic or virtual femme fatales
its been years since I saw AI, but I somehow recall this brief female character, opposite jude law's gigolo character, wearing a killer skintight black leather thing-- a sexy fembot.. gotta watch the movie again to try to spur my memory..
of course the ultimate would be if they turned rudy rucker's books into movie(s).. have you read any of his stuff?? the best man....
"make them more like the brontosaurus with a second brain in its tail...so that these female units learn how to put out their tails when I push an enable button in my remote...just a suggestion."
heh heh or maybe just "put out," period.... wink
"There's no reason to believe that taking components from disassembled biological brains and stitching them together in little robot machines can accomplish more than rudimentary processing"
move that goalpost!!
before you heard of the experiment, Im sure you would have argued it would not have worked at all....
One of the essential problems befalling concepts of "super intelligence" is dealing with the unknown-- which, by definition is-- unknown. For simple cases of the unknown, such as a missing value or a faulty procedural step, the information may be easily discernible, but the ability to analytically determine the answer diminishes rapidly as the level of complexity-- e.g. number of pieces of missing information-- increases. At some point the pure analytic computational strategy fails.

In order to continue to make progress from this point alternative strategies must be employed which attempt to determine the level of "value" or "risk" the missing information might have-- in other words how important "knowing" an unknown answer could be and how detrimental not knowing it might be. This type of strategy is a coping strategy-- i.e. "risk mitigation"-- which, once determined, can be utilized as a "synthetic" value and plugged back into the more determinate types of analysis methods and used to compute a range of potential outcomes based on their statistical probabilities.

Utilizing first-order values and subsequent-order synthetic values can be useful in plotting out a continuum of "what if" scenarios with decreasing levels of certainty and / or plausibility. But at some point that strategy too breaks down: the level of uncertainties-- i.e. "unknowables"-- becomes too great and the method is no longer reliable, meaning that no further insight is possible compared to the background "noise" level of the problem set.
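A minimal sketch of that strategy in code-- the distribution and all the numbers here are purely illustrative assumptions-- replaces the unknown with a "synthetic" distribution, runs the deterministic analysis many times, and reports a plausible range of outcomes instead of a single answer:

import random

# Monte Carlo rendering of the "synthetic value" strategy: stand in for
# the missing information with a distribution and propagate it through
def what_if(trials=10000):
    outcomes = sorted(2 * random.gauss(100, 25) + 10 for _ in range(trials))
    return outcomes[trials // 20], outcomes[-(trials // 20)]   # ~5th / ~95th percentile

low, high = what_if()
print("90% of outcomes fall between", round(low), "and", round(high))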

Regardless of the methodology utilized to arrive at this juncture, this is the point of the "event horizon", which for nearly all purposes real or abstract, forms the outer "boundary" of the "singularity" condition-- the point at which no further information can be gleaned or extracted or synthesized and no further hypothesis can be made and proven. In the classical terminology, "Here be dragons".
It is perhaps also useful, when considering "intelligence" and the design of "intelligent" systems, to ask oneself "What is the purpose of this system?" and "With respect to the stated purpose, what actions or responses would be considered 'intelligent'?" A pair of thought exercises to begin to get at the set of 'requirements' in a sufficiently complex (or even not complex) system.

"Intelligence" is an elusive element, on the one hand seemingly understood by everyone in a general sense, and yet on the other hand, hard to actually pin down into a set of formulations which can then be hard-coded and replicated. In other words, intelligence is often defined as a "I'll know it when I see it" type of thing.

By one overly simplistic definition of intelligence, one might optimistically state that "intelligence is the determination of the 'right output response' (whatever that is) in reply to a particular set of input circumstances." Which certainly sounds like a profound and prophetic definition of the term until one further considers that by that same definition an ordinary rock would likely be considered intelligent.

The devilish details in this instance are swept up in the catch-all term "right output response", which must be, in turn, anchored to a set of requirements in order to know how to interpret. If a 'rock' has no requirements then any response or no response at all is sufficient to satisfy the directive and thus the rock is 'intelligent'.

And yet everyone would quickly conclude and agree that a rock is 'not intelligent'. Therefore the directive must be modified accordingly to guard against that type of anomaly.

One of the classic fundamental questions / arguments of intelligence is whether or not a dynamic processing element (we can think "CPU" for this context) is considered "bounded" or "unbounded" in its analytical ability.

The essence of this issue is framed on the one side by the notion of a "state machine", which is a (possibly hypothetical) set of pre-defined states that, however large, are not infinite and include all of the possible conditions and responses to the problem set-- which is known as the "state table". The machine may *only* be in any ONE of these states at ANY time and intermediate states are not allowed-- i.e. fault conditions, by definition.

The other end of the spectrum is based around a general-purpose processing element which uses "recipes" (programs) of pre-defined algorithms to state, approach, analyze and resolve problems. This is the classical "computer" as most people know it. Whether internally a von Neumann style architecture, a Harvard style architecture, or some other style, ultimately it uses the collection of components to set up, support, and "compute" answers based on these "recipes". It is the "recipes" themselves that form the pivotal point of contention: Is the set of "computer recipes (programs)" that can be processed by the computer finite or infinite?

In order to begin to resolve this issue, it is necessary to clarify whether or not the system is bounded by physical limitations or else is only being considered in the abstract. If, for example, the former, then the system must be limited to a discrete physical amount (size) of "memory", "hard disk (storage)", "I/O", etc. And if the latter, it is purely an abstract exercise based on the concept of the "dynamic processing".

If the system in question is the former, then the answer is "yes", it is finite, because there can only be a finite number of recipes "in-use" or "being stored" or in the process of "being retrieved" by the system at any given time. And when the physical extents of these resources have been reached, no further additional recipes may be added.

[And just to make the point-- if one were to say "Yes, but you could always erase a program and load a new one," the proper response would be "Yes, but then you're introducing a new storage device, which must be included in the original functional specification." A limit must be determined somewhere in order for the system to be able to exist in the physical realm.]

If, on the other hand, there are no bounds on the resource elements-- i.e. memory, storage, I/O, etc.-- then the system can only be considered in its abstract perspective and the answer might be "Yes, it can process an infinite number of recipes".

But is that really the answer? Perhaps not. Even if we are discussing the processing element in an abstract situation, the processor itself must still be defined, even if abstractly, in order to determine whether or not its capable of running (executing the methods of) some particular recipe or other. Not all processors are based on the same type of analytical architecture, or even structural materials. Additionally there are many ways a processor can be fashioned, constructed and organized which affect its ability to process data.

Furthermore the "processing" of the data-- the performance of the set of procedures contained in the recipe, etc-- in the physical realm requires, at the very least, a measure of energy and an amount of time in order to reach an outcome. Those two, at least when considering the question in its physical aspect, are limiting factors that bound the answer to the finite.

Newcomers to the debate often posit something akin to "Yeah, but I can always create a recipe that takes a number and adds one to it and thus create an infinite sequence." There are two problems with this type of statement: first, it confuses the concept of the 'recipe' (the procedural portion of the processing) with the 'data', the stuff that's being processed. And second, often insidiously overlooked, is the point that any physical processing element must have a finite ability to *store* that data, and at some point the mantissa will grow so large (or small) that the machine is no longer physically able to represent the value within its physical confines. And thus again, it proves to be a finite problem space.
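The mantissa point is easy to demonstrate concretely with ordinary 64-bit floating point, where past 2^53 the hardware simply cannot represent every integer:

# 64-bit floats carry a 53-bit mantissa: beyond 2**53 not every
# integer is representable, so "just add one" silently fails
big = 2.0 ** 53
print(big + 1 == big)    # True: the added 1 is lost to finite storage
print(big + 2 == big)    # False: even steps are still representable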

What is the ultimate answer to the question? Actually, personally, I'm not sure. I lean toward the 'Very large but ultimately finite' side of things. Even if you unhook the "resource components" of the problem from the physical and consider the problem mainly in the abstract-- i.e., simply as a "defined" processing element-- whatever it is, however it's made, it exists-- even if only in theory-- in the physical realm and ultimately must reach the limits of its structural components, its energy, and the amount of time available to perform its operations, and ultimately-- I think-- is bounded and finite in its processing ability.
One final comment--

Regarding knowledge, intelligence, processing, AI, etc-- it is my observation that all of this stuff requires the concept of a "relationship" in order to have any validity. Thus it may be possible to posit that 'intelligence cannot occur independently' and correlatively that 'intelligence can only form relative to something else'.

This has been the basis of my own thoughts and questions for many years -- you begin with "the one", which is as yet undistinguished and merely noted as "the one" for convenience of reference. In the absence of "the other", meaning *anything* else, "the one" is formless and quiescent, without any necessity or possibility for discrimination. For it requires the existence and "knowledge" of "the other" in order to form the basis of its first relationship: "the one versus the other". Or stated another way: "The one versus everything *not* the one." In my philosophy, this is the singular relationship that forms the basis of every subsequent relationship or expression.

Consider: What is true without false? True and False are defined as polar opposites (inverts) of each other. True is Not False. False is Not True. This is simply a manifestation of the singular relationship applied to the 'expressive' realm of information space.
I say that Deep Blue is intelligent but only in a narrow context. Let's suppose for a moment we had 'Deeper Blue', which was a complete and total Chess Playing State Machine. All moves pre-analyzed (or whatever) and a list of every single move along with its counter was developed for the 10^40 or 10^50-odd possibilities there are, and no matter what you tried, Deeper Blue already had the answer; your best possible outcome, even playing white and making no "wrong" moves, is a tie.
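(Chess is far too large to actually tabulate, of course, but for tic-tac-toe the complete "state table" can be enumerated in a few lines of python-- a toy "Deeper Blue", and the enumeration confirms the best outcome against it is a draw:

from functools import lru_cache

# a toy "Deeper Blue": tic-tac-toe's complete game tree (a few thousand
# positions) is small enough to enumerate outright, unlike chess
WIN_LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):            # +1 = X wins, -1 = O wins, 0 = draw
    w = winner(board)
    if w:
        return 1 if w == "X" else -1
    if "." not in board:
        return 0
    nxt = "O" if player == "X" else "X"
    results = [value(board[:i] + player + board[i+1:], nxt)
               for i, sq in enumerate(board) if sq == "."]
    return max(results) if player == "X" else min(results)

print(value("." * 9, "X"))           # 0: perfect play by both sides is a tie

The same construction, scaled up beyond any practical limit, is what 'Deeper Blue' would be.)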

Suppose you meet me in the park and we sit down for a quick game of chess. I move, you move, we all move, but no matter what you do you just can't win. We play match after match and sometimes you tie me, but you are never able to beat me. Your opinion of me would probably be pretty high-- a chess fuckin god. But unbeknownst to you I'm really cheating my ass off and am secretly in cahoots with Deeper Blue over there.

So what's our verdict at this point? I think we have at least two possible answers, perhaps a few more based on your nature and perspective...

Situation #1: You don't know I'm cheating...

Your Assessment: I'm pretty damn brilliant. I must possess the intelligence the size of a small planet.

or

Situation #2: You know I'm cheating and in cahoots

Your Assessment: I'm a $(*&@ so and so and you don't care who knows it.


So -- let's tinker with the setup a little.

Suppose you found out in all those games, I occasionally played for real, no cheating, and still whupped your pansy ass...

What's your assessment now?

If I played honestly ONE time out of all of the games.. ?

TEN times out of all the games..?

Ten percent out of all the games.. ?

Fifty Percent?

At what point does your perception / assessment of me change and go from being a scam artist to being impressed that I was able to consistently beat you...?

What if you found out that despite my assertion that I cheated, I really didn't?


This little thought exercise describes, IMO, the problems and subtleties associated with the definition and recognition of intelligence. And it also points out that a large part of that recognition, at least subjectively, is based on some sort of emotional component perhaps-- or comparison between the players (in using our example). You know you're not cheating and you assume (let's say) that I'm not, so I appear intelligent. Until you believe I *am* cheating and then I don't seem so intelligent after all... until you find out that I lied about cheating and suddenly your assessment changes yet again.


What is the formal definition of intelligence?

I'm not certain there is one, or if there is, that its much better than my original stab at it-- we'll soften it up a little-- something that generally does the (so-called) "right" thing a substantial amount of the time in response to a given set of circumstances (inputs). If you want we can toss in some mumble about "adaptability" and "heuristics" just to make it sound good. But I question whether either of those latter elements are *truly* required to make the grade.

Nice additions? Sure, most certainly. In addition to slicing, its also a dicer, a potato peeler, toothpick, and it comes in really handy as a back-scratcher too.

Those are just bells and whistles-- adaptability is great. Heuristics are fun too-- but are those really necessary components in order to be considered "intelligent"? You can have different answers of course. Right now I'm thinking "Yes".... now I'm thinking "No"... "Yes"... "No".... "Yes"...."No".... (um, I'm stuck, can somebody help me out of my loop here??? Anybody?)

Furthermore, in support of that statement, let's think about (most... many) AI (I prefer MI, Machine Intelligence) projects-- they do some stuff, set up a can of whatever, pour some data and algorithms in the top and turn the crank. The machine starts processing and doing whatever it is that it's supposed to be doing.

Whenever it gets the wrong answer or produces the wrong result, BZZZZZZT-- somebody pushes the zillion-volt buzzer and zaps the poor thing to kingdom come (no wonder MI's are quirky. I'd have indeterminate states too if it was me...) So however the feedback's getting applied, its being programmed, wired, trained, whatever to eliminate "false" or "wrong" responses. One way or another a "state table" is being generated-- or at least a "pseudo state table".

So on and on it goes-- iteration after iteration-- or maybe its just engineered "smart" to begin with-- whatever, doesn't matter-- sooner or later some propeller-head is gonna slap a QA sticker on its head and call it done.

[I contend that humans won't tolerate a piece of equipment that's allowed to be wrong-- MS Windows excepted-- they'll keep at it, dickin and tinkerin until they work out all the kinks-- which I think we can probably both agree either is, or is approaching a "state table" style design.]


So where are we with our intelligence analysis here? Just because you know how something is built internally-- *and* understand how it works-- you can't / won't call it intelligent ??

With advances in medicine and technology, its probably only a matter of time before "human intelligence" is completely understood. Will we stop being intelligent then too?


Here's another thought experiment... Pull the lever, feel free to stop playing at any time.

We'll posit for the experiment that we consider the human brain to be intelligent. We approach it in its natural habitat (where else, a seedy stripper bar doing tequila shots off some poor college girl's cleavage... but I digress...) We assume its hooked up and working (as well as it can considering the circumstances) and that nothing we're about to do is going to interfere with the *supply or support* aspects of the mechanism.

So now, every time you pull the lever, our hapless human gets a random bit of his brain neatly excised. (Its okay, he's probably too busy bouncing quarters off the table and between her butt cheeks to notice)... how many times do you need to pull that lever before he stops being considered "intelligent"?? (And remember, we stipulated that he *was* intelligent as part of the premise for the experiment)...

Can we agree the count doesn't matter-- whether its ten or ten thousand-- sooner or later he loses something vital and the system is no longer a functioning intelligent processor...??

If so, great-- look at all of the parts we've excised that are lying around (hmmm... somehow I thought it would be a bit more than that... but no matter)-- which one of those parts is "intelligent"? Which part contains the "intelligent" bits? Any of them? None of them?

And yet, as we stipulated at the outset, in their original configuration they *were* intelligent. What's changed? The parts are all still there... Obviously we relocated some stuff-- but if it was "intelligent" in its original location, why not now when its located in with the peanut husks?

Organization? Structure? Form and Function? This part connecting to that part and doing some small but important function?

Okay, let's modify our test-- we'll assume for the sake of the discussion that every time you pull the lever we excise a whole and complete (but "atomic") assembly of some sort. Still at random, but in whole chunks.

Do we still have the same outcome? And yet we have a bowl full of operational parts... conceivably we might be able to reassemble them into something, or put them back where they came from, or something. Are the *parts* intelligent? Or is it only when they come together and operate as a whole that they're considered "intelligent"??

What about if we only pull the lever a few times-- by now our frat boy's got his schnoz where it ought not be anyway-- he's not likely to miss much of his higher functioning.... is he still "intelligent"?

How many times do we have to pull that lever before he stops being intelligent? Or conversely, how few times can we pull it and leave him in a state we would continue to call "intelligent"...?

Do you fall back on the ideas of adaptability again? After all humans can have strokes or accidents which can wipe out whole regions of neural matter and render them speechless or paralyzed or incapacitated in some fundamental neural sort of manner-- and yet they can do other stuff-- or even recover (adapt / retrain) what's left of their functioning to take over and be "healed".

But is that really a fair apples to apples comparison between the "Intelligence" you want to bestow on our now fully-schnockered cooze-meister and the completely state-machine functioning of our Deeper Blue chess buddy (I bet you thought I had forgotten about him, eh?)

Isn't the "extra" intelligence quotient you want to afford the human really a bogus endowment? His processing system is much more complicated, redundant and designed to withstand faults, damage, etc., and that if we wanted to (and had the technology to back it up) we could similarly endow Deeper Blue too?

So that's not really the answer.

Is it just that you're a "structure snob" and prefer the human form (or some other form) of "intelligent structure" over the simple state-machine design of Deeper Blue?

Would you feel better if the state machine aspect could be reduced to an absolute set of algorithms and heuristics designed to completely and absolutely mimic the state machine's functioning? What would be the difference?

Where is the intelligence? Is it in the components? The structure? The wiring? The algorithms and heuristics? Is it pre-programmed in? Is it hard-wired? Is it learned? Does it count if you know how its made or how it works? How much of the system must be functioning correctly so as to still be considered "intelligent"? Conversely, how much must be non-functional before its no longer intelligent?

These are the questions, scenarios, subtleties and issues that people-- scientists and stripper bar patrons alike-- must consider and address (or in the case of the strip bar, undress) before a coherent, unified "standard" definition of intelligence can be developed and applied. Check your emotions at the door, and keep your hands to yourself at all times.... (unless you paid extra of course).
CA, now you're dragging "Theory of Mind" into the mix and thoroughly muddying up the waters. I contend that a theory of mind is not strictly required in order to examine the concepts and fundamentals involved in intelligence, and in fact, it often gets in the way since you start to compare theory of mind "brands" and their feature sets instead of thinking at the most basic principles.

Alan Turing, as you are probably aware, proposed a simple test as a way to decide whether something was intelligent or not. That an external interrogator would question two entities and at the end would decide based on external observation whether one or both was intelligent. Of course there are some design issues involved with the setup, the biggest one being that the recognition of intelligence depends on the judge's opinion-- and by extension, probably his experiences, his perception of his own intelligence along with that of other similar beings he's encountered-- and he may be predisposed to "recognize" intelligences that are more aligned with his preconceptions of what intelligence is rather than *an* intelligence that might be in front of him but may not have many similarities.

Something I've often thought about-- suppose we encountered visitors from another star system-- our own personal Roswell event-- how could we relate to them? How *would* we relate to them? Do we consider them "hostile" until proven otherwise? And if we take that stance btw, what criteria would constitute "proof" anyway? Do we greet them with open arms? Do we exchange knowledge? Or should we keep it to ourselves?

And why do we always expect extra-terrestrial intelligences to be smarter than us anyway? Or more advanced? What makes us think they would even be interested in us in the first place? Or recognize *our* intelligence as intelligence?

I think about the only smart thing one can observe about an alien intelligence is that it is *alien*. And by alien I mean *foreign*, otherworldly, no built-in commonalities or protocols or even mutually shared interests. We shouldn't assume they would have any compunctions about shooting us with their death rays if they felt threatened. After all, we wouldn't hesitate to nuke them into oblivion either.

That's one of the things people take for granted when they meet another terrestrial intelligence-- that there *are* commonalities and mutually understood conventions and interests-- no matter how slight-- we at *least* have our basic "humanity" as a binder and conversation starter.

So while theory of mind is really interesting, it is distracting in the initial study of "what is intelligence" as it tends to color expectations and unfairly call upon more advanced forms of processing, structural components, algorithms, etc. that may not be-- and probably aren't-- available at the simplest levels.
Assume the "other intelligence" is the extra-terrestrial alien... You have nothing in common, nor even an insight into the internal workings of its "mind", whatever it is. Your only choice is to accept _what it does_ and act accordingly. _How it does it_ is irrelevant.
I make no assumption whatsoever of the alien's lineage. All I see is a ship and a great big death ray aimed at my planet. I'm really hoping they're friendly. I have no idea what they're made of. Whether they're silicon, carbon, or something else entirely. I don't know how they evolved-- or *if* they evolved. All I know is what I can observe. I have no commonalities with which to make inferences. I cannot assume anything at all about them, other than what I have observed, and even that is "suspect" and based on very few statistical samples.

I have no way of discerning whether you, CA, are in fact carbon-based, other than your assertion. For all I know you're a very clever machine trying to trick me.

But again I ask-- what difference does it make? Why do I need to discriminate against one type of intelligence or the other? Either type could pull the trigger on the death ray and in which case my planet is toast either way.

Then let me ask a pointed question -- how do you, meaning YOU, *know* anything? What constitutes *knowing*? Explain what *knowing* means.

Why do you have a prejudice against machines? BTW, just for the record, I never said the aliens had to be machines, I think you just assumed that. I only said they were "alien" and had no commonalities that you could readily discern.

So what does *knowing* mean?

*I* know what it means. I have a very good definition for it. And it doesn't require anything organic. Or even complicated. But in order to arrive at a good definition, you have to deconstruct the term and the concept into basic principles.

You say you "know" a particular fact to be true-- HOW do you know that fact? How do you know it is TRUE? What mechanism is being used to implement that function?

You say you are intelligent and something more than a "machine". How do you know that? How do you know you aren't simply *programmed* to believe you are something more? Or that your "center of being" / "god's eye" isn't an abstraction hoisted on top of a sufficiently complex processing mechanism so that it doesn't seem or "feel" like its any one place at all?

If you are like most people, you "feel" your center of being is right behind your eyeballs, perhaps slightly higher, but somewhere in the general area and located toward the front of your head / mind. Why is that? Especially when the optical processing centers are located in the back of your head. The auditory processing centers are located on the sides. The muscle control center is a little bundle about the size of a walnut located at the very top of your spine-- where your higher level brain ends and your spine (body stem) begins.

You know, when I design a robot-- that's pretty much the same general design I use. Not because there's some "cosmic purpose" to it, but simply because its the most natural / logical positioning of the components to achieve the control, drive and feedback aspects that are necessary for functioning.

So again I make the point, what difference does it make *how* it works as long as you agree that it *does* somehow work? If it "acts" intelligently, then it must be intelligent. Even in my original example where I was cheating in our gameplay it was still "intelligent" from your perspective. HOW it was achieved is irrelevant. You got your butt kicked over and over. And while you were playing the game the details didn't matter-- you were involved in acting and reacting and not at all concerned with the internal workings of your opponent. And lastly-- even if you were, did it change the outcome? Did it matter at all during the game? Did you change any aspect of your actions, deliberations, or reactions based on your assumptions?

Also, remember that the original thought experiment was You playing against Me, an assumed human opponent. Then you found out I cheated and you were really playing a machine. And then it was revealed that I lied about cheating and it was really me all along.

Amidst those revelations did the underlying "intelligent" components *REALLY* change, or just your *perception* of what was intelligent?
holy @#%&*$!!!!
while the cats away the mice will play!!!
walk away for a few seconds and LOOK WHAT HAPPENS!!!
geez guys it will take me a day to read all this... dang!!!

ok,ok, well glad you're all havin fun... I hit my page & saw 28 comments and was thinking FINALLY!! AN EP!!! but NO!!! just two jokers dinkin around.... geeez man
as the pussycat dolls sing "maybe next lifetime... possibleee..."
What a fascinating and amazing discussion going on here! This took a bit to digest, but I am impressed all around.
Personally I believe the Singularity concept is approaching what I see as a natural conclusion to pancomputationalism and emergence. In essence, not only will we one day have 'A.I.' beyond the Turing Test standard, but true digital neural hybrid intelligence using nanoengineering that approaches a level of data storage, input/output hardware and processing capability best described with the term God.
Thank you for posting this, and thank you to the other contributors for this outstanding discussion.
this is all great, but something is missing....
let me think....
yes...
WHERE ARE THE BABES?!?!
I guess I gotta write about something different.. but what!??!
hmmmm
shoes???
clothes???
kittens???
dunno man....
I guess I am gonna have to retitle this post now...
something like
CYBERSPATIAL GEEKFEST
jackpot!! hit the GSpot!!
ie..
the GEEK SPOT!! haha
now all I have to do is get David Brin to post on this... a boy can dream...
but has he written anything about the singularity on here?? I THINK NOT!!!
anyway, thanks guys,
this does motivate me to add some more to the post....
I found some new link(s) & will add them.....
CA, I sense illumination imminent. Any moment now...

CA: "your operational definition of intelligence, the observable behavior, without having an underlying theory of what intelligence is, can be very misleading. If we consider those nice German mechanical clocks, or other mechanisms"

You are very nearly there...

If it *appears* intelligent then it *is* intelligent. Pure and simple. Even if it is a simple automaton the intelligence is still there. It, however, is simply once-removed from the scene and in the hands of its creator. If you are able to discern that-- meaning "the hand of the creator"-- then you are artful, insightful and wise. And also observant.

Let's take a question from the audience-- you there, in the back... step on up to the microphone and don't be shy..

CA: "Are state machines theoretical, or have they actually been built?"

Excellent question!

Yes Virginia, there *is* a State Machine! It is not a made-up thing. It is an actual construct which can be used in both realms of logic, the abstract and the real (physical). Here are some simple abstract state machines which embody the very essence of the concept. Odds are overwhelmingly good that you've encountered them before:

Truth Tables: Where you have inputs and outputs that are generally (though not always) expressed in simple terms such as "On/Off" "0/1", "Yes/No", etc. And why I say "not always" is because there can be additional types of input or output, not just binary. A state machine simply lists *every possible condition* of the system.

Truth Table for a Binary Logic "NOT":

Input --> Output
0 --> 1
1 --> 0

[Pardon my unusual formatting of the table. I am considering that not everybody's web browser will render it the same way. So I'm sticking to simple examples and unequivocal notation.]

The binary logical "Not" simply "complements" (negates) its input to create its output. Therefore an input condition of "Zero" produces an output condition of "One" and vice versa. Binary logical "Nots" have another very interesting characteristic which we'll visit a bit later.

Here are a couple more binary logic truth tables, the logical "AND" and the logical "OR". Both of these have two inputs and an output. We'll label the inputs as "A" and "B", and the output as "C":

AND

A B C
0 0 0
1 0 0
0 1 0
1 1 1

In the concept of "AND", output 'C' is *only* True (1) if inputs 'A' and 'B' are both asserted (True / 1). *Any* other condition results in output 'C' being False (0).

OR

A B C
0 0 0
1 0 1
0 1 1
1 1 1

As you can see in the table output 'C' is True (1) when either inputs 'A' or 'B' are asserted (True / 1). Output 'C' is *only* False (0) when both inputs are False (0).

These three basic truth tables form the foundation of nearly all computational logic. They can be combined and recombined in various ways to create nearly any advanced computational function. Add a handful of additional processing elements and you have the foundation for some serious computational ability... not to mention a handy little orange book published by Texas Instruments... but I digress... :-)

XOR

A B C
0 0 0
1 0 1
0 1 1
1 1 0

The "Exclusive Or", which is a handy item, is a compound function that can be broken down into a collection of AND's, OR's and NOT's. Here is its logical equivalent (stated in English):

c = ( (a OR b) AND NOT (a AND b) )

The 'Exclusive Or', as you perhaps noticed, is only true when either input 'A' *or* 'B' is asserted (True / 1) but not both.
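And just to show the tables and the formula agree, here is a brute-force check in python over all four input combinations-- with NOT rendered as (1 - x), since we're working with 0/1 values:

# verify that XOR equals (a OR b) AND NOT (a AND b) for every input pair
for a in (0, 1):
    for b in (0, 1):
        composed = (a | b) & (1 - (a & b))
        print(a, b, "->", composed, "matches table:", composed == (a ^ b))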


But that's abstract binary logic. Where else might you see a state machine in the real world?

-- A light switch
-- Your sump pump
-- Your microwave oven
-- Your dishwasher
-- Your Washing Machine
-- Your Clock

State machines are all around us in every direction. We are inundated in our everyday lives by state machines and simply don't recognize them as such. Your light switch (and lamp), for example, is a simple state machine. Its input is the switch and its output is the lamp. It has two possible states, "ON" and "OFF". If the switch is "ON" then the lamp is "ON". If the switch is "OFF" then the lamp is "OFF". We could technically ascribe a third state if we wanted, which is "BROKEN".

Your sump pump, for instance, is an example of a simple automaton. It has the ability (we'll posit) to sense whether there is water in the trap or not. Based on that sensory input the state machine has two states: On and Off, just like the light switch. Except it operates independently (autonomously). When water is sensed the pump comes on. When water is not sensed, the pump shuts off again. And like the light switch we can ascribe an additional state if we want, which is "Broken".

I'm not being facetious when I call "Broken" a state / condition. While most texts and discussions overlook it or leave it out as irrelevant, it is a real condition and can have (and really does) serious real-world implications. For example if you consider your local friendly atomic bomb as a state machine device, you would probably be really interested in a possible "Broken" state.

Other state machines are more complicated but no less state machines. Your washing machine, for example, and we'll utilize a simple example. You load up the clothes, put in the soap, and push the button to start the cycle. Everything that follows is a carefully choreographed sequence of events designed to clean your clothes.

First a valve opens up for a bit and permits water to flood the compartment containing the clothes and soap. It may be on a timer or it may be controlled by a water sensor. Either way at some point it shuts off. Then an 'agitating' action occurs for a period, most likely controlled by a timer, and eventually concludes. Finally a 'spin cycle' is begun which opens another valve to drain the water and the whole clothes compartment begins to spin very fast to centrifugally expel the water from the clothing. It also is controlled by a timer. At the end of the spin cycle the overall washing cycle is complete. A buzzer might sound for a short period, again controlled by a timer. Your machine might have additional features such as an electric door latch, controlled by an electro-mechanical solenoid device, some fine-tuning adjustments to permit you to select alternative combinations of hot or cold water, different degrees of agitation, etc.

The state table for your washing machine is a bit more complex but it does exist. There are a fixed number of finite states that the machine can have (including various "Broken" states) and that's it. Your washing machine is also a little more complicated in that it also requires an "executive" in the form of a "sequencer" to "kick off" the various sub-events (fill, agitate, spin, etc) in their proper order and for the prescribed time periods. Each sub-event likewise has a state table with various state conditions. The whole machine is understandable from end-to-end. It is predictable. It does what it does and you can stop it at any point, observe its conditions and then look them up in the various state tables to confirm its state. There are no "extra" states-- unless its malfunctioning.
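Here is a minimal python rendering of that washing machine-- the state and event names are invented for illustration-- where the "state table" is literally a table, and any unexpected event drops us into the oft-ignored "Broken" state:

# toy state table + sequencer for the washer described above
STATE_TABLE = {
    "IDLE":    {"start_button": "FILL"},
    "FILL":    {"water_full":   "AGITATE"},
    "AGITATE": {"timer_done":   "SPIN"},
    "SPIN":    {"timer_done":   "DONE"},
    "DONE":    {},
}

def run_cycle(events):
    state = "IDLE"
    for event in events:
        nxt = STATE_TABLE[state].get(event)
        if nxt is None:
            return "BROKEN"          # unexpected event: the oft-ignored state
        state = nxt
    return state

print(run_cycle(["start_button", "water_full", "timer_done", "timer_done"]))  # DONE
print(run_cycle(["start_button", "timer_done"]))                              # BROKEN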

We could go through similar discussions about your microwave, cell phone, alarm clock, oven, lawn mower, automobile, etc. Just about anything you can name is subject to a set of state tables, albeit potentially complex. Even a CPU, as complex as it is, is ultimately a machine that is governed by a complex set of state tables. [Note that it might be useful to go back and re-read my previous commentary on dynamic processors and whatnot-- this is where it begins to slot in.]


All of these devices are machines of various types. And all of them involve inputs and outputs and sensing and timing and sequencing of operations, and existing in various states. Let's put aside the question of "Are *WE* Machines" for the moment. For many people, that's an emotionally loaded question.

And I'd actually like to reach back now and pick up a thread I promised to revisit earlier-- that additional thing you can do with "NOT" state tables...

No doubt you've played with mirrors at some point in your life and have noticed that when you put one in front of the other you seem to get an "infinite" sequence of mirrors each reflecting the other's visage...

You can cleverly arrange a "NOT" device in such a fashion that its output connects to its input. (Not quite like having your head up your ass, more like having your ass up your head-- not sure I want to continue with that image...) The interesting thing when you connect a "NOT" device's output to its input is that it begins to "vibrate", or cycle between its On / Off states very rapidly-- which is usually at the upper limit of the device's physical ability to switch itself from one state to the other. With this switching On / Off, we have just created a self-organizing "Clock" (as its called) or binary "Crankshaft" as you could also conceive it. Every time it produces an output, the output is fed around to its input which forces the opposite output, which is fed around to its input and-- ad infinitum.
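In a discrete-time python sketch (each loop iteration standing in for one gate propagation delay), the self-clocking NOT looks like this:

# the NOT gate feeding its own input; each iteration = one propagation delay
def not_gate(x):
    return 0 if x else 1

signal = 0
waveform = []
for _ in range(8):
    signal = not_gate(signal)        # output wraps around to the input
    waveform.append(signal)
print(waveform)                      # [1, 0, 1, 0, ...] -- a square wave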

"Clocks" are interesting components too. It doesn't seem like they do all that much, just cycling back and forth between On and Off, but they are the veritable "crankshaft", as I mentioned, in the logical world-- both abstract and physical.

Want a real-world example? Okay, how about this-- what is the "speed" of your computer? 1GHz? 2GHz? 3GHz? More? Whatever it is, it's measuring the speed of the central "clock" that cycles the central processing unit (CPU) inside the box. Every time the clock makes a complete cycle from Off to On and back again, that's one cycle; one cycle per second is one "Hertz" (the unit used to measure frequency). A gigahertz, or GHz, would be one billion of them in a second.

If you toss in a handful of other components, the "timer", the "counter", the "gate", the "latch", the "buffer" (which is really just a unity gain "amplifier"), you have the basic ingredients for creating nearly any type of autonomous device, including modern computers, which are capable of all sorts of seemingly intelligent behavior.

It is probably also useful to point out another set of valuable components-- the sensors. In biological systems-- let's say "Us"-- we're used to our various so-called "senses"-- touch, taste, sight, sound, etc. There are more than just five. Lots more. "Tilt", for example. It keeps you upright or lets you know which way to swim to get air when you're underwater. It's housed and maintained in your inner ear. But we'll leave the complete enumeration as an exercise for the reader.

In "mechanical" terms, sensors permit devices to "sense" some aspect of their environment, to quantify it-- meaning to reduce it to a known "state"-- which can then be examined and acted upon. Human senses are the same, we just don't think of them in those terms very often. In this day and age there are sensors to detect and measure just about any little thing your mammalian brain can wanna know about. And you can hook 'em up to your computers, sequencers and state machines to create a nearly endless array of automata... even, dare I suggest... an intelligent machine.... as if we weren't already such.

Complicated?

Yes.

Machines.

Yes.

Does that deny the hand of God?

Not necessarily. Although that argument is what has nearly universally derailed investigation and study into intelligence and intelligent machines and the understanding of ourselves, both our physical selves and our minds.


Like a candle sputtering into flame, open your mind, your senses, your imagination to the possibilities and become illuminated. The first step is scary. All the rest are easier.
I ask you again, what does it mean to KNOW something?

What is "to know"? How do you know you know it?

*I* have a definition for it. I know what "to know" means. And I can state it simply and it is a core concept that can be used to hoist additional constructs. My definition is a building block....

But I want YOU to tell me what "to know" means-- if you insist upon requiring a formal theory of mind as a requisite basis for intelligence, then there must be a mechanism, clear and concise, for "knowing".
BTW, I mostly concede your point here, especially insofar as you're really making mine... :-)

CA: "First, back to the aliens. I think you were being unfair in suggesting that there are no commonalities whatsover with humans. The aliens are subject to the same laws of physics, chemistry, and mathematics. Even though their symbology, in representing their knowledge, would be vastly different, nevertheless, at the highest levels of abstraction...in geometry, topology, number theory..."

I agree, there is *some* basis for commonality. We are both governed by the laws of physics. And not for nothing, *might* have, by pure chance, other commonalities undiscovered-- and not necessarily of the types you alluded to.

CA: "a common basis could be discovered, and working outwards from that"

And this was the larger element of my central point-- all you can do is observe and react. What goes on inside the box is irrelevant. You simply must accept that it does what it does and respond to it accordingly.

I think *maybe*-- I don't want to put words in your mouth-- but maybe you are confusing-- wrong word-- mixing up-- involving-- entangling-- the concept of "awareness" or "consciousness" into your premise for intelligence?
CA: "Our brains' functioning is built on a long, complex pyramid of construction reaching down to biology, chemistry and physics at the foundation."

I think you believe I am a reductionist, when in fact I am just the opposite. I do not reach *down* to biology, chemistry and physics. I use those as my building blocks to build *up* more compex constructs and hoist into existence more dynamic building blocks capable of even more advanced or esoteric functionings.
CA: "And you are also assuming that simulations of a quality like intelligence, if they are good enough, are equivalent to intelligence as we have historically known it. So a simulation of combustion, or a spider weaving silk, or a thunderstorm, or a nuke war, etc., in a computer program, reducible to the operations of a formal system, is the same thing as the Real Thing in the exterior world? Are you sure you're not a clone of vzn, bandying about the idea that virtual reality is the same thing, if cleverly enough constructed, as our heretofore REAL reality?"


(long slow breath)

Oh boy.

Okay here we go... (remember, you started this)

Humans operate in "Meat Space", meaning we are bound to the physical realm and must abide by its precepts. Which does not preclude other possible "spaces" or "physical circumstances" (formalisms of "physics"), we just happen to inhabit the one we inhabit, and that's our exposure to the concepts and our basis for understanding and dealing with it.

Furthermore we are "machines"-- however carefully you wish to define that-- that operate semi-autonomously-- to whatever degree you want to define 'semi'-- within that "meat space".

From the perspective of a creature living in that space, we have various parts: 'sensors', 'organs' (I'll call them that), motivators, effectors, etc. that permit us to interact with the space we live in. And since we are 'of that space' we are adapted to that space. Our senses, perceptual abilities, processing, etc., are all adapted and-- evolved (I'll leave this dangling for now)-- for that space. We are used to that space. We feel at home in that space. We are acclimated to that space. We ignore that space-- we "tare" it out of our perception.

Yet, nonetheless, we are governed by that space.

All of our physical systems, sensory apparatus, processing / perceptual systems are governed by the specific physics involved with that space. Further, we are regulated by "time", which seems to be an immutable quality that pervades all spaces, both real and imagined. And at least in 'real' spaces, we are also made of the matter from within that space and are bound and regulated by its properties and its energies, and the propagation of same.

So when you 'sense' something, perhaps visual, auditory, or even a touch-- your perception of the event is delayed by the amount of time required for the information related to the sensing to reach the appropriate processing center(s) and to be incorporated into the information stream of your consciousness (whatever that is-- see: theory of mind).

So even in the most conservative view of how it all works-- us, stuff, and the universe and all-- we are only seeing (perceiving) what *WAS* and not what *IS*.

And, anecdotally we can see that as well-- we generally consider someone who is "faster on the uptake" as being "more intelligent" or "tuned in", when perhaps in reality they just have slightly faster circuits.

Furthermore we DO NOT HAVE A CHOICE in the matter when it comes to how we perceive the universe and stuff-- we have what we are made of and that's it. We are forced to deal with the universe as a construct and synthesis of our various senses-- plus if you like, our internal musings as feedback, an advanced concept perhaps, but it could be incorporated as an additional "synthetic" internal sense.

And still even more furthermore, I cannot even conceive of how a "real" "meat space" being, whether animal, vegetable, or mineral, could hardly do much better. Ultimately anything existing in real space has to deal with the physical limitations of that space. So while it may be possible to create a being with faster circuits, or more optimized processing capacity, or smaller, lighter power storage capacity, or any of the myriad engineering-related details regarding that being's physical -- or software -- construction, ultimately there is a limit imposed by the physical properties of that space.


So what's real anyway?

If I am *of* that space, I am forced to accept whatever exists at the juncture of it and my available sensory facilities as "real". I have no choice in the matter. Thus it does not matter to me if what's "out there" is *really real* or a projection or an illusion or anything else. I cannot independently determine which, if any of those, is the true "reality", or even if "reality" is some combination of all of them. Maybe reality takes a holiday sometimes and leaves us with reruns until it returns? Would we even notice? Maybe that's what Deja Vu is all about? Maybe you really *have* seen it somewhere before! :-)


I do not consider people struggling with these types of concepts to be "crackpots"-- at least not for that particular sin. It *is* difficult to consider it, to talk about it, to reason about it-- there are so few unencumbered terms that can be used to understand it, and that are standardized enough to relate it to someone else. So most people are forced to try to use their own terms to describe it and their thoughts about it. And to use their own backgrounds to try to relate to it.

If you are confronted by a truly *new* thing that is completely outside your realm of experience, you have no basis from which even to identify it. Much less to express it. So you have to start from the beginning and painstakingly piece it together as a series of relationships to things you *do* know about. Does it have sides? Edges? Shape? Color? Sound? Etc.-- go through all the potential adjectives and indexes to create a new "standard prototype" (archetype) for whatever that is.

And also much like relating to our mythical alien.

So sure, I believe in alternate realities, other universes, different physical realms, altered perceptions, illusory spaces, simulated universes and all of that-- how can I discount it? I cannot. But, just for the record, nor can I prove it one way or the other. So for my own peace of mind-- I choose to ignore it and chip away at other things.
When I say that "we are machines"-- I'm not sure if I actually said it, but hey, I'll out myself-- we're machines.

When I say that we're machines, I do not mean to imply we are simple machines, or to devalue our abilities or intelligence. I only mean to state that we can be understood. That our parts can be probed and picked-apart, analyzed, and the inner workings revealed. I mean that our brains can be-- however slow and painstaking the process-- studied until their mechanisms are fully understood-- which will lead to a definitive "theory of mind" for humans.

Also I do not mean to imply that "one size fits all". Each of us is unique, to a degree-- and within a small range, along a number of variables. So theoretically, if you understood the mechanism, understood the variables, you could compute how many "Us's" there could potentially be.

But even that would not be the true end of it, since that would only speak about the possible *types* of Us, and would not take into account the data we process, our sensory experiences that we encounter throughout our lives, the thoughts we think, our beliefs, who we interact with, what we interact with-- etc. There is a whole realm of things that go together to make up "who we are" that are decidedly NOT part of our basic structure and that we can ONLY get from "being there".

And yet, conceivably-- at least in the realm of imagination-- it could be possible to somehow "extract" (record) that data and insert it into a copy of ourselves. But as soon as the copy was made, it would instantly begin to diverge from our original self and be, for all intents and purposes, a separate individual. It would begin accumulating its own sensory experiences and interactions, which would be different from our own.

I have often thought about how to go about replacing myself without experiencing "death" in the classic sense. All the scifi movies and fiction set the scene so the original dies while the clone takes his place. That's great for the clone. Not so much for the original.

So how could you do it in such a way that the original would not have to die? The answer is slowly, piece by piece. Integrate a new element into the original, permit it to acclimate or replace, and then repeat the process until all of the sub-elements have been replaced and the original continues to exist, albeit with new parts. Perhaps the process could be expedited and paralleled to a good extent. I'm not ruling out improvements, only pointing out that you have to work with the original in order to save the original.

Okay, coming back from fantasy land...

Realizing that we are machines allows us to study the machine. Admire the creator's handiwork if that's your particular mindset. Marvel at the wonders of evolution if you hold a different perspective. Either way, it's the starting point for a long and beautiful journey of discovery and delight into the realm of complex, dynamic, mobile, interactive autonomous machines. Whether your medium is abstract, biological, mechanical, or something else, or all of the above-- there are striking parallels and crossovers between the various disciplines that heretofore have been little recognized.

["Heretofore" meaning starting about in the mid 40's or so and really getting going with the advent of the modern computer in the late 50's-early 60's. Which led to the rise of the "robot" as a laboratory curiosity and then into an independent field of study.]

I do not insist that people are *simply* machines, only that they *are* machines, and as such, capable of being understood and fully documented.
ok, Ill try to be serious for a minute.
hi guys I will try to post some comments as I read yours.
CA mentions "chunking". that is a very deep concept in psychology that is somewhat new, only a few decades research, but which I think is very relevant. I theorize that intelligence involves "chunking" ad infinitum in a sort of fractal way. concepts or memes are built on concepts. it is one subtle area where intelligence can seem to have similarity to mathematical structures (fractals). I think the solution to the problem very likely will involve "chunking" as some kind of aspect....
but arguably, this is understood in neuron structure. a neuron takes a set of inputs-- a "chunk" so to speak. it takes different inputs from different neurons. each other neuron is a piece of the chunk and taken together, its a chunk. so maybe its already explained in our mental structure.
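to make the chunking idea concrete, heres a tiny python sketch-- a toy, NOT a real neural model, all the weights and thresholds are invented-- just to show chunks feeding bigger chunks:

    # toy "neuron as chunker": each unit collapses a set of inputs (a chunk)
    # into one signal, and higher units chunk the chunks. all numbers invented.
    def neuron(inputs, weights, threshold=1.0):
        """collapse a chunk of input signals into a single 0/1 output."""
        activation = sum(i * w for i, w in zip(inputs, weights))
        return 1 if activation >= threshold else 0

    raw = [0.9, 0.2, 0.7]                    # raw inputs
    layer1 = [
        neuron(raw, [1.0, 0.0, 0.5]),        # chunk A
        neuron(raw, [0.2, 1.5, 0.1]),        # chunk B
        neuron(raw, [0.4, 0.4, 0.9]),        # chunk C
    ]
    # a higher-level neuron chunks the chunks-- concepts built on concepts
    print(neuron(layer1, [0.7, 0.7, 0.7]))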
some of mr Es writing reminds me a lot of a book I read many yrs ago, but which is very famous. "zen and the art of motorcycle maintenance". I read it before I found out my brother has schizophrenia. the book also pairs well with the movie "beautiful mind". recommend others check it out. also there is a rumor that "zen and the art of motorcycle maintenance" may be turned into a movie.. apparently it has gone through many iterations & even involved sean penn (former husband of madonna).. not making this up!!
speaking of self-similarity of fractals, heres another article some may be interested in, inspired by CA
brain in a bottle-- all fizz and no body.. maybe a little apropos for halloween...
as for definitions.. maybe an intelligent entity is one that can.. ponder its own intelligence. but always (inherently) without resolution.... :)
re "chunking" it does seem to be an aspect that deep blue does not mimic our own [human] methods of handling chess. I think there may be some scientific papers on the subject in which chess players were actually studied to try to figure out how they recognize chess patterns. I think it was found that they did poorly on "nonsensical" arrangements of chess pieces whereas the computer played superior in those conditions!! advantage deep blue!! in other words the humans found deeper patterns on the board, ways to "chunk" the configurations (based on working with chess encyclopedias and much human play) and the nonsensical arrangements did not fit into any of those patterns.
I do think chunking may be sufficient for intelligence, but it is a deep ("blue"?) question whether it is *necessary* .. ie could some other system that does not really fundamentally rely on chunking still be intelligent? but I must admit, it really does seem like chunking is a fundamental aspect of human intelligence. and by the way, I believe the concept was introduced by a semifamous psychologist based on observation. (ie, not a computer scientist, not a philosopher!) I may look into this more later.
the philosophers have some contributions to the subject in ideas like "intentionality" and "qualia" etcetera, but I find them not quite as inspiring. I think dawkins "memes" are very useful however & tend to think in terms of them.... there is some overlap.... I give the philosophers a hard time.... its all in good fun right :)
mr E mentioned "meeting an alien intelligence" .. if you guys havent seen it I highly recommend the movie Iron Giant. its by a semifamous cartoon director who went on to Pixar and made Ratatouille .. he's unorthodox & surprising.. sort of like a cartoon tim burton.. think you'll like it
CA: "As to aliens...if they are organic beings then we are entitled to attribute intelligence to them based on the solid assumption that organic evolution occurs elsewhere in the universe creating intelligence as it has for us."
this is just a variation of what is known as Anthropocentric Bias.
it could be called Organic bias.
you're an organic chauvinist!!!
"Dualism, there is a great divide between the material and the mental. vzn loves dualism, like Ben Affleck loves that Puerto Rican whore, what's her name?"
very funny, I am not a Dualist (that is the topic of your Brain in a Jar post & comments)..
unless we are talking about women in which case I have no problem with two at a time.
and by the way her name is jennifer lopez. yowza.
CA, you continue to be hung up on robots, computers and automata which are distracting you from the essential aspects of our conversation-- IMO. When I say "people are machines" or "machines can be intelligent", which I've been getting around to, you seem to heap on a lot more stuff than I do on top of those concepts.

As I pointed out earlier, there are not many terms that are unencumbered when trying to discuss, understand or relate this stuff. We all bring a lot more stuff to the party-- me included-- than perhaps we ought to.

However I will 'fess up to one thing in particular-- I use the word "machine" to serve a number of purposes, not the least of which is to poke the establishment dogmata squarely in the eyeballs. People seem to have this innate *need* to deny their own physical and mortal existence. I think there is a definite purpose in reminding people of their corporeal nature, and more specifically that if it weren't for geek technologies and the layering of civilization that a lot of them would end up as sabre-tooth tiger poop.

Somewhere in one of your comments you get around to dragging in life, survival instinct, propagation of species, and a half dozen other things all related to higher-order aspects of intelligence-- not intelligence per se, but rather what to do with it if you have some. Which I grant, intelligence and evolutionary design and all that other stuff do probably go hand-in-hand in natural systems development. They sort of have to-- survival of the fittest and all that. But it's also important to point out that it isn't really survival of the fittest-- while that's the popular notion, it's not quite right and not even what Darwin was saying. More correctly it would be put as something like survival of the fittest from amongst the challenged.

Not every niche is challenged, nor is every entity that occupies a niche necessarily the baddest-ass player. And there's a corollary too-- nature has no problem leaving in traits, features or behaviors that do not grossly affect the primary mission of survival or species propagation. Nature doesn't give a rat's ass if you get hypertension and die in your forties as long as you were able to spread your wild oats around in your younger years. And even that's an oversimplification.

Regarding your statement about "no machines have ever had emergent properties"-- that's just incorrect. Any sufficiently complex system is going to have emergent properties, regardless of the type of machine or system it is. Behaviors or conditions that weren't expected. Sometimes the origins of the properties are so subtle that they are very difficult or even impossible to discern.

Another item to hammer down. When I talk about machine intelligence I am not necessarily implying that your desktop PC is gonna spontaneously develop a personality and free will and start mocking you-- even though I happen to know it already does that secretly... ;-)

I am not mandating the architecture of the system, merely using the word machine as a hammer on the table to constantly remind us that we are nothing special-- everything about us is ordinary, "mechanical", and can be both understood and documented, given sufficient time, resources and study. The fact that we are biological machines who evolved to be what we are does not negate the fact that we are-- machines.

And who says machines have to have a purpose? Who says you or I have a purpose? What *IS* our purpose, if you want to go down that road? Whatever your answer-- how do you know that? What difference does it make, cosmologically, whether I sit on my ass for my whole life and do nothing, or become a Nobel laureate, or a mass murderer? What difference does it make if I live my life as a saint or a sinner? What difference does it make if I claw my way to the top of the heap or stay at the bottom of the heap?

Whatever your answer-- is it the same for mold and slime? Dogs and cats? Dolphins? Ants and honeybees? Aliens? What makes us so special that we get a whole division of physics, natural law, and moral purpose all to ourselves?


We are machines. We obey all the laws of nature and physics. Every part of us can be dissected and understood. The only reason we have some hesitation or repugnance at the thought is that we consider ourselves to be sentient beings-- self aware and aware of the world around us. And we have some sort of idea of a "golden rule" and other blather that makes us pause whenever we want to rip one of us up in the name of scientific advancement. Morals and ethics-- which for the most part are a shared social "contract"-- I agree not to do it to you if you agree not to do it to me.

But we aren't so bothered when it's mold or slime or a fish or a dog or cat or a dolphin or a monkey or something we don't empathize with or ascribe "awareness" or "feelings" or other aspects to-- even when, after study, we *KNOW* full well that some of those creatures DO feel pain, DO have thoughts, ARE aware of their surroundings, and ARE aware of themselves. We just don't give a shit. It doesn't serve our purpose to care enough to stop. And in some places and at some times, including the present day, it serves some people's purposes even to use other human beings for test subjects-- to try stuff out on, and then pick them apart to see what happened. So some people don't even have *THAT* degree of morality.

I'm just sayin.
I am not totally following MrE or CA even after reading all the words, but something MrE mentioned deserves further comment. I have another geek friend who told me that the purpose of intelligence is somehow tightly coupled to evolution. I am not totally clear on what his position was, I will have to try to remember.

but, my position is closely with MrE on this. I think evolution *LED* to intelligence as an adaptive quality. but, intelligence can be decoupled from evolution. the main purpose of evolution I would argue is to deal with what in economics is called SCARCITY. scarce resources. food, water, evolutionary niche etcetera.

so imagine a robot that does not need to eat or sleep or mate. what role does evolution play here? the answer.. zilch. there can be an evolution in the sense of improving its functioning by the designer, but I think the whole point of artificial intelligence is this decoupling that MrE is speaking of to some degree. these robots, "when" they achieve intelligence, need not have limitations that are intrinsic to us and highly coupled with evolution, such as hunger, pain, lust, etcetera....
CA writes: "What I'm saying is that, so far, no one has created a machine with emergent properties."

yow, thats quite a doozy. yeah, I agree-- that claim is basically false. now heres the deal. "emergence" is indeed a very young science. there is an excellent book of that title that describes the birth of this new science. I would rate it as less than two decades old. it would be interesting to pinpoint its origination.

heres a datapoint for your consideration. I have been working ~10 weeks at a new job at a very large banking company, fortune 500. we handle ATM transactions across the country. our system processes ~6M transactions per day. my sr mgr gave me a very difficult problem to diagnose when I started. after many weeks of analysis and a few recurrences, we are closing in on a fix. but its a very subtle glitch that intermittently crashes one of our servers, not always the same one. the sharpest minds in the company (and there are some very sharp people here) have looked at this apparent bug and have not been able to isolate the problem. we have LOTS of very expensive analysis software, some of the best in the biz, and many people and myself have pored over graphs and logs painstakingly. we have mountains of data-- literally gigabytes. we can look at many statistics weeks back in 15s increments and graph them. and this problem started in the middle of the summer possibly-- july even!! we still dont have the answer. but we are closing in on it. every time one of those machines crashes, hundreds of ATM and other monetary transactions fail.

now, why is it so hard to isolate this bug? some of this is due to the typical institutional headaches, but.. my answer-- because we are working with a highly complex system, one of the most complex Ive ever had the pleasure/pain of dealing with personally in my career. its a very Big Dragon as in Here There Be Dragons. it is literally beginning to manifest many emergent behaviors. some which we can loosely understand, others that we have studied in detail after many weeks, and still dont fully have a picture of. this thing is possibly more complex than some FAA or NASA systems. and we struggle every day to create a mental picture that matches what is happening inside this massive black box with a zillion moving parts. thats my current definition of Emergence. for a similar idea/odyssey, see a great book called "The Bug" by ellen ullman.

so yeah, we as humans weakly understand Emergence, but we may pull off the trick of creating highly complex systems with remarkable emergent properties that shock us, that seem to transcend their designers in some ways. sometimes its a bug, sometimes its a feature. I have seen analysis of deep blue moves by chess experts, and some are literally shocked, flabbergasted by certain moves that the machine makes. they see the move, they would never have picked it, but after the machine picks it, they say it is a recognizably "brilliant" move. what kind of brilliance are we talking about? I would say, the brilliance of emergence.

open salon-- also an emergent system. who can predict what will happen? at heart its just software. but it goes in surprising, unexpected directions.

so humans & Emergence have a lot of possibilities together that we're just starting to explore. its a brave new world. terra incognita. it will look like scifi realized over the next few decades.
at some point, one has to recognize a sort of basic paradox or irony. this type of debate on artificial intelligence converges toward, or amounts to almost a religious debate -- among atheists.... or at least humanists...
another strange twist. on this ATM transaction system.. its a new experience for me in the following way. all IT workers have the experience of working on failed projects. statistics suggest many projects "fail" in the end after many months, sometimes years of work. I know, Ive seen it, Ive lived it firsthand. its difficult to avoid the perception that its your fault, or your teams fault, or whatever, but after enough of such experiences, one begins to discover a pattern. its the IT version of what schumpeter called "creative destruction", and capitalism in motion in the [very fast moving/paced and *evolving*] IT realm.

anyway, this system has the opposite problem. its a victim of its own *success*. the executives want to *double* the transaction volume in a short amount of time. its getting loaded down with exotic new changes. people cannot keep track of it all. as the poet whats-his-name famously said [gotta review that for ideas!!], "the center cannot hold". sometimes, I have the feeling its kinda like a big supernova that could spin apart....

aiieee, captain!! I donta think the dilithium crystals canna take it!!
VZN: Here's an open question for you or anybody...

What exactly is "emergence"? Is it something "new" and "unexpected" that arises out of the sum of the parts and the collective workings of the system? Or just something the designer(s) didn't think of but that makes perfect sense when you work it all out and understand the principle? Wouldn't "emergence" really be more of an indictment of the designers for not completely thinking their system through before unleashing it?

(And I'm not really picking on designers, human or otherwise)


Another thing to be careful of is not to confuse "intelligence" with "mutations" that lead to new behaviors. In many cases items that one might ascribe to "intelligence" may be nothing more than a mutant meme. Who says DNA is the only thing allowed to mutate anyway?
I just added a new section on "improved brain imaging technology" & a link, and I gotta go add a second one soon.
Y'all are confusing intelligence with life, evolution, adaptive behavior, Darwinism, and a whole bunch of other stuff. You are failing to separate the system from the behavior.

I say again: If something acts intelligent then it is intelligent, regardless of how it got that way, or whether it is alive or has feelings or can evolve into something else. Intelligence does not require the survival instinct. Nor a mating instinct.

Intelligence is simply a word used to describe the ability of a system (a "machine") to respond to its inputs-- and do it correctly.

Certainly many biological beings, humans included, have evolved or adapted to have intelligence-- and also through their "intelligent actions" have affected their continued evolution. Once it's in the mix it obviously has (or should have, or could have) an effect on the outcome.

But you're confusing the needs and goals of living / biological / evolutionary systems with the basic requirements needed to build and exhibit intelligent behaviors. Certainly the former can make use of the latter, but the latter has no requirement for the former.
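Here's a little Python toy of that behavioral stance-- purely illustrative, the "correct" table is a stand-in for any task you care to name-- to show that the judgment is made entirely from outside the box:

    # judge a system purely on its observable input/output behavior
    correct = {"hot": "withdraw", "food": "approach", "mate_signal": "approach"}

    def judge(system, trials):
        """score any callable on behavior alone-- what's inside is irrelevant."""
        return sum(system(s) == correct[s] for s in trials) / len(trials)

    # two very different boxes-- a lookup table and a rule-- score identically,
    # because from the outside they act the same
    table_box = dict(correct).get
    rule_box = lambda s: "withdraw" if s == "hot" else "approach"

    trials = ["hot", "food", "mate_signal"]
    print(judge(table_box, trials), judge(rule_box, trials))   # 1.0 1.0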
"What exactly is "emergence"? (a) Is it something "new" and "unexpected" that arises out of the sum of the parts and the collective workings of the system? (b) Or just something the designer(s) didn't think of but that makes perfect sense when you work it all out and understand the principle? Wouldn't "emergence" really be more of an indictment on the designers for not completely thinking their system through before unleashing it?"

these are exactly the kind of questions that a new science of Emergence would (and does) seek to dissect and answer. and the answers wont come early, quickly, or cheaply. my question to you is, can (a) and (b) be discriminated? are they really different? yes, it could be an "indictment", but the alternative would be to never "unleash a system" into the world at all... I would argue that the Challenger disaster due to the frozen O rings discovered by Feynman is an example of Emergence. he didnt attribute it to that, but imho it exhibits all the classic symptoms. not just in the O rings but also in the bureaucracy that surrounded them. examined by Feynman in his great book.
Okay, I threw it out there last night-- what does it mean "to know" ?

One can mumble about all sorts of metaphysical psycho-babble, but "to know" is a very simple concept.

"To know" is to identify and remember. As a system gets more complex it can include more abstract types of knowledge such as facts, opinions, etc. But on a basic and very fundamental level-- before we've introduced a theory of mind, "to know" can only be considered in terms of identify and remember-- simple things.

Remember, in one of my previous comments I alluded to relationships being the fundamental element of intelligence. And the most fundamental relationship, to paraphrase it slightly, is "me versus not me" (i.e., anything else that's not me). And it's also important to point out that when WE-- you and me-- discuss this, we do so with words, and a stream of consciousness-- we have a hard time dissociating those elements from this fundamental idea-- but when we are talking about the root relationship, we have no words for it, we have no consciousness surrounding it. We have no opinion about it. We don't even really have any awareness of it per se-- and yet it is the basis for *any* awareness we can ever have, as a being.

The reason we cannot articulate it, realize it, "know that we know it", etc-- is because we do not as of yet have any OTHER relationships that could clarify or relate the concept. We have no words, which are simply definitions-- a "let" statement, if you will. A simple relationship-- let "ME" = (me). Let "EVERYTHING ELSE" = NOT(me). But "not" would have to be an innate function. *Something* at the _hardware_ level has to be able to supply that function or we won't ever be able to perform it. It is a fundamental, essential ingredient: if you don't have it, you can't synthesize it.

Knowledge, at the basic level, is all about identification and remembering that which has been identified-- even in the absence of words, the ability to think abstractly, or to even be "aware" that its happening.

So, in order for it to occur, it *must* also be a supplied item in _hardware_ or else it can never happen. It cannot be synthesized.
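Just to pin the bare shape of it down before we go further, here's a toy rendering in Python-- the "is me" test and the store are treated as hardware givens, and everything else is scaffolding:

    # to know = identify + remember; both primitives are supplied, not synthesized
    class Knower:
        def __init__(self):
            self.memory = set()          # "remember": a hardware given

        def identify(self, thing):
            # the root relationship: let "ME" = (me), "EVERYTHING ELSE" = NOT(me)
            return "me" if thing is self else "not-me"

        def know(self, thing):
            # identification plus remembering-- nothing more at this level
            label = self.identify(thing)
            self.memory.add((label, id(thing)))
            return label

    k = Knower()
    print(k.know(k), k.know("rock"))     # me not-me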

So how can you supply "to know" at the hardware level-- or let's use the more basic idea, to identify and remember? This sounds difficult, especially the remembering part... How can it be accomplished? And moreover, how can it be accomplished *naturally* and hoisted into existence using some naturally occurring phenomenon?
Identification.

What is identification?

Identification is a function that permits the detection / discrimination / discernment between two or more elements (things). And via that function becomes an "index" into the "set of those things". And the more ways you have to "identify" things, the better you are able to detect, discriminate, and discern them.

If you're afraid of "dimensionality", you might want to slip your head under the covers now. I'll let you know when you can peek. We're headed for N-dimensional space.

Every time you create an index ("a function for identifying things") you create a dimension in our N-dimensional space. But the most basic dimensions-- speaking from the standpoint of biological or evolving complex systems-- are not the classic "X", "Y", "Z" coordinates plus "Time", as we learned about in physics or mathematics or some other place-- rather they are going to be very subtle functions, probably related to temperature, proximity, salinity-- as an example-- pressure, etc.

And in order to be able to exercise that function there *must* be a corresponding sensor located in _hardware_ to perform it.

Let's take temperature as our example. In order to be able to identify temperature there must be some physical sensor able to do it. But once it's there, the ability is enabled and it becomes a primitive index. A One-Dimensional index. And also note that it has no vocabulary, so it cannot articulate that something is "HOT" or "COLD" even though it can sense temperature-- and IF it were able to articulate it, those would be useful tokens with which to label the extents.

At this point we have no reactive element. So it's merely identifying items along the hot/cold index, though at this point it's immediately forgetting them when they disappear (which means simply that they are no longer present at the sensor). Presenting something "hot" or "cold" will cause the sensor to output accordingly along its axis to whatever degree of resolution it can resolve, and along its available range.
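A quick Python sketch of that lone sensor-- the range and resolution are arbitrary assumptions, the point being only that each reading self-locates along the axis and is then forgotten:

    # a one-dimensional index: no memory, no words, just a position on an axis
    def temperature_index(reading, lo=-10.0, hi=50.0, resolution=16):
        """map a raw reading onto a primitive 1-D index."""
        clamped = max(lo, min(hi, reading))
        return round((clamped - lo) / (hi - lo) * (resolution - 1))

    for t in (-40.0, 3.0, 22.0, 99.0):
        print(t, "->", temperature_index(t))   # each reading self-locates, then vanishes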

At this point we might create a machine, or have a biological element, that's able to use this one sensing ability to gain some sort of advantage. Perhaps it's some primitive organism that uses the information to flap / swim / vibrate / whatever more when it's hot or cold than vice versa. Perhaps in its environment, in competition with others similar to it, that could convey an advantage of some sort and improve its odds of survival and producing offspring, and thus increase the odds of passing along both the ability to do that and the predisposition (primitive meme) to actually do it. And if it confers an advantage, it would be in some sense an "intelligent" behavior, since it would be an advantage over organisms not able, or less able, to do that.

I recognize I'm using a biological / evolutionary example here. So sue me :-)

Then one of the little buggers has a mutation. Somehow the sensor gets messed-up and instead of detecting temperature, it instead detects salinity. And doing that confers an advantage of some sort on it. And it survives and passes along its mutation.

And on it goes-- and then another mutation occurs at some point. A beastie is generated with *both* the ability to detect temperature and salinity. We now have Two-Dimensions of identification. Still no memory yet, but we don't really need it yet either.

Having both abilities really confers an advantage to organisms that have it. And they pass it along. And so on.

Then you get a mutation that confers the ability to detect light.

Or pressure.

Or a type of molecule that's bad.

Or whatever.

Each of these sensors confers additional advantages on the organisms and permits those that have it to survive and thrive better than their brethren without them.

But on an intelligence index, we're still only barely registering on the meter. These things are pretty dumb. They don't know enough to swim out of their own feces. Or even what feces is, for that matter. And there is no propensity for survival either. No concept of self. No concept of other. They simply react in some simple way based on the inputs they're able to sense. They have no purpose, but they exist, and will continue to exist and function in their simplistic way, until something better comes along, or else their environment is poisoned / altered somehow to the point they are no longer able to be sustained by it. Over time and successive generations, organisms that have one combination of sensors or another may be able to outperform (or "compete", if you want to use that term) others and become a dominant or even victor organism in the environment.

Now let's consider that there is a "sweet spot" in the environment such that a particular spot along the index (a small range let's posit) is particularly good for the organism. So that organisms that are better able to discern the sweet spot will be better able to thrive. So a primitive "trigger" mechanism develops which activates more heartily when the favorable conditions are detected.

Over time and successive generations other similar triggers are developed to detect the sweet spots that exist along the other available indexes. Which, in turn conveys yet more advantage to the creatures that are able to do it, etc.
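Sketching that trigger in Python-- the sweet-spot ranges are invented and the response shape doesn't matter, only that it fires harder inside the band:

    # a trigger is just a band along a sensor axis that activates more heartily
    # inside the favorable range
    def trigger(value, lo, hi):
        """fire strongly inside the sweet spot, weakly outside it."""
        return 1.0 if lo <= value <= hi else 0.1

    temp_drive = trigger(24.0, 20.0, 28.0)   # inside the thermal sweet spot
    salt_drive = trigger(3.0, 8.0, 12.0)     # outside the salinity sweet spot
    print(temp_drive, salt_drive)            # 1.0 0.1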

There is a very primitive intelligence function at work here, btw, though it is so low on the scale it is hard to discern or recognize as intelligence, and hardly meets our usual definition of the term.

At some point, we'll conjecture, the organism develops enough triggers along enough axes to detect when it is in the presence of a potential mate (using cues from the temperature, pressure, light, salinity, etc). And that confers an advantage, and those that can are better able to thrive, and so on.

Thus far our creatures are all completely reactive, and yet with their primitive sensors and reactive abilities, they have what they need to thrive in their environment.

Stepping away from our happy little friends now--

How does our identification mechanism look thus far? We can think of it as an N-dimensional space where each coordinate is a position on one of the sensed indexes.

Whenever something comes into contact with the sensor array it is automatically mapped, via the indexes, into that N-dimensional space. Each item encountered has a unique signature (we'll stipulate for the discussion) composed of its particular combination of values along those various axes.

Using *only* this simple mechanism, note that we already have the basis of a pretty sophisticated "content-addressable storage system", except without the "storage" part. Every item that presents itself at the sensor array self-addresses itself along a unique set of values (vectors if you prefer) of the various axes. All that's left to provide is some sort of "memory" and we'll have the whole thing. And, just to make the point, it's no small thing either-- the whole identify and remember system is a big deal and is one of the fundamental building blocks of more sophisticated intelligent systems.

Another important point to make is that there is quite a lot of ability even before we have a "rememberer" part-- just the ability to normalize along a standard reference axis and create triggers that span axes is another big deal too. It permits a whole slew of reflexive behaviors that can confer all sorts of advantage on a system without even requiring more advanced forms of intelligence.
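And here's the whole identify-and-remember scheme as a Python toy-- the sensor axes are invented examples, and a plain dict stands in for whatever the real "rememberer" turns out to be:

    # each sensed item self-addresses by its signature: one value per sensor axis
    sensors = ("temperature", "salinity", "light", "pressure")

    def signature(item):
        """an item's address: its coordinates along every sensor axis."""
        return tuple(item[axis] for axis in sensors)

    memory = {}                              # the bolted-on "storage" part

    def encounter(item):
        key = signature(item)                # identification: automatic mapping
        seen = key in memory                 # content-addressed recall
        memory.setdefault(key, item)         # remembering
        return seen

    warm_brine = {"temperature": 30, "salinity": 9, "light": 2, "pressure": 5}
    print(encounter(warm_brine))             # False-- first encounter
    print(encounter(dict(warm_brine)))       # True-- recognized by signature alone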


(Taking a break here for a bit-- wanna read through what you all have been posting)
Okay VZN: Emergence, (a) something new, (b) something overlooked. Is it possible to discern the difference?

Yes.

Maybe not easy, but possible.

And it leads to the question-- what is *true* emergence?

The answer of course would be (a).

Thanks for playing :-)


True emergence is something that *is not* in the design. Anywhere.

If you can look through the system and figure out how it works from a deeper, more detailed study of the design, then it is not truly an emergent property. It is simply sloppy engineering.

(Again, not really ripping on engineers)

A true emergent property is not in the design of the system-- and a property that merely wasn't realized, anticipated or understood by the engineers at the time of the design doesn't qualify.

(And CA, as far as it goes, from that perspective I might agree that emergent properties are rare items, if they even exist.)

And if they do exist, they are likely to exist in the realm of "randomness"-- things that are, or can be, based on or factored off a random event.

But then that would lead to the question: "Can anything really be considered 'random'?" Which would require deciding between the idea of "true" randomness versus "random enough to suit our purposes" (whatever they are).
CA: "hat is why I am literally shocked that Mr. E would bring in washing machines, heaters, clocks, simple mechanical automata, etc., as examples of state machines that might be described by children or primitives as 'intelligent.'"

Let me stop you right there. I made no claim that a washing machine is "intelligent", nor implied it, nor even remotely considered that as I was typing. It *is* however an example of a state machine, which is what you were asking about and I was describing.
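For the record, here's all a state machine is-- a bare-bones Python sketch with made-up states and events, mechanism and nothing more:

    # states, events, transitions-- the washing machine boiled down
    transitions = {
        ("idle",  "start"): "fill",
        ("fill",  "full"):  "wash",
        ("wash",  "timer"): "rinse",
        ("rinse", "timer"): "spin",
        ("spin",  "timer"): "idle",
    }

    def step(state, event):
        """advance the machine; unknown events leave the state unchanged."""
        return transitions.get((state, event), state)

    state = "idle"
    for event in ("start", "full", "timer", "timer", "timer"):
        state = step(state, event)
        print(event, "->", state)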
CA: "First, we have no formal theory of intelligence because intelligence is a too broad, slippery, ambiguous, all-encompassing kind of quality to be amenable to the kinds of formal theories that we have in logic/mathematics/particle/quantum physics. As we climb the complexity ladder, even in physics, formal theories start disappearing...so there is no theory of turbulence, and other complex non-linear phenomena, and more so for the zillions of chemical compounds and their properties, and moreso for biology, and then psychology, the cognitive sciences, and sociology, economics, the social sciences, because all of these are astronomically too complex."


CA, gently-- it was not so long ago we had no theory of flight, or sound, or relativity, or gravity, or reproduction, or electricity, or nearly any other phenomenon you can name. In fact, it is only recently-- at least in the last century, but I'm thinking more recently than that-- that there has even been any really good thinking done about "time" itself. People have bandied it about, discussed its properties back and forth, but until very very recently, there have been precious few theories advanced about what time is, how it works, why its there, what it does, etc. By which I don't mean there have never been attempts, but people just haven't had the perspective and the requisite predicates to build upon to really give it more than a passing whack.
CA, I correct myself a little, I did say this:

"Intelligence is simply a word used to describe the ability of a system (a "machine") to respond to its inputs-- and do it correctly."

Which might be indirectly construed to mean that a washing machine is intelligent, insofar as it's a state machine, and that would meet the technical definition I did advance.

But I wasn't thinking of that when I used the washing machine as an example.

So, let's agree that perhaps there may need to be some refinement to the concept-- but I *absolutely and unequivocally* reject the notion that intelligence *must* be bound to life, evolution, survival, and all that. It can be canned, even if nobody's done it yet.

And I defy you-- *really think about this now* -- to state that just because you understand a thing, how it works, why it works, its data, its processing mechanisms, etc-- it can't be intelligent.

(I'm not really being that defiant, just sorta pseudo-defiant. It wouldn't sound quite so ominous if I just said "You know, uh, just think about it, okay?" :-)

CA: "washing machines, heaters, clocks, simple mechanical automata, etc., as examples of state machines that might be described by children or primitives as 'intelligent.'"

And how would someone with that mindset act if they knew how we worked, how we were built, how our brains processed information, how "consciousness" is manifested? How we make decisions? Etc.

I continue to hold my stance that if it acts intelligent, then it is intelligent.

However, I do also understand your resistance and reluctance. But I think it's the same kind of reluctance as people have to the notion that the bullet can come out of the person and travel back to the gun. Or that particles can spontaneously wink into existence and be entangled with each other, even at a distance. That really makes you call into question your concepts of time and space, doesn't it?
On the contrary, I don't care a whit if you *do* examine the inner workings. That's been a fundamental aspect of my argument all along.

Let us take the one example of "intelligence" that you and I seem to agree on-- us.

If you are able to peel back the lid and understand how we work, and it turns out that in the end it *can* be understood, documented and even *replicated*, will you *STILL* consider us to be "intelligent" ?
CA, I'll also take one step closer to you and agree with something you're saying...

You are correct, there is a difference between a Lawn mower and a human being in terms of intelligence. There is a difference between your mechanical wood-cutter on rails and a human being. But I do not agree that there *must* be a difference between a machine and a human being, with respect to intelligence.

So clearly there is some humble essence-- the threshold of intelligence-- that we are overlooking in our discussion. Perhaps that's what we ought to be working to articulate.
hi CA, your arguments are indeed very similar to those historical arguments that the world was flat, that nothing heavier than air could ever fly, that atoms do not exist, that humans could not be descended from apes, that computers would be of no use in the home, etcetera. it would be very interesting to make a long list like this, maybe I might do it sometime. "history of scientific and technological revolutions and paradigm shifts". and maybe you might make a list of the opposite, where people tried for a very long time to conquer something that couldnt be done, and it was proven scientifically that it truly couldnt be done. two cases I can think of would be perpetual motion machines (disproven by thermodynamics), or squaring the circle (disproven by mathematics). but, I think my list would be way longer than your list, dont you think?
"But 'intelligence', because we have not produuced it, is vexatious. I bet you that if e could create AIs as easily as generating 'lectricity, all the pesky scientific/technical/ philosophical questions about AI would be of little interest....the society would say: "We got various AIs for different purposes, like we have power stations, and portable generators and batteries, etc., so...there's no problem. You philosophers can talk about it...for the rest of us it's a non-issue.""

I totally agree with you on this statement, and you seem to be indirectly classifying yourself as a philosopher based on it. but, the thought is incongruous and shows a crack, a doubt, in your Worldview, which so far has held steadfastly that such a scenario is impossible due to the nature of reality or consciousness or intelligence or [x]. therefore, why do you contemplate it as a hypothetical scenario?
you mention the seemingly abstract nature of physics in the form of electrons and/or schroedingers equation for wavelike reality etcetera. yes it does seem phenomenal that we can *reliably* and *consistently* harness aspects of physics that are, arguably, still not fully understood or pinned down in highly abstract physics theories. but, as you note, it does seem to be a pattern in human engineering and R&D that various sophisticated aspects/properties of reality can be harnessed before they are fully or largely understood. for example, the steam engine. radio. etcetera. and yes, I think there is a good chance this will happen for Artificial Intelligence also.
however, there is a minority contingent of physicists that tended to agree with you, led by Einstein and his EPR paper, which was very abstract and philosophical at heart. but Einstein actually developed his physical theories with thought experiments that very much verged on philosophical inquiries-- but in a systematic, methodical way. even with an amazing quantum mechanical theory, developed now about 8 decades ago, this minority contingent is still, at times doggedly, attempting to nail down or expose its weak points, its possible incompleteness. but, the theory is very strong and resists uniformly.
"I still find it hard to believe that what you claim is happening there is really happening....that rat brain neurons are being used to understand navigation in a realistic way, and not just as an artifact of an arbitrarily created input/output configuration. It may just be that this rat brain guy has found a way to program neurons in a very general way and he has constructed an input-output program in a very artificial way for the rat brain so that it only seems that the rat navigates in 3D. "

I worked in a neurobiology laboratory 4 years as an undergraduate 89-93. this was a holy grail experiment at the time and the researchers had made some attempts that had failed. basically the idea was to create an interface between real neurons and silicon systems, ideally a larger clump of neurons. this experiment has finally been realized in an even larger way with the rat brain robot. I assure you it is real. and as I understand it, it DOESNT MATTER which neurons you hook up the robot to. the robot sonar is hooked up to some neurons in the clump, and the robot drive mechanism is hooked up to other neurons in the clump, and you get the SAME BEHAVIOR from the rat robot after a period of "disorientation" and random driving. clearly, THE NEURONS SELF ORGANIZE. its very similar to the stem cell concept except on a neural level. its almost as if the neurons "figure out" what they are supposed to do, or specialize/differentiate based on their neighbors firing patterns-- very much the way stem cells are undifferentiated and then differentiate in the same way, except based on chemical cues instead of electrical signals.

"The programmer could just have easily constructed an input output table for the rat neurons that simulated investment decisions in the stock market"
precisely correct, but, in fact, the evidence is that it might do something quasi-"intelligent" in that case also. something like an "obstacle avoidance" algorithm in the stock market. dont know exactly what that would be, but it might turn out to be something like "buy low, sell high".... I am not totally joking here. this is phenomenal evidence for an idea of a self-organizing property of a neural system that leads to intelligence on the high level based on localized interactions of neurons on the low level. what exactly is the algorithm? there is some possibility it is similar to algorithms already existing in Machine Learning books.
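for a flavor of what such an algorithm might look like, heres a generic hebbian rule in python-- "cells that fire together wire together"-- a textbook toy, NOT the actual rat-brain mechanism, but it shows correlated activity organizing connections with no central plan:

    # plain hebbian update: strengthen connections between co-active units
    import random

    random.seed(0)
    n = 4
    weights = [[0.0] * n for _ in range(n)]
    rate = 0.1

    for _ in range(200):
        # units 0 and 1 tend to fire together; units 2 and 3 fire independently
        a = random.random() < 0.5
        activity = [a, a, random.random() < 0.5, random.random() < 0.5]
        for i in range(n):
            for j in range(n):
                if i != j and activity[i] and activity[j]:
                    weights[i][j] += rate     # fire together, wire together

    # the correlated pair ends up with roughly twice the connection strength
    print(round(weights[0][1], 1), round(weights[0][2], 1))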
"If it acts intelligent then it is intelligent."


Touching only briefly on the Wood-cutter and washing machine... both of those are "artificial constructs" created by human beings for their own purposes. Both contain some "intelligence", but only "borrowed" intelligence-- wouldn't you say?-- endowed upon them by their creators. They do what they do, what they're supposed to do-- unless of course they're broken-- and don't complain, don't yearn for more, and don't have an opinion on the subject-- and moreover, even if they did, they're only "slave apparatus" and their opinion would not be recognized.

People of course, are an excellent example of intelligence because we, um.. er, you know, *act* intelligent. Therefore we must, uh-- you know, *be* intelligent. We've set our own standard for intelligence and naturally, we've selected it to include us. How noble we are.

How grand we are.

How intelligent we are.

We don't need no stinkin truth tables.

We don't need no stinkin conditioning

We don't need no stinkin training

We don't need no stinking mothers and fathers

We don't need no sisters and brothers

We don't need no peers or society

Because we spring from the womb fully-formed and with all our faculties fully-intact. We are....

uh... wait... roll that back...

We spring forth with our faculties fully-intact.

Yeah?

Since when?

Who nursed you and fed you and changed all your poop-ed diapers and taught you nursery rhymes and sang you songs and told you stories and showed you which plants were safe to eat and where to run to avoid the sabre-tooth tiger?

And how did you accumulate all that wonderful knowledge? That book learnin? That readin, writin, and 'rithmatic? Where did you learn "up" is Up? And water is "wet"? And grass is "green"?

How did you come to be edumacated?

How "intelligent" would you BE, really-- come on now, fess up-- if you weren't living on borrowed thoughts and algorithms? Someone ELSE'S intelligence that was bestowed upon you so that YOU could... um... you know, APPEAR to be intelligent yourself?

Not that you don't have the propensity for it. You've got some good hardware parts there. You've got everything you need to sling around some good ideas while walking and whistling Dixie...

But surely you're not going to sit there and claim that everything you know, everything you think about, everything you ponder, all your ideas and wisdom were there when you started-- courtesy of.... whom really? The creator? Your parents? Mother nature? -- Doesn't really matter-- it was all there and it is all yours and there was nothing borrowed from anybody else... right?

Right?

... right?

Hey-- come back-- where are you going?


What portion of your intelligence would you estimate is someone else's and not really your own?

Would you be sabre-tooth tiger poop by now if it wasn't for borrowed intelligence?
...by the way, I feel it only sporting to point out that the next move is 'Check' and 'Mate'...
CA: The question for you...

"What portion of your intelligence would you estimate is someone else's and not really your own?"
CA, I am enjoying reading your response tonight (I enjoyed the others too, but tonight's is especially nice).

I wanted to quickly respond to this though before we get too far afield:

CA: "do you really think that increasing the computational power...to mega peta flops...even with quantum computing...maybe in the end a brute force approach will simulate bus driving, dry cleaning, cooking, etc., well enough for robos to do it...but it can not make up the deficit in inherent 'logical operational' poverty well enough for an elegant, sophisticated, sweet, and compact enough solution to qualify as real AI."


I will say "yes" before I say "no" only in this one aspect: I do believe that it *might* be possible to brute-force your way into intelligence with a sufficiently speedy and resource-laden computer. Especially if it does not have to be based around a typical serial von-neuman style architecture.

But, that said, I do not think it is very likely and I personally would not take more steps down that road. I think there are other avenues to explore that would have a much better chance of bearing fruit.

And I don't believe a "machine" must be digital, or "mechanical" (in the classic sense-- i.e. metal or something), nor do I believe a machine *must* have a purpose or be "bound" to its creator.

Just to reveal some of my background, I have many years of electronics and computer programming, and robotics / automation has been an avocation of mine ever since I was a little 'e' -- though it has not been my only avocation... too little time, too many interesting things in the universe to explore.... sigh.
Btw, it was Jacquard who invented multi-threaded computing...

:-)
CA, its interesting you mention "Society of Minds"-- was that a subtle ref to Minsky? ;-) I hadn't thought of it quite that succinctly before, even though that's what I wrote-- it was your phrasing that capped it for me-- thank you.
Okay, so we have another point we can agree on-- probably anybody reading along would too--

CA: "hammers and combs and explosives, are not 'intelligent' just because they are well-fashioned tools"

I agree-- even though it seems obvious, it's worth noting for the record that I am not, and neither are you-- and probably not VZN either-- attempting to make a case for simple tools being intelligent... (although a friend of mine and I did many years ago come up with a whole rationale for 'inanimate life' ... remind me to tell you about it some day... it requires much consumption of alcohol in preparation ;-)


But, I would not be so quick to rule out a more complicated tool, even if the intelligence was-- as we seem to have agreed to term it: 'borrowed'. Going back to Minsky's book-- marvelous tie-in here-- a sufficiently advanced tool could be considered an 'Agent', working on our behalf-- at our direction. Its actions limited to performing whatever task we direct (within its abilities of course) and doing it as autonomously as possible.

There is a parallel in the human realm-- agents of any kind-- real estate agents, insurance agents, lawyers (of the good kind :-) researchers, interns, grad students, key grips, etc. Anybody who is "bound" (albeit however temporarily) to another's directives, will limit themselves to achieving those directives and doing so as autonomously as possible.

I am mulling over your term "in and of itself" -- not ready to respond on that quite yet-- but thinking about it.

One of the places that you and I seem to have difficulty meeting is where the concept of lawn-mower (wood-cutter) ends and autonomous mobile system begins.

I can easily conceive of a situation where I build a sufficiently complex machine, provide it with the means to forage for fuel-- probably gas or diesel or propane or some readily obtainable real-world substance. Solar just isn't ready for that type of power density in a small package yet-- might never be. Provide it with enough sensors to figure out up from down, here from there, read a map, converse with a human, etc. Give it a little sensor that says "FEED ME SEYMOUR!" (I'm outta gas). Make its prime directive be to seek out gas stations, entertain the humans there until they relent and fill up the tank, and then go off and do anything it likes until it's hungry again.... fly, be free-- go be you.

Sure it doesn't know how to reproduce. At least not via its own mechanisms-- perhaps it could have a downloadable copy of its plans, specs, software and data-- and if it could talk a human into building a copy, it could boot it up and share its programming and data, including any enhancements it's made, etc.

On one level that would be a type of "living", "reproduction", and a "self-survival" mechanism. And we would have an "artificial" evolution-- which isn't really a good term for it, but good enough for now. Maybe we would need to build in some "avoidance" directives to help it stay away from nosy humans or humans with a nasty look on their faces who just want to take it apart to see how it ticks. Those I guess would be a form of "artificial" instincts-- again with that 'artificial' word, but it'll do for now.
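Just for grins, the whole control loop as a Python toy-- the Robot stub and its behaviors are pure placeholders, not any real robot API:

    # the forager's prime directive: find fuel when hungry, otherwise be free
    class Robot:
        def __init__(self):
            self.fuel = 0.5

        def tick(self):
            if self.fuel < 0.15:             # "FEED ME SEYMOUR!"
                print("seeking gas station, charming the humans...")
                self.fuel = 1.0              # they relent and fill the tank
            else:
                print("off doing whatever it likes")
                self.fuel -= 0.2             # freedom burns fuel

    bot = Robot()
    for _ in range(6):
        bot.tick()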


My guess is that you believe intelligence must be driven by a need and refined over time.

I don't agree that's true, although I do agree that the various examples of intelligence that we typically think of, have experienced in our regular lives, and probably (possibly) considered as an extension of our interest in robotics, machine intelligence, etc., have been those of natural systems. So I can even believe / understand how a bias towards believing that those are the *only* types of intelligences possible can emerge.

It is a bias until another intelligence is encountered.

It is snobbery thereafter.


You seem well-rounded-- you probably are already aware that there are great problems with "standardized" intelligence tests. Theoretically such tests are constructed in a manner to discern essential abilities. But in reality tests that are truly "universal" have proven very elusive. You take a test that's been designed to give to a regular kid from the U.S.A. of middle-to-upper class social background, with regular 'normalized' schooling and education, and etc., and give it to someone else, perhaps a starving kid in Ethiopia and there will be a vast difference in the scores.

Of course, you'll say-- as others have said-- it's differences in language, culture, background, education, etc. that make the difference.

So it's not *REALLY* an intelligence test. Only a test of how well you've been indoctrinated and brainwashed and have absorbed the mainstream educational propaganda. There may be *some* small merit to it under the hood, because you need to have a brain capacity "this tall" in order to even register on the scale-- since that's the bare minimum required to suck up all that pablum and bullshit. But the test itself is highly skewed toward how well you've assimilated into the mainstream ("white", "middle-to-upper class", normalized education, society).

There are other tests which claim to be more egalitarian and rely less on culture and attitudes and more on actual aptitude. And I don't doubt that they are more discerning than the more usual kind, but they are still coding for *HUMAN* intelligence.

Suppose you received a broadcast from another world, galaxies away. You have no knowledge of that world, other than the supposition that physics is the same everywhere, so they probably at least have that in common with us-- assuming they've advanced to the level of having a significant conceptualization of physics.

What questions could you ask to determine their intelligence and aptitude-- and just to tweak your nose a bit-- how would you know you're not talking to a machine?
CA, one of the points you make a lot is the "river and course" analogy-- I agree completely. Whatever "intelligence" is-- "mind"-- it is composed of both the hardware / software -AND- the data.
CA: "of trying to create innate goals for the robot that are something 'more' than mere descriptions of states...end states, intermediate states...said states being coded in terms of the strings of the elements of the formal system that the robo/machine is just a physical expression of...no wonder this seems to be a dead end."

I do not know if it is possible to create a system that is intelligent through pure symbolic processing alone-- i.e., hoisting a personal computer into "intelligence". Of course I don't mean an actual PC-- if nothing else it would be thwarted by its Windows OS... ;-) If anything of the type is possible, it would likely be purpose-built and massively parallel-- perhaps to the point of using all processors as both processing elements *and* storage. Why not, after all? CPUs are ultra cheap, and if you've got zillions of them, they don't need to be highly complex. And massive parallelism has a *lot* of good things going for it. There would probably also be some specialization-- some areas might revolve around 'sound' sensors, some around 'sight' sensors, some around 'touch' sensors, etc.-- so that, while the processors are technically generic, they are in fact typically used to support the area / region to which they belong. Some would probably be allocated to supervisory functions, some to anachronism detection, some to shoving everything into a 'story line', others to storing and retrieving things, and still others to manipulation of external things, etc.-- lots of generic processors, connected together either logically or lightly in hardware into processing pools, reaching out into other pools for communications, etc.
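If it helps to see the shape of it, here's a minimal Python sketch of those processing pools-- the pool names, the routing scheme, and the 'story' pool are all invented for illustration only:

class Pool:
    def __init__(self, name):
        self.name, self.memory = name, []   # processors double as storage

    def process(self, signal, pools):
        self.memory.append(signal)          # every pool also stores
        if self.name == "sight" and "loud" in signal:
            # one pool reaching out into another pool for correlation
            pools["sound"].process("correlate: " + signal, pools)
        return f"{self.name} handled {signal!r}"

pools = {n: Pool(n) for n in ("sight", "sound", "touch", "story")}

def route(sense, signal):
    # a supervisory function shoving everything toward the 'story' pool
    result = pools[sense].process(signal, pools)
    pools["story"].process(result, pools)

route("sound", "loud bang")
route("sight", "loud flash nearby")
print(pools["story"].memory)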

It could get really complicated. Even darned-near impossible to follow or understand. And certainly, to drive home your point-- which is really one that we've both brought in-- it would interact with the real world, experience novel inputs, and rapidly (as in near-immediately) differentiate itself from its clone as it did so. It would be a unique individual, if for no other reason than those separate (real-world) sensory / data interactions.
CA: "that this 'world' is the creation of the neural processes in our brains. So that, if these processes could be deciphered, and mimicked in another medium, we would experience a 'virtual reality.'"

Why do you suppose it is you dream?

Which is not the question I really mean to ask, which is:

Why do you suppose we dream in STORIES?
CA: "So the brain's intelligence producing function is not reproducible in other media...to get the output of the meat brain...maybe you need parts of the meat brain..and more parts...pretty soon you're back to Natural Intelligence machines rather than more fully AI machines...."


Checkmate.


You've just come full circle.
CA: "Did Marvin Minsky first propose the 'Society of Minds?'"

He wrote a book by almost that title-- "The Society of Mind"-- in which he introduced a lot of interesting concepts, among them the idea of autonomous agents acting on our behalf.

IMO, the book was kind of a sleeper, but don't tell Marvin, okay?
CA: "That is the Great Divide...between signal and symbol. Now, for an old fashioned Dualist, this just reinforces the divide between the Material and the Mental. Signals are physical phenomena...they convey the 'meaning' of Ideas once they have become symbols...and Ideas are members of the Mental realm."

To this point I see no real disconnect between us.
CA: "So we humans inhabit the two worlds of the Signal and the Symbol, or the Material and the Mental, simultaneously."

This might be where we begin to separate. In order to have the 'symbol' and the 'mental', you must (a) have the hardware capacity to hoist and serve them, and then (b) have a mechanism by which to "load" them (i.e. train / teach / bestow them with wisdom).

At what point does rote instinct give way to cunning and intelligence? Which does the cobra possess and which the mouse? Or the sabre-tooth tiger? Or a newborn (human) baby?
CA: "Epiphenomenon...a byproduct of physical brain functioning with no causal power of its own. So when we think...I will lift my arm in 2 seconds...and in 2 seconds I/You lift an arm...physicalists will say that a sequence of neuron firings preceded our thought...and a subsequent set of motor neurons fired to move the arm."

But in your particular example, does your thinking the thought actually have *any actual* relationship to the subsequent movement? Or is there in reality some deeper, undiscerned-- essentially invisible-- element that does the actual moving because you somehow "willed" it, as opposed to "thinking the thought"-- a subtle but distinct difference in modality?

If you put your hand on a hot stove, do you have to wait for your brain to think "OH MY FUCKING GOD THAT'S HOT!!! Sheeeeit, I better move my hand !!!!" Or does something deep and elemental just *move* it and tell you about it later?

What happens if you think to stop your heart?

Then there may be other types of activation-- for example a very bodacious and well-proportioned babe of the most persuadable persuasion happens by and drops her pencil in front of you... just the act of watching her bend over to retrieve it is enough to give you a rock-hard woody sufficient to pound nails.

And then again later, alone in your room... you think about her again and *schwiiing* up it comes again, rock hard and looking for the party....

And then your thoughts turn to your grandmother-- Blech, how disgusting! And everything wilts and you go back to doing whatever you were doing before it all started.

So you have an autonomic reaction based on external stimuli. Then later, an autonomic reaction based on recalled (or even imagined) stimuli. And then an opposite autonomic reaction based on other imagined (we hope! :-) stimuli.


Or what if you thought about your intestines no longer processing food? Is there some deeper internal mechanism that would make that happen? I suppose you could stop eating, but that's not really the same thing.
I wasn't supposed to be serious about my sister's dresses???

uh... be right back... gotta take care of somethin...
CA: "physicalists would say that 'mental states' are completely irrelevant to our functioning as organisms...the aggregate of neurons has already done the work for us, of deciding how to walk, dress, what to say"

Okay, this is where things get challenging and fun. To borrow from computer terminology-- but NOT to be bound to its whole and complete meaning; this is one of those areas where it's hard to find unencumbered terms...

There is a hardware aspect, an "OS" aspect, an "APP" aspect, and 'data' & sensory inputs. Divorce these terms from your desktop PC, I'm speaking in a more abstract manner.

The 'hardware', as it were, sets up the "physicality" of the system-- it could technically be simulated, probably, but that forces us into a more complicated discussion. There are essential aspects of processing that *must* be present in hardware-- I posit-- to support and substantiate certain types of higher-level functioning. Much like in the simpler machine we discussed earlier: if the ability to do negation is NOT built in, it cannot be synthesized. It is a core function. What are all the "core functions" required to support, say, human intelligence? I don't know. But I'd wager a week's pay (okay, a week of VZN's pay... ;-) that there is a list.

Next, an "OS". Whether created in hardware parts, software "algorithms", "firmware", or some biological equivalent, there are likely major functions that are core to the overall "system", developed (by one means or another) and included in it. A "story engine" is one of the concepts I believe is important: a (thing) that forces sensory inputs, whether external or internal, into some sort of "storyline" that the system is then *forced* to accept, no matter how strange, out-of-place, or weird. Hence the manner in which dreams are manifested-- however they occur, for whatever reason, they are always presented as stories and are always "believed" while having them-- until you wake up, of course, and start to correlate them with the "real" world and realize it was just a dream.

Then there are specialized "applications", which may not be completely like a "program" in the classic sense, but are specialized areas that supervise something, sequence something, make executive decisions, discriminate, or perform some other critical function related to one of the organism's "cosmic missions" (for lack of a better way to put it). Not really "thinking" with "consciousness"-- closer to hardware or firmware than to the symbolic.

Then-- for entities so enabled-- there are goals, directives, and algorithms imposed by the entity itself-- out of its "mind", not directly out of its hardware components, though they must be enacted by the hardware components. These are abstract in nature, or seemingly so, and most likely form the basis of the "ME" (Godhead) layer. There are probably underlayers, required but undiscerned by the individual, that support the "ME" layer.

And the data / sensory inputs are present to each of these aspects, including perhaps internal or synthetic inputs fed back into the system in various manners to control, influence, suppress, or otherwise stimulate different types of behaviors, thoughts, actions or propensities to action.

In natural systems the "programming", "intelligence", "wisdom", etc. is essentially non-physical (we'll stay out of "firmware" concepts here) and is evolved-- i.e. memetic evolution, as opposed to physical evolution-- over successive generations. Good ideas live on, bad ideas are suppressed / weeded out over generations. Ideas that don't grossly affect the system's survival or propagation don't matter. So crackpots are okay as long as they can somehow manage to reproduce.
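And circling back to the "story engine" for a second-- here's a whimsical little Python sketch, entirely hypothetical, just to make the concept concrete: whatever fragments arrive, they get stitched into one storyline and presented as accepted fact, anachronisms and all:

import random

CONNECTIVES = ["and then", "which somehow led to", "meanwhile",
               "so naturally", "and nobody questioned why"]

def story_engine(fragments):
    """Stitch arbitrary inputs into a single accepted narrative."""
    story = fragments[0]
    for frag in fragments[1:]:
        story += f" {random.choice(CONNECTIVES)} {frag}"
    return story.capitalize() + "."   # presented as fact, believed as-is

dream = story_engine(["you are at work", "your teeth fall out",
                      "your third-grade teacher is a penguin",
                      "you can suddenly fly"])
print(dream)   # strange, out-of-place, weird-- and accepted anyway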
VZN: "at some point, one has to recognize a sort of basic paradox or irony. this type of debate on artificial intelligence converges toward, or amounts to almost a religious debate -- among atheists.... or at least humanists..."


Yes, I have seen conversations such as this degrade into "my theory can beat up your theory" contests. It's sad when it occurs, because it drowns out what is often an otherwise enjoyable discussion.

Fortunately, this one does not yet seem (to me at least) to have reached that point. I am very much impressed by the good-naturedness and reasonableness of you and CA and the others who have posted-- bandying ideas back and forth without getting stuck on something and letting it become a religious debate.

But I don't think there's any paradox here, and I don't think the positions espoused are mutually exclusive-- they are more akin to the parable of the six blind men describing an elephant. Each encounters a different part and calls out what he thinks an elephant is; all are right, but only from their limited views, and only by comparing their observations are they able to piece together the true(st) vision of the elephant.

I have *very much* been enjoying this discussion and haven't had this much fun in I can't tell you how long. Just goes to show you what happens when you get stuck in a rut in a no-thanks grind-ya-up-spit-ya-out IT department. And I thank you and CA very much for the pleasure.
VZN: "open salon-- also an emergent system. who can predict what will happen? at heart its just software. but it goes in surprising, unexpected directions."

I assume this was humor?

:-)
VZN: "I have analysis of deep blue moves by chess experts, and some are literally shocked, flabbergasted by certain moves that the machine makes. they see the move, they would have never have picked it, but after the machine picks it, they say it is a recognizably "brilliant" move. what kind of brilliance are we talking about? I would say, the brilliance of emergence."


No-- you see a bunch of old farts sitting around wishing they had the computational power of Deep Blue. That's one of the key differences between "human" intelligence and "Deep Blue's" intelligence. Deep Blue is able to look ahead many more moves than even a grand chess master. It also has thousands of heuristics to assist it in analyzing its position on the board and selecting a move.

Humans, on the other hand, can look ahead only a few moves; grand chess masters a few more. But they have an ability to develop strategies, to see the board in terms of its overall development toward an ultimate goal-- to watch as the pieces maneuver for their final positioning, develop their interlocking positions of strength, etc. Humans are able to understand elements such as "sacrifice" better than the machine, which can only really come at it through some quantitative numeric appeal. Humans-- at least the ones who are good at it-- play the game as a battle: advancing onto the field, maneuvering to take and hold "key" positions and then use them as pivots to "assault the keep" (king)-- or whatever your analogy.

But humans also have a weakness-- they are highly vulnerable to a pure "by the numbers" attack, meaning an opponent able to far out-compute the decision tree of potential moves for any given position. They can also put too much emphasis on a particular strategy or tactic and fail to notice, or react in time, when it becomes no longer effective or even downright disadvantageous. They can become "sentimentally attached" to pieces or positions and miss an obvious threat-- or advantage, for that matter-- as a result. They can become too attached to a particular playing style, opening, or sequence of advancement, and thus not get as much practice playing or warding off unusual openings and advances. And they can make boneheaded moves and lose the initiative.
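For anyone following along who hasn't seen game-tree search up close, here's a bare-bones Python sketch of the "look-ahead plus heuristics" mechanism-- on a toy game (single-pile Nim: take 1-3 stones, taking the last stone wins) rather than chess. Deep Blue's actual evaluation machinery was vastly richer; this only shows the shape of it:

def heuristic(pile):
    """Positional score for the player about to move.
    In this Nim variant, pile % 4 == 0 is a losing position for the mover."""
    return -1.0 if pile % 4 == 0 else 1.0

def negamax(pile, depth):
    """Score the position for the player to move, looking `depth` plies ahead."""
    if pile == 0:
        return -1.0                 # opponent took the last stone: we lost
    if depth == 0:
        return heuristic(pile)      # out of look-ahead: fall back on heuristics
    return max(-negamax(pile - take, depth - 1)
               for take in (1, 2, 3) if take <= pile)

def best_move(pile, depth=6):
    return max((take for take in (1, 2, 3) if take <= pile),
               key=lambda take: -negamax(pile - take, depth - 1))

print(best_move(10))   # 2 -- leaving a pile of 8, a multiple of 4

The machine's "brilliance" is all in there: brute look-ahead at the top, canned heuristics at the leaves.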
"If the machine can seemingly discourse intelligibly about beaches, without having been to a beach, or seen a beach in pictures or movies, or having learned about beaches through dialogue with others..."
CA is concerned he might get fooled by a machine that had never been to a beach but convinced him it knew about beaches-- possibly only from secondhand information via other intelligent entities (eg humans). but that sounds like a highly intelligent machine to me. shrewd, clever even. possibly even beyond intelligent. crafty. I would rate it as a "pass".
Well, even though I keep pushing the "acts intelligent / is intelligent" perspective, I understand CA's position very well. Intelligence, from a practical standpoint, seems to be an "I'll know it when I see it" kind of thing-- meaning its recognition, by that criterion, is very subjective, and moreover can only be judged from the perspective of another intelligence. A tautology if there ever was one.
it is true that humans and machines tend to play chess differently, and there are various strategies that tend to favor the former or the latter. however, human superiority in chess was a huge milestone that fell over a decade ago, and machines are now basically unrivalled in the area.... its just a precursor of whats to come... winning the whole game... as mr e says, "checkmate"....
here is an objective way that intelligence may be rated in the not-so-distant future. we will find some algorithm that works, and then measure the performance of this algorithm. for math, the benchmark was FLOPs, or floating point operations per second. in military research they have a benchmark that measures LIPS I believe.. "logical inferences per second". so it may end up being something like "neuron summations per second" or the like. I know it sounds unlikely to CA, but that is a pattern that has many prior cases/examples.
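heres a toy benchmark in that same FLOPs/LIPS spirit, in python-- note "neuron summations per second" is a made-up unit of mine, and this measures raw hardware speed only, nothing deeper:

import time
import numpy as np

rng = np.random.default_rng(0)
weights = rng.standard_normal((1000, 1000))   # 1000 neurons, 1000 inputs each
inputs = rng.standard_normal(1000)

t0, reps = time.perf_counter(), 200
for _ in range(reps):
    activations = (weights @ inputs) > 0.0    # one "summation" per neuron
elapsed = time.perf_counter() - t0

nsps = reps * weights.shape[0] / elapsed
print(f"{nsps:,.0f} neuron summations per second")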
I will concede to CA that the lack of an objective measure of intelligence is annoying, and I am not really totally approving of Turings famous idea. it was an interesting thought experiment (on the level of some of Einsteins) but unsatisfactory in the end. and arguably, a kind of elaborate intellectual joke at heart. I almost wonder if turing was totally serious when he proposed it. he seems to have been influenced by behaviorism in the proposal. (in fact I would like to see an analysis of that cross pollination. its too much to be a coincidence. it reminds me of how the philosophy of positivism seems to have highly influenced QM philosophy. for more on that, see the great book "the meaning of QM" by Baggott)
VZN: "an objective way that intelligence may be rated in the not-so-distant future. we will find some algorithm that works, and then, measure the performance of this algorithm. for math, the case was FLOPs or floating point operations per second. in military research, they have a benchmark that measures LIPS I believe.. "logical inferences per second". so anyway, it may end up to be something like "neuron summations per second" or something like that. I know it sounds unlikely to CA, but that is a pattern that has many prior cases/examples."

While something may (and likely will) eventually be worked out to test for intelligence, the broad nature of the methodologies you've outlined above only really test the speed of the hardware and not its processing ability. It doesn't test for its reasoning ability, ability to deduce or infer facts not in direct evidence, nor does it attempt to measure any sort of "creativity" factor, which nearly anybody would agree, at least anecdotally, is at the core of genius.
CA: "I remember one such game where it pushed a pawn forward because it was able to calculate that all possible subsequent tree-branch countermoves of its opponent would result in a lower 'value' index than what Deep Blue would get from an advanced pawn position...as Mr. E says, in this case it was 'pure computational power' that allowed DB to play this way because no human ever could. so I come back to my fundamental point...Deep Blue is NOT playing chess...the heuristics, so-called pattern recognition that DB is supposed to be endowed with...it is not 'pattern recognition' as we know it...its 'learning' is not learning as we know it"


Beautiful opening. Good point to start with. Notice how you're saying "as we know it". You are at the very, very least conceding the possibility that it may be intelligent. And my point all along, without explicitly stating as such, has been-- "as we know it". The whole point of the "alien" intelligence is that it is *not* "as we know it". We don't understand it. We don't "grok" it intrinsically. We don't have that much in common with it. It's hard to believe it's even intelligent at all, precisely because we have some concept of how it might work, how it might formulate a response, etc. That more closely fits our more general concept of "machine"-- in the 'tool' sense, and not in the 'being' sense. We, humans, have a hard time separating the notion of intelligence from 'life' and 'being'-- an 'entity', with whom we could conceivably converse.

But a large bit of human intelligence is based on rote memorization, heuristics, borrowed algorithms, and carefully administered tutelage of this information to successive generations.

To make a sharper point-- pretty much *all* of human mathematics, except perhaps quantization of simple numbers by counting on fingers, is based on heuristics. We don't spring forth from the loins with the knowledge of calculus, trigonometry, linear algebra or even geometry. People may have some innate ability to do simple arithmetic-- adding and subtracting, counting to ten on their fingers-- or simplistic spatial recognition and the ability to understand simple geometric concepts suitable for basic cartography, but that's about it. Everything else is built on top of small-but-brilliant insights into mathematics, algorithms, and heuristics designed to solve various problems. Humans don't even intrinsically have the concept of *zero*. Most human brains are hardwired to deal with small quantities like 1, 2, 3, or maybe for someone smarter 1 through 5, or for a genius, 1 through 7. Typical ancient human languages had very few mathematical concepts, and their counting / arithmetic systems were very crude. Beyond a formal declaration of 1, 2 or 3, most primitive languages bundled everything larger into a word or two such as "more" or "many".

Classically most humans have not known how to read or write or do much in the way of arithmetic. They have not been educated in the classics, they have not been exposed to great works of art, music or literature, they have not been schooled in logical concepts, abstract thought, or tutored in principled reason, moralities, or philosophy.

Classically most humans have toiled in the fields, been subject to some overlord's domination, grown up malnourished, impoverished, and down-trodden, and what training or education they did receive-- if any-- was toward the furtherance of a skill or trade, or if better off, apprenticed to a skilled artisan of some sort to learn an advanced skill.

Typically great insights were reserved for the gentried elite and not the work-a-day peasant in the field.

And yet nearly every single one of those humans had brains-- the "hardware"-- that possessed the latent ability for great intelligence. Diseases and defects aside, I think it would be fair to say that our society's dullards are leagues beyond past societies' greatest thinkers-- with perhaps the exception of a few: Da Vinci, Newton, Leibniz, etc.

Well, okay-- perhaps not equal to the greatest, but certainly superior to the average.

And the largest difference? Twofold I think: first, having the knowledge (presumes being schooled in it), and secondly being immersed in the knowledge-- it being an 'everyday' item. I think you (CA) touched on that aspect briefly previously.


BTW -- if you're looking for an interesting book to read, writing this reminded me of "A Canticle for Leibowitz", by Walter M. Miller Jr. If you haven't read it, pick it up-- it's a very enjoyable bit of fiction.
CA: "This is one example of my larger thesis through all of this...that, if a machine is wholly dependent on the processing of symbol strings, that this cannot produce intelligence in the sense that new configurations of symbol strings generated by the machine, as a result of the INTENT of the programmers in programming with an original program and data set of symbol strings (SS) can do ANYTHING more than what is deterministically, mechanically generated from the original machine endowment of symbol strings."

This of course presupposes that the machine has no ability to get more strings? It can't read and understand the newspaper. It can't converse with a human and infer anything from the conversation. It can't feed back its own internal concepts and follow up on self-determined ambiguities. Or strike out to discover more about curious juxtapositions.

Your assertion presumes that a machine cannot be built in such a fashion that its creators wind it up and let it go-- off to be whatever it is.

Why can't you build a machine that's autonomous? Why can't you program it with a "desire" to learn? Or the ability to infer meaning?

It all comes back to the basic fundamental concept of what it means "To Know", doesn't it?

Or else fear that the machine *might* after all be capable of achieving intelligence. Which is worse, a human, or an intelligent machine?

What if you make an intelligent machine and it decides it no longer wishes to be subservient? (Which, after all, believe it or not, was one of the principal aspects of the original thread ;-)
CA (and anyone)-- join with me in an online thought experiment. Let's put aside our preconceptions, as well as we can, and posit what it would take to create an intelligent machine. What would the parts be? How would they work? How would they interrelate, how would they interact? What would its general structure be? Etc. Let's conceptually build an artificial intelligence, just for conversation's sake.

Here, I'll go ahead and get the joke out of the way:

I build a box and magically bless it with intelligence. The end.


Now, what could we *really* do?
CA: "If the machine is able to play intelligently it is because the deterministic computation of SSs has been so contrived that the output of the machine, when translated by humans into chess positions, results in an intelligible and usually successful game. The numerical computational analogue of chess that DB executes does not require intelligence. The discovery and construction of this analogue by engineers is where the entirety of the intelligence displayed came from originally. The super set of SSs created by the engineers has nothing in itself that makes it UNIQUELY a description of a successful chess game..."


You're a really smart guy-- seriously, I mean that. I've enjoyed talking to you-- but you're missing a fundamental aspect of your own observations. What you are describing is an artificial evolution for a likewise artificial niche: "Chess-Playing Wall Slime".

It used to be Bobby Fischer and Garry Kasparov. Now it's Deep Blue.

Did Bobby or Garry spring from the womb as chess-playing maniacs? (Well, maybe they did, I don't know :-) They had to be taught at least the rules of the game: how to set up the pieces, how the pieces move, the goal of the game. They were probably loaded up with some general strategies and tactics, blunderous moves to avoid, etc., and then cut loose to play a gazillion games-- to develop their understanding, exercise their mental powers, formulate openings, attacks and responses, learn how to develop their strategies, how to feint, how to avoid, various "trick" openings, etc. Plus, naturally, over time they would have developed an ability to "see further"-- more moves ahead-- which would have given them a real computational advantage, even if only intuitive, over their opponents.

Deep Blue simply had a larger coaching staff, was a more resilient student able to remember everything the coaches taught it, and had a beefier ability to compute look-ahead moves, recall classic openings and piece developments, etc. Perhaps its internal workings are different from Fischer's and Kasparov's, but in the end, in a fair contest, it won-- repeatedly.

Both Fischer and Kasparov had the same opportunity. They could have studied the books, learned all of the classic openings and piece development forms, learned about the "trick" games, and trained themselves to look-ahead more deeply. But they were... um... human. And human intelligence-- brains-- aren't capable of that rigorous and complete recall and application.

On the other hand, the human brain probably has some tricks that Deep Blue hasn't thought of yet either-- meaning hasn't been programmed for, or "emerged" from its programming yet. Or its hardware is incapable of.


Perhaps, after all, you're not concerned that Deep Blue is intelligent, but that set side-by-side with humans, it appears superior. At least with respect to playing chess.

It is after all, only "Chess Playing Wall Slime", and as such faithfully fulfilling its artificial little niche in a forgotten corner of the universe.

(Cue Rod Serling... funky music... doo doo doo doo)
CA: "The symbol strings are far too ABSTRACT for that to possibly happen."


Oh, I almost agree completely. Setting aside the "could it be done" aspect, I definitely agree its probably not the most advantageous approach. Nor the one that is most likely to bear fruit the quickest.

Notice that I have rarely, if ever, in this whole series of posts, advocated or suggested that a symbolic processing system was the way to go. I've been talking about basic fundamental elements-- though occasionally borrowing examples from electronics / logic when it's been handy. But the crux of my focus has been steadily upon the application of basic elements to build up more complicated components, and the subsequent boot-strapping of a framework suitable for housing "intelligence" (or a "mind" if you prefer).

There are some really interesting experiments ongoing with symbolic processing, however-- the 'Cyc' project, for instance, if you haven't heard of it, is very interesting and intriguing. It aims to create-- arguably has created-- an "intelligent entity" capable of understanding real-world concepts and of inductive reasoning. The project, to date, has been both more successful and less successful than originally hoped. And a lot of really interesting lessons have come about as a result.

Cyc Project (Wikipedia):
http://en.wikipedia.org/wiki/Cyc
Again I think a lot of what you and I (you especially) are pushing back and forth are interpretations of the elephant.
CA: "uh oh the stupid bastard playing me knocked over a pawn, this game will be challenging, a win may mean more oil for me, is this being televised?..., all of which, and a fist full of other rambling musings, I contend, would have to be generated by a mind"


I see, we're strictly discussing "Chess Playing" machines, and not musing on Machine Intelligence in general. Apparently I'm pretty dense, my apologies all around-- it only took me 108 comments to figure that out. :-)


I don't think it matters much how the "intelligence" got in there. And I agree that in one view / mode of thinking, it is a machine, developed by humans / hobbits to perform a specific function, however brilliantly-- which makes it really, really hard to take a step back and agree that it is "intelligent". Somehow it rings hollow. The "intelligence" aspect seems somehow removed from the beastie, and yet however it works, it does work, and it works very well-- AND-- I should point out-- it plays chess MUCH better than any of its creators / instructors could, probably combined. Which, if you consider it, *is* a facet of "being intelligent".

I think you're looking at an example of artificial evolution in action. With the engineers and programmers and all of the subject-matter experts acting the part(s) of mother nature, adverse pressure to 'survive' (and thus advance successful memes), and iterative generations-- at the *very* least that's true in the sense that they had to develop numerous prototypes to get to the one that played Kasparov.

And then there's the issue of emergent behaviors-- they just loaded it up with tables and heuristics (just so we're clear: a heuristic is simply a rule-of-thumb algorithm) and gave it enormous computational ability for looking ahead moves. They did *not* hard-wire it for every conceivable move.

When Deep Blue played, there can be no mistaking this point:

It played on its own.

It drew from its teachings and utilized its native assets. But the game was far from decided (technically) when the match began.

After a while of whomping up on humans, one might say that statistically the game was essentially decided. But not that first match. Nobody knew what would happen.

Deep Blue played on its own.
CA: "we hobbits, in order to cover these last ideas, and literally millions of others, that swim in an astronomically HUMONGOUS concept space...no one can seriously maintain that all these ideas are described by physical configurations of our brain that correspond to unique symbol strings..."


Yeah, well... about that-- that's why I have books. And the Internet these days. Google is my friend.
CA: "I agree that IF a minimum set (and here we could grade AIs on how rich these minimum sets were) of these seemingly infinite and subtle mental constituents could be meaningfully translated/transcribed into SSs...if this was theoretically possible, that we would have a basis for creating real AI."


Gee, this "car" concept is really challenging. If we could just figure out how to make these 'wheel' things round instead of square we might could just get someplace....
CA: "streaming video presentation of the symbol output of the Matrix mainframe. The symbol stream is scrolling downwards on a display monitor. The human says:"I like to watch the activity in the Matrix directly"...without its INTERPRETATION into the ENTIRE reality of the jacked in humans.."

Yeah, that was always pretty bogus. Made for a nice graphic though, you gotta admit ;-)
CA: "If this little feat could be done today then robot/computers would SEE the world like we do."


Robots *can* see the world like we do. The real observation to make is that WE cannot see the world like robots can.

What robots cannot do (well) yet is *process* that imagery in a substantial way. Although that is rapidly developing. I am amazed at what is happening currently compared to when I first started thinking / investigating / researching all this stuff nearly 30-35 years ago.

Humans have a whole lotta hardware to help them interpret their vision receptors (eyeballs). And there is a lot of side-processing that occurs to tie the visual stream into other areas of the overall system-- for example 'motion sensing', 'edge detection' (as you pointed out), 'color matching', 'feature recognition', etc. Lots of stuff. Not to mention that human eyeballs 'twitch' almost imperceptibly nearly all the time. The twitching is necessary in order to generate successive image frames: our visual systems are tuned (via evolution) to "weed out" things that don't move, so the eyeballs 'twitch' to create an essentially artificial movement which counteracts the 'weeding-out / no-movement' part. (It's true, go look it up!)

But beyond that there is an enormous network of processing power brought to bear to see, sense, and interpret the human visual stream. To say nothing of the other senses which are sensed, received, processed and correlated into the same (overall) data stream (of near / sub consciousness).

To take any current robot and try to make the case that it is not possible to process the imagery with the same sophistication as a human is not really a fair comparison, since you're missing a lot of that requisite processing / correlation layer. It's possible to create it. And experiments so far have been quite promising in terms of down-the-road potential. But the ability to cram enough processing power into a small space / power budget is only just barely becoming a reality.
CA: "Just like DB does not play chess the Beach AI has not, can not, been/gone to the beach."


Neither have I gone to the moon. Or California for that matter. But I know a bit about both of them. Does that make me a machine?
CA: "I contend that no conceivable super sets of SSs can ever have the SPECIFICITY required to enable a robo-puter to qualify as a robust enough genuine AI. On the other hand I also said that one of the over arching qualities an AI must have to be an AI is the capacity to be a GENERAL purpose pattern recognizer, analogy generator, problem solver."


If we agree that Humans, let's say, are "intelligent", then how did they get that way? Were they just set down on the earth one afternoon fully-formed with all of their faculties?

I think I am correct in inferring that you believe in the theory of evolution (which, for the record, is still just a theory ;-)

In which case you most likely don't believe humans just arrived one day, but instead evolved.... but from what? And then that, from what? And from what? And so on...?

It's the same thought experiment I posited way back a long time ago-- when you whack away at the intelligence, when does it stop being intelligent? And once you've determined that it's no longer intelligent, where did the intelligence go? If you put the pieces back together, does the intelligence return? If you believe that the "mind" is an ethereal creation hoisted by the hardware / software platform (of the brain, robot system, computer-- whatever it is), can it potentially be backed up and restored? (Assuming one knew how to read / write to the hardware.)

And if, ultimately, you believe that humans evolved from something less intelligent, and less intelligent before that, and without intelligence before that, and without form or substance before that-- then don't you think that all those engineers and scientists and computer programmers and psychologists are simply learning from the example(s) of intelligence they already have to work with, formulating what they can, and doing their best to bestow "canned" intelligence on the entity however they can? Aren't they, in essence, when you boil it all away, simply filling in for mother nature and evolution, advancing the clock ahead by a greater unit of time? Instead of the process taking jillions of years, it can now be done in a couple of weekends... right?

And if they encounter a bug or a flaw, they head back to the drawing board to puzzle it out-- in essence a few more years of evolutionary time passing while they mull it over-- and then come up with a fix and implement it. If it's possible, it's really handy that they can reuse the same hardware container. But the next revision is a NEW individual, or else a reconstituted individual that is "compatible" to some degree with the new requirements.

I think the parallels are astounding. The principal difference is what drives the engineering. In nature it's essentially a 'drunken walk' with a couple of caveats: individual and species survival. But are you honestly trying to tell me that something can only be considered 'intelligent' if it happens accidentally?
Thinking about the nature of intelligence and whether an entity might be constructed that approximates human intelligence, it's useful to consider the work of epistemologists like Northrop -- though not as obsessively as Pirsig did, if we value our sanity -- to get a sense of the complexity of the concepts humans routinely manipulate as they engage the world. In The Logic of the Sciences and the Humanities, for example, Northrop speculates there are eight kinds of concepts: two each, one monistic and one pluralistic, of concepts by intellection, imagination, perception and intuition.
"we hobbits, in order to cover these last ideas, and literally millions of others, that swim in an astronomically HUMONGOUS concept space...no one can seriously maintain that all these ideas are described by physical configurations of our brain that correspond to unique symbol strings..."

you could argue that, but isnt the state space of the Google search engine pretty @#%*& huge??? beyond anything grasped by a human? isnt google arguably to some degree outdoing human ability in its own "ability" in a distinctively intellectual operation? ie, "search" -- search is a fundamental aspect of what you call "concept space". I would argue google is actually dealing with something approaching "concepts" in a digital way. I worked on search engine technology. there are techniques that actually refer to "meaning" as in "latent semantic indexing/analysis". ie *semantic* == meaning. urge you to look into that technique, its very intriguing, and coincidentally the exact same mathematics applies to netflix ratings prediction algorithms.. remarkable.. evidence for a universal intelligence algorithm.... as I have been continually proposing here and the rat neuron navigation is another instance of...
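heres a pocket-sized illustration of LSA in python-- toy data invented just for illustration, but the mechanism is the real one: factor a term-document count matrix with SVD, keep the top k singular vectors, and compare terms in the reduced "concept space". "ship" and "boat" never share a document below, yet they land close together-- thats the "latent semantic" part:

import numpy as np

terms = ["ship", "boat", "ocean", "wood", "tree"]
#                d1 d2 d3 d4 d5   (tiny term-by-document counts)
A = np.array([[1, 1, 0, 0, 0],    # ship
              [0, 0, 1, 0, 0],    # boat
              [1, 1, 1, 0, 0],    # ocean
              [0, 0, 0, 1, 1],    # wood
              [0, 0, 0, 0, 1]])   # tree

U, s, Vt = np.linalg.svd(A.astype(float), full_matrices=False)
k = 2
term_vecs = U[:, :k] * s[:k]      # terms projected into concept space

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine(term_vecs[0], term_vecs[1]))   # ship vs boat: high
print(cosine(term_vecs[0], term_vecs[4]))   # ship vs tree: ~zero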
hi billy thanks for the comments. I especially like the BRIEF ones sometimes :p
have you read pirsig, zen & the art of motorcycle maintenance? is that what you're referring to?
it seems we're not at "checkmate" but "stalemate" with CA.. as usual. I think CA has a natural affinity for stalemate whether it be WTC conspiracy theories or theories about artificial intelligence.. dont know why that is, probably some particular trauma he suffered as a young lad I would guess :p

anyway here is a thought experiment that I challenge CA to refute. suppose that I find "how" neurons tend to behave. I create an artificial neuron to help people who are losing theirs due to alzheimers. initially, I just replace the defective neurons [in this thought experiment, alzheimers is due to defective neurons]. then, I become very proactive and slowly replace other neurons.

eventually, I replace *every* neuron in their brain.
what would happen?
(a) the person behaves exactly as before
(b) the person has improved function
(c) intelligence slowly degrades
(d) there is some magic point at which point the person no longer is "intelligent"

now, I will be curious as to how CA argues this case and I think he will run into some difficulty. but lets see how it plays out.
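while we wait, a tiny python simulation of the gedanken-- note it simply *assumes* the contested premise, namely that an artificial neuron can copy the original's input/output behavior exactly. granted that, the swap is behaviorally invisible, ie outcome (a):

import numpy as np

rng = np.random.default_rng(42)
W1, W2 = rng.standard_normal((8, 4)), rng.standard_normal(8)

def biological_neuron(w):
    return lambda x: np.tanh(w @ x)

def artificial_neuron(w):
    # same transfer function, different "substrate"
    return lambda x: np.tanh(w @ x)

neurons = [biological_neuron(W1[i]) for i in range(8)]

def brain_output(x):
    return W2 @ np.array([n(x) for n in neurons])

probe = rng.standard_normal(4)
before = brain_output(probe)

for i in range(8):                  # proactively replace *every* neuron
    neurons[i] = artificial_neuron(W1[i])
    assert np.allclose(brain_output(probe), before)   # behavior unchanged

print("all 8 neurons replaced; output identical -> outcome (a)")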
must find myself again agreeing with MrE. evolution is strong evidence that intelligence is basically an algorithm. animals use different algorithms for survival in what is called "behavior". there is a continuum of intelligence, with humans so far at the apex, but there are no clear discontinuities. apes have intelligent-like aspects to them. the main difference in our brains, it appears, is complexity. and possibly, also, our brains better implement some kind of algorithm than ape brains do. I would suspect it has something to do with what neurobiologists call "plasticity", ie neurons ability to learn over time, involving stuff like habituation and dendrite growth. I would suspect that ape neurons have less plasticity to them. plasticity is a technical term with specific meaning in neurobiology.
CA: "About a human knowing something about Mars or ancient history, etc., without having been there....just like the beach AI may be said to know something about beaches without having been there...if the AI could demonstrate coherent knowledge...like concluding that sand grains get carried in clothing back home and fall out on the floor, that kind of inferential knowledge...without having had it programmed in directly...if it can do so consistently then I would agree, it is approaching being a real AI...let the folks keep working at it."

Heck, if that's all it takes, we're already there. Go check out the 'CYC' project (I posted the url a couple of comments back). It can already do that kind of processing. And symbolically to boot.
CA: "I'm skeptical because, like you said elsewhere, the universe evolved meat bots to do the things we do and most artificial constructions are still inferior to nature's from flight, to artificial limbs, to nascent nano-tech, etc....if hobbits still have a hard time developing controlled fusion, other super hi tech gizmos....then AI, which must be the hardest of inventions, should come last on the list, don't you think?"


Yes, but two things:

1. Mother nature's had a several billion year head start and already has a pretty good library of parts and algorithms built up.

2. Human engineers have not historically been able to design at the same level as mother nature because: (a) it's difficult, but more importantly (b) it's been technologically out of reach. Only in very recent times has human technology begun to approach a level of scale that can compete with mother nature and biological systems.

Biological systems tend to be highly parallel and redundant. Cells are ten cents for a bucketload. Nature doesn't need to achieve the highest degree of efficiency possible, since it has the ability and the propensity to throw literally thousands, hundreds of thousands, even millions of parts at a problem. What it can't do in direct engineering, it makes up for in spades with redundancy.

And not all of mother nature's designs pan out either. There are literally millions of creatures rotting away in the great dustbin of evolution. And even of just the ones that are still around today, there are many features that are no longer needed, no longer supported, or even outright disadvantageous for the current environment-- i.e., a species on its way out, or at least in some serious transition.

Let human engineers begin to compete with mother nature at the nano level and with massive parallelism and redundancy, with new super batteries (super capacitors) and fancy exotic musculatures (that have recently been developed). Mankind is just about on the verge of being able to give as good as it gets.

Mark my words! :-)
"although I'm skeptical because, like you said elsewhere, the universe evolved meat bots to do the things we do and most artificial constructions are still inferior to nature's from flight, to artificial limbs, to nascent nano-tech, etc"
whoa dude I think you may want to reevaluate how you stated this. many human machines are far superior to nature's in almost every way. now, evolution is an incredible designer, and humans cant even yet create a robotic fly, although we are getting close. but think of a bulldozer as an earth-excavating mechanism, or a skyscraper as a nest, or a spaceship as a flying mechanism (no creature other than humans has gone into space-- although there actually are some rare bugs that can live/hibernate/float high in the stratosphere..).. or a computer as a data processing system.. etcetera.... humans have far outstripped nature. in fact we are slowly/quickly [depending on who you ask-- republicans or democrats haha] destroying the environment with our resource-chomping/consuming superiority. so I think you might want to reevaluate your statement.
CA very wisely avoids any mention of my neuron-replacement gedanken. checkmate =)
so yeah, an UNAUGMENTED human is actually pretty unimpressive, weak, insignificant. but thats the whole point. humans are into AUGMENTATION ie TOOLS. its part of our transcendent superiority. our superiority over nearby animals is immeasurable. its not just an order of magnitude. its a whole other ballpark. and, guess what .. AI is in many ways the ultimate augmentation.

now, I do agree with you that AI is going to be difficult to achieve. but, look, the "cloud" is starting to emerge in IT, and I just read a great article that speech recognition on the cloud is coming soon, and its much more accurate. its a milestone. and the "cloud" or a cloud-like environment will probably be the birthplace of AI. thats why you are very clueless to be dissing google, because they are one of the masters of the cloud, they have built the largest "cloud" in the world, they have virtually invented and utilized it before anyone else.
Yeah. After struggling with Northrop in grad school, I decided it was Northrop who drove Pirsig insane. I thought you guys might have come back to noodle this some more.

From a work in progress:

“Where does this feeling you describe as ecstatic come from, Professor?” Everett asked.
“It comes, Mr. Sugarman, from an innate appreciation of balance and harmony. Of beauty, if you like. The machine appreciates the beauty of the world. Of itself. Although I think we can assume that from the machine’s point of view, at first at least, those things will seem to be identical. It appreciates beauty, because it is bred to appreciate beauty. Don’t you see? It appreciates the beauty it is bred to appreciate.”
“But where is the feeling?”
“In those parts of the machine that are capable of feeling. Obviously, you would have to include some living cells.”
“Obviously,” Everett said, glancing at Bobby Lee Reed.
“You see, Mr. Sugarman,” Professor Grossman said, “you need an organic platform. Silicon is a couple of million years behind on the evolutionary learning curve. It has no useful racial memory, so to speak. What we think of as intelligent life has had millions of years to make a perfect adaptation to its environment.
“Let me give you an example. A baby’s cries are impossible to ignore. It is no accident that the organs that produce that irresistible sound are the same ones that need what the sound attracts. Food. We’re a gut with brains. That’s evolution. You can’t replicate that kind of adaptation in a laboratory.”
CA: "depends on a lot of factors that robotics can probably assist in resolving...but that still doesn't require genuine AI."

Perhaps your criteria for "intelligence" do not permit it to exist?
BILLY: "“Where does this feeling you describe as ecstatic come from, Professor?” Everett asked."

Interesting comment. My mom and I had a discussion once many years ago about the origins of emotions. We were debating the most primitive emotion-- what it is, what was its genesis, etc. And to be clear, we were separating "emotion" from "condition", i.e., hunger is not an emotion, its a condition. Satiation is not an emotion, its a condition.

One of the things we talked about is whether or not there is a distinction between emotions and conditions, or else whether 'emotions' are just more subtle or abstract 'conditions'.

For example, are 'hope', 'love', 'despair', 'joy', 'happiness', 'sadness', 'anger', 'embarrassment', 'envy', 'jealousy' emotions or conditions?

What about 'courage'?

What about 'sacrifice'?

What about 'love'?

Perhaps they are neither emotions nor conditions, perhaps they are something else?

Is it possible to create a machine to have emotions? Would they be "real" emotions or just pre-programmed constructs? Would it be possible to discern the difference? And even if they were conclusively demonstrated to be illusory, would they feel any less "real" to the machine?

Are your emotions real? Or simply artifices of your body, the machine? How do you know? How could you know? Would it make a difference if you did know? For certain? Would your emotions feel any less "real" to you?

For CA, who I presume is reading along here: how much of yourself, your thoughts, feelings, emotions, etc. is "real", and how much is simply a collection of illusory states created by various sensory inputs or interior system states? Are you any less "real" if you know how you work? How you operate? If you can reduce your workings to a set of equations and theorems? If you can duplicate them in the laboratory?

Or, in the end, does 'individuality' largely boil down to the unique set of sensory inputs you've experienced in combination with the pattern of neural / biological components that comprise you?

And here's one more question-- it is well known that over time we grow, we develop, we mature, we even change out the cells in our bodies over time-- popularly said to be about every seven years or so.

Are you the same person today you were 50 years ago? 25 years ago? 10 years? 1 year? Last month? Yesterday? An hour ago? A minute ago? A second ago? 1/5th of a second ago? Now?

When is "now"? Is it "now" as you perceive it in your head? Or is it now as you believe it to be theoretically on paper? For most people there is about a 1/5 of a second difference between the "now" they perceive and the "now" they believe.

In the computer world, a lot can happen in 1/5th of a second. In the meat world too. In the abstract world there is no time, unless you introduce it yourself. And then you assume it works the way you think it does. Does it? Is time contiguous and flowing? Or does time only exist in quantized chunks? How are Zeno's paradoxes resolved? If time is quantized, what are the resulting implications for space and matter? Where do space and matter go 'between the chunks of time'? Are they still there but imperceptible? Or do they cease to exist? What binds them together? How do they maintain integrity? Why does time appear to flow in only one direction? Why isn't that a contradiction of Einstein's theory of special relativity? Is the universe really expanding (and/or contracting)? Or is time just moving at a different rate, causing us to perceive the relationship between the quantized elements of space and matter differently-- elongated or compressed, for instance? How could you tell? What 'ruler' or 'yardstick' could you use to measure "true" distance irrespective of time? Einstein's constant-- the speed of light?


If the observer affects the observed, who is the ultimate observer?
CA: "I wonder if you and vzn really believe that that is what a human brain does in functioning in the world. Do you think that the human brain also JUST processes some kind of sophisticated SSs...that we have no access to as yet, but which must exist...because if you really think that then I understand why you think that AIs can theoretically exist."


I have commented numerous times, several times *JUST* to clarify this very same point. My own belief is that complex systems are built from the ground up out of simple, easily-understood components, which are then further grouped into successively more advanced structures until a platform is ultimately developed suitable for the housing / hosting of a 'mind'-- i.e., a theory of mind.

The layers of the system can be considered in a manner similar to an onion. In the outer layers there are very simple parts performing simple tasks, which are combined to perform slightly more complex tasks, and so on. Those tasks might be sensing temperature, pressure, light, etc. Those layers may also be involved in the absorption of nutrients or the shedding of waste materials.

As an organism grows more complex, its needs and requirements for care and maintenance grow more complicated along with it. Additionally it may also take on new tasks and new roles in accordance with its ever-improving (in an evolutionary sense) functional components and governing behaviors. Strategies are developed, at the hardware / firmware level, for implementing and deploying more and more advanced and subtle refinements to form and function.
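A minimal Python sketch of that bottom-up layering, loosely in the spirit of Brooks-style subsumption architectures (my framing-- and the behaviors here are invented for illustration): fast reflexes get first claim on behavior; more deliberate layers fill in when nothing urgent fires:

def avoid_heat(senses):
    if senses.get("temperature", 0) > 60:
        return "recoil"             # simple, fast, non-negotiable reflex

def seek_food(senses):
    if senses.get("energy", 100) < 20:
        return "forage"

def explore(senses):
    return "wander"                 # default, lowest-priority behavior

LAYERS = [avoid_heat, seek_food, explore]   # earliest in the list wins

def act(senses):
    for layer in LAYERS:
        action = layer(senses)
        if action:                  # higher-priority layer preempts the rest
            return action

print(act({"temperature": 80, "energy": 10}))   # recoil beats forage
print(act({"temperature": 20, "energy": 10}))   # forage
print(act({"temperature": 20, "energy": 90}))   # wander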

When does "awareness" begin? Is it an emergent property? Is it a certainty given a particular level of complexity? Is awareness an accident? Is awareness a requisite component for 'intelligence'? Or is it the other way around?

I believe that *IF* -- get this, I don't know how much clearer I can be -- *IF IF IF IF IF IF IF* symbolic processing is ever possible, in natural systems anyway, it sits at the very top levels of complexity and is not the usual course of natural systems.

Artificial systems, perhaps. The engineering process is completely different. The goals and means are completely different. While parallels and analogs can be drawn, inferred, and employed from one realm to the other, from an engineering and practical-applications standpoint they are-- at least at this point-- separate engineering realms with very little crossover, except at the most conceptual levels. Mother nature, however thoughtless and haphazard her methods, is ultimately a very resourceful and practically-minded engineer. Human engineers, at our present stage of technological prowess, can study and imitate, but not yet duplicate, all of her feats. Although lately the humans have been showing off some very interesting ideas that mother nature never thought of.


I do *NOT* believe, and I'm stressing this most forcefully, that intelligence, in natural systems, develops in a vacuum. I am a believer in the bottom-up approach, with respect to getting results.

But I am not resolved in my mind whether or not a pure symbolic processor is possible. I am not prepared to state unequivocally one way or the other. My *suspicion* is that it *is* possible. Whether or not it is possible *practically* is another matter.
CA: "nature does what it does without having to do the tedious calculations that we have to do to DESCRIBE what nature does. So, that's our fundamental difference...I maintain that a biological brain is operating in the same reality as the electron, the planet"


Sure, so what? You and I and all of us operate with less than perfect information all the time. In fact I'd wager that for most things-- *MOST* things-- most of us are blissfully unaware of the true depth and nature of what we're doing.

Sure, you know how to break eggs into a pan over medium heat and toss in some grated cheese and sliced mushrooms to make a delicious omelette. But do you know *how* the eggs were made? What is their chemical composition? The cheese? The mushrooms? Do you know precisely what chemical reactions are taking place as the proteins in the egg are broken down over the medium heat and mix with the emulsifiers in the cheese, and so on?

When you climb into your car and turn on the engine, do you understand all the intricate interactions between the pistons, the cylinders, the intake and exhaust valves, the timing, the fuel / oxygen mixtures, etc.? Or do you just put it in gear and drive?

If you write a program, do you concern yourself with the inner workings of the CPU? The RAM? The I/O bus? Or do you just fire up your editor and start coding, confident that your favorite compiler can figure it out and everything else just works?
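To put a fine point on it, even one innocuous line of code rests on layers we never look at. A tiny Python illustration-- the standard dis module shows the interpreter's bytecode, which is just one layer of many below the source:

```python
import dis

def total(xs):
    return sum(xs)

print(total([1, 2, 3]))   # we just call it, trusting the layers below
dis.dis(total)            # peek one layer down at the interpreter's bytecode
```

And below the bytecode sit the interpreter loop, the C runtime, the OS, and the CPU's microarchitecture-- none of which we think about while coding.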

How much fundamental information is needed to create, invent, combine, and utilize the materials and things that occur around us? There are at least two answers to that, I think.

First, it depends on what you are hoping to accomplish. If you just want "a house" then you can utilize the trees in the forest or the clay from the ground. For tools you need only invent the axe or the pot. More advanced needs (goals) require sufficiently advanced tools and methods to meet them.

Second, some applications require fundamental / specialty knowledge based (as much as possible) on first principles. In many cases applications cannot be conceived until after the attributes and properties of materials are discovered and understood. They don't need to be understood completely, however, for some applications to become possible and/or practical. Discovery and refinement are occurring all the time, and accordingly new applications and refinements of previous applications become possible as a result.
CA: "because the 'puter has been built as just a vehicle to house the all important formal system. To assume that we are analogously constructed by nature is a huge assumption...the result of computer based prejudice."


Who do you believe is making that assumption here? Who do you think keeps dragging us back to 'computers' and 'symbolic' processing?

*I* don't believe that those are necessarily the requisite components, and I have never said so. While it is certainly true that a large portion of my own background is based around computers, electronics, programming, mechanical systems, and the marriage of various combinations of those elements, I have not stated anywhere (that I'm aware of) that those are *the* fundamental components and that no other components can be used.

What I *have* said is that I believe *we* are machines.

And that is not the same thing.

What I have also said-- I think-- is that I believe it is possible to construct intelligent machines, which might be indirectly construed to mean I believe intelligent machines could be constructed out of electro-mechanical assemblies, suitably sized in scale.

In fact, I will go a step farther and state that I definitely believe it is possible to construct an electro-mechanical *organism* (by which I mean "really complex mobile autonomous system") that can be considered intelligent. I don't think "mobile" is a key term here, however. I think once such a system was developed, the resultant theory of mind and practical concepts could be retrofitted and applied in a non-mobile capacity. So please don't get hung up on the mobile part. It's probably even possible to develop the intelligence without the mobility in the first place, but I think the emphasis would then have to be placed more on the abstract (formal) aspects and less on the practical. I am more practically minded when it comes to my forays into robotics and intelligence.
CA: "So, a computer can run a chess game simulation as Deep Blue does...the chess game context is such a restricted part of our universe, already rooted in the description part of the universe, that the simulation of the chess game is very close to a real chess game for us when we interpret DB's SSs back to our world. But most things in life are not that easy. When DARPA gets robot cars to drive over a real world course, then it means that the sim in the computer is robust enough to make it look, after the robot cars SSs are interpreted back into our world of dirt tracks, and road curves, and grades, etc., that the sim IS the real thing."


You haven't built very many robots, have you?
@mr e
re: people are machines
Whether we are or not, I think most of us understand ourselves better as machines than as organisms. I wonder when it became okay to think of ourselves that way. Not when we started thinking about ourselves as machines, but when we started to feel okay about thinking about ourselves as machines.

Back in the old days, when machines were simple enough to understand how they were "wired up," people built a machine to make decisions for them. They would feed in the data and the machine would grind out decisions on governance, economics, health and nutrition, their love lives, and anything else they could think to ask it about. That went on for years, and everybody was perfectly happy with the arrangement, until one day a curious engineer peeked inside the machine and discovered a couple of circuits were wired backwards.

Question is: How did he know the machine was wired up backwards?
CA: "I may agree with you up to that point, but you have no right to assume that this functioning of our brains is just a matter of processing information from neuronal signals."
but, maybe neither do you have any right to assume the converse =)
"Is it possible to create a machine to have emotions? Would they be "real" emotions or just pre-programmed constructs? Would it be possible to discern the difference? And even if they were conclusively demonstrated to be illusory, would they feel any less "real" to the machine?"
I tried to work this out at a very young age, and concluded that the questions of artificial emotion and artificial intelligence were somewhat disconnected & independent, or as the geeks say sometimes, "orthogonal". as a model, consider a pure-reason version of spock whose parents are both vulcan. he's intelligent, but emotionless, right?
but, artificial emotions are "almost" as interesting a question to try to reproduce as artificial intelligence.
I also conclude that many emotions are actually disguised systems for things like self-preservation, and many are rooted in the ego. "I feel this. I feel that." the I is the ego. evolutionary psychology increasingly supports this pov.
"What I *have* said is that I believe *we* are machines."
its interesting to try to trace the conceptual origination of this idea. it probably does not date to the pre-modern age, from what I can tell, because ancient machines were primitive. it was around the enlightenment that machines became very sophisticated, mostly in the form of clocks, and the idea of a "clockwork universe" was born.
I trace it perhaps to davinci, who is quoted as saying, "man is a marvelously constructed machine". it is very apropos that davinci would be the first to voice this, because he was one of the most brilliant engineers and machine-makers who ever lived. he couldnt even build all the machines he had in mind [in this way he reminds me of babbage]. some of his designs foreshadow the helicopter. etcetera. but notice, davinci was renaissance era, anticipating the enlightenment clockmakers by more than a century. the idea is about half a millennium old. and, I would argue, CAs attitude represents the last vestiges of resistance to it. CA, now dont you feel great being on the opposite side of davinci? wink...
CA I urge you to ponder the artificial neuron idea more fully. because, shouldnt it be possible to replace defective neurons in much the way we replace defective limbs? artificial ears & retinas have now been around for several years and the technology is improving significantly, possibly to the point that those with artificial retinas may actually have better vision than the rest of us.
now, why do you think such a series of events, an evolution of replacement, is impossible in the brain?
I will concede that we might be able to replace neurons without figuring out how intelligence works. but it might not matter; we might be able to reverse-engineer it in the following way: just map out somebody's neuron connections, and then replicate the map in silicon. sounds outlandish? but we do something similar with "cell lines" right now, where original cells used in biological research are endlessly copied.
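heres a toy version of the idea in python-- the wiring & weights below are invented for illustration, and real connectomes are astronomically larger & messier-- a mapped wiring diagram replayed as simple threshold units:

```python
# hypothetical mapped wiring: neuron -> list of (target, synapse weight)
connectome = {
    "A": [("B", 0.9), ("C", 0.4)],
    "B": [("C", 0.8)],
    "C": [("A", -0.5)],   # an inhibitory connection
}
threshold = 0.5

def step(active):
    """One time step: sum weighted input at each neuron, fire if over threshold."""
    incoming = {n: 0.0 for n in connectome}
    for n in active:
        for target, weight in connectome[n]:
            incoming[target] += weight
    return {n for n, total in incoming.items() if total > threshold}

state = {"A"}                      # stimulate neuron A
for t in range(4):
    state = step(state)
    print(f"t={t}: firing {sorted(state) or 'nothing'}")
```

the conjecture, of course, is that if the map is faithful enough, the replay is too.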
BILLY: "I think most of us understand ourselves better as machines than as organisms."


What an interesting comment-- and I mean that sincerely. I am curious how 'machine' and 'organism' differ? The technical definitions are a bit different, but I don't think any of us has been using the word 'machine' in this discussion in its strictest denotative meaning-- instead, the sense I've gotten is that we've been using the word 'machine' more as a "shock" word, intended to force us to look at ourselves (or a potential entity) from a purely "mechanical" / "architectural" point of view, rather than get caught up in the gestalt of life and the mythical, magical properties construed therein.

By restricting ourselves to the realm of "machines" we restrict ourselves to the observed and the observable. We leave "God" out of it-- even if it comes to pass that He (or She or It) is the ultimate inventor / creator of ourselves as "machines". We look at ourselves and other "entities" around us in terms of their parts and components, and study those parts with an eye toward how they work, interact, and combine together to create the "organisms" we are.

Further, we have been discussing how various types of "machine platforms" can be constructed or construed so as to be a container for the ideas of "mind" and "intelligence". Several of us have been further interested in whether or not "mind" and/or "intelligence", insofar as they may or may not reference different concepts, can also be constructed in a purely artificial and abstract realm-- first principles-- and therefore "modeled" in an abstract manner, and/or simulated on different types of platforms, not necessarily biological in type.
BILLY: "Back in the old days when machines were simple enough to understand how they were "wired up," people built a machine to make decisions for them. They would feed in the data and the machine would grind out decisions on governance, economics, health and nutrition, their love lives, and anything else they could think to ask it about. That went on for a years, and everybody was perfectly happy with the arrangement, until one day a curious engineer peeked inside the machine and discovered a couple of circuits were wired backwards.

Question is: How did he know the machine was wired up backwards?"


This has the hallmarks of a parable. Would you finish it for us and give us the moral to ponder?
VZN: " (Mr.E) "Is it possible to create a machine to have emotions? Would they be "real" emotions or just pre-programmed constructs? "

I tried to work this out at a very young age, and concluded that the questions of artificial emotion and artificial intelligence were somewhat disconnected & independent [...]

I also conclude that many emotions are actually disguised systems for things like self-preservation, and many are rooted in the ego."


I too have had many discussions and thought experiments concerning the ideas of "artificial emotions" and conclude that they are no more difficult or illusory than "artificial intelligence" is itself.

And, as I indicated previously, I don't like the term "artificial intelligence", or "artificial emotion", or any other such construct, because the terms indicate something that is a *property* and not a *thing*. And as properties they either *are* or *are not*; they either exist or they do not. It is not possible to have an "artificial property".

They could be properties of a "non-biological" system; they could be properties of a "non-human" ("inhuman"?) system; they could be properties of an "alien" system; but the properties themselves, if they are observed, are real, not illusory-- even if the mechanism used to produce them is completely and totally understood-- a constructed or "machine" artifice.
VZN: "it was around the enlightenment that machines became very sophisticated, mostly in the form of clocks, and the idea of a "clockwork universe" was born."

There have been "sophisticated" machines discovered that utilize gears and potentially clockwork, or clockwork-like mechanisms that date back millenia-- at least 6,000 or more years. Their existence was somewhat fabled for a long time, the subject of curiosity and conjecture, but modern scientific technologies have confirmed their antiquity and the presence of "gears" (proto-gears), and other "clockwork-like" mechanisms. The actual use of the objects is still somewhat of a mystery but most people believe they were primitive analog "computers" designed to assist mariners in navigation.

Also, unconnected to the above, there is evidence of other "technology" known to ancient peoples-- though it is not known whether they possessed a true understanding of it or were able to make real use of it. For example, ancient "batteries" have been discovered in the region once called "Persia" / Mesopotamia (modern-day Iraq and Iran).

These "batteries" were simple clay pots containing combinations of acidic liquid and metallic components-- copper and iron, in the best-known examples-- and when recreated in modern times they have been demonstrated to pack a reasonable kick. To what end and extent the knowledge was applied, however-- whether or how the devices might actually have been used at the time-- remains unknown.
VZN: "CA I urge you to ponder the artificial neuron idea more fully. because, shouldnt it be possible to replace defective neurons in much the way we replace defective limbs? artificial ears & retinas have now been around for several years and the technology is improving significantly"


I think we-- all of us, myself included-- may be doing ourselves and our fellow readers a disservice every time we use the word "artificial". As such we are perpetuating the notion that:

A. These components are somehow 'inferior' or 'false' with respect to the 'real thing', whereas the meaning intended was really more that of 'replacement', 'enhancement', or 'augmentation'.

B. It seems to suggest that the 'replacement' components cannot be biological in nature and must instead be of some alternate construction-- which may or may not prove to be true.

I do not think that just because we are trying to make a case for "machine intelligence", machine "life" (as it were), or other "machine" concepts, we are necessarily intending to *exclude* the biological as an example or contender. I myself would be quite happy, for instance, with a replacement biological heart or a mechanical heart-- in any classical sense-- assuming either one worked, preferably at least as well as the original, and supplied the necessary functioning to the rest of me-- the "machine" that is *me*.

Moreover, why should I care whether I was "meat" or "machine" as long as I continued to have at least the same level of functioning, mental powers, reasoning abilities, etc. as I do in my original corporeal essence? And if by becoming something other than a "pure meat"-based entity I can gain additional features, functions or advantages-- well, why not?

And to take it to the complete limit-- CA will have fun with this-- if my "essence" can be transferred into some other medium and yet continue to "run"-- i.e., I get to continue to "be me" in my new form-- again, why not? Particularly if I get to continue to "live" and interact with the universe at-large, and perhaps a "new" "universe" completely artificially constructed! :-)
VZN: "I will concede that we might be able to replace neurons without figuring out how intelligence works."


Though figuring out "how intelligence works" may be some time off, brain researchers are rapidly homing in on how "memory" and biological "storage" mechanisms work, and moreover are devising instruments to detect, display, and record them-- not "display" in the sense of truly "rendering" them as if a picture, but to display the existence of them with respect to specific, demonstrable, identifiable constructs the individual has "recorded" / "remembered" in the brain.
CA: "Sure deep Blue may be an instantiation of a frozen 'intellect'..."


Uh oh-- battle stations! CA is admitting that intelligence can be "canned"!!!


CA: "but one of the fundamental criteria for genuine AI should be that it is flexible"


Says who? Seems like we're talking apples and oranges here. Are we talking about "intelligence" as a concept and practical implementation, or about an "entity", complete with a "mind", aka a "theory of mind"??
THEORY OF MIND: (from WikiPedia, link below)

"Defining Theory of Mind

Theory of Mind is a theory insofar as the mind is not directly observable.[2] The presumption that others have a mind is termed a theory of mind because each human can only prove the existence of his or her own mind through introspection, and no one has direct access to the mind of another. It is typically assumed that others have minds by analogy with one's own, and based on the reciprocal nature of social interaction, as observed in joint attention, [3] the functional use of language,[4] and understanding of others' emotions and actions.[5] Having a theory of mind allows one to attribute thoughts, desires, and intentions to others, to predict or explain their actions, and to posit their intentions. As originally defined, it enables one to understand that mental states can be the cause of—and thus be used to explain and predict—others’ behavior.[6] Being able to attribute mental states to others and understanding them as causes of behavior implies, in part, that one must be able to conceive of the mind as a “generator of representations”.[7][8] If a person does not have a complete theory of mind it may be a sign of cognitive or developmental impairment."

(Url: http://en.wikipedia.org/wiki/Theory_of_mind)


Note the pronounced anthropomorphic / "human"-centric bias and slant the article has. It seems to presuppose that other beings do not have a "mind", or at least a complete mind, unless they can use language, interact socially, or understand "others' emotions or actions".

However, being kind, I do not believe the author(s) had any deliberate intent to slight or diminish other entities with lesser or alternative "mind constructs"; rather, they were drawing on their own belief that they themselves possess a mind and were merely enumerating its various features and abilities. Still, it is this type of thinking and defining that is dangerous to the realm of "machine intelligence" (constructed intelligence), and to the study of the "mind" and the development of theories to represent it, as it tends to channel thinking and requirements along anthro-centric lines. This, in turn, falsely promotes humans and the human-centric view of "mind" (et al.) as the primary, principal, and most "properly defined" model of the concept-- aka the "reference" model. Whereas in truth, it is merely one of a continuum of available models, even if only restricted to existing naturally-occurring biological versions.

And yet, even with its faults, it is clear that the science and study of mind must begin somewhere, and accordingly must state some definitions-- even if they later turn out to be inaccurate or flat-out wrong. There must be a beginning point for consideration and conjecture-- something to fuel the imagination and to serve as a basis for points and counter-points.

For the model of intelligence that we humans are familiar with, study, and possess is completely centered around and based upon the notion of the "relationship as definition", the "uber-relationship", refinements of same, and expressive constructs that permit the manipulation of "relationships". It is only natural that our initial theories of mind will utilize these concepts and incorporate them into our resultant models for both mind and intelligence. Whether or not they prove to be the *only* constructs possible remains to be seen. "Mind science", as a field, is still very much in its infancy.
A couple of interesting books on robots, souls, philosophy, etc.

"The Soul of the Robot", Barrington J. Bayley

"Cyborg", Martin Caidin (This book later became the basis for the television show "The Six Million Dollar Man")
"There have been "sophisticated" machines discovered that utilize gears and potentially clockwork, or clockwork-like mechanisms that date back millenia-- at least 6,000 or more years."
I must disagree MrE. the most sophisticated machine of antiquity, as far as I can tell, is the antikythera mechanism. its a work of great beauty & I urge everyone to read about it. however, it was probably known only to a few at the time, a kind of historical oddity/anomaly. there may have been primitive machines, but not sophisticated ones.
"I think we-- all of us, myself included-- may be doing ourselves and our fellow readers a disservice every time we use the word "artificial"."
I think you have a good point here, its interesting how much terminology "frames" the debate [have you guys read about "framing"? much interesting info on that]. I agree somewhat that maybe the mere term AI plays into the hands of its detractors. however, the term has kind of stuck in the field. it may be a historical anomaly. you appear to prefer "machine intelligence", which yeah, I guess I am ok with. I personally use the term in a figurative way, almost in an ironic tone at times. because if we can create artificial intelligence and convince someone like CA, and pass his own stringent turing test, then it will be anything but artificial for sure. I guess we have a new variant of the Turing test, the CA test-- the most strict version, in which the human judge is a very dubious one who doubts AI is even technically possible =)
If CA admits there is a Turing-style test, of any nature or complexity, whose outcome he would accept, then his position will have shifted to its complete opposite. And yet, in order to "agree" on intelligence, there must be an external reference point.
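For what it's worth, the test *protocol* is trivial to sketch, even if passing it is not. A bare-bones Python version-- both respondents are stand-in stubs invented purely for illustration; the "CA test" would simply be this loop with a maximally skeptical human in the interrogator's chair:

```python
import random

def human_respondent(question):
    return "hm, let me think about that..."

def machine_respondent(question):
    return "hm, let me think about that..."  # the goal: indistinguishable

def run_test(interrogate, rounds=5):
    """One session: the interrogator questions two hidden parties,
    then must guess which label hides the machine."""
    hidden = {"X": human_respondent, "Y": machine_respondent}
    if random.random() < 0.5:                 # shuffle which label is which
        hidden["X"], hidden["Y"] = hidden["Y"], hidden["X"]
    transcript = []
    for _ in range(rounds):
        q = interrogate(transcript)
        transcript.append((q, hidden["X"](q), hidden["Y"](q)))
    guess = "X"   # a real interrogator would decide from the transcript
    return guess, hidden[guess] is machine_respondent

def ask(transcript):
    return "what does it feel like to be wrong?"

guess, caught = run_test(ask)
print("guessed", guess, "--", "caught the machine" if caught else "fooled")
```

The machine "passes" when, over many such sessions, the interrogator's guesses are no better than chance.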
@mr. e
"This has the hallmarks of a parable. Would you finish it for us and give us the moral to ponder?"

I wouldn't dare. But I think the point is that while we just "grow" from the inside out, so to speak, the things we construct are all designed, based on some theory, so we can say whether they have been put together properly or not. I suspect when all is said and done that the route to "artificial" big brains is likely to be drugs, surgery, genetic engineering, implants, etc., to augment our brains.
And that has its own problems. Don't forget the Krell.
BILLY: "Don't forget the Krell."

Krell, hell. Don't forget the Griswolds!

:-)