
On Artificial Intelligence and Machine Men
By Everett A Warren
March 14, 2004

Presented as part one of sixteen

[Edited to add: the entirety of this article can now be read in one place, over on Green Man Envy: here! The links referenced in the essay were surprisingly intact given almost a decade has passed since this was posted... there are some dead ones, and they're fixed over on GME!]

~ ~ ~

To get the obvious out of the way, and to make sure it is so for all readers: Artificial Intelligence, or AI, is not just a film by Steven Spielberg; it is a term for a computer system, or other machine, that is able to think for itself. As in the aforementioned production, AI is often associated with robotics, specifically with robots of humanoid structure – an effort to anthropomorphise machines in appearance and behaviour.

Of course, recent computing journals have carried articles claiming that AI is a misnomer [1] – after all, the computer cannot truly possess intelligence; it is merely programmed to take some action based upon a particular occurrence. Whether or not that proves true – and in a moment I will detail a related common misconception that may shed some light on it – the claim is worthy of some thought and analysis. As evidence of how far the lofty goal of AI has fallen, one might point to the marketing of the simplistic algorithms behind computer-controlled opponents in games as "enemy AI". Do not think for a moment that a modern-day video foe can think one whit more than the pixelated opponents of the early games – it merely has more data points and more complex looping behaviour, all of it a set of coded rules it must follow.
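That last point can be made concrete with a toy sketch (illustrative Python; the names and rules are invented here, not taken from any real game): a typical "enemy AI" is nothing but a hand-written rulebook consulted on every turn.

```python
# A toy "enemy AI": nothing but fixed, hand-coded rules.
# Every name and threshold here is illustrative, not from any real game.

def enemy_action(distance_to_player, health):
    """Pick an action from fixed rules -- no thinking involved."""
    if health < 20:
        return "flee"          # rule 1: self-preservation
    if distance_to_player < 5:
        return "attack"        # rule 2: close enough to strike
    if distance_to_player < 20:
        return "chase"         # rule 3: pursue the player
    return "patrol"            # default: wander a preset route

print(enemy_action(3, 80))   # attack
print(enemy_action(50, 80))  # patrol
print(enemy_action(3, 10))   # flee
```

Add more rules and more state and the foe looks cleverer, but the character of the thing never changes: every response was written out in advance.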

"Speak - it is our only hope."

Computers are digital; human minds are analogue. This has been stated many times, and presented as pure fact by far too many to be untrue. Roughly speaking, an analogue quantity varies continuously – a smooth range of possibilities – while a digital quantity takes only discrete states, and in binary computing just two: on or off. Many insist that the difference between analogue and digital is best expressed as the difference between a dimmer switch and a common wall switch. Computers, of course, speak binary – zeros and ones, the language of a two-state digital system. Humans can speak all manner of languages, pitches, and so forth: there is an upper and a lower limit, but within them we can sweep the whole range. Digital and analogue, yes?

No, not really.

Look closer at the two-state system of computers and you find that 5 volts or 12 volts or some such must be maintained. A transistor will pass current at whatever level its input voltage dictates, so an artificial constraint ensures that only one of two voltages – high or low – is ever applied to the switch. The logic the technology relies on requires that these zeros and ones be maintained through force – in some regards, the system would much rather be a dimmer switch. The power feeding it, after all, arrives as alternating current, which does not sit still: it swings between one extreme and the other sixty times a second. Before modern microchips can do anything with it, that untamed wave has to be rectified, clipped, and strapped down into steady voltage levels that translate to on or off.
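The "strapping down" can be sketched numerically (an illustrative Python toy, not a circuit simulation; the 2.5 V threshold and 0/5 V levels are assumed for the example): a smoothly varying waveform is forced to one of two logic levels by thresholding.

```python
import math

def to_logic_level(voltage, threshold=2.5, high=5.0, low=0.0):
    """Force an arbitrary analogue voltage onto one of two logic levels."""
    return high if voltage >= threshold else low

# A smoothly varying "analogue" wave, sampled at a few points...
samples = [2.5 + 2.5 * math.sin(2 * math.pi * t / 8) for t in range(8)]

# ...becomes a train of hard 0 V / 5 V values: the dimmer switch
# has been forced to behave like a wall switch.
digital = [to_logic_level(v) for v in samples]
print(digital)
```

Everything in between the two levels is simply thrown away – which is exactly the constraint the paragraph above describes.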

The human mind, then, is purely analogue, yes? No, to that as well. The mind, like a computer, is made up of components that react to electrical signals. Although it is not a neat and clean analogy, consider the neuron to be the transistor of the mind. Each neuron is either on or off – a neuron does not fire half-way or part-way – so it would seem the transistor, which left to itself will pass current at any level, actually holds the analogue advantage. With a transistor, the input "switch" must be artificially held at either a high or a low voltage to produce binary operation; the neuron offers only two states to begin with. The analogue effect that we experience is achieved through the quantity of neurons that fire in response to an event.
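That closing idea – an analogue experience built from all-or-nothing units – can be sketched as a crude population code (a loose Python illustration under simplified assumptions; real neurons are far messier than this): each "neuron" either fires or stays silent, yet the count of firing neurons tracks a smooth stimulus.

```python
import random

random.seed(1)

# 1000 binary "neurons", each with its own firing threshold.
thresholds = [random.uniform(0.0, 1.0) for _ in range(1000)]

def population_response(stimulus):
    """Each neuron fires (1) or stays silent (0); the analogue-looking
    quantity is simply how many of them fired."""
    return sum(1 for t in thresholds if stimulus >= t)

# A smoothly increasing stimulus yields a smoothly increasing count,
# even though every individual neuron is strictly on/off.
for s in (0.1, 0.5, 0.9):
    print(s, population_response(s))
```

No single unit is analogue; the smoothness lives entirely in the aggregate – which is the point being made about the mind.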

Keep that in mind, we're going to jump around a bit.

~ ~ ~

Reference Links

1. Neville Holmes – Artificial Intelligence: Arrogance or Ignorance? – IEEE Computer Magazine – November 2003 (Vol. 36 No. 11) – pp. 120, 118-119 – The Profession


~ Questions and discussion welcome! ~

~ ~ ~


Feb. 19th, 2006 02:15 pm (UTC)
Interesting. We looked at this kind of thing in my one philosophy course, Mind and Body, albeit in a more primitive form since it was ten years ago and computers were advanced to the 286/386 level.

I'll wait to see where you're going before I try to make any comments.
Feb. 19th, 2006 03:02 pm (UTC)
It does ramble around quite a bit, to things that might seem unrelated, but there is a core premise, no matter how much I tried to hide it. What can I say, I've always been a fan of James Burke and his Connections series, partially because I tend to think in terms of how seemingly separate concepts and ideas are actually related.

If you come up with anything for the individual bits and pieces along the way, feel free to go off on tangents - in fact, I'd appreciate it! I'm including links to all the research involved, although ones that point to certain articles may have dissolved, since this piece is two years old, and further exploration is welcome.

As for comments on the overall piece, probably best to wait for the last part, or, if needed, I'll start a separate thread for that discussion once I've posted all of it.