The basic idea behind the (Technological) Singularity is that sometime in the future, humans will either create an Artificial Intelligence (AI) that surpasses human intelligence, even if only slightly, or use technology to amplify our own intelligence. Either way, proponents say, the result will be an "intelligence explosion": once it/we can make itself/ourselves smarter, it/we will be smart enough to do so even faster and better than before, in a positive feedback loop.
It's a really interesting idea and one that I subscribed to for a few years. One of my favorite pieces of fiction, Accelerando (which you can download for free from that site), shows a possible (and entertaining!) future based on it: humans making temporary copies or "ghosts" of themselves in cyberspace to truly multitask; the question of whether a hyperintelligent AI will have the same goals as the humans that created it; what happens when one of your ghosts decides that it is different enough from you to become its own person; and a whole slew of other issues that never existed before.
You may have noted that I wrote "subscribed," past tense. A primary reason for this is that until recently I had only looked at it from a technological perspective. When you're immersed in new tech all the time, it's very easy to believe that no limit will remain insurmountable. We're pretty smart creatures, and when we find something that we can't break through, we are usually sneaky enough to find a way around it instead. There is also a huge amount of optimism in AI research, especially when talking about neural networks and the possibility, if Moore's Law remains in effect a while longer, of running a neural network with a number of neurons and connections on the order of magnitude of the human brain. Proposed advances in medicine are making the idea of non-destructively mapping the brain's neurons increasingly plausible, so it should be possible within ten or twenty years to simulate a human brain, albeit slowly. But is that really the case, or are Singularity proponents making some assumptions that they really shouldn't be?
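To give a feel for where that "ten or twenty years" figure comes from, here is a back-of-envelope sketch. Every number in it is an assumption on my part (commonly cited, but far from settled): the neuron and synapse counts, the update rate, the cost per synaptic event, the starting hardware, and the doubling period.

```python
import math

# Back-of-envelope estimate. Every constant below is an assumption, not a measurement.
NEURONS = 86e9              # commonly cited neuron count for the human brain
SYNAPSES_PER_NEURON = 1e4   # rough average; published estimates vary widely
UPDATE_RATE_HZ = 100        # assumed average update rate per synapse
FLOPS_PER_EVENT = 10        # assumed cost of one synaptic update

required = NEURONS * SYNAPSES_PER_NEURON * UPDATE_RATE_HZ * FLOPS_PER_EVENT

machine = 1e12              # hypothetical starting machine: 1 teraFLOP/s
DOUBLING_YEARS = 1.5        # Moore's-Law-style doubling period (assumption)

def years_until(slowdown=1):
    """Years of doubling until the machine can run the brain at 1/slowdown speed."""
    return DOUBLING_YEARS * math.log2(required / slowdown / machine)

print(f"required for real time: {required:.1e} FLOP/s")
print(f"real time:      ~{years_until(1):.0f} years away")
print(f"1000x slowdown: ~{years_until(1000):.0f} years away")
```

With these assumptions, a real-time simulation is about thirty years of doubling away, but a thousand-fold-slower one (the "albeit slowly" case) lands right in that ten-to-twenty-year window. Change any assumption and the answer moves, which is rather the point.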
Of course they are! I wouldn't be writing this otherwise! I should preface the following by saying that three books were influential in my revision of my views. The first was Hofstadter's Gödel, Escher, Bach, and the second The Emperor's New Mind by Roger Penrose. Both deal with the non-algorithmic nature of consciousness, and while it's been some time since I last read them, their general ideas have certainly influenced me. The third, the one that gave me a better understanding of consciousness from a psychological perspective, is Jacques Barzun's A Stroll with William James. While James's ideas originate way back in the late 19th century, psychology has found them to be very sound: most have been barely modified or even left unchanged for over a century now. (As James himself would say, that does not mean they are guaranteed to be correct, but their longevity and the lack of alternatives in the interim lend them a great deal of weight.)
The first assumption made by Singularity proponents is that consciousness can arise or reside in a neural network. The fact is that while we know that neurons are the primary cells of the brain and that they are instrumental in the formation of consciousness, there's a whole lot of other stuff going on in your me-jelly besides neurons zapping each other. Often the chemical part of "electrochemical" is ignored entirely, even though every neuron firing is augmented or diminished by the particular makeup of the local chemical soup it resides in at any given moment. That makeup is itself a result of activity in various glands located throughout the body, which in turn are regulated (well, mostly) by the brain...yet another feedback loop. The existence of even deeper levels of complexity such as this likely means that in order to create a consciousness by computer, we would need to simulate all those chemical reactions in addition to neuronal activity. I believe that it would be necessary to perform the actual simulation rather than fudging the numbers because of the nature of chaotic systems: a minute change can often have large, unanticipated effects down the road (incidentally, this is why it is a very bad idea to attempt to control the weather).
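That sensitivity is easy to demonstrate. The sketch below uses the logistic map, a standard toy chaotic system (emphatically not a brain model, just the simplest illustration I know of): two trajectories that start a trillionth apart agree for a while, then bear no resemblance to each other.

```python
# Logistic map in its chaotic regime: the textbook example of a chaotic system.
# Two runs differing by one part in a trillion diverge completely within ~50 steps,
# which is why "fudging the numbers" in a chaotic simulation drifts off course.
r = 4.0                    # parameter value in the fully chaotic regime
x, y = 0.4, 0.4 + 1e-12    # identical starts except for a tiny nudge

for step in range(1, 61):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: x={x:.6f}  y={y:.6f}  gap={abs(x - y):.1e}")
```

The gap roughly doubles every step until it saturates at order one; no amount of clever approximation survives that kind of growth.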
The second assumption concerns the nature of I/O: input and output. This presents, in my opinion, even larger problems than modeling the intricacies of electrochemical reactions; after all, those can theoretically be brute-forced. Suppose for a moment that a consciousness could be constructed out of a sufficient number of neurons alone. How would we communicate with it, and even more fundamentally, how would we know that what we are simulating is a conscious entity and not merely a non-intelligent chaotic system like the weather? We would have to be able to ask it things, or at least show it things, and in return get some sort of response. Now, the response could be as simple as looking for specific changes in network activity after showing it some image a number of times, but how would we go about showing an image to something that exists only as a software construct?
We would presumably have to hook up a video camera and maybe a few microphones. This presents a different set of challenges for either a homegrown AI or an uploaded human brain. The homegrown AI would have to be trained to see things (more importantly, to recognize them) in its incoming video stream. This would require an enormous amount of training, assuming you could get the AI to treat the video stream as something other than noise in the first place. Given that you would presumably be working with digital video, given how little progress we've made in writing conventional software to perform such a task, and given how poorly our own visual processing is understood, this seems implausible. It's even harder for an uploaded human intelligence: we already have visual processing pathways, but hacking them to receive digital information instead of analog strikes me as being at least as difficult as creating an artificial eye.
The third assumption, and the final one I'll tackle because like most people I like threes, is that consciousness is sustainable on the extremely limited sensory input (compared to that of any living thing) available to electronic devices. To understand why I believe this is a problem, you have to understand consciousness as a process inseparable from its environment. We are constantly bombarded by the external world: sights, sounds, smells; the pressure of clothes against our skin (or not!); heat and cold; and so on. The body itself provides even more stimuli: hunger, discomfort, pain, pleasure, movement, excitement, and all of the other body states that we call emotion, along with everything managed by the autonomic nervous system (heart rate, breathing, etc.). I believe that this vast amount of physical input is one requirement of consciousness. I have no proof for this belief; it is my intuition. What would happen if we were to put a clear plastic box around a hurricane? Separated from the continual feed of energy from the ocean system that spawned it, it would quickly lose its structure and begin to dissipate. My theory is that consciousness is a similarly complex phenomenon, one that requires a vast amount of input to sustain itself.
A human in a sensory deprivation chamber still has the innumerable processes and sensations of his body feeding his mind, providing a respite from most of the external world but in no way cutting off the flow. Even this isolated state cannot be endured by most people for very long: there is a reason that solitary confinement is a worse punishment than ordinary imprisonment. Try to imagine, then, what it would be like to exist with less than a thousandth of your normal capacity for sensation. I don't think that a man-made sensory apparatus whose capacity is two or more orders of magnitude below a human's could ever achieve consciousness, and if one somehow did, I think it would very quickly lose its tenuous connection to the outside world.
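To put a (very) rough number on that gap, here is a crude tally of sensory channels. Every figure is an assumption: receptor counts pulled from common physiology estimates, plus a deliberately generous camera-and-microphones rig. The point is the order of magnitude of the ratio, not the digits.

```python
# Crude sensory-channel tally. All counts are rough assumptions for illustration,
# and the comparison charitably treats a raw camera pixel as equal to a receptor.
human = {
    "photoreceptors (both eyes)":       2 * 1.3e8,  # commonly cited estimate
    "cochlear hair cells (both ears)":  2 * 1.6e4,
    "skin mechanoreceptors":            5e6,        # estimates vary widely
    "olfactory receptor neurons":       1e7,
    "proprioceptors and interoceptors": 1e6,        # outright guess
}
machine = {
    "camera pixels (640x480)": 640 * 480,
    "microphone channels":     2,
}

h, m = sum(human.values()), sum(machine.values())
print(f"human:   ~{h:.1e} channels")
print(f"machine: ~{m:.1e} channels")
print(f"ratio:   ~{h / m:,.0f}x (nearly three orders of magnitude)")
```

By this admittedly loaded accounting, the machine is short by a factor of nearly a thousand, and the comparison still flatters the machine, since a raw pixel carries none of the preprocessing a retinal signal does.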
I've been wrong before. I'll be wrong again. I have no desire to see AI research stopped or creative thought stifled; I am simply stating my belief that the Singularity, as cool as it sounds, is not something that will become manifest. One thing I forgot to mention is that Roger Penrose and a small yet intelligent group of other folks believe that our brains may make use of quantum effects through structures known as microtubules. Confirmation of quantum effects as part of consciousness would add an entirely new level of complexity and mystery to our physical existence and would, IMO, put another nail in the coffin of the simulated human.
It is my hope, as you will see if you keep reading this thing, that science in the 21st century begins to find (more) evidence that our universe has stranger and more wonderful things available than smaller, faster, cleverer devices, and that the interest in immortality through digital reproduction will be replaced by something more profound.