Excerpt from On Intelligence

Excerpt 1: Artificial Intelligence

When I graduated from Cornell in June 1979 with a degree in electrical engineering, I didn't have any major plans for my life. I started work as an engineer at the new Intel campus in Portland, Oregon. The microcomputer industry was just starting, and Intel was at the heart of it. My job was to analyze and fix problems found by other engineers working in the field with our main product, single board computers. (Putting an entire computer on a single circuit board had only recently been made possible by Intel's invention of the microprocessor.) I published a newsletter, got to do some traveling, and had a chance to meet customers. I was young and having a good time, although I missed my college sweetheart who had taken a job in Cincinnati.

A few months later, I encountered something that was to change my life's direction. That something was the newly published September issue of Scientific American, which was dedicated entirely to the brain. It rekindled my childhood interest in brains. It was fascinating. From it I learned about the organization, development, and chemistry of the brain, neural mechanisms of vision, movement, and other specializations, and the biological basis for disorders of the mind. It was one of the best Scientific American issues of all time. Several neuroscientists I've spoken to have told me it played a significant role in their career choice, just as it did for me.

The final article, "Thinking About the Brain," was written by Francis Crick, the codiscoverer of the structure of DNA who had by then turned his talents to studying the brain. Crick argued that in spite of a steady accumulation of detailed knowledge about the brain, how the brain worked was still a profound mystery. Scientists usually don't write about what they don't know, but Crick didn't care. He was like the boy pointing to the emperor with no clothes. According to Crick, neuroscience was a lot of data without a theory. His exact words were, "what is conspicuously lacking is a broad framework of ideas." To me this was the British gentleman's way of saying, "We don't have a clue how this thing works." It was true then, and it's still true today.

Crick's words were to me a rallying call. My lifelong desire to understand brains and build intelligent machines was brought to life. Although I was barely out of college, I decided to change careers. I was going to study brains, not only to understand how they worked, but to use that knowledge as a foundation for new technologies, to build intelligent machines. It would take some time to put this plan into action.

In the spring of 1980 I transferred to Intel's Boston office to be reunited with my future wife, who was starting graduate school. I took a position teaching customers and employees how to design microprocessor-based systems. But I had my sights on a different goal: I was trying to figure out how to work on brain theory. The engineer in me realized that once we understood how brains worked, we could build them, and the natural way to build artificial brains was in silicon. I worked for the company that invented the silicon memory chip and the microprocessor; therefore, perhaps I could interest Intel in letting me spend part of my time thinking about intelligence and how we could design brainlike memory chips. I wrote a letter to Intel's chairman, Gordon Moore. The letter can be distilled to the following:

Dear Dr. Moore,
I propose that we start a research group devoted to understanding how the brain works. It can start with one person (me) and go from there. I am confident we can figure this out. It will be a big business one day.
Jeff Hawkins

Moore put me in touch with Intel's chief scientist, Ted Hoff. I flew to California to meet him and lay out my proposal for studying the brain. Hoff was famous for two things. The first, which I was aware of, was for his work in designing the first microprocessor. The second, which I was not aware of at the time, was for his work in early neural network theory. Hoff had experience with artificial neurons and some of the things you could do with them. I wasn't prepared for this. After listening to my proposal, he said he didn't believe it would be possible to figure out how the brain works in the foreseeable future, and so it didn't make sense for Intel to support me. Hoff was correct, because it is now twenty-five years later and we are just starting to make significant progress in understanding brains. Timing is everything in business. Still, at the time I was pretty disappointed.

I tend to seek the path of least friction to achieve my goals. Working on brains at Intel would have been the simplest transition. With that option eliminated I looked for the next best thing. I decided to apply to graduate school at the Massachusetts Institute of Technology, which was famous for its research on artificial intelligence and was conveniently located down the road. It seemed a great match. I had extensive training in computer science: "check." I had a desire to build intelligent machines: "check." I wanted to first study brains to see how they worked: "uh, that's a problem." This last goal, wanting to understand how brains worked, was a nonstarter in the eyes of the scientists at the MIT artificial intelligence lab.

It was like running into a brick wall. MIT was the mother-ship of artificial intelligence. At the time I applied to MIT, it was home to dozens of bright people who were enthralled with the idea of programming computers to produce intelligent behavior. To these scientists, vision, language, robotics, and mathematics were just programming problems. Computers could do anything a brain could do, and more, so why constrain your thinking by the biological messiness of nature's computer? Studying brains would limit your thinking. They believed it was better to study the ultimate limits of computation as best expressed in digital computers. Their holy grail was to write computer programs that would first match and then surpass human abilities. They took an ends-justify-the-means approach; they were not interested in how real brains worked. Some took pride in ignoring neurobiology.

This struck me as precisely the wrong way to tackle the problem. Intuitively I felt that the artificial intelligence approach would not only fail to create programs that do what humans can do, it would not teach us what intelligence is. Computers and brains are built on completely different principles. One is programmed, one is self-learning. One has to be perfect to work at all, one is naturally flexible and tolerant of failures. One has a central processor, one has no centralized control. The list of differences goes on and on. The biggest reason I thought computers would not be intelligent is that I understood how computers worked, down to the level of the transistor physics, and this knowledge gave me a strong intuitive sense that brains and computers were fundamentally different. I couldn't prove it, but I knew it as much as one can intuitively know anything. Ultimately, I reasoned, AI might lead to useful products, but it wasn't going to build truly intelligent machines.

In contrast, I wanted to understand real intelligence and perception, to study brain physiology and anatomy, to meet Francis Crick's challenge and come up with a broad framework for how the brain worked. I set my sights in particular on the neocortex, the most recently developed part of the mammalian brain and the seat of intelligence. After we understood how the neocortex worked, we could go about building intelligent machines, but not before.

Unfortunately, the professors and students I met at MIT did not share my interests. They didn't believe that you needed to study real brains to understand intelligence and build intelligent machines. They told me so. In 1981 the university rejected my application.

* * *

Many people today believe that AI is alive and well and just waiting for enough computing power to deliver on its many promises. When computers have sufficient memory and processing power, the thinking goes, AI programmers will be able to make intelligent machines. I disagree. AI suffers from a fundamental flaw in that it fails to adequately address what intelligence is or what it means to understand something. A brief look at the history of AI and the tenets on which it was built will explain how the field has gone off course.

The AI approach was born with the digital computer. A key figure in the early AI movement was the English mathematician Alan Turing, who was one of the inventors of the idea of the general-purpose computer. His masterstroke was to formally demonstrate the concept of universal computation: that is, all computers are fundamentally equivalent regardless of the details of how they are built. As part of his proof, he conceived an imaginary machine with three essential parts: a processing box, a paper tape, and a device that reads and writes marks on the tape as it moves back and forth. The tape was for storing information, like the famous 1's and 0's of computer code (this was before the invention of memory chips or the disk drive, so Turing imagined paper tape for storage). The box, which today we call a central processing unit (CPU), follows a set of fixed rules for reading and editing the information on the tape. Turing proved, mathematically, that if you choose the right set of rules for the CPU and give it an indefinitely long tape to work with, it can perform any definable set of operations in the universe. It would be one of many equivalent machines now called Universal Turing Machines. Whether the problem is to compute square roots, calculate ballistic trajectories, play games, edit pictures, or reconcile bank transactions, it is all 1's and 0's underneath, and any Turing Machine can be programmed to handle it. Information processing is information processing is information processing. All digital computers are logically equivalent.
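The machine Turing described can be sketched in a few lines of code. The sketch below is a minimal simulator, assuming a sparse tape of symbols and a rule table that maps (state, symbol) to (new symbol, head move, new state); the particular rule table shown, a unary incrementer, is a hypothetical example chosen for illustration, not anything from the book.

```python
# Minimal Turing-machine sketch: a tape, a read/write head, and a fixed rule table.
# The rule table used at the bottom (a unary incrementer) is hypothetical.

def run_turing_machine(rules, tape, state="start", blank="_", max_steps=1000):
    """Simulate a one-tape Turing machine.

    rules maps (state, symbol) -> (new_symbol, move, new_state),
    where move is -1 (left) or +1 (right)."""
    cells = dict(enumerate(tape))        # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        new_symbol, move, state = rules[(state, symbol)]
        cells[head] = new_symbol
        head += move
    # Render the visited portion of the tape back into a string
    lo, hi = min(cells), max(cells)
    return "".join(cells.get(i, blank) for i in range(lo, hi + 1))

# Hypothetical rule table: append one '1' to a block of 1's (unary increment).
rules = {
    ("start", "1"): ("1", +1, "start"),  # skip over the existing 1's
    ("start", "_"): ("1", +1, "halt"),   # write a new 1 at the end, then halt
}

print(run_turing_machine(rules, "111"))  # -> "1111"
```

Swapping in a different rule table changes what the machine computes; the simulator itself never changes, which is the sense in which all such machines are equivalent.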

Turing's conclusion was indisputably true and phenomenally fruitful. The computer revolution and all its products are built on it. Then Turing turned to the question of how to build an intelligent machine. He felt computers could be intelligent, but he didn't want to get into arguments about whether this was possible or not. Nor did he think he could define intelligence formally, so he didn't even try. Instead, he proposed an existence proof for intelligence, the famous Turing Test: if a computer can fool a human interrogator into thinking that it too is a person, then by definition the computer must be intelligent. And so, with the Turing Test as his measuring stick and the Turing Machine as his medium, Turing helped launch the field of AI. Its central dogma: the brain is just another kind of computer. It doesn't matter how you design an artificially intelligent system, it just has to produce humanlike behavior.

The AI proponents saw parallels between computation and thinking. They said, "Look, the most impressive feats of human intelligence clearly involve the manipulation of abstract symbols, and that's what computers do too. What do we do when we speak or listen? We manipulate mental symbols called words, using well-defined rules of grammar. What do we do when we play chess? We use mental symbols that represent the properties and locations of the various pieces. What do we do when we see? We use mental symbols to represent objects, their positions, their names, and other properties. Sure, people do all this with brains and not with the kinds of computers we build, but Turing has shown that it doesn't matter how you implement or manipulate the symbols. You can do it with an assembly of cogs and gears, with a system of electronic switches, or with the brain's network of neurons; whatever the medium, as long as it can realize the functional equivalent of a Universal Turing Machine."

This assumption was bolstered by an influential scientific paper published in 1943 by the neurophysiologist Warren McCulloch and the mathematician Walter Pitts. They described how neurons could perform digital functions; that is, how nerve cells could conceivably replicate the formal logic at the heart of computers. The idea was that neurons could act as what engineers call logic gates. Logic gates implement simple logical operations such as AND, NOT, and OR. Computer chips are composed of millions of logic gates all wired together into precise, complicated circuits. A CPU is just a collection of logic gates.

McCulloch and Pitts pointed out that neurons could also be connected together in precise ways to perform logic functions. Since neurons gather input from each other and process those inputs to decide whether to fire off an output, it was conceivable that neurons might be living logic gates. Thus, they inferred, the brain could conceivably be built out of AND-gates, OR-gates, and other logic elements all built with neurons, in direct analogy with the wiring of digital electronic circuits. It isn't clear whether McCulloch and Pitts actually believed the brain worked this way; they only said it was possible. And, logically speaking, this view of neurons is possible. Neurons can, in theory, implement digital functions. However, no one bothered to ask if that was how neurons actually were wired in the brain. They took it as proof, irrespective of the lack of biological evidence, that brains were just another kind of computer.
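To see the McCulloch-Pitts idea concretely, here is a minimal sketch: a neuron modeled as a threshold unit over binary inputs. The weights and thresholds are the usual textbook choices, not biological measurements, and the point is only that such units can reproduce AND, OR, and NOT and can be wired together like a small circuit.

```python
# A McCulloch-Pitts-style threshold unit: fire (output 1) when the weighted
# sum of binary inputs reaches the threshold. Weights and thresholds below
# are illustrative textbook choices, not claims about real neurons.

def mp_neuron(inputs, weights, threshold):
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

def AND(a, b):
    return mp_neuron([a, b], [1, 1], threshold=2)   # fires only if both inputs fire

def OR(a, b):
    return mp_neuron([a, b], [1, 1], threshold=1)   # fires if either input fires

def NOT(a):
    return mp_neuron([a], [-1], threshold=0)        # an active input suppresses firing

# Units wired together like logic gates: XOR = (a OR b) AND NOT(a AND b)
def XOR(a, b):
    return AND(OR(a, b), NOT(AND(a, b)))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", XOR(a, b))   # prints the XOR truth table
```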

It's also worth noting that AI philosophy was buttressed by the dominant trend in psychology during the first half of the twentieth century, called behaviorism. The behaviorists believed that it was not possible to know what goes on inside the brain, which they called an impenetrable black box. But one could observe and measure an animal's environment and its behaviors: what it senses and what it does, its inputs and its outputs. They conceded that the brain contained reflex mechanisms that could be used to condition an animal into adopting new behaviors through rewards and punishments. But other than this, one did not need to study the brain, especially messy subjective feelings such as hunger, fear, or what it means to understand something. Needless to say, this research philosophy eventually withered away throughout the second half of the twentieth century, but AI would stick around a lot longer.

As World War II ended and electronic digital computers became available for broader applications, the pioneers of AI rolled up their sleeves and began programming. Language translation? Easy! It's a kind of code breaking. We just need to map each symbol in System A onto its counterpart in System B. Vision? That looks easy too. We already know geometric theorems that deal with rotation, scale, and displacement, and we can easily encode them as computer algorithms, so we're halfway there. AI pundits made grand claims about how quickly computer intelligence would first match and then surpass human intelligence.

Ironically, the computer program that came closest to passing the Turing Test, a program called Eliza, mimicked a psychoanalyst, rephrasing your questions back at you. For example, if a person typed in, "My boyfriend and I don't talk anymore," Eliza might say, "Tell me more about your boyfriend" or "Why do you think your boyfriend and you don't talk anymore?" Designed as a joke, the program actually fooled some people, even though it was dumb and trivial. More serious efforts included programs such as Blocks World, a simulated room containing blocks of different colors and shapes. You could pose questions to Blocks World such as "Is there a green pyramid on top of the big red cube?" or "Move the blue cube on top of the little red cube." The program would answer your question or try to do what you asked. It was all simulated, and it worked. But it was limited to its own highly artificial world of blocks. Programmers couldn't generalize it to do anything useful.
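The rephrasing trick Eliza relied on takes only a few lines to convey. The sketch below is not Weizenbaum's actual program or script; it is a hypothetical pair of pattern-and-response rules that turn fragments of the user's input back into questions, which is essentially all the "understanding" Eliza had.

```python
import re

# Toy ELIZA-style exchange: match a keyword pattern in the input and rephrase
# it back as a question. These two rules are illustrative, not the real script.
RULES = [
    (re.compile(r"my (\w+) and i don'?t (\w+) anymore", re.I),
     "Why do you think your {0} and you don't {1} anymore?"),
    (re.compile(r"my (\w+)", re.I),
     "Tell me more about your {0}."),
]

def eliza_reply(text):
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(*match.groups())
    return "Please go on."   # default deflection when nothing matches

print(eliza_reply("My boyfriend and I don't talk anymore"))
# -> "Why do you think your boyfriend and you don't talk anymore?"
```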

The public, meanwhile, was impressed by a continuous stream of seeming successes and news stories about AI technology. One program that generated initial excitement was able to prove mathematical theorems. Ever since Plato, multistep deductive inference has been seen as the pinnacle of human intelligence, so at first it seemed that AI had hit the jackpot. But, like Blocks World, it turned out the program was limited. It could prove only very simple theorems, which were already known. Then there was a large stir about "expert systems," databases of facts that could answer questions posed by human users. For example, a medical expert system might be able to diagnose a patient's disease if given a list of symptoms. But again, they turned out to be of limited use and didn't exhibit anything close to generalized intelligence. Computers could play checkers at expert skill levels, and eventually IBM's Deep Blue famously beat Garry Kasparov, the world chess champion, at his own game. But these successes were hollow. Deep Blue didn't win by being smarter than a human; it won by being millions of times faster than a human. Deep Blue had no intuition. An expert human player looks at a board position and immediately sees what areas of play are most likely to be fruitful or dangerous, whereas a computer has no innate sense of what is important and must explore many more options. Deep Blue also had no sense of the history of the game, and didn't know anything about its opponent. It played chess yet didn't understand chess, in the same way that a calculator performs arithmetic but doesn't understand mathematics.
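The contrast between intuition and sheer exploration is easiest to see in a toy setting. The sketch below plays a miniature game (Nim, not chess, and far simpler than anything Deep Blue did) by exhaustively examining every legal continuation; it has no notion of which moves are promising, it simply searches them all, which is the brute-force style of play described above in miniature.

```python
# Exhaustive game-tree search over tiny Nim positions (last stone taken wins).
# The program never judges which moves "matter"; it checks every continuation.

from functools import lru_cache

def moves(piles):
    """Every legal move: take 1..n stones from some pile."""
    for i, n in enumerate(piles):
        for take in range(1, n + 1):
            yield tuple(p - take if j == i else p for j, p in enumerate(piles))

@lru_cache(maxsize=None)
def wins(piles):
    """True if the player to move can force a win from this position."""
    return any(not wins(child) for child in moves(piles))

print(wins((3, 4, 5)))   # True: the side to move has a forced win
```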

Copyright © 2004 by Jeff Hawkins and Sandra Blakeslee
