Friday, October 15, 2010
Current Event: The Semantic Computer System
This current event revolves around how A.I. is starting to get over one of its biggest barriers: understanding language. Semantics is the study of what language actually means, of the background knowledge behind the context of words. A team of researchers at Carnegie Mellon University, headed by the computer scientist Tom M. Mitchell, has been working on getting a computer system to understand semantics and so become more “human.” They have a computer in Pittsburgh running 24 hours a day, seven days a week, surfing the web to teach itself how to understand words rather than just regurgitate them. The system is called N.E.L.L, which stands for Never-Ending Language Learning system.

N.E.L.L works by scanning millions of web pages for text patterns in order to learn facts, and so far its accuracy is estimated at 87 percent. It makes sense of semantics by grouping facts into categories, 280 of them, such as animals, car models, states, and companies. N.E.L.L figures out facts by working out how two different categories are related: the Mustang is a car (one category) and Ford is a company (another category). By scanning text patterns and weighing the probability behind keyword matches, N.E.L.L concludes that the Mustang is a car built by Ford. “Built by” is the relationship between the two categories, and there are 280 different types of relations. N.E.L.L finds these relationships by looking for patterns and correlations and by using programs that apply rules, for example recognizing that Microsoft is the unique name of a company and not a general word like “bus.” It is interesting to note that both the categories and the relationships between them keep expanding as N.E.L.L searches the web for more facts and information.

As N.E.L.L learns a new fact, it is stored in its “knowledge base,” the researchers’ term for its growing store of facts. As that base grows larger, N.E.L.L actually becomes more efficient at learning new facts, because each new fact helps refine its learning algorithms. And because the Web is such a rich source of material to learn from, it helps computers become more “human” faster.

This would directly help people once semantics becomes part of everyday software and search engines, since people could interact with computers more naturally and computers would understand what the user is trying to get at. For example, instead of typing keywords into a search engine to figure out what is wrong with one’s television, one could ask a question like “What is wrong with my Bravia LCD TV? It has a big red spot in the middle of the screen,” and the results could be refined accordingly. Further down the road there could even be software that acts like a personal assistant, helping with one’s day-to-day routines.

N.E.L.L is not the only attempt, nor the first, at getting semantics right with computers; companies like IBM and Google are trying to do the same. What makes N.E.L.L different is that it is almost completely automated, while other programs require more human supervision and hand programming, which takes more time. Its learning system also follows a hierarchy of rules to help resolve the ambiguity in words that generally stumps other semantic programs.
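To make the category-and-relation idea a little more concrete, here is a toy sketch in Python of how pattern-based fact extraction with a confidence threshold could look. To be clear, this is not N.E.L.L’s actual code: the category lists, the patterns, the example sentences, and the threshold are all made up for illustration.

# Toy sketch, not N.E.L.L's real machinery: seed categories, one relation,
# and a few text patterns; a candidate fact is promoted to the knowledge
# base only when enough patterns agree. Everything here is invented.

# Seed knowledge: category membership the system already trusts.
categories = {
    "car": {"Mustang", "Civic"},
    "company": {"Ford", "Honda"},
}

# Text patterns that hint at the "built by" relation between a car and a company.
built_by_patterns = [
    "{car} is built by {company}",
    "{company} makes the {car}",
    "the {car}, a {company} vehicle",
]

# Pretend these sentences were pulled from web pages.
sentences = [
    "The Mustang is built by Ford and has a long history.",
    "Ford makes the Mustang at its Michigan plant.",
    "Reviewers liked the Mustang, a Ford vehicle, for its price.",
]

def count_pattern_hits(car, company, texts):
    """Count how many known patterns appear for this (car, company) pair."""
    hits = 0
    for text in texts:
        for pattern in built_by_patterns:
            phrase = pattern.format(car=car, company=company)
            if phrase.lower() in text.lower():
                hits += 1
    return hits

# Promote a candidate fact only if several independent patterns support it.
CONFIDENCE_THRESHOLD = 2
knowledge_base = []
for car in categories["car"]:
    for company in categories["company"]:
        hits = count_pattern_hits(car, company, sentences)
        if hits >= CONFIDENCE_THRESHOLD:
            knowledge_base.append((car, "built_by", company, hits))

print(knowledge_base)  # [('Mustang', 'built_by', 'Ford', 3)]

The real system obviously works at a vastly larger scale and with far more sophisticated statistics, but the basic loop of scan text, match patterns, and promote high-confidence facts into a knowledge base is the idea the article describes.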
N.E.L.L was also designed to learn hundreds of things at once, because the more it takes in, the easier it is for it to self-correct a mistake: the more two things differ from each other, the easier it is for the program to tell them apart, which is why it takes in so much information at the same time (a rough sketch of this idea appears at the bottom of this post). Teaching computers semantics takes time, though, and even N.E.L.L needs occasional assistance, because it is the background knowledge behind words that ties up systems like it. The NY Times article gives an example: “When Dr. Mitchell scanned the ‘baked goods’ category recently, he noticed a clear pattern. N.E.L.L was at first quite accurate, easily identifying all kinds of pies, breads, cakes, and cookies as baked goods. But things went awry after N.E.L.L’s noun-phrase classifier decided ‘Internet cookies’ was a baked good.” Its database related to baked goods or the Internet apparently lacked the knowledge to correct the mistake.

What is interesting about this program is that it is really trying to get artificial intelligence over a major hurdle: getting A.I. to understand what language does and means. All of this information was taken from the NY Times technology section, from the article “Aiming to Learn as We Do, a Machine Teaches Itself” by Steve Lohr. I felt this current event was interesting because it had me wondering how close we are to A.I. being a consistent part of society. All I can say is, with the way we are so dependent on technology already, I wonder how dependent we will be when artificial intelligence is more of an expert on specific topics than the experts are. As cool as a search engine that understands whole questions rather than keywords would be, I do not think I would be thrilled with robots running around in the near future doing one’s house chores while wondering to themselves, “If I am smarter than this organism, why am I cleaning up after it? This should be the other way around.” Obviously that is a bit extreme, but you never know: what if we do end up building Terminators, with Skynet going all crazy?
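Here is that rough sketch of the “learn many categories at once” idea. One way to think about it is as a mutual-exclusion check: if a candidate phrase like “Internet cookie” also scores highly for a conflicting category, the system can hold it back instead of silently filing it under baked goods. Again, the scores, category names, and threshold below are invented for illustration and are not taken from N.E.L.L.

# A made-up illustration (not N.E.L.L's real classifier) of why learning
# many categories at once helps: a candidate that also fits a mutually
# exclusive category gets flagged instead of silently accepted.

# Evidence scores a noun-phrase classifier might assign to a candidate phrase.
# The numbers are invented purely for the example.
candidate_scores = {
    "apple pie":       {"baked_good": 0.92, "web_term": 0.01},
    "internet cookie": {"baked_good": 0.55, "web_term": 0.80},
}

# Categories declared mutually exclusive: nothing should belong to both.
mutually_exclusive = [("baked_good", "web_term")]

def accept(phrase, category, scores, threshold=0.5):
    """Accept a phrase into a category only if no exclusive rival also fires."""
    if scores[phrase][category] < threshold:
        return False
    for a, b in mutually_exclusive:
        rival = b if category == a else a if category == b else None
        if rival and scores[phrase][rival] >= threshold:
            return False  # conflicting evidence: hold the fact for review
    return True

print(accept("apple pie", "baked_good", candidate_scores))        # True
print(accept("internet cookie", "baked_good", candidate_scores))  # False

The point is simply that learning something like “web term” alongside “baked good” gives the program a way to notice the conflict at all, which a system learning one category in isolation could not do.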