Artificial intelligence is the science and engineering of creating intelligent computers and machines. The goal is to design an artificial entity, especially a computer, that replicates and communicates human knowledge to the highest degree possible. To comprehend artificial intelligence, it is first necessary to define intelligence itself. Intelligence is the ability to achieve results: to think out a plan, carry it through, and make something happen.
Can a machine think? No, but it can be designed to appear as though it does. The object of artificial intelligence goes beyond simply replicating human intelligence; its main objective is problem solving. It stores knowledge and simulates methodical alternatives, attempt after attempt, until a solution is reached. The intelligence quotient (IQ) is not a medium for measuring AI; IQ measures human learning ability and level.
Today’s computers are not yet capable of learning. They take stored data and compute outcomes by combining that data in a myriad of variations. The advantage the computer holds over the human brain is storage and retention: a computer can record thousands upon thousands of attempts and their results, calling them all up for examination at any given moment.
The human brain can multitask only to a limited, individual capacity. It stores every experience and draws on those experiences for future decisions, but it cannot pull every experience to the frontal lobe for instantaneous examination. A computer’s ability to retain information depends on its memory space, and extending a computer’s memory is easy. If a computer and a person approach the same computational task and the person outperforms the computer, the computer is simply demonstrating a lack of proper programming.
Five thousand years ago, the Asian abacus emerged as the forerunner of today’s computer technology. In 1642, Blaise Pascal invented the first mechanical calculator. Fifty-two years later, in 1694, Gottfried Wilhelm von Leibniz expanded on the calculator, adding algorithmic computation. Much later, the genetic algorithm (GA) would emerge: a class of algorithms that derive their behavior from the evolutionary patterns of natural selection, fitness, reproduction, crossover, mutation, randomization, life and death.
Wishing to mass-produce faster than hand crafting allowed, Joseph-Marie Jacquard introduced an automated loom in 1805. In 1821, Charles Babbage won the British Astronomical Society’s first Gold Medal for his paper “Observations on the Application of Machinery to the Computation of Mathematical Tables.”
In 1842, a differential calculating machine was patented in England. Ada Lovelace, Lord Byron’s daughter, became the first computer programmer, writing programs in 1843 for an analytical engine intended one day to play chess and compose music. In 1890, Herman Hollerith, whose company later became IBM, patented an electromechanical information device that won the U.S. Census competition for its use of electricity in data processing. Alan Turing, an English mathematician, used telephone relays and electromagnetic components to help create “Robinson,” an early British codebreaking machine used to crack Germany’s military codes during WWII. In 1943, Colossus, built with some 2,000 radio vacuum tubes, replaced Robinson.
In 1941, Konrad Zuse created the first fully programmable computer. Six years later, in 1947, the invention of the transistor replaced vacuum tubes and greatly decreased computer size. In 1951, Marvin Minsky co-built a machine called SNARC that could maneuver mazes; it was the first neural-network machine. At a gathering of scientists and engineers now referred to as the 1956 Dartmouth Artificial Intelligence Conference, the mathematician John McCarthy coined the term “artificial intelligence.”
In the 1960s, Daniel G. Bobrow created a program called STUDENT, which added algebra to a computer’s capabilities. Edward A. Feigenbaum followed with a program for chemistry. Video game arcades of the 1970s introduced AI to the American public. In the 1980s, Atari introduced handheld devices, and IBM introduced America to the personal computer (PC). In 1993, the first popular graphical web browser, Mosaic, was developed. Then in 1997, a supercomputer called Deep Blue defeated Garry Kasparov, a world-champion chess player.
Today, non-information industries are shrinking, a trend we have watched over the last six decades. Where mom-and-pop stores once dominated the grocery business, today there are supermarkets. You can pull into one parking lot where a single provider offers gasoline, groceries, beauty aids, medicine, flowers, perfume, magazines, toys, apparel, fast food, photography, sporting goods, firearms, arts and crafts, footwear, electronics, and more. It is easy to grasp the concept of one-stop shopping; it is just one more step to grasp AI and a supercomputer offering unlimited solutions for the world’s needs and desires.
AI has been researched for the last sixty or so years in the U.S., Europe, and Asia, especially in Greece, Germany, China, Japan, France, Sweden, and Iran. Some credit Alan Turing with bringing it to public attention in a 1947 lecture, in which he argued that creating AI would require computational devices, not just ordinary machinery. Put in non-technical language: a tractor pulling a sharpened, spinning blade can cut grass, the first step in the process of storing hay. That is a machine with an attachment. A computational device, by contrast, is needed to store and analyze data. Unless programmed to do so, the blade cannot report the speed at which it turned or how thick each blade of grass was; the tractor cannot report the fuel it used or how many men the mowing took. Neither can provide data on the time it took to mow a given area or how many bales of hay the mowed area produced. That type of data requires a computer program.
In his 1950 article “Computing Machinery and Intelligence,” Alan Turing claimed that if a machine could present to a learned individual as displaying human-like thought, it was an intelligent machine. Many philosophers labeled the Turing test one-sided. Most agreed that such a machine displayed intelligence, but that was not sufficient to satisfy the majority of scientists, who argued that the Turing test proves only intelligence as measured by the individual observer, and that is not what the search for an intelligent machine requires.
For sixty years, researchers have been trying to reduce computer size, improve speed, and design a computer that can learn as a human child learns. The maneuverability and ease of use brought about by size reduction are outstanding. Speed is good, but computers designed to function at speeds beyond human comprehension are just speedy computers, still bound by the data programmed into them. Will a computer ever reach a level where it can learn and improve itself? Probably, but for now the limiting factor is the programming: human programmers can only input data they carry within themselves or can obtain from other humans. As of 2010, no one has successfully built a computer capable of replicating human learning.
The U.S. government has been the major supporter of AI. The Defense Advanced Research Projects Agency (DARPA) was established in 1958 in response to the launch of Sputnik, the first artificial satellite. During the Gulf War, DARPA used AI to schedule military units in the Middle East, and over the last fifty years the military has used AI to identify enemy aircraft and weapons and to target objects at long distance.
The airline industry has long used AI for autopilot maneuvering. Developed by the Sperry Corporation, the autopilot controls hydraulically operated rudders, elevators, and ailerons. This system has brought a tremendous reduction in pilot error during controlled takeoff, ascent, leveling, approach, and landing.
Nanotechnology is a major player in the global economy and holds a vital place in the overall AI picture. The world stands on the edge of a nanotechnology revolution, and because of the scale involved (a nanometer is one billionth of a meter) it will in reality be an invisible revolution. Richard Feynman, the Nobel Prize-winning physicist, theorized in 1959 that matter could be manipulated at the atomic scale. Nanotechnology research called for new tools and instruments capable of working with matter at such a small scale; one such instrument, the scanning probe microscope (SPM), was constructed in 1981. On December 3, 2003, President George W. Bush signed the 21st Century Nanotechnology Research and Development Act, authorizing $3.7 billion in research spending over the following four years.
The first job of nanotechnology is improving existing products; the second is radical innovation. Today the study of nanotechnology starts in high school. The International BioGENEius Challenge invites entries from across the globe; winning entries include such genius work as John Zhou’s pathogen-detecting biosensor, which uses polymers as nano-wires. The University of California has nanotechnology centers at Berkeley, Los Angeles, and Santa Barbara, and there are also nanotechnology programs at MIT, Harvard, Cornell, Columbia, Purdue, Rensselaer Polytechnic Institute, and Rice University.
Nanotechnology education draws on ten departments: physics, engineering, applied science, chemistry, chemical biology, anthropology, philosophy, economics, religion, and mathematics. One prediction holds that nanotechnology will be a $2.6 trillion worldwide industry by 2014, employing a workforce of 2 million by 2020. Nanotechnology careers include applications engineers, patent agents, research scientists, research and development chemists, scientific drug formulators, encapsulation and micro-fabrication technicians, and biomedical micro- and nano-systems scientists.
Corporations such as IBM, Hewlett-Packard, Intel, DuPont, GE, Dow Chemical, Merck, ExxonMobil, ChevronTexaco, and GM are all recruiting properly educated employees. Careers in the nanotech field are available at Zyvex, with operations in Dallas, Austin, and Houston, Texas. Currently there is a race for breakthroughs in solar cells, drug-delivery systems, computer chips, and batteries.
One of the first nano-particles created was buckminsterfullerene, often called the “buckyball” or the “fullerene.” These are spheres of 60 carbon atoms, connected to form 12 pentagons and 20 hexagons on the molecule’s surface, which is one nanometer in diameter. L’Oreal uses buckyballs in manufacturing facial creams. Research has shown that buckyballs could replace electronic switches with optical switches, revolutionizing the speed at which information travels along fiber-optic networks. The carbon nanotube is another nano-particle, a lattice of carbon atoms forming a cylinder; the nanotube is as strong as steel but lightweight.
The “nano-crystal,” or quantum dot, is a nano-particle that produces light when charged with energy. Nano-materials include nano-wires, nano-clays, nano-powders, and nano-coatings. These materials have improved semiconductors, fuel cells, catalytic converters, flat-panel displays, plastics, and healthcare, and offer many other applications. Nano-materials enhance sports equipment such as nanotube tennis balls and racquets. The tobacco industry currently uses nanotechnology to inoculate tobacco plants against insects and disease and to improve growth and durability. A new soldier uniform called “The Vision 2020 Future Warrior” uses nanotechnology in sensors, robotics, durable armor, auto-communication systems, exomuscles, and materials with injury-treating molecules. Other fields affected by nanotechnology include cosmetics, apparel, medicine, energy, and telecommunications.
In the clothing industry, nano-materials produce stain-resistant jeans, foot warmers for boots, and static-resistant fabrics. There are nanotech-improved rubber products such as tires, and even fabricated diamonds; this is just a small sampling. Nanotechnology scaling principles made possible Apple’s pocket-sized MP3 players, including the iPod Nano. Nanotechnology offers alternatives to photolithography, including electron-beam lithography, x-ray lithography, ion-beam lithography, and soft lithography. An innovation presently in the design stage is replacing the silicon chip with a carbon-nanotube chip. IBM’s creation, the Millipede, stores a huge amount of nano-scale data on a polymer plate; it is fast, energy-efficient, and affordable.
In the medical field, nanotechnology includes nano-silver, an antibacterial burn dressing; Abraxane, a nano-particle breast cancer drug; nanotech dental adhesives and tooth fillings; and hundreds of other improvements and innovations. Many human components are nano-scale: DNA is approximately 2 nanometers wide, and viruses are approximately 50 nanometers long.
A molecule is the smallest particle of a substance, element, or compound that retains that substance’s chemical properties; it is two or more atoms held together by chemical bonds. Every method of manufacturing requires the arrangement of atoms. Chemistry and biology create molecules defined by a specific arrangement of atoms, and those arrangements always contain the same numbers, kinds, and bonds. Biology works with cells containing molecular machines that read digital genetic data guiding the assembly of large molecules, usually proteins, which in turn serve as integral pieces of the molecular machines. Molecular manufacturing likewise uses stored data to construct molecular machines.
Where chemistry allows the random arrangement of molecules, molecular machine assembly places molecules in specific locations and sequences. Holding and positioning molecules by choice controls their reactions, allowing the building of complex structures with atomic precision. Many believe that molecular manufacturing requires the assembly of self-replicators; that is a misconception.
Micro-electromechanical Systems (MEMS)
MEMS are systems or machines existing at the micro-scale, such as micro-fluidic chips, sensors, switches, motors, and labs-on-a-chip (LOCs), which improve DNA testing. MEMS is the technology of small, electrically driven devices. At the nano-scale, MEMS merge into nano-electromechanical systems (NEMS). MEMS are separate and distinct from molecular manufacturing. Their components range from one to one hundred micrometers in size and typically include a central data-processing unit (a microprocessor) and several components that interact with the outside world, called micro-sensors.
Robotics & Neural Networks
The “Afghan Xplorer,” a mobile robot created at MIT, records data for reporters in hostile environments and war zones. Los Alamos National Laboratory in New Mexico is working on robot components that withstand hazardous environments. ASIMO is a robot that uses sensors and intelligent algorithms to avoid bumping into obstacles and to climb and descend stairs. Kismet is a robot built to practice social skills.
Neural networks, or neural circuits, consist of biological neurons or artificial ones. In biology, synapses connect neurons, forming links from axons to dendrites. AI simulates some of the properties of neural networks in speech recognition, image analysis, and adaptive control, and uses them to construct software agents in computer and video games and in autonomous robots. The concept of neural networks began in the 19th century with attempts to describe how the human mind functions.
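The artificial side of this idea can be made concrete with a single simulated neuron. The sketch below is an illustrative toy, not any particular research system: it trains one artificial neuron (a perceptron) to reproduce the logical AND function, nudging its connection weights much as synapses strengthen or weaken.

```python
# A minimal artificial neuron (perceptron) learning logical AND.
# Illustrative sketch only; all names and parameters are invented.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Adjust weights until the neuron reproduces the labeled samples."""
    w = [0.0, 0.0]   # one weight per input, like synapse strengths
    b = 0.0          # bias term
    for _ in range(epochs):
        for (x1, x2), target in samples:
            # Step activation: fire (1) if the weighted sum exceeds zero.
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # Nudge each weight toward the correct answer.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

AND_SAMPLES = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND_SAMPLES)
predictions = [1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
               for (x1, x2), _ in AND_SAMPLES]
print(predictions)  # → [0, 0, 0, 1]
```

The neuron is not "thinking": it is repeating a mechanical weight-adjustment rule until the stored data stops producing errors, which is exactly the distinction the surrounding text draws.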
TOOLS & LANGUAGES
Success in the AI field requires mathematical prowess and a deep understanding of both physics and biology. The biological approach to AI examines the human nervous system from both physiological and psychological angles. A good command of programming languages is also required; U.S. troops in Iraq, for example, use portable language translators. The most beneficial languages to learn are C, C++, Java, Lisp (LISt Processing), Scheme, and Prolog.
According to some scientists the AI business is moving at a snail’s pace, but in reality it is in a state of constant change and advancement. In 1990, if you spoke Java you could wing your way through lower-level research programs. In Java, as elsewhere, a genetic algorithm (GA) is an algorithm that breeds new candidate code from existing code. Java is an excellent choice, but in 2010 it is not always the only language needed; in fact, learning two or more computer languages is often necessary.
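The genetic-algorithm idea can be sketched in a few lines. The toy below is written in Python rather than Java for brevity; everything in it (the population size, the bit-string "genome," the parameters) is invented for illustration. It uses the operators named earlier, fitness, selection, crossover, and mutation, to evolve random bit strings toward all ones.

```python
import random

# A minimal genetic algorithm evolving 20-bit strings toward all ones
# ("OneMax"). Illustrative sketch; all parameters are arbitrary choices.
random.seed(42)

TARGET_LEN = 20
POP_SIZE = 50

def fitness(bits):
    return sum(bits)  # count of 1s; the maximum is TARGET_LEN

def select(population):
    # Tournament selection: the fitter of two random individuals survives.
    a, b = random.sample(population, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(mom, dad):
    point = random.randrange(1, TARGET_LEN)  # single-point crossover
    return mom[:point] + dad[point:]

def mutate(bits, rate=0.01):
    # Occasionally flip a bit, injecting randomization.
    return [1 - b if random.random() < rate else b for b in bits]

population = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
              for _ in range(POP_SIZE)]
for generation in range(200):
    if fitness(max(population, key=fitness)) == TARGET_LEN:
        break  # a perfect individual has evolved
    population = [mutate(crossover(select(population), select(population)))
                  for _ in range(POP_SIZE)]

print(fitness(max(population, key=fitness)))  # typically reaches 20
```

Nothing here "knows" what the goal looks like; the answer emerges from repeated selection, reproduction, and mutation, which is precisely the evolutionary pattern described above.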
Task systems are built by gathering knowledge from experts in a given field or domain and developing a program to perform that task. There has been much disappointment in this arena, as the experts consulted each brought predetermined expectations based on their individual experience, and the engineers developing the task-oriented programs were limited to working within established databases.
Computer-generated board games provide a prime example of the fine distinction between thought and computation. Games such as chess and Go require strategic thought, which computers are not capable of; to win against a world-champion player, a computer must perform as many as 200 million positional computations per second. Deep Blue beat a chess grandmaster not by out-thinking him but by out-computing him.
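What "out-computing" looks like can be shown with minimax, the standard game-search procedure: the machine exhaustively scores every line of play down a game tree and picks the branch with the best guaranteed outcome. The tree below is tiny and invented; real chess programs do the same thing across hundreds of millions of positions.

```python
# Minimax over a tiny hand-built game tree: the machine "plays" by
# scoring every line of play, not by strategic insight.

def minimax(node, maximizing):
    if isinstance(node, int):      # leaf: a final position score
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# A depth-2 tree: the mover picks a branch, then the opponent replies.
game_tree = [
    [3, 12],   # branch A: opponent will answer with min(3, 12) = 3
    [8, 2],    # branch B: opponent will answer with min(8, 2) = 2
    [14, 6],   # branch C: opponent will answer with min(14, 6) = 6
]

print(minimax(game_tree, maximizing=True))  # → 6 (branch C is safest)
```

The result is not a plan but an arithmetic fact about the tree, which is the difference between computation and thought that this paragraph describes.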
Financial-control task systems are relatively successful and based on mathematical equations. ATMs, computerized cash registers, and credit card systems use stored calculations and simple math procedures. Financial-control programs fall under the heuristic classification and provide good examples of successful AI programming. AI research has two angles of approach: the biological approach attempts to imitate human behavior and response, while the second approach emulates common sense and logical reaction.
Four of the most common applications of AI are games, speech recognition, task systems, and financial controls. Today a vast variety of computer games exist, played by people of all ages from infancy to ninety-plus: fantasy games, teaching games, war games, sports games, dexterity games, and more. Speech recognition, used in place of keyboard and mouse, provides extra convenience but still has limited usage; extensive research and development will redefine medical transcription methods in the near future.
AI is multifunctional: it is used to read to the blind, to maneuver wheelchairs, and in a myriad of medical devices from electrocardiographs to hearing aids. Prosthetic and cyberkinetic neuro-technologists are working on a system called Brainwave, intended to allow a chip in the motor cortex of the brain to decode brain waves and control the movement of an artificial limb by the direction of the user’s thoughts.
The first prosthetic leg developed using AI was finished on March 30, 2006, in New York. The Mitsubishi concept car is a product of AI powered by lithium-ion batteries. Hearing aids use AI to filter out distractions. Amazon.com was one of the first websites to use AI, offering shoppers related goods based on past purchases. Google’s vision was to make the world’s entire printed content searchable online using research scanners and scalers.
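The "related goods" idea can be illustrated with simple co-purchase counting: recommend whatever most often appears in the same orders as the item a shopper is viewing. This is only a sketch of the underlying idea, not Amazon's actual algorithm, and the order data is invented.

```python
from collections import Counter

# Toy "customers who bought this also bought..." recommender.
# Invented order history; each order is a set of purchased items.
orders = [
    {"book", "lamp"},
    {"book", "bookmark"},
    {"book", "bookmark", "lamp"},
    {"book", "bookmark"},
    {"lamp", "bulb"},
]

def related_to(item):
    """Rank other items by how often they share an order with `item`."""
    co = Counter()
    for order in orders:
        if item in order:
            co.update(order - {item})   # count every co-purchased item
    return [other for other, _ in co.most_common()]

print(related_to("book"))  # → ['bookmark', 'lamp']
```

Here "bookmark" outranks "lamp" simply because it co-occurs with "book" in more orders; the recommendation is stored data and counting, nothing more.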
Using global positioning, “smart” phones predict and locate traffic snarls. Global-positioning transmitters track minors under house arrest and adult criminals, both via legally placed ankle bracelets, which are AI devices. The U.S. Global Positioning System (GPS) helps cars and cell phones map destinations and locate vehicles and persons in distress. The European Space Agency (ESA) launched its global navigation satellite system, Galileo, in 2005, in competition with the United States’ GPS.
Technicians look at what is still lacking in AI research and are overwhelmed by their inability to formulate the right data-processing programs immediately; novice computer users look at the wondrous tasks computers perform and are awestruck. This lends weight to the perception that Turing’s test is truly one-sided. Many philosophers and scientists oppose AI, calling it incoherent, impossible, obscene, anti-human, and immoral.
In AI research, “the Singularity” is a common term. Vernor Vinge, a computer scientist and science fiction writer, used it in a 1993 paper describing his view of ultra-smart computers: “the Singularity” described a machine with greater-than-human intelligence.
Kevin Kelly, an editor at Wired magazine, is writing a book titled “The Technician” predicting the emergence of a global brain; he presents the idea that at some unknown future time the planet’s interconnected computers might coordinate and exhibit actual intelligence. William Joy, a computer designer and co-founder of Sun Microsystems, theorizes that the human race is more apt to destroy itself with its own technology than to create a utopia assisted by AI.
In the 1950s, researchers predicted that AI computers capable of superhuman computation and simulated internal thought would be complete within ten years. Sixty years later, they make no such bold proclamations. Although there are significant differences of opinion among AI researchers, engineers, and scientists, they all agree on one point: dying a moment before the achievement of “the Singularity” would be unfair.
AI stimulates economic growth and offers many career opportunities. Video game designers start out earning about $25,000 a year; experienced designers top out at about $100,000 annually, and the average American designer earns from $45,000 to $65,000. AI engineers earn excellent money, with a range that depends on individual skill: the designers responsible for FIDO (Field Integrated Design and Operations), a robotic rover used by NASA, pulled in bigger paychecks than the engineers who designed AI cash registers. Science writers’ median salaries run around $45,000 yearly, varying with the scope and level of the research involved.
Computer scientists earn lucrative salaries; again, the difference between $40,000 and $250,000 a year depends on the individual’s skill and project assignment. There is a need for programmers, technicians, teachers, and many other positions. Corporations such as GE, Microsoft, IBM, and AT&T are just a few of the companies in constant search of educated talent, and the higher the potential employee’s degree, the higher the pay. Entry-level positions are available at Yahoo!, Google, IBM, AT&T, GE, and NecGrow.
The Association for the Advancement of Artificial Intelligence (AAAI) is a nonprofit organization founded in 1979, devoted to advancing the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines. Its goals are to increase public awareness and understanding of AI, to improve teaching and training among AI practitioners, and to provide guidance for planners and funders.
AAAI organizes and sponsors conferences, symposia, and workshops; publishes a quarterly magazine, books, and reports; and awards grants, scholarships, and various other honors. Its sixth AI for Interactive Digital Entertainment Conference took place October 10, 2010, in Palo Alto, California.
Its fall symposium begins November 11, 2010, just outside Washington, D.C., and its spring symposium begins March 21, 2011, in Palo Alto. The 25th AAAI conference is scheduled for August 7, 2011, in San Francisco, California.
The Computer Museum of America, located in San Diego, California, provides a great cultural medium for the public.
In the immediate future, the 2011 supercomputer will perform ten million billion calculations per second, referred to as ten billion MIPS. By 2020, that computing power will be household-affordable. The human brain is limited to a roughly one-cubic-foot container, the skull, and transmits messages at a few hundred feet per second, a million times slower than electronics. The cerebral cortex contains a billion or so pattern recognizers; computers in the near future will contain a trillion. The researchers, engineers, and technicians working on this supercomputer are aiming to reach where it has been impossible to go.
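The two figures quoted above are the same number in different units, which a quick check confirms. MIPS means millions of instructions per second, and for this comparison a "calculation" and an "instruction" are treated as equivalent.

```python
# "Ten million billion calculations per second" as a raw count:
calculations_per_second = 10e6 * 1e9   # ten million × one billion = 1e16

# Convert to MIPS (millions of instructions per second):
mips_equivalent = calculations_per_second / 1e6

print(mips_equivalent)  # → 1e10, i.e. ten billion MIPS
```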
One such area is nano-engineered photovoltaic panels, which could displace fossil fuels. Two other areas are hydroponic, computer-controlled vertical farming and in-vitro cloned meat. Project engineers are designing nano-scale fabricators to replace micro-scale ones, able to print complex three-dimensional objects and to produce blueprint-driven, low-cost housing modules with pipes and wiring designed in. The goal is housing the developing world at very low cost; this technology is projected to be complete by 2030, capable of producing low-cost modules to satisfy all needs.
The group working on the supercomputer and its subsequent updates and improvements works from the principle that human beings have evolved to an intelligence level that will not change significantly, and that AI technology will eventually predominate. The word “predominate” draws pictures in a novice mind of hostile takeovers by computers and machinery, as displayed in popular science fiction movies. That is not reality: the “predomination” is just another way of extending the human reach to places where, without AI, it could not go. These same novice minds fear that billions of religious people around the world will be anti-AI. That is also a misconception. The major religions all welcome scientific discovery that lessens human suffering; they are pro-life, and encourage and support medical progress and the overcoming of disease and hunger.
Today most American homes have computers and electronic toys; in many homes across the country, every member of the household has a personal computer, and across the world computers are available for public use. Once their size and cost were prohibitive; today they are practical in both respects. Technology in general is decentralizing, and the internet is a prime example: if a piece of it goes down, rarely is anything lost, because traffic simply routes around the break. The novice finds this frightening; the professional, exhilarating.
Are computer-run cars in the near future? A DARPA-funded onboard computer system created by Carnegie Mellon University drove a van for 2,797 miles of a 2,849-mile trip from Washington, D.C., to San Diego, California, at an average speed of 63 mph.
The future will bring breakthroughs in molecular medicine. Nanotechnology will deliver medical victories at the sub-cellular level; quantum-dot diagnosis, when perfected, will identify cancer cells at their onset. The future may hold individual television transmitter implants for the retina of the human eye, and bridges that span oceans. In the distant future, there could be elevators extending from the equator to stations in outer space, or cybernetic ladders from Earth to Mars, constructed of AI nano-technological materials for transporting both people and merchandise.
The first computer programmer, by one account, was a blind man named Arnold Fast, who worked using the power of his brain. The human brain is an ever-compensating, powerful machine, yet there is no record of a human being using 100 percent of his brain’s capacity. Still, man proposes to teach an artificial intelligence to think and learn before first mastering his own full potential.
AI is a worthy field of science. It will provide breakthroughs that redesign how nations live. Sixty years ago, man had never reached the moon; computers were bulky, often requiring a whole room; DNA testing of crimes was not conclusive; and cell phones did not exist. From 1900 to 1950, the world changed; from 1950 to 2010, it changed again, with technological advancement doubled and tripled. From 2010 to 2060, where will this ever-increasing advancement take us? The possibilities are unlimited, and AI will be the prominent driver of change.