Augmented reality in Mobile
Posted by haryvedca on July 8, 2010
Augmented reality (AR) is a term for a live direct or indirect view of a physical real-world environment whose elements are augmented by virtual computer-generated imagery. It is related to a more general concept called mediated reality in which a view of reality is modified (possibly even diminished rather than augmented) by a computer. As a result, the technology functions by enhancing one’s current perception of reality.
In the case of Augmented Reality, the augmentation is conventionally in real-time and in semantic context with environmental elements, such as sports scores on TV during a match. With the help of advanced AR technology (e.g. adding computer vision and object recognition) the information about the surrounding real world of the user becomes interactive and digitally usable. Artificial information about the environment and the objects in it can be stored and retrieved as an information layer on top of the real world view. The term augmented reality is believed to have been coined in 1990 by Thomas Caudell, an employee of Boeing at the time.
Augmented reality research explores the application of computer-generated imagery in live video streams as a way to expand the real world. Advanced research includes the use of head-mounted displays and virtual retinal displays for visualization purposes, and the construction of controlled environments containing any number of sensors and actuators.
There are two commonly accepted definitions of Augmented Reality today. One was given by Ronald Azuma in 1997. Azuma’s definition says that Augmented Reality:
- combines real and virtual
- is interactive in real time
- is registered in 3D
Additionally, Paul Milgram and Fumio Kishino defined Milgram’s Reality-Virtuality Continuum in 1994. They describe a continuum that spans from the real environment to a pure virtual environment. In between lie Augmented Reality (closer to the real environment) and Augmented Virtuality (closer to the virtual environment).
This continuum has been extended into a two-dimensional plane of “Virtuality” and “Mediality”, a taxonomy of Reality, Virtuality, and Mediality. The origin R denotes unmodified reality. A continuum across the Virtuality axis V includes reality augmented with graphics (Augmented Reality), as well as graphics augmented by reality (Augmented Virtuality). However, the taxonomy also includes modification of reality or virtuality, or any combination of these; the modification is denoted by moving up the Mediality axis. Further up this axis, for example, we find mediated reality, mediated virtuality, or any combination of these. Further up and to the right are virtual worlds that are responsive to a severely modified version of reality. Mediated reality thus generalizes concepts such as mixed reality. It includes the reality-virtuality continuum (mixing) but, in addition to additive effects, also includes multiplicative effects (modulation) of (sometimes deliberately) diminished reality. Moreover, it considers, more generally, that reality may be modified in various ways. The mediated reality framework describes devices that deliberately modify reality, as well as devices that accidentally modify it.
More recently, the term augmented reality has been blurred a bit due to the increased interest of the general public in AR.
Commonly known examples of AR are the yellow “first down” lines seen in television broadcasts of American football games using the 1st & Ten system, and the colored trail showing location and direction of the puck in TV broadcasts of ice hockey games. The real-world elements are the football field and players, and the virtual element is the yellow line, which is drawn over the image by computers in real time. Similarly, rugby fields and cricket pitches are branded by their sponsors using Augmented Reality; giant logos are inserted onto the fields when viewed on television. In some cases, the modification of reality goes beyond mere augmentation. For example, advertisements may be blocked out (partially or wholly diminished) and replaced with different advertisements. Such replacement is an example of Mediated reality, a more general concept than AR.
Television telecasts of swimming events also often have a virtual line which indicates the position of the current world record holder at that time.
Another type of AR application uses projectors and screens to insert objects into the real environment, enhancing museum exhibitions for example. The difference from a simple TV screen, for example, is that these objects are related to the environment of the screen or display, and that they are often interactive as well.
Many first-person shooter video games simulate the viewpoint of someone using AR systems. In these games the AR can be used to give visual directions to a location, mark the direction and distance of another person who is not in line of sight, give information about equipment such as remaining bullets in a gun, and display a myriad of other images based on whatever the game designers intend. This is also called the head-up display.
In some current applications like in cars or airplanes, this is usually a head-up display integrated into the windshield.
The F-35 Lightning II has no head-up display. Instead, all targets tracked by the aircraft’s situational awareness and sensor fusion are presented in the pilot’s helmet-mounted display, which provides an augmented reality system that allows the pilot to look through his own aircraft as if it were not there.
- 1957-62: Morton Heilig, a cinematographer, creates and patents a simulator called Sensorama with visuals, sound, vibration, and smell.
- 1966: Ivan Sutherland invents the head-mounted display, suggesting it is a window into a virtual world.
- 1975: Myron Krueger creates Videoplace that allows users to interact with virtual objects for the first time.
- 1989: Jaron Lanier coins the phrase Virtual Reality and creates the first commercial business around virtual worlds.
- 1992: Tom Caudell coins the phrase Augmented Reality while at Boeing helping workers assemble cables into aircraft.
- 1992: L.B. Rosenberg develops one of the first functioning AR systems, called VIRTUAL FIXTURES, at the U.S. Air Force Armstrong Labs, and demonstrates benefit on human performance.
- 1992: Steven Feiner, Blair MacIntyre and Doree Seligmann present the first major paper on an AR system prototype, KARMA, at the Graphics Interface conference. A widely cited version of the paper is published in Communications of the ACM the next year.
- 1999: Hirokazu Kato (加藤 博一) creates ARToolKit at the HITLab, where AR is later developed further by other HITLab scientists; it is demonstrated at SIGGRAPH that year.
- 2000: Bruce H. Thomas develops ARQuake, the first outdoor mobile AR game, which is demonstrated at the International Symposium on Wearable Computers.
- 2008: Wikitude AR Travel Guide launches on Oct. 20, 2008 with the G1 Android phone.
- 2009: Wikitude Drive – AR Navigation System launches on Oct. 28, 2009 for the Android platform.
- 2009: AR Toolkit is ported to Adobe Flash (FLARToolkit) by Saqoosha, bringing augmented reality to the web browser.
The main hardware components for augmented reality are: display, tracking, input devices, and computer. A combination of a powerful CPU, camera, accelerometers, GPS, and solid-state compass is often present in modern smartphones, which makes them promising platforms for augmented reality.
There are three major display techniques for Augmented Reality:
- Head Mounted Displays
- Handheld Displays
- Spatial Displays
 Head Mounted Displays
A Head Mounted Display (HMD) places images of both the physical world and registered virtual graphical objects over the user’s view of the world. HMDs are either optical see-through or video see-through in nature. An optical see-through display employs a half-silvered mirror to allow views of the physical world to pass through the lens while graphical overlay information is reflected into the user’s eyes. The HMD must be tracked with a six-degree-of-freedom sensor. This tracking allows the computing system to register the virtual information to the physical world. The main advantage of HMD AR is the immersive experience for the user: the graphical information is slaved to the user’s view. The most common products employed are the MicroVision Nomad, Sony Glasstron, and I/O Displays.
 Handheld Displays
Handheld Augmented Reality employs a small computing device with a display that fits in a user’s hand. All handheld AR solutions to date have employed video see-through techniques to overlay graphical information on the physical world. Initially handheld AR relied on sensors such as digital compasses and GPS units for six-degree-of-freedom tracking; this moved on to fiducial marker systems such as ARToolKit, and today vision systems such as SLAM or PTAM are being employed. Handheld display AR promises to be the first commercial success for AR technologies. The two main advantages of handheld AR are the portable nature of handheld devices and the ubiquity of camera phones.
 Spatial Displays
Instead of the user wearing or carrying the display, as with head mounted displays or handheld devices, Spatial Augmented Reality (SAR) makes use of digital projectors to display graphical information onto physical objects. The key difference in SAR is that the display is separated from the users of the system. Because the displays are not associated with each user, SAR scales naturally up to groups of users, allowing for collocated collaboration between them. SAR has several advantages over traditional head mounted displays and handheld devices. The user is not required to carry equipment or wear the display over their eyes, which makes spatial AR a good candidate for collaborative work, as the users can see each other’s faces. A system can be used by multiple people at the same time without each having to wear a head mounted display. Spatial AR does not suffer from the limited display resolution of current head mounted displays and portable devices, and a projector-based display system can simply incorporate more projectors to expand the display area. Where portable devices offer only a small window into the world for drawing, a SAR system can display on any number of surfaces in an indoor setting at once. The tangible nature of SAR makes it an ideal technology to support design, as SAR provides both a graphical visualisation and a passive haptic sensation for the end users: people are able to touch physical objects, and it is this process that provides the passive haptic sensation.
Modern mobile augmented reality systems use one or more of the following tracking technologies: digital cameras and/or other optical sensors, accelerometers, GPS, gyroscopes, solid-state compasses, RFID, and wireless sensors. Each of these technologies has a different level of accuracy and precision. Most important is the tracking of the pose and position of the user’s head for the augmentation of the user’s view. The user’s hand(s) or a handheld input device can be tracked to provide a 6DOF interaction technique. Stationary systems can employ 6DOF tracking systems such as Polhemus, ViCON, A.R.T, or Ascension.
 Input devices
This is an open research question. Some systems, such as the Tinmith system, employ pinch-glove techniques. Another common technique is a wand with a button on it. In the case of a smartphone, the phone itself can be used as a 3D pointing device, with the 3D position of the phone recovered from the camera images.
Camera-based systems require a powerful CPU and a considerable amount of RAM for processing camera images. Wearable computing systems employ a laptop in a backpack configuration, while stationary systems can use a traditional workstation with a powerful graphics card. Sound processing hardware can also be included in augmented reality systems.
To merge real-world camera images and virtual 3D imagery consistently, the virtual images must be attached to real-world locations in a visually realistic way. This means a real-world coordinate system, independent of the camera, must be recovered from the camera images. That process is called image registration, and it is part of Azuma’s definition of Augmented Reality.
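The registration requirement can be made concrete with the standard pinhole camera model: once the camera’s pose in the world coordinate system is known, any virtual 3D point can be projected to the pixel where it should be drawn. The intrinsic matrix below (640x480 image, 500 px focal length) is an illustrative assumption.

```python
import numpy as np

# Hypothetical intrinsics for a 640x480 camera with a 500 px focal length.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project(point_world, R, t):
    """Project a 3D world point into pixel coordinates.

    R and t describe the camera pose: x_cam = R @ x_world + t.
    """
    p_cam = R @ point_world + t        # world -> camera coordinates
    p_img = K @ p_cam                  # camera -> homogeneous pixels
    return p_img[:2] / p_img[2]        # perspective division

# Identity pose: camera at the origin, looking down the +Z axis.
R = np.eye(3)
t = np.zeros(3)

# A virtual object anchored 2 m straight ahead lands at the image centre.
print(project(np.array([0.0, 0.0, 2.0]), R, t))   # -> [320. 240.]
```

Estimating R and t from the camera images, frame after frame, is exactly the registration problem described above.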
Augmented reality image registration uses different methods of computer vision, mostly related to video tracking. Many computer vision methods for augmented reality are inherited from similar visual odometry methods.
Usually those methods consist of two stages. In the first stage, interest points, fiducial markers, or optical flow are detected in the camera images. This stage can use feature detection methods such as corner detection, blob detection, edge detection, or thresholding, as well as other image processing methods.
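As a sketch of this first stage, the following is a minimal pure-NumPy version of the Harris corner detector, one of the corner detection methods mentioned above. It is simplified for illustration: a 3x3 box filter stands in for the usual Gaussian weighting of the structure tensor.

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris corner response for a grayscale image (simplified sketch)."""
    # Image gradients via central differences (np.gradient: axis 0 first).
    Iy, Ix = np.gradient(img.astype(float))

    # Structure-tensor entries, smoothed with a crude 3x3 box filter.
    def box(a):
        p = np.pad(a, 1, mode='edge')
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0

    Sxx, Syy, Sxy = box(Ix * Ix), box(Iy * Iy), box(Ix * Iy)
    det = Sxx * Syy - Sxy ** 2
    trace = Sxx + Syy
    # Large positive response at corners, negative along edges.
    return det - k * trace ** 2

# Synthetic test image: a bright square on a dark background.
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
R = harris_response(img)

# The response peaks near the square's corners, not on its edges.
print(R[5, 5] > R[5, 10])   # -> True
```

Production trackers use optimized variants of this idea (often FAST or similar detectors), but the corner-versus-edge discrimination is the same.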
In the second stage, a real-world coordinate system is restored from the data obtained in the first stage. Some methods assume objects with known 3D geometry (or fiducial markers) are present in the scene and make use of that data; in some of those cases, the entire 3D structure of the scene must be precalculated beforehand. If the scene is not fully known beforehand, SLAM techniques can be used to map the relative positions of fiducial markers or 3D models. If no assumption about the 3D geometry of the scene is made, structure-from-motion methods are used. Methods used in the second stage include projective (epipolar) geometry, bundle adjustment, rotation representation with the exponential map, and Kalman and particle filters.
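For planar fiducial markers, a classic second-stage technique is to estimate the homography between the known marker corners and their detected image positions using the Direct Linear Transform (DLT); the camera pose can then be decomposed from it. The sketch below assumes exact correspondences, and the pixel coordinates are made up for illustration.

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography mapping src -> dst points (DLT).

    src, dst: (N, 2) arrays with N >= 4 correspondences, e.g. the four
    detected corners of a square fiducial marker.
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of A (smallest singular value).
    _, _, Vt = np.linalg.svd(np.asarray(A))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

# Corners of a unit marker and where they appear in the camera image.
marker = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
image  = np.array([[100, 100], [300, 120], [280, 320], [90, 300]], float)

H = homography_dlt(marker, image)

# Sanity check: the marker origin maps back onto its detected corner.
p = H @ np.array([0.0, 0.0, 1.0])
print(p[:2] / p[2])   # -> approximately [100. 100.]
```

With noisy detections, more than four points and a least-squares or RANSAC variant of the same equations are used; libraries such as ARToolKit wrap this machinery behind their marker-tracking APIs.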
 Current applications
Advertising: Marketers started to use AR to promote products via interactive AR applications. For example, at the 2008 LA Auto Show, Nissan unveiled the concept vehicle Cube and presented visitors with a brochure which, when held against a webcam, showed several versions of the vehicle. In August 2009, Best Buy ran a circular with an augmented reality code that allowed users with a webcam to interact with the product in 3D.
Support with complex tasks: Complex tasks such as assembly, maintenance, and surgery can be simplified by inserting additional information into the field of view. For example, labels can be displayed on parts of a system to clarify operating instructions for a mechanic who is performing maintenance on the system. AR can include images of hidden objects, which can be particularly effective for medical diagnostics or surgery. Examples include a virtual X-ray view based on prior tomography or on real-time images from ultrasound or open NMR devices; a doctor could observe the fetus inside the mother’s womb. See also Mixed reality.
Navigation devices: AR can augment the effectiveness of navigation devices for a variety of applications. For example, building navigation can be enhanced for the purpose of maintaining industrial plants. Outdoor navigation can be augmented for military operations or disaster management. Head-up displays or personal display glasses in automobiles can be used to provide navigation hints and traffic information. These types of displays can be useful for airplane pilots, too. Head-up displays are currently used in fighter jets as one of the first AR applications. These include full interactivity, including eye pointing.
Industrial Applications: AR can be used to compare the data of digital mock-ups with physical mock-ups to efficiently find discrepancies between the two sources. It can further be employed to safeguard digital data in combination with existing real prototypes, thus saving on or minimizing the building of real prototypes and improving the quality of the final product.
Military and emergency services: AR can be applied to military and emergency services as wearable systems to provide information such as instructions, maps, enemy locations, and fire cells.
Prospecting: In the fields of hydrology, ecology, and geology, AR can be used to display an interactive analysis of terrain characteristics. Users could use, and collaboratively modify and analyze, interactive three-dimensional maps.
Art: AR can be incorporated into artistic applications that allow artists to create art in real time over reality, such as painting, drawing, and modeling. One such example is Eyewriter, developed in 2009 by Zachary Lieberman and a group formed by members of Free Art and Technology (FAT), OpenFrameworks, and the Graffiti Research Lab to help a graffiti artist who became paralyzed draw again.
Architecture: AR can be employed to simulate planned construction projects.
Sightseeing: Models may be created to include labels or text related to the objects/places visited. With AR, users can rebuild ruins, buildings, or even landscapes as they previously existed.
Collaboration: AR can help facilitate collaboration among distributed team members via conferences with real and virtual participants. The Hand of God is a good example of a collaboration system. Also see Mixed reality.
Entertainment and education: AR can be used in the fields of entertainment and education to create virtual objects in museums and exhibitions, theme park attractions (such as Cadbury World), and games (such as ARQuake and The Eye of Judgment). Also see Mixed reality.
Music: Pop group Duran Duran included interactive AR projections in their stage show during their 2000 Pop Trash concert tour. Sydney band Lost Valentinos launched the world’s first interactive AR music video on 16 October 2009: users could print out five markers representing a pre-recorded performance from each band member, interact with them live and in real time via their computer webcam, and record their own unique music video clips to share via YouTube.
 Future applications
Augmented reality remains a costly technology to develop, so the future of AR depends on whether those costs can be reduced. If AR technology becomes affordable, it could be very widespread, but for now major industries are the sole buyers with the opportunity to utilize this resource.
- Expanding a PC screen into the real environment: program windows and icons appear as virtual devices in real space and are eye- or gesture-operated, by gazing or pointing. A single personal display (glasses) could concurrently simulate a hundred conventional PC screens or application windows all around a user.
- Virtual devices of all kinds, e.g. replacement of traditional screens, control panels, and entirely new applications impossible in “real” hardware, like 3D objects interactively changing their shape and appearance based on the current task or need.
- Enhanced media applications, like pseudo holographic virtual screens, virtual surround cinema, virtual ‘holodecks‘ (allowing computer-generated imagery to interact with live entertainers and audience)
- Virtual conferences in “holodeck” style
- Replacement of cellphone and car navigator screens: eye-dialing, insertion of information directly into the environment, e.g. guiding lines directly on the road, as well as enhancements like “X-ray”-views
- Virtual plants, wallpapers, panoramic views, artwork, decorations, illumination etc., enhancing everyday life. For example, a virtual window could be displayed on a regular wall showing a live feed of a camera placed on the exterior of the building, thus allowing the user to effectually toggle a wall’s transparency
- With AR systems getting into mass market, we may see virtual window dressings, posters, traffic signs, Christmas decorations, advertisement towers and more. These may be fully interactive even at a distance, by eye pointing for example.
- Virtual gadgetry becomes possible. Any physical device currently produced to assist in data-oriented tasks (such as the clock, radio, PC, arrival/departure board at an airport, stock ticker, PDA, PMP, informational posters/fliers/billboards, in-car navigation systems, etc.) could be replaced by virtual devices that cost nothing to produce aside from the cost of writing the software. Examples might be a virtual wall clock, a to-do list for the day docked by your bed for you to look at first thing in the morning, etc.
- Subscribable group-specific AR feeds. For example, a manager on a construction site could create and dock instructions including diagrams in specific locations on the site. The workers could refer to this feed of AR items as they work. Another example could be patrons at a public event subscribing to a feed of direction and information oriented AR items.
- AR systems can help the visually impaired navigate much more effectively when combined with text-to-speech software.
 Notable researchers
- Steven Feiner, Professor at Columbia University, is a leading pioneer of augmented reality, and author of the first paper on an AR system prototype, KARMA (the Knowledge-based Augmented Reality Maintenance Assistant), along with Blair MacIntyre and Doree Seligmann.
- L.B. Rosenberg developed one of the first known AR systems, called Virtual Fixtures, while working at the U.S. Air Force Armstrong Labs in 1991, and published the first study of how an AR system can enhance human performance.
- Mark Billinghurst and Daniel Wagner jump started the field of AR on mobile phones. They developed the first marker tracking systems for mobile phones and PDAs.
- Bruce H. Thomas and Wayne Piekarski developed the Tinmith system in 1998. They, along with Steven Feiner and his MARS system, pioneered outdoor augmented reality.
 Conferences
- 1st International Workshop on Augmented Reality, IWAR’98, San Francisco, Nov. 1998.
- 2nd International Workshop on Augmented Reality (IWAR’99), San Francisco, Oct. 1999.
- 1st International Symposium on Mixed Reality (ISMR’99), Yokohama, Japan, March 1999.
- 2nd International Symposium on Mixed Reality (ISMR’01), Yokohama, Japan, March 2001.
- 1st International Symposium on Augmented Reality (ISAR 2000), Munich, Oct. 2000.
- 2nd International Symposium on Augmented Reality (ISAR 2001), New York, Oct. 2001.
- 1st International Symposium on Mixed and Augmented Reality (ISMAR 2002), Darmstadt, Oct. 2002.
- 2nd International Symposium on Mixed and Augmented Reality (ISMAR 2003), Tokyo, Oct. 2003.
- 3rd International Symposium on Mixed and Augmented Reality (ISMAR 2004), Arlington, VA, Nov. 2004.
- 4th International Symposium on Mixed and Augmented Reality (ISMAR 2005), Vienna, Oct. 2005.
- 5th International Symposium on Mixed and Augmented Reality (ISMAR 2006) Santa Barbara, Oct. 2006.
- 6th International Symposium on Mixed and Augmented Reality (ISMAR 2007) Nara, Japan, Nov. 2007.
- 7th International Symposium on Mixed and Augmented Reality (ISMAR 2008) Cambridge, United Kingdom, Sep. 2008.
- 8th International Symposium on Mixed and Augmented Reality (ISMAR 2009) Orlando, Oct. 2009.
- Augmented Reality Developer Camp (AR DevCamp) in Mountain View, Dec. 2009.
 Free software
- ARToolKit is a cross-platform library for the creation of augmented reality applications, developed by Hirokazu Kato in 1999 and released by the University of Washington HIT Lab. It is currently maintained as an open-source project hosted on SourceForge, with commercial licenses available from ARToolWorks.
- ATOMIC Authoring Tool is a cross-platform authoring tool for augmented reality applications, serving as a front end for the ARToolKit library. It was developed so that non-programmers can create small, simple augmented reality applications, and is released under the GNU GPL.
- ATOMIC Web Authoring Tool is a child project of the ATOMIC Authoring Tool that enables the creation of augmented reality applications and their export to any website. It was developed as a graphical front end for the FLARToolKit library and is licensed under the GNU GPL.
- OSGART – a combination of ARToolKit and OpenSceneGraph
- ARToolKitPlus – an extended version of ARToolKit, targeted at handheld devices and developers of AR-oriented software. No longer developed.
- Mixed Reality Toolkit (MRT) – University College London
- FLARToolKit – an ActionScript 3 port of ARToolKit for Flash 9+.
- SLARToolkit – a Silverlight port of NyARToolkit.
- NyARToolkit – an ARToolkit class library released for virtual machines, particularly those which host Java, C# and Android.
- ARDesktop – ARToolKit class library that creates a three-dimensional desktop interface with controls and widgets.
- AndAR – A native port of ARToolkit to the Android platform.
- mixare – an open-source (GPLv3) augmented reality engine for Android. It works as a completely autonomous application and is also available for developing one’s own implementations.
- OpenMAR – Open Mobile Augmented Reality component framework for the Symbian platform, released under EPL
 Books
- Woodrow Barfield, and Thomas Caudell, eds. Fundamentals of Wearable Computers and Augmented Reality. Mahwah, NJ: Lawrence Erlbaum, 2001. ISBN 0-8058-2901-6.
- Oliver Bimber and Ramesh Raskar. Spatial Augmented Reality: Merging Real and Virtual Worlds. A K Peters, 2005. ISBN 1-56881-230-2.
- Michael Haller, Mark Billinghurst and Bruce H. Thomas. Emerging Technologies of Augmented Reality: Interfaces and Design. Idea Group Publishing, 2006. ISBN 1-59904-066-2 , publisher listing
- Rolf R. Hainich. “The end of Hardware : A Novel Approach to Augmented Reality” 2nd ed.: Booksurge, 2006. ISBN 1-4196-5218-4. 3rd ed. (“Augmented Reality and Beyond”): Booksurge, 2009, ISBN 1-4392-3602-X.
- Stephen Cawood and Mark Fiala. Augmented Reality: A Practical Guide, 2008, ISBN 1-934356-03-4
 In popular culture
 Television & film
- The television series Dennō Coil depicts a near-future where children use AR glasses to enhance their environment with games and virtual pets.
- The television series Firefly depicts numerous AR applications, including a real-time medical scanner which allows a doctor to use his hands to manipulate a detailed and labeled projection of a patient’s brain.
- In the 1993 ABC miniseries Wild Palms, a Scientology-like organization used holographic projectors to overlay virtual reality images over physical reality.
- In the movie Iron Man, Tony Stark (Robert Downey Jr.) uses an augmented reality system to design his super-powered suit.
- In the video game Heavy Rain, Norman Jayden, an FBI profiler, possesses a set of experimental augmented reality glasses called an “Added Reality Interface”, or ARI. It allows him to rapidly investigate crime scenes and analyze evidence.
- In the Philippines, during their first automated elections (2010), ABS-CBN News and Current Affairs used augmented reality during the counting of votes for all National and Local Candidates and in delivering news reports.
- In the movie Minority Report, Tom Cruise stands in front of a supercomputer using AR technology.
- The table top role-playing game, Shadowrun, introduced AR into its game world. Most of the characters in the game use viewing devices to interact with the AR world most of the time.
- Cybergeneration, a table top role-playing game by R. Talsorian, includes “virtuality”, an augmented reality created through v-trodes, cheap, widely available devices people wear at their temples.
- The books Halting State by Charles Stross and Rainbows End by Vernor Vinge include augmented reality primarily in the form of virtual overlays over the real world. Halting State mentions Copspace, used by police, and gamers overlaying their characters onto themselves during a gaming convention. Rainbows End mentions outdoor overlays based on popular fictional universes from H. P. Lovecraft and Terry Pratchett, among others.
- The term “Geohacking” has been coined by William Gibson in his book Spook Country, where artists use a combination of GPS and 3D graphics technology to embed rendered meshes in real world landscapes.
- In The Risen Empire, by Scott Westerfeld, most if not all people have their own “synesthesia”, an AR menu unique to the user that is projected in front of them; each person can see only their own synesthesia menu. It is controlled by hand gestures, blink patterns, where the user is looking, clicks of the tongue, etc.
- In the Greg Egan novel Distress, the ‘Witness’ software used to record sights and sounds experienced by the user can be set up to scan what the user is seeing and highlight people the user is looking out for.
- In the Revelation Space series of novels by Alastair Reynolds, characters frequently employ “Entoptics”, essentially a highly developed form of augmented reality that goes so far as to entirely substitute for natural perception.