February 10, 2006

Eva Kaplan-Leiserson on Technology Trends: Augmented Reality [partially translated]


 




Think you can predict the future of learning technology? If you point to simulations and virtual reality, you’re wrong. Or at least half wrong.


 


Simulations will have their place. They're getting more and more sophisticated, and the popularity of such virtual universes as Second Life and There proves that people can be engaged by, even addicted to, experiences that are purely virtual.


 




But the real future of learning is what one developer calls “blended learning on steroids.” Just as training designers now choose among classroom-based training, synchronous online seminars, asynchronous Web-based training, and other options when determining the best delivery mechanism for learning content, in the near future those designing training will choose among training delivered in the real environment, via augmented reality, or in a virtual environment.


 


(These designations, along with a fourth—augmented virtuality—were laid out on a continuum in 1994 by Paul Milgram, Haruo Takemura, Akira Utsumi, and Fumio Kishino in their paper “Augmented Reality: A Class of Displays on the Reality-Virtuality Continuum.”)


 


The real environment is self-explanatory. Virtual reality, in the form of computer-based simulations, has been written about quite frequently in learning publications—but much less attention has been paid to the learning potential of augmented reality. This is no doubt because its more complicated technology has matured at a much slower rate. But research and hardware advances have helped propel AR forward, and it is poised to make a very big splash in the learning arena. It is AR, then, that we will examine here in more depth.


 


A definition and brief history


 


Augmented reality combines features of a virtual environment with the real world. Most often, the augmentation is visual, with a user sporting an eyepiece connected to a wearable computer and positioning equipment. By tracking where the user’s head is and what he is seeing, the computer is able to overlay graphics and/or text onto his vision.
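The overlay step described above, tracking where the user's head is and drawing graphics over what he sees, reduces at its core to projecting world coordinates into view coordinates. A minimal sketch in Python, assuming a simple pinhole model with yaw-only head rotation; the function name, focal length, and screen size are invented for illustration, and a real AR rig must also handle pitch, roll, and optical calibration:

```python
import math

def project_to_eyepiece(world_pt, head_pos, head_yaw_deg, focal_px=500,
                        screen_w=640, screen_h=480):
    """Project a 3D world point into the user's eyepiece view.

    Hypothetical pinhole-camera sketch: real AR systems fuse head-tracker
    data with calibrated optics, but the core math is the same.
    """
    # Translate the point into head-centered coordinates
    dx = world_pt[0] - head_pos[0]
    dy = world_pt[1] - head_pos[1]
    dz = world_pt[2] - head_pos[2]
    # Rotate about the vertical axis by the head's yaw
    yaw = math.radians(head_yaw_deg)
    cx = dx * math.cos(yaw) - dz * math.sin(yaw)
    cz = dx * math.sin(yaw) + dz * math.cos(yaw)
    if cz <= 0:
        return None  # point is behind the viewer; draw nothing
    # Perspective divide onto the eyepiece "screen"
    u = screen_w / 2 + focal_px * cx / cz
    v = screen_h / 2 - focal_px * dy / cz
    return (u, v)

# A label anchored 2 m straight ahead lands at the center of the view
print(project_to_eyepiece((0.0, 0.0, 2.0), (0.0, 0.0, 0.0), 0.0))  # (320.0, 240.0)
```

Once a world point maps to a screen position, the system can draw a label or diagram there and redraw it every frame as the head moves.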


 


This type of technology has been under development for more than 30 years, according to Scientific American, at such places as Harvard University, University of North Carolina at Chapel Hill, The University of Utah, the U.S. Air Force’s Armstrong Laboratory, and the NASA Ames Research Center.


 


The U.S. military has funded much of the AR research over the last decade, not only spending millions of dollars at their facilities but also at universities, says Sonny Kirkley, whose company Information in Place develops augmented reality solutions and has contracts with the Army, Air Force, and Coast Guard. He says that although the university research was not ostensibly for military use, “everybody knew the intent was, how is this going to apply in a military space?”


 


In the 1990s, the term augmented reality was coined at Boeing, Scientific American says, when scientists there developed a prototype solution to help workers put together wiring harnesses.


 


As computer technology improved, augmented reality developed more rapidly. In 2000, a project called MagicBook garnered excitement at the computer graphics SIGGRAPH conference. MagicBook could be read like normal text, but when viewed through a head-mounted display, animated 3D figures acted out the story. In his 2002 article “Augmented Reality and Education: Current Projects and the Potential for Classroom Learning,” Brett Shelton says the MagicBook project sparked people’s interest in the industrial applications of AR.


 


In the last few years, the definition of augmented reality has expanded as researchers have developed additional technologies. In addition to visual augmentation, AR can also encompass auditory augmentation (a computerized earpiece whispers information into a person’s ear), touch augmentation (also called haptic augmentation) or augmentation via a personal digital assistant (PDA). One researcher is even designing visual augmented reality that works with online learning. We will examine examples and learning applications for the various types of AR technology.


 


Visual


 


Because visual AR has been under development for much longer than other types, it is the furthest along in terms of practical application. Rather than being just prototypes, as many of these technologies are, visual AR solutions have actually been implemented in some instances, for example with technicians at Boeing and mechanics at the American Honda Motor Company. Both companies provide schematics on so-called "heads-up" displays to repair and maintenance crews for real-time electronic performance support.


 







[Image: Nomad Expert Technician System. Photo courtesy of Microvision.]


Honda is deploying Nomad Expert Technician Systems from Microvision, hands-free wearable displays that reflect images directly into the user's eye, in its 12 U.S. training centers. The systems give the entry-level and experienced technicians who come for training access to online vehicle history and repair information without requiring them to step away from the car to consult a separate computer display.


 


Microvision says its technology results in higher-quality work from less experienced technicians and 30 to 40 percent efficiency gains measured in real-life trials. The devices cost about $4,000, but the company says they can pay for themselves in less than three months and that a typical dealership adds $16,107 in gross profit per technician by using them.
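The payback claim is easy to sanity-check. The article doesn't state the period for the $16,107 gross-profit figure; assuming it is annual, the arithmetic works out to just under three months:

```python
device_cost = 4000           # approximate Nomad device price in USD
profit_per_tech = 16107      # added gross profit per technician (period
                             # unstated in the article; assumed here to be one year)

monthly_gain = profit_per_tech / 12        # about $1,342 per month
payback_months = device_cost / monthly_gain

print(round(payback_months, 2))  # 2.98, consistent with "less than 3 months"
```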


 


In addition to Honda, some vocational technology colleges are using Microvision devices, also for automotive repair, as is the military for maintenance training. We can imagine many other uses for this type of visual augmented performance support for workers who don't sit at desks and would not normally have access to a computer but who need to consult written material or diagrams. Medical personnel and factory workers are just two groups who could benefit.



Auditory


 


Accenture Technology Labs has developed a prototype auditory AR system called the Personal Awareness Assistant, a small wearable computer device that can record and transmit sound and contains two microphones, a small camera, and voice recognition software. One use for it is in networking. When a user says the words, “It’s nice to meet you, Jane,” the system records the name of the person and takes a picture. Then the assistant stores the audio, image, time and date stamp, and location in an address book for later retrieval. So asking, “Who was that woman I met last week at the holiday party?” will bring up the information.
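The trigger-phrase behavior can be sketched in a few lines. This is a toy text-only stand-in (the real Personal Awareness Assistant uses voice recognition and a camera); the function name, greeting pattern, and record fields are all invented for illustration:

```python
import re
from datetime import datetime

# A greeting like "It's nice to meet you, Jane" acts as the capture trigger
GREETING = re.compile(r"(?:nice|pleasure) to meet you,?\s+(\w+)", re.IGNORECASE)

def capture_contact(transcript, address_book, location="unknown"):
    """If the transcript contains a greeting, store a contact record.

    Sketch of the trigger-phrase idea only: we parse already-transcribed
    text rather than live audio, and store no photo.
    """
    match = GREETING.search(transcript)
    if match is None:
        return None
    record = {
        "name": match.group(1),
        "heard": transcript,
        "when": datetime.now().isoformat(timespec="seconds"),
        "where": location,
    }
    address_book.append(record)
    return record

book = []
capture_contact("It's nice to meet you, Jane", book, location="holiday party")
print(book[0]["name"])  # Jane
```

A later query ("Who was that woman I met at the holiday party?") then becomes a search over the stored name, time, and location fields.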


 








[Image: Personal Awareness Assistant. Photo courtesy of Accenture.]

The Personal Awareness Assistant could have further uses in performance support. Kelly Dempski, one of the researchers at Accenture, imagines that the system could, for example, warn workers at a chemical plant that they are entering an area that’s not safe for them because they lack the required safety gear.


 


Dempski envisions that the system could increase its level of support based on user needs. It could whisper in the ear of a first-day employee and tell her where to report and when. After she’s been on the job for a while, it could provide suggestions on improving the quality of her work.


 


Accenture also foresees uses for the device in what its website calls "collective intelligence." The scenario: a user is in an important business meeting, and a new customer asks a question he doesn't know the answer to. Via the device, colleagues situated elsewhere can give him the answer discreetly.


 


Haptic


 


Touch is another sense that can be augmented with learning applications. A paper presented at the 2004 International Symposium on Mixed and Augmented Reality (ISMAR) discussed the development of augmented Standardized Patients (SPs) in medical training.


 


SPs are traditionally real people who portray patients in order for a medical student to practice such skills as getting a medical history, performing an examination, and so forth. However, it’s often difficult to find people who can realistically simulate various symptoms.


 


Completely virtual patients accessed via a computer raise a different set of concerns, the paper's authors state. Medical students must learn not just how to diagnose patients but also how to interact and communicate with them, and human-to-computer interaction doesn't provide the necessary skills transfer.


 


Fortunately, there’s a third option, thanks to new technology being tested and prototyped by the Eastern Virginia Medical School and the Virginia Modeling, Analysis, and Simulation Center at Old Dominion University. (Some funding also came from the Naval Health Research Center.)


 


Researchers rigged a system that tracks the movement of a student’s stethoscope on an augmented SP. When the stethoscope reaches the proper location, it triggers a sound file played over headphones the student wears. The system was developed using mannequins because of the long periods of immobility needed during testing, but it can be used just as easily on real people.
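The core of that setup is a position hit-test: when the tracked stethoscope tip enters a target region, play the matching sound. A minimal sketch, in which the target coordinates, radii, and sound-file names are invented for illustration:

```python
import math

# Hypothetical auscultation targets on the mannequin's torso, in cm:
# an (x, y) position and the sound file to trigger within a small radius.
TARGETS = [
    {"pos": (10.0, 25.0), "radius": 2.5, "sound": "aortic_valve.wav"},
    {"pos": (-8.0, 18.0), "radius": 2.5, "sound": "mitral_valve.wav"},
]

def check_stethoscope(tracked_pos):
    """Return the sound file to play if the tracked stethoscope tip
    lies within a target region, else None.

    Sketch of the ISMAR-paper setup: the real system tracks the
    stethoscope head and streams audio to the student's headphones.
    """
    for target in TARGETS:
        dx = tracked_pos[0] - target["pos"][0]
        dy = tracked_pos[1] - target["pos"][1]
        if math.hypot(dx, dy) <= target["radius"]:
            return target["sound"]
    return None

print(check_stethoscope((9.0, 24.0)))   # aortic_valve.wav (within 2.5 cm)
print(check_stethoscope((0.0, 0.0)))    # None (off-target)
```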


 


Other training situations that currently use actors, such as rescue worker or other emergency personnel training, could provide additional uses for haptic augmented reality.


 


Online learning and collaboration


 


Augmented reality can even be used with online learning. At the United Kingdom's University of Sussex, researcher Fotis Liarokapis developed "an interactive e-learning AR environment" in which users can view and interact with three-dimensional virtual objects aided by online instructors.


 


Part of an ongoing research project, the Multimedia Augmented Reality Interface for E-Learning (MARIE) uses many of the same technologies as visual augmented reality—the head-mounted display, camera, and computer—to present 3D multimedia information to the learner.


 


But the objects are part of synchronous e-learning: The instructor guides learners to view the various objects in sequence as part of his or her learning strategy, much as a traditional instructor might tell learners to turn to a certain illustration in a manual or textbook. However, with MARIE, the learner can interact with an object, rotating and manipulating it. Future development may enable the addition of other forms of augmentation, such as touch and smell, and Liarokapis is continuing to develop the project in order to integrate it with the e-learning platform WebCT.
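The rotation a learner applies to a virtual object is, underneath, a rotation of the object's vertices. A toy stand-in for that manipulation (MARIE itself renders through a head-mounted display with tracked input, not a script like this):

```python
import math

def rotate_y(vertices, angle_deg):
    """Rotate an object's vertices about the vertical (y) axis."""
    a = math.radians(angle_deg)
    return [(x * math.cos(a) + z * math.sin(a),
             y,
             -x * math.sin(a) + z * math.cos(a))
            for x, y, z in vertices]

# Rotating the point (1, 0, 0) by 90 degrees carries it to roughly (0, 0, -1)
corner = rotate_y([(1.0, 0.0, 0.0)], 90)[0]
```

Dragging the object simply reapplies this with the accumulated angle each frame.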


 


Similarly, Kelly Dempski talks about a project he’s working on at Accenture that combines videoconferencing with augmented reality for augmented collaboration. Two people in different locations have cameras and display screens set up so they seem to be talking to each other from across a table. Then, the two of them can both interact with shared virtual objects in 3D for collaboration or learning.


 


Liarokapis says augmented reality tops virtual reality for learning and training “in terms of cost, realism, and human factors. Virtual reality completely replaces the real environment and this makes it much more difficult for users to adapt.” In 5 or 10 years, he says, we will be using AR for “a number of everyday applications,” including training and learning.


 


Personal Digital Assistants (PDAs)


 


In most of the previous examples, the augmented reality technology was under development and not in commercial use. For now, the technology is still too complicated and expensive for most companies. That will change as computer hardware gets smaller and less expensive. In the meantime, there is a workaround that has arisen over the past few years: augmented reality via personal digital assistant (PDA). The experience is obviously not as immersive, but it is a viable strategy being used with some success in learning, especially when combined with a game-based approach.


 


For example, the Massachusetts Institute of Technology’s Education Arcade (a group that promotes the use of computer and video games in education) joined with the Boston Museum of Science last February to create Mystery at the Museum, a “Hi-Tech Who Done It” they tested with pairs of middle school children and adults.


 


The premise: thieves have stolen a museum artifact and replaced it with a fake. The detective teams, three adult-child pairs, each received a handheld device, which they used in various galleries (along with the museum’s Wi-Fi network) to analyze objects with virtual instruments, interview characters, exchange information with other teams, and more. The children were not only highly engaged but also learned quite a bit.  
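The game mechanics above can be sketched as location-triggered lookups plus team-to-team sharing; the gallery names and clue text below are invented for illustration (the real game resolves location via the museum's Wi-Fi network):

```python
# Hypothetical gallery-to-clue mapping: the handheld learns which gallery
# the team is in and unlocks that gallery's content.
CLUES = {
    "egyptian_gallery": "The display case was opened after hours.",
    "minerals_gallery": "A virtual UV scan reveals paint residue.",
}

class DetectiveTeam:
    def __init__(self, name):
        self.name = name
        self.found = []

    def enter_gallery(self, gallery_id):
        """Unlock the gallery's clue the first time the team arrives."""
        clue = CLUES.get(gallery_id)
        if clue and clue not in self.found:
            self.found.append(clue)
        return clue

    def share_with(self, other_team):
        """Teams can exchange what they have found so far."""
        for clue in self.found:
            if clue not in other_team.found:
                other_team.found.append(clue)

team_a = DetectiveTeam("Team A")
team_b = DetectiveTeam("Team B")
team_a.enter_gallery("egyptian_gallery")
team_a.share_with(team_b)
print(len(team_b.found))  # 1
```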


 


College students in MIT’s Environmental Education courses play Environmental Detectives, an AR game in which they investigate a toxic spill and learn about observation, hypothesis testing, data gathering and analysis in a problem-based learning simulation untethered from the desktop.


 


Can these experiments translate to adult learning? Absolutely. Game-based learning is already being used with adult learners, and combining it with whole-body experiences, as augmented reality does, can make the learning not only more engaging but also more "sticky."


 


The ADL (Advanced Distributed Learning) Academic Co-Lab is working to develop new augmented reality applications through the AR gaming platform developed by Education Arcade. Common elements include “engaging backstory, differentiated character roles, reactive third parties, guided debriefing, synthetic activities, and embedded recall/replay to promote both engagement and learning.”


 


The future of augmented reality


 


Where is augmented reality and learning going? For the answer to that question, we can look to the U.S. military, which has a head start on the technology as well as the educational pedagogy.


 







Fast Fact


By 2014, more than 30 percent of mobile workers will be using augmented reality.


 


Source: Gartner, via Information in Place


The Army is developing the technology not only for performance-support information and graphics to be overlaid on soldiers’ vision—navigation lines or equipment repair instructions, for example—but also for thoroughly virtual characters, buildings, explosions, and so forth. Eventually, soldiers will be able to interact with these virtual items in the real world for sophisticated, realistic, whole-body training.


 


That kind of technology can cost millions of dollars and is still years off—Sonny Kirkley says at least five years and maybe as many as 10, depending on the rate at which the hardware matures. But the Army is getting ready now. As part of their Future Force Warrior program, they’ve contracted with Information in Place not just to work on the technology side but also to develop the instructional methodologies that will be needed in this new brand of training.


 


The Army said to Kirkley, “Great that you can make [the technology] work, but do people really learn?” So his company has been working to develop problem-based learning—they call it Problem-Based Embedded Training—and learner supports that will be used not only with the augmented reality training but also with the mix that is developing of augmented reality, virtual reality (simulations), and real-life training. Kirkley calls this “mixed reality” training.


 


In addition, Information in Place is developing an authoring tool that helps training designers think through the process of developing this new type of blending. "Instructional designers have enough trouble developing face-to-face and Web-based [training]," Kirkley says. "So we've been looking at an authoring support tool that focused on the instructional design process [for mixed reality training] from needs analysis to final product."


 


Kirkley and his colleagues at Information in Place stress that the focus needs to be on what is effective, not on the technology. Just as in choosing between Web-based and classroom training, designers need to look first at the training objectives and then decide which medium will best help learners meet them. And that's not always the most expensive, highest-fidelity solution. In short, augmented reality training is no different from other types: it needs to be grounded in solid instructional design and a well-thought-out cost-benefit analysis.
