Below you will find summaries of ten technologies that will change the world.

The Technology Review Ten: Emerging Technologies That Will Change the World

What if you had a crystal ball that foretold the future of technology?

Imagine, for example, if you had known in 1990 just how big the Internet was going to be 10 years hence. Sorry, that crystal ball doesn't exist.

But in this special issue of Technology Review, we offer you the next best thing: the educated predictions of our editors (made in consultation with some of technology's top experts).

Technology Review - Magazine - TR10: Brain-Machine Interface


Recorded from four separate areas of the monkey Belle's cerebral cortex, the signals provide a window into what her brain is doing as she reaches to touch one of four assigned buttons to earn her reward: a few drops of apple juice.

Miguel Nicolelis, a Duke University neurobiologist who is pioneering the use of neural implants to study the brain, points proudly to the captured data on the computer monitor and says: "This readout is one of a kind in the world." Only about a half-dozen teams around the world are pursuing the same goals: gaining a better understanding of how the mind works and then using that knowledge to build implant systems that would make brain control of computers and other machines possible.

Nicolelis terms such systems "hybrid brain-machine interfaces" or HBMIs. Recently, working with the Laboratory for Human and Machine Haptics at MIT, he scored an important first on the HBMI front, sending signals from individual neurons in Belle's brain to a robot, which used the data to mimic the monkey's arm movements in real time.
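Experiments like this one hinge on a decoder that maps recorded neural firing rates to limb position. The sketch below is purely illustrative, with randomly generated numbers standing in for neural recordings; it fits a simple one-neuron linear decoder by least squares, a loose stand-in for the multi-neuron models an HBMI would use.

```python
import random

random.seed(0)

# Toy stand-in for recorded data: per-time-step firing rates of one
# motor neuron, and the 1-D hand position it helps drive (the true
# mapping here is position = 0.3 * rate + 1.0, plus noise).
rates = [random.uniform(0.0, 20.0) for _ in range(200)]
positions = [0.3 * r + 1.0 + random.gauss(0.0, 0.1) for r in rates]

# Fit the decoder position = a * rate + b by ordinary least squares.
n = len(rates)
mean_r = sum(rates) / n
mean_p = sum(positions) / n
cov = sum((r - mean_r) * (p - mean_p) for r, p in zip(rates, positions))
var = sum((r - mean_r) ** 2 for r in rates)
a = cov / var
b = mean_p - a * mean_r

# The fitted coefficients recover the underlying mapping closely,
# so new firing rates can be turned into predicted positions.
print(round(a, 2), round(b, 2))
```

A real system fits many neurons at once and must run in real time, but the core idea is the same: learn a mapping from brain activity to movement, then drive the robot with the mapping's output.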

In the long run, Nicolelis predicts that HBMIs will allow human brains to control artificial devices designed to restore lost sensory and motor functions. And implants may also shed light on some of the brain's unresolved mysteries.

TR10: Flexible Transistors


The implementation of pervasive computing-the spread of digital information throughout society-will require electronics capable of bringing information technology off the desktop and out into the world (see "Computing Goes Everywhere").

To digitize newspapers, product labels and clothing, integrated circuits must be cheap and flexible, a tough combination for today's silicon technology. Some researchers have therefore abandoned inorganic materials like silicon to develop transistors based on organic (carbon-based) molecules or polymers. Indeed, research teams at places such as Lucent Technologies' Bell Labs, England's University of Cambridge and Pennsylvania State University have made impressive progress, and commercial products are nearing the market.

Now Cherie Kagan, a 31-year-old materials scientist at IBM, may have opened the door to cheap, flexible electronics that pack the mojo needed to bring ubiquitous computing closer.

Her approach is a compromise: transistors made from materials that combine the charge-shuttling power and speed of inorganics with the affordability and flexibility of organics.

TR10: Data Mining


Compiling a simple recommendation list requires a system that can burrow through gigabytes of Web site visitor logs in search of patterns no one can anticipate in advance.

Welcome to data mining, also known as knowledge discovery in databases (KDD): the rapidly emerging technology that lies behind the personalized Web and much else besides.
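The recommendation problem described above can be sketched in miniature. The session data and page names below are made up, and real recommender systems use far more sophisticated algorithms, but simple co-occurrence counting over visitor logs captures the flavor: pages viewed together in the past are suggested together in the future.

```python
from collections import defaultdict
from itertools import combinations

# Toy visitor logs: each session lists the pages one visitor viewed.
sessions = [
    ["home", "laptops", "reviews"],
    ["home", "laptops", "accessories"],
    ["laptops", "accessories"],
    ["home", "reviews"],
    ["laptops", "reviews", "accessories"],
]

# Count how often each pair of pages shows up in the same session.
pair_counts = defaultdict(int)
for pages in sessions:
    for a, b in combinations(sorted(set(pages)), 2):
        pair_counts[(a, b)] += 1

def recommend(page, k=2):
    """Return the pages most often co-viewed with `page`, best first
    (ties broken alphabetically)."""
    scores = defaultdict(int)
    for (a, b), count in pair_counts.items():
        if a == page:
            scores[b] += count
        elif b == page:
            scores[a] += count
    ranked = sorted(scores.items(), key=lambda kv: (-kv[1], kv[0]))
    return [p for p, _ in ranked[:k]]

print(recommend("laptops"))
```

Scaling this from five sessions to gigabytes of logs, with patterns "no one can anticipate in advance," is exactly where data-mining research comes in.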

The emphasis here is on "emerging," says Usama Fayyad, who should know: data mining didn't exist as a field until he helped pioneer it. As a graduate student, he took a summer job with General Motors, which was compiling a huge database on car repairs. The pattern recognition algorithm he devised to search that database became his 1991 doctoral dissertation, which is still among the most cited publications in the data-mining field.

Fayyad later applied similar techniques to scientific images: a geologist could retrieve the image of a previously identified volcano, and the computer would then examine the picture for patterns and search through other images for similar ones. By decade's end, Fayyad had caught the entrepreneurial bug sweeping through computer science labs. In the related area of text mining, results are still preliminary, as various labs experiment with natural-language processing, statistical word counts and other techniques.

Another hot area, says Fayyad, is "video mining": using a combination of speech recognition, image understanding and natural-language processing techniques to open up the world's vast video archives to efficient computer searching.


TR10: Digital Rights Management


Sitting in his office in McLean, Va., Ranjit Singh is at ground zero of what may be the biggest-and bloodiest-of the many battles that will shape the Internet during the 21st century's initial decade.

And then there is Singh, president of ContentGuard, a company that spun out of research at Xerox's Palo Alto Research Center on a mission to commercialize content protection in a wired world. "The Internet changes everything," says Singh, 48, an England-born technology manager whose resume glitters with senior positions at Xerox, Citibank and Digital Equipment plus a number of startups.

A few mouse clicks can send a work to millions of users, but the creators and owners of the content won't necessarily collect dime one (see "Your Work Is Mine!"). Digital rights management, or DRM, is "the catalyst for a revolution in e-content," says Singh.

"DRM will allow content owners to get much wider and deeper distribution than ever before," he maintains. Even if Napster is put out of business by the courts, he predicts that the frictionless distribution of digital content among the millions of Internet users will live on.

TR10: Biometrics


Large companies use fingerprint sensors for logging on to corporate networks, state driver's license authorities employ face recognition for capturing and storing digital photographs, and the first iris-scan-protected ATM in the nation was introduced in Texas in May 1999.

Yet consumers have been reluctant to adopt the technology, and so far, it remains largely relegated to military and government applications. But the emergence of another technology-the wireless Web-could soon change all that, according to Joseph Atick, president and CEO of Visionics, one of the leaders in face recognition technology.

And while the need for security is pushing the demand for biometric systems, other technology developments-increased bandwidth, new cell phones and handheld computers equipped with digital cameras-will create an infrastructure capable of putting biometrics into the hands of consumers.

At the age of 15, while living in Israel, Atick dropped out of school to write a 600-page physics textbook entitled Introduction to Modern Physics. While heading the Computational and Neuroscience Laboratory at Rockefeller University, he sought to understand how the brain processes the abundance of visual information thrown at it by the environment.

TR10: Natural Language Processing


Already we have commercial speech recognition software that can take dictation, speech generation equipment that can give mute people voices and software that can "understand" a plain-English query well enough to extract the right answers from a database.

Emerging from the laboratories, moreover, is a new generation of interfaces that will allow us to engage computers in extended conversation-an activity that requires a dauntingly complex integration of speech recognition, natural-language understanding, discourse analysis, world knowledge, reasoning ability and speech generation.

But the Defense Advanced Research Projects Agency (DARPA) is working on wide-ranging conversational interfaces that will ultimately include pointing, gesturing and other forms of visual communication as well. Getting there will be a huge challenge-but that's exactly what attracts investigators like Karen Jensen, the gung-ho chief of the Natural Language Processing group at Microsoft Research.

And now, she says, they've begun to focus all their efforts on a unique technology known as MindNet. MindNet is a system for automatically extracting a massively hyperlinked web of concepts from, say, a standard dictionary. If a dictionary defines "motorist" as "a person who drives a car," for example, MindNet will use its automatic parsing technology to find the definition's underlying logical structure, identifying "motorist" as a kind of person, and "drives" as a verb taking motorist as a subject and car as an object.

The very act of putting this conceptual network into a computer takes the machine a long way toward "understanding" natural language.
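The definition-parsing idea behind MindNet can be illustrated in a few lines. The dictionary entries, the regular-expression "parser" and the resulting network below are all toy stand-ins; Microsoft's system uses a full broad-coverage grammar and a real dictionary. But the shape of the output is the same: each headword links to a hypernym plus a verb relation with its object.

```python
import re

# Toy dictionary. MindNet parses real dictionaries with a full
# grammar; a single regular expression captures the flavor here.
definitions = {
    "motorist": "a person who drives a car",
    "pilot": "a person who operates an aircraft",
    "kennel": "a shelter that houses a dog",
}

# Matches glosses of the form "a/an KIND who/that VERBs a/an OBJECT".
pattern = re.compile(r"an? (\w+) (?:who|that) (\w+)s an? (\w+)")

# Build a small semantic network from the definitions' logical
# structure: an is_a (hypernym) link plus a verb-object relation.
network = {}
for word, gloss in definitions.items():
    kind, verb, obj = pattern.match(gloss).groups()
    network[word] = {"is_a": kind, "verb": verb, "object": obj}

print(network["motorist"])
```

Run over an entire dictionary, links like these knit individual entries into the "massively hyperlinked web of concepts" the article describes.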

TR10: Microphotonics


Light bounces off the small yellow square, a photonic crystal, that MIT physics professor John Joannopoulos is showing off. Photonic crystals are on the cutting edge of microphotonics: technologies for directing light on a microscopic scale that will make a major impact on telecommunications.

In the short term, microphotonics could break up the logjam caused by the rocky union of fiber optics and electronic switching in the telecommunications backbone. Photons barreling through the network's optical core run into bottlenecks when they must be converted into the much slower streams of electrons that are handled by electronic switches and routers.

Joannopoulos and his colleagues explained how crystal filters could pick out specific streams of light from the flood of beams in wavelength division multiplexing, or WDM, a technology used to increase the amount of data carried per fiber (see "Wavelength Division Multiplexing," TR March/April 1999).

TR10: Untangling Code


The irony isn't lost on Gregor Kiczales, principal scientist at Xerox's Palo Alto Research Center (PARC) and professor at the University of British Columbia in Vancouver, and he has a fix in mind. Kiczales champions what he calls "aspect-oriented programming," a technique that will allow software writers to use the same kinds of shortcuts that those of us in other professions have been using for years.

Other crosscutting capabilities include security and synchronization: the ability to make sure that two users don't try to access the same data at the same time. Because such code is scattered across a program, forgetting to upgrade just a few instances leaves your code collecting bugs. But unlike other research projects exploring similar ideas, Kiczales and his team at PARC have taken the concept out of the lab and into the real world by incorporating the idea of aspects into a new extension of the programming language Java.

While Kiczales admits the tools are still a little raw, there are nevertheless about 500 users of AspectJ today, most of whom found existing tools inadequate for creating long, complicated programs in Java.
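AspectJ weaves crosscutting code into Java programs at compile time; the sketch below is only an analogy in Python, not AspectJ itself. It factors one crosscutting concern, synchronization, into a single decorator that is applied to every method touching shared state, instead of repeating lock-handling code inside each method body.

```python
import functools
import threading

# One lock shared by every method that touches the account balance.
_lock = threading.Lock()

def synchronized(func):
    """A crosscutting concern factored into one place: serialize
    calls to the wrapped method without editing each method body."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        with _lock:
            return func(*args, **kwargs)
    return wrapper

class Account:
    def __init__(self):
        self.balance = 0

    @synchronized
    def deposit(self, amount):
        self.balance += amount

    @synchronized
    def withdraw(self, amount):
        self.balance -= amount

# 100 concurrent deposits of 1; the shared lock keeps the
# read-modify-write of `balance` from interleaving.
acct = Account()
threads = [threading.Thread(target=acct.deposit, args=(1,)) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(acct.balance)
```

The point of aspects is that a change to the synchronization policy now happens in one place, the `synchronized` wrapper, rather than in every method that forgot-to-upgrade bugs could hide in.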

TR10: Robot Design


Robot builders make a convincing case that in 2001, robots are where personal computers were in 1980-poised to break into the marketplace as common corporate tools and ubiquitous consumer products performing life's tedious chores.

One big obstacle remains: It is expensive to design and make robots smart enough to adapt readily to different tasks and physical environments, the way human beings do.

That's why robots have, so far, found a commercial niche only in simple, highly repetitive jobs, such as working on an automotive assembly line, or mass-producing identical items, such as toys.

One promising approach is to fully automate the design and manufacture of robots by deploying computers to conceive, test and even build the configuration of each robotic system: in short, to use robots to build robots.

Last year, in a cramped lab at Brandeis University in Waltham, Mass., Jordan Pollack demonstrated how this automated robotic design and manufacturing might work.

"If we are successful, we could see an industry within a decade which makes low-quantity custom machinery worth more than it costs to make."
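Automated design systems of the kind Pollack demonstrated typically rely on evolutionary search: propose candidate designs, score them in simulation, keep the best and mutate them. The sketch below is a minimal, hypothetical version of that loop; the bit-string "designs" and the made-up fitness function stand in for real robot configurations and a real physics simulator.

```python
import random

random.seed(1)

# A made-up "ideal" configuration; each bit is one structural choice.
TARGET = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]

def fitness(design):
    # Stand-in for a simulator: count choices matching the ideal.
    return sum(1 for d, t in zip(design, TARGET) if d == t)

def mutate(design, rate=0.1):
    # Flip each structural choice with a small probability.
    return [1 - g if random.random() < rate else g for g in design]

# Evolve a population of 20 candidate designs for 50 generations:
# keep the 10 fittest unchanged, refill with mutated copies.
population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    children = [mutate(random.choice(survivors)) for _ in range(10)]
    population = survivors + children

best = max(population, key=fitness)
print(fitness(best))
```

Because the survivors are carried over intact, the best score never decreases, and over the generations the population converges on the target design, no human designer in the loop.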

In Japan, for example, Honda has spent over 14 years building a humanoid robot able to walk, open a door and navigate stairs.

TR10: Microfluidics


Applied physicist Stephen Quake uses valves and pumps, but to manipulate things on a vastly reduced scale: tiny volumes of fluids thousands of times smaller than a dewdrop. Microfluidics, as Quake's field is called, is a promising new branch of biotechnology.

Over the past decade or so, scores of researchers have set out to build microscale devices for many of the basic processes of biological research, from sample mixing to DNA sequencing. But many of those groups have run into roadblocks in developing technology that can be generalized to a broad range of applications and would allow several functions-such as sample preparation, DNA extraction and detection of a gene mutation-to be integrated on a single chip.

The potential payoff of these advances is huge: mass-produced, disposable microfluidic chips that make possible everything from drug discovery on a massive scale to at-home tests for common infections. Quake's first device was a microscale DNA analyzer that operates faster than, and on different principles from, the conventional full-sized version; next came a miniature cell sorter and, most recently, those valves and pumps, described last April in the journal Science.

All the while, Quake has regularly published important findings on the basic physics of biological molecules. When Technology Review went to press, the company was planning to deliver its first microfluidic devices to selected university researchers and industry partners by the end of 2000, and was hoping for a commercial release by the end of this year or early 2002.

And though he has built quite a reputation as a technologist, he hopes soon to focus more of his attention on some of the most pressing questions in basic biology: How do the proteins that control gene expression work?
