by Mike Adams, The Health Ranger, 2005, from the NaturalNews website
Introduction

In modern society, there's very little discussion about what's needed to fundamentally improve our collective quality of life. How do we evolve our societies into something more productive, more rewarding, and more in harmony with our natural environment? Answers are found in many disciplines: psychology, spirituality and religion, health and wellness, and even sociopolitical theory. In this paper, however, I focus on answers that may be provided by technology.

My name is Mike Adams. I'm the president & CEO of Arial Software, the executive director of the Consumer Wellness Research Center, and the author of several books and audio programs on nutrition, medical ethics and food toxicology. I'm also the primary contributor to a number of websites covering technology and medicine, including TechnologyNews.info, FutureWheels.com, SpamAnatomy.com, and HealthFactor.info.

The ten technologies covered here each hold tremendous promise for uplifting our collective quality of life on planet Earth. Some of these technologies have already begun to appear; others will take years or decades. A few are stalled for political reasons or because they threaten the profits of today's influential institutions or industries. Most of these technologies will, at some point, be hotly debated for their social, economic, and political implications. Like nuclear energy, each of them holds both a promise for creative use and, simultaneously, the risk of abuse by those who seek to gain power and control at the expense of fellow human beings.
Taken together, however, these technologies can not only sharply improve the world in which we live but also alter who we are as human beings, and in this way they can forever shape and improve our quality of life.
A more advanced search engine would operate through voice queries and be capable of retrieving results deemed relevant to the interests of the particular user. A nutritionist who searches for “pizza,” for example, would likely be interested in something quite different from a hungry college student entering the same search query. Even as search personalization advances, there’s also the much larger question of what knowledge or content is available to be searched. Google searches only the Internet, and while that may represent a significant quantity of information, it is but a small portion of the total knowledge available on the planet. What’s needed to uplift our civilization is a Global Electronic Library.
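Before turning to that library, here's a rough sketch of the kind of interest-based re-ranking described above. The user profiles, document topics, and scoring scheme are purely illustrative assumptions, not a description of how any real search engine works:

```python
# Minimal sketch of interest-based re-ranking for a search query.
# The profiles, documents, and scoring scheme are illustrative assumptions only.

def personalize(results, interests):
    """Re-rank results by overlap between document topics and the user's interests."""
    def score(doc):
        return sum(interests.get(topic, 0.0) for topic in doc["topics"])
    return sorted(results, key=score, reverse=True)

results = [
    {"title": "Pizza nutrition facts and calorie breakdown", "topics": ["nutrition", "health"]},
    {"title": "Late-night pizza delivery near campus", "topics": ["delivery", "student"]},
]

nutritionist = {"nutrition": 1.0, "health": 0.8}
student = {"delivery": 1.0, "student": 0.9}

print(personalize(results, nutritionist)[0]["title"])  # nutrition article ranks first
print(personalize(results, student)[0]["title"])       # delivery listing ranks first
```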
Presently, we are nowhere close to a Global Electronic Library. Astoundingly, with all the technology available today, we still have no way to access printed books online (other than through limited snippets thanks to Amazon.com).
The desire for profitability and control of intellectual content, coupled with the lack of a micropayment infrastructure, has led most content publishers (of magazines, books, science journals, etc.) to deny the public access to their content unless readers buy the books or pay for subscriptions. This arrangement excludes by default the poorer citizens of the planet and, by doing so, encourages a cycle of global poverty by denying the poor access to educational information that might improve their economic outlook.
Making knowledge "open source," as this paper is, would offer an opportunity for more people to be more thoroughly educated about the world around them. It offers the promise of uplifting entire societies. Planet Earth needs to pursue the construction of the modern-day equivalent of the Library of Alexandria (famously burned during Julius Caesar's military campaign around 48 B.C. and ultimately lost). A freely-available online resource offering instant access to the vast majority of books, publications, and documents on the planet would be considered one of the great wonders of the world and would significantly uplift the intelligence and education of people everywhere. Unfortunately, no one is currently working on such a project. Of course, the Global Electronic Library would need to be available in many different languages, too, so that world citizens could view content regardless of their country of origin. Most of all, the Global Electronic Library must be coupled with an advanced search technology so people can find the information they want.

One of the most significant global trends arriving in the near future is a shift away from fossil fuels and towards hydrogen. The term "hydrogen economy" refers to a global economy powered by hydrogen, not oil. The hydrogen economy is important for the advancement of humanity for several reasons.
First off, the oil economy is fraught with problems:
Beyond the problems with the oil economy, there are additional reasons why a hydrogen economy offers unprecedented benefits to the quality of life of people everywhere:
Remarkably, very little has changed today: with notable exceptions, the vast majority of university professors continue to bore students with ineffective, non-interactive approaches to education that result in little more than the professor’s notes becoming the students’ notes without passing through the minds of either. True learning is experiential. Humans learn best by doing, not by reading or listening to lectures. The more senses are involved (sound, sight, touch, emotions, etc.), the more powerful the learning experience.
That's why today's best teachers are those pioneering individuals who make the effort to engage their students in meaningful activities that reach them at multiple levels.
Let me explain. A person who wishes to experience a learning session via augmented reality would don a pair of see-through glasses that also host two tiny video cameras and a pair of earphones. A tiny computer, perhaps worn on the wrist or around the waist, would recognize the geometry and content of the user's immediate environment and overlay that environment with meaningful images and sounds for a specific purpose.

From the user's point of view, he or she would see and hear what appear to be other people, objects, or events taking place right in front of or around them. These augmented perceptions would appear to be completely real. In technical terms, they would be rendered by the wearable computer with light shading that takes into account both the ambient and directional light sources found in the user's immediate environment. Put simply, the augmented reality system is "projecting" people, objects, environments or other elements onto the environment around you.

In its most simple form, an augmented reality system could, for example, project a different colored carpet or wallpaper as you stroll through your house. On a slightly more advanced level, it could project memory icons and appear to place them strategically throughout your house so that, for example, you would see a certain icon (with an attached note, perhaps) as you open your front door or medicine cabinet. In practical terms, this might serve as a personal reminder to pick up something at the grocery store or to take medications. But these rudimentary applications are just the beginning.
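To make the light-shading idea above a bit more concrete, here is a toy sketch that shades a virtual surface using one ambient term and one directional light (a simple Lambertian model). A real augmented reality renderer would be far more sophisticated; the numbers here are arbitrary assumptions:

```python
# Toy illustration of shading a virtual object so it matches the room's lighting.
# Assumes one estimated ambient term and one directional light (Lambertian model);
# a real augmented reality renderer would be far more elaborate.
import math

def shade(surface_normal, light_direction, ambient=0.3, directional=0.7):
    """Return a brightness in [0, 1] for a surface facing surface_normal."""
    def norm(v):
        length = math.sqrt(sum(c * c for c in v))
        return tuple(c / length for c in v)
    n, l = norm(surface_normal), norm(light_direction)
    # Lambert's cosine law: brightness depends on the angle between surface and light
    diffuse = max(0.0, sum(a * b for a, b in zip(n, l)))
    return min(1.0, ambient + directional * diffuse)

# A virtual face pointing toward an overhead lamp appears bright;
# a face pointing away falls back to the ambient level of the room.
print(shade((0, 0, 1), (0, 0, 1)))   # ~1.0
print(shade((0, 0, -1), (0, 0, 1)))  # 0.3
```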
The more advanced applications of augmented reality have to do with learning. Augmented reality technology holds the promise of immersing individuals in experiential learning environments. Instead of reading about the Civil War in a textbook, a student could observe battles or conversations as if they were there. Animated, lifelike historical figures would seemingly appear right in front of them. The student would see and hear events at a level unmatched by today’s outmoded lecture formats. The applications are tremendous: students could learn anatomy by walking through a human body and observing the functioning of biological systems. Students could learn geography by “flying” around the globe, visiting any city they wished, zooming in and out of detailed renderings of geopolitical regions. Students could learn chemistry by observing, at a simulated microscopic level, chemical structures and reactions. These are but a few of the many potential applications.
The systems and technologies needed to accomplish this include:
(Interestingly, several of these areas are being pushed forward through interactive gaming technology. First person games such as Microsoft's Halo are outstanding demonstrations of real-time visual and auditory rendering technology.)
This industry will dwarf today’s software and computing industries and become one of the most influential technological shifts yet experienced by our civilization. With this technology in place, users could simply obtain different program modules and plug them into their standard augmented reality hardware systems.
Available programs would certainly include:
Hopefully, you see the potential for this sort of technology in terms of uplifting humanity. The examples I've mentioned here barely scratch the surface.
Augmented reality technology holds the potential to be the darkest, most powerful system for mass control of the population ever invented. If people use augmented reality systems to tune in to experiential broadcasts created by corporations and centralized governments, the result will likely be a system approaching “total mind control.”
If advertisers and governments can project anything they want into a person’s immediate environment and make it seem real, there is no limit to the control that could be exercised over the general public. Infants could be brought up in “augmented reality schools” and literally brainwashed into accepting practically any interpretation of history or current events that the program controllers desired, for example. Let this be a warning. Like many technologies, augmented reality holds both tremendous creative potential and a truly horrifying potential for abuse.
Augmented reality can either enslave the world or set it free.
Practical robots offer tremendous potential for enhancing the quality of life for humans everywhere. The robotics industry is emerging now, and progress is steady. The world leader in robotics is Japan, which has invested heavily in social robots - robots that interact with people. The United States, in contrast, is focused primarily on robots that kill people. The vast majority of robotics research in the U.S. is underwritten by military interests. The Pentagon essentially wants to develop a Terminator: a battlefield robotic soldier that can accomplish political or military objectives without resulting in human casualties that cause troublesome dissent back home. Once again, we see that a promising area of technology can be both constructive and destructive, depending entirely on the intent of its creators.
For this section, however, I will focus on the far more peaceful Japanese approach to robots, because this is the area that holds promise for enhancing the lives of human beings.
Honda, Toyota and Sony are all working hard on humanoid robots and have working, walking prototypes right now. Why humanoid? As humans, we've created environments built for humanoid creatures. Our physical environments (cities, houses, stores, etc.) have been constructed for the convenience of people with a certain height, a certain eye level, and a certain stride length. The more easily humanoid robots can navigate those environments, the more helpful they can be to humans. It is this "helpful" category in which humanoid robots offer the greatest promise. At a basic level, these robots promise to free us from physical labor (factory work) and household chores such as doing the dishes, taking out the trash, folding laundry, cooking, etc. This alone, as gimmicky as it may seem, would free people from hours of time-consuming chores. (None of these chores are simple from a robotics point of view, by the way. The technology needed for robots to engage in such tasks is still many years away.) Such robots would probably never be cheap to build, but they would quickly pay for themselves in terms of reclaimed time for their owners.
A professional earning $100,000 a year, for example, might easily waste $25,000 a year worth of her time handling household chores that could be managed by a practical household robot. If the robot costs $50,000, the payback period would be just two years.
That makes a $50,000 robot a reasonable investment for most professionals.
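The payback arithmetic is simple enough to state explicitly, using the hypothetical figures from the example above:

```python
# Worked version of the payback example above, using the hypothetical figures from the text.
robot_cost       = 50_000   # purchase price of the household robot ($)
time_value_saved = 25_000   # value of the owner's time reclaimed per year ($/year)

payback_years = robot_cost / time_value_saved
print(f"Payback period: {payback_years:.0f} years")  # -> 2 years
```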
The next level up, in terms of enhancing the quality of life of humans, is for robots to serve as companions. Are you the parent of an only child? A companion robotic pet or robotic child could teach your child a lot about social interaction, responsibility, friendship, and even help the child learn academic subjects like mathematics, reading, history, literature and science. Are you a lonely retiree? A robotic companion could add a lot to your life through conversations, games, physical activity, and coaching. You see, robotic companions won’t argue, won’t betray you, won’t divorce you, won’t die, won’t fall asleep when you want to talk, and they won’t even eat the favorite food out of your refrigerator. As humorous as these points may sound, they are serious considerations for companionship. In time, many humans may choose robot companions over human friends for these (and other) reasons. Meaningful companionship with robots requires significant leaps in AI (artificial intelligence), portable power, vision and voice recognition systems, and many other technologies.
These technologies are steadily moving forward. In time, robotics engineers will be able to deliver companion robots that do far more than household chores: they will actually add meaning to our lives.
It seems that no matter how advanced notebook computers get, their battery life remains at a standstill: 2-3 hours from most models, regardless of price.
From electric vehicles to portable electronics, today’s battery capacity lags far behind the steady improvements in other areas of technology. Despite the hype and advertising from battery manufacturers, today’s chemical batteries are virtually identical to ones sold three decades ago. It’s not that battery manufacturers aren’t trying to develop something better: efforts to improve battery capacity and power density have been underway for years. Despite the research, arguably the best technology they’ve produced yet is the ingenious battery testing strip that you can use to check how quickly your batteries have gone dead. Today’s battery technology is simply outdated. The chemicals are extremely hazardous to the environment (Nickel-Cadmium, for example, is made from two heavy metals that are toxic to practically all forms of life on the planet), dangerous to nearby users (risk of explosions), heavy (standard car batteries can weigh 70+ pounds) and unreliable. They charge slowly, their output voltage wavers, and their size becomes a major limiting factor when designing portable electronics like digital cameras.
Did I mention they also leak acid from time to time? Clearly, the world needs a breakthrough in portable power. But what does this have to do with uplifting humanity and improving our collective quality of life? Portable power is a crucial enabling technology for a vast array of applications that promise to improve our lives and our planet.
Some of these applications include:
These are just a few of the many important applications of high density portable power. Remember, though, it’s not just the density that matters: it’s the cost as well.
To herald a genuine breakthrough, the next wave of technology needs to be better on all counts: size, weight and cost.
The industry leader in portable zinc power is Metallic Power.
Fuel cells can make the leap, and their adoption by consumers and manufacturers alike is all but assured.
The personal automobile is the source of both fantastic benefits to modern life and terrible consequences. Those consequences range from devastating public health effects due to automobile emissions (asthma, lung cancer, throat infections, etc.) to the rapid alteration of our planet’s own atmosphere (global warming).
But what if a new technology could bring us all the benefits of personal transportation without these drawbacks? Fuel cell vehicles may deliver on precisely that promise. Fuel cell vehicles (FCVs) don't burn fossil fuels or emit toxic fumes; instead, they take a hydrogen-rich fuel source such as methanol, propane or hydrogen gas and convert it directly into electricity to power the vehicle. Like fuel cell battery technology, it's clean for humans, clean for the environment, and safer than carrying around highly explosive liquids like gasoline. Perhaps even more importantly, it would spearhead the shift away from the global oil economy and free the United States and other nations from their heavy dependence on oil - the source of tremendous global strife. There are considerable obstacles to fuel cell vehicles, however: infrastructure obstacles, primarily. Whatever fuel is ultimately chosen for FCVs, we will need an infrastructure of refueling stations ("gas stations"), fuel distribution systems (tanker trucks), fuel refineries, mechanics who can work on such systems, and so on. It's akin to reinventing the entire automobile infrastructure from the ground up.
These enormous startup costs remain the primary obstacle to the widespread adoption of fuel cell vehicles, and it’s a catch-22 situation: people won’t buy the vehicles if there are no refueling stations, and no company will build refueling stations if there are no vehicles waiting to use them. Hybrid vehicles offer a smart interim solution to this dilemma. While today’s hybrid vehicles derive all their power from a gasoline engine, tomorrow’s hybrids could be made to run on either fuel cells or gasoline, depending on what’s available.
Both the gasoline engine and fuel cell would be used to recharge the primary vehicle batteries that provide the operating power. Or the battery could be scrapped and replaced with a zinc fuel cell system where the gasoline engine could kick in when the zinc needs to be recharged. This configuration would eliminate the battery altogether and could still take advantage of the regenerative recharging ability during vehicle braking. Today’s hybrid vehicles like the Toyota Prius have made great strides in the technology needed to mass produce such vehicles. In fact, the Prius is a shining achievement in the marriage of combustion engines and battery technology.
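As a highly simplified sketch of the dual-source idea described above, the vehicle's control logic might choose a charging source something like this. The thresholds and names are illustrative assumptions, not any manufacturer's actual design:

```python
# Highly simplified sketch of the dual-source hybrid idea described above:
# recharge the traction battery from a fuel cell when hydrogen-rich fuel is available,
# otherwise fall back to the gasoline engine, and recover energy while braking.
# Thresholds and names are illustrative assumptions, not any manufacturer's design.

def choose_charging_source(battery_level, hydrogen_available, braking):
    """Pick which source should recharge the battery at this moment."""
    if braking:
        return "regenerative braking"   # recover energy while slowing down
    if battery_level >= 0.8:
        return "none"                   # battery nearly full; no charging needed
    if hydrogen_available:
        return "fuel cell"              # prefer the clean source when fueled
    return "gasoline engine"            # fall back when no hydrogen is on board

print(choose_charging_source(0.4, hydrogen_available=True,  braking=False))  # fuel cell
print(choose_charging_source(0.4, hydrogen_available=False, braking=False))  # gasoline engine
print(choose_charging_source(0.9, hydrogen_available=True,  braking=True))   # regenerative braking
```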
Without question, Toyota has the technical mastery and foresight needed to build a fuel cell hybrid vehicle if the public infrastructure will support its use. We can expect Japanese automobile manufacturers to stay in the lead on fuel cell vehicles, by the way. American car companies are years behind and have resorted to licensing Japanese fuel cell technology rather than creating their own. There are many potential explanations for this lack of vision on the part of American car companies, but there’s no denying the fact that the Japanese are leading the field and seem well positioned to continue doing so.
We are born with physical structures that were designed to help us survive harsh, prehistoric environments, and they did their job well (we're here, aren't we?), but this genetic blueprint doesn't serve our modern lifestyle. In essence, we are walking museums of outdated hardware. What concerns me the most is the "software blueprint" with which we are all born. Human males, in particular, are born with an innate desire to dominate limited resources and control others. From an anthropological viewpoint, this is largely because these behavioral traits create reproductive options for males, but the explanation of why that is the case goes well beyond the scope of this paper.
The point is that males are “born takers” and they seek power and control.
This is part of the reason why males dominate positions of power, both in politics and private business, and it helps explain why so many wars are fought between nations headed by men who seek power. Women are born with “social software.”
They innately seek to understand the individual members of social groups, and they tend to be far more interested in the overall social good than men. Once again, this is well explained through anthropology by the fact that a balanced, well-functioning social group provides an environment conducive to the raising of successful offspring, to which females contribute a far greater personal investment of time and resources than males. The point here is that planet Earth is presently dominated by power-seeking males running outdated software (genetically influenced behavior) that does very little to uplift civilization as a whole. Males are primarily interested in what they, personally, can accumulate and control, not what they can do for the common good.
It is this innate greed and self-interest that limits possibilities of uplifting civilization as a whole through attention to the common good.
We must, as conscious beings, decide what kind of beings we truly wish future generations to be. With the technology of genetic engineering, we are not limited to the blueprint provided by Darwinian evolution (or God, from another point of view). Instead, we can design ourselves to be whatever sort of beings we wish. As a simple example, we could genetically engineer subsequent generations of children to hate the taste of sugar. This simple step would practically eliminate the problem of obesity, since those generations would no longer grow up on soft drinks, candy and refined carbohydrates (the leading causes of type 2 diabetes and obesity). At a more advanced level, genetic engineers could alter behavioral programming, producing a new generation of beings whose primary motivations were based on sharing and working for the common good.
As a civilization, we are nowhere near the level of maturity that should be required before we start toying with our own genetic code. Altering the genetic code of our offspring is no small matter: we are indeed “playing God” and, potentially, violating laws of nature. Even if we had the maturity to approach genetic engineering with wisdom and compassion, we currently have neither the understanding of how DNA actually controls human behavior, nor the technology to selectively replace undesirable behaviors with ones we would prefer. There is no “violence” gene, for example, that could be reconfigured into a “peace” gene. So we are nowhere close to being able to accomplish meaningful genetic engineering of humans even if we wanted, and that’s a blessing, since we aren’t mature enough as a civilization to deal with its implications. But make no mistake: if we are to move beyond the genetic blueprint handed down to us by the great apes, we must at some point consciously and deliberately begin improving our own genetic code. In fact, “evolution” is strangely the correct term here, since genetic engineering is the only mechanism by which any further human evolution can conceivably take place.
That's because human evolution has largely stalled out from a survival point of view. (From a global perspective, very few humans die off due to lack of food or predators, for example.)
To achieve any further genetic evolution, we must eventually become engineers of our own genetic code. With the proper technology, maturity and ethics, we could accomplish tremendous outcomes through genetic engineering.
Some of the more obvious advances might include:
The mere discussion of all this justifiably brings up a long list of very spooky themes like eugenics, "Master Race" philosophies, Frankenstein babies, and of course the movie, "GATTACA."
I'm not at all saying this technology will be easy to grapple with from ethical, social and philosophical perspectives.
What I am saying is that modern day humans are walking museums. Our souls inhabit outdated hardware, and our brains are running software meant for a long-gone era. Genetic engineering offers us the potential to consciously improve our core design. It allows us to decide who we want to be as conscious beings. It simultaneously presents the potential for truly horrific abuses. In my view, we are presently nowhere near the level of global wisdom and spiritual understanding required to justify experimenting with the genetic code of our own offspring.
And yet genetic engineering of the human race remains an essential step to uplifting our species.
Computer/Human Interface Systems
There's no mistaking the significant influence of personal computers and the Internet on our modern way of life. Many of us have so quickly adapted to regular use of search engines and web surfing that it's difficult to imagine life without the Internet. The Internet allows us to research products and companies, share ideas with the public, research nutritional supplements, find articles on historical figures, and do a million other things that simply weren't possible a mere two decades ago.

And yet our interface with the Internet remains the lowly personal computer. With its clumsy interface devices (keyboard and mouse, primarily), the personal computer is a makeshift bridge between the ideas of human beings and the world of information found on the Internet. These interface devices are clumsy and simply cannot keep pace with the speed of thought of which the human brain is capable. Consider this: a person with an idea who wishes to communicate that idea to others must translate that idea into words, then break those words into individual letters, then direct her fingers to punch physical buttons (the keyboard) corresponding to each of those letters, all in the correct sequence. Not surprisingly, typing speed becomes a major limiting factor here: most people can only type around sixty words per minute.
Even a fast typist can barely achieve 120 words per minute. Yet the spoken word approaches 300 words per minute, and the speed of “thought” is obviously many times faster than that. Pushing thoughts through a computer keyboard is sort of like trying to put out a raging fire with a garden hose: there’s simply not enough bandwidth to move things through quickly enough. As a result, today’s computer/human interface devices are significant obstacles to breakthroughs in communicative efficiency. The computer mouse is also severely limited. I like to think of the mouse as a clumsy translator of intention: if you look at your computer screen, and you intend to open a folder, you have to move your hand from your keyboard to your mouse, slide the mouse to a new location on your desk, watch the mouse pointer move across the screen in an approximate mirror of the mouse movement on your desk, then click a button twice.
That’s a far cry from the idea of simply looking at the icon and intending it to open, which would of course be the desired level of computer/human interface as I’ll discuss below. Today’s interface devices are little more than rudimentary translation tools that allow us to access the world of personal computers and the Internet in a clumsy, inefficient way. Still, the Internet is so valuable that even these clumsy devices grant us immeasurable benefits, but a new generation of computer/human interface devices would greatly multiply those benefits and open up a whole new world of possibilities for exploiting the power of information and knowledge for the benefit of humanity.
Let’s take a closer look at those emerging technologies now.
The accuracy of today's voice recognition software is impressive, and the technology is far ahead of voice recognition from a mere decade ago, but it's still not at the point where people can walk up to their computers and start issuing voice commands without a whole lot of setup, training, and fine-tuning of microphones and sound levels. For many people, that's just way too much configuration. This situation is no doubt recognized by the developers of Dragon NaturallySpeaking.
Nevertheless, widespread, intuitive use of voice recognition technology still appears to be years away.
With the iGesture Pad, users place their hands on a touch sensitive pad (about the size of a mouse pad), then move their fingers in certain patterns (gestures) that are interpreted as application commands. For example, placing your fingers on the pad in a tight group, then rapidly opening and spreading your fingers is interpreted as an Open command. This technology represents a leap in intuitive interface devices, and it promises a whole new dimension of control versus the one-dimensional mouse click, but it’s still a somewhat clumsy translation of intention through physical limbs. For more intuitive control of software interfaces, what’s needed is a device that tracks eye movements and accurately translates them into mouse movements: so you could just look at an icon on the screen and the mouse would instantly move there. Interestingly, some of the best technology in this area comes from companies building systems for people with physical disabilities.
For people who can’t move their limbs, computer control through alternate means is absolutely essential.
One device, the HeadMouse, does exactly that. You stick a reflective dot on your forehead, put the sensor on top of your monitor, then move your head to move your mouse. I haven’t tried the technology, so I can’t say how well it works, but the company (Origin Instruments) has a reputation for providing assistive technologies to physically disabled persons, and the HeadMouse is their latest technology. Another company called Madentec offers a similar technology called Tracker One. Place a dot on your forehead, then you can control the mouse simply by moving your head. In terms of affordable head tracking products for widespread use, a company called NaturalPoint seems to have the best head tracking technology at the present: a product called SmartNav, priced at a mere $199, allows for hands-free mouse control via head movement. Add a foot switch and you can click with your feet.
I’ve used this product myself, and while it definitely presents a learning curve for new users, it works as promised.
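For readers curious how such head trackers turn small head movements into cursor movements, here is a rough sketch of the general idea: apply a gain to the reported movement and smooth it a little. The gain, smoothing, and screen values are arbitrary assumptions; this is not the SmartNav's actual algorithm or API:

```python
# Rough sketch of turning a head tracker's small angular movements into cursor
# movement: apply a gain and a little smoothing. The numbers are arbitrary
# assumptions; this is not how any particular product (e.g. SmartNav) actually works.

class HeadPointer:
    def __init__(self, gain=25.0, smoothing=0.5, screen=(1280, 1024)):
        self.gain, self.smoothing, self.screen = gain, smoothing, screen
        self.x, self.y = screen[0] / 2, screen[1] / 2   # start at the screen center
        self.vx, self.vy = 0.0, 0.0

    def update(self, yaw_delta, pitch_delta):
        """yaw_delta / pitch_delta: head rotation since the last frame, in degrees."""
        # Smooth the raw deltas so small tremors don't jitter the cursor
        self.vx = self.smoothing * self.vx + (1 - self.smoothing) * yaw_delta
        self.vy = self.smoothing * self.vy + (1 - self.smoothing) * pitch_delta
        # Apply gain and clamp to the screen edges (y grows downward on screen)
        self.x = min(max(self.x + self.gain * self.vx, 0), self.screen[0] - 1)
        self.y = min(max(self.y - self.gain * self.vy, 0), self.screen[1] - 1)
        return round(self.x), round(self.y)

pointer = HeadPointer()
print(pointer.update(1.0, 0.0))   # turn head slightly right -> cursor moves right
print(pointer.update(0.0, -0.5))  # tilt head slightly down  -> cursor moves down
```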
Tracking eye movements
A company called LC Technologies, Inc. is doing precisely that with their EyeGaze systems. By mounting one or two cameras under your monitor and calibrating the software to your screen dimensions, you can control your mouse by simply looking at the desired position on the screen. Once again, this technology was originally developed for people with physical disabilities, yet the potential application of it is far greater. In time, I believe that eye tracking systems will become the preferred method of cursor control for users of personal computers. Eye tracking technology is quickly emerging as a technology with high potential for widespread adoption by the computing public. Companies such as Tobii Technology, Seeing Machines, SensoMotoric Instruments, Arrington Research, and EyeTech Digital Systems all offer eye tracking technology with potential for computer/human interface applications.
The two most promising technologies in this list, in terms of widespread consumer-level use, appear to be Tobii Technology and EyeTech Digital Systems.
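To give a sense of the calibration step these systems rely on, here is a minimal sketch: the user looks at a few known points on the screen, the raw gaze readings are recorded, and a simple linear mapping is fitted for each axis. The numbers are hypothetical, and real products such as EyeGaze and Tobii use far more sophisticated models:

```python
# Minimal sketch of the calibration idea behind eye-gaze cursor control: record the
# tracker's raw gaze readings while the user looks at known screen points, then fit
# an independent linear mapping for each axis. Purely illustrative; real systems
# (EyeGaze, Tobii, etc.) use far more elaborate models.

def fit_axis(raw, screen):
    """Least-squares fit of screen = slope * raw + offset for one axis."""
    n = len(raw)
    mean_r, mean_s = sum(raw) / n, sum(screen) / n
    slope = sum((r - mean_r) * (s - mean_s) for r, s in zip(raw, screen)) / \
            sum((r - mean_r) ** 2 for r in raw)
    return slope, mean_s - slope * mean_r

# Hypothetical calibration readings for a 1280 x 1024 screen: raw gaze values recorded
# while the user looked at the left/center/right and top/middle/bottom of the display.
ax, bx = fit_axis([0.11, 0.52, 0.93], [0, 640, 1280])
ay, by = fit_axis([0.08, 0.49, 0.91], [0, 512, 1024])

def gaze_to_pixel(raw_x, raw_y):
    return round(ax * raw_x + bx), round(ay * raw_y + by)

print(gaze_to_pixel(0.52, 0.49))  # roughly the center of the screen
```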
Today, all these tasks are accomplished by our brains moving our limbs, but the limbs, technically speaking, don’t have to be part of the chain of command.
With a tactile feedback mouse, clickable icons, for example, feel like "bumps" as you move the pointer over them. The edges of windows can also deliver subtle feedback. The mouse sells for around $40, but it hasn't seen much success in the marketplace. Reviews from users reveal that the vibrating mouse is considered more annoying than helpful, so don't expect to see this technology taking over the world of computer mice. But tactile feedback has potential for making human/computer interfaces more intuitive and efficient, even if today's tactile technologies are clunky first attempts.
The more senses we can directly involve in our control of computers, the broader the bandwidth of information and intention between human beings and machines.
So there's no capability to stack windows or view the depth of objects. It's a classic chicken-and-egg conundrum: who's going to buy 3D displays if the software can't support them, and why would software makers write 3D layering logic if nobody owns the displays? In time, thanks to the "cool" factor of 3D displays, the technology will eventually receive enough attention to warrant the necessary R&D investment by operating system developers like Microsoft and Apple. No doubt, future generations will conduct all their computing with the aid of 3D displays, and the very idea of 2D displays will seem as outdated as black & white movies do to us today.

Another new 3D display device is the Perspecta Spatial 3D globe. This device displays 3D objects or animations inside a globe. Users can walk around the globe and view the objects from any angle. It's a rather expensive item, of course, so early applications for this product focus on medical and research tasks. In time, however, the technology will drop in price, bringing it within reach of more consumers.

In the category of the more familiar, a German company called SeeReal Technologies offers a 20" LCD 3D display that uses eye tracking combined with unique left/right display technology to create a true 3D image on a flat panel monitor without the need for special viewing glasses. These monitors are typically used in the CAD/CAM industry where the visualization of 3D objects is especially helpful.
The lack of support for 3D space in the Windows operating system, however, makes these monitors useless for everyday users... at least for the moment.
At the most basic level, operating systems would need to support fundamental 3D features like:
Note, however, that a 3D flat panel monitor is not the same as a true 3D display system: you can’t walk to the side of the monitor and see the windows behind it. It’s still essentially a 2D system in that it can’t display true volumetric shapes and objects that are viewable from multiple angles.
This would be a true volumetric 3D display system, and it’s here that the technology truly represents a breakthrough. Program application windows could literally be stacked from the rear to the front, and if you peeked around the side of the display, you could see a side view of all the windows at once. With proper software control, objects or documents could be placed in true 3D space: desktop icons, for example, could be lined up along the very back row. Games could display true 3D scenes as if you’re actually in them, and CAD engineers would have the ability to observe their designs in true 3D space. Better yet, if coupled with a motion tracking glove or similar technology, users could use their hands to grasp, move, resize or otherwise manipulate elements in 3D space.
This, of course, opens up an unlimited universe of possibilities for computer/human interaction.
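As a toy sketch of the window-stacking idea, imagine each window carrying a depth coordinate that the user can manipulate and the display draws back-to-front. This is purely illustrative, not how any existing operating system manages its windows:

```python
# Toy sketch of window stacking for a volumetric display: each window gets a
# z (depth) coordinate, and the front view draws the farthest windows first.
# Purely illustrative; not how any existing operating system manages windows.
from dataclasses import dataclass, field

@dataclass
class Window3D:
    title: str
    z: float          # depth: larger z = farther from the viewer

@dataclass
class Desktop3D:
    windows: list = field(default_factory=list)

    def open(self, title, z):
        self.windows.append(Window3D(title, z))

    def bring_forward(self, title, step=1.0):
        for w in self.windows:
            if w.title == title:
                w.z = max(0.0, w.z - step)

    def front_view(self):
        """Draw order when viewed from the front: farthest windows first."""
        return [w.title for w in sorted(self.windows, key=lambda w: w.z, reverse=True)]

desk = Desktop3D()
desk.open("Mail", z=3.0)
desk.open("Browser", z=2.0)
desk.open("CAD model", z=1.0)
desk.bring_forward("Mail", step=3.0)
print(desk.front_view())  # ['Browser', 'CAD model', 'Mail'] -- Mail is now nearest
```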
Today, the gap is very large: a typical keyboard and mouse setup is essentially a two-channel interface system. But tomorrow, the gap could be very small: add a head tracking system, hand-sensing glove, foot pedal switches, voice recognition system, 3D display and a brainwave-sensing helmet, and you’ve created layers of multi-channel interface technologies that allow infinite expression. In time, as this technology is developed and adopted by mainstream users, the gap will continue to shrink. This has enormous positive implications in the workplace, medicine, science, education, social interaction, entertainment and many other areas, which is why it earns such a lengthy discussion in this report.
And it’s not technology that’s “way out there,” either: it’s technology that’s emerging now and will continue to be developed in the years ahead.
Vibrational medicine is a promising area of “technology” (it’s difficult to call it that) that covers a variety of pioneering healing modalities now known to be far more powerful than drugs and surgery in improving the lives of patients.
These modalities include:
There are many other areas as well, but these represent some of the most popular vibrational medicine technologies being used today. Unlike the other technologies mentioned in this report, much of the technology already exists for vibrational medicine. Every therapy mentioned above is being used right now in the United States and around the world.
The challenge is to see their use become widespread and accepted by practitioners of western medicine. Unfortunately, most practitioners of modern (western) medicine are steeped in an outdated mindset of drugs and surgery and tend to shun any therapy that isn’t sanctioned by the pharmaceutical industry. Let’s take a closer look at the kind of paradigm shifts that will be required in modern medicine in order for vibrational medicine to earn increased credibility.
Physical medicine describes the sort of medicine practiced by the western world in the 19th and early 20th centuries. If a foot became infected, the doctor cut it off.
Surgery was regarded as a “heroic” procedure (to a very large degree, it still is), and disease was understood to be caused by the physical malfunctioning of physical organs. Chemical medicine emerged in response to the discovery of penicillin and the realization that certain chemicals - prescription drugs or antibiotics - could target and destroy infectious disease. This belief continues to this day, where diseases are now commonly described as “chemical imbalances” that must be treated with a lifetime of prescription drugs.
Today, western medicine is firmly seated in the belief system of chemical medicine. Pharmaceutical companies, which dominate today’s medical landscape, rely exclusively on this paradigm to market their products and convince patients they need potent chemicals in order to be happy, healthy or sane. This is why nearly all diseases and symptoms are presently described as chemical imbalances that can be corrected with expensive drugs.
This belief is a distortion, however. Energetic medicine (vibrational medicine) is just starting to be explored by the medical mainstream. In energetic medicine, the powerful effects of subtle energy systems are explored and leveraged for healing. Energetic medicine recognizes the whole of the patient rather than the parts (as in physical medicine).
Energetic medicine also believes that the human body is not a chemical dumping ground, and that both disease and health have core underlying causes that go far deeper than mere symptoms.
Not only is it a more advanced perspective on the true causes of disease and health, but it can be offered to patients with virtually no side effects and at very low cost. As one simple example, if a doctor can help a patient laugh heartily for five minutes, the patient will be significantly helped in all three areas: physical, chemical and vibrational. From a physical point of view, the very act of laughing moves lymph fluid, promotes the oxygenation of body cells and organs, and improves circulation. From a chemical point of view, laughter results in the creation of literally tens of thousands of dollars worth of healthful brain chemicals (if you had to buy them, that is) that improve mood, enhance alertness, etc.
From an energetic point of view, laughter helps relax the patient’s body and mind, opens them to enjoying interaction with others, and literally restructures their internal energies. That’s just one reason why Dr. Patch Adams, popularized in the movie with Robin Williams, relied so heavily on laughter as a powerful healing tool.
In a very real way, laughter is perhaps one of the most powerful healing tools available to mankind, and yet today’s hospitals and doctors’ offices are hardly places that inspire unbridled joy.
Let me explain: in most clinical trials, there is something called the placebo effect which describes the level of healing that takes place in patients who were given no drugs and no surgery but who thought they received the drugs or surgery. For example, they would be given inert pills or subjected to a “sham surgery” that actually resulted in no real surgical operation.
This is standard practice in clinical trials. But even though the patients don’t receive the drugs or surgery as part of the study, they routinely show permanent improvements in their health. One study of Parkinson’s patients proved that this genuine health improvement remained strong even twelve months after the placebo surgery, and the measure of improvement was objective: even the medical staff agreed that patients showed measurable improvements. Obviously, if patients are getting better thanks to the placebo effect, it can’t be the drugs or surgery that’s causing the improvement. The healing effect is caused by the mind of the patient. Their belief in the drug or surgery is what’s causing them to get better, not the actual drug or surgery (since they didn’t receive either). Now here’s the amazing part: if you take a closer look at these tens of thousands of studies, you’ll find that the placebo effect has been proven effective in treating approximately 30% of all disorders and diseases.
That’s right: this single mind/body tool has been scientifically proven to reverse or improve 30% of all diseases and symptoms: heart disease, stroke, arthritis, cancer... you name it.
The proof is right there in the studies. This is astonishing: mind/body medicine offers us a powerful healing tool that works with no negative side effects and zero cost... and it’s effective against practically any disease or condition.
So what does western medicine do with this knowledge? It discards it. The placebo effect is routinely tossed out or ignored.
It’s considered a “false” result by medical researchers, even when the numbers prove it to be not just real, but perhaps the most powerful healing tool of all.
Doctors, researchers, surgeons and others in the medical community function like everyone else: when presented with evidence that contradicts their firmly held belief systems, they discard the new evidence because it doesn’t fit their internal model of the way the world works.
Accordingly, the mountain of evidence supporting the placebo effect gets routinely discarded not because it isn't compelling and scientific, but because modern medicine doesn't understand how it could work. It doesn't fit the model. And it's not just the placebo effect that gets ignored. Homeopathy is also routinely ignored or even attacked by western medicine for the simple reason that western medical technology doesn't understand how it works, either. In a homeopathic remedy, an extract from a particular substance such as a flower, a plant, or even a poison like arsenic, is mixed with water and then diluted to such extremes that there's not a single molecule of the original substance remaining in the final mixture.

Yet the final mixture holds the "memory" or the "vibration" of the original substance that was used, and it exhibits scientifically measurable and verifiable effects on biological systems (both humans and animals) when consumed. The evidence showing that homeopathic remedies work is not merely compelling, it is scientifically robust. An honest researcher reviewing the clinical evidence on homeopathy can only reach one of two conclusions: either homeopathy works, or controlled, double-blind placebo clinical trials don't work. In other words, if you measure the effect of homeopathic remedies using the same science and scrutiny as clinical drug trials, you get a significant result that proves homeopathic remedies work.
And yet western medicine continues to throw out this scientific reality, not because it hasn’t been scientifically proven, but because it doesn’t fit the model. Homeopathy is one of the most promising areas of vibrational medicine.
Homeopathic remedies can help people fight infectious disease, reverse cancer and diabetes, improve their brain function, detoxify their systems, recover from wounds more quickly, increase fertility, and accomplish a long list of other health benefits.
The mere presence of the light causes the body to accelerate its healing. Light is a powerful healing tool, and no light is more available than our own sun. The sun is a source of tremendous healing potential. With natural sunlight, people can reverse prostate cancer and breast cancer, reverse clinical depression, enhance their bone density and prevent osteoporosis, vastly improve circulation, accelerate wound healing, and experience a long list of other significant health benefits.
And yet, remarkably, nearly the entire population of the western world has been taught to believe that sunlight is dangerous. People are warned to “stay out of the sun!”
They slather on sunscreen, they wear heavy clothing, and they avoid the sun at all costs. Meanwhile, rates of prostate cancer are skyrocketing and vitamin D deficiency is now one of the most common nutritional deficiencies in America, Canada and Europe. With daily exposure to natural sunlight, the body creates its own vitamin D and puts it to work preventing prostate cancer, breast cancer and a long list of other disorders. People need natural sunlight. It seems so obvious that it’s almost ridiculous having to point it out. And yet fear of the sun is so deeply ingrained in western societies that merely mentioning the phrase, “sunlight is good for you...” earns you gaping stares from practically everyone.
Clearly, the human species didn’t evolve under fluorescent lighting: it evolved under the natural sun, and as human beings, we depend on frequent exposure to the sun for optimum health. Without sunlight, in fact, we cannot function in a healthy way. The growing problem of Seasonal Affective Disorder, where people experience deep depression due to lack of sunlight, is just one of the many clues pointing to the reality that people need natural sunlight in order to be healthy. Lack of sunlight is even part of the reason we’re seeing an epidemic of obesity: sunlight exposure diminishes cravings for carbohydrates and sweets by balancing levels of serotonin in the brain.
How?

Cymatics: the study of sound on physical matter

Sound restructures physical matter. This is evidenced by observing the effect of sound waves on grains of sand spread across the top of a large drum. If you hum into the drum, the sand will form physical patterns that coalesce across the drum head according to slight variations in pitch and amplitude. The science is called cymatics, and much of the original work in this area was conducted by the late Hans Jenny. In cymatics, we see that sound creates waves of force that can move physical objects either towards or away from the source of that sound.

In my own experiments using tone generator software, home speakers, sheet metal, and dirt from my back yard (how's that for high-tech?), I was able to propagate grains of dirt and sand along a radiating path from the source of the sound by simply altering the frequency of the sound. (You have to watch the amplitude, however, because if the sound waves are too strong, the grains of sand will leap right off the sheet metal.) For example, if you start with a sound frequency of 300 hertz (300 cycles per second) and then slowly reduce the frequency (pitch down the sound), the wavelength of the sound elongates and the grains of sand will slowly move away from you. If you start at a low frequency and increase the pitch, the grains of sand will move towards you as if on a conveyor belt.

This same technology, I proposed in 2001, could be used in the bodies of patients to move body fluids and massage organs, among other uses. Diabetic patients, for example, frequently experience a critical lack of blood supply to their feet due to diabetic neuropathy.
By using sound generators under the soles of their feet and broadcasting a sound sequence that slowly increases pitch (then repeats from the original low tone after ramping up), you can actually draw blood into the feet and minimize damage from neuropathy. The same approach can be used for any organ or limb in the body. Sadly, such medical devices do not exist today. Yet this merely scratches the surface of potential for sound therapy. Imagine using two sound sources and coordinating their configuration of standing waves so that peaks of force can be pinpointed along the X and Y axis. Now add a third sound source so that you can operate in three dimensions. With such a system, doctors or surgeons could manipulate internal organs or biological structures with precision without needing to slice into the patient’s body with scalpels. It’s non-invasive surgery through the miracle of sound. To date, no such system exists, but they are theoretically possible. There has been, however, some exciting new research emerging in the world of “medical acoustics.”
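Before turning to that research, the repeating low-to-high tone sweep described above can be sketched in a few lines, using the basic relationship wavelength = speed of sound / frequency (so lowering the pitch lengthens the waves, and raising it shortens them). The frequency range and step count are arbitrary assumptions, and, again, no such medical device exists today:

```python
# Small sketch of the repeating low-to-high tone sweep described above. It uses the
# basic relationship wavelength = speed_of_sound / frequency, so lowering the pitch
# lengthens the wavelength and raising it shortens the wavelength. The frequency
# range and step count are arbitrary assumptions; no such medical device exists today.

SPEED_OF_SOUND = 343.0  # meters per second, in air at roughly room temperature

def sweep(start_hz=100.0, end_hz=300.0, steps=5):
    """Yield (frequency, wavelength) pairs for one upward sweep, to be repeated."""
    for i in range(steps):
        frequency = start_hz + (end_hz - start_hz) * i / (steps - 1)
        yield frequency, SPEED_OF_SOUND / frequency

for frequency, wavelength in sweep():
    print(f"{frequency:6.1f} Hz  ->  wavelength {wavelength:5.2f} m")
```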
Dr. Alexander Sutin at the Stevens Institute of Technology in New Jersey recently presented six papers at the Acoustical Society of America where he described a phenomenon known as time-reversal acoustics that promises to revolutionize modern medicine. Time-reversal acoustics will allow a whole new approach to imaging (seeing inside the body), destroying kidney stones, targeting tumors and even conducting surgical procedures without needing to invade the body. Such technology blends the often mysterious world of vibrational medicine with today’s so-called “hard core science” to bring significant new healing modalities to the world of medicine. If sound can be widely accepted as a healing technology by organized medicine, further exploration into phototherapy, homeopathy and acupuncture is likely to follow.
And that’s how modern medicine graduates from a stage two (chemical medicine) paradigm and moves into stage three (vibrational medicine).
But what’s needed is the acceptance of this technology by the medical community, and that acceptance will take time to achieve.
One of the great failures of modern society is public education. In the United States, the public education system has been denied adequate funding for so long that teachers frequently resort to buying textbooks for their students with their own money. Many schools lack even fundamental instructional tools like desktop computers, and much about public education remains mired in bureaucracy and political power grabs. The advancement of modern civilization will require a quantum leap in the approach of public education. It’s not simply about giving more money to the schools, raising teachers’ salaries, or buying textbooks for students; it’s about changing our entire approach to teaching our next generation of human beings the knowledge and skills they need to succeed in tomorrow’s world. Super-learning systems offer the ability to rapidly accelerate the learning process for children and adults alike. But what is a super-learning system?
Today, it’s a largely fictional technology that’s perhaps best described in the sci-fi movie Brainstorm, released in 1983 and starring Christopher Walken. In Brainstorm, a brain monitoring device could record the thoughts and sensory experiences of one person, then replay them into the brain of another person.
The promise of the device was perhaps best described by one character in the film who said,
It may have been science fiction in 1983, but today the exploration of super-learning is underway. In the last two decades, there has been a tremendous amount of research conducted on multisensory learning theory.
Researchers have found that the human brain learns best through multisensory association, not rote memorization. A child will learn best, for example, when she is engaged in a learning activity that uses sight, sound, emotions, tactile feedback, spatial orientation, and even smell and taste. Learning has also proven to be far more effective when subjects are in a relaxed mental state. Compare this to modern day schools and universities, where to this day, tenured professors mumble over a collection of notes to an auditorium full of students who learn little more than how to take notes and pass rote memorization tests. Sadly, many of today’s institutions of learning aren’t very good at their only mission. Advances in super-learning will require the radical reformation of our learning institutions and yet will simultaneously usher in a new era of prosperity and quality of life. To believe this idea, you have to believe that it is the lack of education that’s largely responsible for the problems of society.
And that’s the point I’ll explain next.
Or they couldn't learn in the same way that others learn. Multiply that situation by twenty or thirty years and you get someone who falls between the cracks of modern society: a petty criminal, a homeless person, a drug addict, or, if you're lucky, someone working from one minimum wage paycheck to the next, just barely surviving, usually with the help of public assistance. Simultaneously, lack of education also affects everyone I haven't mentioned yet: the working middle class and the wealthy. If they never learned about the real history of the world, they're likely to repeat the same mistakes today. If they never learned about other countries, populations, and cultures, they will undoubtedly emerge from public schools with an ethnocentric viewpoint and demonstrate a disturbing intolerance for people of different ethnic backgrounds.
If they didn’t study the great authors, the great artists, or the great poets, they will act in soulless ways, or without an open heart and mind. If they didn’t learn about the history of the universe, our planet, the evolution of the species, and ancient man, they will never come to appreciate the sanctity of their own lives, nor of others’ lives. See, education does more than just keep people out of the gutter: it transforms an ordinary, closed-minded human being into a world citizen. Studying the great masters - the philosophers, the healers, the poets, the political figures, the artists, the scientists, the revolutionaries - is the pathway to being a great citizen of our world. Education is everything to society. Without it, we are all just berry-hunting primates. Education is what allows us to carry memories, lessons and advances from one generation to the next. And it’s a short window: the blink of a human life. In the span of a single lifetime, we as a society must transfer the entirety of our knowledge and wisdom to the next generation. Inevitably, each of us will pass on. Education is the keystone of civilization.
And super-learning brings us the promise of accelerating our education processes so that we can, in a sense, multiply the “bandwidth” of information and wisdom being passed to our children.
At a biological level, learning is simply the building of new associative pathways in the human nervous system. As we learn new things, we don't increase our brain matter, we simply make new neural pathways in the brain cells that are already there. A "smart" person has more interconnected neural pathways than a "dull" person, although they may possess the same physical brain matter. The human brain will create these new neural pathways in response to external stimuli - the more diverse, the better.

So a child who is given the definition of the word "weightless" in a verbal format gets that information in one channel: the audio channel. That creates a one-dimensional association in their brain. But take the same child and show them a movie of a person floating in space while you're saying the word "weightless," and you now have a two-dimensional learning experience: the child both sees and hears the word.

Better yet, take the child to a trampoline and start bouncing up and down. Make it fun, because that invokes the emotional channel. Between bounces, when you're in the air, happily shout, "Weightless!" Now the child gets the word in two more channels, and the understanding of that word is firmly implanted in their brain.
They’ll probably never forget the word. That’s a simplified example of how learning can be made more effective: use immersion and engage multiple channels of experience to introduce people to new concepts. So getting back to the super-learning machine, how can we use this process of learning to create a super-learning experience?
One answer is something I’ve already presented in this report: augmented reality!
Augmented reality systems can provide the imagery, sounds, user feedback mechanisms (like using your hands to control virtual objects that appear to be floating in front of you) and even the tactile sensations that accelerate learning. Properly programmed, these wearable augmented reality systems could guide students through an unlimited series of educational exercises that are experiential, multi-channel, self-paced, fun, and highly effective. As one example, consider the walkthrough history lesson presented earlier in this report: with augmented reality systems, students could physically explore historical events, hold conversations with historical figures, and see, hear and feel history with their own senses.
This represents a quantum leap over today’s public school lessons.
In such a system, a good teacher is one who can properly assess the learning potential of each student, assign the appropriate augmented reality learning programs, keep the students challenged and motivated, and, when necessary, enter each student’s own augmented reality to provide assistance with the learning process. In my own early drafts of such a system, the teacher is networked into each student’s augmented reality feed and can flip from one student’s reality to another, much like switching between application windows in the Windows operating system. Being fully networked with all the students, the teacher can serve as an active mentor, either observing or assisting each student, depending on the lesson context.
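To make the idea of a networked teacher a little more concrete, here is a minimal, purely illustrative sketch written in Python. Every name in it (ARSession, TeacherConsole, switch_to, assist) is an assumption invented for this illustration; no such system exists today, and a real one would carry rendered imagery and sensor data rather than text strings.

    # Hypothetical sketch of the "networked teacher" idea described above.
    # All class and method names are illustrative assumptions, not part of
    # any existing system: each student's augmented reality session records
    # lesson events, and one teacher console holds references to every
    # session and can switch which feed is currently being observed.

    from dataclasses import dataclass, field
    from typing import Dict, List, Optional


    @dataclass
    class ARSession:
        """One student's augmented reality feed (greatly simplified)."""
        student: str
        events: List[str] = field(default_factory=list)

        def latest_view(self) -> str:
            # A real system would return rendered imagery and positional data;
            # here it is just the most recent lesson event, if any.
            return self.events[-1] if self.events else "(no activity yet)"


    class TeacherConsole:
        """Lets one teacher observe or assist any networked student session."""

        def __init__(self) -> None:
            self.sessions: Dict[str, ARSession] = {}
            self.active: Optional[str] = None

        def connect(self, session: ARSession) -> None:
            # Register a student's feed with the teacher's console.
            self.sessions[session.student] = session

        def switch_to(self, student: str) -> str:
            # Analogous to flipping between windows: change which feed is shown.
            self.active = student
            return self.sessions[student].latest_view()

        def assist(self, hint: str) -> None:
            # The teacher "enters" the active student's reality by injecting
            # a virtual prompt into that student's event stream.
            if self.active:
                self.sessions[self.active].events.append(f"[teacher] {hint}")


    if __name__ == "__main__":
        alice, bob = ARSession("Alice"), ARSession("Bob")
        alice.events.append("exploring the Roman Forum lesson")

        console = TeacherConsole()
        console.connect(alice)
        console.connect(bob)

        print(console.switch_to("Alice"))   # observe Alice's feed
        console.assist("Try asking the senator about the aqueducts.")
        print(console.switch_to("Bob"))     # flip to Bob's feed

Whatever form the real technology eventually takes, the basic pattern - one teacher console holding handles to many student sessions and flipping between them - is the point of the sketch.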
The teacher need not even be physically present: a virtual representation of the teacher will suffice, as long as both the teacher and the student share the same rendering of the augmented reality. Also important to super-learning is social collaboration among students. This, in fact, represents the best first step into the world of super-learning until augmented reality technology comes along. By engaging in group problem solving, group tests, and group discussions, students can learn from other students’ associations.
As learning theory research has shown, individuals in a group tend to automatically integrate (“learn”) things originally known by only a few members of that group. Put simply, if one student knows the solution to a problem, and that solution is shared with other students in a team setting, the other students tend to grasp it very quickly. Super-learning, then, has two promising fronts so far: technology (augmented reality) and social learning (group learning environments).
Yet there’s another important factor to consider when it comes to improving our society’s ability to pass information and knowledge from one generation to the next... and this is something we can tackle right now: nutrition.
Unfortunately, the nutritional habits followed by most people today - and especially children - present significant obstacles to learning. In fact, it’s accurate to say that the diet of most American children today is a diet that automatically results in a very low level of intelligence. Let’s look at this more closely. The human brain is a delicate organ. It requires a precise mixture of water, blood sugar, temperature, electrolyte minerals, essential fatty acids and a whole host of other nutrients to function correctly. Alter even one of these just slightly, and brain function suffers dramatically.
For example, a 30% drop in blood sugar - the inevitable result of consuming a breakfast of refined white flour and sugar as found in practically every brand-name breakfast cereal - causes brain “fuzziness”, moodiness, a drop in the ability to concentrate and even tendencies towards violent behavior, especially in young men. The lack of sufficient hydration - a condition affecting the vast majority of Americans - also affects the brain. Since electrical impulses are impeded by even slight dehydration of the brain, not getting enough water literally interferes with proper brain function. Making matters worse, most Americans simply don’t eat enough of the critical nutrients needed to build and maintain the brain from infancy. One of the most common deficiencies is GLA (gamma-linolenic acid), an essential fatty acid found in abundance in human breast milk, but entirely missing from cow’s milk. Baby cows don’t have quite the need for brain matter that human babies have.
Fortunately, nature has made sure that human breast milk provides the nutrients needed to build large, healthy brains. Not surprisingly, clinical studies have shown that babies raised on cow’s milk score lower on intelligence tests than those raised on human breast milk. (But don’t expect the dairy industry to remind you of this little fact...) Beyond the lack of essential nutrients found in the American diet, the brain function of children is especially susceptible to the influence of destructive dietary ingredients such as refined white flour, white sugar and high-fructose corn syrup (the primary sweetener in soft drinks). The regular consumption of these ingredients, researchers have now demonstrated, leads to alarming changes in the behavior of adolescents.
Such behavior is typically described as “hyperactive” or having a “short attention span.”
These children, as you may have now guessed, are typically diagnosed as having ADHD and are frequently dosed with powerful stimulant drugs such as Ritalin. This treatment protocol is entirely unnecessary, since dietary changes alone bring nearly all children back into the realm of “normal” behavior. Studies in the UK with so-called hyperactive children have demonstrated this quite convincingly: change the child’s diet, and their behavior shifts in a matter of days.
Read more about this at http://www.SugarFactor.org. So there’s more to super-learning than merely inventing some cool new technology: we have to start getting serious about preparing the bodies and brains of our children to be ready for learning in the first place.
As a society, we cannot have both a quality education system and an adolescent population that acquires nearly 30% of its dietary calories from junk foods and soft drinks. A child who regularly consumes soft drinks and junk foods is a child who is not biologically prepared to learn. We can address this problem in several ways, but some of the more obvious starting points are to ban all junk food vending machines in public schools, outlaw all advertising of junk foods to children (including television, magazines, and retail merchandising), and start educating parents on the fundamentals of nutrition so that they can make informed choices about what to feed their children. Ultimately, in an advanced civilization, the production, distribution and marketing of ingredients like high-fructose corn syrup, refined white flour, refined white sugar, hydrogenated oils, aspartame, sodium nitrite and other metabolic disruptors would be outlawed altogether.
These substances have no place in a society of intelligent, healthy human beings. In conclusion, advances in super-learning hold tremendous promise for uplifting our civilization, but only if we are biologically prepared for learning (good nutrition).
Until the technology arrives, group learning, total immersion learning, and fundamental improvements in health education can deliver great improvements over the current system of teaching and learning.
You may have noticed that nanotechnology does not appear on this list of technologies. This is no oversight: nanotech isn’t on the list because nanotechnology isn’t a specific technology in the first place. The term “nanotechnology” has been so distorted by the popular press and by researchers who add “nano” to their projects in order to get funding that, today, it essentially means “anything that’s really tiny.” Makers of artificial joints drill tiny holes into the surface of the joint structures and call it nanotechnology.
Why? The holes are nano. Makers of pants that resist stains claim to use nanotechnology, too: the pant fibers are coated with “nano whiskers,” which are, essentially, tiny cloth fibers. Sunscreen makers claim to be using nanotechnology as well: by making the lotion’s particles smaller than ever, the sunscreen can be spread across the skin more easily and block UV rays more effectively. These are just three of the many examples of manufacturers jumping on the nanotech bandwagon with items that fundamentally have nothing to do with the original definition of nanotechnology. By that standard, a household blender is a nanotech device, because it can blend foods into very tiny particles. This isn’t to say that these innovations aren’t useful. They are. But they’re not nanotech.

Yet the hype surrounding nanotechnology has reached insane proportions. When I was a kid, a friend and I created an imaginary pet dog named Super Mutt. Super Mutt was an all-purpose companion who could perform miracles. He could not only mow the lawn, he could take the shape of an apparent clone of one of us and go to school in our place. When our cars ran out of gasoline, we could just stuff Super Mutt into the gas tank and use him as fuel. Super Mutt could do anything we wanted. Nanotechnology is the world’s Super Mutt. Anything you can dream up, somebody will tell you that nanotechnology can do it, regardless of the claim’s merit. Need to clean up all that nuclear power plant radiation? No problem: nanotech robots will reconfigure the materials so they don’t radiate. Is your body’s immune system failing? No problem: little tiny robots will be your immune system for you.
Concerned about global warming? Don’t fret. Airborne nano-robots will process the atmosphere and make sure the greenhouse effect never kicks in. The popular press stories about nanotechnology are filled with such promises. Nanotechnology has become, essentially, the scientific community’s Moses. Need a miracle? Call Nanotech Moses. The upshot of all this is that expectations about nanotechnology are off the charts. People expect it to work miracles. The same hype was once observed about ceramics or even superconductors, but neither panned out. The dot-com Internet hype didn’t pan out, either. Nanotechnology will be no different. Beyond the issues already mentioned here, there are other problems with the concept of nanotech that I’d like to point out:

Everything is nano: The physical world around us is made up of molecular building blocks. Nature is already nano. As human beings, the vast majority of our biological processes operate at the nano level. Everything is already nano, and has been for a long time.
Saying that things are suddenly nano and using the term “nanotechnology” is akin to saying that things are made up of matter and claiming to be pioneering “matter technology.” Well, of course!

Big on hype and government funding: If you’re a researcher seeking a government grant, just add the word “nano” to your project and your odds of receiving funding quadruple. Dropping the word “nanotechnology” into your research, no matter how irrelevant the concept may really be, is a great way to make your work sound important and advanced. I’ve seen many examples of this nanotech hype in the scientific community. All of a sudden, there’s nano research everywhere!
That isn’t because researchers changed their research focus; it’s largely because they attached the word “nanotechnology” to their existing pet projects. Today, there’s a lot of money being thrown at nano-sounding projects that aren’t nanotechnology at all.

Nanotech may fuel the next big stock market bubble: The discussions about nanotechnology in the mainstream today seem eerily similar to the discussions about the Internet in the mid-1990s.
Everybody’s excited, everybody wants to get on board as an investor, and yet nobody has demonstrated a working application of hard-core nanotechnology (nano machines) that would actually generate revenues and improve people’s lives. Nano is shaping up to be the catalyst for the next big stock market boom and subsequent crash (much like the dot-com crash). It is seriously over-hyped.

Nanotechnology in medicine is a sham: One of the most frequently mentioned applications of nanotechnology is medicine, where researchers promise that an army of millions of nanotech robots will travel through the bodies of medical patients and repair cells, destroy tumors, rebuild damaged tissue, and perform other medical miracles.
These researchers forget that the body already has its own nanotechnology that does all this and more! It’s called the immune system, and the best way to improve the health and quality of life of most people would be to support their own natural healing abilities.
Injecting a swarm of tiny robots into their bloodstream - which is precisely what is being proposed by medical nanotech pioneers - is a fundamentally flawed medical strategy that assumes scientists know how to heal people better than the body itself.
The true answers to improved health and quality of life are to be found in nutrition, physical exercise, avoidance of disease-causing foods, and a wholesale shift away from pharmaceuticals and Western medicine. Nanotechnology is not a promising solution for health and healing, but it is a great way to rack up funding grants and, someday, charge patients hundreds of thousands of dollars for complex-sounding treatments.
But remember: the body already has its own nanotechnology, and it’s far superior to anything the human mind can come up with.

Nanotechnology poses a potential danger to humans: One of the few areas of nanotechnology research actually producing results is the study of the toxicity of nano-scale particles.
Experiments headed by Günter Oberdörster, Ph.D., a professor of toxicology in environmental medicine and director of the University of Rochester’s EPA Particulate Matter Center, recently revealed that inhaled nano-sized particles end up in the lungs and brains of rats.
In other studies, nano-particles have been shown to cause extensive brain damage in fish and to disrupt normal liver function. If humans were exposed to such nano-particles, we would very likely start seeing increases in brain disorders or perhaps even cellular malfunctions throughout the body. Nano-particles are so small that they can work their way into the mitochondria (the “power plants” of our cells).
The early evidence strongly suggests that the long-term health impact of exposure to these particles is decidedly negative.