February 28 2008 / by memebox
Category: Technology
The following is a transcript of an audio interview with Steve Omohundro by Venessa Posavec
V: For Memebox.com, this is Venessa Posavec, and with me is Dr. Stephen Omohundro, president and founder of Self-Aware Systems, and research advisor to the Singularity Institute for Artificial Intelligence. Thank you for joining us, Steve.
Steve: Thanks for having me.
V: What do you do and how is that related to the future?
Steve: I’m the president of a company called Self-Aware Systems, and we’re involved both with developing a new form of advanced artificial intelligence, and also we serve as a kind of think tank where we’re analyzing the social consequences of this new technology and trying to guide it in a way that best serves humanity.
V: What is artificial intelligence?
Steve: That’s a discipline that’s been going since, oh, the late 50s, where we try and understand what’s the fundamental nature of human intelligence and build machines which can solve the same kinds of problems that people can.
V: What are some opportunities (and risks) of a self-improving AI?
Steve: The particular approach to artificial intelligence that my company is taking, and now a few other groups are as well, is to try and build systems that understand their own behavior and watch themselves as they work and solve problems; notice what things are working well and which things aren’t working well, and then change themselves, improve themselves, so that they work better. And we believe that this is a very powerful new approach to artificial intelligence that will solve many of the problems that other people haven’t been able to solve. And so on the one hand, it gives rise to great opportunity. On the other hand, it’s a very different kind of system than we’re used to. When a human programmer just writes a program, he understands what he wants it to do, and sometimes there are bugs, but basically the system behaves the way you expect it to. When you have a system that can change itself, basically it writes its own program, then you may understand the first version of it, but unless you’ve done a lot of analysis, it may change itself into something that you no longer understand. And so, these systems are quite a bit more unpredictable than the kinds of software we’ve been used to, so it’s very powerful, but there’s also potential dangers. So, a lot of our work is involved with getting the benefits while avoiding any risks.
V: What kind of impact would AI have on our economy, if our machines can perform many of the tasks that we think of as belonging to humans?
Steve: Well, I think we’re in for a big shift, because essentially every aspect of the economy can be improved by having more intelligence there, by making decisions more effectively. One of the consequences of artificial intelligence will be robotics that can actually behave much more flexibly than the robots we have today. And on the good side, that means a lot of manual labor which people don’t particularly like doing can be replaced by robots. On the potentially negative side, a lot of jobs that people have today will be much more cheaply accomplished by robotic systems, and so there’s going to be a big dislocation in the world economy. Huge potential benefits, way greater productivity, meaning that there’s a lot more potential wealth for the entire world, but exactly how we distribute that, and how the social structure adapts to this new technology, is one of the big questions we’re facing right now.
V: Where does opposition to building intelligent machines usually come from?
Steve: Well, it leads to a very uncertain future, with a lot of potential downsides and risks, at the same time as potential positive things. Any time a new technology comes around, there is a kind of conservatism. There are a lot of people who, quite rightfully so, believe we should move very slowly. Bill Joy, one of the founders of Sun Microsystems, wrote a very influential paper, saying ‘the future doesn’t need us’, suggesting that we relinquish the technologies of artificial intelligence, nanotechnology, and biotechnology, and that we only proceed very very slowly in developing them. That might make perfect sense if we could actually be sure we could do that. The problem is that if a country, say, the United States, decides to stop developing this kind of technology, it just means that the future we end up with is going to be determined by some other country – and that may be North Korea, Iran, or a country with values very different from our own. And so I think there’s really no way to stop it. I think the best path is to understand it very carefully, to be very clear about our values and what we want our future society to look like, and then we can guide this technology to help us to develop that future.
V: What is productive nanotechnology?
Steve: Nanotechnology refers to doing manufacturing and engineering on the nanoscale. A nanometer is 10⁻⁹ meters – it’s very very small, it’s on the scale of individual atoms. There’s nanotechnology today, which builds structures and materials that have intricate structure on the nanoscale. But what productive nanotechnology refers to are nanosystems which can actually build other nanosystems. So you can think of them as little molecular robot arms, or little molecular mills, like in a manufacturing mill, and that leads to an entirely different set of consequences, because one of the things a productive system can make is a copy of the manufacturing device itself. It may be very very expensive, maybe billions of dollars, to create the first one. But once you have the first one, then it can very cheaply make a copy of itself, and then you can get vast manufacturing capability very inexpensively. So it has the potential of making things that are very expensive today, like a laptop computer or solar cells or diamond rings – all of those materials today require very expensive processes to build. In the presence of productive nanotechnology, those would cost about $1/kilogram, and you could have little blocks that would crank out whatever you have the instructions for, at the rate of about a kilogram per hour. So, it has the potential to radically lower the cost of manufacturing.
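The economics described here – one very expensive first device, then exponential self-replication – can be sketched with a toy calculation. Note that the $1B first-device cost and the perfect-copying schedule below are illustrative assumptions, not figures from the interview; only the "billions of dollars for the first one" order of magnitude comes from Omohundro's remarks.

```python
# Toy model of productive nanotechnology economics: each fab can build
# a copy of itself, so the fab population doubles every replication
# cycle and the one-time cost of the first device amortizes away.
# Assumptions (illustrative only): $1B first device, perfect copying.

def fab_count(cycles: int) -> int:
    """Fabs after `cycles` rounds of self-replication, starting from one."""
    return 2 ** cycles

def amortized_cost_per_fab(first_device_cost: float, cycles: int) -> float:
    """One-time build cost spread across the whole fab population."""
    return first_device_cost / fab_count(cycles)

# After 30 doublings, over a billion fabs exist and the $1B first
# device amortizes to under a dollar apiece.
print(fab_count(30))                    # 1073741824
print(amortized_cost_per_fab(1e9, 30))  # ~0.93
```

The point of the sketch is that the headline cost of the first fab is almost irrelevant once replication starts: unit costs converge toward raw materials and energy, which is how the "$1/kilogram" figure becomes plausible.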
V: What is that going to mean for the manufacturing industry in general – with the birth of these 3D printers, where you can just print from home any kind of object, when people don’t have to go to the store anymore to buy something, they just print what they want right from their house?
Steve: Yeah, I think one of the revolutions we’re right in the middle of is trying to understand the nature of intellectual property. Property rights make perfect sense for specific physical objects. If I have something and you take it, I no longer have it. What we’re seeing right now is this raging battle over music and movies, where the record companies and movie companies have this notion of intellectual property. The problem is with music and movies, if you give a copy of it to your friends, you still have your copy. Humanity loves to share, it’s part of our altruism, and so with information it’s very difficult to stop people from giving away things that they like. With the fabs you were mentioning, and with nanotechnology, ultimately, it turns material objects essentially into information. Let’s say you’ve got a laptop computer you really love, it works really well. In this future world, you’ll be able to take the description of that laptop computer and send it to your friend, and then they can create it on their own home fab. So the notion of material objects as possessions gets much more blurry, and the material world becomes much more integrated with the informational world.
V: What is the connection between productive nanotechnology & AI?
Steve: These are, I think, the two most important technologies that are going to transform everything. Productive nanotechnology deals with manipulating the physical world at a fine level, and self-improving artificial intelligence deals with manipulating the informational world at a fundamental level. They’re actually synergistic with each other, in that to really get the power and impact of productive nanotechnology, you need to control it – these are very complicated devices, with billions and billions of components, and to control them effectively, you need a great intelligence. To really make productive nanotechnology effective, you need artificial intelligence. To make the artificial intelligence really powerful and effective, it needs to run on a very powerful computer, which productive nanotechnology can provide. And so the two technologies are very synergistic, in that the combination of them is what’s going to be so transformative. There’s also a very interesting aspect, which is that for both of those technologies we can clearly see what they’re going to look like, what their outcome is like, but on the pathway from where we are today to getting there, there are still some roadblocks and stumbling blocks along the way. Interestingly enough, each of the technologies would enable the other one very quickly. If productive nanotechnology were to be produced first, you could build the very conservative designs that Eric Drexler lays out in his book Nanosystems, which enable the construction of a computer the size of a sugar cube that is more powerful than all the computers on Earth today combined – and it would cost a few cents, and one of those productive nanotechnology manufacturing systems could produce it in a few minutes. So, the compute power that’s available once we have productive nanotechnology will just dwarf anything that’s around today. And that enables some brute force approaches to artificial intelligence.
One, for example, that Ray Kurzweil has written quite a lot about, is the idea of scanning the human brain in detail, discovering what all the neuronal connections are like, and then simulating whole human brains directly. It requires a lot of compute power, but presumably you should be able to get the level of intelligence of a person in that way, and then that can serve as the bootstrap for having the system improve itself, and you get fully, arbitrarily powerful artificial intelligence. So, if productive nanotechnology comes first, then probably a matter of a year or two after, I would expect to see artificial intelligence. If artificial intelligence comes first, then it could be used to solve some of the hurdles on the pathway to productive nanotechnology. There’s a roadmap that the Foresight Institute and Battelle and several other groups just released a couple of months ago, at a big conference in Washington DC, in which they outlined four different pathways toward productive nanotechnology – some that build on top of biotechnology, using DNA as a construction tool; others that use organic chemistry and self-assembling molecules; another approach using scanning tunneling microscopes. Each of these approaches is very promising, but there are some hurdles, some steps that we don’t yet understand how to accomplish. If we had a full artificial intelligence, it’s very likely that those systems would be able to solve those remaining hurdles. So, probably in the matter of a year or two after AI we would have the first productive nanotechnologies. So, I think when we think about the future, we really need to think of those two technologies as working together.
V: You mentioned brain scans. Do you think that once that’s possible, the next step is that our minds can be scanned and uploaded, and we can live on in immortality in a virtual world?
Steve: That’s a vision that some people think is both possible and desirable. I’m a little more circumspect about it. I have no question that intelligence will be possible in that way, but I think there are a lot of philosophical questions which we don’t yet understand about consciousness and about qualia – the sense that when you see the color red, it has a certain quality or a certain feeling to you. Whether those aspects of our humanity are replicable inside of a machine – I think we’re going to just experiment with them and see. What is the experience of a person whose mind or brain is copied and simulated in a computer – is that program really you? Does that program feel pain? I think those are questions that are going to really probe to the depths of our humanity: what does it mean to be human, and what do we want to do? So, I’m hoping that that transition is one that we explore very carefully and very delicately. I would not predict that people will be rushing whole-hog to be uploaded into computers, but I think it’s going to shine a lot of light on our experience, and we’ll make those decisions as we come to them.
V: Based on the research up to this point, and talks from the Singularity Summit, when do people think the first AI will be created?
Steve: Well, it’s challenging to try and put precise dates on it. Artificial intelligence, the name, was introduced by John McCarthy in 1956, and because intelligence is something we do so naturally, it’s very difficult to see what’s hard about it. Interestingly enough, the things that we thought were hard, such as playing chess, turn out to be really easy for computers. In fact, the best computer chess player beat the world champion in 1997, whereas things that are very easy for us, say, distinguishing a dog from a cat, no computer system can yet do. In the early 60s, when the field of AI was just getting going, people expected it would be about 3 or 4 years before they would be able to solve a problem like distinguishing dogs from cats. In fact, Marvin Minsky assigned a master’s student a summer project of doing machine vision; great advances have been made since, but the field is nowhere near complete. So the field of AI, unfortunately, has made optimistic prognostications during its whole evolution, and so it’s dangerous to say when it’s going to happen. But for the kind of brute force methods I was talking about, such as simulating brains – in the analysis that Ray Kurzweil does in his book The Singularity is Near, he argues that if you look at technological trends, how fast computers are getting cheaper and faster, somewhere around 2030 is the point at which a brute force approach to AI should work. If there’s a new idea – and that’s something I’m very interested in; my company feels that our technology has a number of new ideas that may be the critical ones – a new idea can bring dramatic improvements in these systems, and that’s very unpredictable. So, I’d say the situation looks like 2030 or somewhere thereabouts we’re likely to be able to do it with brute force approaches. Using more clever techniques, using new ideas and insights into intelligence, it could happen much sooner.
Some of the people at the Singularity Summit claimed that they expected their systems to be fully intelligent within the next 4 years. So, I think there’s a lot of uncertainty and we’ll have to see how it plays out, but it could be soon. I think in our thinking about what the future looks like, we have to account for the possibility that it could be in the next few decades, certainly.
V: What resources do you recommend for people who want to better understand both artificial intelligence and the future in general and what the future will hold?
Steve: There are several books that are really good. If you want to understand the ideas and technical details of artificial intelligence, there’s an excellent textbook called Artificial Intelligence: A Modern Approach, by Stuart Russell and Peter Norvig; it’s the standard textbook now that’s used in universities. There are also a huge number of resources on the web of various kinds, and I think your website is a very exciting new direction there, where you aggregate and present information that helps one form a coherent picture of the future. Some other places – I’m on the Advisory Board of the Singularity Institute, and their website has a lot of interesting information; Ray Kurzweil’s site has a lot of pointers to new developments in artificial intelligence; the Foresight Institute has a lot of references to nanotechnology developments, as does the Center for Responsible Nanotechnology. So I think we’re seeing a real flourishing of interest in these topics, and people writing excellent books and resources, so that it’s possible to become knowledgeable much more quickly than it was a few years ago.
V: Can you give us a general landscape of what you think the future will look like?
Steve: Well, I think it’s pretty clear what the technology looks like. I think what humanity does with that technology is much more uncertain. There are both positive utopian scenarios, and very negative dystopian scenarios. Let me just say a few of the positive things we can expect. One of the great things about nanotechnology is that it will enable us to advance medicine way beyond where we are today. The current technique of surgery is basically taking a big knife and cutting into your tissues. Post-nanotechnology, that’s going to look extremely crude, almost like bloodletting looks to us. With nanotechnology, you can have devices that go into the human body, find pathogens and eliminate them, find cancerous cells and destroy them, clean out your arteries – basically restore health at a very fundamental molecular level. One thing that I think is unquestionably a positive good will be the elimination of human disease. That leads to a somewhat more controversial thing – many people are also excited about the prospect of eliminating aging and potentially eliminating death – a prospect that scares a lot of people and excites a lot of people. The social consequences are huge. If people no longer die, the whole structure of our society will change dramatically, but I think that will certainly be a technological possibility. The presence of productive nanotechnology will dramatically lower the cost of material goods, so there’s the potential to eliminate poverty throughout the world. One of the challenges, though, is that there are mechanisms in today’s economy whereby the wealthy become extremely wealthy, and the poor tend to get poorer, so there’s sort of a divide between the richest and the poorest humans. These technologies can amplify that, because of their very nature. It is possible for them to be used by the very wealthy to enhance their wealth.
It’s also possible, if we choose to structure society properly, that if you’ve got your home nanotech manufacturing device, you can give a copy of it to your friend, and they now have one, and you can each make whatever material goods you need – so there’s the potential for great equalization of wealth and prosperity throughout the world. So, finding how we want to structure this post-AI, post-nanotech economy, I think, is one of the big questions that faces us. Another huge area is conflict. Today, we have wars and skirmishes all over the planet. These technologies, on the one hand, will enable weapons far more powerful and more deadly and more scary than anything we have today. On the other hand, they also enable defensive mechanisms. For instance, one of the big worries a lot of people have today is about biological terror. If someone were to develop an artificial pathogen which was like an airborne ebola virus or something, it could be extremely devastating and could kill millions and millions of people, and we really don’t have very effective defenses against that kind of thing today. With nanotechnology, we can defend against those things – of course it also enables the creation of much more horrific tools. And so there’s a balance of defense and offense, and finding a way to create a social structure wherein we eliminate violence and conflict that ends up damaging the participants – I think that should be one of the highest priorities. And yeah, we’ve been developing some possible approaches to that. So on the positive side of the ledger, we have the potential to eliminate disease, eliminate aging, eliminate poverty, eliminate hunger, eliminate pollution, clean up global warming, eliminate war – and so it’s kind of a utopian vision, but it all looks technologically possible.
On the negative side, there are all sorts of potential dangers where you have runaway technology or you have some small piece of society which ends up trying to take over the rest. So the choices we make today in guiding this technology will lead us to either an amazing, beautiful future, or to one that’s very negative.
V: So, 2008 is right around the corner. Can you list some specific predictions and possibly some emerging trends that you see for next year?
Steve: Well, I don’t expect artificial intelligence and nanotechnology to come about on that short of a time scale, but I think the sort of “normal” technology trends will continue – Moore’s Law making computers cheaper and hard disks cheaper, bandwidth is increasing, there’s a new FireWire spec that will enable high definition video to be transmitted into and out of our computers. One trend I think is really exciting, that was just announced – well, Wikipedia is a phenomenon that is phenomenal. It’s sort of an aggregation of all of human knowledge, and just in the few short years that it’s been around, it’s grown to become really an amazing and powerful resource. I think it has a number of problems though, and right now it appears that there’s quite a lot of political squabbling inside of it, and particularly the rise of kind-of powerful Wikipedians who feel their role is to delete knowledge – there’s a big furor right now over whether mathematical proofs belong in it, as if that’s not human knowledge. So, Google is coming up with a potential competitor they’re calling Knol, where an individual has ownership of a page, and others can comment on it and make suggestions, but the owner controls what the content is. That seems to me a more stable model for the long term, but I think it’s actually good that we have competing models. One or the other, or maybe some other idea, will come about to make access to human knowledge way easier than it is today. I think another trend that we see right now is that cell phones are just spreading all over the world, and they’re becoming more and more powerful. I think most people’s access to the internet in the world will not be through computers, it will be through handheld devices like cellphones. I see a lot of advancement in that area.
V: Again for 2008, do you think there are any disruptive events that may occur that people just don’t see coming?
Steve: There’s nothing that I’m aware of. Of course, if it’s really disruptive, I probably wouldn’t be aware of it. I don’t see 2008 as having a huge unforeseen thing. I guess it depends on what you consider disruptive. For example, the social networking sites really exploded in this past year, things like Facebook. On the other hand, they’ve been going at a lower rate for a number of years, so it’s not clear whether that’s disruptive or not. I think the technology trends that underlie it are gonna smoothly continue on, I think Moore’s Law has at least another 10 or 15 years to go, and that will drive amazing and powerful technology. The computers we’ll be able to buy are just getting better and better. What’s disruptive, I think, is the social response to that. I just saw that YouTube, which enables people to put their own videos on the web, has had a big impact in the Western countries, but I was just reading that Saudi Arabian teenagers who feel sort of oppressed by their society are using YouTube as a means of expression, and they’re videotaping themselves doing wild and crazy things – hanging out of cars racing down the freeway – and it’s shocking the elders of that society, but it’s become a mechanism for social change that wasn’t necessarily predictable just by looking at the technology. So, I would expect most of the unexpected things to come from the social level rather than from the technological level.
V: Craig Venter and his team are working on creating the first synthetic lifeform. What are your thoughts about that?
Steve: Well, I think it’s very interesting. Biotechnology is right at the cusp of major advances. I mean, we just sequenced the genome a few years ago and now already there are companies like 23andMe, who for a thousand dollars will examine your DNA and tell you if you have a wide variety of genetic diseases. Craig Venter has been at the forefront of that. He recently finished a trip around the world collecting DNA from the oceans and finding many many new species. So, I think our understanding of the natural world, particularly how biology works, is just exploding. I would expect sometime in the next few years, in addition to having the complete genome, we’ll have a complete map of all the proteins in a cell, and the metabolic processes by which those proteins interact, and the paths by which they’re created. Once we have that, I think it will give us insight into human disease that’s just beyond anything we’ve got now. It will also potentially enable us to develop new molecules and drugs in an amazing way. And – and this is somewhat controversial – to develop new lifeforms, to modify DNA to produce an outcome we want. I think it hasn’t been quite announced yet, but everyone’s been pre-announcing it, and this artificial lifeform is certainly a milestone in the history of our understanding of and relationship with living things. Where does that lead? Today we have big controversies about stem cells and about abortion. When we can choose the genomes of our children, what social consequences will that lead to? Certainly everybody’s going to want to eliminate genetic diseases, and I don’t think there will be much controversy about that. But why stop at eliminating diseases? Why not say, well, I want my kids to be a little smarter? I want my kids to be more beautiful? I want them to be stronger?
Pretty soon, the choice of your child’s genome has nothing to do with your own genome, so we’re kind of breaking the paths that evolution has run on, of heritable characteristics. Now, the characteristics of our children will be what is popular – memes choose genes in that world. I expect there’s going to be a firestorm of controversy about that, and I think it could be dangerous. If we don’t do it carefully, you could have a runaway fad changing humanity in a very dramatic way, without necessarily understanding what the full ramifications of those changes will be. So I think biotechnology will be the first – y’know, it’s biotechnology, nanotechnology, and AI are the huge tsunamis coming towards us, and I think biotechnology will be the first of them. And actually maybe that’s good. The decisions and understanding and the way of thinking about ourselves and our relationship to the world – biotechnology will help us get clarity on that, so that when nanotech and AI come, we’ll have at least some grounding in this new future we’re creating.
V: Do you have any predictions for the next 5 years, through the year 2012?
Steve: Well, I think we can expect to see computer power, and particularly hard disk storage, getting cheaper and cheaper. I mean, I just bought a 750GB hard drive at Fry’s on Black Friday for $150, which just shocked me. It kind of made me think back to a few years ago, when I was envisioning what it would take to have a terabyte of storage – you could fit it in one house and it would cost less than a million dollars – and now you can just go down to the store and buy it. So that means you can store thousands of movies, and there’s no sign of those trends ending for a number of years. So, the machine of five years from now… for instance, cameras are getting very cheap, and if you have very inexpensive storage, at some point it’s cheap enough that you can just record your entire life. There’s going to be a date, probably within the next five years, where I expect to see a lot of people who just record everything in their lives. That’s sort of intriguing from a historic point of view – they’re creating a full history. It also leads to a very different society. David Brin has a very interesting book called The Transparent Society, looking at some of the consequences of very inexpensive cameras and recording devices, where pretty much all public space eventually becomes monitored, and it dramatically changes the balance between criminals and police forces. There’s potential for abuse of that, and there’s also potential to use that kind of “sousveillance” – surveillance is sort of viewing from above, sousveillance is sort of viewing from below – and if you have an empowered citizenry that has inexpensive cameras and is recording everything, it could create a new level of people being accountable for their actions and could transform society in a big way.
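The storage trend here can be turned into a hedged back-of-envelope estimate. Only the 750 GB-for-$150 data point comes from the interview; the ~14-month price-halving period and the 1 GB/hour video rate below are illustrative assumptions of this edit, not Omohundro's figures.

```python
# Back-of-envelope for life-logging storage costs. Data point from the
# interview: 750 GB for $150, i.e. $0.20/GB. Assumptions (illustrative):
# price per GB halves every ~14 months; lifelogging video compresses to
# ~1 GB/hour over 16 waking hours a day.

def price_per_gb(start_price_per_gb: float, months_out: float,
                 halving_months: float = 14.0) -> float:
    """Extrapolated $/GB, assuming a fixed price-halving period."""
    return start_price_per_gb * 0.5 ** (months_out / halving_months)

gb_per_year = 1 * 16 * 365  # ~5,840 GB of video per person per year

# Cost to store one year of one's life, now vs. five years (60 months) out:
cost_now = gb_per_year * price_per_gb(0.20, 0)     # $1,168 at 2007 prices
cost_later = gb_per_year * price_per_gb(0.20, 60)  # roughly $60
print(round(cost_now), round(cost_later))
```

Under these assumptions, a year of continuous recording drops from four figures to the price of a dinner out within five years, which is the mechanism behind the prediction that mass life-logging arrives in that window.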
V: And finally, what specific or general predictions do you have for the next 10 years, through the end of 2017?
Steve: Well, that’s getting into the realm where we may actually see some early forms of these technologies coming about. AI in particular – today we have AI systems that are not of the great power of the systems I was talking about, but they’re getting more and more powerful. You see them in speech recognition systems. That’s becoming more and more common. And particularly as computer hardware gets cheaper, even just today’s pretty dumb algorithms for things like speech recognition and handwriting recognition and image recognition – very brute force approaches – will become inexpensive and viable. I think we can expect to see the consequences of that much more widely. One area where I expect to see that is video games. I think there’s going to be a kind of blending of movies and video games. Already today, the most powerful video game platforms can almost produce photo-realistic imagery. That kind of interactive entertainment is likely to become more and more important. I expect it to kind-of expand beyond the teenage boy customer, and new forms and new versions of those systems which are appealing to adults and to women – I think that’s a trend we are going to see a lot more of. And what form it takes, I think, is unclear. People were talking a lot about virtual reality and Second Life. None of those systems have really caught on in a huge way yet, so I would expect that over that time frame our understanding of the connection between the physical world and the virtual world is going to become much clearer.
V: Thank you for speaking with us.