Vint Cerf takes his title of Chief Internet Evangelist for Google seriously. He is knee-deep in several projects to bring the next versions of the Internet into the world -- or in some cases, beyond the world and into the solar system. One of his pet projects is an extraterrestrial Internet that uses a protocol other than IP.
Cerf sat down with Network World's Cisco Subnet editor, Julie Bort, at the annual Digital Broadband Migration conference in Boulder, Colo., to discuss the InterPlanetary Internet, cloud computing standards, the Semantic Web and other topics.
About a year ago, you began talking a lot about a concept called the "InterPlanetary Internet" -- stretching the Internet so that it can reach into outer space. What can you share about that project?
It's happening. It's not using the Internet Protocol. It's using the new Bundle Protocol that was developed as part of the more general notion of delay- and disruption-tolerant networking.
We recognized as far back as 1998 that the traditional Internet design carried the implicit assumption of good connectivity and relatively low latency, whereas in a space environment, when you are talking at interplanetary distances, you have speed-of-light delays that can run from minutes to days. We need this new Bundle Protocol to overcome the latencies and all the disconnects that occur in space, from celestial motion [and from] orbiting satellites.
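To get a feel for the delays Cerf is describing, the one-way signal time is just distance divided by the speed of light. The sketch below uses standard approximate distances (these figures are illustrative, not from the interview):

```python
# One-way speed-of-light delays at interplanetary distances.
# Distances below are rough textbook approximations (assumption,
# not figures quoted in the interview).

C_KM_PER_S = 299_792.458  # speed of light in km/s

def one_way_delay_seconds(distance_km: float) -> float:
    """Return the one-way signal delay in seconds over a given distance."""
    return distance_km / C_KM_PER_S

# Approximate distances in km.
distances = {
    "Earth-Moon": 384_400,
    "Earth-Mars (closest approach)": 54_600_000,
    "Earth-Mars (farthest)": 401_000_000,
}

for name, km in distances.items():
    d = one_way_delay_seconds(km)
    print(f"{name}: {d:.1f} s one-way ({d / 60:.1f} min)")
```

Even at Mars's closest approach the one-way delay is about three minutes, and over twenty minutes at the farthest point -- far outside the round-trip timing that TCP/IP handshakes and retransmission timers assume, which is why a store-and-forward approach like the Bundle Protocol is needed.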
The Bundle Protocols are running onboard the International Space Station. They are running in a number of locations around the United States in the NASA labs and in academic environments. There's a thing called the Bundle Bone, which is like the IPv6 backbone, that is linking a lot of these research activities to one another. There's at least one rather experimental implementation of the Bundle Protocol for the Android operating system, but it's not production quality, so it really needs to be revisited.
There is a spacecraft called EPOXI, formerly known as the Deep Impact spacecraft (it fired a penetrator into a comet a few years ago in order to expose the interior for spectrographic analysis). The spacecraft is still in orbit around the sun, and it just visited Comet Hartley 2 in November 2010. We've uploaded the InterPlanetary protocols to that spacecraft and we've done testing of them at approximately 80 light seconds.
So during 2011, our initiative is to "space qualify" the interplanetary protocols in order to standardize them and make them available to all the space-faring countries. If they choose to adopt them, then potentially every spacecraft launched from that time on will be interwoven from a communications point of view. But perhaps more important, when the spacecraft have finished their primary missions, if they are still functionally operable -- they have power, computer, communications -- they can become nodes in an interplanetary backbone. So what can happen over time is that we can literally grow an interplanetary network that can support both manned and robotic exploration.
Part of the motivation for all of this is that most of the space exploration up until now has been supported by point-to-point radio links. We see much more complex missions needing a richer communications environment. We also found that because of the delay-and-disruption tolerance, we can get more data back from the scientific mission.
Here on Earth, Google has been engaged in numerous projects to "speed up the Internet" such as the new Internet protocol called SPDY. Should we be paying attention to SPDY and is there much support for it?
Yes, you should pay attention. These are efforts by Google to make Internet implementations more efficient. A lot of this stuff is available through open source. You don't need very many people at Google to make something happen; that's what's so cool about Google. You have a little "Sherpa Team" that actually does this work.
There used to be a lot of talk about the Semantic Web. Is it still hot -- or not?
Well, I don't know if it's still hot. I can tell you that Tim Berners-Lee is still very, very determined. He calls it "deep linking" now and it's related to how you identify data in the network in such a way that you can converge or conjoin data coming from disparate sources and still make sense of it. My impression is that it's a tough slog, and it's been going for about a decade now. But Tim's been successful in the past, so I would not rule this out as a potential positive outcome, but it's a long haul.
Last year, you were talking a lot about cloud standards, and now it looks like Rackspace's OpenStack has got a groundswell of support. Can you declare it a winner?
I would not declare it a winner yet, and it's not because I have any preference for something else. By my count, there must be 25 or 30 different groups that are looking at cloud-based standards. The real issue is going to be implementation and testing. Until we have some serious experience in getting clouds to interact with each other in various ways, I think we won't know what works and what doesn't.
All of these efforts are laudable, but they are also going to have to prove themselves in the real world before we can declare any winner. There's a real question of what functionality we're looking for.