
Vint Cerf Answers Your Questions About IPv6 and More

Posted by timothy
from the tell-us-more-about-your-beard dept.
Last week, you asked questions of "father of the Internet" Vint Cerf; read on below for Cerf's thoughts on the present and future of IPv6, standards and nomenclature, the origin of his beard, and more. Thanks, Vint!
What can we do to get ISPs to switch on IPv6?
by jandrese

One of the biggest hurdles to IPv6 adoption today is that the average home user simply cannot get an IPv6 address from their ISP. Tunnels are hacker toys, and completely impractical/impossible for people who are using their ISP's "home router". What do you think we can do to convince ISPs to start rolling out IPv6 *before* there is a crisis? Everybody agrees that the transition will go smoother if we take it slow and easy, but nobody is willing to make the first step, and IPv4 addresses are still being inexorably depleted the world over.

VC: I have been asking myself (and others) this question for some years now! When you try to explain that they can't really expand the Internet effectively relying solely on cascading NAT boxes they kind of glaze over. Sadly, now that we really are in the IPv4 end-game, there is not much choice but to deploy NATs to try to make dual-stack work as a transition plan. If ISPs had started implementing IPv6 5 years ago we would not have this problem. I think only pressure from consumers, businesses and governments to demand IPv6 implementation will help. Even then, I can imagine the bean counters insisting that there be incremental revenue for implementing IPv6 despite the simple fact that the only serious path to supporting smart devices (including smart grid, mobiles with IP addresses, etc) is through implementation of IPv6. We are also going to have to find some incentives for users to upgrade their home routers to handle both IPv4 and IPv6. Maybe a trade-in policy???

IPV6, and a related question
by gr8_phk

With IPv6 we could all have fixed IP addresses (or blocks of them) at home. Is this likely to happen? What do you see as the pros and cons from the ISP point of view for doing this? I think the reasons I want it are the reasons they don't, but I'd like to know how someone with your perspective sees it.

VC: We could actually have a fairly large group of IPv6 addresses at each termination point. An advantage is that one could then run servers but some ISPs might find that problematic because of the potential uplink traffic. I ended up paying for "business" class service to assure fixed IP addresses for that reason. I did not have servers of video or imagery in mind, but, rather, controllers and sensors (and ability to print remotely, for instance).

Hardware accelerated IPv6
by vlm

Hardware accelerated ipv4 routing/switching was out there, I dunno, at least a decade ago, or more. Your expectations on the rollout of hardware accelerated ipv6 switching?

VC: It probably won't happen until there is clear evidence of an IPv6 tipping point. Of course, it makes every bit of good sense and the IPv6 format is better geared to hardware assist than IPv4.

Why the colon in IPv6?
by jandrese

The biggest thing I hate about IPv6 is that the standard format uses colon as the digit separator. On most keyboards, that is a fairly awkward character to type, especially in rapid fire between groups of hex digits. Also, it causes problems for the many many programs that specify ports after IP addresses with a colon (like URIs!). IPv4's use of the period instead is much nicer. If you didn't want to reuse the period (so programs can distinguish between the two types of addresses more easily), why not use dash instead? It's just as visually appealing and doesn't require you to hit shift to type it. It would have saved a whole lot of ugly brackets around IP addresses.

Any aesthetic qualities of the colon are lost when you have to do this:
http://[1005:3321:5a52:4fca::1]:8080/
instead of: http://1005-3321-5a52-4fca--1:8080/

And that second example was noticeably quicker for me to type.


VC: The colon was needed to allow for compressed display of IPv6 addresses and to avoid confusion with a dotted representation of IPv4. It was apparently the only character thought to be unencumbered for this purpose at the time. Other slashdot readers may have additional comments on this.
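The compressed display Cerf mentions, and the brackets the colon forces into URLs, can be seen with Python's standard ipaddress module (reusing the address from the question purely for illustration):

```python
import ipaddress

addr = ipaddress.IPv6Address("1005:3321:5a52:4fca:0:0:0:1")

# The longest run of zero groups collapses to "::" in the compressed form.
print(addr.compressed)  # 1005:3321:5a52:4fca::1
print(addr.exploded)    # 1005:3321:5a52:4fca:0000:0000:0000:0001

# Because ":" also separates host from port, literal IPv6 addresses
# must be bracketed in URLs (RFC 3986).
url = f"http://[{addr.compressed}]:8080/"
print(url)
```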

Hindsight is 20/20
by eldavojohn

If there was one thing you could go back and change about TCP/IP -- something that is far too entrenched to change now -- what would it be?

VC: Well, I wish I had realized we'd need more than 32 bits of address space! At the time, I thought this was still an experiment and that, if successful, we would develop a production version. I guess IPv6 is the production version! I would also have included a lot of strong authentication mechanisms but at the time we were standardizing TCP/IP (version 4), there was no practical public key crypto capability ready in hand.
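The scale of that 32-bit limit is easy to work out (a back-of-the-envelope sketch):

```python
# IPv4 addresses are 32 bits; IPv6 addresses are 128 bits.
ipv4_space = 2 ** 32             # 4,294,967,296 -- fewer than one per person alive
ipv6_space = 2 ** 128

print(ipv4_space)                # 4294967296
print(ipv6_space // ipv4_space)  # every IPv4 address maps to 2**96 IPv6 addresses
```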

.here TLD?
by TheLink

Do you think there should be a .here TLD, reserved officially for local use in an analogous way to the way that the RFC1918 IP addresses are reserved officially for private use?

Currently many are coming up with their own ad hoc TLDs for local use. In my opinion this is suboptimal. Having a standard official TLD would allow more interesting things to "organically grow" on it.

(See also: http://tools.ietf.org/html/draft-yeoh-tldhere-01)


VC: Hard to say, honestly. I am not sure just what ".here" might actually mean unless intended to be self-referential (in other words, the server is the same as the referring party - kind of like 127.0.0.1?). In that case, it need only be a reserved term rather than something you register in.
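For comparison, the RFC 1918 reservation the question draws the analogy to is easy to check programmatically; a reserved ".here" TLD would be the DNS counterpart of these address blocks (a quick illustration with Python's ipaddress module):

```python
import ipaddress

# The three address blocks RFC 1918 reserves for private use.
private_blocks = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

addr = ipaddress.ip_address("192.168.1.10")
print(addr.is_private)                                   # True
print(any(addr in block for block in private_blocks))    # True
```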

Ooh! Settle An Argument For Me!
by Greyfox

Through my deep and thoughtful meditation on IP addressing, I have realized that an IP address is simply a number. We canonically break it up into 4 smaller numbers that are presumably easier to remember. However if you stack all the bits of those smaller numbers together, you get a bigger number, and that number is actually the address. Moreover, every C standard library that I have ever tried is able to resolve this bigger number to the correct address. If I ping a 10 digit number in that address range, the C standard library will figure it out. It is my position that this is a feature and not a bug.

It seems that the OS X Firefox Guys don't agree with me. Admittedly they do have an RFC on the subject, but their browser breaks a known behavior that every other TCP/IP client program on the planet exhibits, including other operating system versions of Firefox!

Would you kindly bludgeon one of us into submission? I don't really care which side of the argument you come down on, but one of us has to be able to say "Because Vint Cerf said so!"

Oh, and while I've got you, I'm sick of writing stateless http applications. May I have your permission to go back to writing plain old socket servers on other ports, providing data based on whatever query format I feel like implementing? It kind of looks like REST, I suppose, except that I don't have to load 14 layers of frameworks to get to that point.


VC: LOL! Actually, most of us assumed that any way to generate the 32-bit number should be acceptable since the connection process doesn't actually use the text representation of the IP address. I think any value in the range 0 to 2^32-1 should be acceptable as an IP reference. As to stateless operation, I know what you mean; you have to get used to figuring out how to stash intermediate state (cookies usually)...
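The point that the dotted quad is just a rendering of one 32-bit integer can be sketched with Python's standard library (the integer below is a hypothetical example, chosen to land in the 192.0.2.0/24 documentation range):

```python
import ipaddress
import socket
import struct

n = 3221225985  # a 10-digit decimal number inside the 32-bit IPv4 range

# The ipaddress module accepts the raw integer directly.
print(ipaddress.IPv4Address(n))  # 192.0.2.1

# The classic C-style route: pack the integer big-endian, then render it.
print(socket.inet_ntoa(struct.pack("!I", n)))  # 192.0.2.1
```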

SMTP, DNS, U.S. Customs
by molo

It seems that it is getting more and more difficult to successfully run your own SMTP server. See, for example, this post responding to the idea that a user was going to move off gmail to their own server. Are there any prospects for meaningful SMTP reform that would lower the barrier to entry for legitimate emailers?

DNS has been often criticized as a centralized single point of failure / censorship. Have you been following the development of namecoin and P2P DNS? Are these systems viable in your estimation? How would you improve them or encourage their adoption?

The U.S. Customs department recently created headlines in seizing domains. These seizures appear to be extra-legal (not founded in law), but ICANN has gone along with them. Are those fair statements? Should ICANN's trustworthiness be suspect as a result of this process?


VC: On SMTP, the problem is spam. If SMTP relays could be authenticated in some way, perhaps running your own would work better. As of now, it is a problem to validate relays and most ISPs don't allow it. Maybe we will make some progress in this when we can strongly authenticate/validate end points in the network better. Regarding alternatives to DNS, it would be interesting to find ones that might be less prone to the business models that produce domaining, for example, but I have not yet seen evidence that such an outcome is likely to gain traction. I am not sure that ICANN has any ability to resist effectively the so-called seizures of domain names by the DHS/ICE. I am disturbed by the argument that this is comparable to FBI "seizures" of contraband for many reasons, but I think the ability to resist this would rest on a successful court challenge to the practice, not on an ICANN policy.
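The relay-authentication mechanisms that address part of this (SPF and DKIM, both standardized before this interview) work by publishing DNS TXT records that receivers can verify. A hypothetical zone-file sketch for an example domain, with the key material elided:

```
; Hypothetical sender-authentication records for example.com
; SPF: only 203.0.113.5 may send mail claiming to be from this domain
example.com.                      IN TXT "v=spf1 ip4:203.0.113.5 -all"
; DKIM: public key used to verify signatures added by the sending server
selector._domainkey.example.com.  IN TXT "v=DKIM1; k=rsa; p=<base64-public-key>"
```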

Smart Grid
by kiwimate

You're currently on the Governing Board of the NIST Smart Grid Interoperability Panel. What is the state of standards development, and how big an impact will moving national infrastructure communications into the public IP arena have on our ability to strengthen and expand our infrastructure? Conversely, how big are the threats in this new world?


VC: The process is moving along reasonably well although adoption of the standards that are emerging in the US will depend on endorsement by FERC and NERC. I think the standards can be very beneficial to the creation of interoperable energy management systems, edge devices, and device controllers. I am pleased that IPv6 forms a major basis for edge communication but concerned that the domestic ISPs, with some notable exceptions, have been slow to roll out support for IPv6. I imagine that an IPv6-equipped mobile could easily become a remote controller for a wide range of IPv6-labelled devices.

What would you like to see developed next?
by techmuse

I'm curious what technologies you would like to see developed next, or what you think would be most important to develop next. In other words, what do you think researchers should work on now that would be most significant? (Oh, and thank you for changing my life!)

VC: My major wish right now, apart from ISP implementation of IPv6, DNSSEC and more end/end crypto and strong, 2-factor authentication, is the implementation of true broadcast IP. Satellites raining IP(v6) packets to Earth in range of millions of receivers could make widespread digital distribution of information far more efficient.

Interplanetary Internet
by immakiku

TCP/IP started as a military project but has been adapted for all the Internet applications we see today. What sort of applications do you foresee/imagine for the Interplanetary Internet, aside from the stated purpose of coordinating NASA devices?

VC: The primary terrestrial applications are military tactical communications and enhanced mobile communications. I see a role for these delay and disruption tolerant protocols in public safety networking as well. All devices in the system could also serve as relays to allow for the dynamic creation of Mobile Ad hoc Networks, making more resilient emergency services communications and any number of popular user apps on mobiles.

The IP of TCP/IP
BY WHOM

The head of UN's WIPO believes that the Internet (and obviously the stack on which it runs) should have been patented. How do you believe it would have evolved had TCP/IP been protected by patents?

VC: This is really pretty silly. Bob Kahn and I consciously did NOT patent or control distribution of the design and protocol specifications for TCP/IP for the simple reason that we wanted no intellectual property barriers to the adoption of TCP/IP as an international standard. I see absolutely no utility in the proposition to patent TCP/IP. It would have given a reason for SNA, DECNET and other proprietary protocols to persist since their inventors/purveyors could have argued that licensing TCP/IP (had it been patented) would be of no interest to them - indeed, its use opened up interoperability among many brands of computers (and networks) leading to more competition.

Has the Internet become too centralized?
by slashsloth

That is to say, do you think that too much power & control now lies in the hands of the Internet Service Providers, thereby making it, at least in terms of control if not routing, too centralized & too easily manipulated by the powerful few. I guess this question stems from a viewpoint that it should be somehow democratic & free (as in free speech). Also do you share my pedantic belief that the public Internet should be spelt with a capital 'I'?

VC: As to the latter, yes, I strongly believe that the capital was intended to refer to the public Internet (I have written on this in the past). We accepted the notion that "internet" could use the protocols but be private and disconnected from the public Internet but that "Internet" referred to the latter. Some people disagree but I still believe it to be a useful distinction. As to centralization, it is possible that the lack of competition among Internet access providers is a bad outcome. I have always been a proponent of intra-modal competition through open access to underlying transport networks but not everyone agrees with me.

How can we bring trust back to the internet?
by Madman

One of the secrets of the internet's massive success is the lack of controls over it; if there had been strict security and processes in place it would likely not have come about. One of the downsides is that all our security measures are tacked-on, there is no built-in security to the protocols used on the internet and as a result security is a massive problem. How do we go from the wild west to having at least a reasonable level of trusted computing?

VC: Better and stronger authentication would help. 2-factor "passwords" and registration of devices. We may also need to adopt international norms for acceptable usage of the net with some kind of enforceable rules with reciprocity. Until we have some collective and cross-border ability to bring miscreants to justice, we will continue to see relatively unconstrained behaviors including harmful ones.
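One concrete form of the 2-factor authentication Cerf alludes to is the one-time-password scheme of RFC 4226 (HOTP) and RFC 6238 (TOTP), which needs nothing beyond the standard library. A minimal sketch; the secret is the RFC 4226 test vector, not a real credential:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                    # dynamic truncation offset
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, period: int = 30) -> str:
    """RFC 6238 time-based variant: HOTP over the current 30-second window."""
    return hotp(secret, int(time.time()) // period)

# First value from the RFC 4226 test vectors:
print(hotp(b"12345678901234567890", 0))  # 755224
```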

No more "peace and love" in software designs
by BeforeCoffee

I take it that the "route around failures" and other original design features of TCP/IP and the Internet as a whole relied upon trusting others always having good intentions and cooperating. Those designs were necessary at the time and the reason the internet exists today.

Nowadays distrust, firewalls, and coding defensively is the norm (or it should be). In that light, the internet's design seems creaky and vulnerable.

Do you have any thoughts or feelings on how software has changed and seemingly become so treacherous since you first designed TCP/IP? Would you advocate a ground-up redesign of internet transports and protocols starting with TCP/IP?


VC: I have always been a fan of trying clean-sheet designs. Sometimes you discover retrofits that don't require a re-design. In other cases (such as delay and disruption tolerance) you need serious re-implementation of new designs. It is clear that authentication, various forms of cryptographic protections and the like are needed at several layers in the architecture. Deploying something wholly new is hard, though.

Future of the Internet
by H0bb3z

Do you feel the security concerns over collected information will trump the leveraging of information in future Internet technologies? Will there be a separate "opt-in" or "opt-out" web to cater to each preference?

Context: There have been many controversies recently regarding the collection of data and the privacy of individual information. As we move forward, I've heard a mixed set of messages regarding the direction we should expect to see.

Consumerism is indeed driving innovation and everything is going mobile these days (there's an app for that, I think). One example I heard recently of the benefit of the convergence of information and mobility: a consumer can point their mobile phone at a shelf of groceries, get an active "overlay" of information regarding the products and determine which best suits the customer's needs. On the flip side, sensors that track customer behavior are installed at the grocery shelf, and based on detected behavior (like stopping for a moment to reminisce about Cocoa Puffs even though you know they are bad for you) the system initiates a coupon for whatever the vendor may feel would provide enough motivation to purchase their product -- in the example, a $1-off coupon sent to the mobile phone of a shopper.

Will this become reality in the future?

I think there are benefits to be had, but also am fiercely protective of my personal information and preferences.


VC: At least in America, we have tended to readily give up privacy in exchange for convenience. Credit card information bases are a good example of that. If one can divorce identity from behavior patterns, it might be acceptable to many to benefit from system reactions to our choices and behavior if these are not correlated with identity.

Postel and Crocker
by vlm

So you went to high school with Postel and Crocker, according to Wikipedia; did you guys hang out all along or meet up decades later?

VC: Crocker and I have been best friends since about 1959. Jon was in a later class and we didn't know him until we all reconvened at UCLA in the late 1960s.

A Simple Pogonological Question
by eldavojohn

What level of success does TCP/IP owe to your glorious beard?

VC: LOL!! not much! I just got tired of nicks and cuts from shaving my whole face and went with the beard!! I did shave it off once, but quickly re-grew it after being painfully reminded why I had grown it in the first place!!!
This discussion has been archived. No new comments can be posted.

  • by rudy_wayne (414635) on Tuesday October 25, 2011 @11:56AM (#37832332)

    I find it odd that nobody ever mentions that during his tenure as head of ICANN they were one of the biggest scumbag organizations of the internet.

  • by Anonymous Coward

I think the problem is legacy machines. Not the unwillingness to upgrade, but the sheer expense. And I don't mean the expense of new hardware. One issue is legacy software, a subset of what I mean by legacy machines. And software isn't so nice to replace, no matter how you spin it.

    I hope IPv4 and IPv6 can live side by side for as long as necessary.

    • by Anonymous Coward

      As someone who's been running native IPv6 for the past year and a half, I have never come across software that had any serious problems with IPv6. Any operating system and any piece of software built within the past 15 years is, in my experience, fine. Every now and then you find a corner case of a tool not parsing the colons properly, but you can work around that by using the host's name instead of its address.

      Maybe there are old legacy business systems, those that run banks or something, that have problem

    • by unixisc (2429386)
      What % of boxes are legacy machines, legacy as in there-is-a-snowball's-chance-in-hell-that-it'll-ever-support-IPv6? I'd suspect that it's really low, and for such things, exceptions can be made, and they can be allowed to remain IPv4. But if a situation was created whereby anything that can upgrade must upgrade, this problem wouldn't be banging on our door today.
      • "Let's blame legacy machines" is an incredibly silly idea and it is so easy to prove how dumb it is.

        Legacy Systems = "Old stuff"...
        Now tell me how fast is the quantity of "Old stuff" increasing? Who is making the new "Old stuff"? (gaaak!)
        (Where can I find the next generation of really old stuff? ...)
    • A big problem for years was the price/performance of routers that were big enough to run large businesses or medium-large ISPs - they'd use hardware acceleration for IPv4, but didn't have anywhere near as good performance for IPv6, even if they had hardware support and weren't doing it all in software. And there was a big chicken-vs-egg problem of getting ISPs to spend more for fast IPv6 hardware when there wasn't enough IPv6 demand, while customers weren't pushing to go to IPv6 because few ISPs supported

  • by OverlordQ (264228) on Tuesday October 25, 2011 @12:01PM (#37832404) Journal

    The colon is hard to type? It's two pinkies

    • by Desler (1608317)

They were probably meaning that in light of the fact that ipv4 addresses are easily typed with only one hand on the numpad, whereas ipv6 requires using shift and hitting colon.

      • by vlm (69642)

They were probably meaning that in light of the fact that ipv4 addresses are easily typed with only one hand on the numpad, whereas ipv6 requires using shift and hitting colon.

        And the letters a thru f, most of the time.

        I suppose a standard for ipv6 addresses using octal digits and dash - as spacer could work, but it'll be hard to share with the 16 bit boundaries of "standard hex". I'm thinking the only way to make those people happy, is a standard like this:

        http colon slash slash 11111110110101001010101 ... 128 bits of binary ... 101010101000111/index.htm (gotta be a .htm extension for these types)

    • Re:What? (Score:5, Interesting)

      by Arlet (29997) on Tuesday October 25, 2011 @12:20PM (#37832600)

      Besides, if you have to enter so many numeric IPv6 addresses that the colon is bothering you, you're doing it wrong.

      • *snerk*
      • by jandrese (485)
        You might think that, but in practice you end up typing the addresses a lot more than you would think, especially when you're working on small disconnected networks with intermittent connectivity and doing lots of peer to peer traffic with embedded devices. When something autoconfigures an address, that address needs to be added to DNS somehow, and that's often a fairly difficult step, if you even have DNS.

The colon requires you to keep smacking the shift key, which is awkward because your left hand is
        • by Bengie (1121981)

          " especially when you're working on small disconnected networks with intermittent connectivity and doing lots of peer to peer traffic with embedded devices."

          That's what static private IPs are for or just static public IPs for that matter.

fd::1 for one machine, fd::2 for another... really, these addresses aren't that hard to remember or type.

  • by tlhIngan (30335) <slashdot AT worf DOT net> on Tuesday October 25, 2011 @12:06PM (#37832468)

    That's the big problem.

NAT decouples the internal private network from the external network - and I'm sure any IT admin who has had to renumber their internal network would agree it's a huge PITA on IPv4. Luckily, though, they don't have to do it when their ISP gives them a new range of IPv4 addresses, except for the few machines that are using them directly (DNS servers mostly - other servers can often hide behind NAT).

They see the IPv6 transition as hard because no one makes NATv6 boxes (though it does exist, and heck, NAT-PT makes it possible to isolate the internal network's protocol from the external network - stay IPv4 internally and let NAT-PT translate to IPv6 for the Internet, etc.). They see the ISP giving them a prefix and changing that prefix willy-nilly, causing lots of fun for everyone inside. They'd rather do it the IPv4 way - give everyone a private IPv6 address (from fc00::/7) and worry only about the few border routers and such.

    Even worse - home users, who most likely do NOT have a working DNS setup and have to type the damn things in. And just when my parents have gotten used to typing the long string of nonsense garbage to hit the printer, the ISP changes their prefix and they have to learn a new set of IPs.

If we break the concept of true end-to-end connectivity (already broken thanks to firewalls), the IPv6 transition could've been done years ago - everyone replaces their Linksys or Cisco router and goes on their way, while the router does NATv6/NATv4/NAT-PT as appropriate. It just works, my parents don't have to learn anything new (and I don't have to fiddle with their machines and everything), etc. etc.

IPv6 is sorely needed, yes. But the assumptions made 20 years ago when it was designed just aren't true today, and no one wants to play network admin for their entire extended family and neighbourhood. And enterprise is slow because they're worried about end-to-end connectivity for security reasons. NAT breaks that, so it's a nice secondary layer beyond the firewall to ensure they don't accidentally leave their customer database exposed (it might be protected on IPv4, but exposed on IPv6).

We can probably switch a good chunk of the Internet to IPv6 by having a transition plan of home users replacing their routers with ones that do NATv6/NATv4/NAT-PT - they're used to stuff like that and it makes life easy. Ditto enterprise customers - most businesses will probably just switch if they only have to replace one box and not have to learn the ins and outs of IPv6 and getting every PC to have a routable address it doesn't need.

    • by Bookwyrm (3535) on Tuesday October 25, 2011 @12:16PM (#37832558)

      I wish I had mod points at the moment to moderate you up, because not many get the problem.

      It gets even worse if you imagine that, some day, someone comes up with a protocol that's better than IPv6 (not a bigger address space, for goodness' sake, but *better*). If people compulsively cling to the dead-end-to-dead-end connectivity model with IPv6, trying to migrate that network to the next generation of technologies that come after when every lightbulb has its own IPv6 address will bring network innovation to a stand-still.

Unfortunately, NAT-PT and related approaches do not always work because there is not a clean separation in many applications between network-layer stuff and application-layer stuff. The application/network-service APIs have to be cleaned up first.

    • by csnydermvpsoft (596111) on Tuesday October 25, 2011 @12:38PM (#37832794) Homepage

      NAT decouples the internal private network from the external network - and I'm sure any IT admin who has had to renumber their internal network would agree it's a huge PITA on IPv4. They see the ISP giving them a prefix and changing that prefix willy-nilly causing lots of fun for everyone inside.

      IPv6 provides an excellent way to address this: prefix delegation [wikipedia.org]. Your router gets a prefix assignment automatically from your ISP and advertises it to clients. If the ISP renumbers, everything is automatically reconfigured when the ISP's announcement changes. The only issue is DNS, and there are mechanisms to ease that as well (though some manual intervention is required with current tooling).
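What delegation hands the router can be sketched with Python's ipaddress module (the /56 here is hypothetical, drawn from the 2001:db8::/32 documentation range):

```python
import ipaddress

# Hypothetical prefix delegated by the ISP via DHCPv6 prefix delegation.
delegated = ipaddress.ip_network("2001:db8:abcd:100::/56")

# The router can carve the /56 into 256 separate /64 LAN subnets,
# each advertised to clients for stateless autoconfiguration.
lans = list(delegated.subnets(new_prefix=64))
print(len(lans))  # 256
print(lans[0])    # 2001:db8:abcd:100::/64
```

If the ISP later delegates a different prefix, only this one input changes; the per-LAN subnets and client addresses follow automatically.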

      More importantly, prefixes won't need to change very often. The only times I've ever had to renumber were when I was either changing ISPs or when I wanted a different size IP block. The former case still exists (though the mechanisms I mentioned above help with that transition), but the latter case should be virtually nonexistent, as everyone will be assigned a block of subnets large enough to service them for the foreseeable future, no matter how big they get.

      Even worse - home users, who most likely do NOT have a working DNS setup and have to type the damn things in.

      Thankfully, there are solutions for this problem as well - and they're already widespread. Look for technologies such as zeroconf to become even more common going forward (all of the printers I've purchased in the past few years - including a large corporate laser [Ricoh] and two smaller multifunctions [Brother] - include and enable it by default).

      • More importantly, prefixes won't need to change very often. The only times I've ever had to renumber were when I was either changing ISPs or when I wanted a different size IP block.

        Or if you have two connections (from the same or different ISPs) and try to load balance them or even just use one when the other one fails. And changing the IPs (well, at least on IPv4) breaks all established connections, local or not.

    • by CAPSLOCK2000 (27149) on Tuesday October 25, 2011 @12:46PM (#37832878) Homepage

      Even worse - home users, who most likely do NOT have a working DNS setup and have to type the damn things in. And just when my parents have gotten used to typing the long string of nonsense garbage to hit the printer, the ISP changes their prefix and they have to learn a new set of IPs.

      Multicast DNS is gaining traction. Multicast is a requirement for IPv6 anyway so it has a reasonable chance of working.
      In my experience most parent-class-beings are unable to deal with raw IPv4 either.

      • by jandrese (485)
        Multicast was a requirement in v4 as well, and you see how far that got. The advantage of v6 is that it supports anycast, which is like multicast, except that it stops when it reaches the first recipient instead of getting delivered to every one.
        • I think you have things reversed. Anycast works just fine with ipv4. On the other hand, IPv6 breaks if you disable multicast.

          • by jandrese (485)
            Only router discovery, which is optional (you can manually configure your address and the next hop router if you like). IPv6 has the same problems with multicast that IPv4 does, namely that routing it is a big mess and nobody wants to do it unless they have to, especially on the internet. So it's fine for local discovery type applications (most routing protocols use multicast for this reason), but nobody wants to think about sending it past the first hop unless they really have to.
    • by vlm (69642) on Tuesday October 25, 2011 @12:51PM (#37832942)

If you don't want global addrs, don't use them. Use link local addrs inside and have everyone talk thru a proxy. It's Just Not A Big Deal.

If you don't want world wide access to your local printer, put it on a VLAN that's not running radvd handing out global addrs...

      Basically your "private" web browsing clients will get inet access via "squid" instead of "iptables nat". The industry has been moving toward "everything over port 80" anyway for a decade or two now.

      Static DNS is dead/dying/soon no longer usable. Will that be a change? Yeah. So start changing now, so you're not trying to do dyndns and ipv6 at the same time. Dynamic for global and simple multicast DNS for internal. Yes multicast DNS is an unholy pain between VLANs, but it can be (carefully) done.

We "need" a way to actively, repeatedly, quickly renumber DNS because our ISP "needs" to shuffle their precious resource of tiny little /20's around to different POPs because there is an intense shortage of ipv4 space. So we can't roll out ipv6 until it supports the intense address churn required by ipv4. Err, wait a second, we don't need that administrative load of address renumbering with ipv6, that's kinda the whole point. Standard /. car analogy: we can't roll out automobiles because we are having a production problem at the horse harness factory and the customers have always needed horse harnesses with our coaches, so let's not roll out "the car" until we have a guaranteed scalable horse harness factory, otherwise what would our customers use to harness their new cars?

      At some point, randomly renumbering people in ipv6 is going to be considered red-in-the-face, screaming-into-the-phone "contract breaking time," not just business as usual, another day at the office, ho hum. Maybe you should expect a faxed/emailed maintenance notification for ipv6 renumbering?
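The link-local vs. global split vlm is leaning on can be checked programmatically; a minimal sketch with Python's standard ipaddress module (the example addresses are made up, using documentation/example prefixes):

```python
import ipaddress

# fe80::/10 is the IPv6 link-local range; every interface gets one automatically.
link_local = ipaddress.IPv6Address("fe80::1ff:fe23:4567:890a")

# 2001:db8::/32 is the prefix reserved for documentation examples.
global_addr = ipaddress.IPv6Address("2001:db8::1")

print(link_local.is_link_local)   # True: never routed past the local link
print(global_addr.is_link_local)  # False

# RFC 4193 unique local addresses (fc00::/7) are the IPv6 analogue of RFC 1918 space.
ula = ipaddress.IPv6Address("fd12:3456:789a::1")
print(ula.is_private)             # True
```

The same module distinguishes the radvd-announced global prefixes from addresses that stay inside the vlan.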

    • Yea, I once used NAT to load balance between two connections (DSL and (legit) Wi-Fi) so that I could upload/download torrents faster. The router, when it detected a new outgoing connection, just routed it to one of the connections, and it worked. I could use uTorrent and achieve almost the sum of the speeds of both connections.

      With IPv6 and no NAT, that would not have been possible, unless the ISP would agree to load balance it (and give the same IPs on both connections) and that is less likely than a hard

    • by Jonner (189691)

      That's the big problem.

      NAT decouples the internal private network from the external network - and I'm sure any IT admin who has had to renumber their internal network would agree it's a huge PITA on IPv4. Luckily though they don't have to do it when their ISP gives them a new range of IPv4 addresses except for the few machines that are using them (DNS servers mostly - other servers can often hide behind NAT).

      Why should it be necessary to ever change statically-allocated network addresses? The only reason that's necessary for IPv4 is that the addresses are scarce.

      Even worse - home users, who most likely do NOT have a working DNS setup and have to type the damn things in. And just when my parents have gotten used to typing the long string of nonsense garbage to hit the printer, the ISP changes their prefix and they have to learn a new set of IPs.

      As you say, the problem is the lack of a working DNS setup. Few people should ever have to be aware of IP addresses, including your parents. Multicast DNS already works great today with IPv4 and IPv6.

      If we break the concept of true-end-to-end connectivity (already broken thanks to firewalls), the IPv6 transition could've been done years ago - everyone replaces their Linksys or Cisco router and go on their way, while the router does NATv6/NATv4/NAT-PT as appropriate. It just works, my parents don't have to learn anything new (and I don't have to fiddle with their machines and everything), etc. etc.

      The simplicity you want is already provided by the end-to-end model of IP (both versions) and broken by NAT. The only reason we must use NAT for IPv4 is th

    • by Jonner (189691) on Tuesday October 25, 2011 @03:03PM (#37834936)

      And enterprise is slow because they're worried about end-to-end connectivity for security reasons. NAT breaks that, so it's a nice secondary layer beyond the firewall to ensure they don't accidentally leave their customer database exposed (it might be protected on IPv4, but exposed on IPv6).

      Relying on NAT rather than a stateful firewall for security is a rookie mistake. NAT provides absolutely no security benefits beyond a properly configured stateful firewall. If you don't want to allow any incoming connections, configure that on the firewall and NAT is irrelevant. OTOH, many of the increasingly common peer to peer protocols, such as those used for VoIP are made less reliable and harder to diagnose by NAT.

    • by Bengie (1121981)

      "NAT decouples the internal private network from the external network - and I'm sure any IT admin who has had to renumber their internal network would agree it's a huge PITA on IPv4."

      IPv6 makes it even easier. Also, ever have to renumber your network because you merged with another corp? NAT won't help you there. IPv6 helps here too by having HUGE address spaces. The chance of a collision is crazy small.

      "Even worse - home users, who most likely do NOT have a working DNS setup and have to type the damn things
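The "crazy small" collision chance can be put in numbers: RFC 4193 unique local prefixes carry a 40-bit random Global ID, so the standard birthday bound estimates the probability that any two of N merging sites picked the same prefix. A back-of-the-envelope sketch (the function name is made up):

```python
# Birthday-bound estimate of RFC 4193 ULA prefix collisions.
# Each site picks a random 40-bit Global ID for its fd00::/8 prefix.
SPACE = 2 ** 40

def collision_probability(n_sites: int) -> float:
    """Approximate P(at least one collision) ~ n*(n-1) / (2*SPACE)."""
    return n_sites * (n_sites - 1) / (2 * SPACE)

print(f"{collision_probability(2):.2e}")     # two merging companies: ~9.1e-13
print(f"{collision_probability(1000):.2e}")  # a thousand sites: still under one in a million
```

So even a conglomerate gluing together a thousand randomly numbered sites has less than a one-in-a-million chance of any prefix clash.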

  • In my opinion the biggest problem with TCP/IP is that TCP is a stream protocol. Everyone who uses it immediately creates some sort of scheme to divide the stream into messages. Making it a stream protocol is logically equivalent to making it a messaging protocol with messages of size 1 byte. Maybe someone somewhere uses it as a pure byte stream, but it's not very common (and can be easily simulated over a message-based protocol).

    Not that I blame Vint Cerf for that.....he created it, he didn't decide which
    • by Arlet (29997)

      That's not a problem, but a feature. It's trivial to make a message protocol on top of a stream, and the stream protocol is easy to implement.

      Streams on top of messages, or one type of messages on top of other type of message protocol is trickier.

      • Streams are just as trivial to implement on top of messages as the other way around. In fact, that's exactly what TCP is. But it is slightly painful to implement either one on top of the other, and since 99% of the time people want messages, logically that should have been the default.

        Also, I seriously doubt (I'm giving you credit as a network programmer here) that you would have trouble implementing one type of messages on top of another type of messages, since network programmers do it all the time. As question
        • by Arlet (29997)

          Yes, TCP implements streams on top of messages, but I wouldn't call it trivial. Even though the essence of the protocol is simple, many implementers would still get it wrong.

          Also, the IP message is limited in size, so if you want to implement larger messages, you'd have to split them up into smaller ones. Or, alternatively, if you want to exchange very short messages, performance will suffer. At least TCP protects you from that with the Nagle algorithm.

          But, hey, if you don't like the stream protocol, you ca

          • Even though the essence of the protocol is simple, many implementers would still get it wrong.

            No. Getting reliable, in-order message transfer is difficult. Implementing a stream on top of reliable, in-order messages is simple as pie.

            • by Arlet (29997)

              Getting reliable, inorder message transfer is difficult

              That's probably why it wasn't done. The beauty of TCP/IP is its simplicity, and that's probably also why it was so successful. For most applications, the stream model works fine, so why would it be better to implement a more complex message transfer protocol instead?

              • by Sir Homer (549339)

                TCP is simple to use (from the view of someone doing network programming), but under the hood it is crazy complicated to implement properly.
                 
                  Fortunately you really only need someone to implement a TCP stack once (in open source) and it can be reused in a multitude of operating systems. BSD pretty much set the standard for a TCP/IP stack (TCP Reno) and everyone went from there.

                • by Arlet (29997)

                  The core of TCP, as specified in its original RFC 793, is quite straightforward. The many later additions have made it more complicated, though.

                • Yes, it's simple and obvious, and it took years of experimentation to get the simple and obvious parts to work well. The early Internet had congestion collapse problems that TCP needed to be retuned for, and figuring out how to get slow machines to send data fast (Van Jacobson's work) took a while, and Jim Gettys' Bufferbloat [wikipedia.org] work says we're not done yet.

                  Bram Cohen put a huge amount of incremental experimentation and testing into making Bittorrent work as well - things that are simple and obvious when you'

        • by Sir Homer (549339)

          Stream protocols that offer error, flow and congestion control over heterogeneous datagram networks are NOT trivial.

          TCP is not trivial at all. In fact, efficient algorithms to implement features of TCP are still an area of active research. IETF RFCs in various stages of standardization related to TCP probably amount to thousands of pages at this point, and it's still growing. Linux recently got a new algorithm for congestion control, for instance: http://www4.ncsu.edu/~rhee/export/bitcp/cubic-paper.pdf [ncsu.edu]

          • I didn't say that TCP is trivial. I said that message based protocols are better than stream based. This is independent of whether said protocol has error correction, flow control, and guaranteed order. Please work on your reading comprehension.
        • by Jonner (189691)

          Streams are just as trivial to implement on top of messages as the other way around. In fact, that's exactly what TCP is. But it is slightly painful to implement either one on top of the other, and since 99% of the time people want messages, logically that should have been the default.

          How can you say that messages aren't the default orientation, since IP is a message-based protocol? For implementing applications, TCP and UDP have equal footing. The fact that TCP is far more used implies that your 99% figure was pulled out of your ass.

          • Traditionally a "TCP/IP stack" gave two main options for applications.

            * an "unreliable", non-congestion-controlled and non-connection-based message protocol with limited message sizes and no message aggregation (UDP)
            * a "reliable", congestion-controlled, connection-orientated, stream-based protocol (TCP).

            So the path of least resistance for most applications was to turn their messages into a stream so they could transmit arbitrary-sized messages and take advantage of the "reliable", connection orientated

          • Your comment makes me wonder if you've ever written a network program. You do realize that if you call read() on a TCP socket, you are only guaranteed to get one byte, right (assuming no errors, etc)? It's called a stream based protocol for a reason, because it simulates a byte stream on top of the ip packets.
        • by Bengie (1121981)

          While processing "messages" is easier than a stream for any case where the message is small, what about a message that is larger than a packet? Suddenly you need a layer on top of IP again. So now you have message on message instead of stream on message.

          In order to process message on top of message, you have to treat it as a stream. So instead of stream on top of message, you have message on top of stream on top of message.

          Now if you wanted to have implement a stream interface, you will have stream on top o

          • This is pretty dumb. I can think of three ways to resolve this 'problem' in about 30 seconds. Whereas you apparently can't think of any. Therefore, you are pretty dumb.
    • or SCTP, or TIPC, or RDS. There are lots of message-based protocols out there. Why use TCP if you don't want streams?

      • Re:so use UDP (Score:4, Insightful)

        by vlm (69642) on Tuesday October 25, 2011 @01:05PM (#37833114)

        or SCTP, or TIPC, or RDS. There are lots of message-based protocols out there. Why use TCP if you don't want streams?

        Industry standard for the past 20 years has been to try and run every freaking thing over TCP port 80, often thru a proxy and a NAT. Some scummy companies try to claim something that limited actually is "internet access". And everyone is loudly trying to bend over backwards to reimplement that in ipv6. Sometimes a bad idea just needs to get chopped but no one wants to admit it.

        • by Jonner (189691)

          or SCTP, or TIPC, or RDS. There are lots of message-based protocols out there. Why use TCP if you don't want streams?

          Industry standard for the past 20 years has been to try and run every freaking thing over TCP port 80, often thru a proxy and a NAT. Some scummy companies try to claim something that limited actually is "internet access". And everyone is loudly trying to bend over backwards to reimplement that in ipv6. Sometimes a bad idea just needs to get chopped but no one wants to admit it.

          Let me get this straight. You're trying to blame poor service from "scummy" ISPs on the easiest to use Internet protocol built on IP? You need to revisit your history if you think that it has been industry standard to run everything over TCP port 80. Last time I checked, TCP port 80 is used for exactly HTTP. There are certainly plenty of bad ideas out there, but TCP wasn't one of them.

      • I've used SCTP before. It's a fine protocol, but implementations are buggy, and as vlm said, there are problems with proxies, firewalls, etc.
    • by vlm (69642)

      Everyone who uses it immediately creates some sort of scheme to divide the stream into messages.

      If it's small, stick it in a single UDP packet instead of TCP. If it's just one message, you can standardize on one message per TCP session and it's easy. And if it's big, with multiple messages in a stream, isn't that still just one line of perl? I know it's more work with every other language, but...

      You can find much worse problems with TCP/IP if you want.

      The biggest problem with TCP was having to implement big windows on top of it a decade or two ago to handle long latency high bandwidth links. TCPv6 or whatever

      • UDP is even worse as a message passing system, because it isn't reliable. If you don't mind that, it's great, though. The stream problem is the one that causes me the most pain in my life.
        • by Arlet (29997)

          Why would the stream problem cause problems? It's not that hard to transport messages over a stream. A trivial solution would be to send the message length, followed by the message.

            • It causes problems because every time I want to do network programming, I have to implement a scheme for delimiting messages. It's more than just sending the message length followed by the message, because then you have to set up a read loop to make sure you got the entire message.

            Of course there could be worse things, but it is the problem with TCP that causes me the most problems.
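The scheme the two posters are describing — a length prefix plus a read loop — is what nearly every TCP application reinvents. A minimal sketch (the helper names are made up for illustration):

```python
import socket
import struct

def send_msg(sock: socket.socket, payload: bytes) -> None:
    # 4-byte big-endian length prefix, then the payload itself.
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def recv_exact(sock: socket.socket, n: int) -> bytes:
    # The read loop: recv() may return fewer bytes than asked for.
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed mid-message")
        buf += chunk
    return buf

def recv_msg(sock: socket.socket) -> bytes:
    (length,) = struct.unpack("!I", recv_exact(sock, 4))
    return recv_exact(sock, length)

# Demo over a local socket pair standing in for a real TCP connection.
a, b = socket.socketpair()
send_msg(a, b"hello, world")
print(recv_msg(b))  # b'hello, world'
a.close(); b.close()
```

The read loop is the part people forget: a single recv() on a stream socket is only guaranteed to return at least one byte, not the whole message.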
            • by Bengie (1121981)

              "I have to implement a scheme for setting up message sizes."

              ROFL

              Let's give a scenario where you want to send messages like you're talking about. Say you send 1400-byte messages.

              1) You send 1400 bytes
              2) OS packages it up into a single packet and sends it out.
              3) OS gets ICMP about fragmentation
              4) OS splits up original message into two packets
              5) Receiver gets two packets

              Now, should your app see one message or two messages? I'm leaning towards one. What does this mean? It means at the very basic network stack lev

        • by Sir Homer (549339)

          You can implement reliable transmission over UDP. And you have more options as well: you can do it with error correction algorithms for latency-intolerant applications, something TCP can't provide with its ARQ design.
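"Reliable transmission over UDP" in its simplest form is stop-and-wait ARQ: sequence numbers, ACKs, and retransmission on loss. A toy sketch over a simulated lossy channel rather than a real socket (all names and the channel model are made up for illustration):

```python
import random

class LossyChannel:
    """A toy one-way datagram channel that drops packets with probability p_loss."""
    def __init__(self, p_loss: float, seed: int):
        self.rng = random.Random(seed)  # seeded so the demo is deterministic
        self.p_loss = p_loss
        self.queue = []

    def send(self, datagram):
        if self.rng.random() >= self.p_loss:  # datagram survived the "network"
            self.queue.append(datagram)

    def recv(self):
        return self.queue.pop(0) if self.queue else None

def stop_and_wait(messages, p_loss=0.3):
    """Deliver messages in order over lossy channels: resend until ACKed."""
    data_ch = LossyChannel(p_loss, seed=1)   # sender -> receiver
    ack_ch = LossyChannel(p_loss, seed=2)    # receiver -> sender
    delivered, expected_seq = [], 0
    for seq, msg in enumerate(messages):
        while True:
            data_ch.send((seq, msg))         # (re)transmit the current message
            pkt = data_ch.recv()             # receiver's turn
            if pkt is not None:
                if pkt[0] == expected_seq:   # new in-order message: accept it
                    delivered.append(pkt[1])
                    expected_seq += 1
                ack_ch.send(pkt[0])          # ACK (re-ACKs duplicates too)
            ack = ack_ch.recv()              # sender's turn
            if ack == seq:
                break                        # ACKed; move on to the next message
    return delivered

print(stop_and_wait([b"a", b"b", b"c"]))  # [b'a', b'b', b'c']
```

Real protocols pipeline with windows instead of stopping after each message, which is exactly where TCP's complexity comes from.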

      • by fa2k (881632)

        As would an embedded public key crypto infrastructure inside the TCP system supporting multiple protocols. And multiple selection of hash checking protocols. Let's make setting an MD5 hash at the BGP level obsolete?

        No need to do it in TCP when you have IPsec [wikipedia.org]! Unless of course you want per-process authentication instead of per-host authentication -- then you could use TLS. I think you are suggesting a built-in version of TLS anyway. The key management would be a pain if we didn't go with the same error-prone

    • by Jonner (189691)

      In my opinion the biggest problem with TCP/IP is that TCP is a stream protocol. Everyone who uses it immediately creates some sort of scheme to divide the stream into messages. Making it a stream protocol is logically equivalent to making it a messaging protocol with messages of size 1 byte. Maybe someone somewhere uses it as a pure byte stream, but it's not very common (and can be easily simulated over a message-based protocol).

      Not that I blame Vint Cerf for that.....he created it, he didn't decide which parts would become popular.

      Yeah, the most commonly used Internet application protocols aren't stream protocols. That is, unless you count HTTP and SMTP. You also might want to study up on this new-fangled thing called UDP.

      • HTTP and SMTP are message protocols built on top of TCP. That means you have to go to the extra effort to divide the TCP stream into packets. Which was exactly my point.

        UDP sucks unless you can accept dropped packets, out of order messages, and don't care about flow control. If you're ok with all of those, then it's fine. But that situation is rare.
    • by jgrahn (181062)

      In my opinion the biggest problem with TCP/IP is that TCP is a stream protocol. Everyone who uses it immediately creates some sort of scheme to divide the stream into messages.

      Yeah -- but dividing it in a way which suits the problem they want to solve. I'm not at all convinced that it's feasible to design a simple and safe "one size fits all" reliable datagram protocol.

      And I'm very unimpressed by the UDP-based protocols I've seen: slow, fragile, constant problems with a fast sender overloading a slow receiver, inefficient stack--application interfaces ...

      • And I'm very unimpressed by the UDP-based protocols I've seen: slow, fragile, constant problems with a fast sender overloading a slow receiver, inefficient stack--application interfaces ...

        UDP WOULD be a great message based protocol, if they had implemented ordered reception, resending, flow control, etc. These are reasons to use TCP over UDP, and they are good reasons. But if UDP had them, it would be used much more often than TCP.

        • by Bengie (1121981)

          TCP is just a super-set of UDP. I don't see the problem. Implement your own protocol on top of UDP.

          If TCP was message based, how would you handle messages that are larger or smaller than a packet's payload capacity? A smaller message leaves wasted space in the packet, so you're better off including "part" of the next message. If it's a larger message, you'll have to fragment it across more than one packet.

          Suddenly we're right back to not working with messages, but partial messages. Guess what's a great way to re

  • by Anonymous Coward

    I work at a relatively large ISP in southern Europe, and I can tell you that we are fully ready for IPv6 except for one thing: home gateway IPv6 support. Our vendors (three of them, all well-known companies) simply do not have firmwares that support IPv6 for broadband modems yet. Sad, but true.

    • Tell your three vendors that the first one of them who gets working IPv6 support will get all your business for two years, minimum. They'll have the firmware by the end of the year. (And it'll help all of us.)

  • by sootman (158191)

    > What do you think we can do to
    > convince ISPs to start rolling out
    > IPv6 [i]before[/i] there is a crisis?

    Slashdot editors: they put the 'k' in 'quality'. :-)

  • by phantomfive (622387) on Tuesday October 25, 2011 @12:31PM (#37832720) Journal
    I talked to the owner of a mid-sized ISP about IPv6. He said they had enough IP addresses assigned to them to last for another year and a half. I asked him what his plan was for migrating to IPv6. He glared at me slightly, and said, "pay lots of money for hardware."

    Also, a lot of mobile carriers are starting to use IPv6. Try running netstat on an Android phone and you might see some IPv6 activity there.
    • Aren't there gobs of address blocks reserved for ISPs to talk to one another? Perhaps ISPs could lead the way by converting their back-channel host addresses over to IPv6 and then release those blocks for public site use? Are there really 2 or 3 billion IPv4 internet addresses serving public clients?

      Perhaps IPv4 running out of available addresses is the necessity that will push the experts, at least, to convert over to IPv6...

    • by Bengie (1121981)

      You should've asked him what he would've done once he ran out of IPs. It'll cost more to do ISP-level NAT than to upgrade to IPv6.

      Either way he's going to have to pay money, but the "proper way" (IPv6) will be cheaper in both the short and long run.

      • His plan was to start rolling out IPv6. His ISP is unusual in that it doesn't NAT at all, he gives all his customers static IPs. I think most ISPs are planning on doing IPv6 after they run out, though. It really does make more sense economically, as you mentioned.
  • So now we can use LOL and say "hey Vint Cerf uses it in public correspondence too!". :).

    p.s. Too bad he didn't seem to understand my question. Oh well.
    • by Greyfox (87712)
      No, he did understand the question. "Any way to generate a 32 bit number... because the text address isn't used during the connection process," could not be clearer. Indeed, the text address is not even a necessary part of the internet. It's a convenient directory service that is widely used by clients, but it is in no way essential to the functioning of the network itself. That's always what bothered me about that bug. They're essentially replacing the numeric-format addressing with a text mode one that th
      • by TheLink (130905)
        Yes he did understand your question. But that was not _my_ question, which was the ".here TLD?" one.

        Anyway, your reply and his just show that what I write is hard for people to understand. Dunno why.

        Regarding your question, I wonder how should "numeric only" IPs work if a browser supports both IPv4 and IPv6? The two address ranges would overlap. So would numeric-only IPs mean IPv4 only?

        BTW: on some OSes you can ping 4.8. Which isn't a numeric only IP address, so it's even messier than that ;).
        • by Greyfox (87712)
          Oohh, sorry, didn't see the second LOL down there toward the bottom. It seems the father of the internet is a cheerful sort of guy. Kind of like Father Christmas. You don't suppose they're the same guy? They both deliver stuff very quickly world-wide...
        • by Coren22 (1625475)

          The .here TLD you are talking about didn't make much sense to me either, but I think what you are looking for is .local which is what I was taught to use in MCSE classes.

          http://en.wikipedia.org/wiki/Top-level_domain#Pseudo-domains [wikipedia.org]

          The top-level pseudo domain local is required by the Zeroconf protocol. It is also used by many organizations internally, which may become a problem for those users as Zeroconf becomes more popular. Both site and internal have been suggested for private usage, but no consensus has emerged[citation needed].

          So apparently it isn't a standard, and can break zeroconf.

          • by TheLink (130905)

            So apparently it isn't a standard, and can break zeroconf

            That's why I said: "a .here TLD, reserved officially for local use" and "analogous way to the way that the RFC1918 IP addresses are reserved officially for private use".

            If RFC1918 IP addresses didn't exist, people could have used arbitrary IP ranges they hope won't conflict. The reason RFC1918 is a better idea than that is the same reason there should be a .here or similar TLD.

            Once you have a standard, others can build upon it. For example: many areas might allow you to visit http://here/ [here] so t
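The RFC 1918 reservation TheLink is analogizing to is concrete and machine-checkable, which is exactly why software can build on it; a quick sketch with Python's ipaddress module (the helper name is made up):

```python
import ipaddress

# The three address blocks RFC 1918 reserves officially for private use.
RFC1918 = [ipaddress.ip_network(n)
           for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def is_rfc1918(addr: str) -> bool:
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in RFC1918)

print(is_rfc1918("192.168.1.10"))  # True: safe for internal use, never routed globally
print(is_rfc1918("8.8.8.8"))       # False: public address space
```

A reserved .here-style TLD would give names the same property these three prefixes give addresses: everyone can rely on them never colliding with the global namespace.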

            • by swillden (191260)

              It wasn't at all clear to me from your question what you expected .here to be for... your comparison to E911 made me think you were talking about regional resources, i.e. stuff within tens of miles, and it was really unclear to me how that would be useful, much less how it could be defined or implemented. But the analogy with RFC 1918 makes it much clearer... you're talking about subnet-local resources.

              That's really easy to build on top of IPv6, and you don't need a new .TLD to do it. Just define some c

              • by TheLink (130905)

                For identifying nearby hosts with names, the .local TLD already exists.

                Which Internet standards document or similar says that .local is a TLD reserved for such use? As far as I know .local is not reserved after so many years.

                And because of all the messiness, it's probably best to reserve .local for "local legacy" use (mDNS etc), and start over with .here OR some other tld for a proper "reserved for official local use" TLD, which could resolve to standard local use IPv6 and IPv4 addresses as you suggest.

              • by TheLink (130905)
                BTW:
                1) I don't see where I made a comparison to E911.
                2) much of your talk about IPv6 is not really relevant to domain names and TLDs.

                Hardly any user in the world is going to type IPv6 addresses. They're going to type domain names. So if they want to figure out what is available in a particular room/hall/building, what is the way for them to do so without forced HTTP/DNS packet redirection and other hack jobs?
      • At first I thought you were right, but I wanted to confirm it so I dug into the issue further.

        RFC 2396, regarding URIs, states that URI authority hosts look like so:

        host = hostname | IPv4address
        IPv4address = 1*digit "." 1*digit "." 1*digit "." 1*digit

        It exactly specifies the manner of IPv4 address representation, constraining it from the wide world of possible ways to format a 32 bit number. Whether represented as

        • 3626153261 (decimal)
        • 033010532455 (octal)
        • 0xd8.0x22.0xb5.0x2d (hex dotted
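The representations listed above really do name one and the same 32-bit number, and RFC 2396's grammar is what rules all but the dotted-quad spelling out of URIs. A quick check with Python's ipaddress module:

```python
import ipaddress

# All of these spellings denote the same 32-bit value...
assert 3626153261 == 0o33010532455 == 0xd822b52d

# ...and as an IPv4 address that value is the dotted quad below.
addr = ipaddress.IPv4Address(3626153261)
print(addr)       # 216.34.181.45
print(int(addr))  # 3626153261
```

So the Firefox behavior under discussion is purely a parsing restriction at the URI layer; once resolved, the wire only ever sees the 32-bit value.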
        • by Greyfox (87712)
          Yes yes yes, they have an RFC. And I have the father of the internet saying "Text addresses are not used during the connection process"! They're not part of the protocol. Elevating an answer from a directory service to protocol level is not the direction we should be going in!

          I do, however, have a workaround, and that's to register the numeric address as a ".com". 1137387091.com! And then I thought, why not register a different address than what that would point you to? Then all other client programs woul

          • I acknowledge that you are being funny.

            I worry, however, that you will still retain an element of seriousness when you, in the future, summarize this matter the way that you are currently.

            Text addresses are used in URLs. That's part of the HTTP protocol. The father of the web said we should use decimal dotted quads, so your browser does not take all representations of addresses for URLs. Rest assured that when your browser gets around to making its connections it's not using a text address. But when we

          • by swillden (191260)

            There are important security reasons for not allowing too many representations for a host address, and there really aren't any compelling motivations for allowing it. The FF guys are right, and Vint didn't really contradict them; he just said that in the low-level protocols it's a pure number. That has nothing to do with the UI considerations that drive having a single, consistent representation.

  • by vlm (69642) on Tuesday October 25, 2011 @01:16PM (#37833250)

    The problem with .here is there are so many "rfc1918 like dns names".

    Off the top of my head some standard ones are ".localnet" (as in localhost.localnet) and .local as in mdns/bonjour

    I don't think creating another tld is going to solve the problem of why people would not / will not use the previous "local" tlds.

  • I am missing a question and an answer: Why is IPv6 autoconf missing such basic features as providing information about DNS servers?
    Or the other way round: why did nobody think about the central management stuff that DHCPv4 provides in corporate networks? DHCPv6 is nowhere near usable.

    • by bbn (172659)

      You need to understand the coupling between "autoconf" also called SLAAC (stateless address autoconfiguration) and DHCPv6. SLAAC is used when the network has active routers with a RA daemon but no DHCPv6 server. The RA daemon is nothing but a router announcing its presence and the subnet it will route. Any extra information is retrieved from a possibly stateless and possibly non-local DHCPv6 server by multicast.

      The idea is that RA gives you enough information to communicate with the DHCPv6 server. It is not
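The SLAAC half of what bbn describes can be made concrete: the classic RFC 4291 "modified EUI-64" scheme derives the host part of the address from the interface's MAC by flipping the universal/local bit and splicing in ff:fe. A sketch (the function name is made up; modern stacks often prefer random/privacy addresses instead):

```python
import ipaddress

def slaac_eui64(prefix: str, mac: str) -> ipaddress.IPv6Address:
    """Form a SLAAC address from a /64 prefix and a 48-bit MAC (RFC 4291 modified EUI-64)."""
    octets = bytearray(int(b, 16) for b in mac.split(":"))
    octets[0] ^= 0x02                                            # flip the universal/local bit
    eui64 = bytes(octets[:3]) + b"\xff\xfe" + bytes(octets[3:])  # splice ff:fe into the middle
    net = ipaddress.ip_network(prefix)
    return net[int.from_bytes(eui64, "big")]                     # prefix + interface identifier

addr = slaac_eui64("2001:db8:1::/64", "00:1a:2b:3c:4d:5e")
print(addr.exploded)  # 2001:0db8:0001:0000:021a:2bff:fe3c:4d5e
```

This is why an RA announcing just a /64 prefix is enough for a host to number itself without any server keeping state.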
