2014-02-27

angry tapir writes
"As the number of top-level domains undergoes explosive growth, the Internet Corporation for Assigned Names and Numbers (ICANN) is studying ways to reduce the risk of traffic intended for internal network destinations ending up on the Internet via the Domain Name System. Proposals in a report produced on behalf of ICANN include preventing .mail, .home and .corp ever being Internet TLDs; allowing the forcible de-delegation of some second-level domains in emergencies; and returning 127.0.53.53 as an IP address in the hopes that sysadmins will flag and Google it."

Yay, applied spear phishing!

By Opportunist



2014-Feb-27 07:24

• Score: 3
• Thread

Let's face it, the whole craze around the new TLDs is a huge can of worms that serves no purpose (well, none but to make some people rich) while being a problem waiting to explode.

Take Mike, the manager. Mike is a good manager (yes, they exist. No, really!) and he's pretty competent. Well, not in IT, of course, but in his field. In IT, he has to rely on his IT department (and, since as I said he is a good manager, he actually does). Mike takes his laptop home, not to play but to actually do some meaningful work. So he creates a rather sensitive document and decides to save it. Now, in our current world, the internal name "documents.thecompanyheworksfor" won't resolve and the system falls back onto his documents folder. Which is pretty neat, because that's what gets sync'd automatically the next time Mike drops his laptop into the docking station at work. No fuss for him, and none for his IT department either.

In the new and improved world of TLDs at will, that server could well exist. And it does not necessarily belong to the company Mike works for.

And that's just the tip of the iceberg. How about launching some program from a remote location? Undocked, it won't launch, and if it's just a script that provides network information, who cares? In our new world, it may well launch some malware.

Now, of course one may say that IT should know that and IT should prevent it by ensuring that these things either resolve correctly or not at all. Fair enough. Now, who here can say that he knows of EVERY domain entry in his company's environment (provided you're not working for some mom'n'pop shop)? Who would put his job on the line for saying that there was never some self-absorbed PHB who insisted on having the necessary rights to create whatever domains he thought were funny, without informing the IT department?

TLDs are going to be a security nightmare. But hey, who am I to complain, it's job security for decades!

Re:hacky

By DarkOx



2014-Feb-27 07:26

• Score: 5, Interesting
• Thread

The problem really isn't so much not being able to reach something.home on the internal network, or even something.home on the Internet when you already have a local .home zone.

The problem is all the uncounted config files out there with unqualified or partially qualified names in them. The RFCs are not entirely clear on what the correct behavior is, and worse, the web browser folks have in some cases decided to implement the behavior differently themselves, rather than use the system NSS services/APIs.

So imagine an environment where DHCP configures a list of DNS search suffixes, and one of those is something like us.example.com. How the Windows boxes interpret a query for mobile.mail (note: no trailing dot) will possibly differ from the way the Linux machines do it, and from what the OS X machines do, and what Chrome or Firefox decide to do might differ from what nslookup does, even on the same machine!
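To make the ambiguity concrete, here is a minimal sketch of how a resolver might expand an unqualified name against a search list. The suffixes, the ndots threshold, and the ordering rule are all assumptions for illustration; real resolvers (glibc, Windows, browsers) differ in exactly the ways described above, which is the point.

```python
def candidate_queries(name, search_list, ndots=1):
    """Return the DNS queries a hypothetical resolver would try, in order."""
    if name.endswith("."):                # fully qualified: query as-is only
        return [name]
    candidates = []
    dots = name.count(".")
    if dots >= ndots:                     # "looks absolute": try the bare name first
        candidates.append(name + ".")
    for suffix in search_list:            # then append each search suffix in turn
        candidates.append(f"{name}.{suffix}.")
    if dots < ndots:                      # "looks relative": bare name goes last
        candidates.append(name + ".")
    return candidates

search = ["us.example.com", "example.com"]
# "mobile.mail" has one dot, so with ndots=1 the bare query "mobile.mail."
# goes out first; under the gTLD expansion that can now hit a real .mail TLD.
print(candidate_queries("mobile.mail", search))
# A trailing dot suppresses search-list expansion entirely:
print(candidate_queries("mobile.mail.", search))
```

Change ndots or the ordering rule and the same config file resolves to a different host, which is precisely the cross-platform inconsistency being complained about.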

It's going to be nightmarish from a support and troubleshooting perspective, and let's face it: nobody on your PC tech team really understands DNS, your network admins probably have a good handle but some major blind spots, and your developers are accustomed to making what are now dangerous assumptions. I am not sure I fully understand DNS on most days.

This is going to be a support nightmare at least at some sites, even at places where the ONLY sin was not using FQDNs everywhere all the time. Which might have been deliberate: perhaps not the best way to have gone about it, but knowing how search domains operate, and that they can be set via DHCP, it's entirely possible someone architected mobile systems to reach a local resource by depending on exactly that behavior.

There are all kinds of potential security problems too. The gTLD expansion is making the Internet both less reliable and less safe.

Re:is 10.0.0.0/8 really needed to be private?

By squiggleslash



2014-Feb-27 07:26

• Score: 5, Informative
• Thread

This isn't the problem. As I understand it (and I've read the article multiple times and it's early in the morning so I may be getting it wrong), the problem is this:

1. ICANN is introducing new TLDs (i.e., additions to .com, .net, .org, etc.). (We've known about this for a while; this isn't news.)
2. Common practice on private networks is to create and use an unused TLD for the private network, for example ".internal" or ".corp". Your employer might, right now, be calling your workstation "pc117.nyoffice.intranet".
3. After analyzing global DNS hits, ICANN's researchers found that many or most of the proposed new TLDs are already, apparently, in use by private entities for their private networks. You might ask how they know. Well, think in terms of a roaming laptop that, upon connecting to the Wi-Fi at Starbucks and before the VPN is set up, immediately tries to access "exchange-server.nyoffice.intranet". Because the VPN isn't up yet, the DNS lookup goes to the global DNS servers, causing a bell to ring in ICANN's HQ (or something.)
4. ICANN needs to "do something" to alert people with private networks to change their TLDs, or else those people will, unintentionally, find themselves locked out of sites with the new TLD. (Cynical PoV: and this will decrease the value of the .TLDs themselves. Kerching!)

Now ICANN appears to believe that the best solution is to have these TLDs return this odd 127.0.53.53 IP address instead of "domain not found" for all unknown domains. That way, if a techie working for an affected company is roaming with their laptop, tries to access "exchange-server.nyoffice.intranet" while forgetting to bring up the VPN, and ".intranet" is a new TLD, then when they check their Windows Event Logs to figure out why they can't connect, instead of "domain not found" (which would immediately make them think "Oh wait, of course it can't be resolved, it's not a real domain and I'm not on the VPN") they'd see a weird IP address and think "That's odd, let me Google that, there's obviously a problem with DNS."

(I think they'd have more luck if they made it a pair of real IP addresses, one A, one AAAA, pointing at a website that tells the roaming user the answer that they can report to a sysadmin, rather than forcing a sysadmin to Google something they may never become aware of because they may not roam in the first place, but to be honest, even that sounds like a bad idea, I'd rather IP addresses not be returned for invalid domains to begin with.)

Re:hacky

By DarkOx



2014-Feb-27 07:42

• Score: 4, Interesting
• Thread

Right, it's a great idea to expect every application developer everywhere to put a special-case test into their code to see if the value in the buffer after a call to gethostbyname is 127.0.53.53, rather than just checking the return code and using the value (or not) based on that. Doing this means a new branch in every new app, for no real reason; it means odd behavior in old or not-updated code that expects to either successfully resolve an address or not.

Case in point: someone recently introduced a hostname into our DNS that caused a major application to break. It turned out there was a stale config entry for a hostname that no longer existed. As long as that lookup had been getting back NXDOMAIN, things hummed along nicely; the application just tried the next host in its list from a config file. When someone added that name back, it started trying to connect to the new server (which did not run the application it was expecting and did not listen on that port), causing long timeouts on login while it tried and retried the other server. I grant this was a configuration error (someone should have cleaned up that old config file), but there are situations, like laptops, where this might not be the case. Inside your organization .mail might exist as a zone; take the machine home and CustomAPP might work fine today, getting NXDOMAIN and switching to a local database or trying a different public hostname. Now it's going to get back 127.0.53.53 when the service isn't there, and quite likely not know what to do.
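The failover pattern this anecdote relies on can be sketched as follows. The host list and the stub resolver are made up for illustration; the point is that NXDOMAIN raises an error and lets the loop move on, while the collision sentinel "succeeds" and would trap a naive client into connecting to loopback unless it is explicitly skipped.

```python
def pick_server(hosts, resolve):
    """Return the first (host, addr) pair that resolves to a usable address."""
    for host in hosts:
        try:
            addr = resolve(host)
        except OSError:               # NXDOMAIN and friends: try the next host
            continue
        if addr == "127.0.53.53":     # collision sentinel: also skip, new branch
            continue
        return host, addr
    return None

# Stub resolver standing in for socket.gethostbyname (no network needed):
table = {"old-server.mail": "127.0.53.53", "backup.example.com": "192.0.2.10"}
def fake_resolve(host):
    if host not in table:
        raise OSError("NXDOMAIN")
    return table[host]

print(pick_server(["gone.example.com", "old-server.mail", "backup.example.com"],
                  fake_resolve))
# -> ('backup.example.com', '192.0.2.10')
```

Remove the sentinel check and the loop returns old-server.mail with a loopback address, reproducing the hang-on-login behavior described above.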

No, it's patently stupid for the name resolution system to return BAD data. If something like .mail is not allocated, or is de-allocated, then it does not exist, and NXDOMAIN is what a public DNS system should return. The meaning is clear.

Re:hacky

By Stalks



2014-Feb-27 08:56

• Score: 4, Insightful
• Thread

How do you put up a parking page that listens on loopback?
