Hi Olaf - Thanks for this post. I appreciate your stoic approach, and you mention a lot of important issues related to continuing the freedom we take for granted in using the Internet today. I work with a domain intelligence company. We have explored, with another IoT firm, how the new brand TLDs (.honeywell, .bmw, .chrysler, etc.), delegated at the root of the Internet, can improve the security of IoT networks and connected devices. My CTO, if he has time, will be writing a piece on this shortly. Thanks again! Here's to Eternal Vigilance! Andre.
Link | Posted on Oct 25, 2016 9:13 AM PDT by Andre Forrester
I'm very interested in discussing the design of protocols, and how the ones we have may be sub-optimal (or even quite bad) in many ways, but having followed the trail of links into the "TCP/IP vs RINA" world a little, I'm struck by the research team's apparent desire to offend and alienate everyone associated with the development of TCP/IP. Or, if that wasn't the intended effect of a presentation like "How in the Heck Do You Lose a Layer?", then I'm struck by the astounding lack of diplomacy — and I say this as someone who is naturally somewhat deficient in that quality myself.
I'd love to engage in a sober analysis of these attacks and the protocol design weaknesses which facilitate them (this kind of thing was the bread and butter of my PhD thesis), but the intemperate tone of this article and the linked material makes me think that I will be disappointed if I seek it via this avenue.
My perspective carries no special weight, so feel free to ignore it, but there it is for what it's worth. I'd rather be discussing protocol design than the social aspects of research, but the latter really is a glaring issue here, in my view.
Link | Posted on Oct 25, 2016 7:18 AM PDT by The Famous Brett Watson
> Unfortunately that shifting can't be done by technical means,
> it requires contractual and/or legal measures.
Name and shame might be useful. That is, let the masses know there is a solution and that it is being ignored.
> The fundamental problem is that the people who can do something about
> the problem don't bear any significant costs associated with the problem
Users can't expect or demand what they do not know is an option. And on the flip side, how many might appreciate their ISP letting them know they own a device that is compromised?
And is that compromised device just performing DDoS attacks, or is it also trying to steal their banking information, with the DDoS merely being easier to detect? That in turn could damage the brand whose device was hacked, so that the brand either accepts more responsibility for its designs and implementations or the brand dies.
Almost 10 years ago I allowed a friend to connect his laptop to my network; it is the only time I have ever had a problem. His laptop infected a server and my ISP (Qwest) disconnected me. I was upset at the loss of access, but appreciated the notice. It was my fault; I cleaned it up and then implemented a completely separate guest network in my home. The point is, for some issues, ISPs clearly CAN and WILL implement filtering and total disconnection, and that was many years ago.
Why not for UDP address spoofing? That has never made sense to me.
As for cost, I now have Comcast Business service at my home. That is not a choice as Comcast residential has so many blocks and filters as to make my normal work literally impossible. One such filter being whois queries. So Comcast "protects" registrar/registry whois servers from scraping, and yet "DDOS filtering" is not an industry SOP? Again, I am left scratching my head.
With Business class there are no blocks, and I pay 3 to 4 times what residential service costs for half the connection speed. So they have done well to "shift a cost" to my bank account.
So my personal experiences do not reconcile with address spoofing being off the list of ISP filtering. Something is wrong with this picture. Being the cynic that I am, and to draw an analogy: no doctor ever made money from a healthy person. Problems sometimes translate into profit in less than desirable ways. Back to educating the masses that the recent Dyn experience could have been largely avoided: shine a brighter light on the facts and tools we have today.
Link | Posted on Oct 24, 2016 11:16 PM PDT by Charles Christopher
We had that. IP networks won out because they didn't require association and thus didn't allow network operators to control what services were created and offered on the networks. I'll have to agree with Charles's contention that your idea of association management isn't needed. The problem you describe isn't that there isn't any association; it's that the origin networks themselves permit their hosts to lie about their origin network. That problem can be and has been solved for IP networks; the solution simply needs to be applied by network operators.
The same issue applies to your position on technical QoS contracts. Again, this is a solved problem. The most recent attack would never have exceeded any QoS limitations on the source end, and at the target end it wouldn't matter, because throttling triggered by incoming traffic exceeding the QoS limit would have produced exactly the same result as the traffic overload did: legitimate DNS queries failing on a massive scale, since the malicious traffic outweighs the legitimate traffic by such a large margin. That's why it's called "distributed denial-of-service", as opposed to the more general "denial-of-service".
Your last point, however, is spot-on. The fundamental problem is that the people who can do something about the problem don't bear any significant costs associated with the problem (and in fact would probably suffer some losses if they took effective action about the problem), and the people who bear the costs aren't in a position to do anything about the problem. Allowing and encouraging the shifting of the costs back to the origin of the problem (or at least the malicious traffic) would probably see the required technical measures taken post-haste. Unfortunately that shifting can't be done by technical means; it requires contractual and/or legal measures.
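The solved technical measure I mean here is essentially ingress filtering per BCP 38: an origin network forwards a packet only if its source address falls within a prefix that network actually allocated. A minimal sketch of the check, using documentation prefixes as illustrative stand-ins for real customer assignments:

```python
from ipaddress import ip_address, ip_network

# Illustrative customer prefixes (documentation ranges, not real assignments).
ALLOCATED_PREFIXES = [ip_network("203.0.113.0/24"), ip_network("198.51.100.0/25")]

def permit_egress(src_ip: str) -> bool:
    """BCP 38-style check: forward a packet only if its source address
    belongs to a prefix this network actually originates."""
    src = ip_address(src_ip)
    return any(src in prefix for prefix in ALLOCATED_PREFIXES)

print(permit_egress("203.0.113.42"))  # True: legitimate customer address
print(permit_egress("8.8.8.8"))       # False: spoofed external address, drop it
```

A spoofed packet never leaves the origin network under this check, which is exactly why the fix has to be applied there and nowhere else.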
Link | Posted on Oct 24, 2016 10:29 PM PDT by Todd Knarr
>The world is moving on - do join us!
Shall I take that to mean you feel ISPs have no responsibility for the source addresses of the packets they place onto the Internet?
And I did take a cursory look at RINA before my first post, and then applied Occam's razor.
Sometimes present-moment awareness is better than living in a future that does not yet exist, especially when there is a solution available at this moment and it's being ignored.
Link | Posted on Oct 24, 2016 4:05 PM PDT by Charles Christopher
Now I know for sure you don't know what association management is :) Please go read up on the RINA architecture. http://pouzinsociety.org/education/highlights?_ga=1.107318346.934011447.1468509500
I am very familiar with how the present Internet works. No need to regurgitate it. The transport protocols (UDP or TCP) exist on top of a routing fabric that has no concept of establishing secured associations and routes. The world is moving on - do join us!
Link | Posted on Oct 24, 2016 3:44 PM PDT by Martin Geddes
I am referring to the partitioning of the source IP address at the "wire" from which the packets originate. I am pointing out that the concept of "association" IS useful and does not need a complete rewrite of decades of code base to implement. A much simpler and cheaper-to-implement solution exists now.
If ISPs accept responsibility for the source IP address then DDoS could end, especially for cameras located in homes, where there is the lowest level of packet responsibility on the part of the device owner (versus, say, colocation or various styles of server leases).
> The ability to "fake" the sender exists because there is no
> association management defined in TCP/IP.
DDoS amplification is implemented using UDP, not TCP. It uses the asymmetry of a simple request to generate a much larger response targeted at an unrelated third party (the "lie" of the source address).
TCP source address spoofing serves zero purpose, as the connection could never be established; there is thus no ability to multiply packet size as there is with UDP, nor to inflict the response packet on an unrelated third party. Filtering of TCP source address lies will have no impact on legit comm.
UDP spoofing can serve a purpose, as there is no connection "associated" with the comm; however, an "increased cost" to do so may make sense. That is, service defaults to no UDP spoofing allowed, and you pay an added fee to remove that barrier.
So I fail to see your point as to why the simple-to-solve problem of UDP (source spoofing for packet-size multiplication) is justification for replacing the unrelated TCP protocol, which does not support such undesired behavior.
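To put rough numbers on that request/response asymmetry (illustrative sizes only, not measurements from any particular attack):

```python
# Illustrative sizes (bytes) for one DNS amplification round trip.
QUERY_SIZE = 60       # small UDP query carrying a spoofed source address
RESPONSE_SIZE = 3000  # large answer, e.g. an ANY query against a big zone

amplification = RESPONSE_SIZE / QUERY_SIZE
print(f"amplification factor: {amplification:.0f}x")

# The attacker spends QUERY_SIZE bytes of its own bandwidth; the spoofed
# "source" (the victim) receives RESPONSE_SIZE bytes it never requested.
```

TCP offers no such multiplier, because without a completed handshake no large response is ever sent to the spoofed address.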
Link | Posted on Oct 24, 2016 3:36 PM PDT by Charles Christopher
Your comment suggests you do not know what association control is. The ability to "fake" the sender exists because there is no association management defined in TCP/IP. It should be architecturally impossible, not left to the whim of operational configuration and hope.
Link | Posted on Oct 24, 2016 2:55 PM PDT by Martin Geddes
By definition, a DDoS attack of this kind means the involved devices were lying (over UDP) about their source addresses, which means they were detectable at the source. Thus their "environmental pollution" could have been eliminated by the source network without any changes to current protocols.
>This process is called "association"
Yes, the packet was not "associated" with the sender, current protocols allow this to be obvious at the source network.
>"today's Internet is a shanty town next to a festering garbage dump"
Link | Posted on Oct 24, 2016 2:17 PM PDT by Charles Christopher
Thanks for your CircleID article on Scott Bradner's NANOG talk. In your description of the end of the relationship with the Department of Commerce, you referred to the Affirmation of Commitments as defining the residual role of the NTIA in the functions of the IANA. This isn't accurate and completely omits the IANA functions contract.
From the beginning there were two separate, parallel contractual relationships between NTIA and ICANN. One started as a Memorandum of Understanding. It later transformed into a Joint Project Agreement and transformed yet again into the Affirmation of Commitments.
There was an entirely separate document, a formal procurement contract, specifically focused on the IANA function. It defined the IANA functions and tasked ICANN with providing the IANA service. That contract was renewed several times but was essentially unchanged over the years. The latest incarnation of that contract began 1 Oct 2012 and was structured to run seven years. More precisely, it was set up as a three-year contract with the U.S. Government having a unilateral option to renew it twice, each renewal running for two years. When NTIA announced in March 2014 that it was planning to transition its stewardship, it expected the consultation with the community would be complete in time to simply not exercise its right to extend the contract in September 2015. The community consultation process took longer, so NTIA broke the first two-year extension into two pieces, each a year long, and exercised the first piece.
All the drama was around the IANA functions contract, not the Affirmation of Commitments. It was that contract that lapsed on 1 Oct 2016.
And what happened to the Affirmation of Commitments? The operative parts of the AoC were about periodic reviews of ICANN's transparency and accountability, of security, stability and resiliency, of the whois service, and of competition, consumer choice, and consumer trust. These are now incorporated into ICANN's Bylaws.
Link | Posted on Oct 24, 2016 8:26 AM PDT by Steve Crocker
The obvious reason for a slowdown in registrations is the fact that the world economy is broken and broke.
Link | Posted on Oct 22, 2016 7:57 AM PDT by Geoff Goedde
Has Dyn considered dropping all incoming requests which would result in NXDOMAIN while the attack is in progress? Presumably the attack traffic pattern, which uses randomised domain names, means that the overwhelming majority of requests are resulting in NXDOMAIN responses. Intermediates will attempt to cache these, and this crowds out all the actually-useful responses which could otherwise be cached. If responses were limited to only those where a record was found, not only would outgoing traffic fall dramatically, but the intermediate caches would have a chance to fill up with useful information, hopefully mitigating the effect of the attack.
I say this as someone with PhD-level study of this kind of problem, but not as someone with extensive relevant operational experience. I'd be interested to hear the thoughts of someone who works closer to the coal-face.
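A rough sketch of the policy I have in mind, with hypothetical names and toy zone data (a real authoritative server would implement this inside its response path, not as a wrapper like this):

```python
# Hypothetical sketch, not Dyn's actual logic: while under attack, answer
# only names that exist and silently drop would-be NXDOMAIN responses.
KNOWN_NAMES = {"www.example.com", "mail.example.com"}  # illustrative zone data

def respond(qname, under_attack):
    if qname in KNOWN_NAMES:
        return f"answer for {qname}"  # positive answer, always sent
    if under_attack:
        return None                   # drop: no NXDOMAIN response generated
    return "NXDOMAIN"                 # normal negative answer in peacetime

print(respond("www.example.com", under_attack=True))    # still answered
print(respond("xk3f9.example.com", under_attack=True))  # None, i.e. dropped
```

The randomised-label queries all fall into the dropped branch, so outgoing traffic shrinks to roughly the legitimate query volume while real names keep resolving and repopulating intermediate caches.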
Link | Posted on Oct 21, 2016 10:44 PM PDT by The Famous Brett Watson
Roland's observation that CoCCA provides support only on a "best efforts basis" is not entirely correct; commercial support is available from CoCCA for both gTLDs and ccTLDs.
With ~60 users, CoCCA has been deployed by 20% of the ccTLDs in the root. It is also used by gTLDs and complies with ICANN's rigorous standards.
In any case, TLD managers looking for affordable registry services can now choose between CoCCA and Nomulus. FRED is only missing a few gTLD features and, with some small enhancements, would work for gTLDs as well.
Link | Posted on Oct 21, 2016 7:37 PM PDT by Garth MIller
Yes there are other options out there, but they are, as Rubens already says, not all ICANN compliant or licensed in the same way.
I agree that it makes sense to work with someone with experience in the space, but using an open source solution to do so may be an option - like with the RSPs (such as our company) that Kevin Murphy mentioned in his post about Nomulus.
Ultimately it comes down to a combination of pricing, features & service and we will always strive to help our clients pick the solution that best serves their needs.
We (DomainCocoon) will be taking a look at this solution.
Link | Posted on Oct 20, 2016 3:00 PM PDT by Frank Michlick
Although it is not the first open-source SRS (Shared Registry System), it's the first one that is directly usable in gTLDs. The CoCCA license prevents its free usage in gTLDs, and FRED (fred.nic.cz) would require significant development before deployment in a gTLD. A good number of ccTLDs are now deploying FRED, and Nomulus has the potential to become a similar option for gTLDs, likely in the manner Kevin Murphy described in DI of enabling other RSPs. As for ccTLDs, looking at FRED is likely a simpler option than deploying Nomulus.
Link | Posted on Oct 19, 2016 12:35 PM PDT by Rubens Kuhl