2017-02-20T16:39:00-08:00

The other day several of us were gathered in a conference room on the 17th floor of the LinkedIn building in San Francisco, looking out of the windows as we discussed various technical matters. All around us there were new buildings under construction, each with a tall tower crane anchored to the building in several places. We wondered how those cranes were built, and compared the precision of the construction process to the complete mess that building a network seems to be. And then, this week, I ran across a couple of articles (Feb 14 & Feb 15) arguing that we need a new Internet. For instance, from the Feb 14 post: What we really have today is a Prototype Internet. It has shown us what is possible when we have a cheap and ubiquitous digital infrastructure. Everyone who uses it has had joyous moments when they have spoken to family far away, found a hot new lover, discovered their perfect house, or booked a wonderful holiday somewhere exotic. For this, we should be grateful and have no regrets. Yet we have not only learned about the possibilities, but also about the problems. The Prototype Internet is not fit for purpose for the safety-critical and socially sensitive types of uses we foresee in the future. It simply wasn't designed with healthcare, transport or energy grids in mind, to the extent it was 'designed' at all. Every "circle of death" watching a video, or DDoS attack that takes a major website offline, is a reminder of this. What we have is an endless series of patches with ever growing unmanaged complexity, and this is not a stable foundation for the future. So the Internet is broken. Completely. We need a new one. Really? First, I'd like to point out that much of what people complain about in terms of the Internet, such as the lack of security or the lack of privacy, is actually a matter of tradeoffs. 
You could choose a different set of tradeoffs, of course, but then you would get a different "Internet" — one that may not, in fact, support what we support today. Whether the things it would support would be better or worse, I cannot answer, but the entire concept of a "new Internet" that supports everything we want it to support in a way that has none of the flaws of the current one, and no new flaws we have not thought about before — this is simply impossible. So let's leave that idea aside, and think about some of the other complaints. The Internet is not secure. Well, of course not. But that does not mean it needs to be this way. The reality is that security is a hot potato that application developers, network operators, and end users like to throw at one another, rather than something anyone tries to fix. Rather than considering each piece of the security puzzle, and thinking about how and where it might be best solved, application developers just build applications without security at all, and say "let the network fix it." At the same time, network engineers say either: "sure, I can give you perfect security, let me just install this firewall," or "I don't have anything to do with security, fix that in the application." On the other end, users choose really horrible passwords, and blame the network for losing their credit card number, or say "just let me use my thumbprint," without ever wondering where they are going to go to get a new one when their thumbprint has been compromised. Is this "fixable"? Sure, for some strong measure of security — but a "new Internet" isn't going to fare any better than the current one unless people start talking to one another. The Internet cannot scale. Well, that all depends on what you mean by "scale." It seems pretty large to me, and it seems to be getting larger. The problem is that it is often harder to design in scaling than you might think. 
You often do not know what problems you are going to encounter until you actually encounter them. To think that we can just "apply some math" and make the problem go away shows a complete lack of historical understanding. What you need to do is build in the flexibility that allows you to[...]
2017-02-20T13:24:00-08:00

Co-authored by Leslie Daigle, Konstantinos Komaitis, and Phil Roberts. The incredible pace of change of the Internet — from research laboratory inception to global telecommunication necessity — is due to the continuing pursuit, development and deployment of technology and practices adopted to make the Internet better. This has required continuous attention to a wide variety of problems ranging from "simple" to so-called "wicked problems". Problems in the latter category have been addressed through collaboration. This post outlines key characteristics of successful collaboration activities (download PDF version).

Problem difficulty and solution approaches

Wikipedia offers a definition of "wicked problems" [accessed September 16, 2016]: "A wicked problem is a problem that is difficult or impossible to solve because of incomplete, contradictory, and changing requirements that are often difficult to recognize. The use of the term 'wicked' here has come to denote resistance to resolution, rather than evil [.] Moreover, because of complex interdependencies, the effort to solve one aspect of a wicked problem may reveal or create other problems." Of course, not all large problems are wicked. As noted in the Internet Society's commentary on Collaborative Stewardship, sometimes an Internet problem has a known answer and the challenge is to foster awareness and uptake of that known solution. Denning and Dunham characterize innovation challenges as simple, complex, or wicked [see Denning, Peter J. and Robert Dunham, "The Innovator's Way — Essential Practices for Successful Innovation", page 315]. 
In the Internet context, the characteristics and approaches to addressing them can be summarized as follows:

Simple: solutions, or design approaches for solutions, are known. Solution path: Cooperation (awareness-raising and information sharing, typically through Network Operator Groups).

Complex: no known solution exists, and the problem spans multiple parts of the Internet. Solution path: Consensus (open, consensus-based standards development).

Wicked: no solution exists in any domain, and there is a general lack of agreement on the existence or characterization of the problem. Solution path: Collaboration (moving beyond existing domain and organization boundaries and set processes for determining problems and solutions).

Why Internet problems are often wicked

First, it is important to understand that, today, the Internet is largely composed of private networks. Individual participants, corporations or otherwise, must have a valid business reason for the adoption of a certain technology or practice in their own network. This does not necessarily rise to the level of a quantifiable business case, but they have to have some valid reason that it helps them make something better in their own networks or experience of the Internet. However, if the practice is a behavior on the network that is impacted by, or includes other networks, the participants must have a standard they agree to. This might be a protocol standard governing bits on the wire and the exchange of communication, or a common practice. To get to that level of agreement, participants — whether private companies with financial stakes in the situation, or governments, or individuals — must be disposed and willing to collaborate with others to instantiate the adoption.

Addressing wicked Internet problems: Keys to successful collaboration

We identify here four important characteristics of collaborative activities that have driven the success and innovation of the Internet to date. There must be a unifying purpose. 
There can be any number of participants in a successful collaboration, and they can have a range of different perspectives on what a good outcome looks like, but the participants must be united in their desire for an outcome. The participants will have some shared sense of "good." It is likely that that sense of "good" will include something about collaboration itself. That is to say, the very act of collaborating to achieve an outcome [...]
2017-02-20T12:30:00-08:00

The Bug Bounty movement grew out of a desire to recognize independent security researchers' efforts in finding and disclosing bugs to the vendor. Over time the movement split into those that demanded to be compensated for the bugs they found and third-party organizations that sought to capitalize on intercepting knowledge of bugs before alerting the vulnerable vendor. Today, on a different front, new businesses have sprouted to manage bug bounties on behalf of a growing number of organizations new to the vulnerability disclosure space. Looking forward, given the history and limitations of bug bounty operations described in part 1 and part 2 of this blog series, what does the future hold?

The Penetration Testing Business

The paid bug bounty movement has been, and continues to be, a friction point with the commercial penetration testing business model. Since penetration testing is mostly a consultant-led exercise (excluding managed vulnerability scanning programs from the discussion for now), consumers of penetration testing services effectively pay for time and materials — and what's inside the consultants' heads. Meanwhile, contributors to bug bounty programs are paid per discovery — independent of how much time and effort the researcher expended to find the bug. Initially many commercial penetration testing companies saw bug bounty programs as a threat to their business model. Some organizations tried to adapt, offering their own bug bounty programs to their clients, using "bench time" (i.e. non-billable consultancy hours) to participate in third-party bug bounties and generate revenue that way, or seeking collaboration with the commercial bug bounty operators by picking up the costly bug triaging work. Most of the early fears of penetration testing companies were ill-founded. 
The demand for compliance validation and system certification has grown faster than any "erosion" of business due to bug bounties, and clients have largely increased their security spend to fund bug bounty programs rather than siphon from an existing penetration testing budget. While the penetration testing market continues to grow, it is perhaps important to understand the future effect on the talent pool from which both it and the bug bounty industry must draw. There are several constraints that will influence the future of bug bounty and penetration testing businesses. These include:

The global pool of good and great bug hunters is finite (likely limited to fewer than 5,000 people worldwide in 2017). Both industries need to tap this pool in order to be successful in finding bugs and security vulnerabilities that cannot be found via automated tools.

Advances in automated code checking, adherence to and enforcement of the SDL (Secure Development Lifecycle), adoption of DevOps and SecDevOps automation, and more secure software development frameworks are resulting in fewer bugs making it to public release — and those bugs that do make it tend to be more complex and require more effort to uncover.

The growing adoption and advancement of managed vulnerability scanning services. Most tools used by bug hunters are already enveloped in the scanning platforms used by managed services providers — meaning that up to 95% of commonly reported bugs in web applications are easily discovered through automated scanning tools. As security researchers identify and publish new attack and exploitation vectors, tools are improved to identify these new vectors and added to the scanning platforms. Over time the gap between automated tool and bug hunter is closing — requiring bug hunters to become ever more specialized. 
It is possible to argue that the growth and popularity of bug bounty programs is a direct response to often poorly scoped, negligently executed, and over-priced penetration testing. As many penetration testing service lines (and levels) became commoditized and competition subsequently drove down day-rates, providers were apt to use lesser-qualified and ine[...]
2017-02-20T07:25:00-08:00

From "IGF 2016 Best Practice Forum on IPv6," co-authored by Izumi Okutani, Sumon A. Sabir and Wim Degezelle. The stock of new IPv4 addresses is almost empty. Using one IPv4 address for multiple users is not a future-proof solution. IPv4-only users may expect a deterioration of their Internet connectivity and limitations when using the newest applications and online games. The solution to safeguard today's quality is called IPv6. The Best Practice Forum (BPF) on IPv6 at the Internet Governance Forum (IGF) explored what economic and commercial incentives drive providers, companies and organizations to deploy IPv6 on their networks and for their services. The BPF collected case studies, held open discussions online and at the 2016 IGF meeting, and produced a comprehensive output report. This article gives a high-level overview.

IP addresses and IPv6

An IP address, in layman's terms, is used to identify the interface of a device that is connected to the Internet. Thanks to the IP address, data traveling over the Internet can find the right destination. The Internet Protocol (IP) is the set of rules that, among other things, defines the format and characteristics of the IP address. IPv4 (Internet Protocol version 4) has been used from the start of the Internet but has run out of newly available address stock. IPv6 (Internet Protocol version 6) was developed to address this shortage. IPv6 is abundant in its address space, can accommodate the expected growth of the Internet, and allows many more devices and users to be connected. To communicate over IPv6, devices must support the IPv6 protocol, networks must be capable of handling IPv6 traffic, and content must be reachable for users who connect with an IPv6 address. 
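The difference in address-space size between the two protocol versions can be made concrete with Python's standard ipaddress module. This is a quick illustration, not part of the BPF report; the example addresses come from the 203.0.113.0/24 and 2001:db8::/32 blocks reserved for documentation.

```python
import ipaddress

# IPv4 uses 32-bit addresses, IPv6 uses 128-bit addresses -- the
# "abundant address space" referred to above.
ipv4_total = 2 ** 32
ipv6_total = 2 ** 128
print(f"IPv4 address space: {ipv4_total:,}")
print(f"IPv6 address space: {ipv6_total:,}")

# Parsing one address of each family with the standard library.
v4 = ipaddress.ip_address("203.0.113.7")
v6 = ipaddress.ip_address("2001:db8::1")
print(v4.version, v6.version)  # 4 6
```

The ratio between the two spaces is 2^96: for every IPv4 address there are roughly 7.9 × 10^28 IPv6 addresses, which is why address exhaustion is not a practical concern for IPv6.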
General state of IPv6 deployment

According to the APNIC Labs measurements for November 2016, the global IPv6 deployment rate was close to 8%, with large differences between countries, from zero to double-digit IPv6 deployment rates of up to 55%. Higher deployment does not entirely follow the traditional division between industrialized and developing countries. There is not always a clear link between economic performance (e.g. GDP) or Internet penetration and IPv6 uptake in a country. The top 20 countries (end 2016), in terms of IPv6 deployment, are a diverse group including (in alphabetical order): Belgium, Ecuador, Greece, Malaysia, Peru, Portugal, Switzerland, Trinidad and Tobago, and the United States.

The commercial incentives for IPv6 deployment

Major global players and some local and regional companies and organizations have commercially deployed IPv6. The BPF collected case studies from different regions and industry sectors to learn about the key motivations behind these decisions to deploy IPv6. The imminent shortage of IPv4 addresses is the obvious and most cited reason to deploy IPv6. IPv6 is regarded as the long-term solution to prepare networks and services for the future and to cope with growth. Investing in IPv6 is cheaper in the long term than the alternative solutions that now prolong the life of IPv4. Alternatives come with their own cost, and eventually, IPv6 deployment will be inevitable. It is advised to plan IPv6 deployment over a longer period and include it in existing maintenance cycles and in projects to renew and upgrade infrastructure, equipment and software. This can drastically reduce the burden and cost. Some see IPv6 deployment and providing IPv6 services as a way to show that a company has the technical know-how and capability to adapt to new technical evolutions. In today's competitive markets, branding and image building are important. IPv6 can also create new business opportunities. 
It allows providers to offer a high-quality Internet experience, and some services and applications only work, or work better, with IPv6. There are examples of providers that deployed IPv6 to meet the demand of existing or new customers.

Observations per industry sector

The higher deployment rate in a country is usually the[...]
2017-02-20T06:02:00-08:00

Let's be clear: right now, any statements on when (or even if) a follow-up round of new gTLD applications might happen are pure conjecture. The first round closed on April 12, 2012. Since then, the pressure has been increasing for ICANN to actually live up to the guidebook promise of launching "subsequent gTLD application rounds as quickly as possible" with "the next application round to begin within one year of the close of the application submission period for the initial round." But that deadline is clearly not going to be met. ICANN no longer expects to complete reviewing the first round — a prerequisite for initiating a follow-up — before some time around 2020. Work has begun on imagining what a second round might look like, but that also seems a long way from completion.

Reviews and classes

So to try and get a second round out of the gate, imaginations have been working overtime. What if only certain categories of applicants, say cities and brands, were allowed in? The logic is that by restricting applicant types, evaluating them would be easier, and not all the reviews, for all the TLD types applied for in 2012, would need to be completed before any new calls for applications go out. For cities and geographic terms (dubbed "Geo TLDs"), where the applicant needs to show support from the local government or authorities, the initial gating process could be somewhat easier. As for brands, there were many non-believers in 2012. Then Amazon, Axa, Barclays, BMW, Canon, Google and many others were revealed as applicants. And now those that didn't apply then certainly want to now. They are lobbying hard to get their shot as quickly as possible. So when could that be? Those who understand ICANN know the organisation is notoriously slow at getting anything done… unless you do one of two things: get governments to push, or add symbolism to the mix. 
ICANN insiders who would like to see a second round as soon as possible are trying door number 2, suggesting that launching a subsequent application window exactly seven years after the first, i.e. on January 12, 2019, would satisfy the program's initial intent of a (relatively) quick follow-up to round 1 whilst being a nice nod to history at the same time. In the weird alternative-logic universe of ICANN, that actually makes sense! Doesn't make it any more likely to actually happen though…
Written by Stéphane Van Gelder, Milathan
Follow CircleID on Twitter
More under: ICANN, Top-Level Domains
2017-02-18T12:12:00-08:00

The emergence and proliferation of Internet of Things (IoT) devices on industrial, enterprise, and home networks brings with it unprecedented risk. The potential magnitude of this risk was made concrete in October 2016, when insecure Internet-connected cameras launched a distributed denial of service (DDoS) attack on Dyn, a provider of DNS service for many large online service providers (e.g., Twitter, Reddit). Although this incident caused large-scale disruption, it is noteworthy that the attack involved only a few hundred thousand endpoints and a traffic rate of about 1.2 terabits per second. With predictions of upwards of a billion IoT devices within the next five to ten years, the risk of similar, yet much larger, attacks is imminent.

The Growing Risks of Insecure IoT Devices

One of the biggest contributors to the risk of future attack is the fact that many IoT devices have long-standing, widely known software vulnerabilities that leave them open to exploitation and control by remote attackers. Worse yet, the vendors of these IoT devices often have provenance in the hardware industry, but they may lack expertise or resources in software development and systems security. As a result, IoT device manufacturers may ship devices that are extremely difficult, if not practically impossible, to secure. The large number of insecure IoT devices connected to the Internet poses unprecedented risks to consumer privacy, as well as threats to the underlying physical infrastructure and the global Internet at large: Data privacy risks. Internet-connected devices increasingly collect data about the physical world, including information about the functioning of infrastructure such as the power grid and transportation systems, as well as personal or private data on individual consumers. At present, many IoT devices either do not encrypt their communications or use a form of encrypted transport that is vulnerable to attack. 
Many of these devices also store the data they collect in cloud-hosted services, which may be the target of data breaches or other attacks. Risks to availability of critical infrastructure and the Internet at large. As the Mirai botnet attack of October 2016 demonstrated, Internet services often share dependencies on the same underlying infrastructure: knocking many websites offline did not require direct attacks on those services, but rather a targeted attack on the underlying infrastructure on which many of them depend (i.e., the Domain Name System). More broadly, one might expect future attacks that target not just the Internet infrastructure but also physical infrastructure that is increasingly Internet-connected (e.g., power and water systems). The dependencies that are inherent in the current Internet architecture create immediate threats to resilience. The large magnitude and broad scope of these risks compel us to seek solutions that will improve infrastructure resilience in the face of Internet-connected devices that are extremely difficult to secure. A central question in this problem area concerns the responsibility that each stakeholder in this ecosystem should bear, and the respective roles of technology and regulation (whether via industry self-regulation or otherwise) in securing both the Internet and associated physical infrastructure against these increased risks.

Risk Mitigation and Management

One possible lever for either government or self-regulation is the IoT device manufacturers. One possibility, for example, might be a device certification program for manufacturers that could attest to adherence to best common practice for device and software security. A well-known (and oft-used) analogy is the UL certification process for electrical devices and appliances. Despite its conceptual appeal, however, a certification approach poses several practical challenges. 
One challenge is outlining and prescribing best common practices in the first[...]
2017-02-17T12:58:00-08:00

Admittedly, timing is not altogether "all", since there is a palette of factors that go into deciding unlawful registrations of domain names, and a decision as to whether a registrant is cybersquatting or a mark owner overreaching is likely to include a number of them, but timing is nevertheless fundamental in determining the outcome. Was the mark in existence before the domain name was registered? Is the complainant relying on an unregistered mark? What was the complainant's reputation when the domain name was registered? What proof does the complainant have that the registrant had knowledge of its mark? Simply to have a mark is not conclusive of a right to the domain name. Owners of newly minted marks complaining about domain names registered long before is a classic example of overreaching. To have an actionable claim for cybersquatting, the mark must predate the domain name registration. Examples of this type of "mis-timing" appear with some regularity. This month we have Obero Inc. v. Domain Manager, eWeb Development Inc., D2016-2591 (February 10, 2017) (
2017-02-17T02:36:00-08:00

The choices for consumers and businesses in Europe to get themselves online have never been so great. Social media, apps and blog sites have all made a lasting impression, and we are now in an increasingly crowded market with the addition of hundreds of new gTLDs. So how has all this affected growth and market shares among domain names in Europe? As seen in the chart, annual growth among European ccTLDs had been sliding for many years — until recently. In 2015 and 2016, just as many of the new gTLDs were being delegated, ccTLD growth rates began to stabilise, and downward trends flattened off to a median rate of 3.4% per year. Although it is unclear if this growth stabilisation will continue, it's certainly a positive sign for European ccTLDs, which are now competing for attention with hundreds of new gTLDs.

Drivers of stabilisation

Buyer behaviour is notoriously difficult to assess in any fine detail; however, based on market averages in registrations, we can get a sense of the different dynamics of activity. For example, in 2015 we observed a noticeable reduction in churn ratios (domains that were deleted or not renewed). At the same time, new add ratios (new domain sales) remained stable compared to previously declining rates. This meant that the gap between new adds and churn, on average, widened, helping to push up domain retention rates1 and, of course, slow down the decline in long-term growth trends. In 2016, domain registration activity was generally higher. Medians in new add ratios were up, but so were churns. Overall the gap between the two remained relatively stable; however, as deletes increased at a slightly higher rate than new adds, the median retention rate felt a small negative pressure.

A ccTLD is a brand

Ten years ago, a ccTLD had relatively limited competition. There were only a few other relevant TLDs to choose from; internet usage was not as high and social media did not have the reach it does today. 
Many ccTLD registries did not spend much time on marketing, so simple volume discounts and other pricing incentives were the most common options to drive sales. Although the effects of new gTLDs have not been felt greatly in Europe (at least in terms of volume/market share), they still have the potential to develop and start chipping into new domain sales, so complacency is not an option. In today's competitive TLD market, a new business has plenty of choices and might choose to integrate a new gTLD into its strategy; however, it's perhaps less likely that an existing business that has held and used its local ccTLD for many years would quickly switch to a new gTLD — the cost benefit is probably a hard sell. Nonetheless, ensuring awareness of the ccTLD brand is now more important than ever. Market buyer behaviour in many sectors often tells us that familiarity is an important aspect of decision making — with that, ccTLDs have a good starting point, and should continue to capitalise on their unique position as country identifiers as well as their reputations as trusted and secure options for their citizens. For more information on the latest trends in ccTLD registrations see the latest CENTR DomainWire Global TLD Report.

1 Retention rate is a standardised methodology used in CENTR across the European ccTLD market. It is an indication of renewals and is calculated as the difference between total domains at two points in time minus the new domains registered between those points.
Written by Patrick Myles
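The retention-rate footnote above can be sketched in a few lines of Python. The figures are invented for illustration (they are not CENTR data), and the calculation follows one plausible reading of the footnote: domains retained at the later date are the total at that date minus the new registrations added in between.

```python
# Hypothetical registry figures for illustration -- not CENTR data.
domains_start = 1_000_000   # total domains at time T1
domains_end   = 1_020_000   # total domains at time T2
new_adds      =    90_000   # domains registered between T1 and T2

# Domains at T2 that already existed at T1 (total minus new adds).
retained = domains_end - new_adds              # 930,000

# Retention rate: retained domains as a share of the T1 base.
retention_rate = retained / domains_start      # 0.93

# Churn is the complement: T1 domains that were deleted or not renewed.
churned = domains_start - retained             # 70,000
print(f"retention rate: {retention_rate:.1%}, churned: {churned:,}")
```

With these numbers the base grew by 2% overall, yet 7% of the starting stock churned; this is the "gap between new adds and churn" the article describes.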
2017-02-16T13:45:00-08:00

A proposal from the Domain Name Association (DNA) would provide copyright owners with a new tool to fight online infringement — but the idea is, like other efforts to protect intellectual property rights on the Internet, proving controversial. The proposed Copyright Alternative Dispute Resolution Policy is one of four parts of the DNA's "Healthy Domains Initiative" (HDI). It is designed to construct a voluntary framework for copyright infringement disputes, so that copyright holders can use a more efficient and cost-effective system for clear cases of copyright abuse than going to court, and so that registries and registrars are not forced to act as "judges" and "jurors" on copyright complaints. The concept of the Copyright ADRP appears similar to the longstanding Uniform Domain Name Dispute Resolution Policy (UDRP). But, unlike the UDRP, which applies only to domain names, the Copyright ADRP would apply to what the DNA describes as "pervasive instances of copyright infringement." While many domain names are used in connection with infringing websites, the UDRP is only available when the domain name itself is identical or confusingly similar to a relevant trademark. As a result, the UDRP is often not available to copyright owners, despite obviously infringing content. Although the Digital Millennium Copyright Act (DMCA) is already frequently invoked by copyright owners to take down infringing content, it has significant limitations. For example, many website hosting companies (especially those outside the United States) do not participate in the DMCA system, and the counter-notification process for infringers can easily be used to defeat a DMCA claim. In those cases, a copyright owner often has no choice but to accept the infringing website or incur the burdens of fighting it in court. 
The Copyright ADRP is a fascinating idea that, if properly drafted and implemented, could help reduce infringing content on the Internet and would complement both the UDRP (and other domain name dispute policies) and the DMCA. Still, the idea of the Copyright ADRP is already meeting resistance. A blog post at Domain Incite expresses concern that the policy could be unfairly applied "in favor of rights holders." The Electronic Frontier Foundation reportedly has called it "ill-conceived" and "the very epitome of shadow regulation." And the Internet Commerce Association is worried about "a chilling effect on the domain leasing and licensing business." Given the early stage of the proposed Copyright ADRP and the undeniable prevalence of online copyright infringement, the criticism seems premature and/or unwarranted. Like any legal enforcement mechanism, the devil will be in the details — and, at this point, the details seem to be minimal. As of this writing, it is unclear how the DNA's proposal would be applied, other than a broad statement that it should be limited to instances "where the alleged infringement is pervasive or where the primary purpose of the domain is the dissemination of alleged infringing material." How to define "pervasive" or "primary purpose" (let alone "infringement" — something with which the courts have long struggled) is far from clear. Plus, numerous questions remain to be answered. Among the most important: As a voluntary dispute system (not mandated by ICANN), which registries and registrars would adopt the Copyright ADRP? And who would administer it? The answers to these questions are worth pursuing because, regardless of whether the DNA's idea is workable, reducing online copyright infringement is a laudable goal that will only strengthen the usefulness of and confidence in the Internet. 
Written by Doug Isenberg, Attorney & Founder of The GigaLaw Firm
RIPE NCC will be hosting its fifth hackathon event in Amsterdam on 20-21 April 2017. Operators, designers, researchers and developers are invited to take on the challenge and join in developing new tools and visualizations for DNS measurements.
More about this event from RIPE NCC:
The RIPE NCC's fifth hackathon event offers an opportunity for collaboration on the development of new tools for DNS operators using data provided by the RIPE NCC (via RIPE Atlas, DNSMON, etc.). The event will bring together people with a variety of skills so as to encourage the combination of different types of expertise and inspire creativity.
Participants in the hackathon will discover new ways of tapping into the rich source of DNS measurement data to devise and implement helpful tools and create informative visualizations. This is your chance to get involved, get in touch with other people working in your field, get access to the RIPE NCC’s DNS measurements data and get to work on making something that could be of benefit to the entire internet community.
When & Where:
Date: 20-21 April 2017
Time: Thursday 9:00-19:00, Friday: 9:00-21:00 (including social event)
Location: Amsterdam, the Netherlands
Interested in participating? See the full details here.
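For a sense of what working with the RIPE NCC's measurement data involves, results for a public RIPE Atlas measurement can be fetched from the v2 REST API. A minimal sketch using only the standard library — the measurement ID below is hypothetical; real DNS measurement IDs are listed on atlas.ripe.net:

```python
import json
import urllib.request

API_BASE = "https://atlas.ripe.net/api/v2"

def results_url(measurement_id, start=None, stop=None):
    """Build the URL for fetching results of a RIPE Atlas measurement.

    Optional `start`/`stop` are Unix timestamps bounding the result window.
    """
    url = f"{API_BASE}/measurements/{measurement_id}/results/?format=json"
    if start is not None:
        url += f"&start={start}"
    if stop is not None:
        url += f"&stop={stop}"
    return url

def fetch_results(measurement_id):
    """Fetch and decode measurement results (requires network access)."""
    with urllib.request.urlopen(results_url(measurement_id)) as resp:
        return json.load(resp)

# Example: build the results URL for a (hypothetical) measurement ID 10001
print(results_url(10001, start=1492646400))
```

Each result in the decoded list is a JSON object describing one probe's measurement, which is the raw material hackathon tools typically aggregate or visualize.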
2017-02-16T12:09:00-08:00
Nomulus is the code for the backend domain name registry solution offered by Google, which requires the use of Google Cloud. It is the solution used for all of Google's new gTLDs, and it works. The availability of this solution can look like a potentially "simple" option for future .BRAND new gTLD applicants — but is that truly the case? When Google makes such an announcement, it immediately catches the eye of the entire new gTLD industry, as well as others. While the announcement may be seen as a threat by other backend registry businesses, it also alerts other potential new gTLD service providers — such as law firms — that would be interested in using a registry platform to avoid contracting with a backend registry. To help clear up some points, we sent our questions to one of the key people involved with Nomulus: Ben McIlwain, Google's senior software engineer, who was kind enough to answer them. * * * Q: What technical knowledge would a Law Firm need to offer Trademarks their .BRAND gTLD using Nomulus? A: "A law firm? They'd definitely need technically minded people, and probably at least one developer. There are not many law firms running TLDs, I would imagine? It seems more likely to me that said hypothetical law firm would want to use a registry service provider". Q: Can Google Registrar (for US companies) be the single registrar authorized to create a ".brand" new domain name, when using Nomulus? This question is important since a registrar is required to allow the registration of domain names. A: "With a relatively small amount of custom development to Nomulus, you could add a whitelist of registrar(s) to TLDs so that only those registrar(s) could register domain names on said TLD. This would work for any registrar and isn't specific to Google Domains. It'd all be using standard EPP".
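The per-TLD registrar whitelist Ben describes could be modeled as simply as this. A hypothetical sketch, not actual Nomulus code — the table and function names are illustrative:

```python
# Hypothetical per-TLD registrar whitelist (illustrative data, not the
# Nomulus configuration format).
TLD_ALLOWED_REGISTRARS = {
    "brand": {"google-domains"},            # single authorized registrar
    "example": {"registrar-a", "registrar-b"},
}

def may_register(tld, registrar_id):
    """Return True if `registrar_id` may create domain names under `tld`.

    A TLD absent from the whitelist is treated as open to all registrars.
    """
    allowed = TLD_ALLOWED_REGISTRARS.get(tld)
    return allowed is None or registrar_id in allowed

print(may_register("brand", "google-domains"))   # True
print(may_register("brand", "other-registrar"))  # False
```

In practice such a check would sit in the EPP domain-create path, rejecting create commands from any registrar not on the TLD's list.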
Q: Does it make sense to say that Google Registry has already passed the ICANN technical requirements and so a company using Nomulus+GoogleCloud would easily pass these tests prior to being delegated? A: "Not sure. You'd still need someone who's already familiar with ICANN's pre-delegation testing process to get through it "easily". But you could say that, since we passed the testing on our ~40 TLDs, there's more assurance that someone else could do so using our software than with some other software that hasn't yet passed testing for any TLD". Q: Starting from scratch with Nomulus, and knowing that OpenRegistry was recently sold for $3.7 million, what could be the estimated cost to build a backend registry solution with Nomulus? A: "I have no idea. Hopefully not too much, but there are way too many factors in play (requirements, prevailing wage of the area in which you're hiring developers, etc.)" Q: Are there already service providers able to build a backend registry solution for third-party customers? (Such as a law firm looking for a service provider to build its solution.) A: "There are registry service providers that exist, e.g. Rightside and Afilias. Did you mean using Nomulus though? If so, I'm not aware of any, but maybe someone would do so in the future?" Q: Automating and managing the invoicing process seems to be a though part of a backend registry solution: how can Nomulus help simplify this for an entrepreneur willing to operate a generic TLD dedicated to selling domain names? A: "I don't entirely understand the question. "though part"? The problem is that there are so many potential different ways to handle invoicing and payments, and what is available/allowable likely differs from country to country, that you'd probably need to end up developing stuff yourself. We provide some billing queries that aggregate billing events on a monthly basis and group them by registrar, which gets you most[...]
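The billing queries Ben mentions — aggregating billing events on a monthly basis and grouping them by registrar — amount to a simple group-by. A sketch with made-up event data; the field names are assumptions, not the Nomulus schema:

```python
from collections import defaultdict

# Illustrative billing events: (registrar, ISO date, amount in USD).
events = [
    ("registrar-a", "2017-01-03", 8.00),
    ("registrar-a", "2017-01-20", 8.00),
    ("registrar-b", "2017-01-21", 12.50),
    ("registrar-a", "2017-02-02", 8.00),
]

def monthly_totals(events):
    """Aggregate billing events by (month, registrar)."""
    totals = defaultdict(float)
    for registrar, date, amount in events:
        month = date[:7]  # "YYYY-MM"
        totals[(month, registrar)] += amount
    return dict(totals)

print(monthly_totals(events))
```

As the answer notes, this grouping is the easy part; mapping the totals onto country-specific invoicing and payment flows is where custom development comes in.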
During a talk at the RSA Conference, security expert Bruce Schneier called for the creation of a new government agency that focuses on internet of things regulation, arguing that "the risks are too great, and the stakes are too high" to do nothing. Rob Wright reporting in TechTarget: "During a wide-ranging talk on internet of things regulation and security at RSA Conference 2017, Schneier, CTO of IBM Resilient, made the case that government intervention is needed to address threats such as the Mirai botnet. He described IoT security as a unique problem because manufacturers have produced many devices that are inherently insecure and cannot be effectively patched, and IoT malware has little impact on the actual devices. Because compromised devices are used to attack third parties ... there is little incentive on the part of the users and device manufacturers to act."
2017-02-15T10:40:00-08:00
Let's be honest about it. Nobody — including those very clever people that were present at its birth — had the slightest idea what impact the internet would have only a few decades after its invention. The internet has now penetrated every single element of our society and of our economy, and if we look at how complex, varied and historically different our societies are, it is no wonder that we are running into serious problems with the current version of our internet. There are some very serious threats to the internet, the key ones being:
Cyber-terror and cyber-war
Cyber-crime
Political (government) interference (Russian and Chinese hackers, PRISM, Stuxnet, etc.)
Privacy intrusion (governments, Google, Facebook, Amazon, etc.)
So far the reaction to all of this has been to create draconian regulations which will never be successful, because the internet was never designed to cope with such complexities. The internet is now so critical to our society that we can't afford to lose it, and so we are beginning to accept the breaches, hacks and interferences, because the need to use it is greater than the concerns we have in relation to the above-mentioned activities. This is creating very dangerous situations politically, socially and economically. We have been somewhat sheltered by the fact that over the last half century — in western democracies — we have had good institutions, both private and public, which in general terms have been addressing these negative outcomes with the good of all in mind. While this is in general still the case, it is not too difficult to see that populist regimes might have other ideas about what defines the 'public good' and that they will want to use the internet for their own purposes. On the other hand, we see the more responsible governments increasingly being forced to intervene and regulate, as they are unable to get on top of the above-mentioned issues.
We know this is futile, but they feel they have no other option. Rather than following this path, it would be much better to address the underlying technology issues of the internet. There is no way that we can avoid terrorists, criminals, and disruptive populist factions who will always look for ways to misuse the internet; but we can make the internet much safer than it is now. Unfortunately, however, the current internet cannot be fixed. So we need a new one. My colleague Martin Geddes has written an excellent article on why the old net is broken and why it can't be fixed. It is not going to be easy to resolve this. It basically means that, bit by bit, the old internet will need to be replaced by a new one. The good thing is that the engineers involved in both the old and the new internet know what this new internet should look like — in some places this (industrial) internet infrastructure already exists. What is needed is the commercial and political will to start working on replacing the old with the new. Based on Martin's article, a group of my colleagues have started a discussion on this topic. I am a firm believer that our industry will need to drive this new development; we will have to create further awareness of the problem and at the same time point the way forward. There is widespread support for looking at RINA for both the strategic and the technological guidance that is needed. There is a good description of RINA on Wikipedia — the following is only the introduction to it: RINA stands for Recursive InterNetwork Architecture and is a computer network architecture that unifies distributed computing and telecommunications. RINA's fundamental principle is that computer networking is just Inter-Process Communication or IPC. RINA reconstructs the ov[...]
2017-02-15T07:29:00-08:00
A stark contrast is emerging within the DNS between providers who tolerate blatantly illegal domain use and those who do not. Our study, just published here, focuses on five U.S.-based providers, their policies, and their response to reports of opioid traffic within their registry or registrar. There are many providers, not covered here, who removed hundreds of domains selling opioids, and I applaud their efforts. In January of this year, on a single day in a single town in Massachusetts, police seized $1.2 million worth of Fentanyl from one location and revived an infant who was exposed to Fentanyl in another location. These scenes are repeated regularly throughout the world as the specter of opioid abuse haunts us. What is Fentanyl? Let us use a description from a NameSilo-sponsored domain selling Fentanyl without a prescription: Fentanyl is a powerful synthetic opiate analgesic similar to but more potent than morphine. It is typically used to treat patients with severe pain, or to manage pain after surgery. It is also sometimes used to treat people with chronic pain who are physically tolerant to opiates. It is a schedule II prescription drug. Fentanyl is 50 times more powerful than heroin and over 100 times more potent than morphine. It is intended to be used as a slow-release, but people who abuse it take the entire dosage through various means. It is a quick route to overdose and death. When we reported this domain to NameSilo, something curious happened: there was no response from NameSilo, but the site became "hidden" from certain IP addresses. As of today, the domain is still selling Fentanyl. Different societies have struggled with different abuse issues throughout history; this one is ours, and it is being fueled from unexpected sources. I have written before about various illicit pharmacy operations within the DNS and the registrars who permit them to operate, but online opioid traffic is much worse.
Online opioid traffic is inherently predatory, targeting people who will likely suffer and die. From January 2016 until now I have been working with a variety of ad hoc teams in addressing the problem of online opioids. First, I led a group of undergraduates to collect and analyze opioid trafficking domains to determine how easy it was to get controlled substances and which providers were most pervasive. Following the release of our findings, I was asked to present the report at a number of different venues — Internet policy, security, and law enforcement groups. They were all shocked, but not surprised, at the scope of open narcotics traffic on the Internet. The next step in this effort, starting in August 2016, was to begin notifying the various providers and measure their response. The results, overall, were actually encouraging. Different providers (including registries, registrars, and ISPs) from India, Germany, China, the Netherlands, and many other countries used their documented abuse procedures to suspend and remove domains, over 200 of them, engaged in opioid traffic. Domains either directly involved in the trafficking of narcotics or aiding them in transactions, marketing or Internet infrastructure were reported. The registries, registrars and hosting companies recognized that A) the illegal commerce occurring within these domains violated their policies, B) the registrants are likely criminals, and/or C) the threat to public health does not support a positive model of the Internet. For these efforts, I thank all who participated. Some of the notified domains dropped opioids from their offerings but continue to be illicit pharmacies and will have to be addressed in a different context; still, this is progress. That is the good news… The bad n[...]
2017-02-14T18:44:00-08:00
The Internet is a great success and an abject failure. We need a new and better one. Let me explain why. We are about to enter an era in which online services become embedded into pretty much every activity in life. We will become extremely dependent on the safe and secure functioning of the underlying infrastructure. Whole new industries are waiting to be born as intelligent machines, widespread robotics, and miniaturized sensors appear everywhere. There is the usual plethora of buzzwords to describe the enabling mechanisms, like IoT, 5G and SDN/NFV. These are the "trees", and focusing on them in isolation misses the "forest" picture. The Liverpool to Manchester railway (opened 1830) crossing a canal. * * * What we really have today is a Prototype Internet. It has shown us what is possible when we have a cheap and ubiquitous digital infrastructure. Everyone who uses it has had joyous moments when they have spoken to family far away, found a hot new lover, discovered their perfect house, or booked a wonderful holiday somewhere exotic. For this, we should be grateful and have no regrets. Yet we have not only learned about the possibilities, but also about the problems. The Prototype Internet is not fit for purpose for the safety-critical and socially sensitive types of uses we foresee in the future. It simply wasn't designed with healthcare, transport or energy grids in mind, to the extent it was 'designed' at all. Every "circle of death" watching a video, or DDoS attack that takes a major website offline, is a reminder of this. What we have is an endless series of patches with ever-growing unmanaged complexity, and this is not a stable foundation for the future. The fundamental architecture of the Prototype Internet is broken, and cannot be repaired. It does one thing well: virtualise connectivity. Everything else is an afterthought and is (by and large) a total mess.
Performance, security, maintainability, deployability, privacy, mobility, resilience, fault management, quality measurement, regulatory compliance, and so on… We have spent three decades throwing bandwidth at all quality and performance problems, and it has failed. There is no security model in the present Internet: it is a pure afterthought patched onto an essentially unfixable global addressing system. When your broadband breaks, it is nearly impossible to understand why, as I have personally found (and I am supposed to be an expert!). It isn't just the practical protocols that are broken. The theoretical foundations are missing, and its architectural justification is plain wrong. First steps are fateful, and when you misconceive networking as being a "computer to computer" thing, when it is really "computation to computation", there is no way back. The choice to reason about distributed computing in terms of layers rather than scopes [PDF] is an error that cannot be undone. The problem is not just a technical issue. It is a cultural and institutional one too. Engineering is about taking responsibility for failure, and the IETF does not do this. As such, it is claiming the legitimacy benefits of the "engineer" title without accepting the consequent costs. This is, I regret to say, unethical. Real and professional engineering organizations need to call them out on this. We see many examples of failed, abandoned or unsatisfactory efforts to fix the original design. Perhaps the most egregious is the IPv4 to IPv6 transition, which creates a high transition cost with minimal benefits and thus has dragged on for nearly 20 years. It compounds the original architecture errors, rather than fixing them. For instance, the security attack surface grows e[...]
2017-02-14T18:03:00-08:00
Few parts of the Domain Name System are filled with such levels of mythology as its root server system. Here I'd like to try and explain what it is all about and ask the question whether the system we have is still adequate, or if it's time to think about some further changes. The namespace of the DNS is a hierarchically structured label space. Each label can have an arbitrary number of immediate descendant labels, and only one immediate parent label. Domain names are expressed as an ordered sequence of labels in left-to-right order, starting at the terminal label and then each successive parent label until the root label is reached. When expressing a domain name, the ASCII period character denotes a label delimiter. Fully qualified domain names (FQDNs) are names that express a label sequence from the terminal label through to the apex (or "root") label. In FQDNs, this root is expressed as a trailing period character at the end of the label sequence. But there is a little more to it than that, and that's where the hierarchical structure comes in. The sequence of labels, as read from right to left, describes a series of name delegations in the DNS. If we take an example DNS name, such as www.example.com., then com is the label of a delegated zone in the root. (Here we'll call a zone the collection of all defined labels at a particular delegation point in the name hierarchy.) example is the label of a delegated zone in the com. zone. And www is a terminal label in the example.com. zone. Now before you think that's all there is to the DNS — that's just not the case! There are many more subtleties and possibilities for variation, but as we want to look specifically at the root zone, we're going to conveniently ignore all these other matters here. If you are interested, RFC1034 from November 1987 is still a good description of the way the DNS was intended to operate, and the recently published RFC7719 provides a good compendium of DNS jargon.
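The label structure described above is easy to see in code. A small sketch that splits an FQDN into labels and lists the chain of delegated zones from the root down:

```python
def labels(fqdn):
    """Split a fully qualified domain name into its labels, root omitted."""
    # strip the trailing root delimiter, then split on the label delimiter
    return fqdn.rstrip(".").split(".")

def delegation_chain(fqdn):
    """List the zones walked, right to left, from the root toward the name."""
    parts = labels(fqdn)
    # e.g. "com.", then "example.com.", then "www.example.com."
    return [".".join(parts[i:]) + "." for i in range(len(parts) - 1, -1, -1)]

print(labels("www.example.com."))           # ['www', 'example', 'com']
print(delegation_chain("www.example.com."))
```

Each entry in the chain corresponds to one delegation point: com is delegated from the root zone, example from com., and www is a terminal label within example.com.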
The most common operation performed on DNS names is to resolve the name, which is an operation to translate a DNS name to a different form that is related to the name. The most common form of name resolution is to translate a name to an associated IP address, although many other forms of resolution are also possible. The resolution function is performed by agents termed resolvers, and they function by passing queries to, and receiving results from, so-called name servers. In its simplest form, a name server can answer queries about a particular zone. The name itself defines a search algorithm that mirrors the same right-to-left delegation hierarchy. Continuing with our simple example, to resolve the name www.example.com., we may not know the IP addresses of the authoritative name servers for example.com., or even com. for that matter. To resolve this name, a resolver would start by asking one of the root zone name servers to tell it the resolution outcome of the name www.example.com. The root name server will be unable to answer this query, but it will refer the resolver to the com. zone, and the root server will list the servers for this delegated zone, as this delegation information is part of the DNS root zone file for all delegated zones. The resolver will repeat this query to one of the servers for the com. zone, and the response is likely to be the list of servers for example.com. Assuming www is a terminal label in the example.com. zone, the third query, this time to a server for the example.com. zone, will provide the response we are seeking. In theory, as per our example, every resolution function starts with a query to one of the servers for the root zone. But how doe[...]
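The referral walk just described can be modeled with a toy delegation table. Everything below — the zone contents and the address — is invented purely for illustration:

```python
# Toy model of iterative resolution; data is made up for illustration.
ZONES = {
    ".":            {"delegations": {"com."}, "addresses": {}},
    "com.":         {"delegations": {"example.com."}, "addresses": {}},
    "example.com.": {"delegations": set(),
                     "addresses": {"www.example.com.": "192.0.2.1"}},
}

def resolve(name):
    """Follow referrals from the root until some zone answers for `name`."""
    zone = "."
    trace = []
    while True:
        trace.append(zone)
        data = ZONES[zone]
        if name in data["addresses"]:
            return data["addresses"][name], trace
        # follow the referral: the delegated child zone the name falls under
        # (raises StopIteration if no delegation matches — fine for a sketch)
        zone = next(z for z in data["delegations"] if name.endswith(z))

print(resolve("www.example.com."))
```

The trace mirrors the three queries in the prose: one to a root server, one to a com. server, and one to an example.com. server.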
2017-02-14T11:46:00-08:00
Could local people build local fiber backbones? Necessity has led Cubans to become do-it-yourself (DIY) inventors — keeping old cars running, building strange, motorized bicycles, etc. They've also created DIY information technology like software, El Paquete Semanal, street nets and WiFi hotspot workarounds. Last June the International Telecommunication Union (ITU) adopted a standard for "low-cost sustainable telecommunications infrastructure for rural communications in developing countries," L.1700. L.1700 cable should be of interest to both DIY technologists and ETECSA. L.1700 is a technology-neutral "framework standard" for optical cable, so there are multiple commercial offerings. The cables are strong enough to be installed without being threaded through a protective duct, and light and flexible enough to be installed by supervised volunteers or unskilled workers. The cables can be buried in shallow trenches, strung above ground or submerged. (For an example installation in Bhutan and more on L.1700, click here). Large cities like Havana have expensive fiber rings installed in tunnels and ducts under the streets. (Google has installed that sort of infrastructure in two African capitals). Could a small town construct its own fiber ring using L.1700-compliant cables and electronics — creating a network like the one in the following ITU illustration? What role might ETECSA play in such a network? At a minimum, they could provide backhaul to their backbone, treating the local government as a customer, but I would hope they would take a more active role — training local people, designing the local network, making bulk purchases of cable and electronic and optical equipment, etc. (Current street net organizers could also play an important role in this process).
This relatively active role is reminiscent of a suggestion I made some time back for installing local area networks in Cuban schools, or another for providing geostationary satellite connectivity as an interim step on the path to modern technology. I've been offering suggestions like this to ETECSA since I began this blog. My suggestions might be financially, technically, politically or bureaucratically unfeasible, but I hope someone within ETECSA or the government is at least studying alternatives that go beyond today's slow, expensive and unreliable WiFi hotspots and the timid, obsolete home-connectivity plan foreshadowed by the recently-completed Havana trial.
Written by Larry Press, Professor of Information Systems at California State University
2017-02-14T10:59:00-08:00
In the first post on DDoS, I considered some mechanisms to disperse an attack across multiple edges (I actually plan to return to this topic with further thoughts in a future post). The second post considered some of the ways you can scrub DDoS traffic. This post is going to complete the basic lineup of reacting to DDoS attacks by considering how to block an attack before it hits your network — upstream. The key technology in play here is flowspec, a mechanism that can be used to carry packet-level filter rules in BGP. The general idea is this — you send a set of specially formatted flowspec routes to your provider, who then automagically uses them to create filters at the inbound side of your link to the 'net. There are two parts to the flowspec encoding, as outlined in RFC5575bis: the match rule and the action rule. There is a wide range of conditions you can match on. The source and destination addresses are pretty straightforward. For the IP protocol and port numbers, the operator sub-TLVs allow you to specify a set of conditions to match on, and whether to AND the conditions (all conditions must match) or OR the conditions (any condition in the list may match). Ranges of ports, greater than, less than, greater than or equal to, less than or equal to, and equal to are all supported. Fragments, TCP header flags, and a number of other header fields can be matched on as well. Once the traffic is matched, what do you do with it? There are a number of actions, including:
Controlling the traffic rate in either bytes per second or packets per second
Redirecting the traffic to a VRF
Marking the traffic with a particular DSCP bit
Filtering the traffic
If you think this must be complicated to encode, you are right. That's why most implementations allow you to set pretty simple rules, and handle all the encoding bits for you.
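To make the match/action split concrete, here is a hedged sketch of one flowspec rule expressed as plain data and rendered in an ExaBGP-flow-like textual form. The real on-the-wire encoding is a binary BGP NLRI per RFC 5575; the field names and rendering below are approximate, not any vendor's exact syntax:

```python
# One flowspec rule as plain data: a set of match conditions plus an
# action. Field names are illustrative, not a wire format.
rule = {
    "match": {
        "destination": "192.0.2.0/24",
        "protocol": [("==", 17)],           # UDP
        "destination-port": [("==", 123)],  # e.g. NTP reflection traffic
    },
    "action": {"rate-limit-bytes-per-sec": 0},  # rate 0 == drop
}

def render(rule):
    """Render the rule in an approximate ExaBGP-flow-like text form."""
    lines = ["flow route {", "  match {"]
    for field, val in rule["match"].items():
        if isinstance(val, list):  # operator sub-TLV style conditions
            val = " ".join(f"{op}{v}" for op, v in val)
        lines.append(f"    {field} {val};")
    lines += ["  }", "  then {"]
    for action, v in rule["action"].items():
        lines.append(f"    {action} {v};")
    lines += ["  }", "}"]
    return "\n".join(lines)

print(render(rule))
```

This matches the two-part structure in the text: everything under `match` becomes match conditions (with AND/OR operators on protocol and port), and everything under `then` becomes the action — here a rate limit of zero, i.e. a drop.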
Given flowspec encoding, you should just be able to detect the attack, set some simple rules in BGP, send the right "stuff" to your provider, and watch the DDoS go away. ...right… If you have been in network engineering for longer than "I started yesterday," you should know by now that nothing is ever that simple. If you don't see a tradeoff, you haven't looked hard enough. First, from a provider's perspective, flowspec is an entirely new attack surface. You cannot let your customer just send you whatever flowspec rules they like. For instance, what if your customer sends you a flowspec rule that blocks traffic to one of your DNS servers? Or, perhaps, to one of their competitors? Or even to their own BGP session? Most providers, to prevent these types of problems, will only apply flowspec-initiated rules to the port that connects to your network directly. This protects the link between your network and the provider, but there is little way to prevent abuse if the provider allows these flowspec rules to be implemented deeper in their network. Second, filtering costs money. This might not be obvious at a single link scale, but when you start considering how to filter multiple gigabits of traffic based on deep packet inspection sorts of rules — particularly given the ability to combine a number of rules in a single flowspec filter rule — filtering requires a lot of resources during the actual packet switching process. There is a limited number of such resources on any given packet processing engine (ASIC), and a lot of customers who are likely going to want to filter. Since filtering costs the provider money, they are most like[...]
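One common provider-side safeguard against the abuse scenarios above is to honor only rules whose destination falls within the customer's own address space. That check can be sketched with the standard ipaddress module (the prefix list is illustrative):

```python
import ipaddress

# Prefixes this customer actually originates (illustrative example data).
CUSTOMER_PREFIXES = [ipaddress.ip_network("203.0.113.0/24")]

def acceptable(destination):
    """True if a rule's destination sits inside the customer's own prefixes."""
    dest = ipaddress.ip_network(destination)
    return any(dest.subnet_of(p) for p in CUSTOMER_PREFIXES)

print(acceptable("203.0.113.0/25"))   # within the customer's space
print(acceptable("198.51.100.0/24"))  # someone else's prefix, e.g. a DNS server
```

A rule aimed at the provider's DNS servers, a competitor, or any prefix the customer does not own would fail this check and be rejected before it is ever installed.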
"Internet overseer ICANN will push ahead with a new ".africa" top-level domain, despite having twice been ordered not to because of serious questions over how it handled the case." Kieren McCarthy reporting in The Register: "Earlier this month, a Los Angeles court refused a preliminary injunction against ICANN that would prevent it from adding .africa to the internet and from allowing South Africa-based ZA Central Registry (ZACR) to run it. The decision was just the latest in a lengthy battle between DotConnectAfrica (DCA), which also applied for the name, and ICANN, which decided to disqualify DCA back in 2013 on grounds that were later shown to be highly questionable."