
Latest posts on CircleID

Updated: 2018-03-15T10:41:00-08:00


Using Your Domain Name in China


At Gandi, we offer over 750 TLDs right now (probably the broadest coverage in the industry), so we often see policy changes first hand. One of these changes was the introduction of new laws in China and regulations by the MIIT (China's Ministry of Industry and Information Technology) in late 2017. We updated our customers on these new regulations to hopefully avoid some confusion, and I'd like to share them here as well.

Gandi has been accredited by CNNIC since 2015, allowing us to sell .CN domains (the country-code top-level domain for China) to our customers worldwide. At CNNIC, we are listed as an overseas registrar. As you probably know, registering a .CN domain name requires that it carry the registrant's real name and pass an additional verification process by CNNIC (using the owner's passport or ID). This process hasn't changed. CNNIC has been under the management of the MIIT for a few years now, but that doesn't necessarily mean Gandi is now a so-called "MIIT-accredited registrar" (yet), even if we can support you perfectly in Chinese thanks to our team in Taipei.

If you are not hosting data within China (meaning you are using a Gandi VPS or our PaaS outside of China), there are no additional steps required to register your .CN domain name. However, if you intend to host your data with a cloud hosting provider within China (and sell to customers in China), they may ask you to transfer your domain name to an MIIT-accredited registrar. Only companies in China can currently become MIIT-accredited registrars. Since Gandi doesn't have an office in China, we are not MIIT accredited. We are, however, checking with CNNIC about whether this policy can be changed. Please note that the domain owner must also be a legal entity in China (or a China resident) if you are hosting data in China. The domain name must then undergo something called ICP verification (an ICP Filing or an ICP License, depending on the use).
Although MIIT removed the clause specifying that the registrar must be in China from the draft of the new China cyberspace regulations, the local MIIT offices (which carry out ICP verification) may still insist that the registrar of your domain name is in China. This is odd, as many registrars in China do not offer every top-level domain, let alone all the additional services that Gandi provides, such as DNSSEC, global DNS infrastructure, and services like Gandi Mail and Web Forwarding. In addition, not every top-level domain offers real-name verification. The process of obtaining your ICP License or completing your ICP Filing can therefore be rather complicated as of now. Mandatory real-name verification also applies to .COM and other domain names (which until then technically weren't legal to use within China, although many did use a .COM domain name). CNNIC made a deal with Verisign (.COM and .NET) to allow it for them.

Given the information above, we currently recommend the following:

- If you intend to sell to customers in China on a website with a Chinese IP address, make sure the owner contacts are those of your partner in China.
- If you still encounter issues during an ICP Filing or ICP License request, you will have to transfer your domain to a domestic registrar. Laws in China change often, and hopefully these restrictions are lifted someday soon.
- Alternatively, do not use a hosting provider in China, but understand the risk that your website may be blocked if it contains sensitive or harmful materials (n.b. the definition can be very broad).
- Make use of multiple consultants to register your ICP License or conduct your ICP Filing. Rules vary by province!

Have you encountered issues using your domain name in China? We'd like to hear from you at feedback[at] You may also reach out to me on LinkedIn or via CircleID.

Written by Thomas Kuiper, General Manager, Gandi Asia Co. Ltd (

Follow CircleID on Twitter

More under: Domain Names, Policy & Regulation [...]

O3b Satellite Internet - Today and Tomorrow


I have written a lot about the potential of low-Earth orbit (LEO) satellites for Internet service, but have not said much about medium-Earth orbit (MEO) satellites until now. O3b ("other three billion") is an MEO-satellite Internet service provider. Greg Wyler founded the company, and it was subsequently acquired by SES, a major geostationary-orbit (GSO) satellite company. (Wyler moved on to found future LEO Internet service provider OneWeb.) O3b's MEO satellites orbit at an altitude of around 8,000 kilometers, complementing the SES GSO constellation, which orbits at around 36,000 km.

Because of their altitude, the GSO satellites have large footprints and are good for video broadcast and other asynchronous applications, but their latency can cause noticeable delays in applications like telephony or browsing modern, complex Web sites, which may have many elements (images, text, video, programs, etc.), each adding transmission and error-checking overhead. The International Telecommunication Union says that if one-way latency is less than 150 milliseconds, users of most applications, both speech and non-speech, will experience essentially transparent interactivity. I've never used an O3b link, but they promise a round-trip latency of under 150 milliseconds, so I assume they work well for voice calls, where GSO satellites would introduce a perceptible delay. However, an MEO link might be noticeably slower than a LEO link while browsing modern Web sites.

O3b launched four satellites last week, and they plan to launch four more early next year. That will bring their constellation to 20 satellites and enable them to continue expanding their current business of serving relatively large customers like mobile phone companies, government organizations, and cruise lines. For example, they serve Digicel, which has over 40,000 LTE accounts in Papua New Guinea.
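The latency comparison above is easy to verify with a back-of-envelope calculation. The sketch below assumes an idealized vertical path at vacuum light speed, so real slant paths and processing delays would add to these figures:

```python
# Idealized one-way latency (ground -> satellite -> ground), assuming a
# vertical path and vacuum light speed; real slant paths add to this.
C_KM_PER_S = 299_792.458  # speed of light

def one_way_latency_ms(altitude_km: float) -> float:
    return 2 * altitude_km / C_KM_PER_S * 1_000

meo = one_way_latency_ms(8_000)    # O3b MEO at ~8,000 km
gso = one_way_latency_ms(36_000)   # GSO at ~36,000 km
print(f"MEO ~{meo:.0f} ms, GSO ~{gso:.0f} ms")  # MEO ~53 ms, GSO ~240 ms
```

Even this idealized floor puts GSO well above the ITU's 150 ms threshold for transparent interactivity, while MEO sits comfortably below it.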
There is a growing market for O3b's current service, but their next-generation satellite-communication system, called mPOWER, will be much improved and will compete in some applications with terrestrial fiber and, in the future, with LEO constellations. The initial mPOWER constellation of seven satellites was designed by, and will be manufactured by, Boeing. While today's O3b satellites have ten steerable edge-terminal beams, the mPOWER satellites will each have over 4,000 steerable beams that can be switched under program control, giving the constellation over 30,000 dynamically reconfigurable beams and over 10 Tbps of capacity. The highly-focused beams will address single terminals, not wasting power on locations with no users. The constellation will be launched in 2021.

Seven satellites, each with over 4,000 steerable, fully-shapeable beams (image caption)

O3b has also contracted with three customer-edge terminal manufacturers: ALCAN, Isotropic Systems, and Viasat. I don't know the prices or capabilities of these terminals, but it is probably safe to say they will be electronically steerable, utilize different antenna technologies, and have different prices and characteristics for different applications. I am guessing that ALCAN is working on a low-cost, flat-antenna terminal using their liquid crystal technology.

In the press conference announcing mPOWER, SES Networks' CEO emphasized that these were smart terminals: computers that happened to have antennas. They will be continuously connected and monitored and able to download software updates and new applications. They will be part of an integrated system comprising edge terminals, terrestrial POPs and gateways, SES GSO satellites, existing O3b and future mPOWER satellites, and terrestrial fiber and wireless networks. The network will be dynamically configured under program control as a function of applications and cost.
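A quick sanity check of the quoted mPOWER figures, treating the announced "over 4,000" beams per satellite and "over 10 Tbps" as floors (so the results are floors too):

```python
# Back-of-envelope check of the announced mPOWER constellation figures.
satellites = 7
beams_per_satellite = 4_000   # announced as "over 4,000"

total_beams = satellites * beams_per_satellite   # 28,000 as a floor;
# the quoted "over 30,000" implies each satellite carries well over 4,000.

capacity_tbps = 10                               # announced as "over 10 Tbps"
per_beam_gbps = capacity_tbps * 1_000 / 30_000   # ~0.33 Gbps per beam
print(total_beams, round(per_beam_gbps, 2))
```

At the quoted floors, each dynamically steerable beam averages roughly a third of a gigabit per second, which is what lets a single beam serve one terminal without wasting capacity.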
Applications for flat-panel edge terminals (source)

The first seven mPOWER satellites will be in equatorial orbit and cover nearly 400 million square kilometers between +50 and -50 degrees latitude. Once mPOWER is up and running, SES plans to retire and not repl[...]

Accreditation & Access Model For Non-Public Whois Data


In the current debate over the balance between privacy and Internet safety and security, one of the unanswered questions is: "How will those responsible for protecting the public interest gain access to the non-public data in the WHOIS databases post General Data Protection Regulation (GDPR)?"

In an attempt to prevent WHOIS data from going "dark," several community members have been working for the past several weeks to create a model that could be used to accredit users and enable access to non-public WHOIS data. The submitted model seeks to help the ICANN community provide continuity of legal and legitimate access to non-public WHOIS data once ICANN's proposed interim model ("Calzone") is implemented. It's intended as a first step toward developing an implementation model before the May 25th deadline, one that would have minimal impact on the contracted parties who control their WHOIS databases while respecting the GDPR and protecting users.

The ICANN multi-stakeholder community welcomes comments on this model.

Written by Statton Hammock, VP for Global Policy & Industry Development at MarkMonitor

Follow CircleID on Twitter

More under: Domain Management, DNS, Domain Names, ICANN, Internet Governance, Policy & Regulation, Privacy, Whois

Takeaways from DNS-OARC's 28th Workshop


March has seen the first of the DNS Operations, Analysis, and Research Center (OARC) workshops for the year, where two days of too much DNS is just not enough! These workshops are concentrated within two days of presentations and discussions that focus exclusively on the current state of the DNS. Here are my impressions of the meeting.

DNS as a Load Balancer

When you have a set of replicated content servers spread across the Internet, how do you share the load so that each user is directed to a server that can offer the user the best service? You could put the services behind the same IP address and leave it to the routing system to select the closest (in routing terms) server, but if the closest server is under intense load, routing alone does not correct the problem. Facebook uses a slightly different approach, where each service point is uniquely addressed. They use a rather old technique, first widely promoted by Netscape many years ago, of using the DNS to steer the user to a close service point that has available capacity. To a first level of approximation this apparently works acceptably well, but it is a coarse tool. Caching of DNS responses means that it's not easy to steer this service traffic with any high degree of precision and timeliness, and the widespread use of non-local open DNS resolvers means that the user may not be located 'close' to their DNS resolver, so the location information associated with a resolver's IP address may be misleading in such cases. EDNS(0) client subnet signaling could assist here, but only at the expense of a certain level of privacy leakage.

Don't Give 'em Rope!

Robert Edmonds of Fastly noted that consumer home gateways distributed by the larger home ISPs do not provide a configuration option to set the IP address(es) of a chosen DNS resolver.
The implication is that the home gateway is configured with the ISP's own DNS resolver and, by default, will pass the consumer's DNS queries to it. It's easy to see why an ISP would find this attractive. It reduces the number of variable factors in the user environment, and thus reduces the number of variables that may cause the user to ask the ISP's helpline for assistance (a service that is generally quite expensive for the ISP to provide). Giving the user too much rope allows the user to get into all sorts of trouble, and that often involves costly remediation efforts. By reducing the number of variables in the user's environment, the possibility of users configuring themselves into a broken situation is reduced, so the principle of keeping the rope as short as possible is often applied in these devices. Of course, there is also the prospect that the DNS query feed provides a useful insight into the online activities of the consumer base, and in a world of declining margins for access ISPs, the temptation to monetize this particular data flow may have already been overwhelming.

Whether the gateway acts as a DNS interceptor and prevents local devices from passing DNS packets through the gateway to other DNS resolvers also appears to be a mixed picture. Some evidently work as interceptors, while others will pass the DNS queries through. The picture gets a little more involved with gateways that also provide a NAT function. Trailing fragments of a large UDP DNS response have no port field, so the gateway either has to perform full packet reassembly to get the IP address translation correct for the trailing fragments, or it simply drops the trailing fragments and leaves it to the home device to work around the damage! The more robust approach from the perspective of the CPE device is to place a small DNS engine in the CPE itself and forward all local queries to an ISP-configured resolver.
That way the CPE itself accepts the external DNS responses and passes them into the local network without the need to perform firewall magic on fragmented DNS respons[...]
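The fragmentation problem described above is easy to make concrete. The toy calculation below (a sketch, not a packet parser) lays out how a large UDP DNS response fragments over IPv4 and why only the first fragment carries the port numbers a NAT needs:

```python
def fragment_layout(udp_payload_len: int, mtu: int = 1500,
                    ip_hdr: int = 20, udp_hdr: int = 8):
    """Return (offset, length, has_udp_header) for each IPv4 fragment
    of a UDP datagram.  Non-final fragment payloads must be multiples
    of 8 bytes, per the IPv4 fragment-offset encoding."""
    datagram = udp_hdr + udp_payload_len      # the full UDP datagram
    max_frag = (mtu - ip_hdr) // 8 * 8        # per-fragment payload, 8-byte aligned
    frags, offset = [], 0
    while offset < datagram:
        length = min(max_frag, datagram - offset)
        frags.append((offset, length, offset == 0))  # only offset 0 has ports
        offset += length
    return frags

# A 3,000-byte DNS answer over a 1500-byte MTU splits into three fragments;
# a NAT can only see the UDP port numbers in the first one.
for off, ln, has_ports in fragment_layout(3000):
    print(off, ln, has_ports)
```

Without reassembly, the NAT has nothing but IP addresses to go on for the second and third fragments, which is exactly why many gateways just drop them.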

Let's Talk About "Internet Responsibility"


We need to talk about Internet responsibility, and we need to talk about it now. By "Internet responsibility," I am not referring to some abstract, subjective connotation of it, but rather to an attempt to identify objective criteria that could be used as a benchmark for determining responsibility. For the past twenty-something years we have all been using the Internet in different ways and for different reasons; but have we ever contemplated what our levels of responsibility are? Have we ever taken the time to consider how certain behaviors by certain agents affect the Internet? But, more importantly, do we even know what it is that we are responsible for?

Responsibility and technology seem to have a special relationship, mainly due to technology's pervasive role in societies. But to understand what is so special about this relationship, we should first ask ourselves what it means to be responsible. In everyday life situations, there is typically an individual confronted with an ethical choice. Although the choice at hand may be morally complex, we tend to think of the actions, what ethical concerns they may raise, and what the direct consequences of that action are. And in most cases, there is a degree of certainty about what these consequences may be. In this context, the ethical literature operates under three assumptions: (1) it is individuals who perform an act; (2) there is a direct causal link between their actions and the consequences that follow; and (3) these consequences are most of the time a certainty.

None of these assumptions, however, apply when trying to rationalize the nexus between responsibility and technology. First, technology is a product of a highly collaborative process, which involves multiple agents. Second, developing technology is also a complex process, driven by the actions of technologists and the eventual impact of those actions.
And, third, it is very difficult to predict in advance the consequences or the social impact of technology. But there is yet another reason why responsibility plays such an important role in technology: technology informs the way we, as human beings, go about living our lives. An example of this informative role is the correlation between the Internet and fake news. Who is, for instance, responsible if the news spread on the Internet is predominantly fake? Does it make sense, in such situations, to attribute responsibility to technology, or is it possible to understand these complex situations so that, in the end, all responsibility can be attributed to human action? More interestingly, what, if any, is the responsibility of the actors running the platforms where fake news gets disseminated? It may be argued that these platforms produce the algorithms that sustain and support an environment of fake news. And, ultimately, what is the impact of such actions on the Internet itself?

These are hard questions that invite some even harder answers, and I am not alone in thinking about them. For the past few months, the press has been replete with articles on the impact the Internet has on societies, and there is compelling data pointing to the fact that, in 2018, our online interactions have changed dramatically compared to just two years ago. Most of these stories, however, fall short on two counts. First, some of them at least take the mistaken, yet predictable, route of placing the blame on the Internet itself, on its decentralized architecture and design. But, as many of us have asserted, these stories lack an understanding of what decentralization means for the Internet. Second, these stories use the current division between pro- and anti-technology positions to sensationalize their points of view. But, as Cory Doctorow argues: "This is a false binary: you don't have to be 'pro-tech' or 'anti-tech.'"
Indeed, it's hard to imagine how someone could realistically be said to be "anti[...]

Tracking the Line that Separates Cybersquatting from Trademark Infringement


The Uniform Domain Name Dispute Resolution Policy (UDRP) is a rights protection mechanism crafted by the World Intellectual Property Organization (WIPO) and adopted by the Internet Corporation for Assigned Names and Numbers (ICANN) for trademark owners to challenge the lawfulness of domain name registrations. Cybersquatting or abusive registration is a lesser-included tort of trademark infringement, and although the UDRP forum is not a trademark court as such, in some ways it acts like one, since it empowers panels (assuming the right alignment of facts) to divest registrants of domain names that infringe a complainant's trademark rights.

The argument that any use of a domain name "inevitably entail[s] an infringement of the world-renowned [name of any] brand in the industry" is unavailing, because regardless of future use (by a successor holder, for example), if the original registration is lawful, the complaint must be dismissed. Equipo IVI SL v. Domain Admin, WebMD, LLC, D2017-2240 (WIPO January 31, 2018). The complaint must also be dismissed if the substance of the claim is trademark infringement. Force Therapeutics, LLC v. Patricia Franklin, University of Massachusetts Medical School, D2017-2070 (WIPO December 12, 2017): "[T]he Policy is directed to determining abusive domain name registration and use. This involves a more limited assessment than trademark infringement." The term "infringed" in the domain name context refers to unlawful registration in breach of the warranties agreed to in the registration agreement and, by incorporation, Paragraph 2 of the UDRP. The evidentiary demands for proving cybersquatting under the UDRP are different from, and less demanding than, those for proving trademark infringement, but they are nevertheless demanding in their own way and, if not properly understood, will sink the party with the burden of proof or production, as they did in Equipo IVI SL.
If one has to look for an analogy for the UDRP, it is to the commercial rules promulgated by arbitration providers, with this difference: the UDRP has its own special-purpose law, expressly defined by the terms of the Policy and Rules, as construed by neutral panelists. I underscore this because, while these neutrals are limited in their assessment of the facts to determining whether 1) domain names are identical or similar to trademarks, 2) registrants lack or have rights or legitimate interests, and/or 3) the domain names were registered in bad faith, they are not robotic. They apply this special-purpose jurisprudence (consisting of a cabinet of principles) in a fair and balanced manner, so that although the UDRP was crafted for trademark owners, it operates as a neutral forum.

But precisely where to draw the line separating cybersquatting from trademark infringement is not always so certain, because both are present in that area of the continuum that defines the outer limit of one and the beginning of the other. Where the facts support either or both cybersquatting and trademark infringement, what is within and outside jurisdiction is in the eyes of the beholder. Some panelists will accept what others decline. There are several considerations that go into accepting jurisdiction, one of them being the residence of the parties in different jurisdictions. If Panels are convinced there is compelling proof of abusive registration (or convince themselves that there is!), they push the jurisprudential envelope to assure that "justice" is done. Notable for accepting jurisdiction where the parties reside in different jurisdictions, and there is also potential (or alleged) trademark infringement, are Boxador, Inc. v. Ruben Botn-Joergensen, D2017-2593 (WIPO February 27, 2018) (U.S. Complainant, Norwegian Respondent), discussed further below, in which the Panel awarded the domain names to Complainant, and Autobuses de Oriente ADO, S.A. de C.V. v. [...]

Berners-Lee Warns Rise of Dominant Platforms has Resulted in Weaponization of the Web at Scale


The World Wide Web turned 29 today, and Sir Tim Berners-Lee, the web's inventor, has shared some stern warnings about the direction it is headed. In a post published on the Web Foundation website commemorating the anniversary, Berners-Lee warns "the web is under threat" and shares his views on what is needed to ensure global access to a web "worth having." He writes: "The web that many connected to years ago is not what new users will find today. What was once a rich selection of blogs and websites has been compressed under the powerful weight of a few dominant platforms. ... the fact that power is concentrated among so few companies has made it possible to weaponise the web at scale. In recent years, we've seen conspiracy theories trend on social media platforms, fake Twitter and Facebook accounts stoke social tensions, external actors interfere in elections, and criminals steal troves of personal data. ... A legal or regulatory framework that accounts for social objectives may help ease those tensions."

Follow CircleID on Twitter

More under: Internet Governance, Policy & Regulation, Web

Changes to the Domain Name Marketplace


The new gTLD program and the introduction of 1,200+ new domain name registries have significantly altered the marketplace dynamics. New domain name registries must navigate an environment that is, to an extent, stacked against them. For example, most registrars discourage direct communication between their customers and the registry operators. However, many new registries address specific markets and must maintain contact with future and existing registrants to accomplish their objectives. In addition, there is now a thriving and rapidly evolving Registry Service Provider (RSP) market that Registry Operators must navigate to procure economical and reliable service.

There is also regulation. Registry Operators are required to have each innovation pre-approved by ICANN in a process that is costly, reveals trade secrets, and stifles innovation. These new entities must also absorb a $2 tax on every domain (through the Continuing Operations Instrument) before the name is sold. Others in the ecosystem (registrars, RSPs, and registrants) face no such restrictions. For example, while registry operators must make their inventory available to all registrars, there is no reciprocal requirement for registrars to carry that inventory.

The challenges confronting these new TLDs are far different from those that faced the original seven gTLDs, and they threaten the capacity of many TLDs to thrive. This calls for an assessment of, and adjustment to, the regulatory environment so that the marketplace can continue to provide increased competition and choice for the internet-using public.
This paper recommends some improvements and a general de-regulation of the marketplace to encourage innovation and promote its overall health:

- making certain aspects of the "level-playing field" requirement reciprocal between registry operators and registrars,
- elimination of restrictions on all types of vertical integration,
- automatic accreditation of registry operators as registrars,
- abolition of all or most of the RSEP process, and
- creation of a flexible ICANN fee structure.

ICANN, or a combination of Registry Operators, should fund a brief but thorough study of the current marketplace, given how much it has changed from the original marketplace for which the current regulations were developed.

Background

In the beginning, there was Network Solutions. NetSol operated the only open gTLDs (.com, .net and .org) and the only registrar. When NewCo (later ICANN) was created, it was charged with creating competition and choice for consumers. Even in those early days, it was recognized that creating competition in the registrar space was relatively easy, while the creation of more domain name registries would have to wait for the creation of ICANN. The registry-registrar marketplace was created with some care in order to facilitate nascent registrars' engagement with the marketplace. Registry operators were tightly regulated (with a ruleset collectively called the "level-playing field"). Registry operators were banned from selling directly to registrants; vertical integration was banned. Registry wholesale prices were fixed by contract. Registries had to make their names available to all registrars equally. These restrictions did not really matter because there was only one domain name registry. Soon there were a few more, as .org was transferred to ISOC; then .biz, .info, and a few others were delegated. They all had price controls and incorporated the same restrictions to support the registrar marketplace. Some innovation was tried.
The .museum registry inserted a wildcard into their zone in order to help users find their museum members. That was disallowed. The .museum registry also tried to reach out and sell directly to museums because registrars had little knowledge of museum culture, products, or services. That was also chall[...]

IETF and Crypto Zealots


I've been prompted to write this brief opinion piece in response to a recent article posted on CircleID by Tony Rutkowski, in which he characterises the IETF as a collection of "crypto zealots." He offers the view that the IETF is behaving irresponsibly in attempting to place as much of the Internet's protocols behind session-level encryption as it possibly can. He argues that ETSI's work on middlebox security protocols is a more responsible approach, and that the enthusiastic application of TLS in IETF protocol standards will only provide impetus for regulators to coerce network operators into actively blocking TLS sessions in their networks. Has the IETF got it wrong? Is there a core of crypto zealots in the IETF pushing an extreme agenda about encryption?

It appears in retrospect that we were all somewhat naive some decades ago when we designed and used protocols that passed their information in the clear. But perhaps that's a somewhat unfair characterisation. For many years the Internet was not seen as the new global communications protocol. It was a far less auspicious experiment in packet-switched network design. Its escape from the laboratory into the environment at large was perhaps due more to the lack of credible alternatives that enjoyed the support of the computer industry than to the simplicity and inherent scalability of its design. Nevertheless, encryption of the payload, or even of the protocols, was not a big thing at the time. Yes, we knew that it was possible in the days of Ethernet common-bus networks to turn on promiscuous mode and listen to all traffic on the wire, but we all thought that only network administrators held the information on how to do that, and if you couldn't trust a net admin, then who could you trust? The shift to WiFi heralded another rude awakening. Now my data, including all my passwords, was being openly broadcast for anyone savvy enough to listen to, and it all began to feel a little more uncomfortable.
But there was the reassurance that the motives of the folk listening in on my traffic were both noble and pure. They twiddled with my TCP control settings on the fly so that I could not be too greedy in using the resources of their precious network. They intercepted my web traffic and served it from a local cache only to make my browsing experience faster. They listened in on my DNS queries and selectively altered the responses only to protect me. Yes, folk were listening in on me, but evidently, that was because they wanted to make my life better, faster, and more efficient. As Hal Varian, the Chief Economist of Google, once said, spam is only the result of incomplete information about the user. If the originator of the annoying message really knew all about you, it would not be spam, but a friendly, timely and very helpful suggestion. Or at least that's what we were told. All this was making the Internet faster, more helpful and, presumably by a very twisted logic, more secure.

However, all this naive trust in the network was to change forever with just two words. Those words were, of course, "Edward Snowden." The material released by Edward Snowden painted the shades of a world based on comprehensive digital surveillance by agencies of the United States Government. It's one thing to try to eavesdrop on the bad people, but it's quite another to take this approach to dizzying new heights and turn eavesdropping into a huge covert exercise that gathers literally everyone into its net. As in George Orwell's 1984, the vision espoused within these agencies seemed to be heading towards capturing not only every person and every deed, but even every thought. It was unsurprising to see the IETF voice a more widespread concern about the erosion of the norms of each individual's sense of personal privacy as a consequence of these disclosures. F[...]
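For readers who want to see what "placing a protocol behind session-level encryption" looks like in practice, here is a minimal client-side sketch using Python's standard ssl module (the hostname and request in the usage comment are illustrative); the default context already insists on certificate and hostname verification:

```python
import socket
import ssl

# The default client context enforces certificate validation and
# hostname checking out of the box.
ctx = ssl.create_default_context()
assert ctx.verify_mode == ssl.CERT_REQUIRED and ctx.check_hostname

def fetch_over_tls(host: str, request: bytes) -> bytes:
    """Open a TCP connection and wrap it in TLS before any payload flows."""
    with socket.create_connection((host, 443)) as raw:
        with ctx.wrap_socket(raw, server_hostname=host) as tls:
            tls.sendall(request)
            return tls.recv(4096)

# e.g. fetch_over_tls("example.com", b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
```

The point the IETF keeps making is precisely that this wrapping costs an application almost nothing, while a passive observer on the wire sees only the handshake metadata, not the payload.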

Why Has ICANN Cut Subsequent TLD Round Preparations From Its Budget?


As we approach another ICANN meeting and another opportunity for our community to come together to discuss, collaborate and work, there is naturally a flurry of activity as stakeholders push for a spot on the agenda for their key areas of interest. And in the midst of current discussions, particularly around important topics like GDPR, it's easy for other vital conversations to be missed. But one topic I hope will be on the agenda in Puerto Rico is ICANN's plans and roadmap for working towards another new TLD application window. I'm confident I'm not alone in this interest; as a passionate member and stakeholder of the domain name community, I have heard and taken part in many discussions recently about what it will take to make new TLD applications available again.

And as ICANN61 draws nearer, I can't help but ask: how did we get to this position of uncertainty? How is it even possible that in 2018, more than six years after the previous application window closed, we still don't have any clarity on how ICANN is preparing for a new window, or even when this preparation will be undertaken? We know that there is already community policy development underway to determine the best way forward for new TLD applications. We are also under no illusions that this will be an easy process. There's a lot of work to be done. But I noted with interest the ICANN FY19 Draft Budget, which now states that "no resources are in the FY19 budget for [the] implementation work" resulting from the New gTLD Subsequent Procedures PDP Working Group. The latest expectation is that this work will be complete by December 2018, with the consensus recommendations to be adopted by the Board by the end of FY19. In addition, ICANN has already recognized in the past that "some amount of preparatory work could be done in parallel to the PDP Working Group's discussions." So if this is the case, why has ICANN removed the resources it needs to complete this work from its FY19 budget?
Is it going to be a decade between application rounds? If so, that seems wholly unacceptable and, frankly, embarrassing for us as an industry. As the adage goes: if you fail to plan, you plan to fail. Ensuring there is adequate resourcing to complete the important work required in this project is vital, both for us as a community and for the countless stakeholders that we represent and to whom we answer.

I look forward to hearing what ICANN and the broader community have to say on this topic in Puerto Rico. As ICANN laid out way back in 2008, "the introduction of new gTLDs ... are central to fostering choice and competition in the provision of domain registration services [and] the promotion of ICANN's core values." Such a vital project can't be allowed to slide off the radar. Let's plan to succeed, and get it back on the agenda.

Written by Tony Kirsch, Head of Professional Services at Neustar

Follow CircleID on Twitter

More under: ICANN, New TLDs

The IPv4 Market: 2017 and Beyond


The IPv4 market has grown significantly in the last four years. It finished particularly strong in 2017, both in terms of the total volume of addresses traded and the overall number of intra- and inter-RIR transactions in the ARIN region. Over the last four years, the steady and sometimes substantial growth in the number of transactions has been mostly attributable to a dramatic increase in small block trades of fewer than 4,000 addresses.

In contrast, the volume of addresses sold during the same period was much more volatile. Between 2014 and 2015, the volume of addresses traded increased seven-fold to nearly 40 million addresses. Between 2015 and 2016, this number dropped by half. Then between 2016 and 2017, trading volumes skyrocketed again, more than doubling in the intra-RIR market. This pattern of activity is directly correlated with the dips and surges in available large block supply. In our 2016 report, we attributed the sharp reduction in trading volumes both to the depletion of large block supply available in the marketplace and to the decision by some large block holders to delay entering the market altogether until large block pricing improved. The 2017 rebound in part reflects the market's response to this large block scarcity.

Unit pricing for large block transactions has continued the steep upward trajectory that began at the end of 2016. Large blocks were trading for as little as $4 per number in 2015. By the end of 2017, they were trading for around $17-$18 per number, surpassing small block unit pricing for the first time. Heavily influenced by conditions in the large block market, unit pricing across the entire market has also climbed. In 2018, buyers should expect most informed sellers to set a floor price of $15 per number. These escalating prices, in combination with increased buyer flexibility, prompted some new large block sellers to enter the market.
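To put these unit prices in perspective, here is a small sketch using Python's standard `ipaddress` module. The dollar figures are the approximate market levels cited above, not quotes, and the block is an arbitrary example.

```python
import ipaddress

def block_value(cidr: str, price_per_number: float) -> tuple[int, float]:
    """Return (address count, estimated sale value) for an IPv4 block."""
    net = ipaddress.ip_network(cidr)
    return net.num_addresses, net.num_addresses * price_per_number

# A /16 holds 65,536 addresses; at the ~$17 per number large-block
# pricing seen at the end of 2017, it would fetch roughly $1.1M.
count, value = block_value("198.18.0.0/16", 17.0)
print(count, value)  # 65536 1114112.0
```

The same /16 at 2015's $4 per number would have sold for about a quarter of that, which is the price appreciation driving new sellers into the market.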
Buyers are increasingly willing to enter into contract structures that afford sellers more time to undertake the renumbering efforts required to free up their supply. In addition, large block buyers are more willing to accept smaller and smaller block sizes, which appeals to sellers that have substantial but fragmented unused address space. In 2017, the /16 (65,536 numbers) continued to be a popular block size for both large and mid-block buyers. For the first time, many blocks smaller than a /16 were transferred as part of larger transactions.

We expect the small block market to continue to thrive over the next several years until IPv6 becomes the dominant Internet protocol. The large block market is another story, however. Although we expect additional large block supply to enter the market later this year into early 2019, mostly in the form of legacy /8 address space, the available large block space is dwindling and could disappear entirely within the next two years. IPv6 migration was not a market factor in 2017. That should continue into 2018 and 2019 given the current pace of IPv6 adoption.

A full analysis of the IPv4 market, with additional data and a look at where it's likely headed in 2018, can be found in Avenue4's 2017 State of the IPv4 Market Report (Download Report).

Written by Janine Goodman, Vice President and Co-founder at Avenue4 LLC

Follow CircleID on Twitter

More under: IP Addressing, IPv6

Microsoft, Facebook and Others Demand ICANN Take a Closer Look at Questionable Registrars


Adobe, Facebook, Microsoft and eBay are among a group of leading companies demanding that ICANN take a closer look at an "immediate and urgent matter" involving a subset of questionable domain name registrars. Kevin Murphy of Domain Incite reports: "The ad hoc coalition, calling itself the Independent Compliance Working Party, wrote to ICANN last week to ask why the organization is not making better use of statistical data to bring compliance actions against the small number of companies that see the most abuse. AlpNames, the Gibraltar-based registrar under common ownership with new gTLD portfolio registry Famous Four Media, is specifically singled out in the group's letter." The Independent Compliance Working Party has suggested the development of a data-driven roadmap for compliance based on key information and statistics.

Follow CircleID on Twitter

More under: Cybercrime, Domain Management, Domain Names, ICANN, Policy & Regulation, Spam, New TLDs

ICANN Proposed Interim GDPR Compliance Model Would Kill Operational Transparency of the Internet


ICANN has consistently said its intention in complying with the European Union's General Data Protection Regulation (GDPR) is to comply while at the same time maintaining access to the WHOIS domain name registration database "to the greatest extent possible." On February 28, ICANN published its proposed model. Strangely, while ICANN acknowledges that some of the critical purposes for WHOIS include consumer protection, investigation of cybercrimes, mitigation of DNS abuse, and intellectual property protection, the model ICANN proposes provides no meaningful pathway to use WHOIS in those ways. Under ICANN's model, use of WHOIS "to the greatest extent possible" really means use will become practically impossible. ICANN is, in effect, proposing to end the Internet's operational transparency.

Today, users can easily access a full set of publicly available WHOIS data for purposes like fighting fraud or enforcing IP rights. The published ICANN model removes most information from public view and turns WHOIS into a tiered or "gated" access system, where the vast majority of key data will only be available to users who are accredited, at some undefined point in the future, to pass through the gate. Although a gated access model could be crafted to provide full access to approved users, ICANN did not appear to incorporate any of the approaches found in the community-proposed models relating to gated access. And although ICANN originally said that it was not set on picking one of its own models and would be open to taking elements from the community models, it appears not to have done so and to have instead charted its own course.

Denial of appropriate WHOIS access will force law enforcement, cybersecurity professionals, and intellectual property rights holders to pursue John Doe lawsuits simply to identify registrants through subpoenas and court orders.
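The practical effect of a tiered or "gated" system can be illustrated with a short sketch. The field names and the redaction rule here are hypothetical illustrations of mine; ICANN's model defines policy, not an API.

```python
# Hypothetical set of fields that would remain publicly visible
# under a tiered-access WHOIS; the rest sit behind the "gate."
PUBLIC_FIELDS = {"domain", "registrar", "creation_date",
                 "expiry_date", "name_servers"}

def whois_view(record: dict, accredited: bool) -> dict:
    """Return the full record for accredited users, a redacted view otherwise."""
    if accredited:
        return dict(record)
    return {k: (v if k in PUBLIC_FIELDS else "REDACTED")
            for k, v in record.items()}

record = {
    "domain": "example.com",
    "registrar": "Example Registrar",
    "creation_date": "2010-01-01",
    "expiry_date": "2020-01-01",
    "name_servers": "ns1.example.com",
    "registrant_name": "J. Doe",
    "registrant_email": "jdoe@example.com",
}
print(whois_view(record, accredited=False)["registrant_email"])  # REDACTED
```

Until an accreditation system exists, every user is effectively in the unaccredited branch, which is the concern raised above: the gate is built before anyone holds a key.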
That will not only greatly amplify overall online enforcement costs for all concerned, but also waste judicial resources and increase overhead costs and liability for contracted parties. Contracted parties who are presented with clear-cut evidence of illegality yet refuse demands to reveal registrants could quickly face contributory liability. Worse yet, denial of bulk WHOIS access would immediately bring to a halt vitally important cybersecurity efforts, upon which all major social media platforms, online marketplaces, and other websites depend to identify and remove malicious content, false narratives which threaten democracy, and the sale of counterfeit or pirated goods, among other harmful activity. Overbroad removal of WHOIS service could also fuel further political or even legal challenges to the IANA transition and embolden possible new legislation aimed at intermediary accountability and liability for contracted parties and ICANN itself.

It is not too late for ICANN to make the necessary changes to its proposed interim model to address these concerns. Work on an accreditation system must start right away, and in the absence of such a system by May 25, some form of self-certification must be recognized as an acceptable interim approach. Ultimately, ICANN's model needs to ensure that accredited users, like fully licensed drivers, get to drive easily and quickly across all WHOIS roads instead of being presented with unexpected roadblocks.

There is also the risk that registries and registrars might ignore ICANN, go their own route on compliance, and decide that the easiest solution is to make WHOIS "go dark." We are already starting to witness this with bulk access, where certain registrars are unilaterally masking data and throttling the service, disrupting critical tools that promote the public intere[...]

Experience 'a Walk in the Shoes of a Registry Operator' at ICANN 61


One of the ever-present questions in the domain name community is "have new TLDs been a success in the marketplace?" As many within the industry will appreciate, it's a difficult question to answer using traditional metrics (such as domain registration volumes), and it is important to remember that the new TLD expansion in 2012 was all about diversity, competition and choice. I think this is exactly what has happened over the last few years.

To explore exactly this topic, I'll be participating alongside a fantastic line-up of TLD representatives who will be sharing their stories at next week's ICANN meeting in Puerto Rico. The session will be moderated by industry legend Kurt Pritz and include speakers such as Dirk Krischenowski of .berlin, GG Levine of .pharmacy, Andrew Merriam of .design, Craig Schwartz of .bank, and Matt Embrescia of .realtor.

So what can you expect from this session? In essence, we're going to give a 'pulse check' on the new TLD program from those who are in amongst it every day. The TLDs represented have a wide range of purposes, business models, target markets, and strategies. We're at different stages of launch, rollout or availability, and we comprise a combination of brands, generics and geographic domains.

One of the difficulties of assessing the success of the new TLD program is that there is no one definition of 'success.' That's why I believe a panel like this is so important: to hear perspectives from different facets of our space and hear in their own words how they're working towards (or achieving) their goals and impacting their audiences and communities. From a Neustar perspective, I'll be sharing some of our experiences in transitioning our online presence to our new identity. It's been a long and complicated road, but we're excited about the early results and are doing all we can to share what we've learned to assist and inspire others.
If you're attending the ICANN meeting in Puerto Rico, I strongly encourage you to come along to the session. The details are below; I look forward to seeing you there.

ICANN 61 – Puerto Rico
Presentation to Cross Community Working Group: 'A walk in the shoes of a Registry operator'
Monday 12th March, 1:30–3:00pm, Ballroom A

Written by Jason Loyer, Product Management Director, Registry Services at Neustar

Follow CircleID on Twitter

More under: Domain Management, ICANN, Registry Services, New TLDs

Domaining Europe Becomes NamesCon Europe


After 10 years as one of the top European domaining conferences, it is our pleasure to announce the transition of Domaining Europe into NamesCon Europe! This is an exciting new chapter for the NamesCon brand as it expands into the European domaining market. The agreement between Domaining Europe and NamesCon was confirmed at the beginning of 2018, and the rebranding of Domaining Europe to NamesCon Europe is in effect for the upcoming June 2018 event in Valencia.

Dietmar Stefitz, who founded Domaining Europe in 2008, will directly manage the 2018 event with the assistance of NamesCon producers, then move into a brand ambassador role and continue to advise the NamesCon team to ensure the spirit and theme of the conference remain intact.

"After 10 years of hard work I am thrilled to find a new home for Domaining Europe. NamesCon is the only entity to carry on this conference in the spirit of all involved, be it attendees, sponsors, or speakers. I want to thank all participants of Domaining Europe in the last years and wish the team of NamesCon Europe all the best for the future." — Dietmar Stefitz, Domaining Europe

The NamesCon team is contributing top-level speakers and content, global attendee marketing efforts, and expanded sponsor outreach. You can join the conference in Valencia this June, where decision-makers from the global domain industry will come together to share new ideas.

"We are very excited to contribute to a successful NamesCon Europe 2018 under the direction of founder Dietmar Stefitz. Europe is an important forum for NamesCon, and as we enter into the 10th year of Domaining Europe, we look forward to honoring Dietmar's legacy and bringing even more value to the event for both attendees and partners." — Soeren von Varchmin, NamesCon

Register now for an unforgettable conference! Sign up before it's too late and enjoy 50% off (limited availability) using the code DESVIP on Eventbrite.
Remember, your conference pass covers all catering, including coffee breaks, lunches, a gala dinner, and a sightseeing tour.

Written by Sara Vivanco, Marketing Manager

Follow CircleID on Twitter

More under: Domain Management, Domain Names, ICANN, New TLDs

FCC Announces Near $1 Billion Plan to Restore Broadband in Puerto Rico and the U.S. Virgin Islands


Federal Communications Commission Chairman Ajit Pai today proposed close to $954 million toward restoring and expanding communications networks in Puerto Rico and the U.S. Virgin Islands that were damaged and destroyed by the 2017 hurricanes. "The people of Puerto Rico and the U.S. Virgin Islands are still recovering from last year's devastating storms. That means the FCC's work is far from over," said Chairman Pai. "... With the 2018 hurricane season less than three months away, we need to take bold and decisive action." The proposed plan includes an immediate infusion of approximately $64 million in additional funding for short-term restoration efforts and $631 million in long-term funding for the restoration and expansion of fixed broadband connectivity.

Follow CircleID on Twitter

More under: Access Providers, Broadband, Wireless

From Net Neutrality to Seizing Opportunity


Network neutrality is an important issue. We mustn't allow transport owners to limit our ability to communicate. But net neutrality in itself positions the Internet as a telecommunications service. We need to step back and recognize that the Internet itself is part of a larger shift wrought by software.

I thought about this more when I found myself in my hospital room (after knee surgery) unable to open and close the shades by myself, yet able to control the lights in my house. It wasn't simply that I had built a one-off special case, but rather that I had carefully architected my home lighting control implementation to minimize inter-dependencies while taking advantage of existing technologies. For example, to the extent I could, I avoided depending on the accidental properties of silos such as Zigbee. I normalized every transport to simple packets. This is how the Internet works: it normalizes the underlying infrastructure to IP, so I don't care whether a particular segment is ATM or cellular. I can use the same technique as in tunneling IP through Bluetooth using the general serial protocols.

Software has given us the ability to stitch things together. In designing (and redesigning) applications we have also gained an understanding of the importance of (dynamic) architectural boundaries that minimize coupling (or entanglement). Thus, with an IP connection, I can insert shims (work-arounds, as long as they preserve the architectural integrity) such as NATs or, indeed, treat the entire telecom system as a simple link. I can normalize this by overlaying my own IP connection on top of what I find, including existing IP connections (as we do with VPNs). This allows us to implement and then evolve systems as we improve our understanding.

In the telecom paradigm, I'd have to rely on the network to assure a path from my phone to a device in my house as a virtual wire. But in the new paradigm, we have relationships that are abstract.
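The idea of normalizing every transport to simple packets can be sketched in a few lines. This is a hypothetical illustration of mine, not anyone's real stack: the `Transport` interface stands in for any link (serial, Bluetooth, an existing IP path), and the application deals only with the abstract relationship between two endpoints.

```python
from abc import ABC, abstractmethod

class Transport(ABC):
    """Any link, reduced to the ability to carry opaque packets."""
    @abstractmethod
    def send(self, packet: bytes) -> None: ...
    @abstractmethod
    def recv(self) -> bytes: ...

class LoopbackTransport(Transport):
    """Stand-in for a real link; queues packets in memory."""
    def __init__(self) -> None:
        self._queue: list[bytes] = []
    def send(self, packet: bytes) -> None:
        self._queue.append(packet)
    def recv(self) -> bytes:
        return self._queue.pop(0)

def echo_over(link: Transport, payload: bytes) -> bytes:
    """The application sees only the relationship, never the wire."""
    link.send(payload)
    return link.recv()

print(echo_over(LoopbackTransport(), b"hello"))  # b'hello'
```

Swapping `LoopbackTransport` for a Bluetooth serial link or a VPN overlay would not change `echo_over` at all, which is the point: the network is just one interchangeable resource beneath the relationship.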
We can represent the relationship "[a, b]", where a is the app element and b is the device (or virtual device). It needn't involve a physical wire. The network connection is not a layer but simply one resource I can use. It does require thinking differently and discovering what is possible rather than having rigid requirements.

Though I avoid depending on a provider's promises, I may be limited by policies that second-guess what I'm doing. This is why neutrality is an important principle. That includes not doing me favors by second-guessing my needs and thus working at cross purposes as I innovate outside the provider's design point. A better term is "indifference," because the intermediaries don't know my intent and thus can't play favorites. More important, paywalls and security barriers may make it impossible for my application to work. I had to manually intervene to use the hospital's WiFi connection. I can do that in simple cases, so we assume the status quo is fine, but it is a fatal barrier for "just works" connectivity.

As with any new paradigm, it is difficult to explain because our very language embeds implicit assumptions. It also means that those most expert in network architecture can often get lost in their expertise. In the case of the Internet, I see this in the idea of endpoint identifiers being IP addresses assigned by network operators. As one who takes advantage of the opportunities I find lying around, I view networks as just a means and try to program around limits. If the opportunities aren't available, I can create my own. One example is IPv6. V6 would make it easier to make a direct connection between two endpoints but in its abs[...]

U.S. Complaint to WTO on China VPNs Is Itself Troubling


On 23 February, the U.S. Administration had the chutzpah to file a formal communication to the World Trade Organization (WTO) complaining about "measures adopted and under development by China relating to its cybersecurity law." However, it is the U.S. complaint that is most troubling. Here is why.

The gist of the U.S. complaint is that China's newly promulgated directive on the use of VPN (Virtual Private Network) encrypted circuits from foreign nations runs afoul of Article 5(c) of the Annex on Telecommunications of the General Agreement on Trade in Services (GATS). The U.S. alleges that "this provision was designed specifically to ensure access to leased lines and other services (e.g., VPN services..." Apart from the current reality that the U.S. Administration has been attempting to destroy the WTO and its agreements, including calling for a trade war, the complaint is factually wrong, and the notion from a cybersecurity standpoint is profoundly wrong-headed.

The complaint is disingenuous

First of all, the WTO Agreement and Annex on Telecommunications being referenced here emerged from negotiations in the 1986-1994 timeframe in conjunction with the ITU 1988 Melbourne Treaty, which enabled the use of international leased lines for the first time for services to the public, including datagram internets. The WTO Agreement and Annex are explicitly included in the Melbourne Treaty reference materials. The implementation of Art. 5(c) was expressly predicated on nations following ITU-T standards. (I can credibly assert this fact because I was the ITU representative to the GATS meetings who proposed placing the provisions into the draft agreement!) However, several years later, the Clinton Administration decided to pursue a strategy of unilaterally ignoring the ITU 1988 Melbourne Treaty obligations and the standards that were intended to be used. Today, the U.S.
has essentially walked away from their development, while China has continued to invest considerable resources in their continuing evolution and application for uses such as VPNs in conjunction with data centres. Now that the current U.S. Administration and its president are unceasingly disparaging multilateral trade cooperation and the WTO, as well as unilaterally abrogating its trade agreements, it is well beyond disingenuous to complain about another nation that is arguably acting in accordance with them. Trump creates a whiplash transition from WTO cooperation to WTF chaos.

The complaint seeks to impose capabilities that did not exist at the time of the Agreement

VPNs did not even exist at the time the GATS Agreement and the Annex were developed; and to the extent they were even contemplated, the WTO Agreement — like the ITU treaty provisions — has explicit national security exceptions. Indeed, the first apparent reference to the use of the term "encryption" within the WTO did not occur until 1998. Most histories place the origin of the VPN concept no earlier than 1996. Technical standards were not even discussed globally until around 2000, and only began to be discussed in conjunction with data centres in 2011.

The ITU-T itself has published multiple international standards for VPNs, including: Rec. ITU-T Y.1311, Network-Based VPNs — Generic architecture and service requirements (03/02); Rec. ITU-T Y.1314, Virtual private network functional decomposition (10/05); and Rec. ITU-T Y.2215, Requirements and framework for the support of VPN services in NGN, including the mobile environment (09/06). There are also two relatively recent ITU-T standards: Supp. 30 to Rec. IT[...]

Women in Security Organize New Conference in Reaction to RSA's Lack of Female Speaker Inclusion


RSA, one of the largest cybersecurity conferences, has been criticized for booking only one female keynote speaker this year: Monica Lewinsky, who will be speaking about online bullying. Lewinsky herself has expressed concern over this "oversight". In protest, a group of women have organized a new conference called OURSA (Our Security Advocates Conference) that will feature more women from the industry. "Some conferences claim this is too hard to do because of the overall lack of diversity in the industry, we're going to prove otherwise," a spokesperson for the conference told reporter Kate Conger of Gizmodo. OURSA will be a single-track, one-day conference held in San Francisco on April 17, 2018. Speakers include Adrienne Porter Felt of Google, Aanchal Gupta of Facebook, Sha Sundaram of Snap, and Eva Galperin of the Electronic Frontier Foundation.

Follow CircleID on Twitter

More under: Cybersecurity

Several Major Tech Companies File Suit Against FCC Over Net Neutrality Repeal


Several major tech companies, including Kickstarter, Foursquare and Etsy, filed a lawsuit today against the Federal Communications Commission in an effort to preserve net neutrality rules. Other companies in the group include Shutterstock, Expa and Automattic. Ali Breland, reporting in The Hill: "The companies join Vimeo and Mozilla, as well as several state attorneys general who have also filed lawsuits against the FCC in support of the net neutrality rules. Like the other lawsuits, their new case hinges on the Administrative Procedure Act, which they argue prevents the FCC from 'arbitrary and capricious' redactions to already existing policy."

Follow CircleID on Twitter

More under: Access Providers, Broadband, Net Neutrality, Policy & Regulation