2016-10-20T17:00:01-08:00"Reverse Domain Name Hijacking" (RDNH) is a finding that a panel can make against a trademark owner in a case under the Uniform Domain Name Dispute Resolution Policy (UDRP). RDNH Defined Specifically, according to the UDRP Rules, RDNH is defined as follows: "Reverse Domain Name Hijacking means using the [UDRP] in bad faith to attempt to deprive a registered domain-name holder of a domain name." The Rules also state: "If after considering the submissions the Panel finds that the complaint was brought in bad faith, for example in an attempt at Reverse Domain Name Hijacking or was brought primarily to harass the domain-name holder, the Panel shall declare in its decision that the complaint was brought in bad faith and constitutes an abuse of the administrative proceeding." While neither the UDRP nor the Rules provide any further details or guidance, the WIPO Overview of WIPO Panel Views on Selected UDRP Questions, Second Edition, provides some insight into the circumstances in which panels have found RDNH: To establish Reverse Domain Name Hijacking, a respondent would typically need to show knowledge on the part of the complainant of the complainant's lack of relevant trademark rights, or of the respondent's rights or legitimate interests in, or lack of bad faith concerning, the disputed domain name. Evidence of harassment or similar conduct by the complainant in the face of such knowledge (e.g., in previously brought proceedings found by competent authorities to be groundless, or through repeated cease and desist communications) may also constitute a basis for a finding of abuse of process against a complainant filing under the UDRP in such circumstances. 
The WIPO Overview lists the following circumstances in which UDRP panels have actually entered a finding of RDNH:
• the complainant in fact knew or clearly should have known at the time that it filed the complaint that it could not prove one of the essential elements required by the UDRP;
• the complainant failed to notify the panel that the complaint was a refiling of an earlier decided complaint or otherwise misled the panel;
• a respondent's use of a domain name could not, under any fair interpretation of the reasonably available facts, have constituted bad faith;
• the complainant knew that the respondent used the disputed domain name as part of a bona fide business for which the respondent obtained a domain name prior to the complainant having relevant trademark rights.

RDNH in Practice

Although WIPO's UDRP statistics do not indicate how many cases have resulted in a finding of RDNH, a regular reading of decisions makes clear that RDNH is far from common. It appears as if a little more than 100 WIPO UDRP decisions so far this year have mentioned RDNH — out of more than 2,400 decisions to date. And, of course, not all of those decisions actually found RDNH; many of them denied it. Here's one particularly interesting example: In a decision denying transfer of the domain name
2016-10-20T12:22:00-08:00

Update on the Digital Economy Officers Program at the U.S. Department of State

Answering questions at the Internet Association's Virtuous Circle conference last week, Secretary Kerry presented the U.S. Department of State's effort to prioritize global digital economy issues abroad in order to reflect the growing importance of these issues in both economic and foreign policy. The State Department has made real progress on this initiative in the last year and hopes to continue our momentum going forward. Approximately six months ago, we announced the State Department's new Digital Economy Officers (DEO) Program with the goal of strengthening the capacity of our people, embassies, and consulates overseas to address the challenges and seize the opportunities that are emerging with the development of the global digital economy. We believe that this new global platform will help enhance the prosperity not only of U.S. people and firms, but that of other nations and their people, helping achieve more broadly shared prosperity and sparking innovative solutions to both commercial and social challenges that the world faces.

Secretary Kerry speaking about Internet policy at the Virtuous Circle Conference on October 10, 2016, at the Rosewood Sand Hill Hotel in Menlo Park, California. (State Department Photo)

Given that the internet and the digital economy are global in scope and affect a range of U.S. interests, the State Department is uniquely equipped among U.S. agencies to engage, lead, and advocate on these issues. The component parts of the global digital economy are the communications networks that connect the world and the data, information, and services that ride over those wires and airwaves, as well as every industry process across sectors dependent on those networks and services. With that definition in mind, it is clear that the global economy is in many ways dependent on the health of the global digital economy.
And the issues involved — from debates over data localization to privacy to intellectual property and platform regulation — constitute a dynamic and rapidly changing area of foreign and economic policy that demands constant updating of skills, access to information, and new capacities to keep pace. The development of the modern digital economy creates immense opportunity for economic and social progress due to its economies of scale and scope, but it is not without its challenges. It raises complex issues that are often technical but require an understanding of how the technical interacts with the political and economic outcomes we are pursuing in the world. Issues ranging from market competition between firms operating in the digital space, to how changes in production resulting from the digital economy are impacting labor markets, to how all of this information is transferred and used in a manner that respects our basic dignity, confront us in dialogues and debates within and across markets all over the world. Since the launch of the DEO program, we have identified nearly 140 digital economy officers at our embassies and consulates around the world. To make sure that our diplomatic workforce is informed and competitive, we have taken some important steps in the last six months to elevate our game in this space:

Training: We have strengthened our annual training course on Internet and telecommunications policy at the Foreign Service Institute and are working on a proposed global training event for digital economy officers to be held in the United States in the spring of next year.

Communications: We have increased the frequency of our communications with posts on digital economy issues, improved the Department's internal website on digital economy issues, and kicked off a series of webinars on our work. We have hosted two webinars so far and have two more scheduled in the coming months.
Human Resources Management: We are continually striving to make sure that we have the right people working on digital economy issues and that their perform[...]
2016-10-20T10:49:00-08:00

The Uniform Domain Name Dispute Resolution Policy (UDRP) is an online dispute resolution regime. While panelists technically have discretion under Rule 13 to hold in-person hearings if they "determine[ ] ... and as an exceptional matter, that such a hearing is necessary for deciding the complaint," no in-person hearing has ever been held. Rule 13 exists to be ignored. Parties make their appearance and present themselves on the written page, and what they say, how they express themselves in pleadings, and what they annex are crucial to their argument. Traditionally, with live witnesses, juries and judges look at and listen to performances; demeanor, comportment, and facial expression are important factors as indicators of truthfulness. While we can't transfer these qualities to paper submissions in any literal sense, there are equivalents if we think of these qualities in a broader sense as meaning the content and nuance of a speaker's presentation in writing, selecting, organizing, and proving contentions. What speakers say, the language they use, the allegations they make, the narratives they construct, and the evidence they produce or withhold play a decisive role in assessing their claims and defenses. In a word, speakers have to be credible, which is no small matter because it requires a disciplined approach to the content of argument in both the pleadings and annexes. We are constantly reminded of this in UDRP decisions. In the small percentage of contested disputes (that is, where respondents appear and defend), there is either a lack of evidence or a lack of credibility, or both. It infects both parties' submissions. However, measuring credibility is not scientific, and there are cases that go one way when they should have gone the other. The dispute over
2016-10-19T10:09:00-08:00

Observers of the IANA transition may have noticed a remarkably interesting fact: both supporters and opponents of the transition like to cite China, along with a small number of other countries, as evidence in favor of their arguments. For supporters such as Larry Strickling, blocking the transition benefits China in that it will "intensify their advocacy for government-led or intergovernmental management of the Internet via the United Nations." On the contrary, opponents led by Ted Cruz think that the US should not "give away control of the Internet to a body under the influence and possible control of foreign governments" like China, as they will "censor the internet internationally." Relating IANA's technical coordination to censorship is certainly wrong, as Tim Berners-Lee and Daniel Weitzner have persuasively pointed out. By contrast, the pro-transition camp's arguments appear more plausible. Their arguments imply that China does not like the transition at all, and therefore the US must make it happen. It is an unsurprising, even popular idea. In many places, China has been labeled as a stakeholder that at best "dislikes," and at worst "opposes," the multistakeholder governance process, which is claimed to be the building block of ICANN and the broader Internet community. Unfortunately, these understandings turn out to be misleading or wrong. China has recently welcomed the IANA transition. In a press conference for the preparation of the third World Internet Conference a week ago, Ren Xianliang, the deputy chief of the Cyberspace Administration of China (which oversees Internet governance), said that China welcomes the US government's decision to relinquish its oversight of the critical Internet resources. Mr. Ren emphasized that China has given high-level attention to Internet development and Internet governance.
In addition, China has consistently advocated constructing a cyberspace that is peaceful, secure, open, and cooperative. Wishing for a smooth transition, Mr. Ren said he believed that the transition would have a positive impact on the internationalization of critical resources management and on bridging the digital divide between developed and developing countries. I am not in a position to elaborate on the policy implications of Mr. Ren's remarks. However, the positive attitude from a high-profile authority at least sends a clear signal that China is not standing as a hurdle to the transition. I believe that it will encourage the Chinese Internet community to participate more actively in post-transition ICANN affairs and, more broadly, in global Internet governance discussions.

Written by Jian Chuan Zhang, Senior Research Fellow at KENT and ZDNS
Follow CircleID on Twitter
More under: ICANN, Internet Governance
2016-10-19T05:00:00-08:00

Google has recently announced the release of Nomulus, its free, open source registry software, triggering discussion of its impact on the industry. Afilias has over 15 years of experience in registry operations, and offers the following initial thoughts. * * * First, free registry software is not new. CoCCA (Council of Country Code Administrators) has offered this option for years, and TLDs such as .CX (Christmas Island) and .KI (Kiribati) use it. It is supported on a "best efforts" basis and appears to meet the limited needs of a few small operators. Second, registry services are about the SERVICE, not the software. While software is important, someone has to answer the phone when registrars (and ICANN) call. Someone has to deal with abuse if it happens. Someone has to accept deposits, manage billing, and keep the accounts straight. Even Afilias doesn't know how to automate EVERYTHING (and we have tried!). Most TLD owners don't like operational administrivia, and find it cheaper and easier to outsource it. Third, free registry software does not mean a free registry operation, as Minds and Machines (MMX) recently concluded. MMX has decided to stop running its own registry and outsource its entire registry (and registrar) operations. Why? As stated in their 20SEP2016 Investor Presentation, this was to "Rationalize the business into a pure play owner of top level domains. Historically, MMX ran its own technical backend (RSP) and retail outlet (registrar) at considerable cost." After years of trying to do everything themselves, MMX is outsourcing operations so they'll be free to focus scarce internal resources on the strategically more important parts of their business. Finally, even Google misses the mark sometimes, as evidenced in the Google Graveyard, which is rife with examples of products that were launched and then discontinued (e.g. Google Reader, Google Talk, iGoogle, Google Health, Knol, Picnik and many others).
* * * What will be the impact of another free registry software option? With over 1,400 TLDs in the root now, surely someone will try it and gain some real-life experience. Stay tuned.

Written by Roland LaPlante, Senior Vice President and Chief Marketing Officer at Afilias
Follow CircleID on Twitter
More under: Domain Names, Registry Services, Top-Level Domains
2016-10-18T11:28:01-08:00

A certain number of brands are using a dot brand domain as their main website. The following analysis looks at how dot brands are used in brand communication. Are brands communicating about their domain names, or are domain names supporting brand communication?

Dot brand domains – what are they?

Dot brand corresponds to the ability, for a certain number of pioneering brands, to use their brand name at the top level of the naming system for their own purposes, in parallel to a dot com or a country code. As of early October, 553 brands had signed an agreement with ICANN to run their own brand Top Level Domain. Five more brands are still finalizing their discussions and agreements. Additional potential new openings will happen in the coming years. Thirty of the brands have decided to shift major digital assets to their dot brand, and around ten have chosen to use a dot brand domain as their main domain name.

Three main communication options

There are three main options for dot brand domain name communication:
— Communicating about the launch and the new domain name itself
— Explaining the change in a global communication
— In a totally seamless way

Communicating about the launch of the domain name

The launch of these new domain names is most often simultaneous with the launch of a new website, therefore most of the communications are a mix between launching a new website and a new digital platform. Weir's launch, in March 2016, is a very good example of a global digital platform launch. The main vehicle for that communication is the press release issued by brands upon launch. In a previous article, we outlined the content of the press releases issued upon launch of the new dot brand websites. There is limited communication with the end customer. The only brand that displayed a banner on the very first page of its site and on social media is Sener.

Figure 1: SENER promotes its top-level dot brand domain on its new website.
(Click to Enlarge)

Explaining the change

Canon has decided to explain to its visitors that there was a change in the structure of the domain name.

Figure 2: Canon welcoming customers to global.canon: "Until now, the URL we used for Canon's global website was 'www.canon.com.' From now on, however, we will begin gradually introducing 'global.canon' to provide information to a global audience with a new online presence." (Click to Enlarge)

This message is very prominently displayed on the first page of Canon's new corporate website. Here, Canon is not using its new domain name to communicate, but believes it is important to explain the change to its users and visitors, who may be surprised by the new domain name that appears when typing in Canon.com.

Totally seamless communication

The majority of brands are launching their domain name as if it had a traditional structure. The first uses of domain hacks — mainly used as URL shorteners to post links on social media — are an interesting example. On December 15, 2009, Google adopted the ccTLD of Greenland to create the domain goo.gl, and later youtu.be, using the ccTLD of Belgium. There was no communication about them; brands simply started to use them and it became natural. Dot brand is a much more significant change than simple domain hacks, involving business issues such as distribution networks and security. On October 1, 2016, Google launched blog.Google, described by the company as: "Discover all the latest about our products, technology, and Google culture on our official blog." Interestingly, the site discusses the introduction of new features in Google products such as Docs and Sheets, and the presentation of a new phone, but nothing about the dot Google domain name itself.

Figure 3: Leclerc promoting its branded TLD in a brochure. (Click to Enlarge)

Brands are simply putting a new URL on their communication tools in a very seamless fashion. T[...]
The death of Thailand's King Bhumibol Adulyadej has led to stores running out of black and white clothing as the population mourns its leader in color-appropriate clothing.
What does this mean for website localization?
Consider the Thailand home pages for Apple:(image)
And Coca-Cola has gone black on its social feeds:(image)
Web localization isn't about creating a localized website and forgetting about it.
It's about creating a living and breathing website that responds quickly to local events. Web localization is about respect.
Written by John Yunker, Author and founder of Byte Level Research
Follow CircleID on Twitter
More under: Web
2016-10-16T12:08:00-08:00

AFRINIC is the regional Internet registry for Africa, and our core activity is to manage and distribute Internet number resources (IPv4, IPv6 and ASNs) to the 57 economies in Africa. IPv4 address scarcity is a very real issue worldwide; the internet keeps growing, and the demand for Internet addresses will continue to grow. Africa has the lowest number of Internet users in the world. Internet penetration in Africa jumped from a very low level in 2009 to around 16% of individuals in 2013 and over 20% in 2015. While the number of Africans getting online has increased enormously over the last few years, and continues to grow, one of the biggest barriers to getting online, aside from prohibitive costs and lack of infrastructure, is the issue of network reliability and stability. To accommodate the massive expansion in Internet-enabled services and devices, a new system of addressing had to be introduced to ensure enough unique IP addresses were available: IPv6. IPv6 is an important enabler for the next wave of Internet of Things and mobile technologies, catering for the increase of devices connected to the Internet. One of AFRINIC's critical missions is to ensure that everyone — from governments to network operators to the general public — understands the urgent need to deploy v6. Here are some recommendations that can help plan for the future. Governments, ISPs and operators should, in the meantime, skill up their technicians and engineers on IPv6. AFRINIC provides training free of charge to network engineers all over Africa, and has trained more than 3,000 of them to deploy IPv6 over the last 10 years. Businesses and website operators: your public-facing services need to be accessible to the entire Internet — including those who have IPv6 only. On the upside, most large service, equipment and connectivity providers worldwide have already made the effort to ensure their infrastructure is ready for IPv6.
Now it is vital for African Internet companies to align with IPv6 so that their networks, services and content remain relevant as global players. Government organisations should coordinate with the industry to promote awareness and educational activities, adopt regulatory and economic incentives to encourage IPv6 adoption, require IPv6 compatibility in procurement procedures, and officially adopt IPv6 within government agencies. It is crucial that decision makers get involved in the planning and implementation of IPv6 deployment. Some deployment of v6 is delayed because organisations do not understand the value of investing in upgrading the network while the business is still running on v4. The networks and ISPs who will face the biggest challenge are those with legacy infrastructure that is completely unprepared for IPv6. They need to start revamping their networks to ensure they are ready in good time. Multiple transition technologies are available, and each provider needs to make its own architectural decisions. Though AFRINIC still has IPv4 addresses to spare, it is time for African governments, ISPs, network operators, academia and big enterprises to treat deployment and integration of the new protocol as an imperative for the long-term growth and stability of the Internet. The future of the Internet is IPv6, and it is time to embrace that future now, as the implications of failing to embrace IPv6 could be damaging to Africa's Internet growth.

Written by Vymala Thuron, Head of External Relations
Follow CircleID on Twitter
More under: Access Providers, Internet of Things, IPv6
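The scale gap that motivates the move from v4 to v6 can be illustrated with Python's standard ipaddress module. This is a minimal sketch; the first sample address is an arbitrary IPv4 example, and 2001:db8::/32 is the reserved IPv6 documentation prefix:

```python
import ipaddress

def ip_version(addr: str) -> int:
    """Return 4 or 6 for a textual IP address."""
    return ipaddress.ip_address(addr).version

print(ip_version("196.216.2.1"))   # 4
print(ip_version("2001:db8::1"))   # 6

# IPv6's 128-bit address space dwarfs IPv4's 32-bit space:
v4_total = ipaddress.ip_network("0.0.0.0/0").num_addresses  # 2**32
v6_total = ipaddress.ip_network("::/0").num_addresses       # 2**128
print(v6_total // v4_total == 2**96)  # True
```

A dual-stack deployment would publish both A and AAAA records for a service, so that clients of either version, including the v6-only clients mentioned above, can reach it.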
2016-10-15T13:45:00-08:00

What should we do with software patents? I've seen both sides of the debate, as I work a great deal in the context of standards bodies (particularly the IETF), where software patents have impeded progress on community-driven (and/or community-usable) standards. On the other hand, I have been listed as a co-inventor on at least 40 software patents across more than twenty years of work, and have a number of software patents either filed or in the process of being filed. The context of the question, of course, is the recent ruling by a United States Court of Appeals in a particular patent case. One particularly relevant statement from a concurring opinion (which means the judge writing the opinion agreed with the ruling, but not with the reasoning the majority of the judges involved in the case used to reach their conclusion) is: "Software lies in the antechamber of patentable invention. Because generically-implemented software is an 'idea' insufficiently linked to any defining physical structure other than a standard computer, it is a precursor to technology rather than technology itself..." (as quoted from http://fortune.com/2016/10/03/software-patents/) There were, in fact, two different areas where the judges were concerned about software patents. The first was in relation to free speech rights, where the court argued that software patents impinge on the right to receive information and ideas by attempting to patent the ideas themselves. An argument against this might be that the patent doesn't prevent the reception of the idea, only the implementation of the idea in a commercial product — but so far, this argument doesn't seem to have been tested (or perhaps it has been rejected in some case I'm not aware of). The second is that most software patents do not, in fact, rise to the level of "nonobviousness" required of "real" patents. At this point, software patents still stand in the United States.
The reasoning of the primary and concurring opinions, however, is likely to be picked up by other courts, potentially reducing (or eliminating, over time) the enforceability of software patents. Since I'm not a legal scholar, I'm not going to comment on the overall likelihood of software patents becoming less than useful. Instead, what I'd like to think through is what the reaction of the network engineering world might be. This could be either good or bad news for standards bodies like the IETF. It will be good news if companies continue to innovate interoperable network protocols and standards without the "protection" of software patents. On the other hand, it will be bad news if the movement away from software patents is replaced by a strong movement towards trade secrets to protect perceived value. Combined with the current trend of adding primary value through vertical integration and hyperconvergence, rather than the older model of adding value in conjunction with open standards, the lack of software patents could serve to fragment the market. This would probably be bad news for network operators simply trying to understand the various vendor offerings and to compare them directly. In the world of software patents, a company could protect new software with patents, and then freely explain to the world how it works. There was always a good bit of hype involved in the description, of course, but at least the description was generally "out there," and available. If patent applications are generally struck down, companies will resort to stronger NDAs, and less complete descriptions, to protect what they perceive as intellectual property. Overall, the worst possible outcome would be weaker standards bodies combined with more strongly vertically integrated, secretive vendors. The best possible outcome seems to be a more open environment, where id[...]
2016-10-14T15:30:00-08:00

A few weeks ago, on Oct. 1, 2016, Verisign successfully doubled the size of the cryptographic key that generates Domain Name System Security Extensions (DNSSEC) signatures for the internet's root zone. With this change, root zone Domain Name System (DNS) responses can be fully validated using 2048-bit RSA keys. This project involved work by numerous people within Verisign, as well as collaborations with the Internet Corporation for Assigned Names and Numbers (ICANN), Internet Assigned Numbers Authority (IANA) and National Telecommunications and Information Administration (NTIA). As I mentioned in my previous blog post, the root zone originally used a 1024-bit RSA key for zone signing. In recent years the internet community transitioned away from keys of this size for SSL and there has been pressure to also move away from 1024-bit RSA keys for DNSSEC. Internally, we began discussing the root Zone Signing Key (ZSK) length increase in 2014. However, another important root zone change was looming on the horizon: changing the Key Signing Key (KSK). In early 2015, ICANN assembled a design team tasked with making recommendations about changing the root zone KSK. As the design team wrapped up its work and delivered its report, it became clear that we could either work quickly to increase the ZSK in 2016 before the KSK rollover, or wait until some date in 2018 after the KSK rollover. At the Root KSK Ceremony in May of this year, Verisign presented the first 2048-bit ZSK for signing, which was pre-published in the root zone on Sept. 20, 2016. Then, at the ceremony in August, we presented the next set of ZSKs to be signed, which will be used from October through December. On Oct. 1, 2016 we began publishing root zones signed with the larger ZSK.

A Look at the Data

During these two important milestones we collected data with the assistance of our root server operator colleagues.
With this data we can observe the rates of DNSKEY queries, UDP truncation, TCP-based queries, as well as ICMP "need to fragment" and "fragment reassembly time exceeded" signalling (see Figure 1). We've been closely watching this data for any evidence of problems related to the larger ZSK, and I'm happy to say that none has been found and no problems have been reported.

Figure 1: Rate of ICMP "Unreachable Need to Fragment" Messages (Click to Enlarge)

Apart from the increased level of security provided to internet users by the larger ZSK, the parties most affected by this change are the root operators. On Oct. 1, the size of the root zone file jumped from 1.6 to 2.1 MB. Furthermore, due to the high percentage of queries requesting DNSSEC records, all root servers experienced a sudden 30 percent increase in outgoing bandwidth (see Figure 2).

Figure 2: Change in Average UDP Response Size upon publishing the root zone signed with a 2048-bit ZSK (Click to Enlarge)

Thanks for a Smooth Implementation

I would again like to thank everyone who made this important change possible, especially those who we worked closely with at IANA (now Public Technical Identifiers), ICANN and NTIA. The root server operators played a crucial role and we appreciate their support and cooperation. Lastly, various DNS experts were also helpful in providing suggestions, criticisms and kudos throughout the process. Although we feel confident that the ZSK length change occurred without any negative impacts, we encourage everyone to remain vigilant and report any potential issues to Verisign at email@example.com.

On to the KSK Rollover...

Now that the ZSK length change is complete, we look forward to continued collaboration with ICANN on the KSK rollover. If you've had a chance to read their plans and documentation, you know that the KSK rollover timeline is longer and more complex, primarily because it involves updating trus[...]
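The bandwidth jump described above follows directly from how RSA signatures work: a signature is as long as the key's modulus, so doubling the ZSK from 1024 to 2048 bits adds 128 bytes to every RRSIG it produces. A back-of-envelope sketch (the per-response figure below is an illustrative assumption, not a measured value):

```python
def rsa_signature_bytes(key_bits: int) -> int:
    """An RSA signature is the same length as the key's modulus."""
    return key_bits // 8

old_sig = rsa_signature_bytes(1024)   # 128 bytes per RRSIG
new_sig = rsa_signature_bytes(2048)   # 256 bytes per RRSIG
growth_per_rrsig = new_sig - old_sig  # 128 extra bytes per signature

# Illustrative only: a DNSSEC response carrying one RRSIG that was
# roughly 500 bytes under the old ZSK grows by about a quarter.
assumed_old_response = 500
print(growth_per_rrsig)                                   # 128
print(round(growth_per_rrsig / assumed_old_response, 2))  # 0.26
```

Responses carrying several signatures grow by a multiple of that 128 bytes, which is consistent with the roughly 30 percent aggregate bandwidth increase the root operators observed.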
2016-10-13T11:12:00-08:00

Neustar, a leading provider of registry services, is hosting a Town Hall meeting this month for the United States' country code Top-Level Domain, .US. Neustar introduced the .US Town Hall last year to reflect our commitment — and the Commerce Department's commitment — to the bottom-up, multistakeholder model of DNS management. The public forum is an important part of ensuring that .US continues to be a vibrant namespace that reflects America's diversity, creativity, and innovative spirit. This is an opportunity for stakeholders and enthusiasts to collaborate, sharing expertise and ideas and providing feedback on everything related to .US and the domain name industry. It is also an opportunity to share the policy work of the .US Stakeholders Council over the past 18 months. The virtual .US Public Stakeholder Town Hall Meeting will take place on October 26, from 3pm to 4:15pm Eastern (noon to 1:15pm Pacific), and will provide an opportunity to share information about the .US namespace. Featured speakers include members of the .US Stakeholders Council as well as Neustar representatives, who are prepared to discuss stats and updates on the state of the domain industry, the state of .US specifically, and considerations for .US growth. Agenda items include:
— State of the Domain Industry
— State of .US
— Policy proposals under consideration to grow .US and respond to the concerns and interests of .US registrants. Specifically, we will be discussing and soliciting community input on:
• Privacy Registration Services
• Premium Names, including 1 and 2 character strings
• IDN registrations in .US
If you have an interest in the .US domain space and internet growth and trends, then you can be part of the discussion by registering here. Further details will be circulated to all registered participants in advance of the meeting.
Written by Becky Burr, Deputy General Counsel and Chief Privacy Officer at Neustar
Follow CircleID on Twitter
More under: DNS, Domain Names, Registry Services, Multilinguism, Privacy, Top-Level Domains
2016-10-13T10:33:00-08:00The Uniform Domain Name Dispute Resolution Policy (UDRP) limits parties' submissions to complaints and responses; accepting "further statements or documents" is discretionary with the Panel (Rule 12, Procedural Orders), although the Forum (in Supplemental Rule 7) but not WIPO provides for supplementing the record with the proviso that "[a]dditional submissions must not amend the Complaint or Response." For some panelists, Rule 7 contradicts the Policy. A possible explanation for introducing the supplementary submission rule into the arbitral process was to conform with litigation practice in the U.S., since UDRP complaints perform a dual function in both stating a claim and moving for relief; so that while additional submissions are generally discouraged, an argument can be made for them on the ground that UDRP complaints are essentially motions for summary judgment. In any event, the temptation to have the last word is often irresistible, but in this forum the general rule is that parties are expected to get their contentions and proofs right in their initial submissions. Rule 7, properly used, allows parties to sum up their adversaries' deficiencies; used improperly, additional submissions contribute nothing new to the argument and have led to puzzling awards. This explains the tension in Rule 7 and the split that occasionally rises to the surface. One of the veteran panelists for both WIPO and the Forum has long questioned Rule 7. In YETI Coolers, LLC v. Ryley Lyon / Ditec Solutions LLC, FA160500 1675141 (Forum July 11, 2016) (
2016-10-13T07:40:00-08:00The first element of the Uniform Domain Name Dispute Resolution Policy (UDRP) requires a complainant to prove that the disputed domain name "is identical or confusingly similar to a trademark or service mark in which the complainant has rights." It's unusual for a complainant to fail on this first of three prongs, but one recent case demonstrates just how uncertain the UDRP can be sometimes. 'A Standing Requirement' The first UDRP element has been called nothing more than "a standing requirement" by the WIPO Overview of WIPO Panel Views on Selected UDRP Questions, Second Edition. Once a complainant has established that it actually has rights in a trademark, proving that the disputed domain name is "identical or confusingly similar" to that trademark is usually straightforward. Indeed, numerous UDRP decisions have said that "the fact that a domain name wholly incorporates a complainant's registered mark is sufficient to establish identity or confusing similarity for purposes of the Policy." The WIPO Overview elaborates the majority position this way: The threshold test for confusing similarity under the UDRP involves a comparison between the trademark and the domain name itself to determine likelihood of Internet user confusion. In order to satisfy this test, the relevant trademark would generally need to be recognizable as such within the domain name, with the addition of common, dictionary, descriptive, or negative terms… typically being regarded as insufficient to prevent threshold Internet user confusion. Application of the confusing similarity test under the UDRP would typically involve a straightforward visual or aural comparison of the trademark with the alphanumeric string in the domain name. In other words, so long as a disputed domain name contains the complainant's trademark, a UDRP panel will typically find that the domain name is confusingly similar to the trademark, even if the domain name contains additional words. 
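As a toy illustration of the "straightforward visual comparison" described above (not how panels actually decide, and using hypothetical names), the mechanical core of the threshold test can be sketched as a containment check:

```python
def confusingly_similar(domain: str, trademark: str) -> bool:
    """Toy sketch of the UDRP first-element threshold test: is the
    trademark recognizable within the domain's second-level label?
    Real panels apply judgment; this captures only the mechanical
    comparison described above."""
    label = domain.lower().split(".")[0]            # e.g. "acme-store" from "acme-store.com"
    return trademark.lower().replace(" ", "") in label

# A domain wholly incorporating the mark plus additional words still matches:
print(confusingly_similar("acme-store.com", "Acme"))   # True
print(confusingly_similar("unrelated.com", "Acme"))    # False
```

The point of the sketch is how low the bar is: any domain that wholly incorporates the mark passes, regardless of the extra words around it.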
Importantly, this low threshold only means that the complainant has succeeded on the first of three elements. The complainant still must prove the second and third UDRP elements: that the domain name registrant has "no rights or legitimate interests in respect of the domain name" and that the domain name "has been registered and is being used in bad faith." The bad faith element, in particular, is often the most challenging hurdle. One Little Word: The Exception of the Jaguar Land Rover Decision Like all rules, of course, the first UDRP element has exceptions. One of them appeared, albeit with little explanation, in a dispute over the domain name
2016-10-12T15:41:00-08:00The northern Syrian city of Aleppo is one of the key battlegrounds of that country's ongoing civil war as well as the epicenter of the European refugee crisis. The most appropriate United States response to events in Aleppo has become a major foreign policy question among the candidates in this year's U.S. presidential election. Experts are now predicting that forces loyal to President Bashar al-Assad, backed by the Russian military, will take control of rebel-held eastern Aleppo within weeks. The map below illustrates the current state (as of 9 October 2016) of the conflict in Aleppo, depicting rebel-held regions in green and those under government control in red (see the full map on Wikipedia). [Figure: Battle of Aleppo (2012 – present); source: Wikipedia] Amidst all of this, the Syrian Communications and Technology Ministry announced this week that they had completed a new fiber optic line connecting the parts of Aleppo loyal to President Assad to the state telecom's core network in Damascus, increasing available bandwidth for residents. It had previously been connected by a high-capacity microwave link. From a BGP routing standpoint, this development was reflected in the disappearance of AS24814, whose appearance serving Aleppo was first reported in 2013. At 14:42 UTC (5:42pm local) on 10 October 2016, we saw the 14 prefixes announced by AS24814 shift over to Syrian Telecom's AS29256. History of AS24814 In August of 2013, Dyn Research jointly published a blog post with the Washington Post documenting the loss of Internet service in Syria's most populous city, Aleppo. Up until then, Internet service in Aleppo had been reliant on transit from Turk Telekom via a fiber optic cable traversing the rebel-held city of Saraqib to the west.
This is the same Saraqib that was the site of suspected chemical weapons attacks months earlier. Fiber optic communications gear in Saraqib was disabled by rebels on 29 August 2013, resulting in the outage we reported, and service through Saraqib has never been restored. In an effort to restore service to Aleppo, the Syrian Telecom (formerly known as STE) created an emergency fiber circuit to reach Turk Telekom via Idlib, Syria. The link was activated at 14:45 UTC on 8 October 2013 and it was then that AS24814 was first employed, designating this emergency communications link. (See report of this development in a 2013 blog post.) While AS24814 (i.e., Internet service to Aleppo and surrounding areas) suffered numerous outages, service was lost for an extended period of time beginning in March 2015 when rebels took Idlib as part of the 2015 Idlib offensive. The fiber optic cables carrying service for AS24814 (Aleppo) were destroyed as a result of that offensive, resulting in a nearly 8-month outage. AS24814 would only reappear on 8 November 2015 after Syria's government military forces, aided by a new Russian bombing campaign, made the situation safe enough for government telecommunications engineers to reconnect Aleppo using a high capacity microwave link to the coastal city of Latakia, Syria. Yesterday, that microwave link (along with the emergency Internet service designated by AS24814) was retired and replaced with a new fiber optic cable, providing substantially greater bandwidth and routed through Khanasir, a town located safely inside government-controlled territory. In Conclusion Internet service in rebel-held eastern Aleppo will not benefit from the new fiber optic cable as described here in this post. The remaining residents in this part of the city r[...]
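The routing change described above, prefixes moving from the emergency AS to Syrian Telecom's AS, can be illustrated with a small sketch. The prefix tables below are hypothetical stand-ins, not the actual 14 prefixes observed:

```python
def shifted_prefixes(before: dict, after: dict, old_asn: int, new_asn: int):
    """List prefixes whose origin AS moved from old_asn to new_asn
    between two routing-table snapshots (prefix -> origin AS)."""
    return sorted(
        prefix for prefix, asn in before.items()
        if asn == old_asn and after.get(prefix) == new_asn
    )

# Illustrative snapshots only; the real data came from observed BGP updates.
before = {"5.0.0.0/21": 24814, "5.0.8.0/22": 24814, "77.44.128.0/18": 29256}
after  = {"5.0.0.0/21": 29256, "5.0.8.0/22": 29256, "77.44.128.0/18": 29256}

print(shifted_prefixes(before, after, old_asn=24814, new_asn=29256))
# → ['5.0.0.0/21', '5.0.8.0/22']
```

In practice the "snapshots" would come from a BGP collector's RIB dumps before and after the cutover; the comparison itself is just this origin-AS diff.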
2016-10-12T14:45:00-08:00There were highs and lows in city hall's rollout of the .nyc TLD last month. Early on we were cheered when we received notification that our application for the JacksonHeights.nyc domain name had been approved. And with the de Blasio Administration committed to putting the city's 350+ neighborhood domain names under the control of local residents, we began to imagine that our decade-old vision of an "intuitive" city Internet might materialize, in which one would find informative presentations of our city's art galleries at artgalleries.nyc, find banks at banks.nyc, and locate a church at churches.nyc. And with each such directory a bonus would arrive: the opportunity for a New Yorker to form a new small business. But our confidence plummeted when the city's contractor announced that a high-bid auction was to be held on October 24 for 20 domain names, the first of what might ultimately be 3,000+ auctioned names, many of which are vital to the realization of that intuitive city and the utility of the TLD. The basis of our disappointment is epitomized by the hotels.nyc domain name. It's reasonable to assume that, in a high-bid auction, an entity such as the Hilton Corporation, with deep pockets and 30 hotels in or near the city, will win. When this occurs, two associated outcomes can be predicted with reasonable certainty: a traveler looking to hotels.nyc for a city hotel would assuredly be provided with a highly skewed view of the city's 250+ hotels (a Hilton perhaps?), and a comprehensive listing of hotels, perhaps creatively mixed to include an AirBnB-like listing, fashioned by a local entrepreneur will never materialize.
With our being awarded the license for JacksonHeights.nyc, we have a big stake in this development: If people come to believe that hotels.nyc and other such civic infrastructure names are in essence offering "biased directories," what hope is there that they will come to trust that JacksonHeights.nyc presents the considered and collaborative intelligence of its neighborhood namesake? To summarize, the city has established a workable model to guide the allocation of the neighborhood names, requiring detailed public interest commitments (PICs) from those interested in the rights to their development. Further, those awarded a neighborhood name must return every three years to demonstrate they've met their PICs. In contrast, the plan for auctioning hundreds, perhaps thousands of these civically important names does not require any PICs from the auction winners. And there's no review process whatsoever, with the names issued virtually forever. #StopTheAuctions If the city sticks with the high-bid auction (a holdover from the Bloomberg Administration), several negatives will result. Our opportunity to establish .nyc as a managed and trusted TLD, a safe port if you will, will be severely diminished. We'll lose the opportunity to provide access to these new resources to capital-starved entities. The local flavor and creativity will suffer. We'll lose an opportunity to bolster our digital self-reliance. We'll remain dependent on distant search engines to filter and present our digital resources. The city should stop the auctions and follow these steps to improve the name allocation process. City Hall should establish a public policy that facilitates the identification and development of civically valuable domain names.
Considering the economic and aggregation benefits that arise with a well-managed and trusted digital resource, it should categorize the 3,000 names: those that can be auctioned immediately, names for negotiated allocation (like the neighborhood names), and names that have PICs and are destined for high-bid auction.[...]
2016-10-12T11:38:00-08:00Co-authored by Najla Dadmand, Incognito Software Systems’ product manager, and Patrick Kinnerk, senior product manager. Cable modem fraud can be a major source of revenue leakage for service providers. A recent study found that communication service providers lost $3 billion worldwide due to cable modem cloning and fraudulent practices. To combat this problem, device provisioning solutions include mechanisms to prevent loss — but what do you really need to protect your bottom line? There are a number of DOCSIS-specific specifications designed to address this problem: TFTP Server Timestamp (TLV 19): This puts a timestamp in the TLV, which the CMTS uses to prevent a modem from downloading old files and incorrectly provisioning the device. IP Address Verification (TLV 20): The IP address is included in the TLV to enable the CMTS to verify that the correct IP is being provisioned. DOCSIS 3.0 Message Integrity Check (MIC): This feature provides additional security for file generation by ensuring the file the CMTS gives to the cable modem is correct. Baseline Privacy Plus (BPI+): When enabled on the DOCSIS network, this causes the CMTS to authenticate the cable modem through an exchange of certificates that includes the MAC address of the modem. The certificate exchange is very difficult to hack. This means that if the cable modem attempts to authenticate with a different MAC address than what is listed in the certificate, the CMTS will detect MAC address spoofing and will not authorize the CM for data services. As a result, BPI+ prevents simple MAC spoofing, which is one of the most common forms of theft of service, although further measures are required to detect whether the actual certificate itself has been cloned. Only provisioning solutions that dynamically generate DOCSIS and PacketCable configuration files on-demand can include features such as IP verification and TFTP server timestamp.
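To see why a shared-secret MIC defeats simple file cloning, here is a minimal sketch in the spirit of the CMTS MIC, assuming a keyed HMAC-MD5 over the configuration file bytes (the real DOCSIS MIC is computed over a defined set of TLVs, so treat this as illustrative only):

```python
import hashlib
import hmac

def cmts_mic(config_bytes: bytes, shared_secret: bytes) -> bytes:
    """Illustrative MIC-style check: HMAC-MD5 keyed with the operator's
    shared secret, computed over the configuration file contents."""
    return hmac.new(shared_secret, config_bytes, hashlib.md5).digest()

config = b"\x03\x01\x01"   # toy TLV: Network Access Control = enabled
good   = cmts_mic(config, b"operator-secret")
forged = cmts_mic(config, b"guessed-secret")

# A cloned or replayed file carries a MIC made without the real secret,
# so the CMTS-side recomputation will not match:
print(good != forged)   # True
```

The design choice is the same one the article describes: the secret lives only with the operator, so anyone who sniffs a configuration file still cannot produce a MIC the CMTS will accept.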
Furthermore, in addition to the above specifications, further security measures should be considered for an extra level of protection against cable modem cloning. Dynamic File Generation It is more secure to generate dynamic files than static files as the unique file names can't be used in file replay attacks. In addition to the unique file name, the IP address assigned to a device must be verified to download the file. Why is this useful? Consider someone sniffing the network to see what is being downloaded (for example, a file called gold.bin). The person may assume this file is a gold-service package and they might attempt to download it. To prevent this from occurring, the file is stored in a short-term cache and the DHCP server assigns an IP to the device, along with the unique file. As a result, if a device with the wrong IP tries to download the file, it will not succeed. Dynamic file generation also offers operators a simple and secure way to change the MIC setting (also known as a Shared Secret). This is because any given CMTS may generate hundreds or even thousands of unique configuration files for devices. Without dynamic file configuration, an operator would need to manually rebuild every unique configuration file to change the Shared Secret, whereas a device provisioning solution that supports dynamic file generation gives operators the ability to make one central change. IP Limiting Limiting the number of IPs that the DHCP service can give to CPEs behind a modem can prevent more basic forms of service theft. For example, a DOCSIS provisioning service that includes IP limiting will restrict a legitimate subscriber from allowing a neighbor or friend who does not live in the household fr[...]
2016-10-12T05:55:01-08:00Why Technical Standards Matter & Make Our Technology Work Technical standards typically are not something we think about: they simply make things work. Credit for this goes to the innovators who ensure that the technical standards needed to make many of our devices work together are robust and effective. Given how central telecommunications and information and communications technologies (ICTs) are to our economies and to how we live, it is crucial that they function as expected. Standards enable interoperability, as well as functionality, reliability, and safety. For example, standards ensure that our smart phones still work when traveling abroad, that we can securely withdraw money from ATMs, and that a grandfather's pacemaker works reliably to keep his heart beating. In sum, technical standards are the unsung heroes of today's technology. Consider this: Wi-Fi technology is based on a family of IEEE 802.11 technical standards. They were developed through open, consensus-based standards processes, with experts from all over the world working together with no top-down governmental direction. Now more than ever, it is important to decide when, how, and where to develop standards for fast-moving technologies. This issue has significant economic and foreign policy implications. The dynamism of the bottom-up, multi-stakeholder approach has become increasingly important as technological changes accelerate. In 2000, the World Trade Organization adopted principles and procedures setting out transparency, openness, impartiality, and consensus as among the core characteristics of international standards. Countries around the world have learned that anticipating and leveraging fast-moving ICT trends is an important economic driver. Nowhere are the stakes higher than in areas like the Internet of Things (IoT). A McKinsey study projects the total IoT market size at $4–11 trillion by 2025. Many countries are hoping to capitalize on that vast potential.
Some governments see such trends as opportunities to jump-start their economic engines or to leapfrog the competition. Others may seek to use the standardization process (along with domestic laws and regulations) to mandate a particular technology. Such uses of technical standards to promote essentially domestic policy goals can undermine the integrity of technical processes and diminish openness and innovation. Despite broad agreement on the potential of IoT and other fast-trending technologies, there is considerable disagreement about when to standardize and with how much government involvement. The volume of diverse applications makes a one-size-fits-all standards development model impossible. Standards that do not have adequate support from technology developers will not find market acceptance. Standards have clear economic and consumer benefits, which is why standards are an important foreign policy priority. Recognizing the global nature of technology and trade, the United States is committed to working with industry and relevant SDOs to facilitate the development of international standards in organizations that are open to all interested stakeholders, consensus-driven, and based on principles of transparency and market needs. This approach has long been a part of our laws and policies, most recently emphasized in the revised Office of Management and Budget Circular A-119. The same philosophy is also reflected in the 2015 United States Standards Strategy, developed by the private sector and supported by the U.S. government. Who better to develop the standards than the very innovators who developed the technol[...]
2016-10-10T18:47:00-08:00Cloud-based interest in email infrastructure trended up this past quarter. Port25, a Message Systems Company, tracks cloud-based interest (CBI) among large-volume senders based on evaluation and purchase requests received, in conjunction with overall site engagement. In Q3, CBI inquiries on Port25's website grew by 34.97% over Q2, to a total of 48.2% of unique evaluation and purchase requests. Essentially half of all visitors who made inquiries at the www.port25.com site were interested in learning more about cloud-based infrastructure for email. Port25's CBI number has been hovering around 38% of unique evaluation requests since Q1 of 2015, so this uptick represents a significant upward spike in interest in cloud solutions. This mirrors general industry trends, which place public cloud services on a trajectory to grow 17% in 2016 over 2015. This growth represents a 22% increase in SaaS and a whopping 43% growth in infrastructure as a service (IaaS). To arrive at our trend numbers, we placed data from unique inquiries into five different volume bins, based on a client's maximum email messages per hour: less than 10K, 10K-50K, 50K-250K, 250K-1M, and 1M+. The 1M+ category includes some very large senders, since Port25 has customers who send more than 1B emails in a given 24-hour period. Every visitor in this data set completed a minimum of two and a maximum of nine events per session. The data has been normalized by placing visitors into event bins 2 through 9. Each event includes an action such as a knowledge base download, form submission, support request, button click, etc. The number of users who opted in to receive cloud-based information rose most among smaller senders. In Q3, 31.34% of visitors who expressed interest in CBI fell into the volume bin of less than 10K per hour, while 25.36% of requests were generated by senders who mail 10K-50K per hour.
Among the larger senders, CBI is much lower: roughly 14.81% across the larger sending-volume categories expressed interest in cloud services. That number is consistent with the highest volume bin of over 1M per hour, which had a CBI of 14.25%. Our data suggests that, while senders in the less-than-10K category appreciate the convenience of cloud-based email infrastructure, a stronger driver may be that small ESPs lack the resources needed to manage the complex tasks involved in hosting their own sending infrastructure in house. Moving to a cloud-based email infrastructure can be a cost-effective way for smaller ESPs to meet their investment objectives, maintain security, and outsource the administrative knowledge needed to manage increasing volumes of email while properly configuring server requirements. The largest service providers (the top 1% of ESPs) have been reluctant to migrate to the cloud due to the complexity of their sending environments. One understandable constraint, mentioned to Port25 by a large ESP in Germany, is that large ESPs don't want to relinquish control of their reputable IP addresses. Reputation aside, certain SLAs among large senders prohibit them from releasing sensitive customer data to a third party. In addition, high-volume senders generally require some degree of custom integration to create a seamless hybrid cloud infrastructure. Concerns about integration may be holding back some larger email senders from using cloud-based services. These headwinds are mitigated by the growing number of small and midsize ESPs that understand the economic and administrative benefits of a cloud solution for email infrastructure. They are driving the huge growth in API-driven cloud email infras[...]
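The five volume bins used in this analysis can be sketched as a simple classifier (the function name is ours, for illustration):

```python
def volume_bin(msgs_per_hour: int) -> str:
    """Map a sender's maximum email messages per hour to the five
    volume bins used in the trend analysis above."""
    if msgs_per_hour < 10_000:
        return "<10K"
    if msgs_per_hour < 50_000:
        return "10K-50K"
    if msgs_per_hour < 250_000:
        return "50K-250K"
    if msgs_per_hour < 1_000_000:
        return "250K-1M"
    return "1M+"

print(volume_bin(9_500))        # <10K
print(volume_bin(2_000_000))    # 1M+
```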
2016-10-07T12:03:00-08:00"It's in the cloud" has become a phrase we don't think twice about, but less than a decade ago, you might have received some awkward looks using this kind of talk in the boardroom. Cloud-based software applications are heralding the fourth industrial revolution that will eventually lead to the industrial internet of things (IIOT). The reasons for this are because: Clear, cloud solutions are more accessible for a mobile workforce. They offer a uniform solution to customers who work in regulated fields. They handle large amounts of data exceptionally well across large geographical expanses. It's quite possible that in the future, you'll need to specify when an application is in fact local, and not cloud-based. The solution is very effective to fit the needs of the growing software-as-a-service model. However, the cloud solution isn't perfect. Making the transition from in-house to cloud-based solutions presents challenges that are more pronounced in some fields than others. Let's take a look at examples of industries that are feeling a little more weighed down than their cloud-bound compatriots: 1. Finance & Banking At first, you might think the folks at the teller stand are in fact using a cloud-based system. The fact is, though, most of the banks and financial institutions in the world still rely on relatively antiquated local systems. Probably the most compelling reason the finance industry has chosen to remain "on the ground" is security. Robbing banks has come a long way since 1886, and there's no reason to expect it will go out of style in the era of technology. People are warming up to the idea of placing their contact information in the cloud, but with cyber attacks increasing in frequency every year, it's not surprising the banking industry is waiting for a comprehensive solution before moving to the cloud. 2. Government This one may not surprise you, either. After all, government industries aren't known for being early adopters. 
The mobile access offered by industrial computers means that cloud solutions are often the answer to managing infrastructure projects, powering municipalities, and communicating within the military, which makes a compelling case for government entities to migrate. Still, techniques for taking existing bodies of "big data" and migrating them to the cloud are not yet perfect. This presents a challenge for government, where any hiccup in systems will have a very noticeable impact on public users. Worse yet, flaws in migrated data could cause all sorts of unplanned-for situations. Anyone who's worked in IT will attest that once a business has its system in place and working, it will oftentimes stretch it to the bleeding edge of its lifespan before making the upgrade to the latest and greatest. For bureaucrats running antiquated government systems, the idea that support costs can consume as much as 70-85% of total costs makes the transition to the cloud a proverbial root canal. 3. Legal/Consultative Is moving to the cloud a loss for the legal industry? Well, no, but the draw isn't there the way it is in places like healthcare and education. Legal teams tend to be small in size, so they don't need a cloud solution to stay consistent, even when mobile. There are also fewer visual assets involved, which can un-sell the cloud idea. But neither of these factors is a deal-breaker. The real reason why cloud computing doesn't appeal to these types of businesses is that they can afford and make use of private cloud solutions. Similar in execution to public ones, but with increased securit[...]
2016-10-06T11:58:00-08:00In just the last two weeks, there were three major DNS outages involving Google, Microsoft Azure, and Fonality. But only one of these companies made even bigger waves with the way it handled its blunder. Fonality, which sells VoIP services and business phone systems, offered a very rare and transparent analysis of its outage. In a detailed statement from Chief Marketing Officer Jeff Valentine, readers were given crucial insight on how to prevent the same mistakes from happening to other companies. The four-hour outage crippled nearly every aspect of the company's online business, from its website and status site to its voice services and even video chat. All thanks to a simple DNS error. After hours of investigating, the problem was traced back to a single mistake by a network engineer who had accidentally connected a test network to the company's production network. This mistake shut down their DNS system, which in turn knocked all of their services and websites offline. "DNS is one of those things that gets overlooked… You make your voice servers super-redundant, but you take it for granted that DNS will always work." Lessons Learned The sad reality is that oftentimes things have to go horribly wrong for one person in order for other people to learn how to prevent those issues from happening to them. Fonality's commitment to transparency will hopefully save many businesses from suffering the same fate. Here are just a few of the key lessons Fonality had to learn the hard way (but that are easy fixes if you are proactive). Lesson #1: Backup Your DNS While there is no way to be 100% assured that your DNS will always be available, you can never have too many backup systems in place. Secondary DNS is easy to set up and requires little to no maintenance.
Simply sign up for services with another DNS provider (one that presumably offers secondary DNS services) and configure your records to be delegated to your secondary provider in the event your primary provider goes down. This is extremely useful for clients who host their own DNS services and are reluctant to move over to a cloud-based DNS provider. This way you can keep your primary systems where you want, but just in case, you have a backup that is able to handle your traffic load. "DNS is too important. We cannot let it go down in the future. We already had a backup, but that didn't help in this case. So we need a backup for the backup. We are putting in place an offsite system with DNS and name servers at a different location." —Valentine Lesson #2: Keep Your Status Site Separate It's common sense: you don't use your corporate website to host your status page. But what most people forget is that you should also keep your DNS records with separate providers. Fonality learned the hard way that even though they were hosting their status page on an entirely different site, they were using a subdomain, "trust.fonality.com", whose records were dependent on their DNS system. Even if your status site is on a separate subdomain, web host, etc., it will still be unavailable if your DNS goes down. That's because DNS is the first point of contact between a user and your website, and if it is unavailable, so is everything else. This is a simple mistake even larger companies still make, as seen with Fonality. Lesson #3: Connect with Clients The first thing that usually happens when there is an outage is that your social media effectively "blows up". It is a true test of patience to go through and talk to each client individually and address the complai[...]
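Lesson #2 boils down to a set-intersection check: the status page's nameservers must share nothing with the main zone's. A minimal sketch, with illustrative nameserver names (in practice you would feed it the results of live NS lookups):

```python
def dns_fully_separate(primary_ns: set, status_ns: set) -> bool:
    """A status page only survives a DNS outage of the main zone if its
    nameservers share nothing with the main zone's nameservers."""
    return not (primary_ns & status_ns)

# Illustrative nameserver sets, not real hosts:
main_zone   = {"ns1.provider-a.example", "ns2.provider-a.example"}
status_site = {"ns1.provider-b.example", "ns2.provider-b.example"}

print(dns_fully_separate(main_zone, status_site))               # True
# A status subdomain delegated back to the same provider fails the check,
# which is exactly the trap Fonality fell into:
print(dns_fully_separate(main_zone, {"ns1.provider-a.example"}))  # False
```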