Subscribe: CircleID: Featured Blogs
http://www.circleid.com/rss/rss_comm/
Language: English

CircleID: Featured Blogs



Latest blog postings on CircleID



Updated: 2017-02-28T03:28:00-08:00

 



Two Approaches to Routers in Space: SpaceX and OneWeb

2017-02-27T19:28:00-08:00

"OneWeb, emboldened by the oversubscribed $1.2 billion Softbank-led investment gained in December, is on the verge of adding another 2,000 satellites to its previously proposed constellation of several hundred satellites."SpaceNews / Feb 24, 2017Two companies hope to revolutionize the Internet by providing global connectivity using constellations of low-earth orbit satellites — Elon Musk's SpaceX and Greg Wyler's OneWeb. It seems that SpaceX gets a lot more publicity than OneWeb, but both are formidable. They have the same goal, but their organizations are dissimilar. SpaceX is integrated — building the rockets, satellites and ground stations themselves — while OneWeb has a number of collaborators and investors, including Bharti Enterprises, Coca-Cola, Intelsat, Hughes, Totalplay Telecommunications, Virgin Galactic and Softbank. One strategic investor, Softbank, invested $1.2 billion last December and was given a board seat. OneWeb says they have now raised enough capital to finance the remainder of the project with loans. OneWeb had planned to build 900 satellites and initially launch 648, but Wyler says Softbank has encouraged them to be more aggressive and he is considering adding an additional 1,972 satellites. Doing so would dramatically increase the total capacity of the system. Regardless, their goal is to connect every school by 2022 and "fully bridge the digital divide" by 2027. Critics of the SpaceX and OneWeb projects argue that they will not be able to compete with terrestrial wireless and they also run the risk of causing "space junk" collisions in low-earth orbit. Others counter that it will be decades before ubiquitous, high-speed wireless connectivity reaches the majority of the people on Earth and the odds of such collisions are very small at such high altitudes. (Teledesic, a similar project, failed in the 1990s, but launch and communication technology have improved dramatically since that time and Internet connectivity has become much more valuable). What if one of these companies succeeds and the other fails? That would leave the winner with a monopoly in much of the rural and developing world. It is even conceivable that they could compete effectively with terrestrial ISPs — in access or backbone networks. Would global ISPs require unique regulation and, if so, what should it be and who has the power to do it? I'm not smart enough to answer the critics who raise difficult questions, but I hope SpaceX and OneWeb both succeed — competing global ISPs would be of great value to mankind. Written by Larry Press, Professor of Information Systems at California State UniversityFollow CircleID on TwitterMore under: Access Providers [...]



Blockchain of Things Goes Global at ITU-T Dubai Meeting

2017-02-27T11:17:00-08:00

Today, one of the world's largest Internet companies, Alibaba, together with a compelling array of other providers, vendors, and government bodies, for the first time called for a visionary multilateral technical and operational "framework for a Blockchain of Things." The exceptionally thorough and comprehensive 23-page document, SG20-C.008, was submitted into the upcoming ITU-T SG20 Internet of Things (IoT) Study Group meeting at Dubai, 13–23 March — the first group gathering in the organization's new 2017-2020 study period. The action was a welcome step of strategic leadership at the global multilateral level to help make a potentially far-reaching new platform for trusted, distributed identity management a reality and apply it to the Internet of Things ecosystem. As the document's history section notes, scattered related developments have occurred in several other venues and industries. However, the time to scale up the worldwide collaboration was at hand. The Dubai venue was also significant given the UAE's recent actions to establish itself as a global leader in the sector — establishing the Global Blockchain Council and related international conferences. The document begins by providing background information: why IoT needs blockchain, what blockchain is, the challenges, the benefits, a gap analysis for blockchain-related standards, and the valuable role for ITU-T. It also underscores that "blockchain is not bitcoin." It then proposes a new work item — "Framework of blockchain of things as decentralized service platform." The proposal has a well-structured, limited scope and outline that includes common characteristics and requirements, a general framework as a decentralized service platform, and finally, an IoT reference model that notably addresses security concerns. Two extensive appendices review other blockchain-related platforms and use cases. Additional supporters are being solicited for the ground-breaking work. This well-done contribution — with Alibaba and the supporting parties leading the activity and combining collaboration with other venues — should significantly help "in bringing about great promises across a wide range of business applications in many fields, such as finance, banking, healthcare, government, manufacturing, insurance, retail, legal, media and entertainment, supply chain and logistics, finance and accounting, etc." The benefits are compelling. Blockchain offers new ways for [Internet of Things / Smart Cities & Communities] IoT/SC&C data to automate business processes among partners without setting up a complex and expensive centralized IT infrastructure. Blockchain's data protection fosters stronger working relationships with partners and greater efficiency as partners take advantage of the information provided. Bringing IoT/SC&C and blockchain together enables IoT devices to participate in blockchain transactions. Specifically, IoT devices can send data to public/consortium/private blockchain ledgers for inclusion in shared transactions with distributed records, maintained by consensus, and cryptographically hashed. The distributed replication in blockchain allows business partners to access and supply IoT/SC&C data without the need for central control and management. Additionally, the distributed ledger in a blockchain makes it easier to create cost-efficient business networks where virtually anything of value can be tracked and traded, without requiring a central point of control. 
Blockchain and IoT/SC&C together become a potential game changer by opening the door to new styles of digital interaction, enabling IoT devices to participate in blockchain transactions, and creating opportunities to reduce the cost and complexity of operating and sustaining a business. There is a small IPR bump in the road. It appears that a small New York company, RightClick LLC DBA Blockchain of Things LLC, asserted it began using the word mark "blockchain [...]
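Setting the trademark question aside, here is a minimal Python sketch to make the proposal's core mechanism concrete: readings from an IoT device are hashed and chained into an append-only ledger so that any later tampering is detectable. The block fields, the toy in-memory chain and the smart-meter payload are illustrative assumptions only; they are not the framework specified in SG20-C.008, which also involves signatures, consensus and distributed replication across partners.

```python
import hashlib
import json
import time

def sha256(payload: dict) -> str:
    """Hash a dict deterministically (sorted keys) with SHA-256."""
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

class ToyLedger:
    """A minimal append-only hash chain. A real 'Blockchain of Things' platform
    would add signatures, consensus, and replication across business partners."""

    def __init__(self):
        self.chain = [{"index": 0, "prev_hash": "0" * 64,
                       "data": "genesis", "timestamp": 0}]

    def append(self, data: dict) -> dict:
        prev = self.chain[-1]
        block = {
            "index": prev["index"] + 1,
            "prev_hash": sha256(prev),   # links this block to its predecessor
            "data": data,                # e.g. one IoT sensor reading
            "timestamp": time.time(),
        }
        self.chain.append(block)
        return block

    def verify(self) -> bool:
        """Recompute the hash links; any tampered reading breaks the chain."""
        return all(self.chain[i]["prev_hash"] == sha256(self.chain[i - 1])
                   for i in range(1, len(self.chain)))

# Hypothetical smart-meter readings appended to the shared ledger.
ledger = ToyLedger()
ledger.append({"device": "meter-42", "kwh": 3.7})
ledger.append({"device": "meter-42", "kwh": 4.1})
print(ledger.verify())   # True until any stored reading is modified
```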



Diversity of View or Unacceptable Inconsistency in the Application of UDRP Law

2017-02-27T07:57:00-08:00

The general run of Uniform Domain Name Dispute Resolution Policy (UDRP) decisions is unremarkable. At their least, they are primarily instructive in establishing the metes and bounds of lawful registration of domain names. A few decisions stand out for their acuity of reasoning and a few others for their lack of it. The latest candidate of the latter class is NSK LTD. v. Li shuo, FA1701001712449 (Forum February 16, 2017) () (I'll call this the "Li shuo dispute" to distinguish it from other NSK cases discussed further below). It is an example of inconsistency in the application of law. In this case, the Panel denied the complaint. Inconsistency this time affects the trademark owner; other times it has affected the domain name registrant. Inconsistency in the application of law is always a problem because it affects the predictability of result. With judicial decisions, there's always higher authority. In addressing the issue in UDRP proceedings, there are three forms of inconsistency: (1) diversity of view; (2) different assessments of facts and applicable law; and (3) "manifest disregard of the law." Of the first, the World Intellectual Property Organization (wearing its provider hat) acknowledges diversity of view as a feature of the process: the Overview 2.0 preliminary discussion reads "[o]n most of these issues, consensus or clear majority views have developed. Certain other questions attract a diversity of views." While there is not a strict equation between "diversity of view" and the two other forms of inconsistency, all three affect predictability; the last is the most pernicious, and it is where the Li shuo dispute belongs. Not surprisingly for a regime that has no review mechanism, panelists must find their own way, which mostly means following consensus or precedent, though some go their own way. The WIPO Overview of WIPO Panel Views on Selected UDRP Questions, Second Edition ("WIPO Overview 2.0") states that "[t]he UDRP does not operate on a strict doctrine of precedent" but then hedges by admitting that "panels [nevertheless] consider it desirable that their decisions are consistent with prior panel decisions dealing with similar fact situations." Paragraph 4.1. Whether we call it consensus or precedent, the agreed-upon aim is predictability, which is achieved through consistency. "Consistency" (paragraph 4.1 continues) "ensures that the UDRP system operates in a fair, effective and predictable manner for all parties, while responding to the continuing evolution of the domain name system." Desirable though consistency is, however, panelists are not always in agreement on contested issues; this is the diversity of view construct. For example, the Overview records two views in assessing fair use. (Overview 1.0, superseded by 2.0, also flagged "majority" and "minority" views, no longer identified in 2.0.) But these "diversities of view" don't generally express themselves as differences between WIPO and Forum appointees. In other words, there isn't a WIPO view and a Forum view. Who the provider appoints to arbitrate the dispute could, however, make the difference to its outcome. Doug Isenberg addresses this issue in his essay "When Two Trademarks Aren't Confusingly Similar to One Trademark," CircleID (February 23, 2017). The domain name in the Li shuo dispute combines two marks owned by unrelated entities. 
In denying the complaint, the Panel explained that Complainant lacked standing and gave as its reason that Complainant "alleges no nexus between it and the owner of the [SKF] mark. As such, Complainant essentially has standing to bring this claim regarding the NSK mark but not the SKF mark." (Note the emphasis on "nexus" which I'll explain briefly in conclusion). Mr. Isenberg cautions brand owners to brush up their knowledge of the UDRP and be prepared for the contingency of panels going their [...]



5G (and Telecom) vs. The Internet

2017-02-25T15:52:00-08:00

5G sounds like the successor to 4G cellular telephony, and indeed that is the intent. While the progression from 2G to 3G, to 4G and now 5G seems simple, the story is more nuanced. At CES last month I had a chance to learn more about 5G (not to be confused with 5GHz WiFi) as well as another standard, ATSC 3.0, which is supposed to be the next standard for broadcast TV. The contrast between the approach taken with these standards and the way the Internet works offers a pragmatic framework for a deeper understanding of engineering, economics and more. For those who are not technical, 5G sounds like the successor to 4G, which is the current, 4th generation, cellular phone system. And indeed, that is the way it is marketed. Similarly, ATSC 3 is presented as the next stage of television. One hint that something is wrong in 5G-land came when I was told that 5G was necessary for IoT. This is a strange claim considering how much we are already doing with connected (IoT or Internet of Things) devices. I'm reminded of past efforts such as IMS (IP Multimedia Subsystem) from the early 2000s, which were deemed necessary in order to support multimedia on the Internet even though voice and video were working fine. Perhaps the IMS advocates had trouble believing multimedia was doing just fine because the Internet doesn't provide the performance guarantees once deemed necessary for speech. Voice over IP (VoIP) works as a byproduct of the capacity created for the web. The innovators of VoIP took advantage of that opportunity rather than depending on guarantees from network engineers. 5G advocates claim that very fast response times (on the order of a few milliseconds) are necessary for autonomous vehicles. Yet the very term autonomous should hint that something is wrong with that notion. I was at the Ford booth, for example, looking at their effort and confirmed that the computing is all local. After all, an autonomous vehicle has to operate even when there is no high-performance connection, or any connection at all. If the car can function without connectivity, then 5G isn't a requirement but rather an optional enhancement. That is something today's Internet already does very well. The problem is not with any particular technical detail but rather the conflict between the tradition of network providers trying to predetermine requirements and the idea of creating opportunity for what we can't anticipate. This conflict isn't obvious because there is a tendency to presuppose that services like voice only work because they are built into the network. It is harder to accept the idea that VoIP works well because it is not built into the network and thus not limited by the network operators. This is why we can casually do video over the Internet — something that was never economical over the traditional phone network. It is even more confusing because we can add these capabilities at no cost beyond the generic connectivity, using software anyone can write without having to make deals with providers. The idea that voice works because of, or despite, the fact that the network operators are not helping is counter-intuitive. It also creates a need to rethink business models that presume the legacy model's simple chain of value creation. At the very least we should learn from biology and design systems to have local "intelligence". I put the word intelligence in quotes because this intelligence is not necessarily cognitive but more akin to structures that have co-evolved. 
Our eyes are a great example — they preprocess our visual information and send hints like line detection. They do not act like cameras sending raw video streams to a central processing system. Local processing is also necessary so systems can act locally. That's just good engineering. So is the ability of the brain to work with the eye to resolve ambiguity, as when we take a second look at something that didn't make sense at first glance. The ATSC 3.0 session at ICCE [...]



There Is No Cuban Home Internet Plan - And That's Good News

2017-02-25T10:13:00-08:00

I've followed Cuba's home-connectivity "plan" from the time it was leaked in 2015 until the recent Havana home Internet trial. I thought the plan was a bad idea when it was leaked — it calls for installation of obsolete DSL (digital subscriber line) technology — and now that the Havana trial is complete, I question whether the plan was real. ETECSA denied the validity of the leaked presentation at the time, and their definition of "broadband" was "at least 256 kb/s." Furthermore, the goal was stated as "Alcanzar para el 2020 que no menos del 50% de los hogares disponga de acceso de Banda Ancha a Internet." My Spanish is not very good, so I am not sure whether the plan was for connectivity in 50% of homes or connectivity being available to 50% of homes. Either way, slow DSL will be a joke in 2020. But, the free home-connectivity trial in Havana used the DSL technology described in the leaked plan — might it be for real? I don't think so. At the end of the free trial, a friend told me that around 700 of the 2,000 eligible Havana homes agreed to pay to continue the service. He also said that around 12 homes had been connected in Bayamo and the same was going to happen in Santa Clara and Las Tunas. If this home connectivity roll-out has been planned since 2015, why is it going so slowly? Why aren't other parts of Havana open? Why aren't they doing large-scale trials in Bayamo, Santa Clara, and Las Tunas? The quality of a DSL connection is a function of the length and condition of the telephone wire running between a home and the central office serving it. If they had really planned to bring DSL to many Cuban homes, they would have understood the necessity of investing heavily in wiring as well as central office equipment. My guess is that the Havana trial and the installations in Bayamo, Santa Clara and Las Tunas are not part of a national home-connectivity plan, but ends in themselves — interim measures aimed at bringing slow DSL connectivity to small businesses and self-employed people in the most affluent parts of selected cities. That makes more sense to me than a plan to spend a lot of money upgrading copper telephone wires and central office equipment in order to be able to offer obsolete connectivity to 50% of Cuban homes by 2020. (I've always hoped Cuba would leapfrog today's technology, opting for that of the next generation.) If the DSL "plan" was never a plan, what might we expect? (The following is highly speculative.) My hope is that Cuba regards efforts like home DSL, WiFi hotspots, Street Nets and El Paquete as temporary stopgap measures while waiting for next-generation technology. If that is the case, we might see progress when Raúl Castro steps down next year. Miguel Díaz-Canel Bermúdez, who is expected by many to succeed Castro, acknowledged the inevitability of the Internet in a 2013 talk, saying "today, news from all sides, good and bad, manipulated and true, or half-true, circulates on networks, reaches people — people hear it. The worst thing, then, is silence." (I think Donald Trump may have been in the audience :-). In a later speech Díaz-Canel recognized that the Internet is a social and economic necessity, and therefore the government has the responsibility of providing affordable connectivity to every citizen, but there is a caveat — the government must be vigilant in assuring that citizens use the Internet legally. Here is a clip from that speech. 
In 1997, the Cuban government decided that the political risk posed by the Internet outweighed its potential benefit and decided to suppress it. At the same time, China opted for a ubiquitous, modern Internet — understanding they could use it as a tool for propaganda and surveillance. It sounds to me like Díaz-Canel has endorsed the Chinese model and will push for next-generation technology with propaganda and surveillance. (Again, my Spanish is not so great, and I may hav[...]



When Two Trademarks Aren't Confusingly Similar to One Trademark

2017-02-23T15:26:01-08:00

As I've written before, domain name disputes involving multiple trademarks sometimes raise interesting issues, including whether a panel can order a domain name transferred to one entity without consent of the other. While panels typically have found ways to resolve this issue, one particularly troubling fact pattern arises when a panel denies a complaint simply because a disputed domain name contains trademarks owned by two different entities. The situation presents itself when a panel considers whether a domain name containing two trademarks is "identical or confusingly similar" to a single trademark — that is, the trademark owned by the complainant — as required by the first factor of the Uniform Domain Name Dispute Resolution Policy (UDRP). In one odd case, a UDRP panel confronted the issue when a complaint was filed by the owner of the trademark NSK, but the disputed domain name also contained the trademark SKF — "which is a third-party brand of bearing products which competes with Complainant." Therefore, the panel was faced with the question of whether the domain name was confusingly similar to the complainant's NSK trademark. Many UDRP panels apply this first UDRP factor liberally. Indeed, the WIPO Overview of WIPO Panel Views on Selected UDRP Questions, Second Edition, says, "The first element of the UDRP serves essentially as a standing requirement." And, many UDRP panels have adopted the position that "the fact that a domain name wholly incorporates a complainant's registered mark is sufficient to establish identity or confusing similarity for purposes of the Policy." Still, the panel in the case saw things differently, writing: The Panel finds that Complainant has not met its burden regarding confusing similarity. Complainant has adequately alleged its interests in and to the NSK mark; however, Complainant has no rights or interests in the SKF mark. Complainant alleges no nexus between it and the owner of the SKF mark. As such, Complainant essentially has standing to bring this claim regarding the NSK mark but not the SKF mark. As a result, the panel denied the complaint, allowing the respondent to retain registration of the disputed domain name even though it contained the complainant's trademark. Amazingly, two days after the decision in the case had been published, the Forum published another UDRP decision in a similar case, also filed by the owner of the trademark NSK and also involving a domain name that contained the SKF trademark. And the panel in that case reached a different conclusion! In that case, the panel found the disputed domain name confusingly similar to the NSK trademark, writing that "the Panel agrees that the additions Respondent has made to the NSK mark are insufficient to overcome Policy ¶ 4(a)(i)." The panel's denial in the case also contradicts an earlier decision with similar facts, involving the domain name . There, the panel simply wrote: "Complainant argues that the domain name is confusingly similar to the NSK mark for the following reasons: 'skf' refers to a third-party brand of bearing products which competes with Complainant, and 'bearings' is a term descriptive of Complainant's business. The Panel agrees." The panel's denial in the case is difficult to reconcile with the other decisions, and it seems to be quite unusual. Still, it is not the only time a panel has taken this perspective. 
In a case involving the domain name , a panel denied the complaint because it was filed only by Nike, not by Google, and "Complainant has failed to establish rights in or to the GOOGLE mark per" the first requirement of the UDRP. Interestingly, the case was refiled soon after the denial, with both Nike and Google as complainants. The second panel ordere[...]



At the NCPH Intersessional, Compliance Concerns Take Centre Stage

2017-02-23T11:35:00-08:00

The non-contracted parties of the ICANN community met in Reykjavík last week for their annual intersessional meeting, where at the top of the agenda were calls for more transparency, operational consistency, and procedural fairness in how ICANN ensures contractual compliance. ICANN, as a quasi-private cooperative, derives its legitimacy from its ability to enforce its contracts with domain name registries and registrars. If it fails to implement the policies set by the community and to enforce its agreements with the contracted parties, the very legitimacy and credibility of the multistakeholder governance model would be threatened, and the ability of ICANN to ensure the stability and security of the Domain Name System could be questioned. The Commercial and Non-Commercial Stakeholder Groups are not unified in their views on how ICANN should manage contractual compliance, but both largely agree that ICANN should be more open with the community regarding its internal operating procedures and the decisions that are made. Some members of the Commercial Stakeholder Group desire an Internet policeperson, envisioning ICANN's compliance department as taking an active role in content control, disabling access to an entire website on the mere accusation of copyright infringement. ICANN has previously said it is not a global regulator of Internet content, but there is a sentiment in some circles that through shadow regulation, well-resourced and politically-connected companies should be able to determine which domain names can resolve and which cannot. The Non-Commercial Stakeholder Group believes that the Domain Name System works because Internet users trust it to redirect them to their intended destination. Likewise, if a registrant registers a domain name in good faith, they should expect to be able to use this Internet resource to disseminate the legal speech and expression of their choice. Domain names enable access to knowledge and opinions that sometimes challenge the status quo, but ultimately enable the fundamental human right to dissent and to communicate speech. If a website is hosting illegal content, it is the courts that have the authority to make such a determination and to impose appropriate remedies — not private enterprises that have struck deals with registries, and certainly not ICANN. The problem is, there is mission creep, and ICANN is indirectly regulating content by repossessing domain names from registrants sometimes without any investigation of fact. During the intersessional, the Non-Commercial Stakeholders Group probed the compliance department to outline how complaints can be filed, how they are reviewed, and to describe how the interests of registrants are represented during the investigation of complaints. The answers were very revealing: anyone can file a complaint with ICANN, even anonymously; there are no public procedures on the complaint process; and registrants can neither know that a complaint has been filed against them, nor can they feed into the decision-making process, nor challenge the decision. This is problematic, not least because ICANN staff admitted last November in Hyderabad that there has been abuse of the compliance department's complaints form, with some entities having made bad faith attempts to have domain names taken down. This is not a theoretical issue. In 2015, ICANN's compliance department caused financial harm to a domain name registrant because of a minor, perceived inaccuracy in their domain name's WHOIS records. 
In this instance, the registrant had a mailing address in Virginia and a phone number with a Tennessee area code. While both details were valid, and the registrant was contactable, a "violent criminal" filed a complaint with ICANN alleging that the details were inaccurate. The complaint was accepted by ICANN and passed along to the domain name registrar. The registrar, [...]



Ask Not What ICANN Can Do for You, But What You Can Do for ICANN

2017-02-23T05:12:00-08:00

In recent weeks, you may have seen several articles asking that "ICANN", the Internet Corporation for Assigned Names and Numbers, move more expeditiously to open up the next application window for new gTLDs. As one commenter wrote, "Ask a Board member or ICANN staff when they expect the next application window to open, and they will inevitably suggest 2020 — another three years away. Any reasonable person would agree that eight years for a second application window is anything but expeditious, and some might say potentially anti-competitive." Rather than pointing the finger, maybe it's time to turn the question on its head and ask, "what can we do to help move things forward?" As one of the co-chairs of the ICANN Policy Development Process (PDP) on Subsequent Procedures for the introduction of new gTLDs, I certainly understand the requests to move more quickly. That said, we need to stop asking others, like the ICANN Board, to move in a top-down fashion to start a new process when we are not actively participating in the bottom-up, multi-stakeholder ICANN process that will enable that new application window to occur. We, the community, actually control our own destiny in this regard. Yes, it has been a number of years since the last round closed. But we, as a community, have all known the milestones that needed to be achieved before the ICANN Board could approve the next application window. Namely, they include completion of the Competition, Consumer Choice and Trust Review (CCT-RT), the ICANN staff implementation review, and the Policy Development Process on Subsequent Procedures. To date, I would argue that ICANN staff are the only ones that have completed their deliverable, the implementation review. The CCT-RT is several months behind schedule, and the PDP on Subsequent Procedures is making good progress. However, like many PDPs, there is certainly a lack of active participation from those that would like to see the process move more quickly. So rather than complaining to the ICANN Board about the speed of the process, please join the PDP on Subsequent Procedures and actively participate. Submit proposals rather than just complaining about things you didn't like. Respond to questions and surveys when they are released. NOTE: Shortly, a Community Comment period will open up with a number of questions on improvements that can be made. This is exactly the kind of opportunity that, with plenty of community engagement, could help move things forward, so please respond in a timely manner. In short, please help us help you. If you want things to move more quickly, get involved. Written by Jeff Neuman, Senior Vice President, Valideus USA. More under: ICANN, Top-Level Domains [...]



Is Nokia's New Framework Announcement Bringing Us Closer to Truly Smart Cities?

2017-02-23T05:10:00-08:00

Nokia has developed a framework that will enable governments to implement smart cities. The framework is designed to help regions design and obtain services for smart city concepts. However, Nokia states that more emphasis needs to be put on developing an overarching strategy rather than small projects. The Australian government announced that it is interested in building smart cities, but there are still major gaps in figuring out how to do so. Nokia Oceania CTO Warren Lemmens said, in an interview for ZDNet, that cities are currently not equipped for the digital future and are being left to solve the problem by themselves. To address the issue, Nokia is suggesting an approach on a government level, where states and territories will work in conjunction with an overarching federal government program, allowing cities to focus on their specific needs.

The Concept of a Smart City
What Nokia has in mind is quite amazing — its six-point framework uses a horizontal approach. This means that Nokia is developing a horizontal Internet of Things (IoT) platform. In short, IoT will be used to connect every device together. The platform, called IMPACT (Intelligent Management Platform for all Connected Things), will manage every feature of machine-to-machine (M2M) connections for any protocol, any device and across any platform. Nokia's framework will institute one single City Digital Platform for all cities. This platform will help devise a new federal program for innovation that focuses on data. It's the beginning of a more collaborative approach between government, businesses, academia and startups that will be the cornerstone of smart cities. This will assist public-private partnerships for the improvement of smart cities, eliminating the current tendency to separate device, data and application environments — and ensuring the personalization of each city under the program. What Nokia is trying to do is gather everyone of importance to work together and contribute to turning the smart city concept into a reality, sooner rather than later.

Smart Cities in the Future
Many of you are probably wondering what a smart city will look like in the future. Well, if you're thinking about flying cars and teleportation fields, you're going to be disappointed. A smart city will be a hub of information, IoT devices and all kinds of algorithms and scans that will make the city livelier. This won't involve tearing down the old buildings and building them again from scratch. Instead, it will focus on improved urban planning like vertical gardens, new buildings with implemented smart technology and various other gadgets. The point is that a smart city will utilize the digital economy, and IoT will be the main distributor of data and information. According to Nokia, the discussion about smart cities should focus on data. Silos need to be broken down in order to leverage data, so it can be collected and shared between governments and business. This will improve personalization because we'll find out how businesses and citizens use the city. Nokia also said that every city needs a "control center" to collect and utilize this data to drive this personalization. Lemmens said that an operations environment — which consists of three separate layers: application, service, and infrastructure operations, with security "straddling all the operation layers" — should be used in conjunction with this control center.

What is Next?
Basically, the smart city will focus on personalization through IoT devices. 
This is quite exciting because of the possibilities that come with this concept. It is quite possible that cities will prosper based on personalization and the digital economy, and startup businesses will flourish with this new trend. It is safe to assume that startup business will ha[...]
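As a rough, hypothetical illustration of the "horizontal" idea described above (one shared platform normalizing readings from many device types and city domains instead of per-vendor silos), here is a small Python sketch. The city, device names, fields and the in-memory "control center" are invented for illustration and do not model Nokia's IMPACT platform.

```python
from collections import defaultdict
from dataclasses import dataclass
from typing import Any

@dataclass
class CityReading:
    """One protocol-agnostic record: whatever the device speaks (M2M, LoRa,
    HTTP...), it is normalized to this shape before entering the platform."""
    city: str
    domain: str        # e.g. "parking", "lighting", "air-quality"
    device_id: str
    value: Any
    unit: str

class ControlCenter:
    """Toy stand-in for a city 'control center' that collects and shares data."""

    def __init__(self):
        self._by_domain = defaultdict(list)

    def ingest(self, reading: CityReading) -> None:
        self._by_domain[reading.domain].append(reading)

    def domain_snapshot(self, domain: str) -> list:
        # Government, businesses and startups query one shared view, not silos.
        return list(self._by_domain[domain])

center = ControlCenter()
center.ingest(CityReading("Adelaide", "parking", "bay-17", "occupied", "state"))
center.ingest(CityReading("Adelaide", "air-quality", "aq-03", 12.4, "ug/m3 PM2.5"))
print(len(center.domain_snapshot("parking")))   # 1
```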



Could a Vanity URL Strategy with Your .Brand Be the Key to Supercharging Your SEO?

2017-02-22T21:30:00-08:00

I feel incredibly lucky to work every day with some of the biggest, most recognised and most innovative organisations from around the world on developing strategies for their .brand TLDs. In this capacity, I also have the privilege to meet some of the most knowledgeable and forward-thinking experts in branding, digital marketing, web development and technology, to name a few. One such person is the brilliantly talented Matt Dorville, Content and SEO Strategist for Major League Baseball Advanced Media. Matt is widely regarded as a leading global SEO expert and develops SEO strategy for MLB.com, NHL.com and all 61 clubs within them, as well as advising on SEO for Major League Baseball Advanced Media partners both domestically and internationally. Matt has some fantastic insights into recent changes in Google's treatment of vanity URLs, and in particular how using vanity domains within your .brand can supercharge your existing site's SEO ranking.

Vanity URLs: no longer a risk
Matt explains that many SEO managers have shifted their view on vanity URLs as an SEO strategy due to changes in Google's search algorithm. "In the past, while vanity URLs were frequently seen as a viable strategy for generating quality links and building one's website, there was often conflict with this decision. Many SEO managers tended to request that their sites shy away from vanity URLs, as each vanity URL redirect resulted in a loss of around 15 percent of the link strength," writes Matt. "However, recent changes to Google combined with the continued global emergence of .brand TLD usage opens up a new strategy that shows great potential. "On July 26th 2016, Google Webmaster Trends Analyst Gary Illyes announced that 30x redirects no longer lose PageRank, which was a significant shift in the underlying algorithm. You can hypothesize that Google did this for a great many reasons, no doubt their ongoing campaign to encourage websites to migrate to https being a large factor in this."

It's all about amplification
Matt explains that since Google changed its algorithm to no longer penalise 30x redirects in search, vanity URLs redirecting to deeper content within a site could provide a fantastic opportunity for further amplification of links and therefore an increase in SEO strength. Using vanity URLs with your .brand allows: the marketing team to get the URL with the product name they want, the development team to avoid a lot of work getting the consumers there, and SEO to gain strength on the landing page for the campaign as well as use the link building to strengthen the entire site. "The recent change in 301 redirection is significant news and a vanity URL within a .brand domain should provide excellent benefits in broadcasting on social as well as generating links to the domain through amplification. The simplicity of the vanity domain, most times pairing up the product, action, or campaign with .brand, should be able to tie in with marketing to increase SEO strength on both the landing page and the entire site, and generate traffic through organic and social channels."

Why is this relevant for .brands?
According to Matt, .brand TLDs have even greater potential to capitalise on vanity URL strategies for five main reasons:
Direct navigation for customers, with simpler URLs that take visitors directly to deep content within a website
Flexibility to change where a vanity URL directs as your business or the market changes
Global benefits, with the ability to geo-locate visitors and send them to the most relevant content
No more availability issues, as you own the entire namespace and won't spend a fortune acquiring domains
Focus on products and campaigns, with stronger calls-to-action that directly tie keywords to your brand.
"For .brand owners [...]
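To ground the mechanics discussed above, here is a minimal sketch of a vanity .brand host answering with 301 redirects to deep content, using Flask purely as an illustrative stand-in. The hostnames, paths and target URLs are hypothetical, and in practice such redirects are usually configured at the web server or CDN layer rather than in application code.

```python
# Illustrative only (pip install flask); a production setup would normally
# configure these 301s at the web server or CDN rather than in an app.
from flask import Flask, redirect

app = Flask(__name__)

# Hypothetical vanity paths on a .brand host, e.g. http://worldseries.example,
# each pointing at deep campaign content on the main site.
VANITY_TARGETS = {
    "": "https://www.example.com/sports/events/world-series",
    "tickets": "https://www.example.com/sports/events/world-series/tickets",
}

@app.route("/")
@app.route("/<path:slug>")
def vanity(slug: str = ""):
    target = VANITY_TARGETS.get(slug, "https://www.example.com/")
    # 301 (permanent) redirects no longer shed PageRank per Google's 2016 change.
    return redirect(target, code=301)

if __name__ == "__main__":
    app.run(port=8080)
```

The design point is simply that the short, memorable URL lives on the .brand name while the canonical content (and its accumulated link strength) stays on the main site.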



Reaction: Do We Really Need a New Internet?

2017-02-20T16:39:00-08:00

The other day several of us were gathered in a conference room on the 17th floor of the LinkedIn building in San Francisco, looking out of the windows as we discussed some various technical matters. All around us, there were new buildings under construction, with that tall towering crane anchored to the building in several places. We wondered how that crane was built, and compared how precise the building process seemed to be to the complete mess building a network seems to be. And then, this week, I ran across a couple of articles (Feb 14 & Feb 15) arguing that we need a new Internet. For instance, from the Feb 14 post: What we really have today is a Prototype Internet. It has shown us what is possible when we have a cheap and ubiquitous digital infrastructure. Everyone who uses it has had joyous moments when they have spoken to family far away, found a hot new lover, discovered their perfect house, or booked a wonderful holiday somewhere exotic. For this, we should be grateful and have no regrets. Yet we have not only learned about the possibilities, but also about the problems. The Prototype Internet is not fit for purpose for the safety-critical and socially sensitive types of uses we foresee in the future. It simply wasn't designed with healthcare, transport or energy grids in mind, to the extent it was 'designed' at all. Every "circle of death" watching a video, or DDoS attack that takes a major website offline, is a reminder of this. What we have is an endless series of patches with ever growing unmanaged complexity, and this is not a stable foundation for the future. So the Internet is broken. Completely. We need a new one. Really? First, I'd like to point out that much of what people complain about in terms of the Internet, such as the lack of security, or the lack of privacy, is actually a matter of tradeoffs. You could choose a different set of tradeoffs, of course, but then you would get a different "Internet" — one that may not, in fact, support what we support today. Whether the things it would support would be better or worse, I cannot answer, but the entire concept of a "new Internet" that supports everything we want it to support in a way that has none of the flaws of the current one, and no new flaws we have not thought about before — this is simply impossible. So let's leave that idea aside, and think about some of the other complaints. The Internet is not secure. Well, of course not. But that does not mean it needs to be this way. The reality is that security is a hot potato that application developers, network operators, and end users like to throw at one another, rather than something anyone tries to fix. Rather than considering each piece of the security puzzle, and thinking about how and where it might be best solved, application developers just build applications without security at all, and say "let the network fix it." At the same time, network engineers say either: "sure, I can give you perfect security, let me just install this firewall," or "I don't have anything to do with security, fix that in the application." On the other end, users choose really horrible passwords, and blame the network for losing their credit card number, or say "just let me use my thumbprint," without ever wondering where they are going to go to get a new one when their thumbprint has been compromised. Is this "fixable?" Sure, for some strong measure of security — but a "new Internet" isn't going to fare any better than the current one unless people start talking to one another. 
The Internet cannot scale. Well, that all depends on what you mean by "scale." It seems pretty large to me, and it seems to be getting larger. The problem is that it is often harder to design in scaling than you might think. You often do no[...]



Keys to Successful Collaboration and Solving Wicked Internet Problems

2017-02-20T13:24:00-08:00

Co-authored by Leslie Daigle, Konstantinos Komaitis, and Phil Roberts. The incredible pace of change of the Internet — from research laboratory inception to global telecommunication necessity — is due to the continuing pursuit, development and deployment of technology and practices adopted to make the Internet better. This has required continuous attention to a wide variety of problems ranging from "simple" to so-called "wicked problems". Problems in the latter category have been addressed through collaboration. This post outlines key characteristics of successful collaboration activities (download PDF version).

Problem difficulty and solution approaches
Wikipedia offers a definition of "wicked problems" [accessed September 16, 2016]: "A wicked problem is a problem that is difficult or impossible to solve because of incomplete, contradictory, and changing requirements that are often difficult to recognize. The use of the term "wicked" here has come to denote resistance to resolution, rather than evil [.] Moreover, because of complex interdependencies, the effort to solve one aspect of a wicked problem may reveal or create other problems." Of course, not all large problems are wicked. As noted in the Internet Society's commentary on Collaborative Stewardship, sometimes an Internet problem has a known answer and the challenge is to foster awareness and uptake of that known solution. Denning and Dunham characterize innovation challenges as simple, complex, or wicked [see Denning, Peter J. and Robert Dunham, "The Innovator's Way – Essential Practices for Successful Innovation", page 315]. In the Internet context, the characteristics and approaches to addressing them can be summarized as follows:
Simple. Characteristics: solutions, or design approaches for solutions, are known. Solution path: cooperation, i.e. awareness-raising and information sharing, typically through Network Operator Groups.
Complex. Characteristics: no known solution exists; the problem spans multiple parts of the Internet. Solution path: consensus, i.e. open, consensus-based standards development.
Wicked. Characteristics: no solution exists in any domain; there is a general lack of agreement on the existence or characterization of the problem. Solution path: collaboration, i.e. moving beyond existing domain and organization boundaries and set processes for determining problems and solutions.

Why Internet problems are often wicked
First, it is important to understand that, today, the Internet is largely composed of private networks. Individual participants, corporations or otherwise, must have a valid business reason for the adoption of a certain technology or practice in their own network. This does not necessarily rise to the level of a quantifiable business case, but they have to have some valid reason that it helps them make something better in their own networks or experience of the Internet. However, if the practice is a behavior on the network that is impacted by, or includes, other networks, the participants must have a standard they agree to. This might be a protocol standard governing bits on the wire and the exchange of communication, or a common practice. To get to that level of agreement, participants — whether private companies with financial stakes in the situation, or governments, or individuals — must be disposed and willing to collaborate with others to instantiate the adoption.

Addressing wicked Internet problems: Keys to successful collaboration
We identify here four important characteristics of collaborative activities that have driven the success and innovation of the Internet to date. 
There must be a unifying purpose. There can be any number of participants in a successful collaboration, and they can have a range of different perspectives on what a good outcome looks like, but the p[...]



Bug Bounty Programs: Are You Ready? (Part 3)

2017-02-20T12:30:00-08:00

The Bug Bounty movement grew out of a desire to recognize independent security researcher efforts in finding and disclosing bugs to the vendor. Over time the movement split into those that demanded to be compensated for the bugs they found and third-party organizations that sought to capitalize on intercepting knowledge of bugs before alerting the vulnerable vendor. Today, on a different front, new businesses have sprouted to manage bug bounties on behalf of a growing number of organizations new to the vulnerability disclosure space. Looking forward, given the history and limitations of bug bounty operations described in part 1 and part 2 of this blog series, what does the future hold?

The Penetration Testing Business
The paid bug bounty movement has been, and continues to be, a friction point with the commercial penetration testing business model. Since penetration testing is mostly a consultant-led exercise (excluding managed vulnerability scanning programs from the discussion for now), consumers of penetration testing services effectively pay for time and materials — and what's inside the consultants' heads. Meanwhile, contributors to bug bounty programs are paid per discovery — independent of how much time and effort the researcher expended to find the bug. Initially many commercial penetration testing companies saw bug bounty programs as a threat to their business model. Some organizations tried to adapt, offering their own bug bounty programs to their clients, using "bench time" (i.e. non-billable consultancy hours) to participate in third-party bug bounties and generate revenue that way, or seeking collaboration with the commercial bug bounty operators by picking up the costly bug triaging work. Most of the early fears of penetration testing companies were ill-founded. The demand for compliance validation and system certification has grown faster than any "erosion" of business due to bug bounties, and clients have largely increased their security spend to fund bug bounty programs rather than siphon from an existing penetration testing budget. While the penetration testing market continues to grow, it is perhaps important to understand the future effect on the talent pool from which both that industry and the bug bounty industry must pull. There are several constraints that will influence the future of bug bounty and penetration testing businesses. These include:
The global pool of good and great bug hunters is finite (likely fewer than 5,000 people worldwide in 2017). Both industries need to tap this pool in order to be successful in finding bugs and security vulnerabilities that cannot be found via automated tools.
Advances in automated code checking, adherence to and enforcement of the SDL (Secure Development Lifecycle), adoption of DevOps and SecDevOps automation, and more secure software development frameworks are resulting in fewer bugs making it to public release — and those bugs that do make it tend to be more complex and require more effort to uncover.
The growing adoption and advancement of managed vulnerability scanning services. Most tools used by bug hunters are already enveloped in the scanning platforms used by managed services providers — meaning that up to 95% of commonly reported bugs in web applications are easily discovered through automated scanning tools. As security researchers identify and publish new attack and exploitation vectors, tools are improved to identify these new vectors and added to the scanning platforms. 
Over time the gap between automated tool and bug hunter is closing — requiring bug hunters to become ever more specialized. It is possible to argue that the growth and popularity of bug bounty programs is a dire[...]
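As a hypothetical illustration of the kind of finding automated scanners now surface without a human bug hunter, the short Python sketch below checks a URL for a few commonly reported issues such as missing security headers. The header list and target URL are assumptions for demonstration; commercial scanning platforms run far deeper and broader checks.

```python
import urllib.request

# A few checks typical of automated scanners; real platforms run thousands.
EXPECTED_HEADERS = [
    "Strict-Transport-Security",    # missing HSTS: a frequently reported finding
    "X-Content-Type-Options",
    "Content-Security-Policy",
]

def scan(url: str) -> list:
    """Return a list of trivially automatable findings for one URL."""
    findings = []
    if not url.startswith("https://"):
        findings.append("page not served over HTTPS")
    with urllib.request.urlopen(url, timeout=10) as resp:
        for name in EXPECTED_HEADERS:
            if resp.headers.get(name) is None:
                findings.append(f"missing response header: {name}")
    return findings

if __name__ == "__main__":
    for finding in scan("https://example.com/"):   # placeholder target
        print(finding)
```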



Commercial Incentives Behind IPv6 Deployment

2017-02-20T07:25:00-08:00

From "IGF 2016 Best Practice Forum on IPv6," co-authored by Izumi Okutani, Sumon A. Sabir and Wim Degezelle. The stock of new IPv4 addresses is almost empty. Using one IPv4 address for multiple users is not a future proof solution. IPv4-only users may expect a deterioration of their Internet connectivity and limitations when using the newest applications and online games. The solution to safeguard today's quality is called IPv6. The Best Practice Forum (BPF) on IPv6 at the Internet Governance Forum (IGF) explored what economic and commercial incentives drive providers, companies and organizations to deploy IPv6 on their networks and for their services. The BPF collected case studies, held open discussions online and at the 2016 IGF meeting, and produced a comprehensive output report. This article gives a high-level overview. IP addresses and IPv6 An IP address, in layman terms, is used to identify the interface of a device that is connected to the Internet. Thanks to the IP address, data traveling over the Internet can find the right destination. The Internet Protocol (IP) is the set of rules that among other things define the format and characteristics of the IP address. IPv4 (Internet Protocol version 4) has been used from the start of the Internet but has run out of newly available address stock. IPv6 (Internet Protocol version 6) was developed to address this shortage. IPv6 is abundant in its address space, can accommodate the expected growth of the Internet, and allows for much more devices and users to be connected. To communicate over IPv6, devices must support the IPv6 protocol, networks must be capable of handling IPv6 traffic and content must be reachable for users who connect with an IPv6 address. General state of IPv6 deployment According to the APNIC Labs measurements for November 2016, the global IPv6 deployment rate was close to 8%, with large differences between countries from zero to double-digit IPv6 deployment rates up to 55%. The higher deployment does not entirely follow the traditional division between industrialized and developing countries. There is not always a clear link between economic performance (e.g. GDP) or Internet penetration and IPv6 uptake in a country. The top 20 countries (end 2016), in terms of IPv6 deployment, are a diverse group with among others (in alphabetical order): Belgium, Ecuador, Greece, Malaysia, Peru, Portugal, Trinidad and Tobago, United States, Switzerland The commercial incentives for IPv6 deployment Major global players and some local and regional companies and organizations have commercially deployed IPv6. The BPF collected case studies from different regions and industry sectors to learn about the key motivations behind these decisions to deploy IPv6. The imminent shortage of IPv4 addresses is the obvious and most cited reason to deploy IPv6. IPv6 is regarded as the long-term solution to prepare network and services for the future and to cope with growth. Investing in IPv6 is cheaper in the long-term than the alternative solutions that now prolong the life of IPv4. Alternatives come with their own cost, and eventually, IPv6 deployment will be inevitable. It is advised to plan IPv6 deployment over a longer period and include it in existing maintenance cycles and in projects to renew and upgrade infrastructure, equipment and software. This can drastically reduce the burden and cost. 
Some see IPv6 deployment and providing IPv6 services as a way to show that a company has the technical know-how and capability to adapt to new technical evolutions. In today's competitive markets branding and image building are important. IPv6 can also create new business opportunities. It allows offering a high-quality[...]
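As a concrete illustration of the reachability requirement described above (a service is only usable over IPv6 if it publishes an AAAA record and accepts IPv6 connections), here is a small Python sketch using only the standard library. The hostname is a placeholder, and this is a quick check, not a substitute for proper deployment testing.

```python
import socket

def ipv6_reachable(host: str, port: int = 443, timeout: float = 5.0) -> bool:
    """True if host publishes an AAAA record and accepts an IPv6 connection."""
    try:
        # Ask only for IPv6 addresses; raises socket.gaierror if none exist.
        infos = socket.getaddrinfo(host, port, socket.AF_INET6, socket.SOCK_STREAM)
    except socket.gaierror:
        return False              # no AAAA record published for this name
    for family, socktype, proto, _canon, sockaddr in infos:
        try:
            with socket.socket(family, socktype, proto) as sock:
                sock.settimeout(timeout)
                sock.connect(sockaddr)   # needs end-to-end IPv6 connectivity
                return True
        except OSError:
            continue
    return False

if __name__ == "__main__":
    print(ipv6_reachable("www.example.com"))   # placeholder hostname
```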



Geo and Brand TLDs Only for a 2019 Second Round of New gTLDs?

2017-02-20T06:02:00-08:00

Let's be clear: right now, any statements on when (or even if) a follow-up round of new gTLD applications might happen are pure conjecture. The first round closed on April 12, 2012. Since then, the pressure has been increasing for ICANN to actually live up to the guidebook premise of launching "subsequent gTLD application rounds as quickly as possible" with "the next application round to begin within one year of the close of the application submission period for the initial round." But that deadline is clearly not going to be met. ICANN no longer expects to complete reviewing the first round — a prerequisite for initiating a follow-up — before some time around 2020. Work has begun on imagining what a second round might look like, but that also seems a long way away from completion.

Reviews and classes
So to try and get a second round out of the gate, imaginations have been working overtime. What if only certain categories of applicants, say cities and brands, were allowed in? The logic being that by restricting applicant types, evaluating them would be easier. And not all the reviews, for all the TLD types applied for in 2012, would need to be completed before any new calls for applications go out. For cities and geographic terms (dubbed "Geo TLDs"), where the applicant needs to show support from the local government or authorities, the initial gating process could be somewhat easier. As for brands, there were many non-believers in 2012. Then Amazon, Axa, Barclays, BMW, Canon, Google and many others were revealed as applicants. And now those that didn't apply then certainly want to now. They are lobbying hard to get their shot as quickly as possible. So when could that be? Those who understand ICANN know the organisation is notoriously slow at getting anything done… unless you do one of a couple of things. Get governments to push, or add symbolism to the mix. ICANN insiders who want to see a second round asap are trying door number 2, by suggesting that launching a subsequent application window exactly 7 years after the first, i.e. on January 12, 2019, would satisfy the program's initial intent for a (relatively) quick follow-up to round 1 whilst being a nice nod to history at the same time. In the weird alternative logic universe of ICANN, that actually makes sense! Doesn't make it any more likely to actually happen though… Written by Stéphane Van Gelder, TLD Fastrack. More under: ICANN, Top-Level Domains [...]



Mitigating the Increasing Risks of an Insecure Internet of Things

2017-02-18T12:12:00-08:00

The emergence and proliferation of Internet of Things (IoT) devices on industrial, enterprise, and home networks bring with them unprecedented risk. The potential magnitude of this risk was made concrete in October 2016, when insecure Internet-connected cameras launched a distributed denial of service (DDoS) attack on Dyn, a provider of DNS service for many large online service providers (e.g., Twitter, Reddit). Although this incident caused large-scale disruption, it is noteworthy that the attack involved only a few hundred thousand endpoints and a traffic rate of about 1.2 terabits per second. With predictions of upwards of a billion IoT devices within the next five to ten years, the risk of similar, yet much larger, attacks is imminent.

The Growing Risks of Insecure IoT Devices
One of the biggest contributors to the risk of future attack is the fact that many IoT devices have long-standing, widely known software vulnerabilities that make them vulnerable to exploit and control by remote attackers. Worse yet, the vendors of these IoT devices often have provenance in the hardware industry, but they may lack expertise or resources in software development and systems security. As a result, IoT device manufacturers may ship devices that are extremely difficult, if not practically impossible, to secure. The large number of insecure IoT devices connected to the Internet poses unprecedented risks to consumer privacy, as well as threats to the underlying physical infrastructure and the global Internet at large:
Data privacy risks. Internet-connected devices increasingly collect data about the physical world, including information about the functioning of infrastructure such as the power grid and transportation systems, as well as personal or private data on individual consumers. At present, many IoT devices either do not encrypt their communications or use a form of encrypted transport that is vulnerable to attack. Many of these devices also store the data they collect in cloud-hosted services, which may be the target of data breaches or other attacks.
Risks to availability of critical infrastructure and the Internet at large. As the Mirai botnet attack of October 2016 demonstrated, Internet services often share dependencies on the same underlying infrastructure: knocking many websites offline did not require direct attacks on those services, but rather a targeted attack on the underlying infrastructure on which many of these services depend (i.e., the Domain Name System). More broadly, one might expect future attacks that target not just the Internet infrastructure but also physical infrastructure that is increasingly Internet-connected (e.g., power and water systems). The dependencies that are inherent in the current Internet architecture create immediate threats to resilience.
The large magnitude and broad scope of these risks compel us to seek solutions that will improve infrastructure resilience in the face of Internet-connected devices that are extremely difficult to secure. A central question in this problem area concerns the responsibility that each stakeholder in this ecosystem should bear, and the respective roles of technology and regulation (whether via industry self-regulation or otherwise) in securing both the Internet and associated physical infrastructure against these increased risks.

Risk Mitigation and Management
One possible lever for either government or self-regulation is the IoT device manufacturers. 
One possibility, for example, might be a device certification program for manufacturers that could attest to adherence to best common practice for device and software security. A well-known ([...]
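The article stops short of prescribing concrete audits, but one widely reported detail of the Mirai incident is that the botnet recruited devices largely by logging in over Telnet with factory-default credentials. As a rough, hypothetical illustration of that exposure (not something proposed in the article), the short Python sketch below flags devices on a network you control that still answer on the common Telnet ports; the subnet and port list are assumptions, and it should only ever be run against networks you own.

# Hypothetical sketch: flag devices on a local subnet that expose Telnet,
# the service the Mirai botnet abused via factory-default credentials.
# The subnet and ports below are assumptions; scan only networks you own.
import socket
from ipaddress import ip_network

TELNET_PORTS = (23, 2323)              # standard and common alternate Telnet ports
SUBNET = ip_network("192.168.1.0/24")  # assumed home/lab subnet

def port_open(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for host in SUBNET.hosts():
        exposed = [p for p in TELNET_PORTS if port_open(str(host), p)]
        if exposed:
            print(f"{host}: Telnet reachable on port(s) {exposed} - "
                  "change default credentials or disable the service")

A certification program of the kind discussed above would, among other things, aim to ensure that devices ship with such legacy services disabled and default credentials removed.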



Timing Is All: Cybersquatting or Mark Owner Overreaching?

2017-02-17T12:58:00-08:00

Admittedly, timing is not altogether "all," since a palette of factors goes into deciding whether a domain name registration is unlawful, and a decision as to whether a registrant is cybersquatting or a mark owner is overreaching will likely rest on several of them. Timing is nevertheless fundamental in determining the outcome. Was the mark in existence before the domain name was registered? Is the complainant relying on an unregistered mark? What was the complainant's reputation when the domain name was registered? What proof does the complainant have that the registrant had knowledge of its mark?

Simply having a mark is not conclusive of a right to the domain name. Owners of newly minted marks complaining of domain names registered long before is a classic example of overreaching. To have an actionable claim for cybersquatting, the mark must predate the domain name registration. Examples of this kind of mis-timing appear with some regularity. This month we have Obero Inc. v. Domain Manager, eWeb Development Inc., D2016-2591 (WIPO February 10, 2017) and Faster Faster, Inc. DBA Alta Motors v. Jeongho Yoon c/o AltaMart, FA1612001708272 (Forum February 6, 2017); last month there was UTILIBILL, Pty. Ltd. v. JOHN POWERS / 191 Chandler Rd, FA1611001705087 (Forum January 9, 2017). The Anticybersquatting Consumer Protection Act (ACPA) — but not the UDRP — is explicit that the mark must be "distinctive at the time of the registration of the domain name."

In the majority of disputes filed for adjudication under the Uniform Domain Name Dispute Resolution Policy (UDRP), complainants prevail on the merits; not because respondents default (which they generally do) but because respondents have no defensible right or legitimate interest in the accused domain names and their registrations are clearly abusive. The daily roll call of decisions from WIPO and the Forum testifies to this. Domain names incorporating well-known trademarks are a common feature. But default is not, in fact, a determinative factor of bad faith. Respondents can default and still prevail, as in Utilibill.

With a single exception under the UDRP (where a respondent is aware of an impending registration of a mark and anticipates it by registering the corresponding domain name, WIPO Overview 2.0, paragraph 3.1), owners of trademarks and service marks acquired after domain name registrations have standing but no actionable claim against domain name holders. Standing is granted to test whether the facts support the exception, which is rare.

Paradoxical though it may sound, the factual situation may support two different aspects of timing. "Timing" is not just a matter of there being a "before and after." How can this be? It can happen where the complainant has priority in the sense that the mark existed before the registration of the domain name, but its reputation (and hence the respondent's knowledge of the mark) comes after the registration.

Taking the simple issue of "priority" first, complainants can compensate for weak marks by expanding their reputations. The parties' close geographic proximity, even for weak marks, yields an inference of bad faith, while owners of marks in remote jurisdictions have to overcome the respondent's denial of knowledge. Reputation broadens as owners expand their markets, but for marks to travel well, complainants have to offer persuasive evidence that consumers (and the respondent) know who they are. The lack of any association of the mark owner with particular goods or services is at the core of the decision in CIA. Industrial H. Carlos Schneider v. WHOIS Privacy Service Pty Ltd. 
/ Domain Admin, Ashantiplc Limited, D2016-2167 (WIPO Januar[...]



Market Flatlines After ICANN Introduces New gTLDs

2017-02-17T02:36:00-08:00

The choices for consumers and businesses in Europe to get themselves online have never been so great. Social media, apps and blogsites have all made a lasting impression, and we are now in an increasingly crowded market with the addition of hundreds of new gTLDs. So how has all this affected growth and market shares among domain names in Europe?

As seen in the chart, annual growth among European ccTLDs has been sliding for many years — until recently. In 2015 and 2016, just as many of the new gTLDs were being delegated, ccTLD growth rates began to stabilise, and downward trends flattened off to a median rate of 3.4% per year. Although it is unclear whether this stabilisation will continue, it is certainly a positive sign for European ccTLDs, which are now competing for attention with hundreds of new gTLDs.

Drivers of stabilisation

Buyer behaviour is notoriously difficult to assess in any fine detail; however, based on market averages in registrations, we can get a sense of the different dynamics of activity. For example, in 2015 we observed a noticeable reduction in churn ratios (domains that were deleted or not renewed). At the same time, new add ratios (new domain sales) remained stable compared with previously declining rates. This meant that the gap between new adds and churn, on average, widened, helping to push up domain retention rates [1] and, of course, slow the decline in long-term growth trends. In 2016, domain registration activity was generally higher. Medians in new add ratios were up, but so were churns. Overall the gap between the two remained relatively stable; however, as deletes increased at a slightly higher rate than new adds, the median retention rate felt a small negative pressure.

A ccTLD is a brand

Ten years ago, a ccTLD had relatively limited competition. There were only a few other relevant TLDs to choose from; internet usage was not as high, and social media did not have the reach it does today. Many ccTLD registries did not spend much time on marketing, so simple volume discounts and other pricing incentives were the most common options to drive sales. Although the effects of new gTLDs have not been felt greatly in Europe (at least in terms of volume/market share), they still have the potential to develop and start chipping into new domain sales, so complacency is not an option. In today's competitive TLD market, a new business has plenty of choices and might choose to integrate a new gTLD into its strategy; however, it is perhaps less likely that an existing business that has held and used its local ccTLD for many years will quickly switch to a new gTLD — the cost benefit is probably a hard sell. Nonetheless, ensuring awareness of the ccTLD brand is now more important than ever. Market buyer behaviour in many sectors tells us that familiarity is an important aspect of decision making. With that, ccTLDs have a good starting point and should continue to capitalise on their unique position as country identifiers, as well as their reputations as trusted and secure options for their citizens.

For more information on the latest trends in ccTLD registrations, see the latest CENTR DomainWire Global TLD Report.

[1] Retention rate is a standardised methodology used in CENTR across the European ccTLD market. It is an indication of renewals and is calculated as the difference between total domains at two points in time minus the new domains registered between those points.
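Since the footnote above defines the retention rate only in words, a small worked example may help. The Python sketch below is one hypothetical reading of that definition (domains still on the register at the end of a period, excluding new registrations, relative to the starting base); the exact CENTR methodology may differ in detail, and all figures are invented.

# Hypothetical sketch of the retention-rate idea described in the footnote above;
# the exact CENTR methodology may differ, and the numbers are made up.
def retention_rate(total_start: int, total_end: int, new_adds: int) -> float:
    """Share of the starting domain base still registered at the end of the period."""
    retained = total_end - new_adds   # domains at the end that are not new registrations
    return retained / total_start

# Example: a zone grows from 1,000,000 to 1,020,000 domains while 80,000 new
# domains are added, implying 60,000 deletions over the same period.
print(f"{retention_rate(1_000_000, 1_020_000, 80_000):.1%}")   # -> 94.0%

On these made-up numbers, the zone grows by 20,000 domains while 80,000 new names are added, implying 60,000 deletions and a retention rate of 94% of the starting base.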
Written by Patrick Myles. Follow CircleID on Twitter. More under: Domain Names, ICANN, Top-Level Domains [...]



Thoughts on the Proposed Copyright Alternative Dispute Resolution Policy

2017-02-16T13:45:00-08:00

A proposal from the Domain Name Association (DNA) would provide copyright owners with a new tool to fight online infringement — but the idea is, like other efforts to protect intellectual property rights on the Internet, proving controversial.

The proposed Copyright Alternative Dispute Resolution Policy is one of four parts of the DNA's "Healthy Domains Initiative" (HDI). It is designed to construct a voluntary framework for copyright infringement disputes, so that (1) copyright holders can use a more efficient and cost-effective system for clear cases of copyright abuse than going to court, and (2) registries and registrars are not forced to act as "judges" and "jurors" on copyright complaints.

The concept of the Copyright ADRP appears similar to the longstanding Uniform Domain Name Dispute Resolution Policy (UDRP). But, unlike the UDRP, which applies only to domain names, the Copyright ADRP would apply to what the DNA describes as "pervasive instances of copyright infringement." While many domain names are used in connection with infringing websites, the UDRP is only available when the domain name itself is identical or confusingly similar to a relevant trademark. As a result, the UDRP is often not available to copyright owners, despite obviously infringing content.

Although the Digital Millennium Copyright Act (DMCA) is already frequently invoked by copyright owners to take down infringing content, it has significant limitations. For example, many website hosting companies (especially those outside the United States) do not participate in the DMCA system, and the counter-notification process can easily be used by infringers to defeat a DMCA claim. In those cases, a copyright owner often has no choice but to accept the infringing website or incur the burdens of fighting it in court.

The Copyright ADRP is a fascinating idea that, if properly drafted and implemented, could help reduce infringing content on the Internet and would complement both the UDRP (and other domain name dispute policies) and the DMCA. Still, the idea of the Copyright ADRP is already meeting resistance. A blog post at Domain Incite expresses concern that the policy could be unfairly applied "in favor of rights holders." The Electronic Frontier Foundation reportedly has called it "ill-conceived" and "the very epitome of shadow regulation." And the Internet Commerce Association is worried about "a chilling effect on the domain leasing and licensing business."

Given the early stage of the proposed Copyright ADRP and the undeniable prevalence of online copyright infringement, the criticism seems premature and/or unwarranted. Like any legal enforcement mechanism, the devil will be in the details — and, at this point, the details are minimal. As of this writing, it is unclear how the DNA's proposal would be applied, other than a broad statement that it should be limited to instances "where the alleged infringement is pervasive or where the primary purpose of the domain is the dissemination of alleged infringing material." How to define "pervasive" or "primary purpose" (let alone "infringement" — something with which the courts have long struggled) is far from clear. Plus, numerous questions remain to be answered. Among the most important: As a voluntary dispute system (not mandated by ICANN), which registries and registrars would adopt the Copyright ADRP? And who would administer it? 
The answers to all of these questions are worth pursuing because, regardless of whether the DNA's idea is workable, reducing online copyright infr[...]



A Q&A on Google's New gTLD Solution, Nomulus

2017-02-16T12:09:00-08:00

Nomulus is the code for the backend domain name registry solution offered by Google, and it requires the use of Google Cloud. It is the platform used for all of Google's own new gTLDs, and it works. Such an announcement can look like a potentially "simple" solution for future .BRAND new gTLD applicants — but is that truly the case? When Google makes an announcement like this, it immediately catches the eye of the entire new gTLD industry, as well as others. If the announcement is seen as a troublemaker for other backend registry businesses, it also alerts other potential new gTLD service providers — such as law firms — that would be interested in using a registry platform to avoid contracting with a backend registry. To help clear up some of these points, we sent our questions to one of the key people involved with Nomulus, Ben McIlwain, a senior software engineer at Google, who was kind enough to answer them.

* * *

Q: What technical knowledge would a law firm need in order to offer trademark owners their .BRAND gTLD using Nomulus?

A: "A law firm? They'd definitely need technically minded people, and probably at least one developer. There are not many law firms running TLDs, I would imagine? It seems more likely to me that said hypothetical law firm would want to use a registry service provider."

Q: Can Google Registrar (for US companies) be the single registrar authorized to create a ".brand" domain name when using Nomulus? This question is important since a registrar is required to allow the registration of domain names.

A: "With a relatively small amount of custom development to Nomulus, you could add a whitelist of registrar(s) to TLDs so that only those registrar(s) could register domain names on said TLD. This would work for any registrar and isn't specific to Google Domains. It'd all be using standard EPP."

Q: Does it make sense to say that Google Registry has already passed the ICANN technical requirements, and so a company using Nomulus plus Google Cloud would easily pass these tests prior to being delegated?

A: "Not sure. You'd still need someone who's already familiar with ICANN's pre-delegation testing process to get through it 'easily'. But you could say that, since we passed the testing on our ~40 TLDs, there's more assurance that someone else could do so using our software than with some other software that hasn't yet passed testing for any TLD."

Q: Starting from scratch with Nomulus, and knowing that OpenRegistry was recently sold for $3.7 million, what could be the estimated cost of building a backend registry solution with Nomulus?

A: "I have no idea. Hopefully not too much, but there are way too many factors in play (requirements, prevailing wage of the area in which you're hiring developers, etc.)."

Q: Are there already service providers able to build a backend registry solution for third-party customers? (Such as a law firm looking for a service provider to build its solution.)

A: "There are registry service providers that exist, e.g. Rightside and Afilias. Did you mean using Nomulus though? If so, I'm not aware of any, but maybe someone would do so in the future?"

Q: Automating and managing the invoicing process seems to be a tough part of a backend registry solution: how can Nomulus help simplify this for an entrepreneur willing to operate a generic TLD dedicated to selling domain names?

A: "I don't entirely understand the question. 'Tough part'? The problem is that there are so many potential different ways to handle invoicing and payments, and what is availa[...]
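The answer about restricting a .BRAND TLD to a single authorized registrar amounts to an allow-list check applied when an EPP domain-create command is processed. The following is a minimal, hypothetical sketch of that idea in Python; it is not Nomulus code (Nomulus itself is written in Java), and every identifier here is invented for illustration.

# Hypothetical, language-agnostic sketch of the registrar allow-list idea
# described above; not Nomulus code, and all names are invented.
ALLOWED_REGISTRARS = {
    "brand": {"registrar-1234"},   # only this registrar may create names in .brand
}

def may_create_domain(tld: str, registrar_id: str) -> bool:
    """Return True if the registrar is permitted to create domains in the TLD."""
    allow_list = ALLOWED_REGISTRARS.get(tld)
    return allow_list is None or registrar_id in allow_list

# A domain-create request from an unauthorized registrar would be rejected.
assert may_create_domain("brand", "registrar-1234")
assert not may_create_domain("brand", "registrar-9999")
assert may_create_domain("example", "registrar-9999")  # TLDs without an allow-list stay open

In a real deployment, a check of this kind would sit in the registry's EPP flow so that create commands from any registrar not on the list are refused before a domain object is ever written.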