
CircleID: Featured Blogs

Latest blogs postings on CircleID

Updated: 2017-08-15T16:46:00-08:00


SpaceX Satellite Internet Project Status Update


SpaceX orbital path schematic. If all goes according to plan, SpaceX will be offering global Internet connectivity by 2024. I've been following the efforts of SpaceX and OneWeb to become global Internet service providers using constellations of low-Earth orbit (LEO) satellites for some time. Launch times are getting close, so I'm posting a status update on SpaceX's project. (I'll do the same for OneWeb in a subsequent post.) The Senate Committee on Commerce, Science, and Transportation held a hearing titled "Investing in America's Broadband Infrastructure: Exploring Ways to Reduce Barriers to Deployment" on May 3, 2017, and one of the expert witnesses was Patricia Cooper, SpaceX Vice President, Satellite Government Affairs. She began her oral testimony with a description of SpaceX and its capabilities, went on to outline the disparities in broadband availability and quality and the domestic and global broadband market opportunities, and then presented their two-stage plan. The first, LEO, satellite constellation [PDF] will consist of 4,425 satellites operating in 83 orbital planes at altitudes ranging from 1,110 to 1,325 km. They plan to launch a prototype satellite before the end of this year and a second one during the early months of 2018. They will start launching operational satellites in 2019 and will complete the first constellation by 2024. The LEO satellites launched in the first phase of the project will enable SpaceX to bring the Internet to all underserved and rural areas of the Earth. These satellites may also have an advantage over terrestrial networks for long-range backhaul links, since they will require fewer router hops, as shown in an illustration (drawn to scale) comparing a terrestrial route (14 hops) with a satellite route (5 hops) between Los Angeles and a university in Punta Arenas, Chile. Ms.
Cooper also said they had filed for authority to launch a second constellation of 7,500 satellites operating closer to the Earth — in very low Earth orbit (VLEO). A 2016 patent by Mark Krebs, then at Google, now at SpaceX, describes the relationship between the two constellations. I don't have dates for the second constellation, but the satellite altitudes will range from 335.9 to 345.6 km. (The International Space Station orbits at 400 km.) These satellites will be able to provide high-speed, low-latency connectivity because of their low-altitude orbits. Coverage of the two constellations will overlap, allowing for dynamic handoffs between them when desirable. When this second constellation is complete, SpaceX might be able to compete with terrestrial networks in densely populated urban areas. These VLEO satellites might also be used for Earth imaging and sensing applications, and a bullish article by Gavin Sheridan suggests they may also connect all Tesla cars and Tesla solar roofs. Very low Earth orbit (VLEO) satellites have smaller footprints, but are faster and have lower latency than higher-altitude satellites. Ms. Cooper concluded her testimony with a discussion of administrative barriers they were encountering and listed six specific policy recommendations. You can see her full written testimony here. The entire hearing is available on video, and Ms. Cooper's testimony begins at 13:54. I will follow this post with a similar update on OneWeb, SpaceX's formidable competitor in the race to become a global Internet service provider using satellites. Global connectivity is a rosy prospect, but we must ask one more question. Success by either or both of these companies could, like the shift from dial-up to broadband, disrupt the Internet service industry.
As of July/August 1997, there were 4,009 ISPs in North America, and today few people in the United States have more than two ISP choices. Might we end up with o[...]
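The latency advantage of low orbits is easy to sanity-check with a back-of-envelope calculation (mine, not SpaceX's): the minimum round trip to a satellite directly overhead is set by the speed of light, before any processing, queuing, or inter-satellite hops are added.

```python
# Back-of-envelope only: assumes straight-line propagation at the speed of
# light to a satellite directly overhead; real paths add processing delay.
C_KM_PER_S = 299_792.458  # speed of light, km/s

def min_rtt_ms(altitude_km: float) -> float:
    """Minimum round-trip time to a satellite at the given altitude, in ms."""
    return 2 * altitude_km / C_KM_PER_S * 1000

for label, alt_km in [("VLEO (~340 km)", 340),
                      ("LEO (~1,200 km)", 1200),
                      ("GEO (35,786 km)", 35786)]:
    print(f"{label}: {min_rtt_ms(alt_km):.1f} ms minimum RTT")
```

The roughly 2 ms floor for VLEO versus nearly a quarter-second for geostationary orbit is why these constellations can plausibly compete with terrestrial networks on latency.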

Aviation: The Dirty, Not-So-Little Secret of Internet Governance


This article aims to provide an overview of carbon offsetting and a guide to investing in carbon offsetting programs, and concludes with a call to action for the Internet governance community to research and ultimately invest in suitable carbon offsetting programs. Almost a year ago, I began writing about the relationship between the Internet/information and communications technologies (ICTs), the environment, and sustainability. One of the points I made in my first article on the subject is that there is much more we as a community can do to reduce our ecological footprint and enhance the sustainability of the Internet — which happens to be good for both the planet and business. This necessity, combined with the ever-growing urgency to act, hit hard when I recently read a New York Times article about how bad flying is for the environment. If you are reading this in the northern hemisphere at the moment (or anywhere, really), I don't need to remind you about how hot it is outside. The fact is, the earth is getting warmer, and anecdotes only serve as a reminder — for instance, it's been so hot in parts of the Middle East this summer that palm trees have been spontaneously combusting, and it was so hot in Phoenix, Arizona, in June that some airplanes couldn't even take off. What's the connection, though, between our warming planet and Internet governance? Beyond the façade of the seemingly glamorous lifestyle of the Internet governance community, marked by cocktail dinners, international travel, and exotic locales, lies an uncomfortable truth: we are high carbon emitters. Let's face it: if anyone is working on Internet infrastructure or Internet governance, traveling is practically a requirement for the job and a veritable necessity for the Internet governance community.
As I wrote previously on CircleID: "[Factor in] the uncomfortable reality that to effectively govern a critical global resource means heavy reliance on air travel, it places a lot of existential pressure on the Internet governance community and policy-makers, while also providing even more impetus to produce effective, sustainable, and impactful outcomes at global meetings." So, even if it is unsurprising news, you can nevertheless imagine my ethical dilemma as I continue to advocate for sustainability but take more flights in a year than many people will take in their whole lifetime. While the guilt and the accompanying affliction of hypocrisy are pernicious, I am working to reduce my carbon footprint in other ways and trying to travel less. Yet, even though we can — and should — make changes to our lifestyles, such as eating less meat, recycling more, and cutting down on emissions wherever possible, personal responsibility can only go so far — even renewables aren't necessarily a panacea. The real cost of flying According to the Australian consumer advocacy group CHOICE, airlines emitted 781 million tons of carbon dioxide (CO2) into the atmosphere in 2015, representing about 2% of human-caused CO2 emissions. While this pales in comparison to other industries like energy production, manufacturing, or agriculture in terms of total global emissions, "if global aviation was a country," Alison Potter writing for CHOICE stressed, "its CO2 emissions would be ranked seventh in the world, between Germany and South Korea. And as flying becomes cheaper and more popular, the problem is heading skyward. Global passenger air traffic grew by 5.2% from June 2015 to June 2016, and emissions are growing at around 2-3% a year."
Moreover, if someone flew round-trip in economy class directly from New York City to the Internet Corporation for Assigned Names and Numbers' (ICANN) headquarters in Marina del Rey, California, in the Los Angeles area, they would emit anywhere from 1.09 to 1.78 metric tons of CO2 (the discrepancy depends on the methodology used by different carbon calculators, common ones being Atmosfair's, Sustainable Travel International's, and Climate Care's). To put thi[...]
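A rough sketch of why the calculators disagree: the result is dominated by the assumed per-kilometer emission factor and by whether a radiative-forcing multiplier for high-altitude emissions is applied. The factors below are illustrative assumptions of mine, not the values used by Atmosfair, Sustainable Travel International, or Climate Care.

```python
# Illustrative only: both constants below are assumptions, not values taken
# from any of the carbon calculators named in the article.
ECONOMY_KG_CO2_PER_KM = 0.115   # assumed per-passenger factor, economy class
RF_MULTIPLIER = 1.9             # assumed radiative-forcing uplift for altitude
NYC_LAX_KM = 3_970              # approximate one-way great-circle distance

def round_trip_tons(distance_km: float,
                    factor: float = ECONOMY_KG_CO2_PER_KM,
                    rf: float = 1.0) -> float:
    """Round-trip CO2 in metric tons for one passenger."""
    return 2 * distance_km * factor * rf / 1000

low = round_trip_tons(NYC_LAX_KM)                      # without RF uplift
high = round_trip_tons(NYC_LAX_KM, rf=RF_MULTIPLIER)   # with RF uplift
print(f"{low:.2f} - {high:.2f} metric tons CO2")
```

With these assumptions the spread comes out near one to two tons for the round trip, in the same range as the calculator results quoted above; the radiative-forcing choice alone nearly doubles the figure.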

Should the EB-5 Investor Visa Program Recognize Cyber Workers?


The EB-5 Investor Visa Program was created by Congress in 1990 to "stimulate the U.S. economy through job creation and capital investment by foreign investors." The program, administered by the Department of Homeland Security's U.S. Citizenship and Immigration Services (USCIS), provides that "entrepreneurs (and their spouses and unmarried children under 21) are eligible to apply for a green card (permanent residence) if they: Make the necessary investment in a commercial enterprise in the United States; and Plan to create or preserve 10 permanent full-time jobs for qualified U.S. workers." The EB-5 program encourages foreign entrepreneurs to invest in a Targeted Employment Area (TEA). A TEA is defined as a rural area or an area where the unemployment rate is at least 150% of the national average. The EB-5 program has delegated to states the authority to designate various TEAs on a project-specific basis. By locating a commercial enterprise in a state-designated TEA, foreign investors sharply reduce the size of the investment that is needed to qualify for a green card. The EB-5 regulations, which were written in 1990, take a geocentric approach to defining TEAs by assuming that an enterprise's employees live near its principal place of business. In 1990, this was a reasonable assumption. Today, it is not. It is now common for American workers to physically commute to jobs that are located in different metropolitan areas and to cyber-commute to jobs anywhere in the country. Internet-based employment is an efficient means of providing economic opportunities to workers who live in rural America and areas of high unemployment. The current TEA designation process has been the subject of criticism over concerns that EB-5 investments are not helping the program's intended beneficiaries. In response to criticism and the passage of time, DHS is updating its EB-5 regulations.
DHS's Notice of Proposed Rulemaking explains that the program's reliance "on states' TEA designations has resulted in the application of inconsistent rules by different states. ... the deference to state determinations provided by current regulations has resulted in the acceptance of some TEAs that consist of areas of relative economic prosperity linked to areas with lower employment, and some TEAs that have been criticized as 'gerrymandered.'" DHS's response to this concern is a proposal to (1) centralize TEA decisions in Washington and (2) create an even more georestrictive requirement for TEAs. The proposed rule does not consider Americans who could "commute" to work via the internet. In short, the proposed regulation doubles down on the 1990 mindset that workers live near their place of employment, a faulty assumption that has helped fuel the "gerrymandering" issue. There is no statutory requirement that new commercial enterprises be physically located in TEAs in order for investors to qualify for the TEA provisions of the EB-5 program. To the contrary, the statute states that investor visas "shall be reserved for qualified immigrants who invest in a new commercial enterprise ... which will create employment in a targeted employment area." One option for the EB-5 program would be to allow a new commercial enterprise to qualify for TEA EB-5 investment, irrespective of the business's location, by committing to hire workers who live in a designated TEA. By leveraging the internet, the EB-5 program could provide technology jobs to Americans who live in rural and high-unemployment areas. Written by Bruce Levinson, SVP, Regulatory Intervention - Center for Regulatory Effectiveness. Follow CircleID on Twitter. More under: Law, Policy & Regulation [...]

Supporting New DNS RR Types with dnsextlang, Part II


The previous article introduced my DNS extension language, intended to make it easier to add new DNS record types to DNS software. It described a new perl module, Net::DNS::Extlang, that uses the extension language to automatically create perl code to handle new RRTYPEs. Today we look at my second project, intended to let people create DNS records and zone files with new RRTYPEs. I've long had a DNS "toaster", a web site where my users and I could manage our DNS zones. Rather than limiting users to a small list of RRTYPEs, it lets users edit the text of their zone files, which works fine except when it doesn't. Every hour a daemon takes any changed zone files, signs them, and passes them to the DNS servers. With no syntax check in the toaster, if there's a syntax error in a zone file, the entire rebuild process fails until someone (usually me) notices and fixes it. Since the toaster is written in python, I wrote a python library that uses the same extension language to do syntax-checked zone edits, and a simple version of the toaster as a django app that people can start with. The syntax checker does two things: one is to read text strings that are supposed to be DNS master files, or single master records, and check whether they're valid. The other is to create and parse HTML forms for DNS records to help people enter valid ones. To show how this works, I put a series of screen shots in this PDF so you can follow along. The first screen shows the site after you log in, with a few existing random domains. If you click the Create tab, you get the second screen, which lets you fill in the domain name and (if you're a site admin) the name of the user who owns the site. Click Submit, and now you're on the edit page, where you can see the zone has been created with a single comment record, just so it wouldn't be empty. There's a New Record: section where you can choose the record type you want to create and click Add.
The set of record types is created on the fly from the extension language database in the DNS that I described in the last blog post, so you can create and later edit any RRTYPE that the extension language can describe. We choose MX and click Add, which gives us a screen with a form that has all of the fields in the MX record. This form is also created on the fly by the extension language library, so for each rrtype, it will show an appropriate form with prompts for each field. Fill in the form and click Submit, and the record is added to the zone file if it's valid. The next screen shows what happens if you get the syntax wrong, in this case, an A record with an invalid IPv4 address. The extension library has a class for every field type that produces helpful error messages in case of syntax errors. Since it's sometimes tedious to edit one record at a time, there's also a Block edit mode, shown in the next screen, where you can edit the zone as a block of text. When you submit the changes, it syntax checks the zone. The next screen shows an error message for an AAAA record with an invalid IPv6 address. Not shown are some other odds and ends, notably a batch script that exports a list of zone names and a set of zone files that you can give to your DNS server. The django app is only about 1000 lines of python, of which about 1/3 is managing the various web pages, 1/3 is connecting the extlang library to the forms generated by django's forms class, and 1/3 is everything else. The python library is in pypi at, currently python3 only. The django app is on github at, written in django 1.9 and python3. It uses the dnsextlang library, of course. Written by John Levine, Author, Consultant & Speaker. Follow CircleID on Twitter. More under: DNS [...]
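The per-field-type validation described above can be sketched roughly as follows. The class names and API here are hypothetical illustrations of the idea, not the actual interface of the dnsextlang library or the django app.

```python
# Hypothetical sketch of per-field-type validation as described in the post;
# class names and the check() API are my own, not the dnsextlang library's.
import ipaddress


class FieldA:
    """Validator for an IPv4 address field (extlang field type 'A')."""
    def check(self, token: str) -> str:
        try:
            ipaddress.IPv4Address(token)
        except ipaddress.AddressValueError:
            # This is the kind of helpful message the post describes for
            # a bad A record such as one with an out-of-range octet.
            raise ValueError(f"{token!r} is not a valid IPv4 address")
        return token


class FieldI2:
    """Validator for a two-byte unsigned integer field (extlang type 'I2')."""
    def check(self, token: str) -> int:
        n = int(token)
        if not 0 <= n <= 0xFFFF:
            raise ValueError(f"{token!r} is out of range for a 16-bit field")
        return n
```

With one such class per field type, a form generator can walk a record's field list, render a prompt per field, and report a field-specific error when a value like an invalid IPv4 address is submitted, which matches the behavior shown in the screen shots.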

Is a New Set of Governance Mechanisms Necessary for the New gTLDs?


In order to reply to the question of whether a new set of governance mechanisms is necessary to regulate the new generic Top-Level Domains (gTLDs), one should first consider how efficiently the current Uniform Domain-Name Dispute-Resolution Policy (UDRP) from the Internet Corporation for Assigned Names and Numbers (ICANN) has performed, and then move to the evaluation of the Implementation Recommendation Team (IRT) recommendations. In September 2008, an analysis of the opportunities and problems for trademark owners presented by the introduction of new gTLDs [1] was published in Trademark World magazine. That analysis identified several brand protection challenges, such as the absence of required pre-launch rights protection mechanisms (RPMs), the problems of defensive registrations, and the unprecedented potential for cybersquatting [2]. According to Kristina Rosette [3], an Intellectual Property Constituency representative to ICANN's Generic Names Supporting Organization (GNSO) Council and ex-member of the Implementation Recommendation Team, ICANN has made little advancement on the issue of trademark protection in the new gTLDs despite the efforts of numerous trademark owners, associations, and lawyers. Issues with the UDRP In February 2010, the ICANN GNSO Council passed a resolution [4], requesting ICANN staff to draft an Issues Report [5] on the current state of the UDRP. According to that motion, the draft had to focus mainly on: — How insufficiently and unequally the UDRP has addressed the problems of cybersquatting; — Whether the definition of the term 'cybersquatting' needed to be reviewed or updated in the existing UDRP language, including a possible revision of the policy development process. In his book [6], 'The Current State of Domain Name Regulation: domain names as second-class citizens in a mark-dominated world', Dr.
Komaitis has interestingly outlined some of the major issues related to the UDRP, which have commonly contributed to its procedural unfairness. Some of those issues [7] can be broken down as follows: The panellists associated with the UDRP have mainly a trademark law background, which is not sufficiently oriented to the multi-stakeholder approach. The UDRP makes arbitrary use of precedent. One unique feature of the emerging arbitration process under the UDRP has been the development of its own jurisprudence. While most arbitration is done with little, if any, public disclosure, the publication of UDRP opinions on the Web has led to a practice of citing back to previous panel decisions. Some decisions have used the previous cases with only the weight of persuasive authority, while others appear to view themselves as being bound by precedent. In several cases, panels have used opinions from previous cases as persuasive authority to help address a variety of procedural and substantive matters. For example, J.P. Morgan v. Resource Marketing, D2000-0035 [8], was a dispute involving an American Complainant and an American Respondent. The Respondent's reply was late, and the Complainant argued for the inadmissibility of the late response (the Complainant cited Talk City, Inc. v. Robertson, D2000-0009 [9], as precedent for this position). The UDRP is based upon the assumption that all domain name registrations are potentially abusive and harmful, without any distinction or assessment between actual harm and the likelihood of such harm. In practice, this is not always the case. There is no authority responsible for the validation of the decisions that emerge from the UDRP panels. The bad faith element is open to wide and discretionary, if not discriminatory, interpretations. Trademark attorneys were initially concerned by the UDRP's bad-faith use requirement because under US trademark law, "use" meant that the domain name had to be "used in commerce" [10].
Three of the four factors outlining bad faith do not require any use per se[...]

Where to Search UDRP Decisions


Searching decisions under the Uniform Domain Name Dispute Resolution Policy (UDRP) is important — for evaluating the merits of a potential case and also, of course, for citing precedent when drafting documents (such as a complaint and a response) in an actual case. But searching UDRP decisions is not always an easy task. It's important to know both where to search and how to search. Unfortunately, there is no longer an official, central repository of all UDRP decisions that is freely available online. Instead, each of the UDRP service providers publishes its own search page:

   World Intellectual Property Organization (WIPO)
   The Forum (formerly known as the National Arbitration Forum, or NAF)
   Asian Domain Name Dispute Resolution Centre (ADNDRC)
   Czech Arbitration Court (CAC)

(The newest UDRP service provider, the Arab Center for Dispute Resolution, has had only two cases as of this writing and does not have — or, therefore, need — a search tool.) Each of these providers offers a different search engine, some of which are better than others. For example, three of the search pages (WIPO, the Forum, and CAC) offer field-based searches with the ability to find decisions based on criteria such as the disputed domain name, the complainant, or the respondent; ADNDRC provides only a general search field. There are other differences, too. For example, WIPO and the Forum are the only providers that also provide an index-based search. WIPO and the Forum offer the ability to limit searches to specific domain name dispute policies (other than the UDRP), but none of the providers lets users search by all relevant criteria, and only the Forum allows searches by specific top-level domains within the UDRP. Google and Other Services For advanced searches, it's sometimes helpful to conduct a Google search instead of using a UDRP-specific tool, limiting results to the relevant UDRP service provider's domain.
For example, adding "" (without the quotes) to the beginning of a general Google search will produce results only from the WIPO website. Because this means that the results may contain pages other than UDRP decisions, I often add another phrase to my search that I know will produce a UDRP result (such as: "The Complaint was filed with the WIPO Arbitration and Mediation Center"). Yes, it's awkward, but it works pretty well. There are some third-party websites that offer UDRP search tools, such as UDRP Search, DNDisputes and. But, like the official UDRP service providers' engines, these, too, have limitations. While DNDisputes offers more search fields, decisions are limited to those from WIPO; and's are limited to WIPO and the Forum. UDRP Search's options are not very robust. Bottom Line: Use Them All Ultimately, using some combination of all of the above tools and techniques is often the best practice. Doing so will enable you to search the widest number of decisions in the most advanced way possible. After more than 17 years and 60,000 decisions, UDRP jurisprudence is obviously very robust. Unfortunately, finding the most important and relevant decisions requires mastery of both the art and science of search. Written by Doug Isenberg, Attorney & Founder of The GigaLaw Firm. Follow CircleID on Twitter. More under: Domain Names, Law, UDRP [...]

Broadband Providers: What are the Implications of Virtual Reality?


Broadband service providers, take note: personal virtual reality (VR) platforms are going to reshape the industry sooner than you think. We've seen a constant stream of VR-related news coming from major industry tradeshows, online broadband publications, and even broadband CEO blog posts. I'll try to generalize their comments succinctly: personal VR platforms are expected to bring massive sales, huge increases in bandwidth consumption, and dramatic shifts in subscriber quality expectations. This is an exciting time for broadband service providers, but it's essential to consider what implications VR will have for your organization. Here are some key questions to answer if you want to stay ahead of the VR trends: Which Access Network Delivers the Bandwidth Required for VR? Bandwidth usage is about to go way, way up. Major League Baseball has recently teamed up with Intel to announce a new project in which they will deliver one live-streamed game per week in virtual reality. This represents a new age for sports enthusiasts, who will be able to tune in from the comfort of their homes to watch live, 360-degree footage of a baseball game as if they were in the stadium. The bandwidth required not only to deliver this footage, but to maintain high quality throughout, will be unprecedented. Forbes Magazine recently broke down how many gigabits per second it would take to generate a digital experience at the full fidelity of human perception, estimating that humans can process the equivalent of nearly 5.2 Gbps of sound and light — more than 200x what the FCC predicted to be the future requirement for broadband networks (25 Mbps). Operators who want to stay ahead of the curve have to make important decisions about which access network will best fit their subscribers' needs. Fiber, DOCSIS 3.1 (Full Duplex DOCSIS 3.1), and converged approaches all have their benefits. How Much Network and Subscriber Visibility is Required to Optimize Services? The simple answer: A lot.
Providers looking to satisfy the needs of their subscribers as next-generation content delivery platforms like VR enter the mainstream need a holistic view of the network, but they also need visibility beyond the network edge into the subscriber premises. This can be achieved with a combination of TR-069 standards and Internet Protocol Detail Record (IPDR) data, which allows operators to monitor the access network as well as customer edge equipment. By gaining a picture of the entire services network (including beyond the last mile), you can ensure service quality issues are minimized while also proactively resolving network and customer equipment issues, often before the subscriber is even affected. This also leads to better network intelligence, meaning faster issue resolution when subscribers have to phone the customer call center. Increasing visibility into the usage habits of your subscribers also enables you to make better predictions when planning for future capacity requirements. As an added bonus, you will open up new avenues to optimize and personalize the user experience like never before, which brings me to the last question. Will Traditional Service Models Still Meet Subscriber Needs? Service usage habits are changing more rapidly than ever, and customer preferences are becoming more distinct. The era of simple tiered service plans is coming to an end. Broadband providers must look for ways to implement strategic service plans that deliver the right services at high quality to the subscribers who need them, while ensuring subscribers who require less bandwidth aren't burdened by network congestion and buffering. Diversity among service plans is essential. Early adopters of VR must have the bandwidth they need to stream live events in high quality or play video games without interruption. Traditional cable and Internet subscribers, on the oth[...]
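The "more than 200x" figure above is simple to verify from the two numbers cited (5.2 Gbps of estimated human perceptual throughput versus the FCC's 25 Mbps benchmark):

```python
# Quick arithmetic check of the ratio cited in the article.
human_perception_gbps = 5.2   # Forbes' full-fidelity estimate
fcc_benchmark_mbps = 25       # FCC broadband benchmark cited above

ratio = human_perception_gbps * 1000 / fcc_benchmark_mbps
print(f"{ratio:.0f}x the 25 Mbps benchmark")
```

The ratio works out to 208x, consistent with the "more than 200x" claim.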

Supporting New DNS RR Types with dnsextlang, Part I


The Domain Name System has always been intended to be extensible. The original spec in the 1980s had about a dozen resource record types (RRTYPEs), and since then people have invented many more, so now there are about 65 different RRTYPEs. But if you look at most DNS zones, you'll only see a handful of types: NS, A, AAAA, MX, TXT, and maybe SRV. Why? A lot of the other types are arcane or obsolete, but there are plenty that are useful. Moreover, new designs like DKIM, DMARC, and notoriously SPF have reused TXT records rather than defining new types of their own. Why? It's the provisioning crudware. While DNS server software is regularly updated to handle new RRTYPEs, the web-based packages that most people have to use to manage their DNS are almost never updated and usually handle only a small set of RRTYPEs. This struck me as unfortunate, so I defined a DNS extension language that provisioning systems can use to look up the syntax of new RRTYPEs, so that when a new type is created, only the syntax tables have to be updated, not the software. Paul Vixie had the clever idea to store the tables in the DNS itself (in TXT records, of course), so after a one-time upgrade to your configuration software, new RRTYPEs work automagically when their description is added to the DNS. The Internet draft that describes this has been kicking around for six years, but with support from ICANN (thanks!) I wrote some libraries and a sample application that implement it. Adding new RRTYPEs is relatively straightforward because the syntax is quite simple. Each record starts with an optional name (the default being the same as in the previous record), an optional class and time to live, the mnemonic for the record type such as A or MX or NAPTR, and then a sequence of fields, each of which is a possibly quoted string of characters. Different RRTYPEs interpret the fields differently, but it turns out that a fairly small set of field types suffices for most RRTYPEs.
Here's a typical rrtype description, for a SRV record. In each line, the text after the space is descriptive:

   SRV:33:I Server Selection
   I2:priority Priority
   I2:weight Weight
   I2:port Port
   N:target Target host name

The first line says the mnemonic is SRV, the type number is 33, and it's only defined in the IN class (the "I"). There are three two-byte integer fields, priority, weight, and port, and a DNS name, target. The first word on each field line is the field name; the rest of the line is a comment for humans. When stored in the DNS, each of those lines is a string in DNS TXT records, like this:

   SRV.RRNAME.ARPA. IN TXT ("SRV:33:I Server Selection" "I2:priority Priority"
       "I2:weight Weight" "I2:port Port" "N:target Target host name")
   33.RRTYPE.ARPA. IN TXT ("SRV:33:I Server Selection" "I2:priority Priority"
       "I2:weight Weight" "I2:port Port" "N:target Target host name")

In the DNS, there are two copies, one at the text name of the RRTYPE and one at its numeric code. (Until the records are there, the software packages let you change the location. I've put descriptions at name.RRNAME.SERVICES.NET and number.RRNAME.SERVICES.NET.) See the Internet Draft for the full set of field types and syntax details. The first software package I wrote is an extension to the popular perl Net::DNS module called Net::DNS::Extlang. With the extension, if Net::DNS sees a text master record with an unknown RRTYPE name, or a binary record with an unknown RRTYPE number, it tries to look up the record description in the DNS and, if successful, passes the description to Net::DNS::Extlang, which compiles it into a perl routine to encode and decode the RRTYPE, which Net::DNS installs. The authors of Net::DNS worked with me so recent versions of Net::DNS have the necessary hooks to do this all automatically. For example, if Net::DNS didn't alre[...]
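To make the stanza concrete, here is a rough sketch (my own, not part of dnsextlang or Net::DNS) of how a tool might compile the SRV field list, three two-byte integers followed by a DNS name, into wire-format RDATA. The helper names are illustrative.

```python
# Illustrative sketch: compile the SRV field list (I2 I2 I2 N) into RDATA.
# Helper names are my own; this is not the dnsextlang library's generated code.
import struct

def encode_name(name: str) -> bytes:
    """Encode a DNS name as length-prefixed labels (no compression)."""
    out = b""
    for label in name.rstrip(".").split("."):
        raw = label.encode("ascii")
        out += struct.pack("B", len(raw)) + raw
    return out + b"\x00"  # root label terminates the name

def encode_srv_rdata(priority: int, weight: int, port: int,
                     target: str) -> bytes:
    """Three network-order 16-bit integers (I2 fields) then a name (N field)."""
    return struct.pack("!HHH", priority, weight, port) + encode_name(target)

rdata = encode_srv_rdata(0, 5, 443, "server.example.com.")
```

A compiler driven by the extension language would emit the equivalent of `encode_srv_rdata` automatically from the `I2`/`N` field list, which is exactly what makes new RRTYPEs cheap to support.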

Wireless Innovations for a Networked Society


Last week, I had the honor of moderating an engaging panel discussion at Mozilla on the need for community networks and the important role they can play in bridging the digital divide. The panel highlighted the success stories of some of the existing California-based initiatives that are actively working toward building solutions to get under-served communities online. But why do we need community networks when nationwide service providers already exist? According to Recode and the Wireless Broadband Alliance, 62 million Americans in urban centers and 16 million Americans in rural locations either don't have access to or can't afford broadband Internet. Locally, even in tech-driven San Francisco, more than 100,000 residents still do not have access to the Internet at home. A potential solution to help close this gap is to develop community-based connectivity projects, centered around the needs of the local individuals. Empowering New Voices The goals of the Mozilla challenge are simple: how do you connect the unconnected, and how do you connect people to essential resources when disaster strikes? The technical challenge can be approached in many different ways, but the crux of the problem lies in understanding and meaningfully addressing specific community needs. Ideally, by empowering individuals at the grassroots level, the dream is also to cultivate new voices that shape the future of the web and members who partake in the digital economy, reaping the benefits of a connected society. Championing Connectivity A key take-away from the event was the recurring challenge most of these organizations have faced at some point: how to build a project that is sustainable in the long term. Often, despite projects being adequately funded and having a clear technical plan of action, lack of local leadership and community engagement to carry these projects forward can result in an abrupt end once the initial deployment is completed.
Local champions for connectivity, people who understand that digital dividends can change lives and people who know the value of a connected society, are critical in developing a solution that serves the community and builds an empowered Internet citizenry. As speaker Steve Huter recounted from his many years of experience at the Network Startup Resource Center, "Request driven models tend to evolve better and have more engagement from developers and beneficiaries working together. Often as technologists, we get excited about a particular technology and we have the solution in mind; but it is really important to make sure that we are addressing the needs of the specific community and solving the right model."

Breaking Barriers

Important work driving this mission is already underway in the San Francisco Bay Area. On the Mozilla panel, experts from three initiatives furthering this cause discussed their work, but more importantly provided guidance to the attendees on how to approach these challenges. Speakers represented on the panel included:

• Marc Juul from People's Open – a community-owned and -operated peer-to-peer wireless network in Oakland
• Thu Nguyen from Flowzo – a start-up trying to fix the issue of last-mile connectivity through multi-stakeholder cooperation
• Steve Huter from Network Startup Resource Center – a 25-year-old organization that has helped build Internet infrastructure in 100 countries around the world
• Fatema Kothari from the San Francisco Bay Area Internet Society

A complete video recording of this event can be found here.

* * *

Mozilla and the National Science Foundation are partnering to give away $2M in prizes for wireless solutions that help connect the unconnected, keep communities connected following major disasters, and which make use of urban infrastructure. Got an idea? Apply here.

Written by Fatema Kothari, Vice-C[...]

Cuban Professors Get Laptops But No Wifi Capabilities


Late last year, we learned that China's 90,000-employee Haier Group would be producing laptops and tablets in partnership with GEDEME, a Cuban manufacturer that will assemble the machines using Haier parts, equipment, and production processes. Last week, a friend who is a professor at the University of Havana told me that he and other professors have been given GDM laptops. He said UCI, ISPJAE and University of Havana faculty were the first to receive the laptops, but eventually all professors at all universities would get them.

When Haier announced they would be producing laptops in Cuba, they said the machines would have Core i3, Celeron and Core i5 CPUs and up to 1 TB of storage. The processor in my friend's machine is a 1.60GHz Celeron N3060, which Intel announced April 1, 2015. The N3060 is a system on a chip with two processor cores, a graphics processing unit, and a memory controller. His laptop has 4 GB of RAM, a 97.31 GB hard drive, a CD-ROM drive and a 1,024 x 768 pixel display with 32-bit color depth. It has a wired Ethernet port, but no WiFi or Bluetooth. The machine came with UCI's Nova Unix operating system, but my friend has installed Windows in its place, and he says most people do the same. (Cuban officials say they can achieve software independence using Nova, but Cuba is not large enough to support its own software, services, and standards).

These are low-end laptops, but they represent a significant step up from phones and tablets for content creation. They are also power-efficient, making them suitable for portable use, but for some reason, they do not have WiFi radios. A laptop without WiFi is striking today. I don't know what the marginal cost of WiFi would have been, but Alibaba offers many chips for under $5 in relatively small lots. Why don't these machines have WiFi radios? Is the government trying to discourage portable use at home or at public-access hotspots? Regardless of the reason, WiFi dongles are a low-cost fix.
There are not a lot of WiFi dongles for sale on Revolico today and their prices are high, but I bet the offerings will pick up if these laptops roll out.

Written by Larry Press, Professor of Information Systems at California State University. Follow CircleID on Twitter. More under: Access Providers [...]

Renewed Website: Modern Standards Need to be Used for a Free, Open and Secure Internet


Modern Internet standards provide for more reliability and further growth of the Internet. But are you using them? You can test this on the Dutch website (also available in English and Polish). Recently the website was renewed. Not only has the style been adapted, but also the way the tests are performed and the test results are shown. A lot of additional information has been added, so that even tech-savvy internet users can find an explanation underpinning the test results. The website, an initiative of the Dutch internet community and the Dutch Government, is used to promote standards that will enable us to make the best possible use of the internet as we know it. To beat internet crime and to improve our interconnectivity, we strongly believe in applying these modern internet standards. These will safeguard our websites, our email communications and our privacy — something all of us should care about. We are very happy to see a growing number of users that test local connections, domains and email settings. The tests provided are quite fast and based on international collaboration efforts within the internet community. We think this is the only reasonable way forward in order to keep the internet as a source of connecting people, sharing information and open access to a wide range of resources. We constantly aim at improving both our tests and our advice to the users of our website. This is only possible thanks to the continuing support of the members of the Dutch Internet Standards Platform. But your use of the site and all your questions and comments also help us to better understand how these modern standards can be used in the best way. As a spin-off of our efforts, earlier this year the Dutch Secure Email Coalition was established: an initiative that aims to focus on improving security in our daily use of email.
Besides members of the Dutch Internet Standards Platform, this coalition has new members from the government and industry sectors that work closely together in sharing knowledge and experience regarding the implementation of modern standards like DMARC/DKIM, SPF and DNSSEC. Our first meetings have been very informative and productive, and I look forward to seeing a growing number of organizations implementing these standards. I hope that still more users will find the website not only useful, but inspiring as well. The Hall of Fame shows that a growing number of organizations and individuals are able to reach a 100% score. I think that this is really promising and hope that all of you will help us to keep the internet open, free and secure!

Written by Gerben Klein Baltink, Chairman of the Dutch Internet Standards Platform. Follow CircleID on Twitter. More under: Access Providers, Cybersecurity, Policy & Regulation, Web [...]
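Standards like SPF and DMARC are published as DNS TXT records, so a test site can fetch and evaluate them mechanically. As a rough illustration of the kind of check such a tool performs, here is a minimal sketch in Python; the function names and record contents are illustrative assumptions on my part, not code or data from any real testing tool.

```python
# Hypothetical sketch: evaluating published SPF and DMARC policy strings.
# Record contents below are made-up examples, not real domains' data.

def parse_dmarc(txt):
    """Parse a DMARC TXT record ('tag=value; tag=value; ...') into a dict."""
    tags = {}
    for part in txt.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")
            tags[key.strip()] = value.strip()
    return tags

def spf_enforces_hardfail(txt):
    """True if the SPF record ends with the hard-fail '-all' mechanism."""
    mechanisms = txt.split()
    return bool(mechanisms) and mechanisms[-1] == "-all"

dmarc = parse_dmarc("v=DMARC1; p=reject; rua=mailto:dmarc@example.com")
print(dmarc["p"])                                  # reject
print(spf_enforces_hardfail("v=spf1 mx a -all"))   # True
```

A real checker would of course retrieve the records via DNS and apply the full record grammar; this sketch only shows why a policy of `p=reject` or a trailing `-all` is what such tests look for.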

When a Domain Name Dispute is 'Plan B'


"Plan B" (noun): an alternative plan of action for use if the original plan should fail. While having a backup plan is usually a good idea, it's often not an effective way to obtain someone else's domain name — at least not when Plan B consists of a company filing a UDRP complaint with the hope of getting a domain name to which it is not entitled and could not acquire via a negotiated purchase. "Plan B" as a derogatory way of describing an attempted domain name acquisition usually arises in the context of a domain name that is not protected by exclusive (or any) trademark rights, or where the complainant clearly could not prevail in a UDRP proceeding. Such as (to name a few real examples): , and . A Short History of 'Plan B' The label appears to have first been used in the context of domain name disputes in two decisions from 2007, one involving , the other and . In the case, the panel described Plan B like this: "Complainant commenced this proceeding as an alternative ('Plan B') to acquire the disputed domain name after being rebuffed in the anonymous auction process." Through the years, the "Plan B" terminology has been invoked in about 50 cases, nearly all of which resulted in decisions allowing the current registrants to keep the disputed domain names. The panel in the proceeding called it "a classic 'Plan B' case where a party, having been frustrated in its negotiations to buy a domain name, resorts to the ultimate option of a highly contrived and artificial claim not supported by any evidence or the plain wording of the UDRP." In that case, the facts at first glance might seem appropriate for a UDRP case: The complainant had trademark rights in QUEEN that pre-dated the 1997 registration of the domain name, and the domain name was used in connection with a pornographic website. When the complainant contacted the registrant about buying the domain name, the registrant quoted a purchase price of $2 million or a lease at $15,000 per month. 
But, the panel focused on the "dictionary meaning" of the word "queen" and the fact that the domain name was not directed at the complainant, which is active in the flower-growing industry:

[T]he Disputed Domain Name consists of a common term and the Respondent has used the Disputed Domain Name in a way which corresponds to one of the common meanings of that term. The Complainant has failed to give the Panel any reason to think that the Respondent registered the Disputed Domain Name to capitalize on the alleged fame of the Complainant's trademarks in any way, rather than in connection with one common meaning of the Disputed Domain Name. The fact that the Disputed Domain Name redirects to adult material does not alter this finding. Where a domain name registrant tries to obtain financial gain by registering and using a non-generic domain name in which it has no rights or legitimate interests, the offering of adult content may be evidence of bad faith use.... However, as the Disputed Domain Name has a dictionary meaning, those cases do not apply.

In other words, thwarted in its effort to purchase the domain name, the complainant resorted to Plan B, filing a UDRP complaint. Obviously, that didn't work, and the complainant lost the UDRP case.

Reverse Domain Name Hijacking

In addition to simply losing a UDRP decision, a complainant that pursues a Plan B domain name dispute could see its plan backfire, if the UDRP panel finds that the complainant tried to use the policy in bad faith to attempt to deprive a registrant of its domain name. That's the definition of "reverse domain name hijacking" (RDNH), and some[...]

Unlocking the Hidden Value Within Your IT Organization


Many C-level executives are unaware their IT organizations could be sitting on a lucrative sellable supply of unused IPv4 addresses. Assessing and executing on the opportunity takes planning, but there's a clear path for getting it done. In 2014 and 2015, buyers had their pick of large block holders with millions of available and unused numbers. This surplus allowed large buyers to shop around for the lowest offer and, as a result, drive prices down to a low of $4/number. The combination of low unit prices and large quantities of easily accessible address space stimulated a buying spree with over 50 million numbers transferred globally in 2015. With the "low hanging fruit" sold off in 2015 and effective depletion of ARIN's, RIPE's and APNIC's IPv4 free pools, the supply of IPv4 numbers in North America, Europe and APAC diminished significantly. Demand, however, did not. This drove prices up considerably in 2016. Prices are expected to continue their upward trend. C-level executives are doing all they can to control costs, increase margins and drive revenue growth. But many of them are unaware that their IT organizations could be holding onto lucrative sellable assets. Address space flowing into the trading market often originates from companies that were given large quantities of "legacy" IPv4 numbers back in the 1980s and 90s, but no longer need so much address space. By selling their unused numbers, savvy address holders add directly (and often significantly) to their bottom lines while also helping the Internet community bridge the gap between IPv4 free pool depletion and full IPv6 migration. Market conditions are now producing considerable upward pressures on both demand and prices. This will increase sell-side opportunities and returns over the next two to three years, which will make it easier for executives to justify investing in renumbering projects to optimize their address space utilization, and free up IPv4 blocks for sale. 
Assessing the opportunities and executing IPv4 sale strategies takes some planning. Below is a map to getting it done.

• Figure out what you have. Getting a good handle on a company's IPv4 inventory is the executive's first, most important step in understanding the value of the potential opportunity. The ARIN whois registry is a good place to start but may not tell the full story, particularly if the company has been involved in mergers or acquisitions.

• Figure out what you can sell. If a block is registered in the company's name, unadvertised and otherwise unused internally, then it may be a good candidate for immediate sale, subject to validating that the numbers have not been hijacked by a third party and do not have negative reputations or blacklist scores. But things are often more complex, and require prospective sellers to perform some pre-work to clean up the registration records and renumber their space for greater usage efficiency and aggregation. The good news is that the returns on a sale typically dwarf the costs incurred.

• Bring your in-house counsel on board. Five years after the market's public emergence with the Microsoft-Nortel sale, some still question the legality of buying and selling numbers, and regard the market with suspicion. Their concerns are unfounded. Internet governance institutions fully recognize and openly promote the marketplace as the only meaningful resource for their members to obtain IPv4 resources. And federal courts, typically in bankruptcy proceedings, have approved the conveyance of IPv4 numbers as alienable assets.

• Get the technical team on board. Queries to the IT department often yield a "no can do" response. They may be uncomfortable with the marketplace. For years, Internet community dogma [...]
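To make the first step concrete: the size of each unused block, and therefore a rough ceiling on its sale value, can be computed directly from its CIDR prefixes. The sketch below uses Python's standard ipaddress module; the block list (documentation ranges) and the per-address price are illustrative assumptions, not figures from this article.

```python
# Rough sketch: sizing an IPv4 inventory from CIDR prefixes.
# The blocks and the $10/address price are illustrative assumptions only.
import ipaddress

unused_blocks = ["192.0.2.0/24", "198.51.100.0/24", "203.0.113.0/24"]
price_per_address = 10.0  # assumed market price in USD

total_addresses = sum(
    ipaddress.ip_network(block).num_addresses for block in unused_blocks
)
estimated_value = total_addresses * price_per_address

print(total_addresses)              # 768
print(f"${estimated_value:,.0f}")   # $7,680
```

Even at this toy scale the arithmetic shows why a single legacy /16 (65,536 addresses) can be worth well into six figures at the prices described above.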

Slovaks Worry About the Future of Their Country's .SK TLD


Almost every country code top-level domain (ccTLD) has had some kind of rough and clumsy start at its sunrise. The Internet was young, everything was new, and whoever took the national TLD first got power over it. The situation eventually sorted itself out, and now most ccTLDs are drama-free, well operated for the benefit of the people and the Internet communities in those countries. Unfortunately, not in Slovakia.

Troublesome .SK

DOT SK has been in some kind of trouble since its beginning. After the 1993 dissolution of Czechoslovakia, which until then had operated its own .CS TLD, two new countries were created: the Czech Republic with the .CZ TLD, and Slovakia with the .SK TLD. The Slovak TLD was managed by a non-profit organization called Eunet Slovakia, seated at Comenius University. Those were good times. However, certain people decided to rename their company to Eunet Slovakia, s.r.o. (s.r.o. means Ltd.; note the almost exact name). Then in 1999 they purposely misguided ICANN into changing the .SK ownership to this company, which was immediately afterward sold to foreign investors. ICANN executed the delegation record update in good faith, not knowing that ownership was in fact transferred from a non-profit to a private business. In effect, .SK was stolen. As disturbing as this sounds, it continues to be the case. We in Slovakia deal with the consequences every day. I do not want to dig much into the history, as it would certainly be a good topic for a separate article. If you are curious about this, look at the story by Ondřej Caletka. The story is based on my speech given at the IT17 conference in Prague a few weeks ago. Now, it is not impossible to run a ccTLD through private ownership if reasonable policies are in place that meet the satisfaction of the government, citizens and the community. This is the case in many countries. Let's look at how it is in Slovakia.

Stuck in the past

The system we operate now was created in 2002 when a major pre-registration occurred.
Since then, there have been only fractional changes to this system. Whatever you see now is pretty much what you would have seen 15 years ago. During all this time, SK-NIC was purely focused on its profit. There were no significant changes, no updates, no investments back into the TLD. Selling a unique commodity without any competition is indeed a great business. There is no API, so registrars need to emulate browser clicks to automate domain operations. DNSSEC is also missing. Domain changes and transfers are not done online as you would expect; they require a signed paper document to be sent to SK-NIC for actual confirmation. Foreign individuals and companies are forbidden from registering .SK, so they have to use local proxy contacts, usually a registrar company. As an outcome of these neglected domain rules, we ended up with more than 50% of all .SK registrations having inaccurate owner data on file. In other words: take any random existing .SK domain, and you have only a 50% chance of knowing who the real domain owner is.

An Irrevocable Contract

All this irresponsibility would be a valid reason for looking into alternative solutions for managing .SK. However, it is not that easy. SK-NIC, a.s., as the successor of the aforementioned Eunet Slovakia, s.r.o., has a valid contract with the Government of the Slovak Republic. And that contract is irrevocable: it cannot be terminated without SK-NIC's consent. Something like this would definitely be considered a blatant operation today, but this agreement is the result of the corrupt environment that existed in the wild '90s and early 2000s. At that time, former post-communist Eastern European countries looked more like the wild west than a well-ar[...]

IGF's Brexit Moment


When people feel powerless, they sometimes push for change at any price, and in the absence of a guillotine reach for institutions instead. This makes some sense: at worst it feels good, and at best if you believe things can't get any worse, then what's to lose by shaking them up? Calling the Floor Normally potent members of ICANN's community — people and entities for whom the sensation of powerlessness is largely unfamiliar — are nonetheless feeling that way in respect of the Internet Governance Forum (IGF). This imperfect and misshapen pollywog, half one thing and half another, born into a purposeful life but aging now into a grumpy adolescence, has a lot that's wrong about it: opaque, quixotic and inflexible governance, spasms of excessive partiality, a seemingly systemic inability to reform or face up to its own shortcomings (in this it behaves like most of us)… A long list of grievances forces us to ask whether the IGF is now so incorrigible that we must, as a matter of good governance (or just plain mercy) call time on our hapless beast. A World Without You The Community should not decide the fate of the IGF in the way an impetuous majority of Britons chose to leave the European Union: without undertaking some scenario planning. Start with what we know: that celebrity funerals demand a good eulogy. The IGF would have at least two. Version 1 would be written by the euthanisers themselves: the IGF served its purpose and was effective as long as it lasted. It proved that multistakeholderism was the right model, if one that has now been transposed onto more steady, structured, and predictable groups, the WSIS Forum not least among them. The IGF saw us through so much: early ICANN Inc, Net Mundial, the IANA transition, the Labors of Senator Cruz. RIP IGF, thank you for showing us the way. Version 2 would be written differently, but the authors would not be so obvious: the IGF's failure proves the multistakeholder model failed in concept and execution. 
Its formlessness became such a risk to the stability of the Internet that even the last of the Old Believers, the business constituency, piled in with the fatal blow. But now we have heard the Community, and return the governance of this shared resource to where it belongs — along with other delicate matters such as climate change, whales, spectrum, the peaceful uses of outer space — to an intergovernmental body. Do not fear: in this process we learned that the Internet community must have a voice and be heard, and we will almost certainly pay lip service to this aspiration; for now, please form an orderly queue outside the room while the State Councillors consider your rights to petition (be patient). Version 3 In this version, the IGF doesn't die. Rather, when the business constituency and other dissatisfied stalwarts cede the field, they are replaced by governments who, in their International Strategy of Cooperation on Cyberspace or elsewhere, have already announced their intention to reform the IGF into a more structured thing. In combination with UNESCO, a newly constituted Dynamic Coalition on Appropriate Internet Content, a new Joint IGF-WIPO-ITU Policy Committee, the unruly cornerstone of the multistakeholder model would retain its pride of place. In its new form, it would turn out senior-level communiqués and policy recommendations on the need for alternative governance arrangements, standards for the use of content, the proper and complete internationalization of ICANN, and robust system alternatives to ensure redundancy. These calls would echo, if not appear in advance of, the same demands made at the UNGA and ITU. Consensus would then be br[...]

Bespoke Processors and the Future of Networks


As I spend a lot of time on Oak Island (not the one on television, the other one), I tend to notice some of those trivial things in life. For instance, when the tide is pretty close to all the way in, it probably is not going to come in much longer; rather, it is likely to start going back out soon. If you spend any time around clocks with pendulums, you might have noticed the same thing; the maximum point at which the pendulum swings is the point where it also begins swinging back. Of course my regular readers are going to recognize the point, because I have used it in many presentations about the centralization/decentralization craze the networking industry seems to go through every few years. Right now, of course, we are in the craze of centralization. To hear a lot of folks tell it, in ten years there simply are not going to be routing protocols. Instead, we are going to all buy appliances, plug them in, and it is "just going to work." And that is just for folks who insist on having their own network — for the part of the world that is not completely consumed by the cloud. But remember, when the tide seems highest… This last week an article in another part of the information technology world caught my eye that illustrates the point well enough that it is worth looking at. Almost everyone in IT knows, of course, that Moore's law is dying; there is not much hope that we will be able to double the number of transistors in a processor forever. Not that there ever was any hope that this could go on forever, but it does seem like there was a long stretch there where all you had to do was wait a couple of years, and you could buy a processor that was twice as fast, and twice as powerful. Note: I wonder how many people have considered that maybe the PC is not "going away," but rather that sales are slowing down, at least in part, because unboxing a new one no longer means a major performance boost? 
The problem is, of course, that processing needs are not slowing down in line with the death of Moore's law. What is the solution to this problem? Bespoke processors.

It's a well-known and necessary truth: In order to have programmability and flexibility, there's simply going to be more stuff on a processor than any one application will use. —Samuel K. Moore @ IEEE Spectrum / 24 Jul 2017

What is happening is no less than a revolution in the idea of how a processor is built. Rather than building a processor that can do anything, companies are now starting to build a processor that can do one thing really well, with a minimum of overhead (such as power and cooling). What really set me to thinking about this is the news that the Apple iPad actually carries a specialized processor, the most current version of which seems to be the A11 and A11X. This isn't quite a bespoke processor, of course, because tablets are still designed to do a wide range of tasks. But let's not forget the Network Processing Units (NPUs, which we normally just call ASICs) housed in most routers and switches. Nor the Graphics Processing Units (GPUs) housed in most video systems and electronic currency mining operations. What we are actually seeing in the processor world is the beginning of the end of the centralized model. No longer will one processor solve "every" problem; rather, computers will need to become, must become, even more of a collection of processors than they have ever been in the past. But this is the key, isn't it? The centralized model will continue alongside the decentralized model, each solving different sorts of problems. There will be no "one processor to rule them all." Now let's turn to the networking w[...]

UDRP and the ACPA Differences, Advantages and Their Inconveniences


Along came the cyber-squatters with the dot-com boom:

"One problem with the Internet, non-existent before 1994, is the confrontation between persons who, either intentionally or unintentionally, create an address on the Internet which includes someone else's trademark." — Michael A. Daniels [1], Chairman of the Board for Network Solutions Inc. (July 1999)

Just over a month apart, ICANN's Uniform Domain Name Dispute Resolution Policy (UDRP) was approved on October 24, 1999, and the Anticybersquatting Consumer Protection Act (ACPA) was enacted on November 29, 1999. While any decision to pursue cyber-squatters under the ACPA or the UDRP belongs to the trademark owner, the attorney who advises the trademark owner should have a good working knowledge of the benefits and weaknesses of each method.

The UDRP and ACPA differences – their advantages and inconveniences

The ACPA and the UDRP provide two separate and distinct methods for resolving domain name disputes. Both alternatives have many critics and proponents, but the true value of each will ultimately be determined by how well each combats cyber-squatting. Separately, the UDRP and the ACPA will probably work well to defuse most of the cyber-squatting that is currently invading the Internet. Combined, the UDRP and the ACPA can be a cost-saving and effective way to prevent cybersquatting in the top-level domains (TLDs), the country-code top-level domains (ccTLDs) and the future new generic top-level domains (gTLDs). Nonetheless, neither is specifically tailored to be more effective for any specific case, but each one provides noticeable benefits to different types of cases. Because the UDRP is less expensive than litigation, ICANN's UDRP is probably best suited for small businesses and trademark owners that are merely attempting to stop the use of their trademark.
This method will also be helpful to those trademark owners who are fighting registrants that registered their domain names prior to the enactment of the ACPA, because under the ACPA those trademark owners would not be able to receive damages. Litigation under the ACPA [2] will be better suited for celebrities, e.g., Tom Cruise, Brad Pitt, etc., and for large companies seeking damages. Also, the 'in rem' proceeding seems enticing, but the well-advised counsellor should note that this proceeding is used only in very specific circumstances. The downside of an ACPA lawsuit is that lawsuits are extremely expensive, time-consuming, stressful and uncertain. Considerable investment is required in terms of a good attorney, and it can take years to get a resolution, because to be successful in an ACPA lawsuit the trademark owner must prove: (1) that the mark is valid; (2) that the mark was distinctive when the domain name was registered; (3) that the domain name is identical or confusingly similar to the mark; and (4) that the registrant registered the domain name in bad faith in order to profit from the mark. The 'in rem' provision of the ACPA is limited to the United States because the ACPA is a U.S. statute; hence the trademark owner needs to have substantial ties to the U.S. in order to bring a case under the ACPA in U.S. courts (for example, CNN News vs. CNN China). If speed and cost efficiency are the two most desirable objectives for the client, then the UDRP is the best alternative. If these two objectives are not the primary concerns, then the ACPA may be a better alternative. One of the major drawbacks [3] of a UDRP proceeding is that there is no possibility of monetary damages. This is probably the major reason th[...]

Some Whois Lookup Services Might be Broken


There are thousands of sites and services on the 'net that offer domain name whois lookup services. As of last night, many of them may have stopped working. Why? Many of them rely on fairly rudimentary software that parses the whois from Verisign (for .com and .net) and then relays the query to the registrar whois. The site or service then displays the whois output from the registrar's whois server to you. For "thick" registries, like .biz or any new TLD, the whois is always served directly by the registry whois server, so there's no "referral" or extra parsing. Additional fields in thick whois, therefore, shouldn't have much impact on most third-party whois lookup services. So what happened? Verisign, along with most other gTLD registries, updated their whois output last night to include some new fields, namely the registrar abuse contacts:

Registrar Abuse Contact Email:
Registrar Abuse Contact Phone: +xxx.xxxxx

As a result of the changes, a lot of software is currently "broken" as it simply cannot cope with the new output. Is whois broken? No. It's working fine. The issue is with the software clients and how they are written. A lot of them were written years ago and have certain settings hardcoded in. From what I can see, the changes in whois output only seem to be impacting .com and .net whois lookups, and only with *some* software. Doing whois lookups from my Mac's command line, for example, still works fine for "thick" registries, but is failing miserably for .com and .net. Will this impact registrars? That depends. Registrars generally do NOT use whois to check if domain names are available. Registrars tend to use EPP checks, zone files or other tools to see if a domain is taken or not. (And no, using DNS checks would be a terrible idea!) There is a possible impact on *some* registrars when it comes to domain name transfers of .com and .net domain names.
However, any impact is going to be short-lived, as registrars will be aware of the issues and will update their software to handle the changes.

UPDATE: Here's a fix for jwhois and similar software from Chris Pelling. In /etc/jwhois.conf (the location might vary) find the line referencing "verisign-grs":

    ".*" {
        whois-redirect = ".*Whois Server: (.*)";
    }

and replace it with:

    ".*" {
        whois-redirect = ".*Registrar WHOIS Server: (.*)";
    }

The above changes *should* help, though it will depend on which software you are using. Thanks to Paul Goldstone for mentioning the issue to me first :)

Written by Michele Neylon, MD of Blacknight Solutions. Follow CircleID on Twitter. More under: Domain Names, Registry Services, Whois [...]
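A client-side fix along the same lines can be sketched in Python: try the new "Registrar WHOIS Server" label first, then fall back to the old "Whois Server" label, so the client copes with both old- and new-style .com/.net output. The function name and sample records below are my own illustrative assumptions, not code from any of the affected tools.

```python
# Illustrative sketch: extract the registrar whois referral from thin-registry
# output, accepting both the old and new Verisign field labels.
import re

REFERRAL_PATTERNS = [
    r"Registrar WHOIS Server:\s*(\S+)",  # new-style output
    r"Whois Server:\s*(\S+)",            # old-style output
]

def extract_referral(whois_output):
    """Return the registrar whois server to query next, or None."""
    for pattern in REFERRAL_PATTERNS:
        match = re.search(pattern, whois_output)
        if match:
            return match.group(1)
    return None

old_style = "Domain Name: EXAMPLE.COM\n   Whois Server: whois.example-registrar.com\n"
new_style = "Domain Name: EXAMPLE.COM\n   Registrar WHOIS Server: whois.example-registrar.com\n"
print(extract_referral(old_style))   # whois.example-registrar.com
print(extract_referral(new_style))   # whois.example-registrar.com
```

Matching a list of known labels rather than one hardcoded string is exactly what the jwhois configuration change above does; doing it in the client keeps lookups working across registry output changes.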

Trademark Registrations on the 'Supplemental Register' Don't Count (in Domain Name Disputes)


The Uniform Domain Name Dispute Resolution Policy (UDRP) has never required that a complainant own any trademark registrations to succeed in a domain name dispute, given that common law trademark rights (if properly established) are sufficient. But, as a pair of recent UDRP decisions reminds us, even some registrations are inadequate.

The issue relates to the first element of every UDRP complaint, which requires the party seeking relief to prove that the "domain name is identical or confusingly similar to a trademark or service mark in which the complainant has rights" (emphasis added). The UDRP doesn't specify what kind of "rights" are necessary. Through the years, and in the two recent decisions, UDRP panels have been presented with trademarks registered on the "Supplemental Register" at the U.S. Patent and Trademark Office (USPTO).

What is the Supplemental Register?

The Supplemental Register is by definition reserved for trademarks that are "capable of distinguishing applicant's goods or services" but are "not registrable on the principal register." An application to register a trademark on the Supplemental Register "shall not be published for or be subject to opposition," and such registrations are always subject to cancellation.

The Supplemental Register offers some protections for trademark owners, such as the ability to use the circle-R symbol and the right to bring an infringement action in federal court. But trademarks on the Supplemental Register do not attain the same status as those on the Principal Register. As the International Trademark Association (INTA) has summarized:

    [A] Supplemental Registration does not convey the presumptions of validity, ownership and exclusive rights to use the mark that arise with a registration on the Principal Register. In addition, a Supplemental Registration cannot be used to prevent the importation of infringing or counterfeit products. Finally, a Supplemental Registration can never become incontestable.
UDRP panels have traditionally looked upon trademarks on the Supplemental Register with great skepticism. Here's what WIPO's Overview of UDRP decisions says about these registrations:

    Complainants relying on trademark registrations listed solely on the USPTO Supplemental Register are expected to show secondary meaning in order to establish trademark rights under the Policy because under US law a supplemental registration does not by itself provide evidence of distinctiveness to support trademark rights. Even where such standing is established, panels may scrutinize the degree of deference owed to such marks in assessing the second and third elements.

And yet, some complainants in UDRP proceedings still try to rely on trademarks that exist only on the Supplemental Register.

UDRP Panel: Supplemental Registrations 'Insufficient'

The most recent decisions are from a pair of cases filed by the same complainant, Corporate Image Marketing, Inc., which filed complaints for the domain names <> and <> (in one case) and <> and <> (in a second case). In both of those cases, Corporate Image Marketing alleged that it had common law trademark rights in the relevant marks, as well as rights based on registrations on the Supplemental Register. But in both cases, citing the WIPO Overview quoted above, the panel found that the Supplemental Register registrations were inadequate for purposes of the UDRP. Those registrations, the panel wrote in both decisions, are "insufficient to establish Complainant's[...]

Telecom Heroics in Somalia


Internet service in and around Mogadishu, Somalia suffered a crippling blow recently as the East African Submarine System (EASSy) cable, which provides service to the area, was cut by the anchor of a passing ship. The government of Somalia estimated that the impact of the submarine cable cut was US$10 million per day and detained the MSC Alice, the cargo vessel that reportedly caused the damage. The cable was repaired on 17 July.

The incident is the latest in a series of recent submarine cable breaks (see Nigeria, Ecuador, Congo-Brazzaville and Vietnam) that remind us how dependent much of the world remains on a limited set of physical connections which maintain connectivity to the global Internet.

Internet in Mogadishu

West Indian Ocean Cable Company, together with local partner Dalkom Somalia, brought the first broadband cable to the troubled Horn of Africa via the East Africa submarine cable system. (Video source: CGTN Africa)

The story of how high-speed Internet service came to Mogadishu is nothing short of remarkable. It involved Somali telecommunications personnel staring down the threat of a local terrorist group (Al-Shabaab) in order to establish Somalia's first submarine cable connection. This submarine cable link would be vital if Mogadishu were to have any hope of improving its local economy and ending decades of violence and hunger. However, in January 2014, Al-Shabaab announced a prohibition against 'mobile Internet service' and 'fiber optic cables', stating:

    Any individual or company that is found not following the order will be considered to be working with the enemy and they will be dealt with in accordance with Sharia law.

The government of Somalia urged its telecoms not to comply with the Al-Shabaab ban.
Then in February 2014, technicians from Somalia's largest operator, Hormuud Telecom, were forced at gunpoint by Al-Shabaab militants to disable their mobile Internet service. At that time, Internet service in Mogadishu was entirely reliant on bulk satellite service, which has limited capacity and suffers from high latency when compared to submarine cable or terrestrial fiber-based service. Liquid Telecom's terrestrial service to Mogadishu wouldn't become active until December 2014, and the semi-autonomous regions of Somaliland and Puntland in the northern part of the country use terrestrial connections to Djibouti for international access.

Despite the threats from Al-Shabaab, Hormuud Telecom elected to press ahead with its planned activation of new service via submarine cable that would be crucial for the development of Mogadishu's economy.

    "Telecom companies in #Somalia spent millions on fiber optic internet service and have no plans to slow down despite #Shabab ban." — Harun Maruf (@HarunMaruf) January 22, 2014

As illustrated in the graphic below, the loss of EASSy caused Hormuud to revert to medium-earth orbit satellite operator O3b and, to a lesser degree, Liquid Telecom out of Kenya. As we have noted in the past, O3b enjoys a latency advantage over traditional geostationary satellite service; however, a satellite link cannot replace the considerable capacity lost due to a submarine cable cut. As a result, during the cable outage, there were widespread connectivity problems in Mogadishu.

Conclusion

A couple of months after the EASSy cable went live in 2014, BBC reported on the 'culture sho[...]