Subscribe: CircleID: Featured Blogs
http://www.circleid.com/rss/rss_comm/
Language: English
Tags: bitcoin, cuba, data, domain names, domain, gdpr, icann, industry, infrastructure, internet, names, network, new, service, whois

CircleID: Featured Blogs



Latest blogs postings on CircleID



Updated: 2018-02-20T20:17:00-08:00

 



WHOIS Access and Interim GDPR Compliance Model: Latest Developments and Next Steps

2018-02-20T12:17:00-08:00

WHOIS access and development of an interim GDPR compliance model remains THE hot topic within the ICANN community. Developments are occurring at a break-neck pace, as ICANN and contracted parties push for an implementable solution ahead of the May 25, 2018 effective date of the GDPR. To quickly recap:

- Between November 11, 2017 and January 11, 2018, various ICANN community participants submitted different proposed interim GDPR compliance models to ICANN;
- On January 12, 2018, ICANN published a set of three proposed interim GDPR compliance models of its own design for community input;
- On January 24, 2018, the ICANN Intellectual Property and Business Constituencies (IPC and BC, respectively) held a community-wide webinar, with in-person attendees in Washington, DC and Brussels, to discuss the ICANN and community models, and key issues and concerns in developing an interim compliance model while preserving access to WHOIS data for specific legitimate purposes, including law enforcement, cybersecurity, consumer protection, and intellectual property enforcement, among other business and individual user needs;
- On January 29, 2018, ICANN formally closed its community input period on the compliance models;
- On February 1, 2018, the IPC and BC sent a joint letter to the Article 29 Working Party, with a copy to ICANN, providing an overview of WHOIS uses and needs for law enforcement, cybersecurity, consumer protection and intellectual property enforcement, and how these legitimate purposes fit within the framework of the GDPR;
- On February 2, 2018, ICANN published a matrix of all the proposed interim compliance models, and a draft summary of discussion and comments regarding the models;
- On February 7, 2018, the European Commission provided additional input to ICANN regarding the various proposed compliance models; and
- Between February 10 and February 16, 2018, ICANN provided updates to various community leaders regarding a compliance model that ICANN had begun to coalesce around, based on the prior models, community input, and community discussions (the "convergence model").

ICANN is now poised to formally publish the convergence model, although the community continues to discuss and seek a solution that is acceptable to all stakeholders. As part of those continued discussions, the IPC and BC will be hosting another cross-community discussion, following up on their co-hosted event on January 24. This second event will take place on Thursday, February 22, 2018 from 9 am to 12 pm Eastern (US) (1400–1700 UTC), with in-person participation in the Winterfeldt IP Group offices in Washington, DC and the ICANN office in Brussels, Belgium. Remote participation will also be available through Adobe Connect. We invite all readers to participate in this important ongoing conversation. Please RSVP to denise@winterfeldt.law if you or your colleagues would like to join in person in Washington, DC or Brussels, or via remote participation.

Written by Brian Winterfeldt, Founder and Principal at Winterfeldt IP Group

Follow CircleID on Twitter

More under: Domain Names, ICANN, Law, Privacy, Whois [...]



SpaceX Starlink and Cuba - A Match Made in Low-Earth Orbit?

2018-02-20T11:05:00-08:00

I've suggested that Cuba could use geostationary-orbit (GSO) satellite Internet service as a stopgap measure until they could afford to leapfrog over today's technology to next-generation infrastructure. They did not pick up on that stopgap suggestion, but how about low-Earth orbit (LEO) satellite Internet service as a next-generation solution? SpaceX, OneWeb, Boeing and others are working on LEO satellite Internet projects. There is no guarantee that any of them will succeed — these projects require new technology and face logistical, financial and regulatory obstacles — but, if successful, they could provide Cuba with affordable, ubiquitous, next-generation Internet service. Cuba should follow and consider each potential system, but let's focus on SpaceX since their plan is ambitious and they might have the best marketing/political fit with Cuba. LEO satellite service will hopefully reach a milestone this week when SpaceX launches two test satellites. If the tests go well, SpaceX plans to begin launching operational satellites in 2019 and begin offering commercial service in the 2020-21 time frame. They will complete their first constellation of 4,425 satellites by 2024. (To put that in context, there are fewer than 2,000 operational satellites in orbit today). SpaceX has named their future service "Starlink," and, if Starlink succeeds, they could offer Cuba service as early as 2020 and no later than 2024 depending upon which areas they plan to service first. What has stopped the Cuban Internet and why might LEO satellites look good to Cuba? Cuba blames their lack of connectivity on the US embargo, but President Obama cleared the way for the export of telecommunication equipment and services to Cuba and Trump has not reversed that decision. I suspect that fear of losing political control — the inability to filter and surveil traffic — stopped Cuba from allowing GSO satellite service. Raúl Castro and others feared loss of control of information when Cuba first connected to the Internet in 1996, but Castro is about to step down and perhaps the next government will be more aware of the benefits of Internet connectivity and more confident in their ability to use it to their advantage. A lack of funds has also constrained the Cuban Internet — they cannot afford a large terrestrial infrastructure buildout and are reluctant (for good and bad reasons) to accept foreign investment. SpaceX is building global infrastructure so the marginal cost of serving Cuba would be near zero. They say that the capital equipment for providing high-speed, low-latency service to a Cuban home, school, clinic, etc. would be a low-cost, user-installed ground-station. I've not seen ground-station price estimates from SpaceX, but their rival OneWeb says their $250 ground-station will handle a 50 Mbps, 30 ms latency Internet link and serve as a hot-spot for WiFi, LTE, 3G or 2G connectivity. Since the marginal cost of serving a nation would be small and they hope to provide affordable global connectivity, I expect their service price will vary among nations. Prices would be relatively high in wealthy and low in poor nations — there would be no point in having idle satellites flying over Cuba or any other place. Expansion of the Cuban Internet is also constrained by bureaucracy and vested financial interest in ETECSA and established vendors. 
While I do not endorse Cuba's current monopoly service and infrastructure ownership policy, it could remain unchanged if ETECSA were to become a reseller of SpaceX Internet connectivity. In summary, if Starlink succeeds, they could offer affordable, ubiquitous high-speed Internet, saving Cuba the cost of investing in expensive terrestrial infrastructure and allowing ETECSA to maintain its monopoly. The only intangible roadblock would be a loss of control of traffic. (But Cuban propagandists and trolls would be able to reach a wider audience :-). That is the rosy picture from the Cuban point of view, what about SpaceX? OneWeb[...]



WHOIS Inaccuracy Could Mean Noncompliance with GDPR

2018-02-15T12:41:00-08:00

The European Commission recently released technical input on ICANN's proposed GDPR-compliant WHOIS models that underscores the GDPR's "Accuracy" principle — making clear that reasonable steps should be taken to ensure the accuracy of any personal data obtained for WHOIS databases and that ICANN should be sure to incorporate this requirement in whatever model it adopts. Contracted parties concerned with GDPR compliance should take note. According to Article 5 of the regulation, personal data shall be "accurate and, where necessary, kept up to date; every reasonable step must be taken to ensure that personal data that are inaccurate, having regard to the purposes for which they are processed, are erased or rectified without delay." This standard is critical for maintaining properly functioning WHOIS databases and would be a significant improvement over today's insufficient standard of WHOIS accuracy. Indeed, European Union-based country code TLDs require rigorous validation and verification, much more in line with GDPR requirements — a standard to strive for.

The stage is set for an upgrade to WHOIS accuracy: ICANN's current approach to WHOIS accuracy simply does not comply with the GDPR. Any model selected by ICANN to comply with the GDPR must be accompanied by new processes to validate and verify the contact information contained in the WHOIS database. Unfortunately, the current Registrar Accreditation Agreement, which includes detailed provisions requiring registrars to validate and verify registrant data, does not go far enough to meet these requirements. At a minimum, ICANN should expedite the implementation of cross-field validation, which is required by the 2013 RAA but to date has not been enforced. These activities should be supplemented by examining other forms of validation, building on ICANN's experience in developing the WHOIS Accuracy Reporting System (ARS), which examines the accuracy of contact information from the perspective of syntactical and operational validity. Validation and accuracy of WHOIS data has also been a long-discussed matter within the ICANN community — the 2014 Final Report from the Expert Working Group on gTLD Directory Services: A Next-Generation Registration Directory Service (RDS) devotes an entire chapter to "Improving Data Quality," with a recommendation for more robust validation of registrant data. And, not insignificantly, ICANN has already investigated and deployed validation systems in its operations, including those used by its Compliance department to investigate accuracy complaints.

Despite its significance to the protection and usefulness of WHOIS data, the accuracy principle is surprisingly absent from the three WHOIS models presented by ICANN for discussion among relevant stakeholders. Regardless of which model is ultimately selected, the accuracy principle must be applied to any WHOIS data processing activity in a manner that addresses GDPR compliance — both at inception, when a domain is registered, and later, when data is out of date. All stakeholders can agree that WHOIS data is a valuable resource for industry, public services, researchers, and individual Internet users. The GDPR's "Accuracy" principle aside, taking steps to protect the confidentiality of this resource would be meaningless if the data itself were not accurate or complete.

Written by Fabricio Vayra, Partner at Perkins Coie LLP

Follow CircleID on Twitter

More under: Domain Names, ICANN, Privacy, Whois [...]
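To make "syntactical and operational validity" a little more concrete, here is a minimal, hypothetical sketch of the kind of syntactic and cross-field checks a registrar might run on registrant contact data before accepting a registration. The field names and rules are illustrative assumptions, not the 2013 RAA's actual specification or ICANN's ARS methodology.

import re

# Hypothetical registrant record; field names are illustrative, not the RAA schema.
CONTACT = {
    "email": "registrant@example.com",
    "phone": "+31.201234567",          # EPP-style format: +CC.number
    "country": "NL",
    "postal_code": "1012 AB",
}

EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")
PHONE_RE = re.compile(r"^\+\d{1,3}\.\d{4,14}$")

# Toy cross-field rule set: postal-code pattern must match the declared country.
POSTAL_RULES = {
    "NL": re.compile(r"^\d{4}\s?[A-Z]{2}$"),
    "US": re.compile(r"^\d{5}(-\d{4})?$"),
}

def validate(contact: dict) -> list:
    """Return a list of problems; an empty list means the record passed these checks."""
    problems = []
    if not EMAIL_RE.match(contact.get("email", "")):
        problems.append("email is not syntactically valid")
    if not PHONE_RE.match(contact.get("phone", "")):
        problems.append("phone is not in +CC.number format")
    rule = POSTAL_RULES.get(contact.get("country", ""))
    if rule and not rule.match(contact.get("postal_code", "")):
        problems.append("postal code does not match the declared country")
    return problems

if __name__ == "__main__":
    print(validate(CONTACT) or "syntactic and cross-field checks passed")

Operational validation (confirming the mailbox or phone actually works, for example via a confirmation link or call-back) would sit on top of checks like these; that step is what separates verification from purely syntactic validation.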



Who Will Crack Cloud Application Access SLAs?

2018-02-14T12:14:01-08:00

The chart below ought to be in every basic undergraduate textbook on packet networking and distributed computing. That it is absent says much about our technical maturity level as an industry. But before we look at what it means, let's go back to some basics. When you deliver a utility service like water or gas, there's a unit for metering its supply. The electricity wattage consumed by a room is the sum of the wattage of the individual appliances. The house consumption is the sum of the rooms, the neighbourhood is the sum of the houses, and so on. Likewise, we can add up the demand for water, using litres. These resource units "add up" in a meaningful way. We can express a service level agreement (SLA) for utility service delivery in that standard unit in an unambiguous way. This allows us to agree both the final end-user delivery, as well as to contract supply at any administrative or management boundaries in the delivery network. What's really weird about the broadband industry is that we've not yet got a standard metric of supply and demand that "adds up." What's even more peculiar is that people don't even seem to be aware of its absence, or feel the urge to look for one. What's absolutely bizarre is that it's hard to get people interested even when you do finally find a really good one! Picking the right "unit" is hard because telecoms is different to power and water in a crucial way. With these physical utilities, we want more of something valuable. Broadband is an information utility, where we want less of something unwanted: latency (and in extremis, loss). That is a tricky conceptual about-turn. So we're selling the absence of something, not its presence. It's kind of asking "how much network latency mess-up can we deal with in order to deliver a tolerable level of application QoE screw-up”. Ideally, we'd like zero "mess-up" and "screw-up," but that's not on offer. And no, I don't expect ISPs to begin advertising "a bit less screwed-up than the competition" anytime soon to consumers! The above chart breaks down the latency into its independent constituent parts. What it says is: For any network (sub-)path, the latency comprises (G)eographic, packet (S)ize, and (V)ariable contention delay — the "vertical" (de)composition. Along the "horizontal" path the "Gs", "Ss", and "Vs" all "add up". (They are probabilities, not simple scalars, but it's still just ordinary algebra.) You can "add up" the complete end-to-end path "mess-up" by breaking each sub-path "mess-up" into G, S and V; then adding the Gs, Ss, and Vs "horizontally"; and then "vertically" recombining their "total mess-up" (again, all using probability functions to reflect we are dealing with randomness). And that's it! We've now got a mathematics of latency which "adds up", just like wattage or litres. It's not proprietary, nobody holds a patent on it, everyone can use it. Any network equipment or monitoring enterprise with a standard level of competence can implement it as their network resource model. It's all documented in the open. This may all seem a bit like science arcana, but it has real business implications. Adjust your retirement portfolio accordingly! Because it's really handy to have a collection of network SLAs that "add up" to a working telco service or SaaS application. In order to do that, you need to express them in a unit that "adds up". In theory, big telcos are involved in a "digital transformation" from commodity "pipes" into cloud service integration companies. 
With the occasional honourable exception (you know who you are!), there doesn't seem to be much appetite for engaging with fundamental science and engineering. Most major telcos are technological husks that do vendor contract management, spectrum hoarding, and regulatory guerrilla warfare, with a bit of football marketing on the side. In contrast, the giant cloud companies (like Amazon and Google) are thronged with PhDs thinking about flow efficiency, trade-offs[...]
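For readers who want to see how such a decomposition "adds up" in practice, here is a small numerical sketch. It treats each hop's delay as the sum of a fixed geographic term (G), a size-dependent serialization term (S), and a random contention term (V), and composes hops by adding samples. The hop parameters are invented, and this Monte Carlo toy is only a stand-in for the proper algebra of delay distributions the article alludes to.

import random

# Per-hop parameters (all invented): propagation delay G and serialization
# delay S in milliseconds, plus a crude exponential model of contention delay V.
HOPS = [
    {"G": 2.0, "S": 0.12, "V_mean": 0.5},   # access link
    {"G": 8.0, "S": 0.01, "V_mean": 1.5},   # metro aggregation
    {"G": 25.0, "S": 0.01, "V_mean": 3.0},  # long-haul core
]

def sample_path_delay() -> float:
    """One end-to-end delay sample: G, S and V 'add up' across hops."""
    total = 0.0
    for hop in HOPS:
        contention = random.expovariate(1.0 / hop["V_mean"])  # random V
        total += hop["G"] + hop["S"] + contention
    return total

def percentile(samples, p):
    ordered = sorted(samples)
    return ordered[int(p * (len(ordered) - 1))]

if __name__ == "__main__":
    samples = [sample_path_delay() for _ in range(100_000)]
    print(f"median ~{percentile(samples, 0.50):.1f} ms")
    print(f"p99    ~{percentile(samples, 0.99):.1f} ms")

Because the components stay separate, you can ask where a latency budget is being spent (geography, serialization, or contention) and write an SLA for each sub-path in the same units, which is exactly the "adds up" property the author is after.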



GDPR - Territorial Scope and the Need to Avoid Absurd and Inconsistent Results

2018-02-14T09:54:00-08:00

It's not just establishment, it's context! There is an urgent need to clarify the GDPR's territorial scope. Of the many changes the GDPR will usher in this May, the expansion of EU privacy law's territorial scope is one of the most important. The GDPR provides for broad application of its provisions both within the EU and globally. But the fact that the GDPR has a broad territorial scope does not mean that every company, or all data processing activities, are subject to it. Rather, the GDPR puts important limitations on its territorial scope that must be acknowledged and correctly analyzed by those interpreting the regulation for the global business community. Otherwise, it could lead to absurd implementation and bad policy, which no one wants.

EU Establishment

In essence: Where registrars are established in the EU, the registrars' use and processing of personal data is subject to the GDPR. That is no surprise to anyone. Where registrars have no establishment in the EU, but offer domain name registration services to data subjects in the EU, the processing of personal data in the context of such an offer will also be subject to the GDPR. Again, no surprise, and logical. However, where a registrar is based outside the EU, without an establishment in the EU, and uses a processor in the EU, such a non-EU based registrar (as a controller) will not become subject to the GDPR merely by virtue of the EU-based processor's establishment in the EU. The GDPR only applies to the controller under Article 3(1) GDPR where the processor in the EU would be considered the controller's establishment. If the controller uses an external service provider (not a group company), this processor will generally not be considered an establishment of the controller. It would only be caught by the GDPR if the processing is done "in the context" of that establishment. That is the key, and I'll discuss an example of potentially absurd results if this is not interpreted correctly. NB: All obligations directly applicable to the processor under the GDPR will, of course, apply to the EU-based processor.

WHOIS

Consider the example of WHOIS (searchable registries of domain name holders), where there is presently much debate amongst the many and varied actors in the domain name industry over whether public WHOIS databases can remain public under the GDPR. The second part of ICANN's independent assessment of this issue offered an analysis of the GDPR's territorial reach that deserves closer scrutiny. Addressing the territorial limits of the law, the authors state: "Therefore, all processing of personal data is, no matter where it is carried out, within the territorial scope of the GDPR as long as the controller or processor is considered established within the EU; the nationality, citizenship or location of the data subject is irrelevant." In other words, the authors conclude that as long as a controller or processor has an "establishment" in the EU, all processing of personal data it undertakes, regardless of the location or nationality of the data subject and regardless of whether the processing has any nexus to the EU, is subject to the GDPR. This is wrong. The analysis overlooks key language of the GDPR. Under Article 3.1, the law applies not to any processing that is done by a company that happens to have an establishment in the EU, but to processing done "in the context of" that establishment. This distinction makes a difference. Imagine, for example, a Canadian company that has an office in Paris.
Under the authors' analysis, the GDPR would apply to all processing done by that company simply by virtue of it having a Paris office, whether the data subjects interacting with it were French, Canadian, or even American, whether they accessed the company's services from France, Canada, or the U.S., and even if all the processing occurred outside of the EU. This would be an absurd result inconsistent with the text of the GDPR and sound policy. In order to deter[...]



The Future of .COM Pricing

2018-02-13T08:59:00-08:00

When you've been around the domain industry for as long as I have, you start to lose track of time. I was reminded late last year that the 6-year agreement Verisign struck with ICANN in 2012 to operate .com will be up for expiration in November of this year.

Now, I don't for a second believe that .com will be operated by any other party, as Verisign's contract does give them the presumptive right of renewal. But what will be interesting to watch is what happens to Verisign's ability to increase the wholesale cost of .com names.

The 2012 agreement actually afforded Verisign the ability to increase prices by 7%, up to four times over the 6-year course of the contract. However, when the US Commerce Department approved the agreement, it did so without allowing Verisign to implement those price increases.

At that time, the wholesale price of a .com domain was $7.85, and that's where it stands today with the prices to registrars being frozen. Under the terms of the original 2012 agreement, .com prices could have been as high as $10.26 today had Verisign taken advantage of their price increases.
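As a rough back-of-the-envelope check (my arithmetic, not a figure from the agreement), four 7% increases compounded on the $7.85 base come out around $10.29; the $10.26 cited above presumably reflects the exact timing and rounding of the scheduled increases.

# Rough check of the compounding ceiling, not the contract's actual schedule.
price = 7.85
for _ in range(4):          # up to four 7% increases over the six-year term
    price *= 1.07
print(f"compounded ceiling: ${price:.2f}")   # roughly $10.29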

As an aside, I've long thought that the price of a single .com domain was incredibly inexpensive when you think about it in comparison to other costs of running a business.

While I don't have any concrete insight into whether the price freeze will continue, there is obviously a new administration in Washington, DC. Its view on this agreement could differ from that of the previous administration. Since this administration came into office, we have seen a number of pro-business initiatives undertaken, so perhaps that will carry over to the Verisign agreement as well.

Another big difference today is that the domain market, in general, is vastly different from what it was in 2012 — with the introduction of hundreds of new gTLDs. There are exponentially more alternatives to .com today than there were 6 years ago, so it's possible that this, too, will have an impact on the decision.

With over 131 million registered .com names, it will be interesting to see how a potential increase of a few dollars per name would play out in the market, and the impact that it would have on corporate domain portfolios which are still largely comprised of .com names.

Written by Matt Serlin, SVP, Client Services and Operations at Brandsight

Follow CircleID on Twitter

More under: Domain Names, ICANN




Why Is It So Hard to Run a Bitcoin Exchange?

2018-02-13T08:42:00-08:00

One of the chronic features of the Bitcoin landscape is that Bitcoin exchanges screw up and fail, starting with Mt. Gox. There's nothing conceptually very hard about running an exchange, so what's the problem? The first problem is that Bitcoin and other blockchains are by design completely unforgiving. If there is a bug in your software which lets people steal coins, too bad, nothing to be done. Some environments need software that has to be perfect, or as close as we can make it, such as space probes that have to run for years or decades, and implanted medical devices where a bug could kill the patient. Programmers have software design techniques for those environments, but they generally start with a clear model of what the environment is and what sort of threats the device will have to face. Then they write and test the code as completely as they can, and burn it into a read-only memory in the device, which prevents deliberate or accidental later changes to the code. Running an online cryptocurrency exchange is about as far from that model as one can imagine. The exchange's web site faces the Internet where one can expect non-stop hostile attacks using rapidly evolving techniques. The software that runs the web site and the databases is ordinary server stuff, reasonably good quality, but way too big and way too dynamic to allow the sorts of techniques that space probes use. Nonetheless, there are plenty of ways to try and make an exchange secure. A bitcoin exchange receives bitcoins and money from its customers, who then trade one for the other, and later ask for the results of the trade back. The bitcoins and money that the customers have sent stay in inventory until they're returned to the customers. If the exchange closes its books once a day, at that point the bitcoins in inventory (which are public since the bitcoin ledger is public) should match the amount the customers have sent minus the amount returned. Similarly, the amount in the exchange's bank account should match the net cash sent. The thing in the middle is a black hole since with most bitcoin exchanges you have no idea where your bitcoins or cash have gone until you get them back, or sometimes you don't. To make it hard to steal the bitcoins, an exchange might keep the inventory in a cold wallet, one where the private key needed to sign transactions is not on any computer connected to the Internet. Once a day they might burn a list of bitcoin withdrawals onto a CD, take the CD into a vault where there's a computer with the private wallet key, create and sign the withdrawal transactions, and burn them onto another CD, leave the computer, the first CD, and a copy of the second CD in the vault, and take the second CD to an online computer that can send out the transactions. They could do something similar for cash withdrawals, with a bank that required a cryptographic signature with a key stored on an offline computer for withdrawal instructions. None of this is exotic, and while it wouldn't make anything fraud-proof, it'd at least be possible to audit what's happening and have a daily check of whether the money and bitcoins are where they are supposed to be. But when I read about the endless stories of crooks breaking into exchanges and stealing cryptocurrencies from hot (online) wallets, it's painfully clear that the exchanges, at least the ones that got hacked, don't do even this sort of simple stuff. Admittedly, this would slow things down. 
If there's one CD burned per day, you can only withdraw your money or bitcoins once per day. Personally, I think that's entirely reasonable — my stockbroker takes two days to transfer cash and longer than that to transfer securities, but we all seem to manage.

Written by John Levine, Author, Consultant & Speaker

Follow CircleID on Twitter

More under: Blockchain, Cyberattack, Cybercrime, Cybersecurity [...]
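The daily bookkeeping check described above (inventory on the public ledger and in the bank account should equal customer deposits minus customer withdrawals) is simple enough to express in a few lines. The sketch below is a minimal illustration of that reconciliation idea, with made-up field names and numbers; it is not how any particular exchange closes its books.

from dataclasses import dataclass

@dataclass
class Books:
    """End-of-day reconciliation inputs; all values are illustrative."""
    btc_deposited: float      # total BTC customers have sent in, lifetime
    btc_withdrawn: float      # total BTC returned to customers, lifetime
    btc_in_wallets: float     # BTC visible in the exchange's (public) addresses
    cash_deposited: float     # total customer cash received
    cash_withdrawn: float     # total cash paid back out
    cash_in_bank: float       # balance reported by the bank

def reconcile(b: Books, tolerance: float = 1e-8) -> list:
    """Return discrepancies between expected and actual inventory."""
    issues = []
    expected_btc = b.btc_deposited - b.btc_withdrawn
    expected_cash = b.cash_deposited - b.cash_withdrawn
    if abs(expected_btc - b.btc_in_wallets) > tolerance:
        issues.append(f"BTC mismatch: expected {expected_btc}, on-chain {b.btc_in_wallets}")
    if abs(expected_cash - b.cash_in_bank) > tolerance:
        issues.append(f"cash mismatch: expected {expected_cash}, bank {b.cash_in_bank}")
    return issues

if __name__ == "__main__":
    today = Books(btc_deposited=1200.0, btc_withdrawn=450.0, btc_in_wallets=749.5,
                  cash_deposited=9_000_000.0, cash_withdrawn=4_000_000.0,
                  cash_in_bank=5_000_000.0)
    print(reconcile(today) or "books balance")

The point of the public ledger is that the on-chain figure is independently auditable; if an exchange cannot pass even a check like this daily, customers have no way to know where their coins are.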



Will 5G Trigger Smart City PPP Collaboration?

2018-02-13T08:18:00-08:00

As discussed in previous analyses, the arrival of 5G will trigger a totally new development in telecommunications. Not just in relation to better broadband services on mobile phones — it will also generate opportunities for a range of IoT (internet of things) developments that, among other projects, are grouped together under smart cities (feel free to read 'digital' or 'connected' cities). The problems related to the development of 5G infrastructure, as well as to smart cities, offer a great opportunity to develop new business models for telecommunications companies as well as for cities and communities, and to create win-win situations.

5G will require a massive increase in the number of infrastructure access points in mobile networks; many more towers and antennas will need to be installed by the telecommunications companies to deliver the wide range of services that are becoming available through this new technology. Furthermore, all the access points need to be connected to a fibre optic network to manage the capacity and the quality needed for the many broadband services that will be carried over it. This is an ideal network structure for cities, which require a very dense level of connectivity, but cities don't have the funds to make that happen. So telecommunications companies working together with cities could be a win-win situation.

Cities that do have a holistic and integrated smart city strategy in place can take a leadership role by developing the requirements needed for a city-wide digital infrastructure that can provide social and economic benefits for their citizens. The critical element of an integrated strategy is that it must cut through the traditional bureaucratic silo structures. 5G is an ideal technology for a range of city-based IoT services in relation to energy, environment, sustainability, mobility, healthcare, etc. Mobile network infrastructure (including 5G) will generally follow the main arteries and hotspots of the city, where at the same time there is usually a range of city- and utilities-based infrastructure that can be used for 5G co-location. IoT is also seen by the operators as a new way to move up the value chain.

But if we are looking at 5G as potential digital infrastructure for smart cities, the cities' infrastructure requirements will need to be discussed upfront with the network operators who are interested in building 5G networks. By working with the cities, these operators instantly get so-called anchor tenants for their network, which will help them to develop the viable business and investment models needed for such a network. The wrong strategy would be to put the requirements of the telecommunications companies before those of the cities. The development of 5G will take a decade or so (2020-2030), and it is obvious that cities that already have their strategic (holistic) smart city plans ready are in a prime position to sit down with the operators; and they will be among the first who will be able to develop connected cities for their people. This will, of course, create enormous benefits and will attract new citizens and new businesses, especially those who understand the advantages of living or being situated in such a digital place. MVNOs (mobile virtual network operators) are another potential winner in this development — they could specialise in what is needed to create a smart city, smart community, smart precinct, etc.

Telecommunications companies AT&T and Verizon in the USA clearly see the opportunities to work with cities; however, this is mainly based on getting easy access to valuable city real estate to install thousands of new antennas, rather than looking at this infrastructure development from a city perspective. There is even some bullying involved, with threats that cities will be left behind if they don't jump onboard their 5G plans. The city of Knoxville in [...]



Suggestions for the Cuba Internet Task Force

2018-02-13T07:18:00-08:00

The Cuba Internet Task Force (CITF) held their inaugural meeting last week. Deputy Assistant Secretary for Western Hemisphere Affairs John S. Creamer will chair the CITF, and there are government representatives from the Department of State, Office of Cuba Broadcasting, Federal Communications Commission, National Telecommunications and Information Administration and Agency for International Development. Freedom House will represent NGOs and the Information Technology Industry Council will represent the IT industry. They agreed to form two subcommittees — one to explore the role of media and freedom of information in Cuba and one to explore Internet access. The subcommittees are to provide preliminary reports of recommendations within six months, and the CITF will reconvene in October to review those preliminary reports and prepare a final report with recommendations for the Secretary of State and the President. They are soliciting public comments, looking for volunteers for service on the subcommittees and have established a Web site. I may be wrong, but it sounds like the subcommittees will be doing much of the actual work. The subcommittee on technological challenges to Internet access will include US technology firms and industry representatives and the subcommittee on media and freedom of information will include NGOs and program implementers with a focus on activities that encourage freedom of expression in Cuba through independent media and Internet freedom. They aim to maintain balance by including members from industry, academia and legal, labor, or other professionals. I hope the subcommittee on media and Internet freedom resists proposals for clandestine programs. Those that have failed in the past have provided the Cuban government with an excuse for repression and cost the United States money and prestige. Both the Cuban and United States governments have overstated what their impact would have been had they succeeded. Cuba's current Wi-Fi hotspots, navigation rooms, home DSL and 3G mobile are stopgap efforts based on obsolete technology, and they provide inferior Internet access to a limited number of people. (El Paquete Semanal is the most important substitute for a modern Internet in Cuba today). It would be difficult for the subcommittee on technological challenges to devise plans or offer support for activities the current Cuban government would allow and be able to afford. That being said, the situation may ease somewhat after Raúl Castro steps down in April. Are there short-run steps Cuba would be willing to take that we could assist them with? For example, the next Cuban government might be willing to consider legitimizing and assisting some citizen-implemented stopgap measures like street nets, rural community networks, geostationary satellite service and LANs in schools and other organizations. They might also be willing to accept educational material and services like access to online material from Coursera or LAN-based courseware from MIT or The Khan Academy. (At the time of President Obama's visit, Cisco and the Universidad de las Ciencias Informaticas promised to cooperate in bringing the Cisco Network Academy to Cuba, but, as far as I know, that has not happened). The US requires Coursera and other companies to block Cuban access to their services. That is a policy we could reverse unilaterally, without the permission of the Cuban government. 
Google is the only US Internet company that has established a relationship with and been allowed to install infrastructure in Cuba. The next Cuban administration might be willing to trust them as partners in infrastructure projects like, for example, providing wholesale fiber service or establishing a YouTube production space in Havana. Cuba could also serve as a test population for Google services optimized for [...]



Automation for Physical Devices: the Holy Grail of Service Provisioning

2018-02-13T06:19:00-08:00

Software-Defined Networking (SDN) and Network Functions Virtualization (NFV) are finally starting to pick up momentum. In the process, it is becoming clear that they are not the silver bullet they were originally advertised to be. While great for some use cases, emerging technologies like SDN and NFV have been primarily designed for virtual greenfield environments. Yet large service providers continue to run tons of physical network devices that are still managed manually. Based on discussions with senior executives at various service providers, the industry is gearing towards service agility and minimizing Operating Expenses (OPEX) through automation. But as fully automated workflows typically also involve physical network devices at select phases of the process, most network infrastructure vendors have been unable to go the whole nine yards together with their clients. One of the obvious reasons why carriers have been hesitant to embrace automation is that any automated process is only as strong as its weakest link. By having to resort to manual steps towards the end of the process, service agility suffers. But perhaps even more importantly, partial automation abilities will diminish OPEX savings and limit the number of possible business cases. This is why automation for physical network devices is becoming the holy grail of service provisioning.

Enter Ansible – the Network Robot

Traditional orchestrators such as Chef, Puppet and Jenkins require agents to be installed on the managed devices. For large service providers with tens of thousands of devices to manage, this model is simply not practical. But over the last six months, the traditional approach has started giving way to agentless orchestration based on standard protocols such as SSH and SCP. Pioneered by Red Hat with its Ansible network module, service providers are now able to weave the management of physical devices into their lifecycle orchestration models. For practical purposes, this is almost like placing a robot in a network technician's seat, ensuring that changes to physical network devices are carried out automatically. Because Ansible is an open source solution backed by nearly every major vendor in the industry, the breadth of the ecosystem also enables valuable multi-vendor scenarios. This is important because it allows automated processes to run all the way from cloud portals to the physical devices on the ground. Given some time, this will be nothing less than revolutionary in unleashing the digital transformation.

Spreadsheets that Choked the Robot

The curious thing about network management is that there are typically no sophisticated solutions in place for managing VLAN spaces, Virtual Routing and Forwarding instances (VRFs) and their connections with logical networks. Instead, the most common tool used for this purpose is a humble spreadsheet. Considering that automated management of physical network devices relies heavily on assigning suitable VLANs, networks, and device-specific configuration parameters, the last manual hurdle for automated network services is the spreadsheet used to manage them. Without a backend from which to query all these properties, initiatives aimed at end-to-end automation are likely to hit a wall. To eliminate the spreadsheets that choke the network robots, orchestrators need a single backend they can use to obtain all network-related data needed to configure devices.

Here is a simple three-step methodology for unleashing the network robot:

1) Merge the entire network structure, including logical networks, VLAN spaces and VRFs, into a unified management system. This backend should provide all orchestrators with a simple REST-based API from which they can query free network resources and device-specific configurations automatically (a minimal sketch of such a query appears below).

2) To ensure smoot[...]
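To illustrate what querying such a unified backend might look like, here is a minimal Python sketch in which an orchestrator asks a hypothetical REST API for a free VLAN and the device-specific parameters needed to push a configuration. The base URL, endpoint paths, and field names are invented for illustration; they do not correspond to any particular IPAM or orchestration product.

import json
import urllib.request

# Hypothetical unified network backend; URL and endpoints are illustrative only.
BASE_URL = "https://ipam.example.net/api/v1"

def get_json(path: str) -> dict:
    """GET a JSON document from the backend."""
    with urllib.request.urlopen(f"{BASE_URL}{path}") as resp:
        return json.load(resp)

def reserve_service_vlan(site: str, service: str) -> dict:
    """Ask the backend for a free VLAN and the parameters for one device.

    In a real workflow the orchestrator (for example an Ansible playbook) would
    take these values and render them into the device configuration template.
    """
    vlan = get_json(f"/sites/{site}/vlans/next-free?service={service}")
    device_params = get_json(f"/sites/{site}/devices/edge-01/config-params")
    return {
        "vlan_id": vlan["id"],               # e.g. 2041
        "vrf": vlan["vrf"],                  # e.g. "CUST-RED"
        "interface": device_params["uplink"],
        "description": f"{service} / auto-provisioned",
    }

if __name__ == "__main__":
    print(reserve_service_vlan(site="ams-01", service="l3vpn-gold"))

The same data could just as easily be consumed from an Ansible playbook through a lookup or inventory plugin; the important design choice is that VLANs, VRFs and per-device parameters live behind one queryable API instead of a spreadsheet.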



Software-Defined Networking: What's New, and What's New for Tech Policy?

2018-02-12T09:40:00-08:00

The Silicon Flatirons Conference on Regulating Computing and Code is taking place in Boulder. The annual conference addresses a range of issues at the intersection of technology and policy and provides an excellent look ahead to the tech policy issues on the horizon, particularly in telecommunications. I was looking forward to yesterday's panel on "The Triumph of Software and Software-Defined Networks," which had some good discussion on the ongoing problem surrounding security and privacy of the Internet of Things (IoT); some of the topics raised echoed points made on a Silicon Flatirons panel last year. My colleague and CITP director Ed Felten made some lucid, astute points about the implications of the "infiltration" of software into all of our devices. Unfortunately, though (despite the moderator's best efforts!), the panel lacked any discussion of the forthcoming policy issues concerning Software-Defined Networking (SDN), and I was concerned by some of the incorrect comments concerning SDN technology.

Oddly, two panelists stated that Software-Defined Networking has offered "nothing new". Here's one paper that explains some of the new concepts that came from SDN (including the origins of those ideas), and another that talks about what's to come as machine learning and automated decision-making begin to drive more aspects of network management. Vint Cerf corrected some of this discussion, pointing out one example of a fundamentally new capability: the rise of programmable hardware. One of the same panelists also said that SDN hasn't seen any deployments in the wide-area Internet or at interconnection, a statement that has many counter-examples, including projects such as SDX (and the related multi-million dollar NSF program), Google's Espresso and B4, and Facebook's Edge Fabric, to name just a few of the public examples. Some attendees commented that the panel could have discussed how SDN, when coupled with automated decision-making ("AI" in the parlance du jour), presents both new opportunities and challenges for policy.

This post attempts to bring some of the issues at the intersection of SDN and policy to light. I address two main questions: What are the new technologies around SDN that people working in tech policy might want to know about? And what are some interesting problems at the intersection of SDN and tech policy? The first part of the post summarizes about 15 years of networking research in three paragraphs, in a form that policy and law scholars can hopefully digest; the second part offers some thoughts about new and interesting policy and legal questions — both opportunities and challenges — that these new technologies bring to bear.

SDN: What's New in Technology?

Software-defined networking (SDN) describes a type of network design where a software program, running separately from the underlying hardware routers and switches, can control how traffic is forwarded through the network. While in some sense one might think of this concept as "nothing new" (after all, network operators have been pushing configuration to routers with Perl scripts for decades), SDN brings several new twists to the table: the control of a collection of network devices from a single software program, written in a high-level programming language. The notion that many devices can be controlled from a single "controller" creates the ability for coordinated decisions across the network, as opposed to each router and switch being configured (and acting) essentially independently. When we first presented this idea for Internet routing in the mid-2000s, this was highly controversial, with some even claiming that this was "failed phone company thinking" (after all, the Internet is "decentralized"; this centralized controller nonse[...]
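To make the "single controller programming many devices" idea concrete, here is a minimal, self-contained sketch of a centralized controller that computes a path across a small topology and pushes per-switch forwarding rules. It is a toy in plain Python: the topology, switch names and rule format are invented, and there is no OpenFlow library involved. Real controllers such as ONOS or OpenDaylight expose far richer southbound protocols, but the structural point, one program deciding for many devices, is the same.

from collections import deque

# Toy topology: switch -> set of neighbouring switches (all names invented).
TOPOLOGY = {
    "s1": {"s2", "s3"},
    "s2": {"s1", "s4"},
    "s3": {"s1", "s4"},
    "s4": {"s2", "s3"},
}

def shortest_path(src: str, dst: str) -> list:
    """Breadth-first search over the controller's global view of the network."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in TOPOLOGY[path[-1]] - seen:
            seen.add(nxt)
            queue.append(path + [nxt])
    raise ValueError(f"no path from {src} to {dst}")

def install_flow(switch: str, match: str, out_port: str) -> None:
    """Stand-in for a southbound API call (e.g. an OpenFlow flow-mod)."""
    print(f"[{switch}] match={match} -> forward via {out_port}")

def provision(src: str, dst: str, match: str) -> None:
    """The coordinated decision: one program installs rules along the whole path."""
    path = shortest_path(src, dst)
    for here, nxt in zip(path, path[1:]):
        install_flow(here, match, out_port=f"port_to_{nxt}")

if __name__ == "__main__":
    provision("s1", "s4", match="dst_ip=10.0.0.42")

Contrast this with the status quo the panelists had in mind, where each box's configuration is generated and applied independently; here the path computation and every device's rules come from one global view.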



What's So Outrageous Asking High Prices for Domain Names?

2018-02-12T09:28:00-08:00

Panels appointed to hear and decide disputes under the Uniform Domain Name Dispute Resolution Policy (UDRP) have long recognized that three-letter domains are valuable assets. How investors value their domains depends in part on market conditions. Ordinarily (and for good reason) Panels do not wade into pricing because it is not a factor on its own in determining bad faith. That is why a Panel of distinguished members' decision to transfer — Autobuses de Oriente ADO, S.A. de C.V. v. Private Registration / Francois Carrillo, D2017-1661 (WIPO February 1, 2018) — received what in polite society is known as a "Bronx Cheer." Morgan Linton headlined: "ADO.com is lost in a UDRP due to its $500,000 price tag the same day DAX.com sells for $500,000." Andrew Allemann's blunt comment in DomainNameWire was "WIPO panel screws Domaining.com owner Francois Carrillo out of Ado.com" (explaining that the Panel gave improper weight to the price). And Raymond Hackney declares that "The ADO.com decision brings up another potential problem" (referring to the logo analysis that the three-member Panel found persuasive in reaching its decision).

The single most prominent reason long-held domain names are lost is the failure to properly curate (by which I mean populating the website with bad faith content from which registration in bad faith can be inferred). Price is not a factor for bad faith without concrete proof of the 4(b)(i) elements, yet in Autobuses de Oriente price was elevated to a prominent factor. The Panel also condemned Respondent because it was passively holding the domain name and offering it for sale on a page that included other domain names, each with a designed logo. Passive holding, too, is not a factor when considered alone; but when combined with other factors, bad faith registration can be inferred. Does the Autobuses de Oriente decision deserve the universal condemnation it has received? (The three industry bloggers noted above are of the view that the Panel put their combined fingers on the scale, and I think that criticism is fair.)

What constitutes concrete and "fake" evidence is worth exploring because it makes investors (large and small) of random letter domain names vulnerable to Complainants who claim the letters are not random but infringing. No doubt, this is a difficult area for Panels. 2017 saw some notable decisions on three-letter domain names, going both ways. was lost, but was not. What we know from the summary of the record in Autobuses de Oriente is that Respondent acquired in 2012 from an earlier holder whose website (allegedly) contained infringing links to transportation. Ordinarily, a successor is not held responsible for its predecessor's conduct, as long as it does not continue the bad faith after its acquisition (my emphasis). Here, the Panel conjectured that even if Respondent did not know of Complainant's (allegedly) "famous" mark, it was guilty of "willful blindness": [It] does not excuse willful blindness in this case, as it seems apparent from the record that even a cursory investigation by Respondent would have disclosed Complainant's mark especially given the use made of the Domain Name of which Respondent was aware when negotiating for the Domain Name. But, what would a "cursory investigation" have revealed? Well, it would have revealed that the website contained links to transportation, but up to that point in time there had been no UDRP claims from Autobuses de Oriente, so why would an investor (or any ordinary registrant for that matter) "know" that the links were infringing?
To have determined that the domain name violated anyone's statutory rights would have required a deeply focused investigation. It is a fundamenta[...]



Foggy Bottom's New Cyberspace Bureau "Lines of Effort": Dumb and Dumber

2018-02-10T08:17:00-08:00

The release of the Tillerson letter to the House Committee on Foreign Affairs describes the State Department's new "Cyber Bureau" together with its "primary lines of effort." The proposal is said to be designed to "lead high-level diplomatic engagements around the world." Two of those "efforts" deserve special note and provide an entirely new spin on the affectionate local term for the Department — Foggy Bottom. While a few of the "efforts" are longstanding reasonable roles, most evince a new bilateral "America First" belligerence. Two deserve to be called "dumb and dumber." Perhaps they are best described in a hypothetical dialogue between a US high-level diplomat (call him Donald) and one from a foreign country (call him Vlad). [With apologies to SNL.] Donald: I'm here today to tell you about two of our new Cyber Bureau dictates...I mean efforts. Vlad: Please do tell. Donald: The world must "maintain open and interoperable character [sic] of the Internet." Vlad: Well said, Donald. Your foolish effort will greatly help our intelligence service penetrate American infrastructure and further manipulate elections! Will also help extend effort to other countries. It is better you spend money on huge wall. (wink, wink) Donald: I have more. The Cyber Bureau also says that everyone must "facilitate the exercise of human rights, including freedom of speech and religion through the Internet." Vlad: Well said again, Donald! Your effort will greatly assist comrade Assange, and his colleagues get all those WikiLeaks streamed out to the world. Our intelligence agents have more coming — including through their social media bots. After all, our intelligence agents have their rights too. (wink, wink) We can also get all of your Nazis to get their message out and disrupt your society. Hurray for internet freedom of speech. Donald: Thank you Vlad for supporting our Cyber Bureau efforts. Vlad: Your bilateral bullying efforts and isolation will greatly assist our diplomats in increasing our global stature. However, you have to be careful. You don't want to look like you are supporting Hillary's Internet Arab Spring strategy or ISIS getting their messaging out. If there is intelligent life left in the U.S. Congress, it should remind Tillerson that for over a hundred years, American cyber diplomacy was based on a strategy of technology neutrality and not "politicizing" the focus on the common global interest in "facilitating peaceful relations, international cooperation among peoples and economic and social development by means of efficient telecommunication services." The text is the preamble of a treaty that every nation on Earth, including the U.S., has signed scores of times over the past 168 years. The strategy is also a pragmatic one because as the treaty text notes at the outset everyone "fully recogniz[es] the sovereign right of each State to regulate its telecommunication," and communications at borders can be stopped. That long-standing strategy was abandoned twenty years ago when the Clinton Administration seized upon one particular technology platform — the DARPA academic research internet — and sought to evangelize it as the world's unfettered technology mandate. It was packaged as a utopian vision of happiness and economic wealth for all, while having a plethora of fatal flaws and disastrous potential effects with no effective mitigations. It should never have been allowed into the public infrastructure. The Cyber Bureau mandate urgently needs to be re-written. 
Written by Anthony Rutkowski, Principal, Netmagic Associates LLC

Follow CircleID on Twitter

More under: Internet Governance [...]



The New State Department Cyberspace Bureau: from Multilateral Diplomacy to Bilateral Cyber-Bullying

2018-02-08T15:27:00-08:00

These days in Washington, even the most absurd proposals become the new normal. The announcement yesterday of a new U.S. State Department Cyberspace Bureau — with far-flung responsibilities and authority over anything relating to cybersecurity — is yet another example of setting the nation up as an isolated, belligerent actor on the world stage. In some ways, the reorganization almost seems like a companion to last week's proposal to take over the nation's 5G infrastructure. Most disturbingly, it transforms U.S. diplomacy assets from multilateral cooperation to becoming the world's bilateral cyber-bully nation.

The State Department has long had a very limited role in dealing with cybersecurity over the decades. The small substantive cybersecurity expertise resided in the century-old office devoted to evolving and implementing ITU treaties and occasionally facilitating major initiatives by other agencies and industry in expanding the means of international cooperation for new cyber technologies during periods of technology change. Notably, this included the U.S. assisting in 1988 in bringing about the world's existing cybersecurity instrument for datagram internets.

So what exactly is the Trump Administration proposing as the remit for the Bureau of Cyberspace and the Digital Economy?

- Establish a global deterrence framework in which participating States make a political commitment to work together to impose consequences on States that engage in malicious cyber activities, based on participating States' shared understanding of what constitutes responsible State behavior in cyberspace.
- Develop and execute key adversary specific strategies to impose costs and alter calculus of decision-makers.
- Advise and coordinate external responses to national-security-level cyber Incidents.
- Promote adoption of national processes and programs that enable foreign territorial cyber threat detection, prevention, and response.
- Build foreign capacity to protect the global network thereby enabling like-minded participation in deterrence framework.
- Maintain open and interoperable character of the Internet with multi-stakeholder governance, instead of centralized government control.
- Promote an international regulatory environment for technology investments and the internet that benefits U.S. economic and national security interests.
- Promote cross-border flow of data and combat international initiatives which seek to impose restrictive localization or privacy requirements on U.S. businesses.
- Protect the integrity of U.S. and international telecommunications infrastructure from foreign-based threats.
- Serve as the USG interagency coordinator for international engagement.
- Secure radio frequency spectrum for U.S. businesses and national security needs.
- Facilitate the exercise of human rights, including freedom of speech and religion through the internet.
- Build capacity of U.S. diplomatic officials to engage in these issues.

If you peek at the org chart, the office titles are equally alarming as the remit.

DAS for Cyberspace: Four Key Adversaries and Cyber Operations; Cyber Stability and Deterrence
PDAS: Strategic Planning & Capacity Building; Office of Technology & National Security; Global Challenges & Policy Coordination
DAS for Digital Economy: Global Networks & Radio Frequency Coordination; Data and Digital Regulatory Advocacy

Although a few of those responsibilities have long existed within the State Department, most are new.
And, for those that have existed, State has deferred to the many other agencies with the substantive expertise to undertake the international cybersecurity activities — limiting the State role only to coordinating representation in a few[...]



Bitcoin Domain Names Become Popular - and Attract Disputes

2018-02-08T07:00:01-08:00

Cryptocurrencies (such as Bitcoin) are all the rage — so, naturally, related domain name disputes are, too. The wild fluctuations in cryptocurrency prices (Bitcoin hit a low of close to $6,000 this week, after reaching an all-time high of more than $19,000 only two months ago, and less than $1,000 a year ago) have attracted speculators, regulators and now even cybersquatters. Bitcoin + Trademark Domain Names About 16 cases involving domain names with the word "Bitcoin" have been filed as of this writing under the Uniform Domain Name Dispute Resolution Policy (UDRP). Each of the disputed domain names contains what appears to be a well-known trademark in addition to the word "Bitcoin," such as , , and (each of which was ordered transferred to the obvious trademark owner). These multi-word cryptocurrency domain name disputes arose not because they contain "Bitcoin" but because they contain another entity's trademark. Indeed, it appears as if the word "Bitcoin" itself is not protected by any trademark registrations in the United States, although there are more than a dozen U.S. trademark registrations that include "Bitcoin," such as AMERICAN BITCOIN EXCHANGE (U.S. Reg. No. 4,665,053) and BITCOIN.GURU (U.S. Reg. No. 5,129,377). So, it seems unlikely that anyone could successfully assert rights to a domain name based only on the word "Bitcoin," and the inclusion of another word may be essential to winning a UDRP dispute. For example, in a UDRP decision transferring the domain name to the drug company F. Hoffmann-La Roche, the panel wrote that the "dominant part of the disputed domain name" contained the trademark VALIUM and that the presence of the word "Bitcoin" in the domain name "does not affect the overall impression" of it. And in a UDRP decision ordering transfer of three domain names including , the panel said that the word "Bitcoin" was simply a "generic financial term[]" that did not affect the UDRP's "confusingly similar" factor. Interestingly, at least as of this writing, no UDRP complaints have been filed for domain names containing the names of some of Bitcoin's cryptocurrency competitors, such as Litecoin. That could simply be an indication of Bitcoin's dominance and, I suspect, is likely to change in the near future. However, one company, Bittrex, which operates a cryptocurrency exchange, has been quite active in filing UDRP complaints for domain names that contain its BITTREX trademark, winning 23 decisions as of this writing, including for . Why Cryptocurrency Domain Names? Cybersquatters appear to be attracted to Bitcoin-related domain names at least in part to profit from questionable practices. For example, in the case, the panel wrote that the domain name "resolve[d] to a website offering generic products identical to Complainant's Valium products, and which are sold under Complainant's VALIUM trademark" — something the panel said created a likelihood of confusion and, therefore, bad faith under the UDRP's third element. In the case (which also involved four other domain names), the panel applied the UDRP's "passive holding" doctrine to find bad faith even though the domain names were not associated with active websites. "Using a confusingly similar domain name that disrupts a complainant's business and trades upon the goodwill of a complainant shows bad faith..., even when a respondent does not actively use the domain names," the panel wrote. Cybe[...]



Preparing for GDPR's Impact on WHOIS - 5 Steps to Consider

2018-02-07T11:55:00-08:00

With GDPR coming into effect this May, it is almost a foregone conclusion that WHOIS as we know it today will change. Without knowing the full details, how can companies begin to prepare?

1. Communicate Changes – First and foremost, ensuring that brand protection, security and compliance departments are aware that a change to WHOIS access is on the horizon is an important first step. Just knowing that the ability to uncover domain ownership information is likely to change in the future will help to relieve some of the angst that is likely to occur.

2. Leverage Reverse WHOIS Now – Secondly, take advantage of Reverse WHOIS tools while the data they contain is still meaningful. Regardless of whether you use Reverse WHOIS to uncover rogue registrations, identify other infringing domains for UDRP filings, or for due diligence in support of mergers and acquisitions, run your searches now, given that the quality of this data is likely to degrade over time.

3. Don't Delay – Take Action Against Infringing Domains – Take action now against infringing domains while access to ownership information is readily available. It's expected that in the future, brand owners and law enforcement will still be able to request contact information, but this could be a more onerous process. Will ICANN's new model allow for tiered access? At this point, we just don't know.

4. Understand Changes to Online Brand Protection Solutions – For those companies that have online brand protection solutions in place, begin working with your providers to understand what impact they are expecting, and how they are planning to support changes to WHOIS.

5. Stay Informed – With ICANN poised to introduce an interim WHOIS model, companies can stay informed by visiting icann.org/dataprotectionprivacy. Given the short timelines, it's expected that ICANN's interim model will be released shortly.

Undoubtedly, there are still so many unknowns regarding the impact of GDPR on WHOIS — but preparing now can help to relieve some of the pressure likely to be caused by the changes to its format.

Written by Elisa Cooper, SVP Marketing and Policy at Brandsight, Inc.

Follow CircleID on Twitter

More under: Domain Management, DNS, Domain Names, ICANN, Internet Governance, Policy & Regulation, Privacy, Whois [...]



Transition of the Telecoms Industry Is Overdue

2018-02-05T06:13:00-08:00

It is interesting to observe the changes in the telecommunications environment over the last few decades. Before videotex (the predecessor of the internet) arrived in the late 1970s and early 1980s, 90% of telecommunications revolved around telephone calls. And at that time telephony was still a luxury for many, as making calls was expensive. I remember that in 1972 a telephone call between London and Amsterdam cost one pound per minute. Local telephone calls were timed, and I still remember shouts from my parents when I was on a call to my girlfriend — 'don't make it too long' and 'get off the phone.' This basically set the scene for the industry ever since.

Only reluctantly, and only under the pressure of competition from outside the traditional industry, did changes start to occur. In the 1990s we saw resale providers bypassing the national long-distance and international telco tariffs, offering significantly lower prices. With digital technologies emerging we saw the arrival of so-called value-added service providers (VAS), often led by publishing companies. This environment improved significantly once the internet became web-based. The incumbents initially fought tooth and nail against these changes before finally being dragged into the new world, kicking and screaming. They used all the tricks in the book to stop innovations and to stop competition. The current net neutrality failure in the USA is a good example of the strength of the incumbent lobby in that country, which totally ignored the wishes of the majority, who were in favour of net neutrality. However, this monopolistic behaviour of the traditional telcos is still happening in many countries around the world, hampering innovation and competition, and it is very often supported by their local governments.

The traditional telecoms industry, therefore, was never a leader in the new developments that were occurring in its own industry. Interestingly, most of these new externally-driven developments saw telecoms becoming more of a facilitator than a service in itself. The outcome is clear if we look at the internet and the smartphones of today. Because of its resistance, the traditional industry has never been able to lead these changes.

Looking at WCIT-12 (the World Conference on International Telecommunications) in Dubai, we saw that the international telecom tariffs remain an area of dispute within the industry, and at that same conference, the 'them and us' situation between the traditional telcos and the internet companies took centre stage as well. Regulations, linked to technologies, are used on both sides either to protect their market or to open up the market. This underlying politicised situation makes it very difficult to put the user at the centre and build services such as e-health, e-education, smart cities, smart grids, etc. from a customer perspective. A great deal of lip service is being paid, but in reality, the user is still taking a back seat. This is also reflected in a blatant disrespect for privacy and cyber safety.

While significant changes have happened over the last 20 years, the underlying structure is still largely in place, and because of the heavy lobbying in the industry, it is supported by international institutions such as the ITU. While these institutions support new — customer-focused — developments, they are still heavily influenced by the vested interests. Increasingly, these vested interests also include the new internet monopolies (Facebook, Google, Amazon, Apple).
While the UN does have a more social approach, at the same time it lacks a holistic approach towards these developm[...]



ICANN Maps Whois Models for GDPR

2018-02-02T10:48:00-08:00

Earlier today ICANN held a webinar to provide an update on their data privacy activities in relation to whois and GDPR.

Rather than simply talking about the various "models," they produced both a visual mapping and a matrix.

While some attendees may not agree with how all the models are classified, it is still a helpful way of showing the deviations from the current fully public whois model for gTLD domain name registrations.

(image: ICANN's visual mapping of the proposed whois models)

ICANN has been publishing updates and related documents on their site here.

Written by Michele Neylon, MD of Blacknight Solutions

Follow CircleID on Twitter

More under: ICANN, Whois




The Rise of a Secondary Market for Domain Names (Part 4/4): Facilitating the Secondary Market

2018-02-01T19:26:00-08:00

The defining of rights in the UDRP process is precisely what WIPO and ICANN contemplated, but it is unlikely they foresaw the destination of the jurisprudence. Since its inception, UDRP Panels have adjudicated over 75,000 disputes, some involving multiple domain names. (These numbers, incidentally, are a tiny fraction of the number of registered domain names in legacy and new top-level domains, which exceeded 320 million in the first quarter of 2017.) However, roughly ninety percent of UDRP decisions can be discounted because respondents have no defensible claim to the accused domain names and do not even bother to appear or argue that they do. I do not regard this class of registrants as entrepreneurs (which I reserve for the investor class) but rather as bottom feeders, although there are some who fancy themselves to be acting in good faith when the evidence is clearly against them. The development of domain name jurisprudence, insofar as it draws the boundaries of rights, is therefore based on some ten percent of the adjudicated disputes.

Panels began parsing rights in the first year of the UDRP, and they have not stopped. In the first denial (the fifth filed complaint), the respondent acquired the domain names before the complainant rebranded its business with the knowledge that the corresponding domain names were unavailable. [1] The respondent-investor had priority, and it prevailed. This was quickly followed by another dispute in which the mark owner had priority, but the domain name was composed of a dictionary word, "allocation." The panel explained that the difficulty lay in the fact that the domain name allocation.com, although descriptive or generic in relation to certain services or goods, may be a valid trademark for others. This difficulty is [com]pounded by the fact that, while "Allocation" may be considered a common word in English speaking countries, this may not be the case in other countries, such as Germany. [2] The panel found that the registration and offering for sale of allocation.com constituted a legitimate interest of the respondent in the domain name, although it would be "different if it were shown that allocation.com has been chosen with the intent to profit from or otherwise abuse Complainant's trademark rights." The complainant offered no evidence of "intent to profit," and its complaint was, accordingly, denied.

Chief among the principles of domain name jurisprudence for investors are rights or legitimate interests founded on (1) a "first-come, first-served" basis (not necessarily limited to registrations postdating marks' first use in commerce); (2) registration of generic strings used (or potentially usable) in non-infringing ways for their semantic or ordinary meanings; and (3) making bona fide offerings of goods or services (which by consensus includes pay-per-click websites and reselling domain names on the secondary market). Thus, as a general matter, it is not unlawful to have registered successbank.com following its abandonment by a bank known before its merger with another financial institution as "Success National Bank." [3] The complainant's rebranding to SUCCESS BANK notwithstanding, it had no right to a lawfully registered domain name even though the second level domain is identical to its mark. Nor is it unlawful to register a geographic indicator — cambridge.com, for example — where the resolving website is devoted to providing information about Cambridge.
[4] Cambridge University may have a seven-hundred-year history of marketing its services, but the domain name does[...]



The Rise of a Secondary Market for Domain Names (Part 3/4): Domain Names as Virtual Real Estate

2018-01-31T16:52:00-08:00

The way the Internet operates drove a wedge between strings of lexical and numeric characters used as marks and alphanumeric strings used as addresses. Domain names were described by Steve Forbes in a 2007 press release as virtual real estate. The market, he said, is analogous to the market in real property: "Internet traffic and domains are the prime real estate of the 21st century." [1] Mr. Forbes was not the first to recognize this phenomenon. In a case decided in 1999 (the same year ICANN implemented the UDRP), a federal district court presciently observed that "[s]ome domain names ... are valuable assets as domain names irrespective of any goodwill which might be attached to them." The court continued: "Indeed, there is a lucrative market for certain generic or clever domain names that do not violate a trademark or other right or interest, but are otherwise extremely valuable to Internet entrepreneurs." [2]

I have already mentioned the reason they are valuable, but how have they become so? The answer (I think) lies in the commodification of words and letters. Before the Internet, businesses had the luxury of drawing on cultural resources of such depth (dictionaries, thesauruses, and lexicons, among them) that it never appeared likely they would ever be exhausted or "owned." However, what was once a "public domain" of words and letters has become commodified, as investors became increasingly active in vacuuming up every word in general and specialized dictionaries as well as registering strings of arbitrary characters that also can be used as acronyms. Even the definite article "the" is registered — the.com — although it has never been the subject of a cybersquatting complaint. The WhoIs directory shows that it was registered in 1997 and is held anonymously under a proxy.

The result of commodifying words and letters is that investors essentially control the market for new names, particularly for dot-com addresses, which remains by far the most desirable extension. This is what the Panel meant when it stated that domain names are a "scarce resource." As the number of registered domain names held by investors has increased, the free pool of available words for new and emerging businesses has decreased. Put another way, there has been a steady shrinking of the public domain of words and letters for use in the legacy spaces that corresponds in inverse fashion to the increase in the number of registered domain names. [3] This is not to criticize investors who have legitimately taken advantage of market conditions. They recognized and seized upon an economic opportunity and by doing so created a vibrant secondary market. Nevertheless, as I have already noted, the emergence and protection of this market for domain names has been facilitated by panelists working to establish a jurisprudence that protects both mark owners and investors.

Endnotes:

[1] Further, "[t]his market has matured, and individuals, brands, investors and organizations who do not grasp their importance or value are missing out on numerous levels." Reported in CircleID at http://www.circleid.com/posts/792113_steve_forbes_domain_name_economics/.

[2] Dorer v Arel, 60 F. Supp. 2d 558 (E.D. Va. 1999).

[3] See 848 F.3d 292, 121 U.S.P.Q.2d 1586 (4th Cir., 2017). The evidence in that case indicated that "99% of all registrar searches today result in a 'domain taken' page." The Court noted further that "Verisign's own data shows that out of approximately two billion requests it receives each month to register a .com[...]
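Editor's note: as a rough illustration of the "domain taken" phenomenon cited in endnote [3], the sketch below probes whether a few ordinary dictionary words mentioned in this series are still available as .com registrations. It assumes Python, the whois.verisign-grs.com server for .com, and that server's conventional "No match for" marker for unregistered names; these are illustrative assumptions rather than anything stated in the article.

# Editorial sketch (assumptions noted above): check whether a label appears
# unregistered in .com by looking for the "No match for" marker in the
# registry WHOIS response over TCP port 43.
import socket

def com_available(label: str, server: str = "whois.verisign-grs.com") -> bool:
    """Return True if <label>.com appears unregistered, based on the WHOIS reply."""
    domain = f"{label}.com"
    with socket.create_connection((server, 43), timeout=10) as sock:
        sock.sendall((domain + "\r\n").encode("ascii"))
        response = b""
        while True:
            data = sock.recv(4096)
            if not data:  # server closes the connection when the reply is complete
                break
            response += data
    return b"No match for" in response

if __name__ == "__main__":
    # Labels drawn from the article: "allocation", "cambridge", and even "the".
    for word in ["allocation", "cambridge", "the"]:
        print(f"{word}.com:", "available" if com_available(word) else "taken")

On current registry data, all three almost certainly come back "taken", which is the shrinking free pool the author describes.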