
CircleID: Featured Blogs

Latest blogs postings on CircleID

Updated: 2017-05-26T19:17:00-08:00


Help Shape the Future of the Internet


This year, the Internet Society celebrates its 25th anniversary. Our own history is inextricably tied to the history of the Internet. We were founded in 1992 by Internet pioneers who believed that "a society would emerge from the idea that is the Internet" — and they were right. As part of the celebration, this September we will launch a comprehensive report that details the key forces that could impact the future of the Internet. The report will also offer recommendations for the future, and we need your input. Our work on this started last year, when we engaged with a broad community of Members, Chapters, Internet experts and partners. We conducted two global surveys that generated more than 2,500 responses representing the business, public policy, civil society, Internet development, academic and technology communities from 160 countries and economies. Individuals from 94% of the Internet Society's global chapters participated in the survey. We interviewed more than 130 Internet experts and hosted 15 virtual roundtables. My colleague Sally Wentworth shared some thoughts on these conversations when she presented the project to UN trade experts in Geneva in April. Throughout the project, our community reaffirmed the importance of six "Drivers of Change" and identified three areas that will be significantly impacted in the future: Digital Divides; Personal Freedoms and Rights; and Media, Culture and Society. These "Impact Areas" are core to the Internet Society's focus on putting the user at the forefront when considering the future of the Internet. This project has been community-driven from beginning to end, and as we reach the final stage, we would like your input on recommendations for Internet leaders and policy makers to ensure the development of an open, trusted, accessible, and global Internet in the future. We'll discuss these recommendations in September at our global membership meeting, InterCommunity 2017. It's open to all. Unleash your imagination.
Tell us how we can address emerging issues while harnessing the opportunities that the future will bring. Note: This post originally appeared on the Internet Society blog. Written by Constance Bommelaer, Senior Director, Global Internet Policy, Internet Society. Follow CircleID on Twitter. More under: Internet Governance, Policy & Regulation [...]

What It Takes to Prove Common Law Rights in UDRP Complaints


The Uniform Domain Name Dispute Resolution Policy now has seventeen years of history. A high percentage of disputes are indefensible and generally undefended. As the history lengthens, early registrants of dictionary word-, common phrase-, and arbitrary letter-domain names have been increasingly challenged in two circumstances: by businesses that claim to have used the unregistered terms before respondents registered them, and later by emerging businesses with no history prior to the registrations of the domain names. I have discussed the latter in earlier essays. Some examples from recently decided cases of the former include "gabs" (the only recent dictionary word case); phrases such as "Gotham construction," "Minute Clinic," "Stage Coach," and "Desert Trip"; and random letters (acronyms to complainants) such as "atc" and other three-character domains. Some of these second level domains are discussed further below. Claiming unregistered rights is a recurring motif, important because it affects whether complainants have standing (discussed in an earlier essay, UDRP Standing: Proving Unregistered Trademark Rights). Complainants alleging common law rights typically concede they never registered their marks but argue that their priority in the marketplace ought nevertheless to support a finding of abusive registration of the corresponding domain names. However, as a general rule, complainants alleging common law rights have to work harder to overcome the distance of time. To prevail in a UDRP proceeding, parties have to be alert to their evidentiary demands. When a complainant alleges priority in using a mark currently being exploited by a respondent arguably violating its representations and warranties, it has to prove "reputation in and public recognition of the trademark" prior to the registration of the domain name (the now-versus-then burden). The quotation comes from the Gotham construction case, Joel I. Picket v.
Niyazi Palay / Gotham Constructions, FA1702001717501 (Forum April 10, 2017). Put another way, in Stacy Hinojosa v. Tulip Trading Company, FA1704001725398 (Forum May 24, 2017): "[A] date of first use alone is not enough to establish common law rights in a mark. In order to have common law rights, a complainant must establish secondary meaning. Secondary meaning requires establishing that the public primarily associates the mark in question with certain goods or services originating from the purported mark holder." The underlying rationale is simple: if the unregistered mark had no reputation prior to the registration of the domain name, it follows that respondent could not have registered the domain name in bad faith. It's worse for a complainant who had no reputation in the past and has none now! But these failures are frequently traceable to complainants, often pro se disputants, not understanding what has to be proved. The term "rights" in paragraph 4(a)(i) of the Uniform Domain Name Dispute Resolution Policy — "[the] domain name is identical or confusingly similar to a trademark or service mark in which the complainant has rights" — encompasses unregistered as well as registered rights, but whereas a complainant with a registered mark by definition has a "right," a complainant with an unregistered right has to prove something more than simple priority. It may indeed have had a market presence, but who knew about it? This is the kind of knowledge only a complainant would have, and if it doesn't have (or doesn't offer) documented proof, its silence will be read negatively as meaning there is no proof to offer; and if there is no proof, it loses. It has been said that Panels generally "approach[] the issue of proof of [unregistered] trademark 'rights' ... in a slightly more relaxed manner than does the USPTO when it requires proof of secondary meaning." NJRentAScooter v. AM Business Solutions LLC, FA0909001284557 (Nat. Arb. Forum November 4, 2009).
However, "slightly more relaxed" has to be understood in a relative sense. The weak[...]

WIPO's UDRP 'Overview' Gets Bigger (and Better)


Just as the numbers of domain names and domain name disputes have expanded significantly in recent years, so, too, has WIPO's "Overview," which has been updated to address the growing complexity of cases under the Uniform Domain Name Dispute Resolution Policy (UDRP). WIPO has just published the third edition of its "WIPO Overview of WIPO Panel Views on Selected UDRP Questions" — commonly referred to as "WIPO Jurisprudential Overview 3.0." The document addresses some of the most common, important and difficult issues that frequently arise in UDRP cases. WIPO Overview 3.0 is the first update to this document in six years — a period in which a lot of changes have come to the domain name system, including the arrival of more than 1,200 new generic top-level domains (gTLDs) and a new domain name dispute policy (the Uniform Rapid Suspension System, or URS). "Following a review of thousands of WIPO panel decisions issued since WIPO Overview 2.0, this edition has been updated to now include express references to over 800 representative decisions (formerly 380) from over 250 (formerly 180) WIPO panelists," according to an introduction to WIPO Overview 3.0. "The number of cases managed by the WIPO Center has nearly doubled since its publication of WIPO Overview 2.0; as a result, the number of issues covered in this WIPO Jurisprudential Overview 3.0 has significantly increased to reflect a range of incremental DNS and UDRP case evolutions." New and Expanded Topics: New or expanded topics addressed in WIPO Overview 3.0 include the following: The relevance of a top-level domain name – a topic I have written about before.
The Overview says: "Where the applicable TLD and the second-level portion of the domain name in combination contain the relevant trademark, panels may consider the domain name in its entirety for purposes of assessing confusing similarity (e.g., for a hypothetical TLD '.mark' and a mark 'TRADEMARK', the domain name would be confusingly similar for UDRP standing purposes)." The relationship between the UDRP and the URS – another topic I have written about before. Citing a decision in which I successfully represented a trademark owner in both a URS and a UDRP proceeding, the Overview says, "There have… been UDRP proceedings filed where the same domain name was previously subject to a URS case. In such event, the UDRP complaint should make this clear." WIPO's role in implementing a UDRP decision — an issue that occasionally arises when a registrar fails to transfer a domain name despite a UDRP order to do so. In my experience, this is often attributable to ignorance, not defiance, but in either case enlisting WIPO's assistance can be helpful. Although the Overview makes clear that WIPO's role "normally ends upon notification of a panel decision to the parties and registrar," it also says that parties may "raise such implementation matters to the WIPO Center's attention." The Role of the Overview In any event, WIPO Overview 3.0 should be helpful to any party filing or defending a UDRP complaint. Not only does the document explain the consensus view on many issues, it also provides numerous citations to relevant decisions, which can provide a useful resource for additional research. Still, as the Overview itself makes clear, not all UDRP issues are entirely settled, and (as in all legal proceedings) the facts of each case will be important. 
As the Overview states, the document "cannot serve as a substitution for each party's obligation to argue and establish their particular case under the UDRP, and it remains the responsibility of each party to make its own independent assessment of prior decisions relevant to its case." Therefore, parties would be wise to consult the newly expanded and even more helpful Overview — but, they still must conduct appropriate research and analysis to prepare and present the strongest possible arguments in a UDRP case. Written by Doug Isenberg, Attor[...]

Be Agile or Be Edged Out: "Live" from TM Forum 2017


I like a conference that's "live". Not just a lively crowd coalescing to passionately discuss and debate matters of common interest, but more so in the sense of physical presence: things you can feel and touch. In the case of the TM Forum Live! 2017 event, held last week in Nice, France, it's the Catalyst Pavilions where innovative solutions, best practices, and even exploratory experiments were on full display. Do I mean that at an IT Operations Support Systems (OSS) and Business Support Systems (BSS) trade show, you can touch it? Yep. "Touching" in the sense that you can see and interact with real tools, platforms, and live demonstrations from live telecom networks in real-life deployments. You can see how concepts are developed into operational tools; you can touch tools that became operational platforms powering network and service convergence for service providers; you can visualize how disparate, siloed processes and manual work are being automated and integrated; and you can interact and even challenge why innovations haven't delivered the results promised. "Hands-on" is what really grabbed my attention. IT operations optimization, data analytics, service quality improvements, customer-centric processes, interfaces, APIs — you can touch them all under one roof. That "hands-on" engagement is what makes TMF feel close: touch it, play with it, see how it would apply in your own world. Demonstrations and examples range from IT operations process automation and Quality of Service (QoS) for customer-centric operations models to the Internet of Things (IoT), data analytics, platforms, and APIs. So much has changed and evolved from the traditional OSS/BSS to what is now the OSS/BSS of the Network Function Virtualization (NFV) and Software-Defined Networking (SDN) landscape. The stodgy old OSS/BSS is being challenged to go through a transformative change, driven by the demand for business agility.
Business agility is a reality for any IT department and for service providers who need to survive in a world quickly transformed by increasingly interactive service provider/subscriber relationships. Consumer demand for access is high, leading to fierce competition amongst providers for subscriber loyalty and creating business drivers for fast new service launches, targeted and personalized service packages, easy and on-demand self-service and self-authentication of services, and promotional sign-ons. As a result, collaboration among the Chief Marketing Officer's (CMO's), Chief Information Officer's (CIO's) and Chief Technology Officer's (CTO's) departments has intensified. IT is no longer satisfied with being handed down business requirements by business groups such as Sales and Marketing and Product Management. IT has to strive to be a business partner. Service providers aligning their organizations to achieve business agility are merging their traditional Network Engineering functions and back-office IT organization under one executive branch, the CTO. The goal is to drive DevOps agility and faster time to deployment. Leveraging technology to create business agility is easier said than done, as often lamented by people working in the trenches. A lot of it has to do with integrating legacy systems, but it's also related to what I would call "self-inflicted" processes and workflows built for yesterday's market and subscribers. Today, the combination of fast 4G LTE broadband connections, Google searches that put information at consumers' fingertips, the omnipresence and accessibility of information as organizations digitize their assets, and the power of video from companies like Google, YouTube, Facebook, and Twitter is changing our lives. This both reflects and changes the way service providers interact with their target audiences. Business agility is not simply a buzzword, but a matter of survival!
How do I create the stickiness with my existing subscribers? How do I[...]

Hidden in Plain Sight: FCC Chairman Pai's Strategy to Consolidate the U.S. Wireless Marketplace


While couched in noble terms of promoting competition, innovation and freedom, the FCC soon will combine two initiatives that will enhance the likelihood that Sprint and T-Mobile will stop operating as separate companies within 18 months. In the same manner as the regulatory approval of airline mergers, the FCC will reach all sorts of conclusions sorely lacking empirical evidence and common sense. FCC Chairman Pai's game plan starts with a report to Congress that the wireless marketplace is robustly competitive. The Commission can then leverage its marketplace assessment to conclude that even further concentration in an already massively concentrated industry will not matter. Virtually overnight, the remaining firms will have far fewer incentives to enhance the value proposition for subscribers, as T-Mobile and Sprint have done, much to the chagrin of their larger, innovation-free competitors AT&T and Verizon, who control over 67% of the market and serve about 275 million of the nation's 405 million subscribers. Like so many predecessors of both political parties, Chairman Pai will overplay his hand and distort markets by reducing competition and innovation, much to the detriment of consumers. He can get away with this strategy if reviewing courts fail to apply the rule of law and reject results-driven decision making that lacks unimpeachable evidence supporting the harm-free consolidation of the wireless marketplace. Adding to the likelihood of successful overreach is the possibility of a muted response in the court of public opinion. So how will the Pai strategy play out? First, the FCC soon will invite interested parties to provide evidence supporting or opposing a stated intent to deem the wireless marketplace sufficiently accessible and affordable throughout the nation. The FCC has lots of evidence to support its conclusion, but plenty of countervailing and inconvenient facts warrant a conditional conclusion, particularly in light of future market consolidation.
Wireless carriers have invested billions in network infrastructure and spectrum. Rates have significantly declined as the industry has acquired scale and near-full market penetration. Bear in mind that all of this success has occurred despite, or possibly because of, a federal law requiring the FCC to treat wireless carriers as public utility telephone companies. Congress opted to treat wireless telephone service as common carriage, not because of market dominance, but because it wanted to maintain regulatory parity with wireline telephone service as well as apply essential consumer safeguards. How ironic — perhaps hypocritical — of Chairman Pai and others who surely know better to characterize this responsibility as the product of overzealous FCC regulation that has severely disrupted and harmed ventures providing wireless services. Just how has common carrier regulation created investment disincentives for wireless carriers operating as telephone companies? Put another way, how would removal of the consumer safeguards built into congressionally mandated regulation unleash more capital investment, innovation and competitive juices? U.S. wireless carriers regularly report robust earnings and average revenue per user that rival any carrier worldwide. Of course industry consolidation would further improve margins, while relaxation of network neutrality and privacy protection safeguards would create new profit centers. T-Mobile shareholders get a big payout, while the remaining carriers breathe a sigh of relief that their exhaustingly competitive days are over. Will the court of public opinion detect and reject the FCC's bogus conclusion that common carrier regulation has thwarted wireless investment and innovation? That requires a lot of vigilance and memories of the bad old days when no carrier opted to play the role of maverick innovator and marketplace disrupter. With T-Mobile or Sprint merged or acquired, the remain[...]

Registered Your DMCA Contact Address Yet?


It is not much of an exaggeration to say that the Digital Millennium Copyright Act of 1998 makes the Internet as we know it possible. The DMCA created a safe harbor that protects online service providers from copyright suits so long as they follow the DMCA rules. One of the rules is that the provider has to register with the Copyright Office to designate an agent to whom copyright complaints can be sent. The original process was rather clunky: send in a paper form that they scan into their database, along with a check. This year there is a new online system, and as of December, they will no longer provide the old paper database. So if you are a provider (run web servers, for example) and want to take advantage of the safe harbor, you have to register or re-register. Fortunately, the process is pretty simple. You visit the new DMCA site, click Registration Account Login at the upper right, then the "register here" link on the login page. Then you set up an account with yourself as the primary contact and, if you want, a secondary contact. It sends a confirmation message to the e-mail address you provide, and once you click the link, you have an account. Then you log in and add a service provider, which will generally be you or your company, and a designated agent, which will generally also be you. Then you add all the names by which someone might look for your company, which can include your business name, any other names your business uses, and all the domain names you use. There is, as far as I can tell, no limit to the number of alternate names, so be comprehensive. Then you pay $6 by credit card, and you're done. If you later want to make changes, such as adding new alternate names, that's another $6, so take a few minutes and think of them all before pushing the pay button. After three years they will send you a reminder to renew, which will cost another $6.
When the Copyright Office set up the new process, there was a certain amount of grousing that the old registrations were allegedly permanent while the new registrations have to be renewed every three years. While there is some merit to this complaint, it must be noted that the old registrations cost $140 while the new ones are $6, so if your business lasts for less than 70 years, the new scheme is cheaper. For six bucks, it's cheap insurance against even unlikely copyright suits. Written by John Levine, Author, Consultant & Speaker. Follow CircleID on Twitter. More under: Intellectual Property [...]
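The 70-year break-even figure is easy to verify. Here is a short sketch; the $6 and $140 fees come from the post, while the assumption that one $6 payment covers each three-year period (with the initial registration counting as the first payment) is this sketch's reading of the fee schedule:

```python
import math

OLD_FEE = 140   # one-time fee under the old paper registration
NEW_FEE = 6     # fee per three-year period under the new online system
PERIOD = 3      # years covered by each $6 payment

def new_scheme_cost(years):
    """Total fees paid under the new system to stay registered for `years` years."""
    payments = max(1, math.ceil(years / PERIOD))
    return NEW_FEE * payments

# 140 / 6 is about 23.3 payments, so 23 full payments (69 years) still
# cost less than the old $140 fee; the 24th payment tips past it.
for years in (3, 30, 69, 72):
    print(years, new_scheme_cost(years))
```

Running this prints $138 for 69 years and $144 for 72 years, matching the post's "less than 70 years" break-even.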

Security Costs Money. So - Who Pays?


Computer security costs money. It costs more to develop secure software, and there's an ongoing maintenance cost to patch the remaining holes. Spending more time and money up front will likely result in lower maintenance costs going forward, but too few companies do that. Besides, even very secure operating systems like Windows 10 and iOS have had security problems and hence require patching. (I just installed iOS 10.3.2 on my phone. It fixed about two dozen security holes.) So — who pays? In particular, who pays after the first few years when the software is, at least conceptually if not literally, covered by a "warranty"? Let's look at a simplistic model. There are two costs: a development cost $d and an annual support cost $s for n years after the "warranty" period. Obviously, the company pays $d and recoups it by charging for the product. Who should pay $n·s? Zeynep Tufekci, in an op-ed column in the New York Times, argued that Microsoft and other tech companies should pick up the cost. She notes the societal impact of some bugs: As a reminder of what is at stake, ambulances carrying sick children were diverted and heart patients turned away from surgery in Britain by the ransomware attack. Those hospitals may never get their data back. The last big worm like this, Conficker, infected millions of computers in almost 200 countries in 2008. We are much more dependent on software for critical functions today, and there is no guarantee there will be a kill switch next time. The trouble is that n can be large; the support costs could thus be unbounded. Can we bound n? Two things are very clear. First, in complex software, no one will ever find the last bug. As Fred Brooks noted many years ago, in a complex program patches introduce their own, new bugs. Second, achieving a significant improvement in a product's security generally requires a new architecture and a lot of changed code. It's not a patch, it's a new release.
In other words, the most secure current version of Windows XP is better known as Windows 10. You cannot patch your way to security. Another problem is that n is very different for different environments. An ordinary desktop PC may last five or six years; a car can last decades. Furthermore, while smart toys are relatively unimportant (except, of course, to the heart-broken child and hence to his or her parents), computers embedded in MRI machines must work, and work for many years. Historically, the software industry has never supported releases indefinitely. That made sense back when mainframes walked the earth; it's a lot less clear today, when software controls everything from cars to light bulbs. In addition, while Microsoft, Google, and Apple are rich and can afford the costs, small developers may not be able to. For that matter, they may not still be in business, or may not be findable. If software companies can't pay, perhaps patching should be funded through general tax revenues. The cost is, as noted, society-wide; why shouldn't society pay for it? As a perhaps more palatable alternative, costs to patch old software could be covered by something like the EPA Superfund for cleaning up toxic waste sites. But who should fund the software superfund? Is there a good analog to the "potential polluters pay" principle? A tax on software? On computers or IoT devices? It's worth noting that it isn't easy to simply say "so-and-so will pay for fixes". Coming up to speed on a code base is neither quick nor easy, and companies would have to deposit with an escrow agent not just complete source and documentation trees but also a complete build environment — compiling a complex software product takes a great deal of infrastructure. We could outsource the problem, of course: make software companies liable for security problems for some number of years after shipment; that term could vary for differe[...]
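The two-cost model above can be made concrete with a toy calculation. Only the d + n·s structure comes from the post; every dollar figure below is invented purely for illustration:

```python
def lifetime_cost(d, s, n):
    """Development cost d plus annual post-warranty support cost s for n years."""
    return d + n * s

# Two hypothetical strategies for the same product: spend more up front
# on security, pay less per year in patching afterward.
cheap_upfront  = lifetime_cost(d=1_000_000, s=300_000, n=10)   # $4.0M total
secure_upfront = lifetime_cost(d=2_000_000, s=100_000, n=10)   # $3.0M total

# The catch the post points out: n is open-ended. For an embedded device
# that lives 25 years, even the "secure" strategy keeps accruing cost.
long_lived = lifetime_cost(d=2_000_000, s=100_000, n=25)       # $4.5M total
```

The sketch shows why a larger $d can pay for itself through a smaller $s, and also why an unbounded n (cars, MRI machines) makes the question of who pays n·s so hard.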

Balancing Rights: Mark Owners, Emergent Businesses, and Investors


Is there any act more primary than naming? It comes before all else and makes possible what follows. For the most part, names are drawn from cultural assets: collections of words, geographic locations, family names, etc. They can be valuable, which is why they are guarded, protected, and hoarded. The balancing of rights among those competing for names is a deliberate feature of the Uniform Domain Name Dispute Resolution Policy (UDRP). The jurisprudence is "concerned [quoting from the WIPO Final Report at paragraph 13] with defining the boundary between unfair and unjustified appropriation of another's intellectual creations or business identifiers." While businesses have statutory protection for the names they use to identify themselves in the marketplace, their choices of dictionary words and common expressions (excluding coined words) are nonexclusive. So, for example, "Prudential," "United," and "American" (to take the most obvious) are shared by many companies in different Classes. Coined words such as Google stand apart. Although it may be said (in a colloquial sense) that dictionary word-marks are "owned," ownership can never be equated with owning the grammatical constituents from which they are composed. (Virgin Enterprises and Easy Group have no monopoly on the dictionary words "virgin" and "easy," although they and other companies with long and/or deep presences in their marketplaces have been particularly successful in shutting down any use of their dictionary word-marks, combined or not with other grammatical elements, as domain names.) This sharing of names under the trademark system works because each of the sharers operates in and is confined to Classes that define the metes and bounds of their rights. (Non-shared marks higher on the classification scale can also be lawfully used in combination with other words in both the actual and virtual marketplaces (noted below), so they too are not entirely exclusive.)
Since the Internet is class-less and there are no gatekeepers (as there are in obtaining trademarks), complainants are put to the test of proving breach of registrants' warranties and representations. It is not sufficient merely to show that domain names are identical or confusingly similar to marks in which complainants have rights. For all the successes of major brands in policing their marks, it is not unlawful to register dictionary words or letters as domain names as long as there is no intention to take advantage of or traffic in already established marks. Just as "sharing" names in commerce is balanced by protecting those with priority of use, so are there tests of rights under the UDRP. Basic to this assessment is a recognition that "[i]n the Internet context, consumers are aware that domain names for different Web sites are quite often similar, because of the need for language economy, and that very small differences matter." Entrepreneur Media, Inc. v. Smith, 279 F.3d 1135, 1147 (9th Cir. 2002). The boundary that defines "small differences" has been tested in a variety of factual circumstances: dictionary words and common expressions, as well as defenses based on nominative fair use or similar concepts under other legal traditions, such as the "valid and honest competition" discussed below in Franke Technology and Trademark Ltd v. hakan gUlsoy, CAC 101464 (May 11, 2017). The Panel in Gabs S.r.l. v. DOMAIN ADMINISTRATOR — NAME ADMINISTRATION INC. (BVI), CAC 101331 (ADReu February 26, 2017) found that "[t]he word 'gabs' is a common English word based on 'gab', meaning 'talk, prattle, twaddle' (Concise Oxford Dictionary) and it is used to invoke notions such as 'the gift of the gab' and in colloquial words such as 'gabfest' and 'gabble'". It does not strain the language at all to accept that it is u[...]

WannaCry: Patching Dilemma from the Other Side


WannaCry, which originated in state projects but was spread by other actors, has touched myriad pieces of infrastructure, such as hospitals, telecommunications, and railroads, that many countries have labelled critical. IT engineers are hastily releasing patches in various localized versions. The other patch needed, however, is more than technical; it is normative and legislative. Writing that patch for a situation like this involves two layers of dilemma. The first dilemma concerns the appropriateness and legitimacy of states' exploitation of industrial software vulnerabilities. For the government experts who are writing the norms for responsible state behavior in cyberspace at the UN level, should such exploitation be considered responsible and reasonable, or as damaging cyber stability? There is a general division of ideas about this point among nations. Many cyber powers have in effect acknowledged and approved the legitimacy of such state behavior; the fact that they have founded cyber forces implies that message. Many other nations are uncomfortable about the militarization of cyberspace and choose to condemn any behavior in that direction. They either have not fully grasped the complexity of the situation or lack the capability to face the strategic challenges. This division has significantly reduced room for UN GGE talks on norms of state behavior. The second dilemma is about non-proliferation of states' cyber weapons. The previous GGE report recommended that States should seek to prevent the proliferation of malicious ICT tools and techniques and the use of harmful functions. However, unlike nuclear weapons or missiles, malware spreads much more easily and quickly, taking a non-conventional route. Compared with conventional weapons, the cyber ammunition of a state seems much more vulnerable to invasion by other actors. An individual Robin Hood could shake the whole system.
This has made future talks on disarmament and non-proliferation of cyber weapons harder. The division of opinions on the first dilemma has made it even more difficult to solve the dilemma of non-proliferation. An interesting phenomenon in this case is that Microsoft is presenting patches both in terms of code and in terms of policy and law, calling earlier this year for a Digital Geneva Convention, a Tech Accord, and an Attribution Council. Written by Peixi (Patrick) XU, Associate Professor, Communication University of China. Follow CircleID on Twitter. More under: Cyberattack, Cybercrime, Cybersecurity, Internet Governance, Malware, Policy & Regulation [...]

The 2-Character Answer to this GAC Advice Should be "No"


Overview: ICANN's Governmental Advisory Committee (GAC) has reacted to the ICANN Board's November 2016 decision to authorize the release of two-character domains at new gTLDs with advice to the Board that does not have true consensus backing from GAC members and that relates to procedure, not policy. The Board's proper response should be to just say no, stick to its decision, and advise the GAC that it will not consider such advice. Instead, the Board has, against the preliminary advice of the policy-making Generic Names Supporting Organization (GNSO) Council, initiated discussions with aggrieved GAC members that may reopen its decision. Continuing down this dangerous path may provide governments with far more leverage over ICANN policy decisions than was ever envisioned or intended by the long-debated and carefully crafted new Bylaws language addressing the Board's responsibility to give attention to GAC advice. Here's the full story — On March 15, 2017, ICANN's Governmental Advisory Committee (GAC) issued its Communiqué at the ICANN 58 meeting in Copenhagen, Denmark. Section VI of that document contains what purports to be GAC Consensus Advice to the Board, and the fourth item on which such advice is rendered is 2-Character Country/Territory Codes at the Second Level. Such policy advice would arguably be in order as a valid response to the ICANN Board's decision of November 8, 2016 relating to "Two-Character Domain Names in the New gTLD Namespace", in which it authorized the delegation of 2-character domains at new gTLDs, subject to certain conditions and safeguards. The adopted Resolution makes clear that the GAC's prior advice on this matter had been duly taken into account, and that the Board had thereby fulfilled its duty under the relevant Bylaws provision. That Board Resolution contains this important passage relating to the GAC's input on this matter: Whereas, the GAC has issued advice to the Board in various communiqués on two-character domains.
The Los Angeles Communiqué (15 October 2014) stated, "The GAC recognized that two-character second level domain names are in wide use across existing TLDs, and have not been the cause of any security, stability, technical or competition concerns. The GAC is not in a position to offer consensus advice on the use of two-character second level domain names in new gTLD registry operations, including those combinations of letters that are also on the ISO3166-1 alpha 2 list." (Emphasis added) The GAC's Copenhagen Communiqué shows that it is still not in a position to offer consensus policy advice on the use of two-character second level domains. The Copenhagen advice regarding 2-character country codes (CCs) is clearly procedural in nature, as it merely requests that the Board negotiate with certain disaffected GAC members: noting the "serious concerns expressed by some GAC Members" and advising the Board to "engage with concerned governments by the next ICANN meeting to resolve those concerns" and "immediately explore measures to find a satisfactory solution of the matter to meet the concerns of these countries before being further aggravated". This does not constitute substantive policy advice. By its own terms, it makes clear that only some governments have serious concerns regarding the Board's decision and that the engagement sought is not with the full GAC but with that relative handful of disaffected governments. ICANN's Board should properly provide the simple 2-character response of 'No' to this GAC advice. That's No as a firm word of rejection; not .NO, the ccTLD of Norway. And that is precisely the substantive policy point — that CCs used by a nation state at the top level of the DNS as the two-character designator of its ccTLD have no claim on or control over meaningful two character dictionary ter[...]

It's Up to Each of Us: Why I WannaCry for Collaboration


WannaCry, or WannaCrypt, is one of the many names of the piece of ransomware that impacted the Internet last week, and it will likely continue to make the rounds this week. There are a number of takeaways and lessons to learn from the far-reaching attack that we witnessed. Let me tie those to voluntary cooperation and collaboration, which together represent the foundation for the Internet's development. The reason for making this connection is that they provide the way to get the global cyber threat under control: not just to keep ourselves and our vital systems and services protected, but to reverse the erosion of trust in the Internet. The attack impacted financial services, hospitals, and medium and small size businesses. It is an attack that will also impact trust in the Internet, because it immediately and directly impacted people in their day-to-day lives. One specific environment raises everybody's eyebrows: hospitals. Let's share a few takeaways: On Shared Responsibility: The solutions here are not easy: they depend on the actions of many. Solutions depend on individual actors taking action, and they depend on shared responsibility. Fortunately, a number of actors do take their responsibility seriously. There is a whole set of early responders, funded by the private and public sectors, and sometimes volunteers, that immediately set out to analyze the malware and collaborate to find root causes, share experience, work with vendors, and provide insights for specific countermeasures. On the other hand, it is clear that not all players are up to par. Some have done things (clicked on links in mails that spread the damage) or failed to do things (deploy a firewall, back up data, or upgrade to the latest OS version) that exacerbated this problem. When you are connected to the Internet, you are part of the Internet, and you have a responsibility to do your part.
On proliferation of digital knowledge: The bug that was exploited by this malware purportedly came out of a leaked NSA cache of stockpiled zero-days. There are many lessons here, but fundamentally the lesson is that data one keeps can, and perhaps will, eventually leak. Whether we talk about privacy-related data breaches or 'backdoors' in cryptography, one needs to assume that knowledge, once out, is available on the whole of the Internet. Permissionless innovation: The attackers abused the openness of the environment, one of the fundamental properties of the Internet itself. That open environment allows new ideas to be developed on a daily basis and allows them to become global. Unfortunately, those innovations are available for abuse too; the use of Bitcoin for the payment of ransom is an example. We should try to preserve the inventiveness of the Internet. It is also our collective responsibility to promote innovation for the benefit of the people and to deal collectively with bad uses of tools. Above all, the solutions to the security challenges we face should not limit the power of innovation that the Internet allows. Internet and Society: Society is impacted by these attacks; this is clearly not an Internet-only issue. This attack upset people, rightfully so. People have to solve these issues: technology doesn't have all the answers, nor does a specific sector. When looking for leadership, the idea that there is a central authority that can solve all this is a mistake. The leadership is with us all, and we have to tackle these issues with urgency, in a networked way. At the Internet Society we call that Collaborative Security. Let's get to work. This post is a reprint of a blog published at the Internet Society. Written by Olaf Kolkman, Chief Internet Technology Officer (CITO), Internet Society. [...]

How Aruba is Using Racing Sponsorship to Make Itself a Premium Brand


Sitting in the Aruba hospitality suite at the Italian round of the Superbike World Championship in Imola, CEO Stefano Cecconi exudes passion. The love he has for motorcycles in general, and racing in particular, is evident. Less so is the rationale behind Aruba's multi-million-euro-a-year spend to be the title sponsor of the factory Ducati World Superbike team. For Internet industry onlookers, at least. To Cecconi, it's obvious: the sponsorship provides a golden opportunity for Aruba to bask in Ducati's glory. Making an emotional connection with customers: As a provider of web services, Aruba exists in a highly competitive and price-sensitive market devoid of emotional customer attachment. Ducati, on the other hand, is a maker of premium motorcycles, often spoken of as the two-wheeled equivalent of Ferrari. Now the property of the Volkswagen group, Ducati commands the same level of fervour and uses the same signature red as the Italian sportscar maker. Associating its image with Ducati's means Aruba is no longer just another IT company. The results seem to speak for themselves. Aruba is a market leader in Italy and very active in Eastern Europe (primarily the Czech Republic and Slovakia, but also Poland and Hungary). It is expanding in the West (it already operates in the UK, France and Germany). The company claims 2.5 million domain names under management and over 3 million websites hosted. "Yes, the racing team is expensive," says 38-year-old Cecconi, who founded the family-run business when he was 18 as an ISP and then had the foresight to switch focus to web content services in 2000. "But what we get back in visibility and brand recognition is immense. To get the same results through advertising would have cost us much more." Although a self-confessed bike fanatic, Cecconi refutes the notion that his company's involvement with Ducati is the result of a CEO whim. "We thought about doing PR through sponsorship almost from day 1," he explains.
"Once we had the budget to actually make it happen, we started looking at football. We sponsored the national league in Italy, but we really wanted to reach a more international audience. So we looked for the best place to do this." The World Superbike championship wasn't the obvious choice. In the world of bike racing, the premier league is the MotoGP series. But just like Formula 1 car racing, the number of teams in MotoGP is capped, meaning that Aruba would have had to settle for being a secondary sponsor. "We've always liked to execute our projects ourselves," says Cecconi. "That's what we did when we built our own datacenters, for example. So we wanted our own team, whatever the sport." Putting the cloud first: Then Aruba started talking to Ducati, and to his surprise, the factory were interested in more than just a sponsor; they wanted a partner. "A unique opportunity to run a team jointly with Ducati, an extremely powerful brand in Italy," beams Cecconi. It was also a way to display the company's strong focus on cloud services. The Aruba bikes show two of the company's brands: its own name, of course, and DotCloud. Aruba acquired the new gTLD in a private auction for an estimated $12 million. Here again, the rationale for paying so much would be lost on some, but Cecconi seems to have found a way to make it work. A year after launch, it has over 105,000 domains under management. "Just like the Ducati sponsorship deal, DotCloud was a huge bet for us," admits Cecconi. "And just like our Superbike venture, this TLD is a reflection of our passion. The cloud is what our company's about. It's what's driving where we want to be." So from the get-go, Aruba made a conscious choice to go for that TLD, and only that one. When ICANN announced the round 1 applicants in[...]

The Criminals Behind WannaCry


359,000 computers infected, dozens of nations affected worldwide! A worm exploiting a Windows OS vulnerability that looks to the network for more computers to infect! This is the most pernicious, evil, dangerous attack ever. "The Big One," Wired pronounced. "An unprecedented attack," said the head of Europol. Cue the gnashing of teeth and hand-wringing! Wait, what? WannaCry isn't unprecedented! Why would any professional in the field think so? I'm talking about Code Red, and it happened in July 2001. Since then, dozens, perhaps hundreds of Best Common Practice documents (several of which I've personally worked on) have been tirelessly written, published, and evangelized, apparently to no good effect. Hundreds of thousands, perhaps millions of viruses and worms have come and gone. Our words, 'update your systems, software, and anti-virus software' and 'back up your computer', ignored. The object lesson taught by Code Red, from almost sixteen years ago, forgotten. Criminal charges should be considered: anyone who administers a system that touches critical infrastructure, and whose computers under their care were made to Cry, if people suffered or died, as is very much the possibility for the NHS patients in the UK, should be charged with negligence. Whatever ransom was paid should be taken from any termination funds they receive, and six weeks' pay deducted, since they clearly were not doing their job for at least that long. Harsh? Not really. The facts speak for themselves. A patch was available at least six weeks prior (and yesterday one was even made available by Microsoft for 'unsupported' platforms such as Windows XP), as was the case with Code Red. One representative from a medical association said guilelessly, in one of the many articles I've read since Friday, 'we are very slow to update our computers'. This from someone with a medical degree. Yeah, thanks for the confirmation, pal. The worm has been stopped from spreading. For now.
The kill-switch domain was registered by a security researcher, and sinkholed. Sorry, forget it: I went for a coffee while writing this, and predictably WannaCry V2 has since been spotted in the wild, without the kill-switch domain left dangling. What have we learned from all of this, all of this for a lousy $26,000? If someone gets arrested and charged, and by someone, I mean systems administrators, 'CSOs' and anyone else in line to protect systems who abjectly failed this time, a lot. WannaCry infections of critical infrastructure are an inexcusable professional lapse. Or, we could just do all of this again next time, and people may die. Afterthought: My organization recently turned 20 years old. When it started, we didn't believe things could get this bad, but it wasn't too long after that it became apparent. I issued dire warnings about botnets to the DHS in 2001, and I made public pronouncements to these ends in 2005 (greeted by rolled eyes from an RCMP staff sergeant). I may have been a little too prescient for my own good at the time, but can anyone really deny, in this day and age, that lives are at stake, and that we are counting on those responsible for data safety to at least do the bare minimum? I await your comments, below. Written by Neil Schwartzman, Executive Director, The Coalition Against Unsolicited Commercial Email (CAUCE). [...]
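The kill-switch mechanism described above can be sketched in a few lines. This is an illustrative reconstruction, not WannaCry's actual code: the domain name is a made-up placeholder (the real one is not reproduced here), and the resolver is injectable so the logic can be exercised without network access.

```python
import socket

# Hypothetical placeholder, NOT the real WannaCry kill-switch domain.
KILL_SWITCH_DOMAIN = "example-kill-switch.invalid"

def kill_switch_active(domain, resolve=socket.gethostbyname):
    """Return True if the kill-switch domain resolves.

    WannaCry-style malware halted itself when its hard-coded domain
    resolved; registering ("sinkholing") that domain therefore stopped
    the worm from spreading further. V2 reportedly dropped this check.
    """
    try:
        resolve(domain)
        return True   # domain registered/sinkholed: stand down
    except OSError:   # socket.gaierror subclasses OSError
        return False  # domain unregistered: worm keeps spreading
```

Before the researcher registered the domain, the lookup failed and the worm continued; once the domain was sinkholed, the lookup succeeded everywhere and infections stalled, which is why a single registration had global effect.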

Why Not Connect Cuba's Gaspar Social Streetnet to the Internet?


I've been covering Cuban streetnets (local area networks with independent users that are not connected to the Internet) for some time. Reader Doug Madory told me about Gaspar Social, a new streetnet in Gaspar, a small town in central Cuba. Gaspar Social opened to the public last October and has grown quickly: about 500 of Gaspar's 7,500 residents are now users. Streetnets are illegal in Cuba, and the government has ignored some and cracked down on others, but it seems to be tolerating them now as long as they remain apolitical and avoid pornography and other controversial material. Last month, Communist Party officials noticed Gaspar Social but did not shut it down. Yoandi Alvarez, one of the network's creators, said "they made it clear our network was illegal but they wouldn't be taking our antennas down," and the creators were given instructions for applying for a permit. So, residents of Gaspar can play games, download software, share files, socialize, etc., but they cannot access the global Internet. Why not connect Gaspar Social to the Internet? Gaspar is in the province of Ciego de Ávila, whose capital city is also Ciego de Ávila. ETECSA has six WiFi hotspots and three navigation rooms in Ciego de Ávila and, as a provincial capital, the city must have many government, medical and educational users. In other words, there must be relatively fast backhaul to the Internet in Ciego de Ávila. Connecting Gaspar to Ciego de Ávila seems like it would be cheap and easy. As you see below, they are only 28.2 kilometers apart by road (25 kilometers as the crow flies) and the terrain is flat (Gaspar's elevation is 5.1 meters and Ciego de Ávila's 49 meters). They could be connected with a high-speed wireless link or fiber. The flat terrain favors a wireless link, and the road could provide a right-of-way for fiber. Installing 28 kilometers of fiber would be expensive in the US, but Cuba is not the US.
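The feasibility of the wireless option can be sanity-checked with two textbook formulas: the mid-path radius of the first Fresnel zone (how much clearance the beam needs) and the antenna height required to clear the radio horizon. The 25 km line-of-sight distance comes from the article; the 5.8 GHz frequency is my assumption, since it is a common unlicensed band for long point-to-point links.

```python
import math

def fresnel_radius_m(distance_km, freq_ghz):
    """Maximum (mid-path) radius of the first Fresnel zone, in meters."""
    return 17.32 * math.sqrt(distance_km / (4 * freq_ghz))

def horizon_antenna_height_m(distance_km):
    """Antenna height in meters (equal at both ends) for the radio
    horizons to span distance_km, using the common 4/3-earth
    approximation d_km ~= 4.12 * sqrt(h_m) per antenna."""
    return (distance_km / 2 / 4.12) ** 2

# Gaspar to Ciego de Avila: ~25 km line of sight over flat terrain.
clearance = fresnel_radius_m(25, 5.8)    # ~18 m of mid-path clearance
towers = horizon_antenna_height_m(25)    # ~9 m masts at each end
print(f"Fresnel clearance ~{clearance:.0f} m, towers ~{towers:.1f} m")
```

Masts on the order of ten meters, plus a little margin for Fresnel clearance over flat terrain, are well within reach of a small community project, which is consistent with the "cheap and easy" claim above.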
One can imagine a community project using International Telecommunication Union (ITU) L.1700 cable (Bhutan offers an example of a community fiber project). ETECSA is the elephant in this hypothetical room. The ITU tracks regulatory evolution and, as of 2013, Cuba was one of the few remaining first-generation (regulated public monopoly) nations. I suggested earlier that ETECSA consider streetnets as complementary collaborators rather than competitors or outlaws, and last year it allowed a small streetnet to connect to a WiFi hotspot. Cuba has a well-deserved reputation for improvisation and appropriate-technology innovation. I am not suggesting that it jump suddenly to fourth-generation regulation (regulation led by economic and social policy goals), but that it run a pilot test, connecting Gaspar Social to the Internet. There is a short video (1:56) on Gaspar Social, and a longer video (13:48) with interviews of the network's creators. Written by Larry Press, Professor of Information Systems at California State University. [...]

8 Reasons Why Cybersecurity Strategy and Business Operations are Inseparable


In modern society, one fact is unquestionable: the hyper-connectivity of the digital economy is inescapable. A financial institution without an online presence or omni-channel strategy will cease to be competitive. Universities (for-profit or non-profit) must develop and continuously evolve their online learning capabilities if they are to stay relevant. Online retailers are quickly outpacing their 'brick-and-mortar' counterparts and rendering them irrelevant. Travel agents have been largely relegated to dinosaur status in this era of online travel search aggregators and booking portals. A payments ecosystem mostly dominated by major card networks and processors now includes closed-loop systems such as Apple Pay, Google Wallet and others. When we add the Internet of Things (IoT), robotics and artificial intelligence (AI) to the mix, the networked society has become a monolith that we simply cannot ignore. What is most concerning about the ubiquity of technology is the multitude of cyber threats that organizations and individuals have to contend with. While the risks to individuals are relatively high when it comes to invasion of privacy, identity theft and financial loss, cyber-attacks can have a particularly critical impact on businesses. Depending on market and jurisdictional realities, the consequences can include heavy regulatory penalties, plummeting stock prices, lawsuits or mass layoffs — the effect on a company's bottom line can be catastrophic. But how are corporations responding to this ever-evolving threat landscape? The resulting strategies fall mostly into the following categories. There are the large organizations that employ the '3 lines of defense' approach, where an IT department owns and manages cyber risks, the operational risk and/or compliance departments specialize in risk management (including cyber), and the internal audit function provides independent assurance that cyber risks are being effectively managed.
This approach is resource-intensive and demands highly specialized (and costly) personnel. There are the generally under-staffed companies that limp along from day to day, reacting to cyber-attack after cyber-attack, many of them not even aware that their systems and networks have been compromised. And finally, there are the SMEs that basically stick their heads in the sand and pretend that their operation is too small or insignificant to be the target of cyber criminals. More often than not, business leaders across the board fail to recognize that cybersecurity is no longer the domain of the IT organization. Cybersecurity strategy is now business strategy, and the response to cyber threats is the responsibility of every individual who works for or runs a company. Here are 8 key reasons why this is undeniably the case: 1) Corporate governance – A 2016 survey by Goldsmiths that included responses from 1,530 non-executive directors and C-level executives in the United States, United Kingdom, Germany, Japan and Nordic countries showed that 90% of respondents admitted to not being able to read a cybersecurity report and were not prepared to respond to a major attack. Even more worrisome was the fact that over 40% of executives did not feel that cybersecurity or protection of customer data was their responsibility. Let that sink in for a moment. This is why ensuring that cybersecurity is a running topic at executive and board-level meetings is imperative for organizations. Moreover, ownership of cyber risks should be ascribed to personnel at all levels. Cybersecurity culture is a collective effort that starts at the top and works its way down through the organization. 2) Regul[...]

Patching is Hard


There are many news reports of a ransomware worm. Much of the National Health Service in the UK has been hit; so has FedEx. The patch for the flaw exploited by this malware has been out for a while, but many companies haven't installed it. Naturally, this has prompted a lot of victim-blaming: they should have patched their systems. Yes, they should have, but many didn't. Why not? Because patching is very hard and very risky, and the more complex your systems are, the harder and riskier it is. Patching is hard? Yes — and every major tech player, no matter how sophisticated, has had catastrophic failures when trying to change something. Google once bricked Chromebooks with an update. A Facebook configuration change took the site offline for 2.5 hours. Microsoft ruined network configurations and partially bricked some computers; even their newest patch isn't trouble-free. An iOS update from Apple bricked some iPad Pros. Even Amazon knocked AWS off the air. There are lots of reasons for any of these, but let's focus on OS patches. Microsoft — and they're probably the best in the business at this — devotes a lot of resources to testing patches. But they can't test every possible user device configuration, nor can they test against every software package, especially if it's locally written. An amazing amount of software inadvertently relies on OS bugs; sometimes, a vendor deliberately relies on non-standard APIs because there appears to be no other way to accomplish something. The inevitable result is that on occasion, these well-tested patches will break some computers. Enterprises know this, so they're generally slow to patch. I learned the phrase "never install .0 of anything" in 1971, and while software today is much better, it's not perfect and never will be. Enterprises often face a stark choice with security patches: take the risk of being knocked off the air by hackers, or take the risk of knocking yourself off the air.
The result is that there is often an inverse correlation between the size of an organization and how rapidly it installs patches. This isn't good, but even with the very best technical people, both at the OS vendor and on site, it may be inevitable. To be sure, there are good ways and bad ways to handle patches. Smart companies immediately start running patched software in their test labs, pounding on it with well-crafted regression tests and simulated user tests. They know that eventually all operating systems become unsupported, so they plan (and budget) for replacement computers, and they make sure their own applications run on newer operating systems. If those applications won't run, they update or replace them, because running on an unsupported operating system is foolhardy. Companies that aren't sophisticated enough don't do any of that. Budget-constrained enterprises postpone OS upgrades, often indefinitely. Government agencies are often the worst at this, because they're dependent on budgets that are subject to the whims of politicians. But you can't do that and expect your infrastructure to survive. Windows XP support ended more than three years ago. System administrators who haven't upgraded since then may be negligent; more likely, they couldn't persuade management (or Congress or Parliament...) to fund the necessary upgrade. (The really bad problem is with embedded systems — and hospitals have lots of those. That's "just" the Internet of Things security problem writ large. But IoT devices are often unpatchable; there's no sustainable economic model for most of them. That, however, is a subject for another day.) Today's attack is blocked by the MS17-010 [...]
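The cautious workflow sketched above, test lab first, then progressively broader deployment, with the option to back everything out, is what a staged rollout looks like in practice. The following is a generic illustration under my own assumptions, not any vendor's tooling: the `deploy`, `healthy`, and `rollback` hooks are hypothetical stand-ins for real patch-management and monitoring calls.

```python
def staged_rollout(hosts, deploy, healthy, rollback,
                   stages=(0.01, 0.1, 0.5, 1.0)):
    """Patch increasing fractions of `hosts`; if any stage fails its
    health check, roll back everything patched so far and stop."""
    patched = []
    done = 0
    for frac in stages:
        target = int(len(hosts) * frac)
        for h in hosts[done:target]:      # next batch only
            deploy(h)
            patched.append(h)
        if not all(healthy(h) for h in patched):
            for h in reversed(patched):   # back out in reverse order
                rollback(h)
            return False                  # patch pulled; back to the lab
        done = target
    return True                           # fleet fully patched

# Example: a fleet where one machine depends on pre-patch behavior.
fleet = [f"host{i}" for i in range(100)]
state = {h: "unpatched" for h in fleet}
ok = staged_rollout(
    fleet,
    deploy=lambda h: state.__setitem__(h, "patched"),
    healthy=lambda h: h != "host0",       # host0 breaks when patched
    rollback=lambda h: state.__setitem__(h, "unpatched"),
)
```

Because the first stage touches only 1% of the fleet, the breakage is caught and reversed before it spreads, which is exactly the trade-off enterprises are making when they patch slowly.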

Telecoms Competition on a Downhill Slide in America


That is what happens when you base your telecommunications policies on the wrong foundations. The problems with the telecommunications industry in America go back to 1996, when the FCC decided that broadband in America should be classified as internet (that is, content) and that it would therefore not fall under the normal telecommunications regulations. Suddenly, what are known as telecommunications common carriers in other parts of the world became ISPs in the USA. How odd is that? This is rather incomprehensible since, to the rest of the world, it is very clear that broadband is an access product and has nothing to do with the internet as such — and most certainly nothing to do with content! As an aside, it is also important to mention that in exchange for these generous gifts from the FCC, the incumbents promised to fibre up America. Over the last 20 years such promises were made again and again, and every time they were broken, with no retributive action from the FCC whatsoever. And — surprise, surprise — in the resulting monopolistic situation, the incumbents used their newly won classification as ISPs to their own advantage. As an immediate result, retail-based broadband services were monopolised by Verizon and AT&T. There was no longer any obligation on their part to make broadband available on a wholesale basis, and as a consequence, there is very little retail-based broadband competition in the USA. It has to be said that the very weak regulation in place before 1996 already made it difficult for retail broadband providers to obtain affordable wholesale rates from the carriers. But the classification of broadband as content totally killed the DSL retail market, and within a year those DSL retail providers were all dead and buried.
The door was now also open for the incumbents to look for extra revenues that they could extract from the big digital media content providers by offering preferential treatment over their broadband infrastructure. This would create a high-speed internet lane for those companies and put the rest in a second-rate, slow broadband lane. This generated a public outcry, and as a result the FCC had to introduce what is known as net neutrality (NN), which stopped the incumbents from creating fast and slow lanes. NN was an attempt by the then FCC Chairman Tom Wheeler to keep the internet open. Now, I am certainly not in favour of net neutrality, and in principle I don't have a problem with telecommunications providers offering managed network services at premium prices to business users and others, but this needs to happen in the context of a competitive environment. With competition in place, the costs of these managed services will be kept under control; some providers will concentrate on business users and others on residential services, so competition will keep misuse in check. When you don't have such a competitive environment, it is very dangerous to just let incumbents do and charge whatever they want. It would, of course, be far better if the USA were to address the underlying issues and do what every other country in the world does — classify broadband as a telecommunications access service and regulate it, with proper wholesale requirements, under such a regime — in which case there would be no need to bolt NN onto the regulatory regime. However, there is no hope whatsoever under the current government in the USA that such a broader telecommunications review is possible, and certainly not under the leadership of FCC Chairman Ajit Pai (an ex-Verizon execut[...]

Trying to Predict Miguel Diaz-Canel's Internet Policy


I recently gave a short talk [PowerPoint] that concluded with some speculation on the attitude toward the Internet of Miguel Diaz-Canel, who is expected to replace Raúl Castro next year. I searched online and came up with three clues: two talks he has given and one act. In May 2013, Diaz-Canel gave a speech at an educators' conference in which he anticipated today's preoccupation with fake news. He acknowledged the futility of trying to control information: "Today, news from all sources, good ones, bad ones, those that are manipulated, and those that are true, and those that are half-truths, all circulate on the web and reach people, and those people are aware of them." (Miguel Diaz-Canel speaking at an educators' conference, May 2013.) He said the worst response to this would be silence, and he called upon schools to teach kids to spot fake news. You can watch news coverage of his talk (2:57). The second talk I found was the closing address to the First National Workshop on Informatization and Cybersecurity in February 2015. The three-day workshop was streamed to over 11,500 professionals in 21 auditoriums throughout the country, and Diaz-Canel mentioned online discussion by over 73,000 users. (This "national workshop" sounds like a unique mass-collaboration event, and I would like to hear more about the format from those who participated.) Diaz-Canel said the Cuban State would work to make a safe and comprehensive Internet available, accessible and affordable for everyone, and that the Internet should be a tool for sustainable human development in Cuba and its effective integration into the community of nations. He recognized the Internet as a tool benefiting the economy, science, and culture. This positive message was dampened somewhat by his recitation of the threats posed by the US and the responsibility of citizens to use the Internet legally.
Reading between the lines, it may be that he envisions a China-like policy of reaping the benefits of the Internet by expanding it while using it as a political tool: restricting access to controversial content, surveilling users and spreading propaganda. (Freedom House considers the Cuban Internet unfree today, and the only nations it considers less free are Uzbekistan, Ethiopia, Iran, Syria and China.) This video shows news coverage of Diaz-Canel's talk (3:26), and you can read the transcript here. The third and perhaps most encouraging clue I found regarding Diaz-Canel's view of the Internet was not a speech but his support of freedom of expression on the Lajovencuba Web site. Lajovencuba, which refers to itself as a "socialist project of political debate on the web," was created at the University of Matanzas in April 2010. It was named after a political and revolutionary organization created by Antonio Guiteras in mid-1934. The original tagline was "A blog of university students that speaks of the Cuban reality" and today it is "Socialism and revolution." That's the bad news. The good news is that it was restored in April 2013. The better news is that Diaz-Canel met with and endorsed the founders of Lajovencuba. I started this post thinking I would at least come to a tentative conclusion as to the likely Internet policy of Diaz-Canel and the next generation of Cuban leaders, but I am still up in the air. Written by Larry Press, Professor of Information Systems at California State University. [...]

Would You Like Your Private Information to be Available on a VHS or Betamax Tape?


When I was a young child growing up in the late 1980s, my parents were lucky enough to be able to afford both a VHS video recorder in the living room and a Betamax recorder in their bedroom. This effectively meant that to me, the great video format wars weren't a decade-defining clash of technologies; they consisted mainly of answering the question "in which room can I watch my favorite cartoons?" It is only now, with the perspective of time, that I realize my small dilemma was the result of two distinct groups with contradictory interests bidding for control of a massive market of home video users.

I was reminded of this piece of digital archeology by the recent news of the repeal of the FCC's Internet privacy rules, partly because I'm starting to recognize patterns similar to the video wars in the field of digital privacy, the kind of patterns that should give business leaders and stakeholders in privacy-sensitive businesses pause over a strategic consideration that lies in the immediate future.

It comes as no news to privacy practitioners that there is a long-standing schism between the European and American approaches to digital privacy: US legislative and administrative bodies generally tend to adopt business-friendly regulations that prohibit the abuse of information but permit its commodification and trade, while the European stance is to treat digital privacy as a human rights issue (in some European-influenced jurisdictions, such as Israel, privacy is even explicitly designated a basic human right and afforded constitutional protection). European legal institutions have consistently shown that they are not deterred by the international implications of their rulings, as demonstrated by the invalidation of the Safe Harbour framework in the October 2015 decision in Schrems v. DPC, which necessitated the expedited negotiation of the Privacy Shield agreement. This is why I believe we are on the verge of a major event, one in which the distance between the two legal conceptions of privacy becomes impossible to bridge.

When one takes into account the EU's General Data Protection Regulation (set to apply from spring 2018) and contrasts it with the recent repeal of the FCC's rules, it is impossible not to notice that battle lines are being drawn. This is particularly true given that the GDPR applies not only to data processed or located inside the EU itself, but also to the processing of the personal data of individuals in the EU by controllers and processors outside it. Under this principle, the latest move by American authorities not to prohibit ISPs from selling information that was until now accepted as private poses an interesting challenge: if a German citizen purchases the services of an American VPN provider to mask her IP address, and that VPN provider routinely sells the information of its clients, would it be allowed to sell the sensitive information it gathers about the browsing habits of its German customer? Alternatively, if an American citizen purchases the services of an Estonian VPN, would the information gathered by the Estonian provider be eligible for sale under the FCC's new, slimmer rules? Furthermore, suppose a more remote but still possible case in which an ISP with multiple local subsidiaries or partnerships wishes to balance the load on its network by routing so[...]

In Response to Offensive Destruction of Attack Assets


It is certainly true that DDoS and hacking are on the rise; there have been a number of critical hacks in the last few years, including apparent attempts to alter the outcome of elections. The reaction has been a rising tide of fear and an ever-increasing desire to "do something." The something that seems to be emerging, however, is not necessarily the best possible "something." Specifically, governments are now talking about attempting to "wipe out" the equipment used in attacks. Berlin was studying what legal changes were needed to allow authorities to purge stolen data from third-party servers and to potentially destroy servers used to carry out cyber attacks:

"We believe it is necessary that we are in a position to be able to wipe out these servers if the providers and the owners of the servers are not ready to ensure that they are not used to carry out attacks," Maassen said. (Reuters, 4 May 2017)

"Wiping out" (destroying?) a server because the owner cannot ensure it will be used in a way the government agrees with — sounds like a good idea, right? And how do we make certain such laws are not extended, at some point in the future, to destroy the servers of those who host "hate speech" and "fake news"? Will we have server burnings to match the printing press burnings of yesteryear? What if the owner of that server is actually the proud owner of a newly minted "connected" television set or toaster, and does not know enough about technology to secure the device properly? Is it okay to "wipe out" the server then?

The obvious answer to such objections is that the capability to "wipe out a server" will only be used when authorized through the proper channels. Scope creep, however, is always real, and people who work for the government are still people who have desires and fears, and who make mistakes.
Maybe being able to "wipe out" a server remotely, and to break into third-party networks to erase data you don't think they should have, is all justified. But a dangerous precedent is being set here, and this story will not end in a happy place for anyone on the Internet.

Written by Russ White, Network Architect at LinkedIn. Follow CircleID on Twitter. More under: Cyberattack, Cybersecurity, Policy & Regulation [...]