
CircleID: Featured Blogs

Latest blog postings on CircleID

Updated: 2017-01-19T22:54:00-08:00


Help Us Answer: What Will the Internet Look Like in 10 Years?


What will the Internet look like in the next seven to 10 years? How will things like marketplace consolidation, changes to regulation, increases in cybercrime or the widespread deployment of the Internet of Things impact the Internet, its users and society? At the Internet Society, we are always thinking about what's next for the Internet. And now we want your help! The Internet is an incredibly dynamic medium, shaped by a multitude of pressures — be they social, political, technological, or cultural. From the rise of mobile to the emergence of widespread cyber threats, the Internet of today is different from the Internet of 10 years ago. The Internet Society and our community care deeply about the future of the Internet because we want it to remain a tool of progress and hope. Last year, we started a collaborative initiative — the Future Internet project — to identify factors that could change the Internet as we know it. We asked for your views and heard from more than 1,500 members across the world — thank you! That feedback provided a strong foundation for the development of our Future Internet work. We have consolidated that input into nine driving forces for the Internet. The list is posted on our Future Internet webpage, along with the challenges and uncertainties raised by our community. Our community identified these forces as the things that will influence how the Internet will evolve in the future. They include:

- Convergence of the Internet and the Physical World
- Artificial Intelligence and Machine Learning
- New and Evolving Digital Divides
- Increasing Role of Government
- Future of the Marketplace and Competition
- Impact of Cyberattacks and Cybercrime
- Evolution of Networks and Standards
- Impact on Media, Culture, and Human Interaction
- Future of Personal Freedoms and Rights

Now, we need your help again.
Please review this work and let us know what you think by sending us your answers to the following questions:

- Which of the nine drivers do you think will have the biggest impact on the future of the Internet in the next seven to 10 years?
- Are there major issues that are missing from this list?
- What 2-3 issues would you prioritize in our Future Internet project?

2017 is the Internet Society's 25th anniversary. It is an opportunity to look back and see how the Internet has grown and evolved since our earliest days. It is also a chance to look ahead and imagine the future. Will the Internet continue to be a tool to build community, drive innovation, and create opportunity? With this Future Internet project, we can imagine some different futures and then think together about what steps we need to take today to bring about the future that we want. More updates will be coming soon, with a final report in September. Thanks in advance for your participation and input! Note: an earlier version of this post appeared on the Internet Society blog.

Written by Sally Shipman Wentworth, VP of Global Policy Development, Internet Society
Follow CircleID on Twitter
More under: Broadband, Internet Governance, Internet of Things, Law, Policy & Regulation, Privacy, Security, Telecom, Web [...]

If Slate Comes in Standard Sizes, Why Not Broadband?


Why does the broadband industry, supposedly a "high technology" one, lag behind old and largely defunct industries that have now reached the "museum piece" stage?

Last week I was at the National Slate Museum in Wales watching slate being split apart. On the wall were sample pieces of all the standard sizes. These have cute names like "princess". For each size, there were three standard qualities: the thinnest are the highest quality (at 5mm in thickness), and the thickest have the lowest quality (those of 13mm or more). Obviously, a lighter slate costs less to transport and lets you roof a wider span with less supporting wood, hence is worth more. These slates were sold around the world, driven by the industrial revolution and the need to build factories and other large structures for which "traditional" methods were unsuitable. Today we are building data centers instead of factories, and the key input is broadband access rather than building materials. Thankfully telecoms is a far less dangerous industry and doesn't give us lung disease that kills us off in our late 30s. (The eye strain and backache from hunching over iDevices is our deserved punishment for refusing to talk to each other!) What struck me was how this "primitive" industry had managed to create standard products in terms of quantity and quality, clearly fit-for-purpose for different uses such as main roofs versus drainage versus ornamental work. This is in contrast to broadband, where there is high variability in the service, even with the same product from the same operator being delivered to different end users. With broadband, we don't have any kind of standard units for buyers to evaluate a product or to know whether it offers better or worse utility and value than another. The only promise we make is not to over-deliver, by setting an "up to" maximum burst data throughput! Even this says nothing about the quality on offer.
In this sense, broadband is an immature craft industry which has yet to reach even the most basic level of sophistication in how it defines its products. To a degree, this is understandable, as the medium is a statistically multiplexed one, so it is naturally variable in its properties. We haven't yet standardized the metrics in which quantity and quality are expressed for such a thing. The desire is for something simple like a scalar average, but there is no quality in averages. Hence we need to engage with the probabilistic nature of broadband, and express its properties as odds, ideally using a suitable metric space that captures the likelihood of the desired outcome happening. This is by its nature an internal measure for industry use, rather than something that end consumers might be exposed to. Without standard metrics and measures, and transparent labeling, a properly functioning market with substitutable suppliers is not possible. The question that sits with me is: whose job is it to standardize the product? The regulator? Equipment vendors? Standards bodies? Network operators? Industry trade groups? Or someone else? At the moment we seem to lack both awareness of the issue and incentives to tackle it. My hunch is that the switch-over to software-defined networks will be a key driver for change. When resources are brought under software control, they have to be given units of measure. Network operators will have a low tolerance for control systems that have vendor lock-in at this elementary level. Hence the process of standardizing the metrics for quantity and quality will rise in visibility and importance in the next few years.

Written by Martin Geddes, Founder, Martin Geddes Consulting Ltd
Follow CircleID on Twitter
More under: Access Providers, Broadband, Policy & Regulation, Telecom [...]
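The argument above — that there is "no quality in averages" and that broadband properties should be expressed as odds — can be illustrated with a small sketch. The latency samples below are hypothetical, and this is not any standardized industry metric; it just shows how two links with identical average latency can offer very different odds of a good outcome.

```python
def quality_odds(latencies_ms, threshold_ms):
    """Fraction of samples meeting a latency target: the 'odds' of the
    desired outcome happening, rather than a scalar average."""
    met = sum(1 for l in latencies_ms if l <= threshold_ms)
    return met / len(latencies_ms)

def mean(xs):
    return sum(xs) / len(xs)

# Two hypothetical links with the same mean latency (30 ms) but very
# different tails -- the average hides the difference, the odds do not.
steady = [30.0] * 100
bursty = [10.0] * 90 + [210.0] * 10   # mean is also 30.0 ms

print(mean(steady), mean(bursty))                           # 30.0 30.0
print(quality_odds(steady, 50), quality_odds(bursty, 50))   # 1.0 0.9
```

With a 50 ms target, the steady link delivers the outcome every time while the bursty link fails it one time in ten — a distinction a scalar average can never express.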

Cyber-Terrorism Rising, Existing Cyber-Security Strategies Failing, What Are Decision Makers to Do?


A Global Paradigm Change is Threatening Us All. While conventional cyber attacks are evolving at breakneck speed, the world is witnessing the rise of a new generation of politically, ideologically, religiously, terror and destruction motivated "Poli-Cyber™" threats. These are attacks perpetrated or inspired by extremist groups such as ISIS/Daesh, rogue states, national intelligence services and their proxies. They are breaching organizations and governments daily, and no one is immune. This is a global paradigm change in the cyber and non-cyber threat landscape. The world has moved from resisting the attack to surviving the inevitable. Traditional cyber-security strategies are struggling at best and failing daily. With traditional cyber-security strategies failing, how can decision makers defend and protect national and corporate interests against existing serious conventional attacks and the new generation of Poli-Cyber terrorism? This is not just a threat to profitability; this is a threat to "Survivability". New and innovative solutions are most urgently needed. The MLi Group is organizing Decision Maker Symposiums & Briefings aimed at Chairmen, CEOs, Board members and senior government officials, as well as Summits around the world, to address these new threats and offer innovative solutions that address them. On March 22-23, 2017, an MLi summit is taking place in London aimed at "Securing Survivability in a Cyber-Threatened World". This Summit is a new format created by MLi based on its proprietary and holistic Survivability Solution™ to address these grave new threats posed by conventional and destruction motivated Poli-Cyber attacks threatening businesses and governments globally. The Summit Draft Program illustrates the innovative MLi-developed model as well as some of their partners' mechanisms and processes to help stakeholders first come to terms with the severity of the new threat landscape and then be able to operate in it.
Only then are they in a position to start their journey to better ensuring "Survivability". Decision makers who are keen to make their organizations better protected against these new threats would benefit significantly from attending. But those who also see the value of turning a threat into a unique competitive edge and opportunity for years to come will find joining, witnessing and engaging with the new mind-set, approach, and solutions needed to address this critical new challenge very compelling.

Written by Khaled Fattal, Group Chairman, The Multilingual Internet Group
Follow CircleID on Twitter
More under: Cloud Computing, Cyberattack, Cybercrime, DDoS, DNS Security, Internet Governance, Internet of Things, Internet Protocol, Law, Malware, Policy & Regulation, Security, Spam [...]

How a 'Defensive Registration' Can Defeat a UDRP Complaint


A company that registers a domain name containing someone else's trademark may be engaging in the acceptable practice of "defensive registration" if (among other things) the domain name is a typographical variation of the registrant's own trademark. That's the outcome of a recent decision under the Uniform Domain Name Dispute Resolution Policy (UDRP), a case in which the disputed domain name contained the complainant's DOCLER trademark — but also contained a typo of the respondent's DOLCER trademark. The UDRP complaint was filed by Docler IP S.à r.l. and related companies, all in Europe, that own the DOCLER trademark. According to the UDRP decision, Docler IP apparently uses the DOCLER trademark in connection with "a web platform with music, storytelling, and similar entertainment services." The disputed domain name was registered by a Chinese company that "sells speakers and similar products under the name DOLCER," which is protected by an EU trademark registration. Note the slight difference: The complainant's trademark is DOCLER, while the respondent's trademark is DOLCER. And, importantly, the respondent's domain name contains the complainant's trademark. The UDRP panel had no trouble finding the domain name confusingly similar to the complainant's trademark DOCLER, succinctly stating that the addition of the letter "i" to the domain name "does not obviate confusion." (Indeed, other UDRP decisions have found that inclusion of the letter "i" in a domain name that contains the complainant's trademark is irrelevant for purposes of confusing similarity. For example, in one such dispute, a panel said that the letter "i" is "a common prefix and suffix in domain names" that "may lead consumers to believe that a product or service may be ordered online" and therefore can "heighten the risk of confusion.") However, a finding of confusing similarity is just one of three UDRP requirements, the third of which — bad faith — proved determinative.
The panel in the case found that the respondent had engaged in a "defensive registration" of the domain name and therefore had not acted in bad faith. As a result, the UDRP panel denied a transfer of the domain name. So what, exactly, is a defensive registration? According to email correspondence reviewed by the panel in the case, "the Respondent has suggested that it registered the disputed domain name well prior to the commencement of this dispute in connection with its speaker business, to protect against typosquatting on its own DOLCER trademark." In other words, the respondent allegedly registered a variation of its own trademark as a domain name to prevent a typosquatter from doing the same thing. The panel found this explanation acceptable. (Interestingly, the panel reached this decision based on email correspondence submitted by the complainant, given that the registrant of the domain name did not submit a response. As I've written before, many trademark owners have lost UDRP cases even in the absence of a response, since there is no "default judgment" available under the UDRP. See: "The Most Embarrassing Way to Lose a UDRP Complaint.") The case is not the first UDRP case to address the issue of a defensive domain name registration. A 2011 decision cited by the panel described a defensive registration this way: The Panel finds that the Respondent registered the Domain Name in 1999 as part of a policy of protecting itself against cybersquatters by the defensive registration of a large number of domain names similar to its own which might be used (if registered by others) to divert its customers or otherwise to damage its business. In that case, the respondent was allowed to keep the domain name even though the complainant owned trademarks that contained the word "SHOEBY" — because the respondent owned trademark regis[...]

Zero-Touch Provisioning… Really?


Zero-touch provisioning (ZTP) — whatever does that mean? Of course, it is another marketing term. I think the term "closer to zero-touch provisioning" is probably better, but CTZTP — as opposed to ZTP — is a bit more of a mouthful. Whenever I hear language like this that I'm not familiar with, I get struck by a bolt of curiosity. What is this new and shiny phrase that has just appeared as if from nowhere? Zero means zero, right? So by zero-touch provisioning, I was expecting to be dazzled. Services could be delivered to the customer without anyone having to put their hands near anything. How was this going to be done? Had someone invented a system run by robots and mind-control? Did we just need to think about what we wanted and it would get done? Unfortunately, this was not the case. Some touches were required. Whole networks needed to be in place, and this was going to require some physical touches. Already we are way above zero. Okay, so ZTP is probably based on the assumption that the infrastructure is in place. Is there a case to be made for zero touches? I'm still not seeing it. Someone still needs to take the customer order. If it is a new customer, then usually someone needs to go onsite. The service still needs to be checked to ensure it meets the standards required; at a minimum, the customer needs to access the internet, see a TV channel, or get a dial-tone. For the sake of getting to our goal of zero touches, we can make that process better. How about we just ship the required devices to the customer? Then the customer just needs to plug in, turn on, and connect to the network. Okay, so this is still not quite zero-touch, as the customer needs to do something, but it is zero touches for us. Now we don't need to send someone onsite. That helps a lot. Not only do we save on labor costs, but the customer becomes a shade more technical. But what if there's a problem? Now the customer has plugged everything in and they're not getting service.
So much for the great plan of just shipping the device out! Well, actually, this is where we can get really creative. Nowadays, we can generally determine if and when a device is connected. Once we know the device is connected, we can then ensure that the service is of good quality, e.g. using TR-069, SNMP, IPDR, and so on. Before we can do this, though, we need to map a device to a customer order. In other words, even if a device comes online, how do I know that this device is sitting in the right customer's premises? There are ways to deal with this, for example:

- Log the device that is sent to the customer address prior to delivery.
- Once the device is plugged in, use a walled garden to discover the device information and map it back to the customer. When the customer tries to access the Internet, they will be redirected to a walled garden. This redirection captures the device information, thereby registering the device.

In both cases, once the device is properly associated with the customer and is online, services will be set up and the service assurance workflows will be triggered. Decreasing the touches generally means increasing the automation. As we get closer and closer to zero touches, the automation increases and gets more complex. I'm sure you're also seeing other options here. NFV and SDN can contribute greatly to this. In my mind's eye, "zero touch" is a bit like an exponential decay curve that forever approaches zero but never quite reaches it. So even though it will probably never be literally "zero touch", I get the idea. The more we can remove "touches" from the process, the easier it will be to deploy new devices and the easier the whole provisioning cycle becomes. We offer a white paper with more information about getting as close as possible to zero-touch provisioning.

Written by Ronan Bracken, SAC Product Manager at Incognito Software Systems
Follow CircleID on Twitter
More under: Access Provi[...]
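The device-to-customer mapping step described above can be sketched in a few lines. This is purely illustrative — the order records, serial numbers and function names are hypothetical, not part of any real OSS/BSS or TR-069 API — but it shows the core idea: the serial logged at shipping time is the key that lets a walled-garden registration bind a newly connected device to the right customer order.

```python
def register_device(order_db, registrations, serial, reported_mac):
    """Walled-garden registration (illustrative): when an unknown device
    first connects, look up which customer order its serial was shipped
    under and bind the two, so provisioning can proceed hands-free."""
    order = order_db.get(serial)
    if order is None:
        return None  # unknown device: keep it in the walled garden
    registrations[reported_mac] = order["customer_id"]
    return order["customer_id"]

# Hypothetical order log: serial numbers recorded prior to delivery.
orders = {
    "SN-1001": {"customer_id": "CUST-42", "service": "100M-internet"},
    "SN-1002": {"customer_id": "CUST-77", "service": "triple-play"},
}
registrations = {}

print(register_device(orders, registrations, "SN-1001", "aa:bb:cc:dd:ee:ff"))
# An unlogged serial stays in the walled garden until resolved manually:
print(register_device(orders, registrations, "SN-9999", "11:22:33:44:55:66"))
```

Once the mapping succeeds, the service setup and assurance workflows mentioned above can be triggered against the bound customer record.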

History is Written and Revised by the Winners - Can the Internet Archive Change That?


Kremvax during the Soviet coup attempt (Top), Mumbai terrorist attack (Middle), The Arab Spring (Bottom)

I was naively optimistic in the early days of the Internet, assuming that it would enhance democracy while providing "big data" for historians. My first taste of that came during the Soviet coup attempt of 1991, when I worked with colleagues to create an archive of the network traffic in, out of and within the Soviet Union. That traffic flowed through a computer called "Kremvax," operated by RELCOM, a Russian software company. The content of that archive was not generated by the government or the establishment media — it was citizen journalism, the collective work of independent observers and participants, stored on a server at a university. What could go wrong with that? The advent of the Web and Wikipedia fed my optimism. For example, when terrorists attacked various locations in Mumbai, India in 2008, citizen journalists inside and outside the hotels that were under attack began posting accounts. The Wikipedia topic began with two sentences: "The 28 November 2008 Mumbai terrorist attacks were a series of attacks by terrorists in Mumbai, India. 25 are injured and 2 killed." In less than 22 hours, 242 people had edited the page 942 times, expanding it to 4,780 words organized into six major headings with five subheadings. (Today it is over 130,000 bytes, revisions continue, and it is still viewed over 2,000 times per month.) What could go wrong with that? The 2011 Arab Spring was also seen as a demonstration of the power of the Internet as a democratic tool and repository of history. What could go wrong with that?

What went wrong

The problem is that the Internet turned out to be a tool of governments and terrorists as well as citizens. Furthermore, historical archives can disappear or, worse yet, be changed to reflect the view of the "winner."
Our Soviet Coup archive was set up on a server at the State University of New York, Oswego, by professor Dave Bozack. What will happen to it when he retires? If someone tried to delete or significantly alter the Wikipedia page on the Mumbai attack, they might be thwarted by one of the volunteers who have signed up as "page watchers" — people who are notified whenever the page they are watching is edited. We saw a reassuring demonstration of the rapid correction of vandalism in a podcast by Jon Udell. That was cool, but does it scale? Volunteers burn out. The page on the Mumbai attacks has 358 page watchers, but only 32 have visited the page after recent edits. Even if a Wikipedia page remains intact, links to references and supporting material will eventually break — "link rot." If our Soviet Coup archive disappears after Dave's retirement, all the links to it will break. By the time of the Arab Spring, we were well aware of our earlier naivete — the Internet was already being used for terrorism and government cyberwar, and the dream of providing raw data for future historians and political scientists was fading.

The Internet Archive

I was slow to understand the fragility of the Internet, but others saw it early — most importantly, Brewster Kahle, who, in 1996, established the Internet Archive to cache Web pages and preserve them against deletion or modification. They have been at it for 20 years now and have a massive online repository of books, music, software, educational material, and, of course, Web sites, including our Soviet Coup archive. As shown here, it has been archived 50 times since October 3, 2002, and it will be online long after Dave retires — as long as the Internet Archive is online.

Soviet coup archive from Internet Archive

Kahle understands that saving static Web sites like the Soviet Coup archive only captures part of what is happening online today. Since the late 1990s, we have been able to add programs to Web sites, turn[...]
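Checking whether a page like the Soviet Coup archive has a preserved copy can be done programmatically. The Internet Archive publishes a Wayback Machine availability API at archive.org/wayback/available; the sketch below builds a query URL for it and parses a response in the documented shape. The example URL and snapshot values are hypothetical stand-ins, and the parsing runs against a canned response rather than a live request.

```python
import json
from urllib.parse import urlencode

WAYBACK_API = "https://archive.org/wayback/available"

def availability_url(page_url, timestamp=None):
    """Build a Wayback Machine availability query for a page,
    optionally asking for the snapshot closest to YYYYMMDD."""
    params = {"url": page_url}
    if timestamp:
        params["timestamp"] = timestamp
    return WAYBACK_API + "?" + urlencode(params)

def closest_snapshot(response_json):
    """Return the URL of the closest archived snapshot,
    or None if the page was never captured."""
    snap = response_json.get("archived_snapshots", {}).get("closest")
    return snap["url"] if snap and snap.get("available") else None

# Canned response in the documented shape (values hypothetical):
sample = json.loads("""{
  "archived_snapshots": {
    "closest": {
      "available": true,
      "url": "http://web.archive.org/web/2002/http://example.edu/coup/",
      "timestamp": "20021003000000",
      "status": "200"
    }
  }
}""")

print(availability_url("http://example.edu/coup/", "20021003"))
print(closest_snapshot(sample))
print(closest_snapshot({"archived_snapshots": {}}))  # never archived -> None
```

A link checker built this way could, for instance, rewrite rotted citations to point at their archived copies instead of dead servers.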

Fairness & Due Process Require Changes to ICANN's "Updated Supplementary Procedures" to the IRP


The Updated Supplementary Procedures for the Independent Review Process ("IRP Supplementary Procedures") are now up for review and Public Comment. Frankly, there is a lot of work to be done. If you have ever been in a String Objection or Community Objection, or negotiated a Consensus Policy, your rights are being limited by the way the current IRP Supplementary Procedures proposal is structured. With timely edits, we can ensure that all directly-impacted and materially-affected parties have actual notice of the IRP proceeding, a right to intervene, a right to be heard on emergency requests, and a right to be part of the discussion of remedies and responses.

History

The IRP is based on commercial arbitration. Arising centuries ago, commercial arbitration was used when two merchants chose to bring their disputes to a wise and trusted private party rather than await the decision of the courts. Arbitration, as we can all recite, is faster and cheaper. But is it fairer? Currently, the IRP Supplementary Procedures proposal is optimized for the traditional IRP/arbitration scenario: a registration industry member has a dispute with ICANN. The first IRP filer was ICM Registry, when Stuart Lawley felt that he had completed all of the requirements for .XXX and the ICANN Board refused to delegate it to him (under a lot of pressure from the GAC). ICM Registry wanted the .XXX Registry Agreement with ICANN and, through the brilliant representation of Becky Burr and her then-law firm, it won. That's the classic IRP — a one-on-one arbitration between a single party and ICANN.

Problem

But we have decided to use the IRP in different ways — including as the forum for a range of challenges to the decisions of other arbitration forums and to our Multistakeholder Consensus Policies. For these purposes, the IRP is functioning more as an appellate court than an arbitration forum. Yet we have not updated the IRP Supplementary Procedures to allow all involved parties to participate.
Fair is fair; an IRP proceeding should not be a dance between the disgruntled Claimant and ICANN. It should include all parties to the underlying arbitration (should they choose to participate) and all parties to the underlying Consensus Policy (ditto). ICANN Counsel is brilliant, but they were not directly engaged in the underlying arbitration, nor did they (or the ICANN Board) research, negotiate and write the Consensus Policy (the Community did!). Fundamental rules of due process in all developed country legal systems require that all directly impacted, materially affected parties have a legal right to be heard when there is a challenge to their rights and property. How, in good faith and in our new world of openness and transparency, can we exclude them from the IRP Proceeding?

1. IRPs Need to Include All Parties to a Previous Arbitration Decision — Especially the Winners!

ICANN's Bylaws expressly throw the IRP doors open to challenges to the decisions of other arbitration forums. This includes decisions of the World Intellectual Property Organization's Legal Rights Objections and the International Chamber of Commerce's Community Objections; even the International Center for Dispute Resolution (the ICDR, which hosts the IRP) decided String Objections in Round 1 of the New gTLD process. All of these proceedings are legitimate arbitrations in their own right, run by well-respected international arbitration forums. Yet, when it comes to the IRP, only the challenger (specifically, the losing party) is heard as a matter of right. How can that be? This must be an oversight in the IRP Supplementary Rules. Clearly, any challenge to another arbitration decision MUST include Actual Notice to All of the Parties to the Underlying Proceeding and the Underlying Provider. That Notice must be provided at the time of filing — not weeks or months later. Further, all parties to the Underlyin[...]

Should You Pay Ransomware Demands?


In 2016, ransomware became an increasingly serious problem for small and medium businesses. Ransomware has proven a successful revenue generator for criminals, which means the risk to businesses will grow as ransomware becomes more sophisticated and increasing numbers of ethically challenged criminals jump on the bandwagon. Every business must take steps to protect itself from ransomware, but talking about prevention doesn't help ransomware victims decide whether to pay to get their data back. It's an unpleasant position in which to find oneself. No one wants to pay criminals for access to their own data, but neither do they want to permanently lose access to information vital to their business. To pay, or not to pay? As you might expect, there's no definitive answer, but we can think through some of the factors that should influence your decision. The FBI's position on ransomware payments is straightforward: don't pay. The FBI believes paying doesn't guarantee access to the encrypted data, that it "emboldens" criminals to target more organizations, and that it encourages more criminals to join the ransomware industry. All of that is true, but business owners are understandably more interested in getting their data back now than in whether paying encourages future attacks. Nevertheless, before paying, business owners should consider that by paying, they paint a target on their back. Criminals will bleed a victim dry if they're able. If you make a payment, you show the attacker that you're the sort of person who pays, and that can only encourage the attacker to find out how much more they can extort. If you choose to pay, you may or may not receive the keys to unlock your data. There is no guarantee that the keys will ever be delivered. But, counter-intuitive as it may sound, the ransomware model is based on trust. Victims have to trust that attackers will release their data — otherwise there's no incentive to pay. In most cases, people who pay get their data back.
In fact, the largest ransomware operations provide excellent customer service. They will help you pay and decrypt the data. Ultimately, your decision to pay should be predicated on a simple calculation: are the data I stand to lose and any future risk caused by paying worth the price being asked? The best way to avoid paying is to make sure that you never become the victim of a ransomware attack in the first place. That might seem like a truism, but it's surprising how many business owners don't take the simplest steps to keep their data safe. Educating employees about ransomware and phishing should be a high priority, but the single most important action a business owner can take is the creation of regularly updated offsite backups. Ransomware is only effective if it deprives the business of data; if that data is duplicated in a place the attackers can't reach, they have no leverage and you won't have to pay them a cent.

Written by Rachel Gillevet, Technical Writer
Follow CircleID on Twitter
More under: Cybercrime, Security [...]
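Backups only remove the attackers' leverage if they are verified — a backup that silently diverged from the live data, or that was itself encrypted, is no backup at all. The sketch below is one simple way to check backup integrity by comparing per-file checksums; the directory layout and file names are hypothetical, and a real deployment would of course keep the backup copy offsite and out of the attackers' reach.

```python
import hashlib
import os
import tempfile

def sha256_of(path):
    """Stream a file through SHA-256 and return its hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_backup(source_dir, backup_dir):
    """Return the sorted list of files that are missing from the backup
    or whose contents differ. If ransomware has scrambled the source,
    the mismatch shows up immediately."""
    bad = []
    for name in os.listdir(source_dir):
        src = os.path.join(source_dir, name)
        dst = os.path.join(backup_dir, name)
        if not os.path.exists(dst) or sha256_of(src) != sha256_of(dst):
            bad.append(name)
    return sorted(bad)

# Demo with temporary directories standing in for live data and backup.
with tempfile.TemporaryDirectory() as live, tempfile.TemporaryDirectory() as bak:
    for d in (live, bak):
        with open(os.path.join(d, "ledger.txt"), "wb") as f:
            f.write(b"invoices")
    print(verify_backup(live, bak))   # [] -> backup matches
    # Simulate ransomware scrambling the live copy:
    with open(os.path.join(live, "ledger.txt"), "wb") as f:
        f.write(b"ENCRYPTED!!")
    print(verify_backup(live, bak))   # ['ledger.txt'] -> restore from backup
```

Run as part of a scheduled backup job, a check like this turns "we have backups" into "we have backups we know we can restore from".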

New Report on "State of DNSSEC Deployment 2016" Shows Continued Growth


Did you know that over 50% of .CZ domains are now signed with DNS Security Extensions (DNSSEC)? Or that over 2.5 million .NL domains and almost 1 million .BR domains are now DNSSEC-signed? Were you aware that around 80% of DNS clients are now requesting DNSSEC signatures in their DNS queries? And did you know that over 100,000 email domains are using DNSSEC and DANE to enable secure email between servers? These facts and many more are available in a new report published by the Internet Society: State of DNSSEC Deployment 2016. While many separate sites provide DNSSEC statistics, this report collects the information into a series of tables and charts that paint an overall picture of the state of DNSSEC deployment as of December 2016. As the report indicates, there has been steady and strong growth in both the statistics around DNSSEC signing and validation — and also in the number of tools and libraries available to support DNSSEC. It also discusses the growth of DANE usage (DNS-based Authentication of Named Entities), particularly for securing email communication. That growth, though, is not evenly distributed. In some parts of the world, particularly in Europe, there is solid growth in both DNSSEC signing and validation. In other parts of the world, the numbers are significantly lower. Similarly, while some country-code top-level domains (ccTLDs) such as .CZ, .SE, .NL and .BR are seeing high levels of DNSSEC signing of second-level domains, other ccTLDs are just beginning to see DNSSEC-signed domains. And among the other TLDs, some such as .GOV have almost 90% of their second-level domains signed, while .COM has under 1% signed. The report dives into all this and more. Beyond statistics, the document explores some of the current challenges to deployment of DNSSEC and provides a case study. It also includes many links to further resources for more exploration. Creating a report of this scope involves a great number of people.
I'd like to thank all the members of the DNS / DNSSEC community who provided data, reviews, proofreading and other support. Our intent is that this will be an annual report where we can look back and see what has changed year-over-year. Our target now is for the 2017 report to be delivered at the DNSSEC Workshop at ICANN 60 in November. To that end, I would definitely welcome any comments people have about what is in the report and what people find useful and helpful. I'd also welcome comments about anything we may have missed. Please do read and share this report widely. We'd like people to understand the current state of DNSSEC deployment — and how we can work together to accelerate that progress. On that note, if you want to get started with DNSSEC for your own network or domains, many resources are available to help. P.S. An audio commentary is also available for those interested in listening to me talk about this topic.

Written by Dan York, Author and Speaker on Internet technologies - and on staff of Internet Society
Follow CircleID on Twitter
More under: DNS, DNS Security, Domain Names, ICANN, Security, Top-Level Domains [...]
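The "80% of DNS clients requesting DNSSEC signatures" statistic above refers to resolvers setting the DO (DNSSEC OK) bit in their queries; on the response side, a validating resolver signals success by setting the AD ("authenticated data") bit in the DNS header (RFC 1035 defines the header, RFC 4035 the AD semantics). The sketch below decodes those header flag bits from a raw 16-bit flags word; the example flag values are hypothetical but use the standard bit positions.

```python
# DNS header flag bits (RFC 1035 / RFC 4035). The AD bit is how a
# validating resolver tells the client that DNSSEC validation
# succeeded for the answer.
QR, RD, RA, AD, CD = 0x8000, 0x0100, 0x0080, 0x0020, 0x0010

def describe_flags(flags):
    """List the names of the flag bits set in a DNS header flags word."""
    names = [("QR", QR), ("RD", RD), ("RA", RA), ("AD", AD), ("CD", CD)]
    return [n for n, bit in names if flags & bit]

def dnssec_validated(flags):
    """True if the response carries the AD bit, i.e. the resolver
    validated the DNSSEC chain of trust for this answer."""
    return bool(flags & AD)

# Hypothetical flag words as they might appear in a response:
validated = 0x8180 | AD      # QR, RD, RA + AD -> a validated answer
unvalidated = 0x8180         # same answer, no validation performed

print(describe_flags(validated))                               # ['QR', 'RD', 'RA', 'AD']
print(dnssec_validated(validated), dnssec_validated(unvalidated))  # True False
```

This is the same "ad" flag you see in `dig` output when querying a signed domain through a validating resolver, which makes it an easy first check when verifying your own DNSSEC deployment.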

How a Plaintiff Was Undeceived and Lost at Spam Litigation - What Nobody Told You About!


Back in 2003, there was a race to pass spam legislation. California was on the verge of passing legislation that marketers disdained. Thus marketers pressed for federal spam legislation which would preempt state spam legislation. The Can Spam Act of 2003 did just that… mostly. "Mostly" is where litigation lives. According to the Can Spam Act preemption-exception: This chapter supersedes any statute, regulation, or rule of a State or political subdivision of a State that expressly regulates the use of electronic mail to send commercial messages, except to the extent that any such statute, regulation, or rule prohibits falsity or deception in any portion of a commercial electronic mail message or information attached thereto. 15 USC s 7707(b)(1). The preemption-exception is big because California affords a private right of action, where the Can Spam Act does not. The Can Spam Act is enforced by state and federal authorities only. This is where today's plaintiff, in Silverstein v. Keynetics, Inc., Dist. Court, ND California 2016, attempted to hang his coat. According to the court, "Plaintiff is a member of the group 'C, Linux and Networking Group' on LinkedIn, a professional networking website. Through his membership in that group, he received unlawful commercial emails that came from fictitiously named senders through the LinkedIn group email system. The emails originated from the domain "," even though non-party LinkedIn did not authorize the use of its domain and was not the actual initiator of the emails." The emails themselves contained marketing links that led, allegedly, to defendants' businesses. Plaintiff alleged that the names in the 'from' field of the emails were false or deceptive. According to Plaintiff, "the 'from' names include 'Liana Christian,' 'Whitney Spence,' 'Ariella Rosales,' and 'Nona Paine,' none of which identify any real person associated with any defendant. 
Further, Plaintiff alleges that the emails 'claim to be from actual people' and that all of the false 'from' names deceive the emails' recipients 'into believing that personal connection could be made instead of a pitch for Defendants' products.'" A reading of the Can Spam Act would appear to be clear. The Can Spam Act preempts state causes of action "except to the extent that any such statute prohibits [either] falsity or deception." If the email is either false or deceptive, it would seem, Plaintiff could proceed. In the case at hand, the information in the 'from' field would appear to be false. The Judge in the Silverstein decision, however, hangs her hat on a previous 9th Circuit decision in Gordon v. Virtumundo, 575 F.3d 1040 (9th Cir. 2009). In Gordon, defendant sent out marketing emails from domain names that it had registered such as "," "," and "". These were, in fact, defendant's domain names. While the 'from' field may not have clearly identified who the defendant was, the information was neither false nor deceptive. Furthermore, according to the court, the WHOIS database accurately reflected to whom the domain names were registered. Therefore, at best, the 'from' field information was incomplete, but not false or deceptive. As a result, the Can Spam Act preempted litigation under state law. The Gordon court elaborated that it is insufficient for the information in the spam to be merely problematic. It had to be materially problematic. The Gordon court looked at the words "false" and "deceptive," and other language of the Can Spam Act, and said, in effect, "we know those words. Those words refer to 'traditionally tortious or wrongful conduct.'" Recognizing the Internet as a trans-border medium of communication, Congress had attempted to solve the patchwork of inconsistent state spam laws [...]

CircleID's Top 10 Posts of 2016


The new year is upon us and it's time for our annual look at CircleID's most popular posts of the past year, highlighting those that received the most attention. Congratulations to all the 2016 participants and best wishes to all in the new year. Additionally, you can also visit the leaderboards for CircleID's overall top 100 community and industry participants.

Top 10 Featured Blogs from the community in 2016:

1. How .MUSIC Will Go Mainstream and Benefit ICANN's New gTLD Program | Constantine Roussos | Jan 06, 2016 | Viewed 39,642 times
2. Examining IPv6 Performance - Revisited | Geoff Huston | Aug 19, 2016 | Viewed 18,134 times
3. Cybersquatting & Banking: How Financial Services Industry Can Protect Itself Online (Free Webinar) | Doug Isenberg | May 02, 2016 | Viewed 15,333 times
4. We Need You: Industry Collaboration to Improve Registration Data Services | Scott Hollenbeck | May 24, 2016 | Viewed 14,751 times
5. Usage Trumps Registrations: Why Past TLDs Failed and Why Many Will Follow in Their Path | Colin Campbell | Apr 09, 2016 | Viewed 14,529 times
6. ICANN Fails Consumers (Again) | Garth Bruen | Apr 15, 2016 | Viewed 14,390 times
7. Canon Takes Its .brand to the World, Moves Its Global Site to .CANON | Tony Kirsch | May 18, 2016 | Viewed 14,248 times
8. Internet Governance Outlook 2016: Cooperation & Confrontation | Wolfgang Kleinwächter | Jan 11, 2016 | Viewed 14,222 times
9. Internet Stewardship Transition Critical to Internet's Future | Daniel A. Sepulveda | Sep 16, 2016 | Viewed 13,347 times
10. The Future of Domain Name Dispute Policies: The Journey Begins | Doug Isenberg | Apr 27, 2016 | Viewed 12,859 times

Top 10 News in 2016:

1. IPv6 Now Dominant Protocol for Traffic Among Major US Mobile Providers | Aug 21, 2016 | Viewed 16,228 times
2. Sweden Makes its TLD Zone File Publicly Available | May 16, 2016 | Viewed 13,006 times
3. Internet Governance Forum Puts the Spotlight on Trade Agreements | Dec 09, 2016 | Viewed 11,462 times
4. Hong Kong Billionaire Richard Li Becomes First Person to Own a TLD Matching His Name | May 12, 2016 | Viewed 10,466 times
5. WordPress Announces New .BLOG TLD, to be Available This Year | May 12, 2016 | Viewed 9,686 times
6. Next Round of New TLDs May Not Happen Until 2020, Says ICANN | May 05, 2016 | Viewed 8,399 times
7. PirateBay Domains to Be Handed over to the State, Swedish Court Rules | May 14, 2016 | Viewed 8,052 times
8. Series of New African TLDs Fail to Go Live, Get Termination Notice from ICANN | May 11, 2016 | Viewed 7,881 times
9. Google Releases 'Noto', Free Font Covering Every Language and Every Character on the Web | Oct 09, 2016 | Viewed 7,057 times
10. Cisco Issues High Alert on IPv6 Vulnerability, Says It Affects Both Cisco and Other Products | Jun 03, 2016 | Viewed 6,467 times

Top 10 Industry News in 2016 (sponsored posts):

1. Move Beyond Defensive Domain Name Registrations, Towards Strategic Thinking | Boston Ivy | May 17, 2016 | Viewed 15,807 times
2. Verisign Launches New gTLDs for the Korean Market, .닷컴 and .닷넷 | Verisign | May 16, 2016 | Viewed 12,915 times
3. Meet Boston Ivy, Home to Some of the Most Specialized TLDs in the Financial Services Sector | Boston Ivy | May 24, 2016 | Viewed 12,397 times
4. Verisign Opens Landrush Program Period for .コム Domain Names | Verisign | May 16, 2016 | Viewed 11,981 times
5. New .PROMO Domain Sunrise Period Begins Today | Afilias | Apr 14, 2016 | Viewed 11,831 times
6. Domain Management Handbook from MarkMonitor | MarkMonitor | May 10, 2016 | Viewed 11,810 times
7. Afilias Announces Relaunch of .GREEN TLD | Afilias | Apr 22, 2016 | Viewed 11,338 times
8. New TLD .STORE Crosses 500+ Sunrise Applications | Radix | May 31, 2016 | Viewed 10,095 times
9. Minds + Machines Group Announces Outsourcing Agreements, Web Address Change | Minds + Machines | Apr 08, 2016 | Viewed 9,943 times
10. Is Your TLD Threat Mitigation Strategy up to Scratch? | Neustar | May 17, 2016 | Viewed 9,411 times

Written by CircleID Reporter. More under: Ac[...]

Internet Governance Outlook 2017: Nationalistic Hierarchies vs. Multistakeholder Networks?


Two events, which made headlines in the digital world in 2016, will probably frame the Internet Governance agenda for 2017. On October 1, 2016, the US government confirmed the IANA stewardship transition to the global multistakeholder community. On November 2, 2016, the Chinese government announced the adoption of a new cybersecurity law, which will enter into force on July 1, 2017. IANA Transition and the Chinese Cybersecurity Law The IANA transition stands for a multistakeholder, bottom-up policy development process. The Chinese law stands for a top-down governmental approach. The new ICANN Bylaws are probably the most advanced version of a multistakeholder mechanism for a free, open and unfragmented Internet. The Chinese cybersecurity law is probably the most outspoken version of how a country can control the Internet within its territorial borders. Here we have a global multistakeholder network. There we have a national government. And it is not only the Chinese government which introduces strong national Internet legislation. So do Russia, Turkey, Iran, Pakistan, Saudi Arabia, Hungary, Poland, and even the United Kingdom. Will we see a new type of conflict between multistakeholder networks and national Internet policies? Will the wave of new nationalism sweep into the borderless cyberspace? Will, with a new president in Washington's Oval Office, pure power politics trample collective wisdom? Will fictions beat facts? The short answer to these rhetorical questions is, unfortunately, "Yes". Yes, we will see a continuation of a chilly "Cold Cyberwar". Yes, we will see that more governments, in the name of security, will restrict fundamental individual human rights such as privacy and freedom of expression. And yes, we will see that more governments want to re-nationalize the global cyberspace and erect borders around their "national Internet segment" where they can control individuals, private corporations, personal data as well as the flow and the content of communication.
However, the short answer tells only half of the truth. The reality is more complex. To describe the basic cyberconflict of our time as "Democracies vs. Dictatorships" would be an oversimplification. Yes, there are conflicts between political structures, value systems, and ideologies. And yes, there are conflicts between borderless spaces (managed by multistakeholder networks) and bordered places (managed by hierarchically organized states). But the truth is that there are hierarchies in networks and networks in hierarchies. And there is no 100 percent democracy on one side and 100 percent dictatorship on the other side. There are Western governments which prefer strong Internet regulation, argue that cybersecurity is more important than data protection, and reduce their commitment to the multistakeholder model to the technical management of Internet resources such as domain names, IP addresses or Internet protocols. On the other hand, the Chinese government has recognized that the concept of sovereignty in cyberspace, as it is pushed forward by President Xi, also has to take into consideration the role of non-state actors. Critical observers recognized that during the 3rd high-level Wuzhen Conference in November 2016, Chinese officials introduced the terminology of "multi-party governance", which is the Chinese version of the multistakeholder model. "Multi-party Internet governance" invites the Chinese private sector, technical community, and even civil society to participate in Internet policy making. How far this will go in practice remains to be seen, but it is an interesting move in an ideologically overloaded Internet Governance language. In other words, what we saw in 2016 and what we will see in 2017 is a growing mix of approaches with a broad spectr[...]

Parsing Domain Names Composed of Random Letters for Proof of Cybersquatting


The Respondent's cry of pain in AXA SA v. Whois Privacy Protection Service, Inc. / Ugurcan Bulut, axathemes, D2016-1483 (WIPO December 12, 2016), "[w]hat do you want from me people? I already removed all the files from that domain and it's empty. What else do you want me to do???", raises some interesting questions. "A," "x," and "a" is an unusual string of letters, but unlike other iconic strings such as "u," "b," and "s" or "i," "b," and "m" (which started their lives as the first letters of three-word brands), AXA is not an acronym. Whether invented strings or acronyms, iconic strings are not just random letters. Combining them with dictionary words (whether or not suggesting an association with complainants' businesses) is essentially conclusive of cybersquatting. Adding "themes" to AXA (particularly when AXA also owns AXA THEMA), "money" to UBS where UBS is in the money business, or "food" to IBM (even though food has no direct connection with IBM's business) undercuts these respondents' credibility even if they appear and argue ignorance of intention; that they had an entirely different project "in mind" without reference to the trademarks. Innocence is essentially Respondent's position in AXA. He put on a show of indignation but had no explanation for incorporating the trademark in the domain name: "How can you know [he says] that [the domain name] refers [to] AXA THEMES, maybe it's AXAT HEMES or AXATH EMES." Well, why not? Conceivably, a registrant could use made-up phrases to create a business using either of the two alternative possibilities, but to do that he would have to offer demonstrable proof of such a business existing or in formation. In the absence of proof, Panels will infer cybersquatting. Complainants prevail when Respondents cannot explain what the trademark is doing in the domain name, even if the added word has no association with the trademark.
What if random letters claimed as a trademark appear in a domain name that spells a dictionary word? How far does a trademark owner's right extend with random letters? For example, could AXA SA have claims to (plural of taxon) or, on the theory of a one-letter replacement, "e" for "a", charge respondent with typosquatting, or even more preposterous ? None of these compositions suggests the business of insurance, so making a claim would be a stretch best not taken. The WIPO Overview gets into the act by suggesting "theatre" as an example in which a mark owner of HEAT commences a proceeding on the grounds of confusing similarity because the domain name includes the letters "h," "e," "a," and "t." The issues came up for real in Philipp Plein v. Kimberly Webb, D2014-0778 (WIPO July 30, 2014), in which Complainant saw its trademark embedded in , that is, "people in casinos", an unlikely phrase certainly, but intentionally targeting Complainant? The Panel observed: [A] consequence of the Complainant's argument is that any trademark appearing in a domain name would satisfy the confusing similarity test on the basis that it would be recognized within the domain name by search engines… This result changes the current understanding of confusing similarity, and suggests that even if confusion by technological effect were accepted, visual or aural characteristics cannot be disregarded entirely… It is not sufficient that the trademark is only visible when the viewer is told it is definitely there. However, the "plein" case is exotic; so far it is one of a kind. As a general rule, to be confusingly similar the trademark must be visible to the objective observer. A string of letters with[...]
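The Panel's worry about purely mechanical matching is easy to reproduce. Below is a hypothetical sketch (my own illustration, not a UDRP tool; the domain strings are reconstructions of the examples discussed above): a naive substring test "finds" HEAT inside theatre and PLEIN inside a run-together "people in casinos", which is exactly why panels require the mark to be visible to an objective observer rather than merely present as a letter sequence.

```python
def mark_in_domain(domain: str, mark: str) -> bool:
    """Naive test: is the trademark present as a contiguous letter
    sequence in the domain's second-level label? This deliberately
    over-matches, illustrating why UDRP panels reject pure substring
    matching as a confusing-similarity test."""
    label = domain.lower().split(".")[0]   # strip the TLD, compare case-insensitively
    return mark.lower() in label

# "heat" hides inside "theatre"; "plein" inside "peopleincasinos"
print(mark_in_domain("theatre.com", "HEAT"))           # True, yet no observer sees HEAT
print(mark_in_domain("peopleincasinos.com", "PLEIN"))  # True, the one-of-a-kind "plein" scenario
print(mark_in_domain("axathemes.com", "AXA"))          # True, and here the mark really is visible
```

The test returns the same answer for all three domains even though only the last would strike an objective observer as containing the mark, which is the over-matching problem the Panel describes.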

Is Proprietary Dead?


A new age of openness is coming upon us. At least that's what we're being told. For instance: "The reign of closed solution suites is over, shifting to the rise of open, heterogeneous software ecosystems." Maybe it's my 30 years in the information technology business (how many people remember Thomas-Conrad ARCnet hardware?), but I'm not convinced. It's worth taking a moment to consider the case. On the positive side, there is a huge movement towards openness in many areas of the IT world. It is slowly becoming possible, for instance, for mid-scale operators to disaggregate their routers and switches into multiple parts, each purchased and managed to obtain the best bang for the buck. In this regard, the importance of the router (or switch) as an appliance certainly seems to be on the wane. The open source movement, standing on the shoulders of open standards, certainly seems to be making huge strides. There are now a number of open source routing stacks available (including Free Range Routing, forked off of Quagga). Various flavors of *NIX are available through open source that are production grade, and many companies are contributing large and important projects (such as Kafka) to the community. These open source projects form the backbone of the cloud, in fact; cloud providers largely build their services on open source software and white box hardware. Open19 is accelerating the move towards commodity compute and storage, as well, making the white box buy much more compelling for mid-scale operators. The existence of large-scale, widely available development platforms is making a lot of companies ask: why buy hardware from a name brand vendor when you can rent a cheaper version that someone else maintains? But all the roads in the world do not lead to open software systems. There are several counter-movements that need to be watched carefully if we are to see the whole picture. The first to note is the Software-Defined Wide Area Network (SD-WAN) movement.
While it might be fairly invisible to hyper- and web-scale operators, it is "in your face" for last mile and transit providers. SD-WAN is taking the wide area world by storm, with most transit and last mile providers either scrambling to keep up, or partnering with an existing company in the space. More importantly for the question this post is asking: SD-WAN is based on completely closed, proprietary solutions that do not interoperate with anything else. The second to note is the serverless movement. At first glance, serverless is just another stage of the cloud. First, you remove the storage, then the compute, then the network, and, finally, the operating system. But there is a more important point in the serverless revolution. To go serverless, you must move your applications into an API controlled by a provider. While these applications might well interoperate with other applications and systems, they must do so through the facilities provided by the operator. Serverless is, to put it in more familiar terms, at least for someone who used to work on mainframes and minis, a pretty mainframe'ish version of the cloud. Again, the very definition of a closed, proprietary system. So there are several examples of movement towards open software running on commodity hardware. There are, at the same time, several examples of movement towards closed systems running on what is essentially proprietary hardware bundled with software. Neither case is a "pure appliance" play, but both cases rely on strong vertical integration to create a new way of solving old (and sometimes new) problems. What does all of this tell us? If you hold one set of facts steadily in view, it appears the days of proprietary solutions are o[...]
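The serverless point above, that your application becomes a function living inside a provider-controlled API, can be made concrete with a minimal sketch. The `handler(event, context)` shape follows the convention popularized by AWS Lambda; the function body and field names here are hypothetical.

```python
import json

# A FaaS-style handler: the provider owns invocation, scaling, networking,
# and the OS; your application shrinks to filling in this one entry point.
def handler(event, context=None):
    """Hypothetical request handler. Everything around it (runtime,
    lifecycle, even the shape of `event` and `context`) is dictated
    by the platform, which is the closed, proprietary boundary the
    article describes."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }

print(handler({"name": "serverless"}))
```

Note that nothing in this code opens a socket or touches a disk; any interoperation with other systems happens only through facilities the operator exposes, which is the article's point.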

It's Official: 2016 Was a Record Year for Domain Name Disputes


As I predicted more than three months ago, 2016 turned out to be a record year for domain name disputes, including under the Uniform Domain Name Dispute Resolution Policy (UDRP). That's according to statistics from the World Intellectual Property Organization (WIPO), the only UDRP service provider that publishes real-time data on domain name disputes. WIPO's statistics show 3,022 cases in 2016 — an increase of almost 10 percent from 2015. The previous most-active year for domain name disputes was 2012, and the number of cases has been on the rise ever since. [Chart: Number of WIPO Domain Name Dispute Cases] In addition to a rise in the number of cases filed at WIPO, the total number of domain names in dispute (since a single case can relate to more than one domain name) also rose, and significantly. WIPO's caseload in 2016 included 5,368 domain names — a spike of 23 percent since the previous year. The increase is likely attributable to a number of factors, including the economy, new cybersquatting tactics, the growing prevalence of the Internet and, most especially, the ongoing launch of more than 1,000 new generic top-level domains (gTLDs). For example, although .com remains — by far — the most-often disputed top-level domain, the following new gTLDs were represented in a notable number of disputes at WIPO in 2016: .cloud, .club, .date, .lol, .online, .shop, .site, .space, .store, .top, .vip, .website and .xyz. Importantly, the total number of domain name disputes is much greater than represented by these WIPO statistics, for a number of reasons: In addition to WIPO, four other entities are accredited as UDRP service providers, including the Forum (formerly the National Arbitration Forum), which also receives a significant number of filings. The other UDRP providers are the Czech Arbitration Court, the Asian Domain Name Dispute Resolution Centre and the Arab Center for Domain Name Dispute Resolution.
Some new gTLD disputes are resolved via the new Uniform Rapid Suspension System (URS) rather than the UDRP. (WIPO does not provide URS services.) Many country-code top-level domains (ccTLDs) are not subject to the UDRP, including such popular ccTLDs as .uk and .in, which are administered by other dispute service providers. Some domain name disputes are decided in court systems all around the world, rather than through administrative proceedings such as the UDRP, the URS and ccTLD policies. So, while it may be impossible to count the total number of domain name disputes worldwide, the WIPO statistics are probably the best gauge of trends. And the trend is clear: domain name disputes are on the rise. Written by Doug Isenberg, Attorney & Founder of The GigaLaw Firm. More under: Cybersquatting, Domain Names, Law [...]

2016 New gTLD Year in Review (Infographic)


This post provides an overview of The 2016 New gTLD Year in Review infographic, reflecting on some of the intriguing highlights of the gTLD industry. The data analyzed within the infographic is based on the following:

- New Top Level Domains (TLDs) contained in the data set reflect open TLDs and exclude single-registrant TLDs such as brands
- For greater insight, TLDs have been separated into four quartiles or 'tiers', with tier 1 being the top 25% and tier 4 being the bottom 25%
- Initial registration upswings have been eliminated by requiring TLDs in the data set to have been in General Availability for at least 60 days
- Top ten rankings are based on projected yearly revenues, which are in turn based on daily registration volumes
- Registry revenues do not include premium name sales, as dependable revenue figures are not available
- Operational losses are based on TLD revenues against a conservative $150k in expenses
- Revenues are based on the average retail price across four registrars (101Domain, eNom, GoDaddy and United Domains) in December 2016
- If significantly low registration pricing (less than $5) was employed on an extended or repetitive basis, the lowest price was used. This is a change from prior comparisons, where such TLDs were removed from the data set.

* * *

Top Level Domain Statistics and Business Implications

2016 Overview

- The data set analyzed contains 475 TLDs that were in General Availability for at least 60 days (an increase of 63 over 2015)
- Average number of registrations per day is 66
- Top 25 TLDs account for 40% of revenues and 12% of registration volumes (a significant change from 2015, when half of revenues accounted for half of the registration volume)
- Less than 4% of TLDs will exceed ICANN's minimum yearly fee
- The largest group of TLDs is in the $20-$25 retail price range, followed by the $25-$30 range
- Average revenue of all gTLDs is $252k
- Average retail prices within each tier vary widely ($34.14 to $206.70), yet the median price variance is less significant ($28.49 to $33.74)
- All tiers have a 'very weak' correlation between price and volume
- Based on today's data, 66% of TLDs are projected to operate at a loss for the next year based on conservative yearly expenses of $150k; with over 400 TLDs belonging to portfolio companies, the percentage decreases

2016 Insights from gTLD Statistics and Business Implications by Quartile

Tier 1: Trailblazers – Leading TLDs with a consistent gap over the other three tiers based on higher prices and consistent volume

- Average retail price of $207 (increase from $91 in 2015) and a median of $32.99 (decrease from $35 in 2015, but lower than tier 3's $33.74)
- 5 out of 10 of the highest average retail priced TLDs are in tier 1 (.auto, .car, .cars, .security, .protection)
- 70% of TLDs in tier 1 in 2015 remain in tier 1 in 2016; however, 51% had a reduction in their retail price (on average $12.47, with a median of $3.72), resulting in an average registration volume increase of 48 but a median volume decline of 15
- Projected yearly volume remains relatively unchanged, with a small decrease of 2.2% over 2015
- 34% and 42% of TLDs that went into General Availability in 2015 and 2014, respectively, are in tier 1
- Average TLD length is up to 5.45 characters
- Deeply discounted TLDs in tier 1 include .loan, .online, .site, .gdn, .bid, .tech
- Tier 1 TLDs include: .vip, .shop, .mom, .bank, .design, .nyc, .games, .city, .lawyer
- More precise pricing needs to be tracked to provide comparable yearly revenue projections and analysis

Tier 2: Path Finders – Finding their Way

- Average retail price jumped from $51 in 2015 to $75 in 2016; However, the median price had a[...]
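The projection method the infographic describes (daily registrations times average retail price, measured against a flat $150k expense line, with TLDs ranked into quartile tiers) can be sketched as follows. The numbers and TLD names below are invented for illustration and are not taken from the infographic's underlying data set.

```python
def projected_yearly_revenue(avg_daily_registrations: float, avg_retail_price: float) -> float:
    """Projected yearly registry revenue: average daily volume x 365 x average retail price."""
    return avg_daily_registrations * 365 * avg_retail_price

def operates_at_loss(revenue: float, yearly_expenses: float = 150_000) -> bool:
    """The report's conservative test: projected revenue below a flat $150k expense line."""
    return revenue < yearly_expenses

def assign_tiers(revenues_by_tld: dict) -> dict:
    """Rank TLDs by projected revenue and split into four quartiles (tier 1 = top 25%)."""
    ranked = sorted(revenues_by_tld, key=revenues_by_tld.get, reverse=True)
    quartile = max(1, len(ranked) // 4)
    return {tld: min(i // quartile + 1, 4) for i, tld in enumerate(ranked)}

# Illustrative figures only: (average daily registrations, average retail price)
sample = {tld: projected_yearly_revenue(daily, price) for tld, (daily, price) in
          {".alpha": (400, 30.0), ".beta": (90, 25.0),
           ".gamma": (12, 20.0), ".delta": (3, 15.0)}.items()}
print(assign_tiers(sample))
print({tld: operates_at_loss(rev) for tld, rev in sample.items()})
```

With these made-up inputs, the two low-volume strings land below the $150k line, mirroring the report's finding that roughly two-thirds of TLDs are projected to operate at a loss.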

Is 2017 Crunch Time for the Domain Industry?


Verisign spent the best part of 2016 putting out warnings. The .COM operator and domain industry heavyweight highlighted its Q3 earnings report with a stern "Ending Q4 '16 Domain Name Base expected to decrease by between 1.5M to 2.8M registrations from the end of Q3 '16", a forecast the company said was based "on historical seasonality and current market trends." As 2016 drew to a close, the downturn seemed to materialize, with specialist blog The Domains running a story on December 29 entitled ".Com Registrations Loses Another 1 Million Domains In Less Than 2 Weeks; Now Under 127 Million". So after years of growth, is the domain industry about to suffer bleaker times? What do the numbers say? Using Verisign as the authoritative source for global domain name stats, and looking at mid-year numbers from 2008 on, the trend remains clearly one of growth:

- H1 2008 ended with a total of 168 million domain names, 67 million of which were ccTLDs.
- H1 2009 showed growth of 9% overall (to 184 million) and 14% for ccTLDs (to 74.4 million).
- H1 2010 had 7% growth overall (to 196.3 million) and 2.5% for ccTLDs (to 76.3 million).
- H1 2011 had 8.6% growth overall (to 215 million) and 8.4% for ccTLDs (to 84.6 million).
- H1 2012 had 11.9% growth overall (to 240 million) and 18.5% for ccTLDs (to 100.3 million).
- H1 2013 had 8% growth overall (to 265 million) and 14% for ccTLDs (to 119.5 million).
- H1 2014 had 7.5% growth overall (to 276 million) and 13.1% for ccTLDs (to 127.1 million).
- H1 2015 had 5.9% growth overall (to 296 million) and 8.2% for ccTLDs (to 138 million).

The chartist view The half-year numbers for 2016 aren't yet published, but first-quarter figures show a world total of 326.4 million with 148.2 million ccTLDs. So the growth trend certainly appears to be holding. For now. In recent conversations, two major registrars told me they had experienced a serious downturn in demand in Q3 2016.
When I asked why, they admitted to not knowing… and to being surprised by the severity of the drop. So, a blip or a trend? And what about 2017? The truth is, no one knows what's in store for the domain industry this year. What if we use charts, as financial analysts do, to try to understand where the markets might go next? For example, the charts on new gTLDs seem very positive. The specialist website puts the number of new gTLD domains at around 1.4 million in July 2014, rising to 6.4 million a year later and to 22.8 million in July 2016. That's a strong upward trend on any chart! But how much of that is real demand, which would point to solid long-term growth potential, and how much is opportunistic speculation? Looking at rankings, the top 3 new gTLDs alone account for over 45% of total new gTLD registrations. These 3 TLDs, .xyz, .top and .win, have at times used heavy discounts to generate sales. But are they still speculative, or have they now gone mainstream? Today, using leading European registrar 1&1 as an example, they seem to be offered at standard industry pricing (£19.99 first year for .win, £6.99 first year for .xyz and .top; by comparison, 1&1 is currently running a £2.99 promo on .eu first-year registrations). It's the economy, stupid! Whilst low-cost models are sometimes frowned upon, other approaches to TLD operation and domain use seem to aim at providing value to users, rather than being price-centric and therefore more amenable to domainers. Two examples are brand TLDs and City or Regional names. In these instances, the focus is not on the DNS side of the equation. The TLD becomes what some argue i[...]

Bridging California's Rural Digital Divide


A shift has occurred in agriculture: farmers are relying not only on clouds but, increasingly, on the cloud. With the click of a mouse, farmers can find out in real time which fields need water and chemical inputs. The use of this technology, called precision agriculture, is helping farming become more productive and environmentally friendly, and is revolutionizing how our food is cultivated. The Food Basket of the World California's Central Valley seems like it would be at the forefront of this shift towards precision agriculture. Called "the food basket of the world," California produces 70% of the total fruit and tree nut farm value and 55% of the vegetable farm value for the United States, all within driving distance of Silicon Valley's technology hub. But ironically, California's agriculture has fallen behind. Lack of Access Many rural communities in California lack the reliable, fast mobile broadband that can keep them competitive with global agriculture and safeguard our environment. While California has a program for ground-truthing reported broadband speeds, the program prioritizes households, not farms, and considers farming areas "unpopulated" in terms of need. The result is that rural economies in California are falling behind due to inadequate broadband access. Funding The San Francisco-Bay Area ISOC Chapter is taking this problem very seriously and is working in collaboration with the Internet Society (ISOC) to alleviate it. The Chapter just received funding through ISOC's Beyond the Net funding program to support the "Bridging California's Rural/Urban Digital Divide with Mobile Broadband" project, which will collect data on mobile broadband performance in Yolo County, a 90-minute drive from San Francisco, and compare that performance to what mobile providers claim they're delivering and to what farmers need for precision agriculture.
Data and Policy The information collected will be used to report to state officials and inform public policy making on rural broadband. The Chapter will be working together with the California State University (CSU), Chico Geographical Information Center (GIC) and Valley Vision in order to develop the most robust report it can. Innovation in California has always propelled the rest of the USA; we need look no further than Silicon Valley to confirm that. Now we're looking just outside the confines of Silicon Valley and towards our rural neighbors to help strengthen broadband capacity in Yolo County. Keep up to date with the "Bridging California's Rural/Urban Digital Divide with Mobile Broadband" project on the Chapter's website. About the SF-Bay Area Chapter – The San Francisco Bay Area ISOC Chapter has almost 2,000 members and serves California, including the Bay Area and Silicon Valley, by promoting the core values of the Internet Society. Written by Jenna Spagnolo. More under: Internet of Things, Policy & Regulation, Wireless [...]

Edge Computing, Fog Computing, IoT, and Securing Them All


The oft-used term "the Internet of Things" (IoT) has expanded to encapsulate practically any device (or "thing") with some modicum of compute power that in turn can connect to another device that may or may not be connected to the Internet. The range of products and technologies falling into the IoT bucket is immensely broad — ranging from household refrigerators that can order and restock goods via Amazon, through Smart City traffic-flow sensors that feed navigation systems to avoid jams, to implanted heart monitors that can send emergency updates via the patient's smartphone to a cardiovascular surgeon on vacation in the Maldives.

The information security community — in fact, the InfoSec industry at large — has struggled and mostly failed to secure the IoT. This does not bode well for the next evolutionary advancement of networked compute technology. Today's IoT security problems are caused and compounded by some pretty hefty design limitations — ranging from power consumption, physical size and shock resistance, environmental exposure, and cost-per-unit, to the manufacturer's overall security knowledge and development capability.

The next evolutionary step is already underway — and it exposes a different kind of threat and attack surface than IoT. As each device we use, or each component we incorporate into our products or services, becomes smart, there is a growing need for a "brain of brains". In most technology use cases, it makes no sense to have every smart device independently connecting to the Internet and expecting a cloud-based system to make sense of it all and control it. It's simply not practical for every device to use the cloud the way smartphones do — sending everything to the cloud to be processed, having their data stored in the cloud, and having the cloud return the processed results back to the phone. Consider the coming generation of automobiles.
Every motor, servo, switch, and meter within the vehicle will be independently smart — monitoring the device's performance, configuration, optimal tuning, and fault status. A self-driving car needs to instantaneously process this huge volume of data from several hundred devices. Passing it to the cloud and back again just isn't viable. Instead, the vehicle needs its own processing and storage capabilities — independent of the cloud — yet still interconnected.

The concepts behind this shift in computing power and intelligence are increasingly referred to as "Fog Computing". In essence, computing nodes closest to the collective of smart devices within a product (e.g. a self-driving car) or environment (e.g. a product assembly line) must be able to handle the high volumes of data and the velocity of data generation, and provide services that standardize, correlate, reduce, and control the data elements that will be passed to the cloud. These smart(er) aggregation points are in turn referred to as "Fog Nodes". (Figure source: Cisco)

In evolutionary terms, this means that computing power is shifting to the edges of the network. Centralization of computing resources and processing within the Cloud revolutionized the Information Technology industry. "Edge Computing" is the next advancement — and it's already underway. If the InfoSec industry has been so unsuccessful in securing the IoT, what is the probability it will be more successful with Fog Computing and eventually Edge Computing paradigms? My expectation is that securing Fog and Edge computing environments will actually be simpler, and man[...]
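The standardize/correlate/reduce role of a Fog Node can be sketched in a few lines of Python. This is an illustrative toy under assumed names (no vendor API): raw, high-velocity readings stay at the edge, and only a compact per-sensor summary is forwarded cloud-ward.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Reading:
    sensor_id: str
    value: float

class FogNode:
    """Aggregates raw readings from local smart devices and forwards
    only reduced summaries toward the cloud (names are hypothetical)."""
    def __init__(self):
        self.buffer = {}

    def ingest(self, reading):
        # Raw, high-velocity data is buffered at the edge, never uploaded.
        self.buffer.setdefault(reading.sensor_id, []).append(reading.value)

    def reduce(self):
        # Reduce: one compact record per sensor is all that leaves the node.
        summary = {
            sid: {"min": min(vs), "max": max(vs), "mean": mean(vs), "n": len(vs)}
            for sid, vs in self.buffer.items()
        }
        self.buffer.clear()
        return summary  # only this summary would be passed to the cloud

node = FogNode()
for v in (2.0, 4.0, 6.0):
    node.ingest(Reading("wheel_servo_1", v))
summary = node.reduce()
print(summary["wheel_servo_1"]["mean"])  # 4.0
```

Three raw readings collapse to one summary record — multiplied across several hundred in-vehicle devices, this is exactly the volume reduction that makes cloud round-trips unnecessary.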

Using Domain Name Privacy/Proxy Services Lawfully or to Hide Contact Information and Identity


Privacy/proxy services carry no per se stigma of nefarious purpose, although when first introduced circa 2006 there was some skepticism that they could enable cybersquatting, and panelists expressed different views in weighing the legitimacy of their use. Some Panels held high-volume registrants responsible for registering domain names incorporating trademarks; others rejected the distinction between high and low volume as a determining factor. WWF-World Wide Fund for Nature aka WWF International v. Moniker Online Services LLC and Gregory Ricks, D2006-0975 (WIPO November 1, 2006) expresses the consensus, namely that use of these services "does not of itself indicate bad faith; there are many legitimate reasons for proxy registration services". Panels now see the services as factors among others; without more, their use cannot reach the threshold of abusive registration. But there was sufficient concern in the ICANN community to investigate the issue and come up with terms to tether the services. The current Registrar Accreditation Agreement at Section 3.14 (2013) and the Specification on Privacy and Proxy Registrations located at the bottom of the RAA (2016) address the respective responsibilities of registrars and services. The earlier Accreditation Agreements (2001 and 2009) did not address the issue, and the Privacy and Proxy Accreditation Program referred to in Section 3.14 is (as of December 2016) yet to be implemented. However, pending that implementation, registrars and services are governed by the Specification. Registrants' responsibilities, on the other hand, are typically spelled out in the respective services' agreements.
As with registration agreements, registrants who sign up for the services must warrant and represent that their domain names are not unlawful, and if challenged, the personal and contact information maintained by the services will be produced upon request "to resolve any and all third party claims, whether threatened or made, arising out of Your use of IDP Domain, or take any other action which Backend Service Provider deems necessary" (from ID Protection Service Agreement, sec. 5). The action deemed necessary is to disclose the beneficial holder and its contact information when requested in connection with a UDRP proceeding. When the provider receives the information from the registrar, it informs the complainant, who is given the opportunity to amend the caption to include the real party in interest. Occasionally, the real party in interest (the licensee from a proxy) is unknown. While privacy once disclosed is not an issue, it nevertheless plays a role in determining bad faith. This is illustrated in Teva Pharmaceutical Industries Ltd. v. Teva Pharm, CAC 101326 (December 5, 2016). Respondent (who did not appear) registered using a privacy service but provided false information about its name and address, namely it registered under the name "TEVA PHARM". The Panel concluded it did this "on purpose": "It shows that the Respondent [identified as 'Susan Fowler'] intended to appear as being the Complainant when sending emails to third parties." The Panel held that using a false name spoofing Complainant's name . . . as a first and last name to register the disputed domain name, and using Complainant's address in the U.S. headquarters as its residential address is additional evidence of bad faith registration and use. A respondent falsifying its identity is as close to a per se violation as can be found in a UDRP claim. In Federated Mutual [...]