Subscribe: CircleID: Featured Blogs
http://www.circleid.com/rss/rss_comm/

CircleID: Featured Blogs



Latest blogs postings on CircleID



Updated: 2016-09-30T17:28:00-08:00

 



One-Click Unsubscription

2016-09-30T10:28:00-08:00

Unsubscribing from mailing lists is hard. How many times have you seen a message "please remove me from this list," followed by two or three more pointing out that the instructions are in the footer of every message, followed by three or four more asking people not to send their replies to the whole list (all sent to the whole list, of course), perhaps with a final message from the list manager saying she's dealt with it? For marketing broadcast lists, it's even worse because there's no list to write to. Messages are supposed to have an unsubscribe link (required by law in most places), which usually works except when it doesn't, or it leads to a web page making incomprehensible demands ("click here unless you want not to be removed only from this sender's mail"), so for a lot of users it's easier just to click the junk button until the messages go away.

Mail system managers know that users aren't very good at unsubscribing, so they've invented some ad-hoc ways of dealing with it. Many large mail systems have feedback loops (FBLs), which let mail senders register their ranges of IP addresses or, in Yahoo's case, DKIM signatures, so the sender or perhaps the sender's network gets a report when a recipient marks a message as junk. When the sender is a bulk mailer, they generally try to handle the report as an unsubscribe request. While FBLs are great for finding out when an ISP customer is compromised and starts spamming, they're not so great as a substitute for unsubscriptions. One reason is that even though there's a standard format called ARF (see RFC 5965) for sending FBL reports, each mail system includes slightly different details, so the original mail sender has to parse out enough from the report to identify the list and the subscriber. Many mail systems redact their ARF reports on advice of their lawyers, and the redaction is often so severe that it can be impossible to tell who to unsubscribe from what. AOL's reports are so redacted that the only way I can figure out who to unsubscribe is to take the transaction ID in a Received: header of the reported message and manually match it up with my outgoing mail logs. And Gmail doesn't provide individual FBL reports at all, only aggregate data.

The obvious solution to this problem is the List-Unsubscribe: header, which has been a standard since 1998 (see RFC 2369). It can contain an e-mail address with a subject line, a web URL, or both. When a user clicks the junk button, the system could simulate a click on the URL, or send mail to the e-mail address, and in theory they're off the list. The practice is not so simple. The problem with the click is that a lot of anti-spam systems automatically follow all the URLs in a message to see if they lead to malicious sites, and there's no way for the target of the URL to mechanically tell a request from a spam filter apart from a click by a live user. It's quite reasonable for spam filters to do this: imagine a bad guy sending deliberately uninteresting spam with a fake unsubscribe link leading to his malware site. As a result, the unsubscribe link usually leads to a web page with a confirmation button that the malware checkers won't click but a live person will. The confirmation page may also ask what address to remove. While there have been attempts to parse the web pages and figure out what to fill out and what to click next, they don't work very well since the confirmation buttons vary all over the place.
Unsubscribing by mail works at small scale, but operators of large mail systems like Gmail and Yahoo have told me that they are so big compared to most other mail systems that what seems to them like a moderate amount of automated mail can easily overwhelm recipient systems. To solve this problem, a few people at Gmail, AOL, Optivo (the bulk e-mail part of the German post office) and I have come up with an automatic one-click unsubscribe scheme. The goal is to allow automatic unsubscribes as an option for the junk button — when the user clicks junk, a little window asks whether to unsubscribe too. One-click unsu[...]
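For readers who have not looked at these headers before, here is a minimal sketch of what RFC 2369-style unsubscribe headers look like on an outgoing bulk message, built with Python's standard email library. The addresses and URLs are hypothetical placeholders, and the List-Unsubscribe-Post line reflects the one-click approach described above in the form it was later written up, so treat the whole thing as illustrative rather than as the authors' exact design.

    # Minimal sketch: RFC 2369-style unsubscribe headers on a bulk message.
    # All addresses and URLs are hypothetical placeholders.
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "newsletter@example.com"
    msg["To"] = "subscriber@example.net"
    msg["Subject"] = "Monthly update"
    msg.set_content("Hello ...")

    # RFC 2369: a mailto: target and/or an HTTPS URL the recipient system can use.
    msg["List-Unsubscribe"] = (
        "<mailto:unsub@example.com?subject=unsubscribe>, "
        "<https://example.com/unsub/opaque-token>"
    )
    # The one-click scheme discussed above signals that the receiving system may
    # POST to the HTTPS URL with no human interaction (assumption: shown here in
    # the form later standardized as List-Unsubscribe-Post).
    msg["List-Unsubscribe-Post"] = "List-Unsubscribe=One-Click"

    print(msg)

The point of the second header is exactly the problem described above: it lets a mailbox provider distinguish "a live user asked to unsubscribe" from "a spam filter followed every link in the message", without anyone having to scrape a confirmation page.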



A Look at New gTLDs Numbers… We're Doing Good

2016-09-29T18:52:00-08:00

Like it or not, new gTLDs are here and they're here to stay. While it is still common to read that the ICANN new gTLD program was a failure and that few users are registering new domain names, the numbers show the opposite. I recently read some very harsh criticism directed at new gTLD applicants, but surprisingly that criticism often comes from ".com" investors, and my understanding is that new domain names lower their margins, since the domain name offering is now much larger. This being said, the good point about new gTLDs is that they offer something that ".com" has never been able to offer: precision. Another thing about new gTLDs that is no longer in question is the availability of these domain names. I checked the numbers and listed new domain name registration figures in 17 categories: catering, photography, cities, companies, law, finance, colors, sport, alcohol, real estate, singular and plural TLDs, French applications, religion, generic TLDs, cars, health and French TLDs. I extracted these figures each week and wrote them down "on the rock," so it is easy to notice which registries have increasing domain name registrations, which have fewer, and which are losing registrations.

Businesses related to catering: There are 20 new gTLDs related to catering and very few are in the red, but generally speaking, registration volumes are below 10,000 for most of these TLDs. The .RESTAURANT new gTLD is an interesting example of a very long domain name extension with a constant increase in registrations.

Businesses related to photography: Many new gTLDs related to photography are above 10,000 domain name registrations. There are 26 of them, including a few trademarks. Most registration volumes are steady, except for .WEBCAM and .GRAPHICS. I particularly like the example of .PHOTO and .PHOTOS, which were not launched at the same time. Although .PHOTOS was launched first, registration volumes show that the singular version generates more interest.

Cities: Note that this is not the list of geographic top-level domains but city names only. I also included the two TLDs .CITY and .TOWN, which relate to cities. You will also notice the .TOURS extension, which is not related to the French city. When comparing the list of city new gTLDs with other categories of strings, we can easily see that all registration volumes are steady: new domain names such as .NYC, .LONDON and .TOKYO now have a lot of registrants. Eight more cities are about to launch their domain name extensions: .MADRID, .DUBAI, .STOCKHOLM, .HELSINKI, .BUDAPEST, .DOHA, .ABUDHABI and .BOSTON.

Domain names for companies: There is a list of 28 domain name extensions that make sense for companies to secure their domain name(s) with. This is actually the only listing to which I added three legacy TLDs: .COM, .PRO and .TEL. Some of these extensions are .COMPANY, .EMAIL, .GLOBAL, etc. They have high registration figures: seven of them have more than 100,000 registrations, up to almost 500,000 for the .SITE new gTLD. A few IDNs were added recently.

Businesses related to the law: A few extensions deal with the law, and all but two of the 10 extensions listed are below 10,000 registrations. Many law firms still do not seem to have discovered these new domain names, and I think a .LAW domain name is of high importance ...for a law firm. I dug up a report from the American Bar Association on lawyer demographics: there were 1,300,705 licensed lawyers in the USA in 2015, yet only 13,521 ".lawyer" domain names are registered.

Businesses related to finance: This list of domain name extensions is one of the longest. With .BRAND new gTLDs, there are 81 extensions, and almost half of them are trademarks. Most major banks and investment corporations are listed, and three have more than 100 domain names registered. This is a lot for a .BRAND new gTLD (a trademark that does not sell domain names). The .BRANDs concerned are .CITIC, .BNPPARIBAS and .BRADESCO. The other half of the list are extensions dedicated[...]



Maintaining Security and Stability in the Internet Ecosystem

2016-09-29T09:29:00-08:00

DDoS attacks, phishing scams and malware. We battle these dark forces every day — and every day they get more sophisticated. But what worries me isn't just keeping up with them, it is keeping up with the sheer volume of devices and data that these forces can enlist in an attack. That's why we as an industry need to come together and share best practices — in the ICANN community, at the IETF and elsewhere — so that collectively we are ready for the future.

The challenge before us is growing every day. The Internet ecosystem comprises roughly 3.5 billion Internet users worldwide and millions of businesses, civil society organizations and governments operating over 900 million websites using 334 million domains. With this growth, we are seeing major shifts in how the Internet is used: it isn't just computers and smartphones anymore. Maintaining the security and stability of the Internet means protecting more "things" — from smart watches to smart TVs to smart refrigerators — from more bots, scrapers and spammers. The domain industry must place a greater focus on creating trust through security and stability. At Afilias, I focus the company not only on managing and developing our systems to scale, meet technical demands and ensure 100% availability and uptime, but also on maintaining interoperability and developing open standards for the industry.

How do we do this? Trustworthiness. In computing terms, that means being able to rely on systems to be available and secure. While it sounds simple, it requires a combination of security, privacy and reliability in all interactions with the registry system and DNS networks. It requires usage and storage practices to be well documented and audited to ensure that the organization does what it says. Trustworthiness is, in turn, dependent on several core principles:

Global reach: The ability of any system to reach any other supported system, wherever the sender and the receiver are on the Internet. To ensure this, we run a global addressing and naming service infrastructure.

Interoperability: The primary factor in ensuring interoperability is an ironclad commitment to open standards. Afilias is a signatory to OpenStand, the modern paradigm for standards, shaped by adherence to five fundamental principles: due process, broad consensus, transparency, balance, and openness.

Accessibility: Providing access to Internet resources, regardless of device or software. Afilias best embodies this by allowing networks on IPv4 and IPv6 to access its systems equally, without bottlenecks. Other examples abound.

These principles are essential if the ecosystem is to meet the security and stability challenges of today, scale and grow in the future, and work harder to increase access for the billions of people yet to be connected. We believe that — using the multi-stakeholder approach to Internet governance, with broad participation from technical, business and civil society leaders — we can collaboratively develop a new model for the industry as a whole. We will continue to do our part to harness secure, reliable, scalable and globally available technology and to encourage others to do the same. Our belief and vision is that sharing such best practices will help the global Internet community sustain a strong and vibrant Internet. The DDoS attacks, phishing scams and malware are not going to stop, but we will do our part to help lead our community to be better prepared for the battle.
Written by Ram Mohan, Executive Vice President & CTO, Afilias. Follow CircleID on Twitter. More under: Cyberattack, Cybercrime, DDoS, Malware, Security [...]



A Record Year for Domain Name Disputes?

2016-09-29T08:16:00-08:00

With just a little more than three months left in 2016, the number of domain name disputes filed at the World Intellectual Property Organization (WIPO) appears to be headed for a record year. According to public data published on the WIPO website, the number of domain name disputes filed this year (as of this writing, September 27, 2016) is 2,228 — which would indicate that the total might reach 3,011 cases by December 31. If that trend holds, the total would eclipse the previous most-active year of 2012, when 2,884 cases were filed. [Chart: Number of WIPO Domain Name Dispute Cases] Of course, it is impossible to predict how many domain name disputes will be filed between today and the end of the year. And the WIPO statistics do not represent all of the complaints filed under the Uniform Domain Name Dispute Resolution Policy (UDRP), since there are four other service providers, but WIPO traditionally has been the most popular provider and, in any event, is the only provider that publishes real-time data on case filings.

Interestingly, even if the number of disputes remains constant between now and the end of 2016, the total number of disputed domain names (since a single case can include multiple domain names) would only reach 5,373 — the third-highest level ever recorded at WIPO. (The most active year was 2013, when 6,191 domain names were disputed; the second most active was 2014, when 5,603 domain names were disputed.)

What's behind the potential record-setting year in domain name disputes? I see at least two trends. First, many domain name registrants are engaging in a new type of cybersquatting activity that trademark owners find particularly troublesome, involving fraudulent actions that target specific victims or groups of victims. Among these cases are those in which a domain name is used as part of an employment scam aimed at job seekers. For example, in one such dispute the panel wrote that the domain name was used "as part of an employment and phishing scam" where the registrant was "passing off itself as the Complainant, apparently so as to obtain certain information from Internet users as a result of the intentionally created confusion between the disputed domain name and the [Complainant's] [t]rademark." (Disclosure: I represented the complainant in that UDRP case.) A second trend that certainly accounts for an increase in the number of domain name disputes is the ongoing launch of new generic top-level domains (gTLDs). Recent UDRP decisions have involved domain names across many of the new extensions. Obviously, the new gTLDs have created new opportunities for cybersquatters. Still, despite the rise of the new gTLDs, .com domain names remain the most frequently disputed, accounting for about 58% of all disputed domain names in WIPO proceedings this year.

After 2016 comes to an end, I'll take a closer look at the year in domain name disputes, including trends that may have impacted the potential record number of filings. Written by Doug Isenberg, Attorney & Founder of The GigaLaw Firm. Follow CircleID on Twitter. More under: Domain Names, Law [...]
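The year-end figure above is a simple linear extrapolation from the filings to date. A quick sketch of the arithmetic (counts and dates taken from the article; the exact rounding WIPO or the author used may differ slightly):

    # Rough check of the year-end projection via linear extrapolation.
    from datetime import date

    cases_so_far = 2228                                               # filings as of Sept. 27, 2016
    days_elapsed = (date(2016, 9, 27) - date(2016, 1, 1)).days + 1    # 271 days into the year
    days_in_year = 366                                                # 2016 is a leap year

    projected = cases_so_far * days_in_year / days_elapsed
    print(round(projected))                                           # ~3009, close to the article's 3,011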



Exploiting the Firewall Beachhead: A History of Backdoors Into Critical Infrastructure

2016-09-28T15:34:00-08:00

Sitting at the edge of the network and rarely configured or monitored for active compromise, the firewall today is a vulnerable target for persistent and targeted attacks. There is no network security technology more ubiquitous than the firewall. With nearly three decades of deployment history and a growing myriad of corporate and industrial compliance policies mandating its use, no matter how irrelevant you may think a firewall is in preventing today's spectrum of cyber threats, any breached corporation found without the technology can expect to be hung, drawn, and quartered by shareholders and industry experts alike.

With the majority of north-south network traffic crossing ports associated with HTTP and SSL, corporate firewalls are typically relegated to noise suppression — filtering or dropping network services and protocols that are not useful or required for business operations. From a hacker's perspective, with most targeted systems providing HTTP or HTTPS services, firewalls have rarely been a hindrance to breaching a network and siphoning data. What many people fail to realize is that the firewall is itself a target of particular interest — especially to sophisticated adversaries. Sitting at the very edge of the network and rarely configured or monitored for active compromise, the firewall represents a safe and valuable beachhead for persistent and targeted attacks. The prospect of gaining a persistent backdoor to a device through which all network traffic passes is of inestimable value to an adversary — especially to foreign intelligence agencies. Just as World War I combatants sent intelligence teams into the trenches to find enemy telegraph lines and splice in eavesdropping equipment, and just as tunnels were constructed under Berlin in the early 1950s to enable U.K. and U.S. spy agencies to physically tap East German phone lines, today's communications traverse the Internet, making the firewall a critical junction for interception and eavesdropping.

The physical firewall has long been a target for compromise, particularly for embedded backdoors. Two decades ago, the U.S. Army sent a memo warning of backdoors uncovered by the NSA in the Checkpoint firewall product, with advice to remove it from all DoD networks. In 2012, a backdoor was placed in Fortinet firewalls and products running their FortiOS operating system. That same year, the Chinese network appliance vendor Huawei was banned from all U.S. critical infrastructure by the federal government after numerous backdoors were uncovered. And most recently, Juniper alerted customers to the presence of unauthorized code and backdoors in some of its firewall products — dating back to 2012. State-sponsored adversaries, when unable to backdoor a vendor's firewall through the front door, have also been associated with paying for weaknesses and flaws to be introduced — making products easier to exploit at a later date. For example, it has been widely reported that the U.S. government paid OpenBSD developers to backdoor the IPsec networking stack in 2001, and that in 2004 the NSA reportedly paid RSA $10 million to ensure that the flawed Dual_EC_DRBG pseudo-random number-generating algorithm became the default for its BSAFE cryptographic toolkit.
If those vectors were not enough, as has been shown through the Snowden revelations in 2013 and the Shadow Brokers data drop of 2016, government agencies have a continuous history of exploiting vulnerabilities and developing backdoor toolkits that specifically target firewall products from the major international infrastructure vendors. For example, the 2008 NSA Tailored Access Operations (TAO) catalogue provides details of the available tools for taking control of Cisco PIX and ASA firewalls, Juniper NetScreen or SSG 500 series firewalls, and Huawei Eudemon firewalls. Last but not least, we should not forget the inclusion of backdoors designed [...]



Increasing the Strength of the Zone Signing Key for the Root Zone, Part 2

2016-09-28T10:32:00-08:00

A few months ago I published a blog post about Verisign's plans to increase the strength of the Zone Signing Key (ZSK) for the root zone. I'm pleased to provide this update: we started the process by pre-publishing a 2048-bit ZSK in the root zone for the first time on Sept. 20. Following that, we will publish root zones signed with the larger key on Oct. 1, 2016.

To help understand how we arrived at this point, let's take a look back. Beginning in 2009, Verisign, the Internet Corporation for Assigned Names and Numbers (ICANN), the U.S. Department of Commerce, and the U.S. National Institute of Standards and Technology (NIST) came together and designed the processes and plans for adding Domain Name System Security Extensions (DNSSEC) to the root zone. One of the important design choices discussed at the time was the choice of a cryptographic algorithm and key sizes. Initially, the design team planned on using RSA-SHA1 (algorithm 5). However, somewhat late in the process, RSA-SHA256 (algorithm 8) was selected, because that algorithm had recently been standardized and because it would encourage DNSSEC adopters to run the most recent name server software versions.

One of the big unknowns at the time revolved around the size of Domain Name System (DNS) responses. Until DNSSEC came along, the majority of DNS responses were relatively small and could easily fit in the 512-byte size limit imposed by the early standards documents (in order to accommodate some legacy internet infrastructure packet size constraints). With DNSSEC, however, some responses would exceed this limit. DNS operators at the time were certainly aware that some recursive name servers had difficulty receiving large responses, either because of middleboxes (e.g., firewalls) and gateways that (incorrectly) enforced the 512-byte limit, blocked IP fragments, or blocked DNS over Transmission Control Protocol (TCP). This uncertainty around legacy system support for large packets is one of the reasons the design team chose a 1024-bit ZSK for the root zone, and also why NIST's Special Publication 800-57 Part 3 recommended using 1024-bit ZSKs through October 2015.

A number of things have changed since that initial design. 1024-bit RSA keys have fallen out of favor: the CA/Browser Forum, for example, deprecated the use of 1024-bit keys for SSL as of 2013. This event caused many in the community to begin the transition away from 1024-bit keys. Additionally, operational experience over the years has shown that the DNS ecosystem, and perhaps more importantly, the underlying IP network infrastructure, can handle the larger responses that result from longer keys. Furthermore, there is increased awareness that when DNSSEC signature validation is enabled, a recursive name server might need to rely on either fragmentation of large packets or the transport of DNS messages over TCP.

Today, more than 1,300 top-level domains (TLDs) are signed with DNSSEC. Of these, 97 are already using 2048-bit RSA keys for zone signing. Furthermore, more than 200 TLDs have recently published zones whose DNSKEY response size exceeds 1,500 bytes. For these reasons, now is an appropriate time to strengthen the DNS by increasing the root zone's ZSK to 2048 bits. Our colleagues at ICANN agree. According to David Conrad, ICANN's CTO, "ICANN applauds Verisign's proactive steps in increasing the length of the ZSK, thereby removing any realistic concern of root zone data vulnerability.
We see this, along with ICANN's update of the Key Signing Key scheduled for next year, as critical steps in ensuring the continued trust by the internet community in the root of the DNS." To raise awareness of this improvement to the security of the internet's DNS among the network and DNS operations communities, we presented our plans at the DNS-OARC, NANOG, IETF, RIPE and ICANN meetings, and will continue to post updates on the NANOG, dns-operations, and dns[...]
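For the curious, here is a minimal sketch of how one could observe the root zone's DNSKEY RRset and the RSA key sizes in it. It assumes the third-party dnspython package (dns.resolver.resolve in recent releases; older versions use dns.resolver.query) and reads the modulus length from the RFC 3110 wire format; it is an illustration, not part of Verisign's tooling.

    # Minimal sketch: report the RSA key size of each DNSKEY in the root zone.
    # Requires the third-party dnspython package (pip install dnspython).
    import dns.resolver

    def rsa_key_bits(key_bytes):
        # RFC 3110 wire format: a 1-byte exponent length (or 0 followed by a
        # 2-byte length), then the exponent, then the modulus.
        exp_len = key_bytes[0]
        offset = 1
        if exp_len == 0:
            exp_len = int.from_bytes(key_bytes[1:3], "big")
            offset = 3
        modulus = key_bytes[offset + exp_len:]
        return len(modulus) * 8

    answer = dns.resolver.resolve(".", "DNSKEY")
    for rdata in answer:
        role = "KSK" if rdata.flags & 0x0001 else "ZSK"   # SEP bit marks the KSK
        if rdata.algorithm == 8:                          # RSA/SHA-256
            print(role, rsa_key_bits(rdata.key), "bits")

During the pre-publish window described above, a query like this should show both the old 1024-bit ZSK and the new 2048-bit ZSK in the same RRset.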



Making the Most of the Cloud: Four Top Tips

2016-09-28T09:49:00-08:00

Cloud computing is on the rise. International Data Corp. predicts a $195 billion future for public cloud services in just four years. That total is for worldwide spending in 2020 — more than twice the projection for 2016 spending ($96.5 billion). As a result, companies are flocking to both large-scale and niche providers to empower cloud adoption and increase IT efficacy. The problem? Without proper management and oversight, cloud solutions can end up underperforming, hampering IT growth or limiting ROI. Here are four top tips to help your company make the most of the cloud.

Talking Tech: The disconnect between corporate-approved and "shadow" IT services puts critical company data in peril, according to a recent report on cloud data security commissioned by Gemalto, a digital security firm. This gap exists thanks largely to the cloud: tech-savvy employees used to the kind of freedom and customization offered by their mobile devices often circumvent IT policies to leverage the tools they believe are "best" for the job. Solving this problem requires a new tech conversation: IT professionals must be willing to engage with employees to ensure all applications running on corporate networks are both communicating freely and actively securing infrastructure from outside threats. By crafting an app ecosystem that focuses on interoperability and input from end users, it's possible to maximize cloud benefits.

Ubiquitous Updates: How often does your company update essential cloud services and applications? If the answer is "occasionally," you may be putting your cloud ROI at risk. Here's why: as cloud solutions become more sophisticated, so do malicious actors, who leverage new techniques to compromise existing vulnerabilities or circumvent network defenses. By avoiding updates on the off chance that they may interfere with your existing network setup, you substantially increase your risk of cloud compromise. Best bet? Make sure all cloud-based applications, platforms and infrastructure are regularly updated, and keep your ear to the ground for any word of emergent threats.

Address Automation: For many companies, a reluctance to move to the cloud because of security concerns manifests itself as overuse of manual processes. For example, if you're leveraging a cloud-based analytics solution but still relying on human data entry and verification, you're missing out on significant cloud benefits. This is a widespread issue: just 16 percent of companies surveyed said they've automated the majority of their total cloud setup, citing security, cost and lack of expertise as the top holdbacks, according to a recent report by Logicworks and Wakefield Research. Bottom line? One key feature of the cloud is the ability to handle large-scale, complex workloads through automation. Avoiding this in favor of manual "checking" means you're missing out on significant cloud returns.

Solve SLA Issues: Last but not least, make the most of your cloud deployment by hammering out the ideal service-level agreement (SLA). Right now, there are no hard and fast "standards" when it comes to the language used in SLAs or the responsibilities of cloud providers. As a result, many SLAs are poorly worded and put vendors in a position to avoid much of the blame if services don't live up to expectations.
Avoid this problem by examining any SLA with a critical eye — ask for clarification where necessary and specifics wherever possible, and make sure your provider's responsibilities for uptime, data portability and security are clearly spelled out. Want to make the most of your cloud services? Open the lines of communication, always opt for updates, embrace automation, and don't sign subpar SLAs. Written by Jeff Becker, Director of Marketing at ATI. Follow CircleID on Twitter. More under: Cloud Computing [...]



The Great Telco Quality Transformation

2016-09-28T07:21:00-08:00

The telecoms industry has two fundamental issues whose resolution is a multi-decade business and technology transformation effort. This re-engineering programme turns the current "quantities with quality" model into a "quantities of quality" one. Those who prosper will have to overcome a powerfully entrenched incumbent "bandwidth" paradigm, whereby incentives are initially strongly against investing in the inevitable and irresistible future.

Recently I had the pleasure of meeting the CEO of a fast-growing vendor of software-defined networking (SDN) technology. The usual ambition for SDN is merely internal automation and cost optimisation of network operation. In contrast, their offering enables telcos to develop new "bandwidth on demand" services. The potential for differentiated products that are more responsive to demand makes the investment case for SDN considerably more compelling. We were discussing the "on-demand" nature of the technology. By definition this is a more customer-centric outlook than a supply-centric "pipe" mentality, which comes in a few fixed and inflexible capacities. What really struck me was how the CEO found it hard to engage with a difficult-to-hear message: "bandwidth" falls short as a way of describing the service being offered, from both a supply and a demand point of view.

At present, telecoms services are typically characterised as a bearer type (e.g. Ethernet, IP, MPLS, LTE) and a capacity (expressed as a typical or peak throughput). Whatever capacity you buy can be delivered over many possible routes, with the scheduling of the resources in the network being opaque to end users. All kinds of boxes in the network can hold up the traffic for inspection or processing. Whatever data turns up will have a certain level of "impairment" in the form of delay and (depending on the technology) loss. This means you have variable levels of quality on offer: a "quantity with quality" model. You are contracted to a given quantity, and it turns up with some kind of quality, which may be good or poor. Generally only the larger enterprise or telco-to-telco customers measure and manage quality to any level of sophistication. Where there is poor quality, there may be an SLA breach, but the product itself is not defined in terms of the quality on offer.

This "quantity with quality" model has two fundamental issues. The first is that "bandwidth" does not reflect the true nature of user demand. An application will perform adequately if it receives enough timely information from the other end. This is an issue of quality first: you merely have to deliver enough volume at the required timeliness. As a result, a product characterised in terms of quantity does not sufficiently define whether it is fit for purpose. In the "quantity with quality" model the application performance risk is left with the customer. The customer has little recourse if the quality varies and is no longer adequate for their needs. Since SLAs are often very weak in terms of ensuring the performance of any specific application, you can't complain if you don't get the quality over-delivery that you (as a matter of custom) feel you are entitled to. The second issue is that "bandwidth" is also a weak characterisation of the supply. We are moving to a world with ever-increasing levels of statistical sharing (packet data and cloud computing) and dynamic resource control (e.g. NFV, SD-WAN).
This introduces more variability into the supply, and an average like "bandwidth" misses the service quality and user experience effects of these high-speed changes. The impact on the network provider is that they often over-deliver in terms of network quality (and hence carry excessive cost) in order to achieve adequate application performance. Conversely, they also sometimes under-deliver quality, creating customer dissatisfaction and churn, and may not know i[...]



SEO and New dot Brand gTLDs

2016-09-27T16:02:00-08:00

Search engines drive traffic to a site that is well ranked for the right keyword. According to a recent study carried out by Custora in the USA, search engines — paid and organic — represent close to 50% of e-commerce orders, compared to 20% for direct entry. A dot brand domain has the potential to boost direct entry, as it can be more memorable than traditional domains. Can dot brand domains also be part of a consistent search engine strategy? In order to have traffic coming in from search engines, it is necessary to achieve a good ranking: the link in the first position of the search engine results page gets six times more clicks than the link in the fifth position. It is also important that the site is optimised for the right keyword, with enough search volume. For instance, "domain name" is searched 33,100 times per month on google.com, while "gTLD" is searched 1,600 times. Being first on a search for "domain name" would therefore generate approximately 20 times more traffic than being first on "gTLD".

How important are domain names in search? The Google algorithm is kept secret, and its artificial intelligence enables it to learn from past behaviours and trends. This artificial intelligence enables Google to show a different results page to every user, based on their profile and search history. It is therefore very difficult to establish an exact list of criteria that play a role in search rankings. Many specialists try to decrypt and anticipate the algorithm. Moz.com runs a community of more than 2 million specialists, and every year it publishes a list of 90 factors influencing search engine rankings. Every factor is weighted from 1 (meaning the factor has no direct influence) to 10 (meaning the factor has a strong influence on ranking). Some of these factors do not depend on the keyword: for instance, the number and quality of inbound links show that a domain is more authoritative and that it should therefore be ranked better. Some factors depend on the keyword — such as the number of keyword matches in body text. The complete study is available on Moz's website.

The domain-name-related factors fall into three main categories. Factors related to the domain characteristics: these include the age of the domain, the duration until expiration, the length of the domain, etc. The corresponding influence scores vary from 2.45 to 5.37, which means they have a relatively low influence. Factors related to the execution of the strategy: these are much more important, and they depend on how well the domain name is marketed and operated by the brand. The raw popularity of the domain, i.e. the number of links pointing to the domain, has a weight of 7.15, while the quantity of citations for the domain name across the web is weighted 6.26. Factors related to the presence of keywords: there are five factors directly linked to the presence of the keyword in the domain name, depending on whether it appears in the root domain name or in the extension, and whether it is an exact match. These factors have a relatively low impact, with scores varying between 2.55 and 5.83, the highest being when the search corresponds to the exact-match root domain name.

Dot brand performance review: In order to check the actual performance of dot brand domains, we performed searches on the brand name and on the second-level domain used by the brand. The analysis was performed on the 60 brands that are significantly ranked in search engines, corresponding to around 250 websites.
The full dot brand study is available to our members, but here are some of our findings. Presence of dot brand domains when searching for the brand name: 13% of the searches resulted in a "dot brand" domain name in the first position. Home.cern, group.pictet or engineeringandconstructio[...]
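To make the traffic comparison above concrete, here is a small sketch of the arithmetic. The monthly search volumes are the ones quoted in this post; the 30% click-through rate for the first organic position is a purely illustrative assumption, not a figure from the Moz study.

    # Illustrative arithmetic for the search-volume comparison above.
    searches = {"domain name": 33_100, "gTLD": 1_600}   # monthly volume (from the article)
    ctr_position_1 = 0.30                               # assumed CTR for the top organic result

    for keyword, volume in searches.items():
        print(f"{keyword!r}: ~{volume * ctr_position_1:,.0f} visits/month if ranked #1")

    print("ratio:", round(searches["domain name"] / searches["gTLD"], 1))  # ~20.7x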



Filing Cybersquatting Complaints With No Actionable Claims

2016-09-26T08:01:00-08:00

I noted in last week's essay three kinds of cybersquatting complaints typically filed under ICANN's Uniform Domain Name Dispute Resolution Policy (UDRP). The third (utterly meritless) kind are also filed in federal court under the Anticybersquatting Consumer Protection Act (ACPA). While sanctions for reverse domain name hijacking are available in both regimes, the UDRP's sanction is toothless while the ACPA's is a potent remedy. As a result, claimants who would not dare to file complaints in federal court (or, if they do dare, lack appreciation of the risk) have no hesitation in maintaining UDRP proceedings. There is a steady stream of UDRP complaints alleging cybersquatting against registrants whose registrations predate complainants' trademark rights. While these complainants have standing, they have no actionable claims. If the only risk of filing these complaints is a mild slap on the wrist, then complainants have no disincentive to trying their luck in the hope that providers will appoint panelists who either subscribe to the retrospective bad faith theory of liability or find bad faith on renewal of domain registrations. While the retrospective bad faith theory of liability appears to have retreated from panelists' repertory of awards, it survives in a less toxic form when panelists reject requests for reverse domain name hijacking even where trademarks postdate domain registrations and the complaint could not possibly state any actionable claim.

There is a split of views about sanctioning complainants under the UDRP who overreach their rights. This is very different from the view taken by federal judges under the ACPA. The better reasoning for RDNH under the UDRP, where complainants knew or should have known their complaints could not succeed, is to sanction the complainant for abusive use of the proceedings; not appropriate for weak cases, but certainly warranted for meritless ones. This precisely describes trademark owners whose rights postdate domain name registrations. Two decisions from veteran panelists stand out: Nucell, LLC v. Guillaume Pousaz, CAC 101013 (ADR.eu July 7, 2015) and Cyberbit Ltd. v. Mr. Kieran Ambrose, Cyberbit A/S, D2016-0126 (WIPO February 26, 2016). Majority panels that have declined to award RDNH for abuse of process have elicited strong opinions on RDNH from concurring/dissenting members.

When we move to statutory claims of cybersquatting, we are in a totally different environment. Federal courts have no hesitation in awarding damages for reverse domain name hijacking under the ACPA. Commencing a meritless action is always risky, but the risk is intensified under the ACPA because it expressly grants damages of up to $100,000 per domain name under 15 U.S.C. §1117(d) and attorney's fees where the court finds the case exceptional under 15 U.S.C. §1117(a). Whether plaintiffs can extricate themselves after commencing the action depends in part on the defendant's aggressiveness in objecting to voluntary dismissal. In a 2015 Southern District of New York case, Office Space Solutions, Inc. v. Jason Kneen, 15-cv-04941, dismissed with prejudice, the defendant (surprisingly) did not seek damages or attorney's fees. That is unusual; other complainants have not been so lucky. The degree of risk is illustrated in Heidi Powell v. Kent Powell and Heidi Powell, 16-cv-02386 (D. AZ) (a direct filing in federal court, not a de novo action following a UDRP award). Plaintiff alleged she is a well-known guru in the health area.
When she "attempted to register the domain name www.heidipowell.com" she discovered it was already taken by defendant, a grandmother whose name happens to be Heidi Powell. The plaintiff "Heidi Powell" was not baptised with that name. It is evident from reading the complaint that plaintiff had no understanding of the risk in filing the complaint. She all[...]



Observation from Seminar on IANA Function Stewardship Transition Held by CAICT

2016-09-25T11:04:00-08:00

This article was co-authored by GUO Feng, Senior Research Fellow of the China Academy of Information and Communication Technology (CAICT), and JI Yenan, Research Fellow of CAICT. On August 16, 2016, the U.S. Government announced, in a formal letter to Mr. Göran Marby, President and CEO of the Internet Corporation for Assigned Names and Numbers (ICANN), its intention to transition the stewardship of the Internet Assigned Numbers Authority (IANA) functions to the multistakeholder community upon the expiration of the IANA functions contract on October 1, 2016, barring any significant impediment. This announcement attracted close attention from the Internet community around the world, including in China. On August 30, 2016, the China Academy of Information and Communication Technology (CAICT), which serves as a Chinese ICT think tank and has long been concerned with and involved in global Internet governance, held a seminar on the IANA functions stewardship transition (referred to as the "IANA transition"), inviting representatives of the Chinese Internet community to discuss topics such as the progress of the IANA transition, its implications for the Chinese Internet community, post-transition developments, and issues of concern for future Internet governance. Sixteen representatives from government agencies, registries, registrars, industry organizations, research institutes and universities participated in the seminar.

The participants generally welcomed the progress of the IANA transition. They took the view that the transition process had entered a relatively steady stage and estimated that a smooth transition was highly probable by the time the IANA functions contract expired on September 30. However, some participants expressed concerns about possible problems and risks in the operation of the IANA functions after the transition. They noted that, although the rules and mechanisms for the post-transition operation of the IANA functions have been established, the newly established Post-Transition IANA (PTI), the new community supervision mechanisms — such as the Customer Standing Committee (CSC) and the Root Zone Evolution Review Committee (RZERC) — and the other mechanisms and organizations related to the IANA functions may go through a long adaptation period, during which it remains to be seen whether they can operate with one another stably and effectively.

The participants also looked ahead to developments after the transition. Representatives from registries and registrars clearly expressed that ICANN should focus on its core business after the transition in order to provide better services. The business community in China tended to have high expectations of Mr. Göran Marby, the new President and CEO of ICANN, given his practical working style and attitude since taking up his post, and hoped that ICANN could initiate the next round of the new generic Top-Level Domain (gTLD) program as soon as possible. Some participants pointed out that certain topics discussed by the CCWG-Accountability Work Stream 2 (WS2), such as human rights, were to a certain extent outside the working scope of ICANN as a "technical coordinator" of the DNS, and that their influence on the future operation of ICANN needs to be further observed. Some participants also discussed and expressed concerns about future Internet governance issues such as Internet fragmentation, governance of domain name registration data, and the development of the new gTLD market.
The participants unanimously expected that, with enhanced capacity building and industry development, the Chinese Internet community would participate more actively in ICANN processes and play a more important role in ICANN and in the global Internet governance arena after the transition. They viewed the [...]



Moving to the Cloud? 10 Key Questions for CIOs

2016-09-23T12:19:00-08:00

As business computing demand explodes and web apps rule the market, moving to the cloud seems unavoidable. But even as cloud services mature, many organizations make costly mistakes — and not all of them are technical in nature. According to Cloud Tech, CIOs are on the front lines: in 72 percent of companies surveyed, chief information officers lead the cloud computing charge. However, adoption without the right information is doomed to fail — here are 10 key questions CIOs should ask before moving operations to the cloud.

What's the Business Benefit? [Infographic: 10 Questions CIOs Should Ask Before Moving Operations to the Cloud, by SingleHop] First, it's critical to identify business benefits. Here the key to success lies in specifics rather than generalities — how will your company leverage cloud resources to benefit existing customers, open new markets or get ahead of competitors?

How Will You Use Cloud Tech? Cloud solutions are quickly becoming ubiquitous: almost 50 percent of companies store more than half their data in the cloud, 94 percent run at least one cloud app, and 55 percent have some portion of their ERP in the cloud. With so many processes now running off site, it's critical to identify specific use cases or risk cloud sprawl driving up total costs.

Which Solution Is Your Best Fit? Public, private or hybrid? All three are viable options. Companies are split on the use of public versus private resources, but 75 percent plan to implement a hybrid strategy. Before investing, determine: are you looking for easy resource scaling and lower costs, on-site servers with the benefit of greater control, or a mix of both?

Storage: How Much Is Enough? Is it better to buy more than you need or purchase "just enough" storage to meet your data needs? Current market trends suggest the latter: while many companies experience 40 to 60 percent growth in storage requirements year over year, datacenters typically see price drops of over 20 percent in the same period. The result? Provisioning just enough data storage, and scaling as you grow, may be your best bet.

Is Your Provider Industry Compliant? Cloud technology doesn't exist in a vacuum, and leveraging new solutions means partnering with a reliable cloud provider — but not all vendors are created equal. As a result, it's worth asking if your vendor is up to the task of meeting industry-standard compliance regulations — can it handle health data, credit card information or insurance information?

Where Are Your Hidden Costs? CIOs often pitch cloud computing as a cost-effective alternative to in-house IT. With any cloud service, however — and public clouds especially — you may be on the hook for line items such as data uploads and downloads, disaster recovery or customer support. Make sure your SLA spells out all costs in detail before you sign.

Can Staff Spare Time and Effort? While going cloud takes much of the burden off in-house IT pros, you still need a way to administer and provision these services. With 32 percent of companies citing lack of resources as their top cloud migration challenge, it's worth asking if your staff can spare the time and effort to handle new tech deployments — for many companies, managed cloud services can help support new cloud initiatives by letting IT pros focus on local tech issues.

What's Your Plan? The cloud can't guarantee success in isolation; 50 percent of companies moving to the cloud experience business-impacting performance issues because of poor network design.
For CIOs this means crafting a plan for success that accounts for existing infrastructure, cloud scale-up and eventual phase-out of legacy solutions. Do You Need a Pilot Program? With so many companies shifting to the c[...]



Benefits and Challenges of Multiple Domain Names in a Single UDRP Complaint

2016-09-23T07:07:00-08:00

How many domain names can be included in a single complaint under the Uniform Domain Name Dispute Resolution Policy (UDRP)? Neither the UDRP policy nor its corresponding rules directly address this issue, although the rules state that a "complaint may relate to more than one domain name, provided that the domain names are registered by the same domain-name holder." As I have written before (see "The Efficiency of Large UDRP Complaints"), there are obvious incentives for a trademark owner to include multiple domain names in a complaint. Chief among them: the filing fee per domain name can drop significantly when more than one is included in a complaint.

Cost-Effectiveness. [Chart: Average Number of Domain Names per Case] For example, under WIPO's fee schedule, the base filing fee of $1,500 doesn't change even if the complaint includes up to five domain names. Said another way, for a UDRP complaint with one domain name, the filing fee is $1,500 per domain name; but if five domain names are included, the effective filing fee drops to only $300 per domain name. Although the total filing fee increases if a UDRP complaint includes more than five domain names, the effective fee per domain name can be reduced tremendously with large complaints. (WIPO's published fee schedule only addresses complaints with up to 10 domain names, with larger filings incurring a fee "[t]o be decided in consultation with the WIPO Center.")

Filing Trends. As the chart above makes clear, the average number of domain names per complaint (at WIPO) has varied through the years — from one (in 1999, when the first and only UDRP complaint of that year was filed) to 2.39 (in 2013). The largest UDRP complaint ever filed included more than 1,500 domain names. (Disclosure: I represented the complainant in that massive case.) So far in 2016, the average number of domain names per complaint is slightly higher than in 2015 (1.79 v. 1.58), thanks in part to some particularly large filings this year, such as a complaint filed by Jaguar Land Rover Limited for 101 domain names; a complaint filed by Bank of America Corporation for 59 domain names; a complaint filed (and terminated before decision) by Calvin Klein Trademark Trust & Calvin Klein, Inc. for 72 domain names; and a complaint filed by Facebook, Inc. and Instagram, LLC for 46 domain names.

'Same Domain-Name Holder' Confusion. While the UDRP rules' reference to "the same domain-name holder" may at first glance make it seem clear when it is appropriate to include multiple domain names in a single complaint, in practice the issue can become quite complicated. As Gerald M. Levine has succinctly put it in his book on domain name disputes, "The phrase 'same domain name holder' has been construed liberally to include registrants who are not the same person but circumstances suggest the domain names are controlled by a single entity." Just what these "circumstances" are can sometimes be difficult to decipher, especially when one person or entity provides multiple registrant names for multiple domain names. For example, one UDRP panel wrote (in a case brought by General Electric Company for 17 domain names): "[T]he mere fact of registrants being differently named has, in various previous cases, not prevented a finding that there is one proper Respondent, in circumstances which indicate that the registrants may be regarded as the same entity in effect."
The issue of whether multiple domain names are registered by a single "domain-name holder" is especially complicated when privacy or proxy services mask the registrant's true identity. * * * In any event, a trademark owner contemplating whether to file a UDRP complaint may find the process more compelling if [...]
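As a footnote to the cost discussion earlier in this post, here is a small sketch of the per-domain economics. The $1,500 base fee for up to five domain names is stated above; the $2,000 figure for six to ten names is an assumption about WIPO's published single-panelist schedule and should be checked against the current fee table.

    # Effective WIPO filing fee per domain name (single-member panel).
    # The 1-5 name tier ($1,500) is from the article; the 6-10 name tier ($2,000)
    # is an assumed value to be verified against WIPO's published schedule;
    # above 10 names the fee is set case by case with the WIPO Center.
    def wipo_filing_fee(num_domains):
        if num_domains <= 5:
            return 1500
        if num_domains <= 10:
            return 2000
        raise ValueError("fee for more than 10 names is decided with the WIPO Center")

    for n in (1, 5, 10):
        fee = wipo_filing_fee(n)
        print(f"{n:2d} domain name(s): ${fee} total, ${fee / n:.0f} per name")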



Domain Name Association Supports IANA Transition, Petitions Congress to Move Forward

2016-09-22T15:13:00-08:00

I recently sent a letter to congressional leaders, including Speaker of the House Paul Ryan, House Minority Leader Nancy Pelosi, Senate Majority Leader Mitch McConnell and Senate Minority Leader Harry Reid, expressing the Domain Name Association's support for the U.S. Administration's planned transition of the Internet Assigned Numbers Authority (IANA) to the global multi-stakeholder community under the stewardship of the Internet Corporation for Assigned Names and Numbers (ICANN). The Domain Name Association is a non-profit association that represents the interests of the domain name industry. There is no industry more affected by the work of ICANN than ours. Each of our companies relies on ICANN's efficient management of the IANA functions and fair administration of the domain name system's contractual regime. I believe these community-developed proposals will ensure the ability of our industry to thrive, and I encourage you to join us in supporting this necessary and overdue transition. As the domain name industry's leading business association, we have sought to ensure that: ICANN remains independent, free from capture by any entity or group; governments may participate as stakeholders but may not assert undue authority; permission-less innovation remains a prevailing force; and ICANN becomes truly accountable to the global Internet community.

Plan Empowers Businesses and Technical Experts; Reinforces Advisory Role of Governments. Our members have devoted countless hours to the lengthy process of preparing for this transition, and we can state with confidence that the IANA transition plan meets our requirements on all counts. Some are raising concerns, particularly that by relinquishing the IANA contract, the U.S. will be handing the Internet over to authoritarian regimes. That, simply, will not be the case. In fact, we believe the exact opposite will occur with this transition. The community-developed proposals contain a number of specific provisions that will make government interference at ICANN far less likely and will empower stakeholders to step in should ICANN waver under the outside influence of governments. Our member companies, those in the U.S. and those located elsewhere around the globe, have too much at stake to submit to a change that could empower governments over the policies and arrangements that guide our business. This transition plan empowers businesses and technical experts and properly reinforces the advisory role of governments.

The Domain Name Association Applauds Congress. The Domain Name Association applauds the oversight role Congress has played throughout the transition. I believe the close attention paid by members and staff has directly led to a sharper product, helping ensure greater stability and accountability at ICANN. Given the success of that oversight, I now ask that Congress acknowledge the will of those impacted the most and allow the transition to occur in a timely manner. Written by Roy Arbeit, Executive Director at Domain Name Association. Follow CircleID on Twitter. More under: Domain Names, Registry Services, ICANN, Policy & Regulation, Top-Level Domains [...]



Refutation of the Worst IANA Transition FUD

2016-09-21T14:33:00-08:00

Of all the patently false and ridiculous articles written this month about the obscure IANA transition, which has become an issue of leverage in the partisan debate over funding the USG via a Continuing Resolution, this nonsense by Theresa Payton is the most egregiously false and outlandish. As such, it demands a critical, nearly line-by-line response.

* * *

Changing who controls the Internet Corporation for Assigned Names and Numbers (ICANN) so close to our presidential election will jeopardize the results of how you vote on Nov. 8 unless Congress stops this changeover.

So the first sentence is fairly loaded with nuance. We aren't "changing who controls" ICANN so much as letting it continue to do what it has been doing for the last two decades. The "change" is that it will run the Internet Assigned Numbers Authority (IANA) as a subsidiary instead of as a zero-dollar contractor of the US. The Board of Directors of ICANN will continue to "control" ICANN, and the ICANN policy community will have greater accountability measures in place to "control" the ICANN Board after the contract expires at midnight on September 30th, 2016. But this contract expiration WILL IN NO WAY have anything to do with US election voting. Nothing, nada, zilch. Pure FUD, totally made up out of thin air.

* * *

When the calendar hits Sept. 30, a mere 6 weeks before our election, the United States cannot be assured that if any web site is hacked, the responsible party will be held accountable.

At the moment, the United States cannot be sure that responsible parties will be held to account for hacking today. ICANN has NOTHING to do with this aspect of cybersecurity, not a damn thing. This is what I call "Beyond the Palin" on Ms. Payton's part, a complete fabrication.

* * *

We cannot be sure if a web site is valid.

Not sure what she means here, but there is nothing that ICANN does or doesn't do in terms of website "validity" that will change after September 30th.

* * *

We cannot be sure if one country is being favored over another.

In terms of nation states participating AS nation states inside ICANN's Government Advisory Committee, there is no change that will favor one nation over another. The reality is that the ICANN policy-making community is dominated in many ways by American registries, registrars and activists. This won't change after Sept 30th.

* * *

These are all the things ICANN is responsible for and has worked perfectly since the Internet was created.

NONE of the things listed above by the author are things that ICANN is responsible for. Not one thing. It is a sheer fabrication! ICANN has patently not worked "perfectly" since the Internet was created. ICANN has been in existence for half of the life of the Internet and has acted in flawed ways over the last 17 years (some due to the existence of the contract about to expire). The reforms that are scheduled to go into effect on October 1 are attempts to fix some of these flaws.

* * *

Why change it now and so close to the election? Why does that matter to you as a voter?

The Internet naming, numbering and standards communities have been working diligently for years on these reforms so that this contract CAN expire on Sept 30th. It only matters to voters who consume the fact-free rhetoric of certain GOP politicians who SHOULD (if true to small-government principles) be in favor of this privatization/contract expiration.

* * *

Take a look at recent cyber activity as it relates to the election.
The Democratic National Convention was breached comprising the entire party's strategy, donor base, and indeed, national convention. Everything the DNC had done to prepare [...]



Breaking Nonsense: Ted Cruz, IANA Transition and the Irony of Life

2016-09-21T11:18:00-08:00

Harvard Professor Karl Deutsch, the late Nestor of political science, described world history as the "history of side effects". Political actions, according to his theory, always have side effects which go out of control and constitute new history. The history of the Internet is full of side effects. But this time, we could have especially unproductive ones. A failure of the IANA transition could trigger a process towards a re-nationalization of the borderless cyberspace, and Ted Cruz would go into the Internet history books as the "Father of Internet Fragmentation".

The IANA History

The battle around the IANA transition has meanwhile acquired a history of its own, going back more than 30 years. IANA emerged as a one-man institution run by Jon Postel in the 1980s. IANA was never the "controller" of the Internet; it was an "enabler". The IANA database is just like a "phone book" which enables users to find addresses. Postel operated IANA with the help of one assistant under a contract of his Information Sciences Institute (ISI) at the University of Southern California (USC) with DARPA, the advanced research agency of the US Department of Defense. Under this contract the US government authorized the publication of zone files for top-level domains in the Internet root server system. This contract expired in 1997 and was extended until 2000.

In the early 1990s, after the invention of the world wide web, it became clear that the six gTLDs (.com, .net, .org, .gov, .edu and .mil), which were established in the 1980s, would not be enough. In the middle of the 1990s Postel had his own ideas about how to extend the gTLD namespace. He flirted with the ITU and WIPO, two intergovernmental organizations of the UN system, to launch seven additional new gTLDs via an International Ad Hoc Committee (IAHC). The Clinton administration was not amused; it saw the risk of a fragmentation of the Internet and proposed an alternative route. A private not-for-profit corporation with an international board, incorporated under California law, was seen as the better alternative. In this model the decision-making power would remain in the hands of the non-governmental providers and users of Internet services from the private sector, the technical community and civil society. Governments were put into a "Governmental Advisory Committee" (GAC). ICANN was established in 1998.

This model — today known as the multistakeholder model — was a political innovation. The plan to put the management of a critical global virtual resource into the hands of qualified non-governmental stakeholders rocked the traditional mechanisms of international relations. But not everybody was excited. Skeptical voices raised issues of legitimacy and accountability for the new ICANN. And many governments were not happy with their "advisory role" in the GAC. Indeed, when ICANN was established, it was unclear whether this innovation would work. To reduce the risk of a failure, the US government entered into a Memorandum of Understanding with the new ICANN which included the duty for ICANN to report on a regular basis to the National Telecommunications and Information Administration (NTIA) of the US Department of Commerce. Furthermore, the US government transferred the contract with USC into a contract with ICANN to continue its stewardship role with regard to the IANA service. ICANN was still untested. The original plan was to give ICANN full independence after two years. But even in the high-speed Internet world, this was an unrealistic plan.

To establish a multistakeholder mechanism is an extremely complex challenge. ICANN made progress from its very first day. But it w[...]



Infrastructure for a Connected World

2016-09-21T10:03:00-08:00

Connected devices need a free-to-use infrastructure that allows for innovation beyond the needs of a provider or other intermediary. An interface is best when it disappears and the user can focus on the problem at hand. In the same way, infrastructure is best when it can simply be assumed and becomes invisible. With an invisible infrastructure, as with an invisible interface, a user can concentrate on their tasks and not think about the computer. Dan Bricklin and I chose to implement VisiCalc on personal computers that people could just purchase. This made VisiCalc free to use.

The reason the Internet has been so transformative is that it gives us the ability to ignore the "between" and focus on the task at hand or the problem we are trying to solve. To use a website all you need to do is open the browser and type the URL (or, often, use an app), and it "just works". We take this for granted now, but when the web first burst onto the scene it seemed like magic. And, amazingly, the web is effectively free to use because you pay for connectivity entirely apart from each website or connection. If we are to extend this magic to connected things, aka the Internet of Things, we need to look behind the screen and understand the "why" of this magic.

In order to use the web, we just need connectivity. This worked well in local networks such as Ethernets, where you can just plug in your computer and connect to any other such computer locally; thanks to interworking (aka the Internet), this simplicity was extended to any other connected computer around the world. Today I can connect to the web as I travel by having a cellular account and cadging connectivity here and there, after manually signing up to websites (or lying by saying I read through an agree screen) and working past WiFi security perimeters. And we accept that oftentimes we're blocked.

If we are to truly support an "Internet of Things" we need to assure free-to-use connectivity between any two end points. Achieving this is a matter of technology and economics. To take a simple example: if I'm wearing a heart monitor, it needs to be able to send a message to my doctor's monitoring system without having to negotiate for passage. No agree screens or sign-up routines. For this to occur we need what I call Ambient Connectivity — the ability to just assume that we can get connected. This assumption is the same as assuming that we have access to sidewalks, drinkable water and other similar basics all around us.

The principal challenge to achieving Ambient Connectivity today is economic. At present we fund the infrastructure we use to communicate in much the same way we paid for railroad trips: we pay the rail companies for rides, just as we pay a phone company to carry our speech. For a railroad operator, owning tracks is a necessary expense it bears so that it can sell the rides. It wouldn't make sense to offer rides to places that aren't profitable to the railroad, and it doesn't allow you to explore beyond the needs of the railroad's business model. In the same way the telecommunications company owns wires (or frequencies) so that it can sell (provide) services such as phone calls and "cable". It can't make money on value created outside the network. This is why there is so much emphasis on being in the middle of "M2M", a machine-to-machine view of connected things that treats them like dumb end points, like telephones.

With the Internet we create solutions in our computers and devices without depending on the provider to assure that messages reach their correct destination in order. In this sense they are more like [...]
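To make the heart-monitor example above concrete, here is a minimal sketch of what a device could do once connectivity can simply be assumed. The endpoint URL, payload fields and reading value are hypothetical, not taken from the article; the point is what is absent — no captive portal, no carrier sign-up, no agree screen.

    # A minimal sketch of the heart-monitor example, assuming Ambient
    # Connectivity: the device simply sends its reading, with nothing to
    # negotiate along the way. The endpoint URL and payload fields are
    # hypothetical, not from the article.
    import json
    import urllib.request

    def report_reading(beats_per_minute: int) -> None:
        payload = json.dumps({"bpm": beats_per_minute}).encode("utf-8")
        req = urllib.request.Request(
            "https://monitor.example.org/readings",  # hypothetical monitoring endpoint
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        # Connectivity is assumed, so the only code needed is the message itself.
        urllib.request.urlopen(req, timeout=5)

    report_reading(72)  # send a single (made-up) heart-rate reading

Everything that would normally stand between the device and the monitoring service is pushed out of the code entirely, which is exactly the property Ambient Connectivity is meant to provide.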



The Kindness of Strangers, or Not

2016-09-20T11:07:01-08:00

A few days ago I was startled to get an anti-spam challenge from an Earthlink user to whom I had not written. Challenges are a WKBA (well-known bad idea) which I thought had been stamped out, but apparently not. The plan behind challenges seems simple enough: they demand that the sender do something to prove he's human that a spammer is unlikely to do. The simplest ones just ask you to respond to the challenge; the worst ones, like this one, have a variety of complicated hoops they expect you to jump through. What this does, of course, is to outsource the management of your mailbox to people who probably do not share your interests.

In this case, I sent a message to a discussion list about church financial management, and the guy sending the challenges is a subscriber. Needless to say, an anti-spam system that challenges messages from mailing lists to which the recipient has subscribed is pretty badly broken, but it's worse than that. On the rare occasions that I get challenges, my goal is to make the challenges go away, so I have two possible responses.

If it's in response to mail I didn't send, i.e., they're responding to spam that happens to have a forged From: address in one of my domains, I immediately confirm it. That way, when the guy gets more spam from the forged address, it'll go straight to his inbox without bothering me. Since the vast majority of spam uses forged addresses, this handles the vast majority of the challenges.

If it's in response to mail I did send, I don't confirm it, since I generally feel that if it's not important enough for them to read my mail, it's not important enough for me to send any more. In this particular case, I wrote to the manager of the mailing list and encouraged him to suspend the offending subscriber, since if he's sending me challenges, he's sending them to everyone else who posts to the list, too.

You may have noticed that neither of these is likely to be what the person sending the challenges hoped I would do. But you know, if you give random strangers control over what gets into your inbox, you get what you get. So don't do that. There are plenty of other reasons not to send challenges, notably that many mail systems treat them as "blowback" spam, with consequent bad results when the system sending the challenges tries to send other mail, but I'd hope the fundamental foolishness of handing your inbox to strangers would be enough to make it stop.

Written by John Levine, Author, Consultant & Speaker
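For illustration only, here is a minimal sketch of the two-way decision rule described in this post, assuming the mail system keeps a record of the Message-IDs it actually sent; the sample IDs and function name are hypothetical, not something the author published.

    # A toy sketch of the challenge-handling policy described above.
    # The "outgoing log" is a stand-in for however a real mail system
    # would record what it actually sent; the IDs below are made up.
    SENT_MESSAGE_IDS = {"<20160920.1234@lists.example.com>"}

    def handle_challenge(challenged_message_id: str) -> str:
        """Return the action to take for an incoming challenge."""
        if challenged_message_id not in SENT_MESSAGE_IDS:
            # Challenge for mail never sent here (forged From: address):
            # confirm it, so future forged spam goes straight to the
            # challenger's inbox instead of generating more challenges.
            return "confirm"
        # Challenge for mail really sent: not worth the hoops; ignore it
        # (and perhaps complain to the list manager, as the post describes).
        return "ignore"

    print(handle_challenge("<something@spammer.invalid>"))        # -> confirm
    print(handle_challenge("<20160920.1234@lists.example.com>"))  # -> ignore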



DDOS Attackers - Who and Why?

2016-09-19T20:53:00-08:00

Bruce Schneier's recent blog post, "Someone is Learning How to Take Down the Internet", reported that the incidence of DDOS attacks is on the rise. And by this he means that these attacks are on the rise both in the number of attacks and in the intensity of each attack. A similar observation was made in the Verisign DDOS Trends Report for the second quarter of 2016, which reports that DDOS attacks became more sophisticated and persistent in that quarter. The Verisign report notes that the average attack size is 17Gbps, with a number of persistent attacks of the order of 100Gbps or greater; the number reported is 75% larger than in the comparable period a year ago. To quote from the report: "Verisign's analysis shows that the attack was launched from a well-distributed botnet of more than 30,000 bots from across the globe with almost half of the attack traffic originating in the United States." The State of the Internet report from Akamai for the second quarter of 2016 paints a disturbingly similar picture: they observed a 129% increase in DDOS attacks over the same period in 2015, with increases in NTP reflection attacks and associated UDP flooding attacks.

The obvious question I have when reading these reports is: who is behind these attacks, and why are they doing it? There has been a visible evolution of malice and hostility on the Internet. The earliest recorded event that I can recall is the Morris Worm of November 1988. This was a piece of self-replicating software that operated in a manner similar to many biological viruses — once a host was infected, it tried to infect other hosts with an exact copy of its own code. The author, Robert Morris, was evidently a curious graduate student. This was perhaps the first public Internet example of the 'heroic hacker' form of attack, typified by apparently pointless exploits that have no obvious ulterior motive other than flag planting, or other forms of discovery. A public declaration that "I was here" appeared to be the primary motivation behind many of these hacker exploits.

However, this situation did not remain so for long. While finding new attack vectors was a challenging task that involved considerable expertise, it was quickly observed that the level of remediation of previously discovered vulnerabilities was woefully small. As long as the vulnerabilities remained unfixed, the attacks could simply be repeated, and pretty quickly much of this work was packaged into scripts. This resulted in a new wave of attacks typified by so-called 'script kiddies' who ran these attack scripts without detailed knowledge of precisely how they exploited vulnerabilities in host systems. While it's debatable, it appears in retrospect that the motive of the script kiddies was still predominantly flag planting.

The next step in this unfortunate story was the introduction of money, and predictably, where money flows, crime follows soon after. Script authors rapidly discovered that they could sell their attack scripts, so that what was once a hobby turned into a profession. Equally, potential attackers found that they could turn the threat of an attack into a monetary opportunity: launch a small attack and threaten a larger and more prolonged attack unless the victim paid up.
There is no doubt that this criminal component of attack activity persists on the Internet today, but it is increasingly difficult to reconcile the level of expertise and capability that lies behind some of these large scale attacks on criminal activit[...]
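As a rough, back-of-the-envelope illustration (mine, not part of the Verisign or Akamai reports), the figures quoted above already hint at why such attacks are cheap to mount: a 100 Gbps attack spread over 30,000 bots needs only a few megabits per second from each one.

    # Back-of-the-envelope arithmetic based on the figures quoted above.
    # Assumption (mine, not the report's): traffic is spread roughly evenly
    # across the bots and the 100 Gbps figure is aggregate attack bandwidth.

    ATTACK_GBPS = 100      # "persistent attacks of the order of 100Gbps or greater"
    BOTS = 30_000          # "a well-distributed botnet of more than 30,000 bots"

    per_bot_mbps = ATTACK_GBPS * 1_000 / BOTS
    print(f"Average traffic per bot: {per_bot_mbps:.1f} Mbps")   # ~3.3 Mbps
    # A few Mbps is well within an ordinary home uplink, which is why such
    # botnets need no exotic hosts, and reflection/amplification attacks
    # (e.g. NTP) lower the per-bot requirement even further.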



Three Kinds of UDRP Disputes and Their Outcomes

2016-09-19T08:17:00-08:00

There are three kinds of UDRP disputes: those that are out-and-out cybersquatting, those that are truly contested, and those that are flat-out overreaching by trademark owners.

In the first group are the plain vanilla disputes: sometimes domain names identical to marks with new TLD extensions, sometimes typosquatting, other times registrations of the dominant terms of trademarks plus a qualifier. Respondents in this group have no defensible positions and invariably default in appearance; in essence the registrations are opportunistic and mischievous and clearly in breach of respondents' warranties and representations. This group comprises by far the largest number of defaults, between 85% and 90%.

The second group consists of complainants whose trademarks have priority of use in commerce (they have to have priority, otherwise there can be no bad faith registration) but either 1) had no reputation in the marketplace when the domain name was registered, 2) the parties are located in different markets or countries, so respondents can plausibly deny knowledge of the marks, or 3) the terms are generic or descriptive, thus capable of being used independently of any reference to trademark values. Examples are Circus Belgium v. Domain Administrator, Online Guru Inc., D2016-1208 (WIPO September 5, 2016); Javier Narvaez Segura, Grupo Loading Systems S.L. v. Domain Admin, Mrs. Jello, LLC, D2016-1199 (WIPO August 31, 2016); and Hopscotch Group v. Perfect Privacy LLC/Joseph William Lee, D2015-1844 (WIPO January 20, 2016).

In Javier Narvaez, the Panel notes for example that the dominant word element of the trademark is descriptive, and as such a respondent has a right to register and use a domain name to attract Internet traffic based on the appeal of commonly used descriptive or dictionary terms, in the absence of circumstances indicating that the respondent's aim in registering the disputed domain name was to profit from and exploit the complainant's trademark. The underlying policy is that registering domain names that by happenstance correspond to trademarks is not per se unlawful without proof that respondents registered them with complainants' trademarks in mind, and for this reason there is no breach of registrants' warranties and representations. Although Respondents in the cited disputes prevailed, the Panels denied reverse domain name hijacking in part because weak cases in this group (but not the third group) do not rise to the level of proof necessary to support sanctions. In Javier Narvaez the Panel notes that "Respondent has plausibly denied that it knew of the Complainant when Respondent registered the Disputed Domain Name [and there's] no evidence that in making the registration Respondent targeted Complainant or Complainant's trademark."

Because arbitration is different from litigation, defaulting respondents in the second group can be exonerated of bad faith if the circumstances support good faith registration. Bigfoot Ventures LLC v. Shaun Driessen, D2016-1330 (WIPO August 6, 2016) (RDNH denied because of the unusual combination of words). Complainants can even be charged with reverse domain name hijacking when respondents default; that is, sanctions don't have to be requested (although some panelists believe they do have to be!). Cyberbit Ltd. v. Mr. Kieran Ambrose, Cyberbit A/S, D2016-0126 (WIPO February 26, 2016).

The third group is trademark owners whose rights postdate the registrations of the [...]