CircleID

Latest posts on CircleID

Updated: 2017-11-18T11:45:00-08:00


ICANN Issues Guidance to Domain Registrars and Registries in Light of Hurricane Maria

2017-11-18T11:45:00-08:00

ICANN has issued a guidance notice to registrars and registries in relation to Hurricane Maria, which caused massive damage throughout the Caribbean.

* * *

Dear gTLD Registries and Registrars,

As you know, Hurricane Maria caused catastrophic damage in the Caribbean and surrounding areas. We have also heard from community members that ongoing issues with electric power grids and the telecommunications infrastructure are affecting the ability of some registrants to renew their domain names. In order to assist registrars and registries in providing continuity of service to affected customers, ICANN hereby approves Hurricane Maria and other similar natural disasters as an extenuating circumstance under RAA section 3.7.5.1.

"3.7.5.1 Extenuating circumstances are defined as: UDRP action, valid court order, failure of a Registrar's renewal process (which does not include failure of a registrant to respond), the domain name is used by a nameserver that provides DNS service to third-parties (additional time may be required to migrate the records managed by the nameserver), the registrant is subject to bankruptcy proceedings, payment dispute (where a registrant claims to have paid for a renewal, or a discrepancy in the amount paid), billing dispute (where a registrant disputes the amount on a bill), domain name subject to litigation in a court of competent jurisdiction, or other circumstance as approved specifically by ICANN."

Based on this approval, registrars will be permitted to temporarily forbear from canceling domain registrations that could not be renewed as a result of the natural disaster. This and other devastating events highlight the potential need for a policy initiative to protect registrants when they are unable to renew their domains as a result of natural disasters or other extraordinary circumstances. In the interim, we encourage you to take these circumstances into consideration when reviewing renewal delinquencies from affected areas.
Thank you for your attention. Please let me know if you have any questions, or if there is anything else ICANN might be able to do to assist you in providing continuity of service to customers affected by Hurricane Maria or other natural disasters.

Sincerely,
Akram Atallah
President, Global Domains Division

* * *

This isn't the first time that this has happened: a previous incident in Asia triggered action from both registrars and registries to give domain name registrants impacted by the natural disaster breathing space. Several people within the broader ICANN community had raised the issue of Caribbean registrants in the last couple of weeks. ICANN giving registrars and registries a "green light" means that there shouldn't be any issues with contractual compliance should a registrar or registry give people extra leeway.

Written by Michele Neylon, MD of Blacknight Solutions

Follow CircleID on Twitter

More under: Domain Management, Domain Names, ICANN, Registry Services [...]



Berners-Lee Talks Net Neutrality in Washington, "ISPs Should be Treated More Like Utilities"

2017-11-17T12:34:00-08:00

Tim Berners-Lee is in Washington urging lawmakers to reconsider the rollback of net neutrality rules — while remaining optimistic, he warns that a "nasty wind" is blowing. Olivia Solon, reporting in The Guardian, writes: "These powerful gatekeepers ... control access to the internet and pose a threat to innovation if they are allowed to pick winners and losers by throttling or blocking services. It makes sense, therefore, that ISPs should be treated more like utilities. ... 'Gas is a utility, so is clean water, and connectivity should be too,' said Berners-Lee. 'It's part of life and shouldn't have an attitude about what you use it for — just like water.'"

Follow CircleID on Twitter

More under: Access Providers, Net Neutrality, Policy & Regulation




U.S. Government Takes Steps Towards Increased Transparency for Vulnerabilities Equities Process

2017-11-16T18:47:00-08:00

The White House has released a charter offering more transparency into the Vulnerabilities Equities Process. Tom Spring from ThreatPost reports: "On Wednesday it released the 'Vulnerabilities Equities Policy and Process' [PDF] charter that outlines how the government will disclose cyber security flaws and when it will keep them secret. The release of the charter is viewed as a positive by critics and a step toward addressing private-sector concerns that the VEP's framework is too secretive."

Follow CircleID on Twitter

More under: Cybersecurity, Policy & Regulation




IBM Launches Quad9, a DNS-based Privacy and Security Service to Protect Users from Malicious Sites

2017-11-16T17:58:00-08:00

In a joint project, IBM Security along with Packet Clearing House (PCH) and The Global Cyber Alliance (GCA) today launched a free service designed to give consumers and businesses added online privacy and security protection. The new DNS service is called Quad9 in reference to the IP address 9.9.9.9 offered for the service. The group says the service is aimed at protecting users from accessing malicious websites known to steal personal information, infect users with ransomware and malware, or conduct fraudulent activity. Quad9 is said to provide these protections without compromising the speed of users' online experience. From the announcement: "Leveraging PCH's expertise and global assets around the world, Quad9 has points of presence in over 70 locations across 40 countries at launch. Over the next 18 months, Quad9 points of presence are expected to double, further improving the speed, performance, privacy and security for users globally. Telemetry data on blocked domains from Quad9 will be shared with threat intelligence partners for the improvement of their threat intelligence responses for their customers and Quad9." — The Genesis of Quad9: "Quad9 began as the brainchild of GCA. The intent was to provide security to end users on a global scale by leveraging the DNS service to deliver a comprehensive threat intelligence feed. This idea led to the collaboration of the three entities: GCA: Provides system development capabilities and brought the threat intelligence community together; PCH: Provides Quad9's network infrastructure; and IBM: Provides IBM X-Force threat intelligence and the easily memorable IP address (9.9.9.9)." — Philip Reitinger, President and CEO of the Global Cyber Alliance: "Protecting against attacks by blocking them through DNS has been available for a long time, but has not been used widely. Sophisticated corporations can subscribe to dozens of threat feeds and block them through DNS, or pay a commercial provider for the service.
However, small to medium-sized businesses and consumers have been left behind — they lack the resources, are not aware of what can be done with DNS, or are concerned about exposing their privacy and confidential information. Quad9 solves these problems. It is memorable, easy to use, relies on excellent and broad threat information, protects privacy and security, and is free."

Follow CircleID on Twitter

More under: Cyberattack, Cybercrime, DNS, DNS Security, Malware, Privacy, Web [...]
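Using a resolver like Quad9 simply means pointing an ordinary DNS stub query at 9.9.9.9 instead of the ISP's resolver; the threat filtering happens entirely on the resolver side. As a minimal sketch (my own illustration in Python, not code from Quad9 or IBM — the function name and transaction ID are invented for the example), here is the wire-format datagram a stub resolver would send:

```python
import struct

def build_dns_query(hostname: str, txid: int = 0x1234) -> bytes:
    """Build a minimal DNS query packet for an A record (RFC 1035 wire format)."""
    # Header: ID, flags (recursion desired), QDCOUNT=1, AN/NS/ARCOUNT=0
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    # Question: QNAME as length-prefixed labels, then QTYPE=A (1), QCLASS=IN (1)
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in hostname.rstrip(".").split(".")
    ) + b"\x00"
    return header + qname + struct.pack(">HH", 1, 1)

query = build_dns_query("example.com")
# To resolve through Quad9, one would send this datagram over UDP:
#   sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   sock.sendto(query, ("9.9.9.9", 53))
# A domain on Quad9's threat feeds would come back blocked rather than resolved.
```

The point of the sketch is that nothing changes on the client: the same query sent to a filtering resolver simply yields no answer for known-malicious names.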



UDRPs Filed - Brand Owners Take Note

2017-11-16T13:27:00-08:00

After being in the domain industry for over 15 years, there aren't too many things that catch me by surprise, but recently a few UDRP filings have me scratching my head. Both ivi.com and ktg.com have had UDRPs filed against them, and for anyone holding a valuable domain name, they are cautionary tales whose outcomes are worth watching. Just as a refresher, to succeed in a UDRP filing, the complainant must prove all of the following: (1) the domain name is identical or confusingly similar to a trademark or service mark in which the complainant has rights; (2) the registrant has no rights or legitimate interests in respect of the domain name; and (3) the domain name has been registered and is being used in bad faith. With that in mind, let's look a little closer at the details of these two troubling UDRP filings.

Ivi.com is registered to WebMD, LLC, a long-time provider of health and wellness information on the Internet, and the domain has been registered since 1992. The domain name currently doesn't resolve to any content, so it's not actively being used. The complainant is Equipo IVI SL, an assisted reproduction group based in Spain that appears to operate from the domain ivi-fertility.com. According to its website, IVI was initially founded in 1990 in Valencia.

The domain ktg.com is registered to HUKU LLC, which appears to be an entity based in Belize, and has been registered since at least 2001. According to a reverse WHOIS lookup, this entity owns a few hundred generic domain names in a variety of extensions. The domain ktg.com resolves to a Domain Holdings page with a message stating that the domain may be for sale. The complainant is a company called Kitchens To Go, which operates from the kitchenstogo.com domain, registered in 1998. They also appear to operate the k-t-g.com domain name.
Based on prima facie evidence, I'm doubtful that either of these UDRP filings should succeed — but then again, the domain imi.com was recently handed over to the complainant in a case with circumstances that appear very similar to these latest two. It should be noted, though, that in that case the registrant did not even respond to the UDRP. What can brand owners do to ensure they don't find themselves losing a domain in a questionable UDRP filing? A few things:

Ensure your WHOIS information is up-to-date and accurate so that any correspondence sent to the contacts is received. People assume nothing of value arrives at those published contacts, but a UDRP filing is certainly something you want to be sure you receive.

If you do find a long-held domain subject to a UDRP (or any domain, for that matter), make sure you file a response so that you don't leave the complainant as the only voice in front of the UDRP panelists.

Make sure that your registrar has a procedure in place to notify you of any UDRP filing it may receive for your domains. In addition to the communication to the domain owner, the registrar of record also receives notification, and it should be passing those notifications on to its clients.

It will be very interesting to see how these two UDRP filings play out, and we'll be sure to report back once the decisions have been made public.

Written by Matt Serlin, SVP, Client Services and Operations at Brandsight

Follow CircleID on Twitter

More under: Domain Names, Intellectual Property, UDRP [...]
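The conjunctive structure of the three-part test is worth emphasizing: a complaint fails if any single element is missing, which is why a long registration history or a plausible legitimate interest can defeat an otherwise confusingly similar mark. As a toy illustration of that structure (the class and function names are mine; real panels weigh evidence, not booleans):

```python
from dataclasses import dataclass

@dataclass
class UDRPComplaint:
    # The three elements a complainant must prove under the UDRP:
    confusingly_similar: bool     # identical/confusingly similar to complainant's mark
    no_legitimate_interest: bool  # registrant lacks rights or legitimate interests
    bad_faith: bool               # registered AND being used in bad faith

def complaint_succeeds(c: UDRPComplaint) -> bool:
    # The elements are conjunctive: failing any one defeats the whole complaint.
    return c.confusingly_similar and c.no_legitimate_interest and c.bad_faith
```

On this framing, a 1992 registration that predates a complainant's rights undercuts the bad-faith element on its own, which is the crux of the skepticism above.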



When UDRP Consolidation Requests Go Too Far

2017-11-16T07:25:00-08:00

Although including multiple domain names in a single UDRP complaint can be a very efficient way for a trademark owner to combat cybersquatting, doing so is not always appropriate. One particularly egregious example involves a case that originally included 77 domain names — none of which the UDRP panel ordered transferred to the trademark owner, simply because consolidation against the multiple registrants of the domain names was improper. The UDRP case, filed by O2 Worldwide Limited, is an important reminder to trademark owners that they should not overreach when filing large complaints — at least when the disputed domain names are held by different registrants.

The Same Domain-Name Holder

Under the UDRP rules, a "complaint may relate to more than one domain name, provided that the domain names are registered by the same domain-name holder." As a result, many UDRP complaints include multiple domain names — from two to more than 1,500. While this UDRP rule may seem straightforward, it can become more complicated in practice, especially as some clever cybersquatters try to hide behind aliases to frustrate trademark owners. Where the registrants appear to be different, the WIPO Overview of WIPO Panel Views on Selected UDRP Questions, Third Edition, says that UDRP panels often consider the following when deciding whether it is proper to include multiple domain names in a single complaint: "whether (i) the domain names or corresponding websites are subject to common control, and (ii) the consolidation would be fair and equitable to all parties." The Overview adds: "Procedural efficiency would also underpin panel consideration of such a consolidation scenario."

Not Procedurally Efficient

In the O2 case, the panel found that consolidation was not appropriate, based on a most unusual set of facts. O2 had argued that "unifying features… link all of the domains" and that a single individual "maintain[ed] common control" over all of the domain names.
But the panel strongly disagreed, noting that 25 different entities were named as respondents for the 77 domain names in the original complaint. Incredibly, the panel said: The administrative procedure that the [WIPO] Center was required to undertake as a result of this filing involved: (i) numerous communications with four different Registrars; (ii) the withdrawal of the Complaint against 11 of the domain names due to the fact that they were no longer registered; (iii) the receipt of 20 separate communications, from 12 different Respondents or Other Submissions, respectively, each of whom appeared to be operating independently of the others and whose positions were not identical; (iv) the receipt of two separate formal Responses; and (v) the filing of one unsolicited Supplemental Filing by the Complainant. This, the panel wrote, created an "administrative burden" that was "undue — and certainly not procedurally efficient." Further, the panel said that because "the Respondents appear to be separate persons whose positions are not necessarily identical," treating them alike in a single proceeding "is unlikely to be fair and equitable." Not only did the panel reject O2's consolidation arguments, but it also rejected O2's request to proceed against any of the disputed domain names: In the Panel's view, what the Complainant has sought to do is throw a large number of disputed domain names registered by a large number of separate Respondents into one Complaint, request consolidation on the basis of a general assertion of connectedness, rely on the Center to verify the situation of every disputed domain name and Respondent to identify those against whom the Complaint can proceed, and rely on the Panel to work through the case of every Respondent to determine in respect of whom consolidation would be fair and equitable. The Panel does not wish to encourage Complainants to adopt this approach.[...]



Russia Targeted British Telecom, Media, Energy Sectors, Reveals UK National Cyber Security Centre

2017-11-15T12:14:00-08:00

Speaking at The Times Tech Summit in London, Ciaran Martin, chief of the National Cyber Security Centre (NCSC), warned Russia is seeking to undermine the international system. "I can't get into too much of the details of intelligence matters, but I can confirm that Russian interference, seen by the National Cyber Security Centre, has included attacks on the UK media, telecommunications and energy sectors. ... The government is prioritising cyber security because we care so much about the digital future of the country. We're doing it broadly on the themes that will come up today — defend networks, deter attackers and develop the skills base."

Follow CircleID on Twitter

More under: Cyberattack, Cybersecurity, Policy & Regulation




Airplanes Vulnerable to Hacking, Says U.S. Department of Homeland Security

2017-11-15T10:03:00-08:00

Researchers have successfully demonstrated that a commercial aircraft can be remotely hacked. Calvin Biesecker reports in Avionics: "A team of government, industry and academic officials successfully demonstrated that a commercial aircraft could be remotely hacked in a non-laboratory setting last year, a U.S. Department of Homeland Security (DHS) official said Wednesday at the 2017 CyberSat Summit in Tysons Corner, Virginia. [U.S. Department of Homeland Security aviation program manager says] 'We got the airplane on Sept. 19, 2016. Two days later, I was successful in accomplishing a remote, non-cooperative, penetration ... [which] means I didn't have anybody touching the airplane, I didn't have an insider threat. I stood off using typical stuff that could get through security and we were able to establish a presence on the systems of the aircraft.'"

Follow CircleID on Twitter

More under: Cyberattack, Cybersecurity




Your Online Freedoms are Under Threat - 2017 Freedom on the Net Report

2017-11-14T08:08:00-08:00

As more people get online every day, Internet freedom is facing a global decline for the 7th year in a row. Today, Freedom House released their 2017 Freedom on the Net report, one of the most comprehensive assessments of countries' performance regarding online freedoms. The Internet Society is one of the supporters of this report. We think it brings solid and needed evidence-based data in an area that fundamentally impacts user trust. Looking across 65 countries, the report highlights several worrying trends, including: manipulation of social media in democratic processes; restrictions on virtual private networks (VPNs); censoring of mobile connectivity; and attacks against netizens and online journalists. Elections prove to be particular tension points for online freedoms (see also Freedom House's new Internet Freedom Election Monitor). Beyond the reported trend towards more sophisticated government attempts to control online discussions, the other side of the coin is an increase in restrictions on Internet access, whether through shutting down networks entirely or blocking specific communication platforms and services. These Internet shutdowns are at risk of becoming the new normal. In addition to their impact on freedom of expression and peaceful assembly, shutdowns generate severe economic costs, affecting entire economies [1] and the livelihood of tech entrepreneurs, often in regions that would benefit the most from digital growth. We need to build on these numbers as they open a new door to ask governments for accountability. By adopting the U.N. Sustainable Development Goals (SDGs) last year, the governments of the world committed to leveraging the power of the Internet in areas such as education, health and economic growth. Cutting off entire populations from the Internet sets the path in the wrong direction.
Mindful that there is urgency to address this issue, the Internet Society is today releasing a new policy brief on Internet shutdowns, which provides an entry into this issue, teases out the various impacts of such measures and offers some preliminary recommendations to governments and other stakeholders. Of course, this can only be the beginning of any action, and we need everyone to get informed and make their voices heard on shutdowns and other issues related to online freedoms. Here is what you can do:

Follow the live video stream of the launch event for Freedom House's 2017 Freedom on the Net report. The Internet Society's Vice President of Global Policy Development, Sally Wentworth, is among the panelists. (14 November 2017, 9:30 am EDT)

Read the new Freedom on the Net report and pay particular attention to the country reports relevant to you.

Ask people to spread the word that Internet shutdowns cost everyone. Governments should stop using Internet shutdowns and other means of denying access as a policy tool: we must keep the Internet on. Tweet using #ShapeTomorrow and #NetFreedom2017. You'll find more tweets on the Internet Society's Twitter account.

Read the Internet Society's new policy brief on Internet shutdowns, and look back at our paper on Internet Content Blocking for a deeper technical assessment of some common content filtering techniques.

Read again ISOC's findings on personal rights and freedoms from our 2017 Global Internet Report.

Join the Keep It On movement to collectively call for an end to shutdowns.

[1] Among other similar studies, Brookings assessed a cost of about USD 2.4 billion resulting from shutdowns across the countries evaluated between July 1, 2015 and June 30, 2016.

Written by Nicolas Seidler, Senior Policy Advisor

Follow CircleID on Twitter

More under: Censorship, Internet Governance, Policy & Regulation [...]



Telesat - a Fourth Satellite Internet Competitor

2017-11-13T12:58:00-08:00

Telesat will begin with only 117 satellites while SpaceX and the others plan to launch thousands — how can they hope to compete? The answer lies in their patent-pending deployment plan. I've been following the SpaceX, OneWeb and Boeing satellite Internet projects, but have not mentioned Telesat's project. Telesat is a Canadian company that has provided satellite communication service since 1972. (They claim their "predecessors" worked on Telstar, which relayed the first intercontinental transmission in 1962.) Earlier this month, the FCC approved Telesat's petition to provide Internet service in the US using a proposed constellation of 117 low-Earth orbit (LEO) satellites.

They plan a polar-orbit constellation of six equally-spaced (30 degrees apart) planes inclined at 99.5 degrees at an altitude of approximately 1,000 kilometers, and an inclined-orbit constellation of five equally-spaced (36 degrees apart) planes inclined at 37.4 degrees at an approximate altitude of 1,248 kilometers. This hybrid polar-inclined constellation will result in global coverage with a minimum elevation angle of approximately 20 degrees using their ground stations in Svalbard, Norway and Inuvik, Canada. Their analysis shows that 168 polar-orbit satellites would be required to match the global coverage of their 117-satellite hybrid constellation, and according to Erwin Hudson, Vice President of Telesat LEO, their investment per Gbps of sellable capacity will be as low as, or lower than, any existing or announced satellite system. They also say their hybrid architecture will simplify spectrum-sharing. A figure from their patent application illustrates routing between the two constellations.
The first hop in a route to the Internet for a user in a densely populated area like Mexico City (410) would be to a visible inclined-orbit satellite (420). The next hop would be to a satellite in the polar-orbit constellation (430), then to a ground station on the Internet (440). The up and downlinks will use radio frequencies, and the inter-satellite links will use optical transmission. Since the ground stations are in sparsely populated areas and the distances between satellites are low near the poles, capacity will be balanced. This scheme may result in Telesat customers experiencing slightly higher latencies than those of their competitors, but the difference will be negligible for nearly all applications. They will launch two satellites this year — one on a Russian Soyuz rocket and the other on an Indian Polar Satellite Launch Vehicle. These will be used in tests and Telesat says a number of their existing geostationary satellite customers are enthusiastic about participating in the tests. They will launch their phase 2 satellites beginning in 2020 and commence commercial service in 2021. They consider 25 satellites per launch vehicle a practical number so they will have global availability before their competitors. Their initial capacity will be relatively low, but they will add satellites as demand grows. Like OneWeb, Telesat will work with strategic partners for launches and design and production of satellites and antennae. They have not yet selected those partners, but are evaluating candidates and are confident they will be ready in time for their launch dates. Their existing ground stations give them a head start. (OneWeb just contracted with Hughes for ground stations). Their satellites will work with mechanical and electronically steered antennae, and each satellite will have [...]
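The coverage claims above can be sanity-checked with standard link geometry: for a satellite at altitude h and a minimum elevation angle ε at the edge of coverage, the Earth-central half-angle of the coverage circle is λ = arccos((Re/(Re+h))·cos ε) − ε. This is my own back-of-the-envelope sketch, not Telesat's analysis:

```python
import math

EARTH_RADIUS_KM = 6371.0

def coverage_half_angle(altitude_km: float, min_elev_deg: float) -> float:
    """Earth-central half-angle (radians) of a satellite's coverage circle,
    given orbital altitude and the minimum elevation angle at the edge."""
    eps = math.radians(min_elev_deg)
    ratio = EARTH_RADIUS_KM / (EARTH_RADIUS_KM + altitude_km)
    return math.acos(ratio * math.cos(eps)) - eps

# The article's figures: polar shell at ~1,000 km, inclined shell at ~1,248 km,
# both with a ~20-degree minimum elevation angle.
for h in (1000.0, 1248.0):
    lam = coverage_half_angle(h, 20.0)
    footprint_km = EARTH_RADIUS_KM * lam  # ground radius of the coverage circle
    print(f"{h:6.0f} km: half-angle {math.degrees(lam):.1f} deg, "
          f"footprint radius {footprint_km:.0f} km")
```

The higher inclined shell earns a usefully larger footprint per satellite, which is consistent with the claim that a hybrid constellation needs fewer satellites than an all-polar one for the same coverage.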



Google Now a Target for Regulation

2017-11-13T11:35:00-08:00

Headline in the Washington Post (9 Nov 2017): "Tech companies pushed for net neutrality. Now Sen. Al Franken wants to turn it on them." The time was — way back around the turn of the century — when all Internet companies believed that the Internet should be free from government regulation. I lobbied along with Google and Amazon to that end (there were no Twitter and Facebook then); we were successful over the objection of traditional telcos who wanted the protection of regulation. The Federal Communications Commission (FCC), under both Democrats and Republicans, agreed to forbear from regulating the Internet the way it regulates the telephone network; the Internet flourished, to put it mildly.

Fast forward to 2015. Google and other Internet giants and their trade group, the Internet Association, were successful in convincing the Obama FCC to reverse that policy and regulate Internet Service Providers (ISPs) under the same regulation which helped to stifle innovation in telephony for decades. The intent, according to the Internet Association, was to protect Net Neutrality (a very good name) and assure that ISPs didn't either censor or prefer their own content over the content of others — Google's, for example. The regulation was acknowledged to be preemptive: ISPs weren't discriminating, but they might. This spring Trump's FCC Chair, Ajit Pai, announced the beginning of an effort to repeal the 2015 regulations and return the Internet to its former lightly regulated state. The Internet Association and its allies mounted a massive online campaign against deregulation in order, they said, to protect Net Neutrality. One of their allies was the Open Market Initiative, which was then part of the New America Foundation. More about them below. I blogged to Google: "You run a fantastically successful business. You deliver search results so valuable that we willingly trade the history of our search requests for free access.
Your private network of data centers, content caches and Internet connections assures that Google data pops quickly onto our screens. Your free Chrome browser, Android operating system, and Gmail see our communication before it gets to the Internet and get a last look at what comes back from the Internet before passing it on to us. You make billions by monetizing this information with at least our implied consent. I mean all this as genuine praise.

"But I think you've made a mistake by inviting the regulatory genie onto the Internet. Have you considered that Google is likely to be the next regulatory target?"

It didn't take long. In August the European Union declared a penalty against Google. Barry Lynn of the Open Market Initiative posted praise for the EU decision on the New America website. According to the NY Times: "The New America Foundation has received more than $21 million from Google; its parent company's executive chairman, Eric Schmidt; and his family's foundation since the think tank's founding in 1999. That money helped to establish New America as an elite voice in policy debates on the American left and helped Google shape those debates… "Hours after this article was published online Wednesday morning, Ms. Slaughter announced that the think tank had fired Mr. Lynn on Wednesday for 'his repeated refusal to adhere to New America's standards of openness and institutional collegiality.'"

Mr. Lynn and his colleagues immediately founded the Open Market Institute. The front page of their website says: "Amazon, Google and other online super-monopolists, armed with massive dossiers of data on every American, are tightening their grip on the most vital arteries of commerce, and their control over the media we use to share news and information with one another." Sen. Al Franken and the Open Market Ins[...]



Court Finds Anti-Malware Provider Immune Under CDA for Calling Competitor's Product Security Threat

2017-11-13T10:26:00-08:00

Plaintiff anti-malware software provider sued defendant — who also provides software that protects internet users from malware, adware etc. — bringing claims for false advertising under Section 43(a) of the Lanham Act, as well as other business torts [Enigma Software Group v. Malwarebytes Inc., 2017 WL 5153698 (N.D. Cal., November 7, 2017)]. Plaintiff claimed that defendant wrongfully revised its software's criteria to identify plaintiff's software as a security threat when, according to plaintiff, its software is "legitimate" and posed no threat to users' computers. Defendant moved to dismiss the complaint for failure to state a claim upon which relief may be granted. It argued that the provisions of the Communications Decency Act at Section 230(c)(2) immunized it from plaintiff's claims. Section 230(c)(2) reads as follows: No provider or user of an interactive computer service shall be held liable on account of — (A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or (B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in [paragraph (A)]. Specifically, defendant argued that the provision of its software using the criteria it selected was an action taken to make available to others the technical means to restrict access to malware, which is objectionable material. The court agreed with defendant's argument that the facts of this case were "indistinguishable" from the Ninth Circuit's opinion in Zango, Inc. v. Kaspersky, 568 F.3d 1169 (9th Cir. 2009), in which the court found that Section 230 immunity applied in the anti-malware context.
Here, plaintiff had argued that immunity should not apply because malware is not within the scope of "objectionable" material that may permissibly be filtered in accordance with 230(c)(2)(B). Under plaintiff's theory, malware is "not remotely related to the content categories enumerated" in Section 230(c)(2)(A), to which (B) refers. In other words, the objectionableness of malware is of a different nature than the objectionableness of material that is obscene, lewd, lascivious, filthy, excessively violent, or harassing. The court rejected this argument on the basis that the determination of whether something is objectionable is left to the provider's discretion. Since defendant found plaintiff's software "objectionable" in accordance with its own judgment, the software qualifies as "objectionable" under the statute. Plaintiff also argued that immunity should not apply because defendant's actions taken to warn of plaintiff's software were not taken in good faith. But the court applied the plain meaning of the statute to reject this argument — the good faith requirement only applies to conduct under Section 230(c)(2)(A), not (c)(2)(B). Finally, plaintiff had argued that immunity should not apply with respect to its Lanham Act claim because of Section 230(e)(2), which provides that "nothing in [Section 230] shall be construed to limit or expand any law pertaining to intellectual property." The court rejected this argument: although the claim was brought under the Lanham Act, which includes provisions concerning trademark infringement (which clearly relates to intellectual property), the nature of the Lanham Act claim here was one of unfair competition, which is not considered an intellectual property claim.

Written by Evan D. Brown, Attorney

Follow CircleID on Twitter

More under: Law, Malware [...]



Weaponizing the Internet Using the "End-to-end Principle" Myth

2017-11-12T14:39:00-08:00

At the outset of the Internet Engineering Task Force (IETF) 100th meeting, a decidedly non-technical initial "Guide for human rights protocol considerations" was just published. Although the IETF has always remained true to its DARPA origins as a tool for developing disruptive new technical ideas, it launches into bizarre territory when dealing with non-technical matters. The rather self-referential draft Guide puts forward 19 different proposed "guidelines" based on the work of a small group of people over the past two years known as the Human Rights Protocol Considerations Research Group (HRPC). The preponderance of the work and postings were those of the chair, and two-thirds of all the posts came from only five people. Whatever one might think about the initiative, it is a well-intentioned attempt by activists in several human rights arenas to articulate their interests and needs based on their conceptualisation of "the internet." At the outset of the guidelines is a clause dubbed "connectivity" that consists of an implementation of the internet "end-to-end principle." Connectivity is explained as the end-to-end principle, [which] [Saltzer] holds that 'the intelligence is end to end rather than hidden in the network' [RFC1958]. The end-to-end principle is important for the robustness of the network and innovation. Such robustness of the network is crucial to enabling human rights like freedom of expression. [Amusingly, the first citation is not freely available and requires $15 to view.] There are several ironies here. The Saltzer article was written in 1984, shortly after DARPA had adopted TCP and IP for use on its own highly controlled packet networks. RFC 1958 was written in 1996, shortly after the DARPA Internet became widely used for NREN (National Research and Educational Network) purposes and was still largely controlled by U.S. government agencies for deployments in the U.S. and its international scientific research partners. 
Already, the DARPA Director who had originally authorized DARPA internet development in the 1970s had become significantly concerned about it becoming part of a public infrastructure and weaponized. That concern was turned into action as CRISP (Consortium for Research on Information Security and Policy) at Stanford University. The CRISP team described in considerable detail how the DARPA internet in a global public environment was certain to be used to orchestrate all manner of network-based attacks by State and non-State actors on public infrastructures, end-users, and trust systems. Twenty years later, it is incredible that decades-old technical papers prepared for closed or tightly managed U.S. government networks are being cited as global public connectivity mantras for human rights purposes — after the profoundly adverse exploits CRISP predicted have massively manifested. Never mind that the notion is also founded on a kind of chaotic utopian dream where running code somehow provides for unfettered communication and information capabilities for every human and object on the planet, rather than business, legal, and economic systems. To the extent that global internetworking capabilities have actually come into existence, that has occurred first and foremost through commercial mobile providers and vendors using their own internet protocols, combined with the telecommunication, commercial internet, and cable providers and vendors worldwide. The "end-to-end principle," which has never really existed except as some kind of alt-truth political slogan, is plainly a recipe for disaster on multiple levels. It is disastrous because the complexities and vulnerabilities of our networking infrastructure today result in a highly asymmetric threat environment. Th[...]



Data on Cuba's SNET and a Few Suggestions for ETECSA

2017-11-10T14:42:00-08:00

What would be the impact of, say, a $100,000 equipment grant from ETECSA to SNET? I've written several posts on Cuba's user-deployed street networks, the largest of which is SNET in Havana. [1] (SNET was originally built by the gaming community, but the range of services has grown substantially.) My posts and journalists' accounts like this one describe SNET, but a new paper presents SNET measurement data as well as descriptive material. The abstract of the paper sums it up: Working in collaboration with SNET operators, we describe the network's infrastructure and map its topology, and we measure bandwidth, available services, usage patterns, and user demographics. Qualitatively, we attempt to answer why the SNET exists and what benefits it has afforded its users. We go on to discuss technical challenges the network faces, including scalability, security, and organizational issues. You should read the paper — it's interesting and well-written — but I can summarize a few points that caught my attention. * * * The Street Network in Havana – Community-created map showing the service areas of several SNET pillars spanning metro Havana. (Source) SNET is a decentralized network composed of local nodes, each serving up to 200 users in a neighborhood. The users connect to local nodes using Ethernet cables strung over rooftops and the like, or via WiFi. The local nodes connect to regional "pillars," and the pillars peer with each other over fixed wireless links. The node and pillar administrators form a decentralized organization, setting policy, supporting users, and keeping their servers running and online as best they can. (This reminds me of my school's first Web server — a Windows 3 PC on my desk that crashed frequently.) 
SNET organization (Source) The average utilized bandwidth between two pillars during a 24-hour period was 120 Mb/s of a maximum throughput of 250 Mb/s, and the authors concluded that throughput is generally constrained by the available bandwidth in the WiFi links between pillars. As such, faster inter-pillar links and/or adding new pillars would improve performance. Faster links from local nodes to pillars, new node servers, etc. would also add to capacity and availability, but that hardware would cost money. The Cuban government would probably see the provision of outside funds as subversive, but what would be the impact of, say, a $100,000 equipment grant from ETECSA to SNET? The paper drills down on the network topology, discusses applications, and presents usage and performance statistics. Forums are one of the applications, and one of the forums is Netlab, a technical community of over 6,000 registered members who have made over 81,000 posts. They focus on open-source development and have written an SNET search engine and technical guides on topics like Android device repair. The export of Cuban content and technology has been a long-standing focus of this blog, and it would be cool to see Netlab available to others on the open Internet. Netlab growth – Registration dates of Netlab users since its creation, showing accelerated growth over the past year. (Source) The authors of the paper say that as far as they know, "SNET is the largest isolated community-driven network in existence" (my italics). While it may be the largest isolated community network, there are larger Internet-connected community networks, and that is a shame. I hope Cuba plans to "leapfrog" to next-generation technology and policy while implementing stopgap measures like WiFi hotspots, 3G mobile, and DSL. If SNET and other community networks were legitimized, supported, and linked to the Internet (or even the Cuban intranet), they would be useful stopgap [...]
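The inter-pillar figures reported above are easy to sanity-check. A minimal sketch (the 120 Mb/s and 250 Mb/s numbers come from the paper as summarized here; the calculation itself is my own illustration):

```python
# Rough sanity check of the inter-pillar link figures reported above:
# 120 Mb/s average utilized bandwidth on links with a 250 Mb/s maximum.
avg_mbps = 120   # average utilization between two pillars over 24 hours
max_mbps = 250   # maximum throughput of the inter-pillar WiFi link

utilization = avg_mbps / max_mbps   # fraction of capacity in use, on average
headroom = max_mbps - avg_mbps      # capacity left for peak-hour demand

print(f"average utilization: {utilization:.0%}")  # average utilization: 48%
print(f"headroom for peaks: {headroom} Mb/s")     # headroom for peaks: 130 Mb/s
```

With average utilization near half of capacity, peak-hour demand plausibly saturates the links, which is consistent with the authors' conclusion that the inter-pillar WiFi links are the binding constraint.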



How a DNS Proxy Service Can Reduce the Risk of VoIP Service Downtime

2017-11-10T13:13:00-08:00

Consumers are embracing VoIP services now more than ever as they get used to calling over Internet application services such as Skype, FaceTime, and Google Hangouts. Market Research Store predicts that the global value of the VoIP services market will exceed USD 140 billion by 2021, representing a compound annual growth rate above 9.1% between 2016 and 2021. For cable MSOs deploying voice services, the ability to implement and manage Dynamic DNS (DDNS) is essential. However, DDNS updates pose significant challenges for large Tier 1 and Tier 2 operators due to the difficulty of synchronizing DNS servers and DHCP servers in large "zones" or domains. When DNS servers become too difficult to manage, the result is often unreliable or even unavailable VoIP service. When resynchronization is needed between DNS and DHCP servers, the resulting downtime can take up to an hour to resolve. Customers typically have higher quality-of-experience expectations when using voice services, so unwanted downtime can increase the risk of a negative experience and potentially cause customer churn. The VoIP market shows no signs of slowing its growth, so how are today's operators going to manage the increasing complexity of synchronizing DNS servers and DHCP servers? One emerging solution is to eliminate the need for DDNS altogether and instead deploy a DNS proxy service. These proxy services send DNS requests directly to the DHCP server, significantly simplifying the management of large DNS zones and reducing the risk of VoIP service downtime. Because the DHCP server is the authority on IP-to-Fully-Qualified-Domain-Name (FQDN) mapping, a DNS proxy service can request the mapping directly from the DHCP server, without the need to complete dynamic DNS updates or the headache of managing large DNS zones. 
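The lookup path described above can be sketched in a few lines. This is a hypothetical illustration, not any vendor's implementation: the lease table, hostnames, and the `proxy_resolve` function are invented for the example, and a real proxy would speak the DNS wire protocol to clients and query the DHCP server's lease state over its API rather than read a dictionary:

```python
# Hypothetical sketch of a DNS proxy answering queries from DHCP lease
# state instead of relying on dynamic DNS (DDNS) zone updates.

# Stand-in for the DHCP server's lease table, which is the authority on
# IP-to-FQDN mapping. Entries are invented for illustration.
dhcp_leases = {
    "mta1.subscribers.example.net": "203.0.113.10",
    "mta2.subscribers.example.net": "203.0.113.11",
}

def proxy_resolve(fqdn):
    """Answer an A-record query by consulting DHCP lease state directly,
    so no DDNS update or DNS/DHCP resynchronization is ever needed."""
    return dhcp_leases.get(fqdn.rstrip("."))  # tolerate a trailing root dot

print(proxy_resolve("mta1.subscribers.example.net."))  # 203.0.113.10
print(proxy_resolve("unknown.example.net."))           # None
```

The design point is that there is only one copy of the IP-to-FQDN mapping, held where it originates; there is nothing to fall out of sync.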
In many cases, this simple solution can be integrated seamlessly with existing or updated network topologies, making the most of an operator's existing device provisioning investments. As a result, DNS synchronization is no longer a concern since the DHCP server is where the IP-to-FQDN assignment originates. This means increased reliability of the DNS solution, less chance of subscriber service downtime, and by association, reduced risk of customer churn. Learn more about providing higher availability of mission-critical services such as VoIP by reading the Incognito DNS Proxy Service fact sheet. Written by Pat Kinnerk, Senior Product Manager at Incognito Software Systems. Follow CircleID on Twitter. More under: Access Providers, DNS, VoIP [...]



Internet Hall of Fame Inductees Gather at GNTC to Discuss New Generation of Internet Infrastructure

2017-11-10T10:37:00-08:00

Confronted with the rapid development of the Internet, traditional networks face severe challenges. It is therefore imperative to accelerate the construction of global network infrastructure and build a new generation of Internet infrastructure suited to the Internet of Everything and the intelligent society. From November 28 to 30, 2017, "GNTC 2017 – Global Network Technology Conference," organized by BII Group and CFIEC, will open in Beijing. The "Global Internet Infrastructure Forum," one of the conference's highlights, will bring together several Internet Hall of Fame inductees as well as a number of authoritative experts in the field to discuss changes in Internet infrastructure technology and the development of, and challenges facing, that infrastructure. Since the first data transmission between two computers in 1969, the network has in a few decades spread from the military field to scientific research and civilian use, ushering in the era of the Internet. However, as "Internet+" and the industrial Internet deepen, the trend has become increasingly clear: the Internet is transforming all walks of life and becoming a common infrastructure. In this context, the existing architecture has exposed more and more problems in scalability, security, controllability, and manageability due to its complex design, inadequate openness, and low efficiency. The industry has been improving it incrementally for decades, but upgrading Internet infrastructure in every aspect is the fundamental path for long-term development. 
In the "Global Internet Infrastructure Forum" of the GNTC Conference, father of the Internet and Internet Hall of Fame inductee Vint Cerf, father of the Korean Internet and Internet Hall of Fame inductee Kilnam Chon, inventor of the DNS and Internet Hall of Fame inductee Paul Mockapetris, Internet Hall of Fame inductee Paul Vixie, APNIC Director General Paul Wilson, and other global Internet authorities will gather in Beijing. Meanwhile, presidents of organizations and institutions, senior management of Internet companies, and global operator representatives will also be invited to attend, focusing on technological change, infrastructure development, root servers, and new opportunities and challenges, and exploring how Internet infrastructure can be fully upgraded to adapt to the new world of the Internet of Everything amid the rapid adoption of IPv6, SDN, and other network technologies. As the largest network technology event in China, the conference will draw more than 2,000 attendees. It will feature two main sessions, one roundtable forum, eight technical summits (SDN, NFV, IPv6, 5G, NB-IoT, Network Security, Cloud and Data Center, Edge Computing), and a number of workshops (P4, the Third Network, CORD, ONAP, etc.). By providing a platform for the parties to communicate and exchange ideas, it is dedicated to promoting win-win cooperation and the process of network reconstruction. Written by Xudong Zhang, Vice President of BII Group. Follow CircleID on Twitter. More under: Cloud Computing, Cybersecurity, Data Center, Internet Governance, Internet of Things, Internet Protocol, IP Addressing, IPv6, Networks [...]



Apple (Not Surprisingly) is Not a Cybersquatter

2017-11-09T10:50:00-08:00

It's highly unusual for a well-known trademark owner to be accused of cybersquatting, but that's what happened when a Mexican milk producer filed a complaint against Apple Inc. under the Uniform Domain Name Dispute Resolution Policy (UDRP) in an attempt to get the domain name . Not only did Apple win the case, but the panel issued a finding of "reverse domain name hijacking" (RDNH) against the company that filed the complaint. The 'LA LA' Story According to the UDRP decision, Apple obtained the domain name in 2009 when it purchased the online music-streaming company La La Media, Inc. The domain name had been registered in 1996 and was acquired in 2005 by La La Media, which used it in connection with its online music service between 2006 and 2009. Although Apple stopped operating the La La Music service in 2010, and the corresponding LA LA trademarks were canceled in 2015 and 2017, Apple said that it continues to use the domain name in connection with "residual email services." Apparently seizing on the cancelled LA LA trademarks, Comercializadora de Lacteos y Derivados filed a UDRP complaint against Apple for the domain name, arguing that it "claims to have used LALA as a trademark before the registration of the Disputed Domain Name, since as early as 1987" — long before Apple acquired . The complainant further argued that Apple "registered and used the Disputed Domain Name with the bad faith intent to defraud the Complainant's customers" and that "Respondent's passive holding of the Disputed Domain Name constitutes sufficient evidence of its bad faith use and registration." Apple's 'LA LA' Rights The UDRP panel rejected these arguments, as well as those related to the UDRP's "rights or legitimate interests" requirement, finding that the complainant had "put these assertions forward without any supporting argumentation or evidence." 
Importantly, the panel wrote: The Panel is of the opinion that, between June 2006 and May 2010, Respondent and its predecessor-in-interest made legitimate use of the Disputed Domain Name to offer bona fide services under its own LA LA mark. These services are unrelated to the Complainant and its LALA mark. The Panel also wrote: [T]he fact that the Respondent chose to cease active use of the Disputed Domain Name does not demonstrate in itself that the Respondent has no rights or legitimate interests in the Disputed Domain Name. It is common practice for trademark holders to maintain the registration of a domain name, even if the corresponding trademark was abandoned, e.g., following a rebranding exercise. Apart from the goodwill that might be associated to the trademark, the domain name in question may have intrinsic value. In the case at hand, the Panel notes that the term "la-la" is often used as a nonsense refrain in songs or as a reference to babbling speech, and that there are many concurrent uses of the "LALA" sign as a brand. In such circumstances, a domain name holder has a legitimate interest to maintain the registration of a potentially valuable domain name. (Interestingly, the panel said nothing about "La La Land," the 2016 movie that won six Academy Awards — and which uses the domain name .) After its conclusion in favor of Apple, allowing the computer company to keep the domain name, the panel found that the "Complainant was, or should have been aware, of [Apple]'s bona fide acquisition and use of the Disputed Domain Name" and that it "must have been aware, before filing the Complaint, that the Disputed Domain Name[...]



Qatar Crisis Started With a Hack, Now Political Tsunami in Saudi Arabia - How Will You Be Impacted?

2017-11-09T10:35:00-08:00

The world has officially entered what the MLi Group labels the "New Era of the Unprecedented." In this new era, traditional cyber security strategies are failing on a daily basis; political and terrorist destruction-motivated cyber attacks are on the rise, threatening "survivability"; and local political events unfold to impact the world overnight and forever. Decision makers know they cannot continue doing the same old things, but don't know what to do next or differently that would be effective. Deloitte and Equifax are giants that discovered the hard way that they were not immune. The Qatar crisis, with damages in the billions of dollars, was triggered by a cyber-attack that a Washington Post report claims was perpetrated by Qatar's neighbor, the UAE. Now comes the Saudi tsunami, with ramifications that will impact stakeholders worldwide. If you thought these events in Far Far Away lands don't impact you and your businesses, then I suggest you take your head out of the sand, and fast. Local geo-political events are sending shockwaves globally. To learn what to look out for and how to mitigate these threats, watch the MLi Group's "Era of the Unprecedented" video on the Saudi tsunami by clicking here. Written by Khaled Fattal, Group Chairman, The Multilingual Internet Group. Follow CircleID on Twitter. More under: Censorship, Cyberattack, Cybersecurity, Data Center, Internet Governance [...]



Why Aren't We Fixing Route Leaks?

2017-11-08T14:55:00-08:00

In case you missed it (you probably didn't), the Internet was hit with the Monday blues this week. As operator-focused lists and blogs identified, "At 17:47:05 UTC yesterday (6 November 2017), Level 3 (AS3356) began globally announcing thousands of BGP routes that had been learned from customers and peers and that were intended to stay internal to Level 3. By doing so, internet traffic to large eyeball networks like Comcast and Bell Canada, as well as major content providers like Netflix, was mistakenly sent through Level 3's misconfigured routers." In networking lingo, a "route leak" had occurred, and a substantial one at that. Specifically, the Internet was the victim of a Type 6 route leak, where: "An offending AS simply leaks its internal prefixes to one or more of its transit-provider ASes and/or ISP peers. The leaked internal prefixes are often more-specific prefixes subsumed by an already announced, less-specific prefix. The more-specific prefixes were not intended to be routed in External BGP (eBGP). Further, the AS receiving those leaks fails to filter them. Typically, these leaked announcements are due to some transient failures within the AS; they are short-lived and typically withdrawn quickly following the announcements. However, these more-specific prefixes may momentarily cause the routes to be preferred over other aggregate (i.e., less specific) route announcements, thus redirecting traffic from its normal best path." In this case, the painful result was significant Internet congestion for millions of users in different parts of the world for about 90 minutes. One of the main culprits apparently fessed up, with CenturyLink/Level 3 quickly issuing a reason for the outage (I pity "that guy"; being a network engineer at one of the world's largest ISPs ain't easy). Can't we fix this? Route leaks are a fact of life on the Internet. According to one ISP's observations, on any given day of the week, between 10% and 20% of announcements are actually leaks. 
Type 6 route leaks can be alleviated in part by technical and/or operational measures. For internal prefixes never meant to be routed on the Internet, one suggestion is to use origin validation to filter leaks, but this requires adoption of RPKI and only deals with two specific types of leak. (Source: Job Snijders, "Everyday practical BGP filtering," presented at NANOG 67.) From a contractual and operational perspective, Level 3's customers and others affected are presumably closely scrutinizing their SLAs. Maybe this episode will incentivize Level 3 to take some corrective action(s), like setting a fail-safe maximum announcement limit on their routers to catch potential errors. Perhaps Level 3's peering partners are similarly considering reconfiguring their routers to not blindly accept thousands of additional routes, although the frequency or other characteristics of changes in routing announcements might make this infeasible. Another potential solution requiring broader collective action is NTT's peer locking, where NTT prevents leaked announcements from propagating further by filtering on behalf of other ISPs with which it has an agreement. It's an approach that is mutually beneficial. Much of the routing chaos could have been prevented if peer locking arrangements had been in place between NTT (or other large backbone ISPs peering with Level 3) and any of the impacted ASes (e.g., Comcast had ~20 impacted ASes). NTT has apparently had some success with the approach, having arrangements with many of the world's largest carriers of Internet traffic. In one case where they deployed [...]
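Two of the measures discussed above, dropping unexpected more-specifics and enforcing a fail-safe maximum announcement limit, can be sketched as a simple receive-side filter. This is an illustrative model, not real router configuration; the prefixes and the limit are invented for the example:

```python
import ipaddress

def filter_announcements(announced, expected_aggregates, max_prefixes=100):
    """Crude guard against Type 6 leaks: drop prefixes that are
    more-specifics of an aggregate the peer is expected to announce,
    then enforce a fail-safe maximum announcement limit."""
    aggregates = [ipaddress.ip_network(p) for p in expected_aggregates]
    accepted = []
    for prefix in announced:
        net = ipaddress.ip_network(prefix)
        # A more-specific subsumed by an expected aggregate was likely
        # never meant to be routed in eBGP: treat it as a leak.
        if any(net != agg and net.subnet_of(agg) for agg in aggregates):
            continue
        accepted.append(prefix)
    if len(accepted) > max_prefixes:
        # Mimic a max-prefix trip: better to tear down the session than
        # blindly absorb thousands of leaked routes.
        raise RuntimeError("max-prefix limit exceeded")
    return accepted

# The expected aggregate is accepted; its leaked more-specific is dropped.
print(filter_announcements(["4.0.0.0/9", "4.15.0.0/16"], ["4.0.0.0/9"]))
# ['4.0.0.0/9']
```

Real deployments implement both ideas in router policy (prefix lists and max-prefix limits) rather than application code, and peer locking extends the same filtering to announcements received on behalf of other ISPs.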



Poland to Test a Cybersecurity Program for Aviation Sector

2017-11-08T13:08:00-08:00

During the two-day Cybersecurity in Civil Aviation conference, Poland announced an agreement to test a cybersecurity pilot program for the aviation sector as the European Aviation Safety Agency (EASA), Europe's civil aviation authority, faces increasing threats posed by hackers to air traffic. "We want to have a single point in the air transport sector that will coordinate all cybersecurity activities… for airlines, airports, and air traffic," said Piotr Samson, head of Poland's civil aviation authority, the ULC. "Despite the assurances of experts in the field, computer systems failures triggered by hackers or accident have caused flight chaos in recent years. Poland's flagship carrier LOT was briefly forced to suspend operations in June 2015 after a hack attack." See the full report.

Follow CircleID on Twitter

More under: Cyberattack, Cybersecurity